The material in this wiki refers to versions of SAP Data Services earlier than 4.2 Support Pack 2. Due to changes in the product, the content may not be fully valid for SAP Data Services 4.2 Support Pack 2 or later.

Hadoop

Use the following instructions to set up Hadoop. A consolidated command sketch follows the list.

  1. Download an Apache Hadoop distribution matching the version used in your vendor’s Hadoop distribution, such as Apache Hadoop 0.20.203 or Apache Hadoop 1.0.x. See Hadoop Releases.
  2. Unpack the Apache Hadoop distribution in a directory we’ll label HADOOP_HOME.
  3. Back up the HADOOP_HOME/conf directory.
    • If you are using an Apache Hadoop 2 distribution, conf has been replaced by HADOOP_HOME/etc/hadoop; use that directory instead.
  4. Copy the Hadoop conf directory from a node in your Hadoop cluster to HADOOP_HOME. This enables the machine to access your Hadoop cluster without being a part of it.
  5. Execute the command export HADOOP_HOME=<hadoop-install-dir>.
  6. cd to the Data Services installation's bin folder.
  7. Execute the command source ./al_env.sh to configure the Data Services environment.
  8. cd to the $LINK_DIR/hadoop/bin directory, where LINK_DIR was set by the al_env.sh script to point to the Data Services installation directory.
  9. Execute the command source ./hadoop_env.sh -e to configure Data Services for Hadoop.
  10. Execute the command $HADOOP_HOME/bin/hadoop fs -ls /.
    • You should see a list of directories present in your HDFS.
    • If not, check whether the JAVA_HOME variable is hardcoded in the $HADOOP_HOME/conf/hadoop-env.sh file. Vendors sometimes modify this script from the Apache Hadoop version. Because JAVA_HOME has already been set by the Data Services al_env.sh script, try commenting out the JAVA_HOME line with a # character in $HADOOP_HOME/conf/hadoop-env.sh.
    • If that still doesn't work, try commenting out the HADOOP_OPTS variable in the same file.
  11. Optionally, execute the command ./hadoop_env.sh -c to configure Text Data Processing for Hadoop.
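
Taken together, steps 5 through 11 amount to the shell session sketched below; /opt/hadoop-1.0.4 and /opt/dataservices are placeholder paths standing in for your own Hadoop and Data Services installation directories.

    export HADOOP_HOME=/opt/hadoop-1.0.4     # placeholder: your Hadoop unpack dir
    cd /opt/dataservices/bin                 # placeholder: your Data Services bin dir
    source ./al_env.sh                       # sets LINK_DIR, JAVA_HOME, etc.
    cd $LINK_DIR/hadoop/bin
    source ./hadoop_env.sh -e                # configures Data Services for Hadoop
    $HADOOP_HOME/bin/hadoop fs -ls /         # sanity check: should list HDFS root
    ./hadoop_env.sh -c                       # optional: Text Data Processing setup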

Pig

Use the following steps to set up Pig. A consolidated sketch follows the list.

  1. Download an Apache Pig release matching the version used in your vendor’s Hadoop distribution, such as Apache Pig 0.9.2. See Pig Releases.
  2. Unpack the Apache Pig release in a directory.
  3. Execute the command export PATH=/<my-path-to-pig>/pig-n.n.n/bin:$PATH.
  4. Execute the command pig.
    • You should be presented with a grunt> prompt. Type quit and press Enter.
    • If not, ensure the PATH contains the Pig bin directory.
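
For example, assuming the release was unpacked to /opt/pig-0.9.2 (a placeholder path):

    export PATH=/opt/pig-0.9.2/bin:$PATH   # placeholder: your Pig unpack location
    pig                                    # should open the Grunt shell
    # at the grunt> prompt, type: quit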

Hive (Optional)

If you will not be accessing data stored in Hive, you do not need to configure Hive for the Data Services Job Server. Otherwise, use the following steps; a consolidated sketch follows the list.

  1. Download an Apache Hive release matching the version used in your vendor’s Hadoop distribution, such as Apache Hive 0.9.0. See Hive Releases.
  2. Unpack the Apache Hive release in a directory.
  3. Back up the <my-path-to-hive>/conf directory.
  4. Copy the Hive conf directory from a node in your Hadoop cluster to <my-path-to-hive>. This enables the machine to access your Hive metastore.
  5. Execute the command export PATH=/<my-path-to-hive>/hive-n.n.n/bin:$PATH.
  6. Execute the command hive
    • You should be presented with a hive> prompt. Type show databases; and press Enter. Type quit; and press Enter.
    • If not, ensure the PATH contains the Hive bin directory.
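
For example, assuming the release was unpacked to /opt/hive-0.9.0 (a placeholder path):

    export PATH=/opt/hive-0.9.0/bin:$PATH   # placeholder: your Hive unpack location
    hive                                    # should open the Hive CLI
    # at the hive> prompt:
    #   show databases;
    #   quit;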

Cloudera 4.x

  1. Copy the libhdfs* libraries from your CDH 4.x node’s /usr/lib64 directory to the /usr/lib64 directory on your clean Linux machine.
  2. Copy the /usr/lib/hadoop, /usr/lib/hadoop-hdfs, /usr/lib/hadoop-0.20-mapreduce, /usr/lib/pig, and /usr/lib/hive directories from one of your CDH 4.x nodes to the /usr/lib directory on your clean Linux machine.
  3. Export the /usr/lib/hadoop directory from step 2 as HADOOP_HOME.
  4. Add $HADOOP_HOME/bin to the $PATH.
  5. Try to execute hadoop fs -ls /. You should be able to browse your HDFS.
  6. Ensure that any JARs specified in the /usr/lib/hive/conf/hive-site.xml reside on the machine at the locations specified. If, for instance, you upgraded your CDH version, this file may reference updated JARs from the upgrade. Either copy the files to this machine or adjust the path to the files to point to your $HADOOP_HOME/lib or /usr/lib/hive/lib directories.
  7. Add /usr/lib/pig/bin and /usr/lib/hive/bin to the $PATH.
  8. Install the Data Services Job Server on the vanilla Linux machine.
  9. Ensure your $LINK_DIR/hadoop/bin/hadoop_env.sh script contains the following lines, with the original classes assignment commented out and replaced:
    • LD_LIBRARY_PATH=/usr/lib64:$HADOOP_HOME/c++/Linux-amd64-64/lib:$LINK_DIR/ext/jre/lib/amd64/server:$LD_LIBRARY_PATH
    • #classes=`ls $HADOOP_HOME/lib/guava*.jar $HADOOP_HOME/lib/commons*.jar $HADOOP_HOME/client-0.20/*.jar`
    • classes=`ls $HADOOP_HOME/client-0.20/*.jar`
    • CLASSPATH="$classes:$CLASSPATH:$HADOOP_HOME/etc/hadoop:$HADOOP_HOME/conf"
  10. Source <DS_install>/bin/al_env.sh if you haven't already done so.
  11. Source $LINK_DIR/hadoop/bin/hadoop_env.sh -e.
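
Assuming cdh-node is a placeholder hostname for one of your CDH 4.x nodes, and that copying over SSH is acceptable in your environment, steps 1 through 5 plus 10 and 11 might be scripted roughly as follows:

    # run on the clean Linux machine; cdh-node is a placeholder hostname
    scp 'root@cdh-node:/usr/lib64/libhdfs*' /usr/lib64/
    for d in hadoop hadoop-hdfs hadoop-0.20-mapreduce pig hive; do
      scp -r root@cdh-node:/usr/lib/$d /usr/lib/
    done
    export HADOOP_HOME=/usr/lib/hadoop
    export PATH=$HADOOP_HOME/bin:/usr/lib/pig/bin:/usr/lib/hive/bin:$PATH
    hadoop fs -ls /                           # should list your HDFS root
    source /opt/dataservices/bin/al_env.sh    # placeholder <DS_install> path
    source $LINK_DIR/hadoop/bin/hadoop_env.sh -e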

Data Services Job Server

Once you have completed the configuration above, use the $LINK_DIR/bin/svrcfg CLI to set up or restart your Data Services Job Server so that it picks up the Hadoop environment.

NOTE: The user that starts the Job Server must have read/write access to the HDFS.
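
One way to check that requirement before starting the Job Server is a quick probe from the same user account; /tmp/ds_probe below is just an arbitrary scratch path chosen for illustration:

    # run as the user that will start the Job Server
    hadoop fs -ls /                    # verifies read access
    hadoop fs -touchz /tmp/ds_probe    # verifies write access
    hadoop fs -rm /tmp/ds_probe        # clean up the probe file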


1 Comment

  1. Unknown User (98tdg8bh)

    Hi Justin,

    Nice article. I am having a problem setting it up, though. When I execute $HADOOP_HOME/bin/hadoop fs -ls /, it does not show the dsfs; it shows the current directory. Do you have any idea why this is happening?

    Asadul