Pseudo-distributed mode setup for Hadoop 1.2.1 on a Linux Mint development system.
Hadoop has 3 run modes: Standalone, Pseudo-Distributed, and Fully-Distributed.
Notes:
Standalone is limited to a single mapper and reducer; everything runs in one JVM.
Standalone uses the regular Linux filesystem (no "hadoop fs").
In pseudo-distributed mode, HDFS is not a "real" physical filesystem; files can only be added/viewed/deleted through "hadoop fs" (see the example after these notes).
Export the job as a jar and run it with "hadoop jar" to avoid version mismatches (especially with the JobTracker).
When building apps, use the pom.xml from Cloudera on GitHub.
Everything below is for a pseudo-distributed setup on a single machine under Linux Mint 15 (Cinnamon).
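For example, a file copied into HDFS is invisible to plain ls and only shows up through "hadoop fs" (paths here are illustrative):
hadoop fs -put ~/sample.txt /user/dkilcy/sample.txt
ls /user/dkilcy              # fails: this path exists only inside HDFS
hadoop fs -ls /user/dkilcy   # shows sample.txt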
===========================================================================================================================
Setup
===========================================================================================================================
tar zxvf hadoop-1.2.1.tar.gz
sudo mv hadoop-1.2.1 /opt
sudo chown -R dkilcy:dkilcy /opt/hadoop-1.2.1/
sudo ln -sf /opt/hadoop-1.2.1 /opt/hadoop
===========================================================================================================================
sudo apt-get install ssh
ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
ssh localhost
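If "ssh localhost" still asks for a password, key file permissions are the usual culprit:
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys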
===========================================================================================================================
Add to /etc/sysctl.conf (disables IPv6, which Hadoop 1.x does not handle well):
net.ipv6.conf.all.disable_ipv6=1
net.ipv6.conf.default.disable_ipv6=1
net.ipv6.conf.lo.disable_ipv6=1
Verify:
cat /proc/sys/net/ipv6/conf/all/disable_ipv6
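Apply the new settings without a reboot (re-reads /etc/sysctl.conf); the cat above should then print 1:
sudo sysctl -p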
===========================================================================================================================
Add to /etc/profile:
export HADOOP_PREFIX=/opt/hadoop
export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_45
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_PREFIX/bin
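Reload the profile and sanity-check that the right binaries are on the PATH:
source /etc/profile
java -version
hadoop version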
===========================================================================================================================
core-site.xml
===========================================================================================================================
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
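Pseudo-distributed mode also needs the JobTracker address. These notes pass it with -Dmapred.job.tracker in the job command at the end; setting it once in mapred-site.xml is equivalent (a minimal sketch, using the same localhost:9001):
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
</configuration>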
===========================================================================================================================
hadoop-env.sh
===========================================================================================================================
# Set Hadoop-specific environment variables here.
# The only required environment variable is JAVA_HOME. All others are
# optional. When running a distributed configuration it is best to
# set JAVA_HOME in this file, so that it is correctly defined on
# remote nodes.
# The java implementation to use. Required.
export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_45
# Extra Java CLASSPATH elements. Optional.
# export HADOOP_CLASSPATH=
# The maximum amount of heap to use, in MB. Default is 1000.
# export HADOOP_HEAPSIZE=2000
# Extra Java runtime options. Empty by default.
export HADOOP_OPTS=-server
# Command specific options appended to HADOOP_OPTS when specified
export HADOOP_NAMENODE_OPTS="-Dcom.sun.management.jmxremote $HADOOP_NAMENODE_OPTS"
export HADOOP_SECONDARYNAMENODE_OPTS="-Dcom.sun.management.jmxremote $HADOOP_SECONDARYNAMENODE_OPTS"
export HADOOP_DATANODE_OPTS="-Dcom.sun.management.jmxremote $HADOOP_DATANODE_OPTS"
export HADOOP_BALANCER_OPTS="-Dcom.sun.management.jmxremote $HADOOP_BALANCER_OPTS"
export HADOOP_JOBTRACKER_OPTS="-Dcom.sun.management.jmxremote $HADOOP_JOBTRACKER_OPTS"
# export HADOOP_TASKTRACKER_OPTS=
# The following applies to multiple commands (fs, dfs, fsck, distcp etc)
# export HADOOP_CLIENT_OPTS
# Extra ssh options. Empty by default.
# export HADOOP_SSH_OPTS="-o ConnectTimeout=1 -o SendEnv=HADOOP_CONF_DIR"
# Where log files are stored. $HADOOP_HOME/logs by default.
# export HADOOP_LOG_DIR=${HADOOP_HOME}/logs
# File naming remote slave hosts. $HADOOP_HOME/conf/slaves by default.
# export HADOOP_SLAVES=${HADOOP_HOME}/conf/slaves
# host:path where hadoop code should be rsync'd from. Unset by default.
# export HADOOP_MASTER=master:/home/$USER/src/hadoop
# Seconds to sleep between slave commands. Unset by default. This
# can be useful in large clusters, where, e.g., slave rsyncs can
# otherwise arrive faster than the master can service them.
# export HADOOP_SLAVE_SLEEP=0.1
# The directory where pid files are stored. /tmp by default.
# NOTE: this should be set to a directory that can only be written to by
# the users that are going to run the hadoop daemons. Otherwise there is
# the potential for a symlink attack.
# export HADOOP_PID_DIR=/var/hadoop/pids
# A string representing this instance of hadoop. $USER by default.
# export HADOOP_IDENT_STRING=$USER
# The scheduling priority for daemon processes. See 'man nice'.
# export HADOOP_NICENESS=10
===========================================================================================================================
hdfs-site.xml
===========================================================================================================================
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/data/hadoop</value>
  </property>
</configuration>
===========================================================================================================================
sudo mkdir -p /data/hadoop
sudo chown -R dkilcy:dkilcy /data/hadoop
hadoop namenode -format
./start-all.sh
jps
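If everything came up, jps should list all 5 Hadoop daemons (pids will differ):
12150 NameNode
12281 DataNode
12420 SecondaryNameNode
12511 JobTracker
12654 TaskTracker
12783 Jps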
hadoop fs -put ~/git/hadoop-programs/partitioner/HDFS/input/sample.txt /user/dkilcy/input
hadoop fs -ls /user/dkilcy/input
(Generic -D options must come before the input/output arguments when the driver uses ToolRunner; note the Hadoop 1.x property name is fs.default.name.)
hadoop jar partitioner.jar com.kilcyconsulting.hadoop.partitioner.Partitioner \
-Dfs.default.name=hdfs://localhost:9000 -Dmapred.job.tracker=localhost:9001 \
/user/dkilcy/input /user/dkilcy/output
hadoop fs -ls /user/dkilcy/output
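To inspect the job results (part-* names assume the default output format):
hadoop fs -cat /user/dkilcy/output/part-*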
hadoop fs -rmr /user/dkilcy/output
JobTracker web UI: http://localhost:50030/jobtracker.jsp
NameNode web UI: http://localhost:50070/dfshealth.jsp
===========================================================================================================================
References:
===========
http://hadoop.apache.org/docs/r1.2.1/
http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-multi-node-cluster/