I decided to set up a Hadoop cluster and write a MapReduce job for my distributed systems final project. I had done this before with an earlier release, and it was fairly straightforward. It turns out it is still straightforward with Hadoop 0.20.2, but the process is not well documented and the configuration has changed. Hopefully I can clear up the process here.
What is Hadoop MapReduce?
MapReduce is a powerful distributed computation technique pioneered by Google. Hadoop MapReduce is an open-source implementation written in Java and maintained by the Apache Software Foundation. Hadoop MapReduce consists of two main parts: the Hadoop Distributed File System (HDFS) and the MapReduce system.
Getting Hadoop
The first step is to download Hadoop. Go to http://hadoop.apache.org/mapreduce. It is worthwhile to read up on how Hadoop and MapReduce work before moving on to the installation and configuration.
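At the time of writing, the 0.20.2 release can be pulled from the Apache archive and unpacked like this (the exact mirror URL is an assumption and may differ):

wget http://archive.apache.org/dist/hadoop/core/hadoop-0.20.2/hadoop-0.20.2.tar.gz
tar -xzf hadoop-0.20.2.tar.gz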
Plan The Installation
Before the actual installation there is a bit of planning to be done. Hadoop works best when run from a local file system. However, for convenience it is also nice to have a common NFS file share to hold configuration and log files. Below is an image of what I set up. For a distributed setup at least two nodes are required.
Initial Setup
Before doing any setup of the actual Hadoop system, some initial setup needs to be completed: the creation of an install directory on each node and a shared SSH key. The first step is the easiest. A Hadoop install directory needs to be created on each node that is going to be part of the system. The directory must have the same name and location on each node. It is recommended not to use an NFS file share for the installation directory, as it can hurt performance.
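For example, to create the install directory (/opt/hadoop is just an example path; any local path works as long as it is identical on every node):

sudo mkdir -p /opt/hadoop
sudo chown -R $USER /opt/hadoop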
After the install directory has been created, a shared SSH key needs to be generated and added to the authorized_keys file on each node. This allows for passwordless SSH login, which is required by the Hadoop cluster startup scripts.
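A minimal way to do this, assuming the same user account on every node, looks like the following. Because my home directory is an NFS share, appending the public key to ~/.ssh/authorized_keys once makes it visible on every node; without a shared home directory the key has to be copied into each node's authorized_keys.

ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa       # generate a key with no passphrase
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
ssh localhost hostname                          # verify passwordless login works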
Open Firewall Ports
Hadoop requires a number of ports to be open for the system to work.
Port  | Function
50010 | DataNode (data transfer)
50020 | DataNode (IPC)
50030 | MapReduce administrative page (JobTracker)
50060 | TaskTracker
50070 | DFS administrative page (NameNode)
50075 | DataNode (web interface)
50090 | SecondaryNameNode
50105 | Backup/Checkpoint node
54310 | HDFS file system (NameNode)
54311 | JobTracker service
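How these ports get opened depends on the firewall in use; on a Linux node running iptables, for example, something along these lines works:

for port in 50010 50020 50030 50060 50070 50075 50090 50105 54310 54311; do
    sudo iptables -A INPUT -p tcp --dport $port -j ACCEPT   # allow each Hadoop port
done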
Configuration Files
There are three main configuration files that need to be edited: hdfs-site.xml, mapred-site.xml, and core-site.xml. Each file resides in the conf folder of the extracted Hadoop directory. There are a lot of parameters that can go into each file, but only a few basic ones need to be set. I have provided my configuration files below. The final file that needs to be edited is hadoop-env.sh, a shell script that sets up Hadoop environment variables. At the very least, the JAVA_HOME variable needs to be uncommented and set correctly.
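As a rough sketch, a minimal configuration boils down to entries like the ones below; the master hostname, replication factor, and JAVA_HOME path are placeholders that need to match your own cluster.

core-site.xml:
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master_server:54310</value>
  </property>
</configuration>

mapred-site.xml:
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>master_server:54311</value>
  </property>
</configuration>

hdfs-site.xml:
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
</configuration>

hadoop-env.sh:
export JAVA_HOME=/path/to/your/jdk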
Set the Slaves and Master
The master node needs to be defined in the hadoop_dir/conf/masters file. Each slave node needs to be listed in the hadoop_dir/conf/slaves file, one machine name/IP address per line.
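For example, with a master called master_server and two slaves (hostnames are placeholders), the files look like this:

hadoop_dir/conf/masters:
master_server

hadoop_dir/conf/slaves:
slave01
slave02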
Deploy the Installation and Configuration Files
The installation and configuration files need to be deployed to each node in the cluster. The easiest way to do this is through scp. I wrote the script below so that I could run a command on each node in my cluster. Another alternative is the Cluster SSH program (cssh). Either approach is preferable to logging onto each node to run a command.
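A minimal version of run_comm.sh looks something like this; it assumes a nodes.txt file listing one hostname per line and relies on the passwordless SSH setup from earlier.

#!/bin/bash
# Run the command given as the first argument on every node in the cluster
for node in $(cat nodes.txt); do
    echo "=== $node ==="
    ssh "$node" "$1"
done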
Using my run_comm.sh script I ran scp on each node in the cluster:
./run_comm.sh "scp -r ~/hadoop /opt/hadoop/hadoop"
This runs the command in quotes on each node in the cluster. In this case I copied the Hadoop installation from the NFS share (my home directory) to a local directory on each node.
Formatting the NameNode
Now that the Hadoop files are on each node, the NameNode can be formatted to set up the Hadoop file system.
hadoop_dir/bin/hadoop namenode -format
Starting the Hadoop File System
Now that the NameNode has been formatted, the distributed file system (DFS) can be started. This is done using the start-dfs.sh script in the bin directory of the Hadoop installation.
hadoop_dir/bin/start-dfs.sh
The status of the Hadoop file system can be viewed from the administrative page on the master server, http://master_server:50070.
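The file system can also be checked from the command line; dfsadmin -report lists each DataNode and its capacity, and jps on any node shows which Hadoop daemons are running.

hadoop_dir/bin/hadoop dfsadmin -report
jps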
Starting the MapReduce System
The final step to setting up MapReduce is to start the MapReduce system. This is done by using the start-mapred.sh script that is located in the bin directory of the Hadoop installation.
hadoop_dir/bin/start-mapred.sh
The status of the MapReduce system can be viewed from the administrative page on the master server, http://master_server:50030.
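As a quick sanity check, jps on the master should now show a JobTracker (and a TaskTracker on each slave), and the job queue can be listed from the command line:

hadoop_dir/bin/hadoop job -list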
Submitting a MapReduce Job
Now that the cluster is up and running, it is ready to accept MapReduce jobs. Jobs are submitted using the hadoop executable from the bin directory of the Hadoop installation and a jar file containing a MapReduce program. An example of running the WordCount demo program provided with Hadoop is shown below.
hadoop_dir/bin/hadoop jar jar_location/wordcount.jar org.myorg.WordCount /file_dir_in_hdfs /output_dir_in_hdfs
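Note that the input directory must already exist in HDFS and the output directory must not; a typical sequence is to copy the input files in before submitting the job and read the results out afterwards (the local input path below is a placeholder):

hadoop_dir/bin/hadoop fs -put /path/to/local/input /file_dir_in_hdfs     # copy input into HDFS before submitting the job
hadoop_dir/bin/hadoop fs -cat /output_dir_in_hdfs/part-00000             # view the results after the job finishes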