vCenter 5.1 Upgrade – WAIT!

After a 12-hour hellish attempt at upgrading a vCenter 5.0 Update 1 server to 5.1, I have one bit of advice: wait! There are a number of bugs in the new Single Sign On (SSO) service that can cause quite a headache. I've been able to get it working on some clusters, but in general it's too buggy for production use.

Most of the errors are around security. The default installation created an MSSQL Express 2008 R2 database with a password that is only 6 characters long. That is a real problem if you have password policy enforcement in your environment. The bug is documented here:

There is another bug that restricts certain special characters from being used in the SSO default admin password.

After several hours on the phone with VMware support they suggested waiting until a hotfix comes out – which should be soon.

It looks like VMware released another buggy piece of code too early so that they could announce at VMworld that 5.1 would be coming soon. The removal of the vRAM entitlements is the main motivation for doing an upgrade.

UPDATE: VMware released a new version on the 25th:

vCenter Won’t Start (database full)

There are a number of reasons that vCenter won't start. One really fun one is a full database. If vCenter was installed with SQL Express, there is a 4 GB database size limit. The instructions here help reduce the database size:

NX-OS FCoE License

Enabling FCoE on a Nexus switch is pretty simple:

feature fcoe

What's not all that well documented is that this uses a temporary 120-day license. After the 120 days the switch disables FCoE. That is a very bad thing, especially when the switch is providing the storage backbone for a vSphere cluster running around 200 VMs.

To properly license FCoE a license number must be registered on Cisco’s website. This generates a license file that gets uploaded to the switch.
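As a rough sketch, once the generated .lic file has been copied to bootflash (the filename here is a placeholder), it can be installed and verified with:

install license bootflash:fcoe_license.lic
show license usage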

Move NetApp Root Volume (vol0) to a New Aggregate

By default vol0 is the root volume on a NetApp storage device and is stored on aggregate aggr0. After accidentally assigning too many disks to aggr0 I found the need to decrease the size of the aggregate. Unfortunately this is not possible. I had to create a new aggregate, copy vol0 to it, and then make the new volume the root volume.
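Roughly, the Data ONTAP 7-Mode commands look like the sketch below; the aggregate name aggr1, the volume name newroot, the disk count, and the volume size are placeholders for whatever fits your system:

ndmpd on
aggr create aggr1 3
vol create newroot aggr1 160g
ndmpcopy /vol/vol0 /vol/newroot
vol options newroot root
reboot

After the reboot the controller boots from the new root volume, and the old vol0 and aggr0 can then be taken offline and destroyed.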


Copy Files to a Nexus Switch (NX-OS and VRFs)

Typically on Cisco IOS the copy command will use the default management interface for TFTP traffic. After quite a bit of troubleshooting I found out this is not the case with NX-OS. You need to specify a VRF; for the management interface, 'management' must be entered.

Nexus_5010_Switch# copy tftp:// bootflash
Enter vrf (If no input, current vrf 'default' is considered): management
Trying to connect to tftp server......
Connection to Server Established.

Run Internet Explorer 7 On Vista and Windows 7

I do a lot of web development and constantly switch back and forth between browsers to make sure that sites look the same. Since I run Windows 7, my choice of Internet Explorer version is limited to IE8. However, it turns out that there is a very nice set of developer tools in IE8, accessible by pressing the F12 key (or going to Tools > Developer Tools). From there it is very easy to switch between IE7 and IE8 browser modes.

Getting Hadoop MapReduce 0.20.2 Running On Ubuntu

I decided to set up a Hadoop cluster and write a MapReduce job for my distributed systems final project. I had done this before with an earlier release and it was fairly straightforward. It turns out it is still straightforward with Hadoop 0.20.2, but the process is not well documented and the configuration has changed. Hopefully I can clear up the process here.

What is Hadoop MapReduce?

MapReduce is a powerful distributed computation technique pioneered by Google. Hadoop MapReduce is an open source implementation written in Java that is maintained by the Apache Software Foundation. Hadoop MapReduce consists of two main parts: the Hadoop Distributed File System (HDFS) and the MapReduce system.

Getting Hadoop

The first step is to download Hadoop from the Apache Hadoop site. It is worthwhile to read up on how Hadoop and MapReduce work before you move on to the installation and configuration.
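For example (the archive.apache.org path below is my assumption of where the 0.20.2 tarball lives; use whichever mirror the download page points you to):

wget http://archive.apache.org/dist/hadoop/core/hadoop-0.20.2/hadoop-0.20.2.tar.gz
tar -xzf hadoop-0.20.2.tar.gz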

Plan The Installation

Before the actual installation there is a bit of planning to be done. Hadoop works best when run from a local file system. However, for convenience it is also nice to have a common NFS file share to save configuration and log files. I used a local install directory on each node plus an NFS share (my home directory) for configuration and logs. For a distributed setup at least two nodes are required.

Initial Setup

Before doing any setup of the actual Hadoop system there is some initial setup that needs to be completed, namely the creation of an install directory on each node and a shared SSH key. The first step is the easiest: a Hadoop install directory needs to be created on each node that is going to be a part of the system. The directory must have the same name and location on each node. It is recommended not to use an NFS file share for the installation directory as it can affect performance.

After the install directory has been created, an SSH key needs to be generated on each node and added to the authorized_keys file. This allows for passwordless SSH login and is required by the Hadoop cluster startup scripts.
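A minimal sketch of both steps, assuming /opt/hadoop as the install directory (any consistent local path works) and an NFS-shared home directory so the same authorized_keys file is visible to every node:

sudo mkdir -p /opt/hadoop
sudo chown $USER /opt/hadoop
ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys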

Open Firewall Ports

Hadoop requires a number of ports to be open for the system to work. The main ones are listed in the table below; an example of opening them follows the table.

Port   Function
50010  DataNode data transfer port
50020  DataNode IPC port
50030  MapReduce (JobTracker) administrative web page
50060  TaskTracker port
50070  DFS administrative web page (NameNode)
50075  DataNode HTTP port
50090  SecondaryNameNode port
50105  Backup/Checkpoint node
54310  HDFS file system (NameNode)
54311  JobTracker service
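One way to open them on Ubuntu is with ufw; this is just a sketch and assumes ufw is the firewall in use (adjust if you manage iptables directly, and restrict to the cluster subnet if appropriate):

for p in 50010 50020 50030 50060 50070 50075 50090 50105 54310 54311; do
    sudo ufw allow $p/tcp
done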

Configuration Files

There are three main configuration files that need to be edited: hdfs-site.xml, mapred-site.xml, and core-site.xml. Each file resides in the conf folder of the extracted Hadoop directory. There are a lot of parameters that can go into each file, but only a few basic ones need to be set. A minimal example of each is provided below. The final file that needs to be edited is conf/hadoop-env.sh, which is a shell script that sets up Hadoop environment variables. At the very least the JAVA_HOME variable needs to be uncommented and properly set.
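These are minimal sketches using the standard Hadoop 0.20.2 property names; the hostname master, the replication factor, and the Java path are assumptions that should be adjusted for your cluster.

core-site.xml:

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master:54310</value>
  </property>
</configuration>

hdfs-site.xml:

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
</configuration>

mapred-site.xml:

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>master:54311</value>
  </property>
</configuration>

And in conf/hadoop-env.sh, something like (path is a placeholder for your JDK location):

export JAVA_HOME=/usr/lib/jvm/java-6-sun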




Set the Slaves and Master

The master node needs to be defined in the hadoop_dir/conf/masters file. Each slave node needs to be listed in the hadoop_dir/conf/slaves file, one machine name/IP address per line.
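For example, with placeholder hostnames:

hadoop_dir/conf/masters:
master

hadoop_dir/conf/slaves:
slave1
slave2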

Deploy the Installation and Configuration Files

The installation and configuration files need to be deployed to each node in the cluster. The easiest way to do this is with scp. I wrote a small script so that I could run a command on each node in my cluster; a rough sketch of it is shown below. Another alternative is the Cluster SSH program (cssh). Either approach is preferable to logging onto each node to run a command.
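This is only a sketch of the idea, not the original script; the script name run_on_all.sh and the nodes.txt host list are placeholder names:

#!/bin/bash
# run_on_all.sh - run the command given as the first argument on every node
# listed in nodes.txt (one hostname per line)
while read -r node; do
    echo "=== $node ==="
    ssh -n "$node" "$1"
done < nodes.txt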

Using my script I ran scp on each node in the cluster:

./ "scp -r ~/hadoop /opt/hadoop/hadoop"

This runs the command in quotes on each node in the cluster. In this case I copied the Hadoop installation from the NFS share (my home directory) to a local directory on each node.

Formatting the NameNode

Now that the Hadoop files are on each node, the NameNode can be formatted to set up the Hadoop file system.

hadoop_dir/bin/hadoop namenode -format

Starting the Hadoop File System

Now that the NameNode has been formatted, the distributed file system (DFS) can be started. This is done using the start-dfs.sh script in the bin directory of the Hadoop installation:
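hadoop_dir/bin/start-dfs.sh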


The status of the Hadoop file system can be viewed from the administrative page on the master server, http://master_server:50070.

Starting the MapReduce System

The final step in setting up MapReduce is to start the MapReduce system. This is done using the start-mapred.sh script in the bin directory of the Hadoop installation:
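hadoop_dir/bin/start-mapred.sh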


The status of the MapReduce system can be viewed from the administrative page on the master server, http://master_server:50030.

Submitting a MapReduce Job

Now that the cluster is up and running it is ready to start accepting MapReduce jobs. This is done using the hadoop executable from the bin directory of the Hadoop installation and a jar file that contains a MapReduce program. An example of running the WordCount demo program provided with Hadoop is shown below.

hadoop_dir/bin/hadoop jar jar_location/wordcount.jar org.myorg.WordCount /file_dir_in_hdfs /output_dir_in_hdfs

Performance Report in the Virtual Infrastructure Client

VMware vCenter Server reports a lot of performance information and displays tables in the Virtual Infrastructure Client. They provide a nice at-a-glance view, but do not allow for anything more. While poking around the GUI I found a feature to export the performance data to Excel by going to File > Reports > Performance. This is a nifty tool that is not very well documented.

Prompt a User for Input In PowerShell

Occasionally it is necessary to prompt a user for input in a PowerShell script. In my case I just needed to remind the user to do something, but the same command can get user input and store it in a variable.

$userInput = Read-Host "Hey user, enter some text!"

The text that the user enters in the above example is saved in the $userInput variable. (Note that $input is a reserved automatic variable in PowerShell, so it should not be used as a variable name.)

Automate Website Visits With Powershell and Internet Explorer

For my research I found the need to automatically visit a webpage to run a setup and a teardown script. It turns out that this is fairly easy to do. The script is included below; the URL is a placeholder for the actual teardown page.

#cd to the Internet Explorer install directory
cd "C:\Program Files\Internet Explorer"
#point IE at the teardown page (placeholder URL)
.\iexplore.exe "http://your_server/teardown"
#sleep for one second - this is needed so IE has a chance to start and request the page before it is killed
Start-Sleep -Seconds 1
#kill the internet explorer process
Get-Process iexplore | Stop-Process