One of my pandemic projects that may have gotten a bit out of hand was building a vSphere home lab. The initial plan was to simply upgrade a 7-year-old standalone ESXi server, but it quickly turned into buying a 1/4-height rack. I think I ended up with a pretty solid build that's power efficient and, most importantly, pretty quiet. The project also evolved into upgrading my home network to support 10Gbps.

The Rack

After some back and forth on rack size, I went with the tallest that would fit in my closet, which turned out to be a 1/4-height 12U rack. The main constraint was depth: I only had about 25″ total, which ruled out standard-length servers, but that aligned with the short-depth servers that would soon call the rack home. After a lot of searching I landed on this rack on Amazon. It paired nicely with the CyberPower CPS1215RMS power strip. Since this is mostly a lab environment, a UPS didn't make much sense, and I've been burned before by dying batteries causing issues. Fortunately, my power has been more reliable than any of the rack-mount UPSs I've seen.

ESXi Servers

After looking through a lot of options for lab ESXi servers, I landed on the Supermicro 5018D-FN8T. With a built-in processor, low power draw, a quiet fan mode, 6 x 1GbE ports, and 2 x 10GbE (SFP+), it made the most sense. Oh, and this bad boy can pack in up to 128GB of DDR4 ECC RDIMMs, in addition to a host of other ports and features, along with built-in rack ears. I went with RDIMMs so I could pack in the maximum RAM; for this build it was 2 x 32GB sticks per server, so there's room to upgrade in the future. I slapped in a 2.5″ SSD and HDD for local VM storage, and for the ESXi hypervisor itself a 32GB USB 3.0 thumb drive worked out just fine. The processor comes pre-installed, so it was a pretty straightforward build overall. Two of these servers made for a decently sized and flexible cluster. I considered vSAN, but I've been burned enough times that I wasn't going to bother with it. Enter a custom-built FreeNAS server for NAS/SAN duty.
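
Before getting to storage, a quick aside: once both hosts were online, a minimal pyVmomi sketch like the one below came in handy for confirming each host sees its full RAM and local datastores. This is just a sketch under my own assumptions — the hostname and credentials are placeholders, and the unverified SSL context is lab-only for self-signed certs.

```python
# Minimal pyVmomi sketch: connect to an ESXi host and report its RAM
# and datastore capacity. Hostname and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab-only: self-signed cert
si = SmartConnect(host="esxi-01.lab.local", user="root",
                  pwd="changeme", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        ram_gib = host.hardware.memorySize / 2**30
        print(f"{host.name}: {ram_gib:.0f} GiB RAM")
        for ds in host.datastore:
            s = ds.summary
            print(f"  {s.name}: {s.freeSpace / 2**30:.0f} GiB free "
                  f"of {s.capacity / 2**30:.0f} GiB")
finally:
    Disconnect(si)
```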

FreeNAS – SAN/NAS

My main requirement for storage was a 1U server footprint. Thanks to the short form factor of the rack, I needed a short-depth server, which took quite a bit of searching. I went with the Supermicro SYS-1019S-MC0T due to its short length, 1U form factor, 2 x 10GbE ports, and 8 x SAS/SATA 2.5″ drive bays. It was a perfect match for the rack, with the exception of the full-length rails that came with it. Thankfully, Supermicro makes a shorter version of the rails (model number MCP-290-00056-0N). One issue that became immediately apparent was the noise: this server SCREAMS even at idle. Out of the box it's not suitable for a small condo. Some additional searching led me to swap out the OEM fans for Noctua NF-A4x20 PWM fans. They're a tad underpowered, but pretty much silent. So far so good, with the server staying cool enough and being whisper quiet.
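
One gotcha worth flagging with low-RPM fans like the Noctuas: Supermicro BMCs ship with lower fan thresholds tuned for the screaming OEM fans, and if a quiet replacement fan idles below those thresholds the BMC can get stuck in a "fan failed" full-speed ramp loop. The usual fix is lowering the thresholds over IPMI. Here's a hedged sketch — the sensor names and RPM values are examples, so check `ipmitool sensor list` on your own board first.

```python
# Hedged sketch: lower the BMC's fan thresholds so quiet, low-RPM fans
# don't trip the "fan failed" full-speed ramp on Supermicro boards.
# Fan sensor names and RPM values below are examples, not gospel.
import subprocess

FANS = ["FAN1", "FAN2", "FAN3", "FAN4"]
# lower non-recoverable / lower critical / lower non-critical (RPM)
THRESHOLDS = ("100", "200", "300")

for fan in FANS:
    cmd = ["ipmitool", "sensor", "thresh", fan, "lower", *THRESHOLDS]
    print("running:", " ".join(cmd))
    subprocess.run(cmd, check=True)
```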

With this barebones server I did need to supply a processor, but I found a cheap Intel Core i3-7100 that has been doing the job just fine. Since the board accepts standard RAM, I was able to put a memory kit I had lying around to good use.

FreeNAS was the obvious choice for the storage OS. It's super lightweight and supports iSCSI multipathing as well as VAAI, so ESXi can offload storage tasks to the storage server. The server's 10GbE NICs are also officially supported. My first attempt was an old Supermicro server from eBay with a cheap 10GbE adapter; it turns out neither was well suited for FreeNAS. Lesson learned.
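
On the ESXi side, the multipathing only pays off once the iSCSI devices are set to round-robin path selection, so both 10GbE paths to the FreeNAS box carry traffic. Here's a rough sketch of driving esxcli from Python on the ESXi host to set the policy and check VAAI status — the naa. device ID is a hypothetical placeholder; pull the real IDs from `esxcli storage nmp device list` first.

```python
# Rough sketch: flip iSCSI devices to round-robin multipathing and
# show VAAI (hardware acceleration) status. The device ID below is a
# placeholder -- substitute the real naa. identifiers.
import subprocess

DEVICES = ["naa.000000000000000000000000deadbeef"]  # hypothetical ID

for dev in DEVICES:
    # spread I/O across both 10GbE paths instead of one fixed path
    subprocess.run(["esxcli", "storage", "nmp", "device", "set",
                    "--device", dev, "--psp", "VMW_PSP_RR"], check=True)
    # show whether ESXi detected VAAI offload support on the device
    subprocess.run(["esxcli", "storage", "core", "device", "vaai",
                    "status", "get", "-d", dev], check=True)
```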

That just left drives. I happen to like Samsung SSDs, so I went with 4 x 1TB Samsung 870 QVO. Since I still had 4 more drive bays open, I was on the hunt for more drives. The backplane supports SAS, so I figured: why not enterprise SAS? I picked up 4 x 900GB Toshiba 10K RPM SAS drives for $20 each. That should have been my go-to for all 8 drive bays! On the plus side, I can now sport two tiers of storage. I still have some benchmarking to do for NFS vs iSCSI on both the SSD and SAS tiers.
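
The plan for that benchmarking is to run the same fio job against a datastore on each tier/protocol combination, roughly like the sketch below. The mount points are hypothetical, and it assumes fio is installed in a Linux test VM with a virtual disk on each datastore.

```python
# Sketch of the planned benchmark pass: identical 4k random-read fio
# job against each tier/protocol combo. Paths are hypothetical.
import subprocess

TARGETS = {
    "ssd-iscsi": "/mnt/ssd-iscsi/fio.dat",
    "ssd-nfs":   "/mnt/ssd-nfs/fio.dat",
    "sas-iscsi": "/mnt/sas-iscsi/fio.dat",
    "sas-nfs":   "/mnt/sas-nfs/fio.dat",
}

for label, path in TARGETS.items():
    print(f"--- {label} ---")
    subprocess.run([
        "fio", f"--name={label}", f"--filename={path}",
        "--rw=randread", "--bs=4k", "--iodepth=32",
        "--ioengine=libaio", "--direct=1", "--size=4G",
        "--runtime=60", "--time_based", "--group_reporting",
    ], check=True)
```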

Network Overhaul

With all these 10GbE-capable servers, it was time for a 10GbE-capable switch. It turns out they get pretty expensive. After a lot of searching for the optimal feature-to-price breakdown, I landed on the Ubiquiti Switch XG 16 (found on Facebook Marketplace for half off). I got the box all set up and then the first problem arose: I needed a Ubiquiti controller. They offer one through their cloud, but you need to have 10 devices under management, and neither my budget nor my available rack space, let alone my requirements, made that viable. I had already purchased a 6-port EdgeRouter (which has a local management plane), but it was not compatible with the new switch. It's worth mentioning that the EdgeRouter is a kick-ass, fast router for the money, albeit something I quickly outgrew for this build. I was able to set up the UISP management software on a VM, but I didn't want to have to manage yet another VM with all its care and feeding needs.

Another shortcoming of the EdgeRouter was its lack of a 10GbE uplink. Since the 10GbE switch doesn't do L3 routing, I needed a router capable of routing at 10Gbps. It turns out that for the money the Dream Machine Pro was a good fit. It offers a 10GbE internet port (I wish I could get FIOS to use anywhere near that), a 10GbE LAN port, and 8 x GbE ports.
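
To sanity-check that routed traffic actually hits 10Gbps through the Dream Machine Pro, a quick iperf3 run between hosts on different subnets does the trick. A small sketch, assuming iperf3 is running in server mode on the far end (the server address is a placeholder):

```python
# Quick throughput check across the router: 4 parallel iperf3 streams
# for 10 seconds, parsing the JSON report. Server IP is a placeholder.
import json
import subprocess

out = subprocess.check_output(
    ["iperf3", "-c", "10.0.20.10", "-P", "4", "-t", "10", "-J"])
report = json.loads(out)
bps = report["end"]["sum_received"]["bits_per_second"]
print(f"throughput: {bps / 1e9:.2f} Gbps")
```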

Outcome

The build was certainly a huge upgrade and a great learning experience. Overall it turned out to be a pretty solid lab build (usage screenshot below). Hopefully this build can help others! Feel free to leave comments with thoughts and questions.