Some time ago, back in December of 2015, I started a three-part story on how to build a supported SMB or lab environment. It consisted of a couple of Hewlett Packard Enterprise Microserver Gen8s and a NAS by QNAP. Check out the three-part article here. Although this is still a good way to go, HPE released the Gen10 of the Microserver about eight months ago and from a personal standpoint I don’t like it. It still has a memory limitation of 32 gigs and it incorporates an AMD Opteron CPU, with which I had a few bad experiences in the past.

The Search

So, on the lookout for new lab servers, I started a Google search and looked at what other folks are using. That last part was quite easy, as I had been invited to a home-lab group on Facebook. That group is a bunch of very pleasantly disturbed people who run home labs in all kinds of configurations. What I noticed, however, is that a lot of them run phased-out enterprise equipment like the Dell R710. I quite like that box, and I actually still own one, but for a home lab it has two huge drawbacks: noise and power usage. When you switch it on, it sounds like you are sitting in the middle of runway 24R at LAX while a 747 takes off over your head. After running it for a week, you will receive calls and letters from your electricity company telling you that you just made it onto their ‘favourite customer’ list due to your power usage. I wanted something small and powerful that could take more than 32 gigs of RAM and still be quiet and low on energy usage.

I looked at Intel NUCs and Gigabyte Brix boxes during my Google voyage. But as they are really aimed at consumer usage, they lack connectivity, proper remote management and supported memory configurations. The ones that do qualify from a memory perspective mostly top out at 32 or sometimes 64 gigs, but with only two SO-DIMM slots, putting a proper amount of RAM in one would cost a fortune.

Basically, I wanted something smaller in size than the Microserver and with more punch and extensibility than the Intel NUCs. I remembered that at VMworld 2017, William Lam ran a hackathon with small Supermicro servers. While Googling I came across those again and decided to take a closer look.

The Choice

As it turns out, those ‘little’ boxes are full-blown servers built around Intel’s SoC version of the Xeon. They support up to 128 gigs of ECC RAM. They also have lights-out management including a remote KVM console in HTML5, and they come with 1-gig and 10-gig NICs on board (plus, for the E300, a PCIe slot to add more). Last but not least, they have room for an mSATA disk, an M.2 disk AND a ‘normal’ SATA drive, meaning you can run vSAN on this box without any drawbacks.

So the choice came down to two Supermicro servers: the SYS-E300-8D and the SYS-E200-8D. The E300 has a lesser CPU (4 cores) but more on-board connectivity plus a PCIe slot. The E200 has a better CPU (6 cores), but instead of six 1-gigabit NICs it has two (and two 10-gig NICs, but I do not run a 10G network yet) and no room for extension via a PCIe card. The rest of the specs are identical. Because I rarely run into CPU limits in my home-lab, but I do use six NICs for my iSCSI storage, the E300 became the winner.

The Physical Network

Back in 2015, when I started to build out my network a bit more, I chose HPE’s 1910 line. It was quick, it supported LACP and layer 3 “light”, and it understood VLANs. Although I didn’t use VLANs back then, I did want the support as I was planning on implementing the technology. Then I discovered Ubiquiti and the rest is history. We are now a Ubiquiti household with switches, access points and a router. It might not be the quickest boy on the block, but it is so easy to configure for VLANs and so much more, and it all ties into each other, so I decided to buy a 48-port (non-PoE, I’m not insane) switch for my home-lab.

With VLANs implemented in the physical world it seemed only fair we would also do that in the virtual world and so we did. It was very easy to define the correct portgroups with the associated VLAN tags so now it all works like a charm.
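For anyone who prefers scripting this over clicking through the vSphere client, a standard-vSwitch portgroup with a VLAN tag can be set up from the ESXi shell roughly like this. The portgroup name, vSwitch name and VLAN ID below are examples, not my actual configuration:

```shell
# Create a portgroup on an existing standard vSwitch
# (vSwitch1 and the names/IDs here are illustrative)
esxcli network vswitch standard portgroup add \
    --portgroup-name "iSCSI-A" --vswitch-name vSwitch1

# Tag the portgroup with VLAN 20 so the switch port
# (configured as a trunk) carries the right traffic
esxcli network vswitch standard portgroup set \
    --portgroup-name "iSCSI-A" --vlan-id 20
```

The same result can of course be achieved through the vSphere client or PowerCLI; the point is that once the physical switch trunks the VLANs, tagging on the portgroup is all the virtual side needs.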

The Storage

Those of you who followed the three-part story on my previous home-lab journey will remember I picked a QNAP TS-453a back in 2015 to do iSCSI. That went well in the beginning, but I did notice a slowdown over time. After a complete reconfiguration the QNAP picked up speed again, but with just 4 bays we ran out of space, and I did not want to go to a huge 8-bay NAS again. So I came up with this idea: as the Microservers are being phased out anyway but are perfectly in working order, why not fill one with 4 proper disks, add an extra SSD for caching and put FreeNAS on it? I did exactly that last year when the NAS we used to store our personal stuff on was close to dying. You can read all about it in the post I wrote back then. In the meantime, I upgraded the Microservers from their original Celerons to Xeon E3s, so they pack plenty of punch to run a storage box. Why not build another one, but for iSCSI? And so we did. If you look at the picture, the blue box is the iSCSI SAN for the home-lab and the red one is the NAS for file sharing.

As it turns out, FreeNAS is very capable of supporting VMware ESXi on iSCSI. The combination is quick enough that you notice no delays from the vSphere side. The configuration consists of 4 disks set up in the ZFS equivalent of RAID10, supported by an SSD that caches whatever cannot be written to disk quickly enough. As this is a former ESXi host, it still has its six NICs, four of which are now dedicated to iSCSI traffic for the home-lab environment. With 16 gigs of RAM in the box, it has enough working space to keep the data flowing quickly. While migrating data, we hit 120 MB/s per session.
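In ZFS terms, that RAID10 equivalent is two mirrored vdevs striped together, with the SSD attached as a log device (a SLOG, which absorbs the synchronous writes iSCSI generates) or as an L2ARC read cache. In FreeNAS you would normally build this through the GUI, but from the shell it looks roughly like this. The pool and device names are illustrative, not my actual ones:

```shell
# Two mirrored pairs, striped across -> the ZFS equivalent of RAID10
# (ada0..ada4 are example FreeBSD device names; check yours first)
zpool create tank mirror ada0 ada1 mirror ada2 ada3

# Add the SSD as a separate log device (SLOG) to absorb
# synchronous writes, such as those coming in over iSCSI
zpool add tank log ada4

# Alternatively, 'zpool add tank cache ada4' would use the
# SSD as an L2ARC read cache instead

# Verify the layout
zpool status tank
```

Losing one disk per mirror is survivable, and because the mirrors are striped you get roughly double the throughput of a single pair.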

Remote Management

The new servers come with remote management integrated, and quite nice too. What makes it more likeable than the iDRAC and iLO generations I know, which have relied on Java or .NET for ages, is the integrated HTML5 KVM. That means no odd popups, no strange Java prompts, no ‘ignore security’ buttons, just an interactive website that shows you your ESXi console the way you want it. And it works quite well. Obviously you need a browser that supports HTML5, but any modern browser does.


So there you go. The new home-lab in all its shine and colour. I must admit that the Supermicro servers were slightly more expensive than I initially planned. Many thanks to Supermicro’s René Hasewinkel for supporting VMGuru and helping to make this a bit less capital intensive. For the future I plan to upgrade the hosts to run vSAN on them, eventually phasing out the iSCSI part and maybe even moving to 10 Gig Ethernet. These servers certainly make it possible.