A couple of weeks ago, I started with “A Low Budget SMB Environment“. That post focused mainly on the selection and purchase of the different components. This post takes it to the next level: assembly. We will look at the specific parts, assemble the whole environment, install ESX and configure the storage.

Two things I’d like to address before we start. After the first post, I received comments about the budget: it was too expensive. Especially for a lab environment, a lot of readers tend to pick up used hardware, plug in some local storage or install a software NAS and be done with it. “Good enough for a home lab”, and I agree with that motivation. If you are a student with only a couple of hundred to spare rather than 5,000, this will also do the trick for you. From an SMB perspective, however, you want a supportable infrastructure where, in case of defects, you have someone to fall back on (HP, QNAP, VMware). The focus for an SMB should be that this is a production environment: you can’t really afford the system to be down. Other comments were about single points of failure, especially with regard to the NAS and the switch. My response: true, these are single points of failure, and you could cover them by buying two of each. That would add about 1,500 euros ($1,700) to the budget. If you can afford that, superb, by all means go for it. The focus of this article, however, is to get you into business as cheaply as is still supportable, which is why I chose the configuration the way it is in this article.

Right, that being said, let’s get started with the build.

Server Assembly

The server we selected is the HP Microserver Gen8. It’s a very basic little cube that is not rack-mountable, but then again, most smaller companies have an IT corner instead of a server room. The box is very compact. The server comes packed with a power cable and a documentation package. As you may remember from the previous post, we also purchased two 8GB RAM modules and a MicroSD card. So, after unboxing and unwrapping everything, you should have a small collection of hardware in front of you. Please be aware that we selected the RAM modules carefully; the Microserver is pretty picky when it comes to RAM modules. You can order the HP-certified modules with the server, but you will end up spending a lot more money. We went for Kingston ValueRAM modules (specifically: Kingston KVR16E11/8) that retail for about 70 euros. For the MicroSD card we just got what was top of the rack in the store: a SanDisk 16GB MicroSD module. You can go with less, as VMware ESX does not require that much storage, but do buy a proper brand. You would not want your SD card to die on you before you are well up and running.

To install both the RAM modules and the MicroSD card, we need to open the server. This is quite easy: on the back there are two thumbscrews that are quickly undone without the aid of any tools. Once you pull the cover back and up, you will find both installation locations. The RAM modules are located on the left side of the server; the MicroSD card slot is located on the motherboard on the right side. The MicroSD card slot actually has a little platform in front, so you can lay down the card and carefully slide it forward into the slot. Push it forward until you hear the click that tells you the card is properly inserted into the slot.

The RAM modules are located on the other side of the server. Carefully push down the brackets on both ends of the memory slot to remove the SD-RAM module that came with the server (either 2GB or 4GB, depending on the order number). Once the two 8GB modules are installed, you can store this original module in the box the 8GB modules came in. Push back the brackets on both memory slots so you can insert the modules. Make sure you line the modules up correctly: the little cut in the module should be on the left side. If the module is lined up correctly, push down firmly but gently until the brackets on both sides click into place. The modules are now installed and you can remount the cover.

Two remarks I’d like to make here. First, the MicroSD card slot is not hot-pluggable. Bear that in mind if you run into trouble while installing or otherwise: the card is only detected correctly when it is properly inserted in the slot before you push the power button. Second, in the kit list we included a low-profile network card by HP. As we purchased this card after we did the build, it’s not included in these instructions. However, it is installed just as easily as the other components, with no tools needed. You do need to unplug the small plug next to the memory modules and the power plug located next to the MicroSD card slot. After that, you can unscrew the thumbscrew on the back of the server and pull back the mainboard. Push out the slot cover to free up the card slot. Now you can slide in the network card, push it down into the PCIe slot and push down the locking mechanism to secure the card. Slide the motherboard back into the case and reattach the plugs you unplugged before.

So, if everything went well, you can now plug in keyboard, monitor and power and push the power button. The server has a two-stage BIOS. In the first stage it will check all the components for compatibility. If your BIOS gets stuck here, it might be because something is not mounted properly or something you bought is not compatible. If this is the case, you will have to remove the incompatible part and start a search for a working one. Once you get to the screen displayed to the right, you’re in the green. HP servers offer a boot menu on keypress to select a different boot device than the default; on the Microserver this is located under F11.

Now, before we move on, it is important to have your installation media prepared. You can either use a USB stick or a USB CD-ROM; the server does not offer any other manual options (not quite true: you can do a network boot if you have a PXE boot server with an ESX image available). In this case we went for the easiest and quickest option, which is a USB stick. VMware does not offer a bootable USB image, but you can download the ESXi ISO image and put that on a USB drive. Please do note that HP offers a special ISO image for ESX. This image includes the optimal drivers for HP hardware as well as drivers for the systems management chips inside the system. It is recommended to use this HP image. To make your USB stick bootable, we have used a tool called Yumi USB Boot, which can be downloaded at the Pendrive Linux website. Yumi is a Windows tool that lets you select an ISO image, the HP ESX image in our case, and mount that under the option “Try Unlisted ISO (GRUB)”. The tool will then transfer your HP ESX ISO image to the USB stick and install a boot menu as well as a boot sector.
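Before writing the ISO to the USB stick, it is worth verifying the download against the SHA-256 checksum published on the download page; a corrupt image leads to confusing installer failures. A minimal sketch in Python (the ISO file name in the usage comment is a hypothetical placeholder):

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Return the SHA-256 hex digest of a file, reading it in 1 MiB
    chunks so even a multi-GB ISO never has to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Usage (file name is a placeholder for your downloaded image):
#   print(sha256_of("VMware-ESXi-HP-Custom.iso"))
# Compare the printed digest with the checksum on the download page.
```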

When you press F11 and select the USB boot option, you should see the boot menu. Move down to the GRUB option and press enter; you should now see your HP ESX entry. Select this and press enter again. The ESX bootable installer should now start and you are presented with the default screens. Please follow the instructions on the screen. When you have to select the disk on which to install ESX, it should display the MicroSD card as “HP ILO”. If it does not, your card is probably not mounted or working correctly. Please keep in mind that this is not a hot-plug storage device, so power off the server before you remove the cover to check on the MicroSD card (it’s safer to power off anyway before you remove the cover). If the card is displayed correctly, select it by moving down with the arrow keys until the HP ILO entry is marked yellow and press enter. Follow the instructions on the screen until the end of the installation questionnaire. After pressing F11, ESX will install itself on the MicroSD card and you are good to go.

When you are really doing this step by step, you of course have two other servers to configure, but this should be a piece of cake by now. As the installation procedure takes a while, we’ll move on to the storage part here. Preparing the QNAP NAS also takes some time.
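Since the same installation has to be repeated on two more servers, it is worth knowing that ESXi also supports fully scripted installs via a kickstart file when you boot it from a PXE server or modified media. A minimal sketch of such a file (the values below are generic examples, not taken from this build, and would need adapting to target the MicroSD card):

```
vmaccepteula
install --firstdisk --overwritevmfs
rootpw MySecretPassword
network --bootproto=dhcp --device=vmnic0
reboot
```

For three servers the interactive installer is fine; scripted installs start paying off when you roll out the same image more often.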

Storage Assembly

The box of the QNAP contains a bit more than the HP server box. The disks are not included, but they do deliver everything else. It comes with:

  • The QNAP NAS itself
  • A power adapter and power cable
  • Two network cables
  • An infrared remote control
  • Screws for 3 1/2 inch and 2 1/2 inch hard disks

Now, one thing I do not get is why there are only two network cables. The QNAP NAS has four network interfaces, and as IT guys, we love to use them all. Why not include four cables then? It’s not like the purchase price would go up by tens of euros or dollars. Anyway, to connect them all, you will have to buy two more cables of about 1.5 meters.

QNAP kindly delivered two packs of screws, one of which fits 2 1/2 inch disks, like our SSD. The other pack fits 3 1/2 inch disks, like the ones we intend to use for the RAID set. The cradles in the NAS are marked to show where to put the screws for 2 1/2 inch and 3 1/2 inch disks. For the first assembly we mount the Samsung SSD, then we mount the three Seagate Enterprise NAS disks. We chose the Samsung 850 Pro SSD; this drive should have the quality to perform at a high level for a long time in the NAS configuration we chose. The Seagate Enterprise NAS disks spin at 7200 RPM. This speed ensures that data will be served up quickly, although it will not be the most power-friendly solution.

We are going to configure the SSD for cache and the three Seagate disks as a RAID 5 volume. Because this NAS has four drive bays, we are a bit limited in the number of disk configurations we can pick from. The SSD takes up one slot, leaving three open slots. The safest construction for your data is to put the three disks in a RAID 5 volume. This way the data is striped over three disks: for every stripe, data blocks are written to two disks and a parity block to the third, with the parity rotating across all disks. Effectively, with three disks you give up one disk’s worth of space in parity blocks for data safety. In total we will have slightly less than 4 TB of storage available, and the SSD will speed up reads from the NAS with a predictive algorithm. This way data is available even more quickly for ESX to process.
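The RAID 5 arithmetic is easy to work out yourself: with n identical disks, one disk’s worth of space goes to parity and the remaining n−1 hold data. A quick sketch (the 2 TB disk size is an assumption based on the capacities mentioned in this build; the second figure shows the same capacity in binary TiB, which is how admin UIs usually report it):

```python
def raid5_usable_bytes(num_disks, disk_bytes):
    """Usable capacity of a RAID 5 set: one disk's worth of space is
    consumed by (rotating) parity, the rest holds data."""
    if num_disks < 3:
        raise ValueError("RAID 5 needs at least 3 disks")
    return (num_disks - 1) * disk_bytes

TB = 10**12   # decimal terabyte, as printed on the disk's box
TIB = 2**40   # binary tebibyte, as most admin UIs report capacity

usable = raid5_usable_bytes(3, 2 * TB)   # three 2 TB disks (assumed size)
print(usable / TB)             # 4.0  -> "slightly less than 4 TB"
print(round(usable / TIB, 2))  # 3.64 -> close to what the NAS UI shows
```

The gap between "4 TB" and the roughly 3.6 the NAS reports is purely the decimal-versus-binary unit difference, plus a little filesystem overhead.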

QNAP does allow for a write cache as well, but to have that, you need at least two SSDs. As we only have four bays, that would seriously impact our data storage. Fortunately, there is a solution if you need to expand your storage once the 4 TB you initially purchased is running at capacity: QNAP sells expansion racks. The UX-500P is a 5-bay chassis; the UX-800P is an 8-bay chassis. These expansion chassis are connected via USB 3.0 cables.

So, when the disks are in, it’s time to fire it up. You do not need to hook it up to the network just yet, although it can’t hurt; there are no file services available until you start them for the first time. It takes a bit of time before the QNAP NAS is done booting. If you have a DHCP server in your network, the NAS will request an IP address from it and display this on the screen on the device. Point your browser to that address over HTTPS.

When you log on for the first time, you will be greeted by the first boot wizard. This wizard will guide you through the initial setup: selecting your disks, setting up RAID and preparing for general NAS service. This QNAP NAS has several functions and options, but as this article is focused on setup in combination with a VMware cluster, we don’t want it to spend CPU time on services we don’t use anyway. So in the first screen, select “business use”; that way the NAS will not start any features in the background we don’t need. In the next screen you enter the system name as well as the new admin password. Next you need to enter network settings. Enter a fixed IP here, unless you want to run the risk that after a reboot of the NAS you need to reconfigure ESX because the IP has changed.

As we have a ‘non-standard’ disk configuration (one SSD with three HDDs), you need to pick “Configure Disks Later” in this screen. The last screen gives you an overview of what you configured. When you click “Apply”, the settings are made active and you can log into the admin interface.

Next, in the admin interface, select Storage Manager. This screen shows you the installed disks and a wizard to configure them. Select the three Seagate disks and configure them as a RAID 5 set. While this RAID 5 set is being created (this takes a couple of hours), you can continue and configure a volume on the RAID set. Create a static volume over the maximum span of the disks; in our case this creates a volume of 3.62TB. When you click “Finish”, your NAS is almost ready. The final step is to add the SSD as a cache disk. To do so, go to the Storage Manager again and select “Cache Acceleration”. Here, click “Add SSD” and follow the wizard to add the SSD and assign it to the NAS. Now all read actions on the NAS are cached by the SSD. Finally, we need to wait until the RAID creation is finished; as mentioned, this can take a couple of hours. When it is done, we can create a couple of file shares for NFS so the servers can connect to the NAS and store virtual machines there.

QNAP also offers iSCSI to connect to ESX, but we picked NFS for a couple of reasons. First, NFS is easy to configure: we create the share, select the IPs that can connect, run through the wizard on the ESX side and we’re in business. Second, iSCSI traffic is pretty specific and not handled equally well by every switch out there. We’re using a good switch, but it’s not optimized for high-throughput traffic. NFS is the safest bet and will work like a charm in this environment.
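For reference, the ESX side of the NFS connection can also be done from the ESXi shell instead of the wizard, which is handy when repeating the step on all three servers. A sketch using esxcli, run on each host (the NAS IP, export path and datastore name below are placeholders for your own values):

```
# Mount an NFS export from the QNAP as a datastore
# (host, share and volume name are placeholders)
esxcli storage nfs add --host=192.168.1.50 --share=/VMware --volume-name=qnap-nfs

# List mounted NFS datastores to confirm it is accessible
esxcli storage nfs list
```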

In the next part, we will configure ESX, connect to the storage, install vCenter and deploy our first virtual machine.