A low budget SMB environment – Part 3: Configure and Run
This is the final part of the series on a low budget SMB or lab environment with VMware and vendor support. In this part we will configure our hosts, connect them to the storage, and deploy vCenter and our first virtual machine. In the last part we installed ESX and assembled the QNAP storage with its disks. So we first need to configure our hosts and create an NFS share to store our virtual machines (VMs) on, the first of which will be our vCenter VM.
There are several ways of offering storage to your ESX environment. In large environments you will often see a storage solution connected via Fibre Channel (FC) or Fibre Channel over Ethernet (FCoE). Other solutions run iSCSI over Gigabit Ethernet. For this SMB or lab solution we chose to go with the NFS protocol, as it does not have special requirements on the networking side, is relatively easy to set up, and is supported by VMware and most NAS vendors.
Beware that this is quite a task with a lot of steps, some of which have to be followed to the letter or you will end up with a non-functional environment. Then again, it's the final stretch to a brand new virtual infrastructure, so let's go!
Configuring the Host(s)
First of all, it is a good idea to configure the Integrated Lights-Out (iLO) management module in the servers. HP has developed this technology over the years, and the MicroServers come with iLO version 4. During the boot process the server will display a message on screen, and if you press F8 you can configure iLO with an IP address. Once you've done that, you can log into the iLO console and check your server.
iLO has a license model; the license dictates which additional features are enabled. I've purchased an Advanced license so I can use the remote console. It's handy if you are too lazy to get up and walk over to your servers, or if you have put the servers in a corner where there is no room for a display. Even without it, iLO always offers you the virtual power button so you can switch the server on and off. As long as there is power available and the server is plugged in, iLO will be available as well, even if the server itself is not switched on.
In the previous episode we installed ESX on the server using a Micro SD card as the storage device, and I'm assuming you've done this with all the other hosts as well. Now it's time to move to the next step. First of all, you need to assign network addresses to the ESX hosts. As my lab network already has a bunch of addresses in use, I picked my IPs from my own free list. If you start off with a new install, I recommend you pick a range of IPs for the servers and keep a few spare in case you need to add another host. Once you've selected the IP address for your host, go to the console of your first server, press F2 and enter the password you chose during the installation procedure. After you press enter, you will land in what is called the DCUI, or Direct Console User Interface. Here you can configure the local network cards, assign an IPv4 or IPv6 address (or both), assign DNS servers and enter a hostname. For our example, I've chosen not to use IPv6 and just stick with IPv4. You can of course assign and enable the protocols any way you prefer.
First you should check whether your server is properly connected. As you can see in the example, I only have one cable plugged in right now. Before I finalize the configuration, I should plug them all in and make sure all interfaces show 1000Mbit/Full Duplex. For now, one interface will suffice. If you do not see all interfaces, it can be one of two things: either ESX does not support the network card you've plugged in, or the card is not working. For the first scenario there might be a solution in the form of an additional driver. For the second you might want to check whether the card is properly inserted into the PCIe slot and locked down by the bracket. This can be a little tricky, as the server has very limited room for the low profile card. Always switch off the server before you open the case and fiddle with the internal bits. A short circuit is created easily, and it's not likely HP will refund you or swap out your server if you brick it yourself.
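For what it's worth, once you have shell or SSH access to a host (we enable SSH a little later for the VAAI plugin), the same link check can be done from the command line. This is just a quick sketch; the NIC name vmnic0 is an example and may differ on your hardware.

# List all physical NICs with their link state, speed and duplex
esxcli network nic list
# Show the details of a single NIC (vmnic0 is assumed here)
esxcli network nic get -n vmnic0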
So after you've checked that the network is connected and working, you can enter an IP address. As mentioned, for this lab environment we are sticking to IPv4 and disabling IPv6. In general, ESX does not need a reboot when you enter or change the management address in the DCUI, but if you disable a protocol, like we do now with IPv6, it will want to reboot to disable that stack. After you've entered the IPv4 address, press escape, move down to IPv6, choose DISABLE, press escape again and move down to DNS. Here you can enter the name servers for your network, if you have any.
In the same section you also enter the hostname for the host. If you have an external name server, it is a good idea to enter the hostnames and assigned IP addresses there as well. It helps you keep track of which host is which, which is very handy in case of problems. It is not required, however; ESX and vCenter will work fine with just the IP addresses entered. Notice that you do not enter license numbers anywhere in this screen. Licenses are entered in vCenter and assigned to hosts from there, not on the individual host. This is convenient as well as practical: if a host dies or gets reassigned, it's easy to reassign the license to another system without having to log into the console of that system to painfully enter a long string of numbers.
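As a side note, everything we just did in the DCUI can also be done from the ESX command line, which is handy if you have to repeat it for several hosts. The sketch below uses example values (vmk0 as the management interface, 192.168.1.11 as the host address, esx01.lab.local as the name); substitute your own addressing, and remember that disabling IPv6 still requires a reboot.

# Set a static IPv4 address on the management VMkernel interface (vmk0 assumed)
esxcli network ip interface ipv4 set --interface-name=vmk0 --type=static --ipv4=192.168.1.11 --netmask=255.255.255.0
# Set the default gateway (example address)
esxcli network ip route ipv4 add --network=default --gateway=192.168.1.1
# Add a DNS server and set the host name (example values)
esxcli network ip dns server add --server=192.168.1.2
esxcli system hostname set --host=esx01 --domain=lab.local
# Disable the IPv6 stack; this takes effect after a reboot
esxcli network ip set --ipv6-enabled=false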
Setting up Storage
So, when you are all done, your host will reboot and you will eventually see the status screen. Now, in combination with our QNAP storage, it is recommended to install the QNAP VAAI plugin so the ESX hosts can make use of hardware acceleration on the NAS. To do this, you need to enable SSH in the DCUI (this is found in the main configuration screen under "Troubleshooting Options"). Once you've enabled SSH, download the QNAP VAAI Plugin from the QNAP website. Unzip it and use a program like FileZilla or WinSCP to copy the VIB to the host. Log into the host with your 'root' account and password and enter this command to install the VIB:
esxcli software vib install -v /QNAP-QNPNasPlugin-1.0-1.0i.vib
It takes a few moments, but it will install the VIB on your host. This works up to ESX v6. After this you can go ahead and download the QNAP vSphere Plugin as well. If you install it on the same client machine as the VMware VI Client, you will get an extra QNAP tab in your VI Client. This concludes the work we need to do on our hosts. If all went well, you should be able to open a browser, go to your host, download the VI Client, enter the IP, username (root) and accompanying password there, and it should look like the picture on the right.
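By the way, if you want to verify that the VIB actually made it onto the host, a quick check over the same SSH session looks like this (the exact VIB name can differ per plugin version):

# Check that the QNAP VAAI plugin shows up in the list of installed VIBs
esxcli software vib list | grep -i qnap
# Show whether VAAI hardware acceleration is reported per storage device
esxcli storage core device vaai status get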
As you can see, there are two messages on the server: no datastores have been connected, and there is no persistent log location for the host. First we remediate the datastore part. You have already installed the VI Client; now install the QNAP vSphere Plugin. Once you've done that, you can add your QNAP NAS on the QNAP tab. Enter the IP address you've chosen for the NAS and your admin credentials. We've selected NFS only, as we're not going to set up iSCSI. Press ADD and it should display the specifics of your NAS. After this step we can configure an NFS datastore to store our VMs on.
If you right-click on your host, you should get a sub-menu with a QNAP option near the last entry. It looks something like the screenshot on the right. When you select "Create Datastore", you should be able to select your NAS and the host where you want it to be added. In the next screen select NFS, and in the following screen you can enter a name for your datastore and the volume on which it should be placed. If you configured it like we did in Part 2, you should have one volume only. Click next and you will be presented with a short summary screen. Click finish and the wizard should start a task that configures a datastore on your QNAP NAS and connects it to your vSphere host.
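Should the plugin give you trouble, the same NFS share can also be mounted by hand on each host from the command line. This is a sketch with assumed values: 192.168.1.20 for the NAS, /VMware as the exported share and QNAP-NFS01 as the datastore name; use whatever your QNAP actually exports.

# Mount an NFS export from the NAS as a datastore (example address, share and name)
esxcli storage nfs add --host=192.168.1.20 --share=/VMware --volume-name=QNAP-NFS01
# Verify that the datastore is mounted
esxcli storage nfs list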
Once this task is done, you can start remediating the other message. Go to the Configuration tab in the VI Client and click on "Advanced Settings". Scroll down and look for the entry that starts with "Syslog". If you click the plus sign, it expands to "Global". Click on Global and look for the entry "Syslog.global.LogDir". In the textbox on the right you enter the name of the datastore you've just created between the square brackets, followed by a folder name for the host you are configuring. Please be careful to enter a different folder for every host; you will run into trouble if multiple hosts write to the same log files. Take a look at the screenshot on the right for the exact syntax and make sure you have no errors in the datastore name. When you're ready, click OK and all the messages in your VI Client should be gone.
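The same syslog setting can also be pushed from the command line, which saves a few clicks when you have to repeat it for every host. The datastore name (QNAP-NFS01) and folder (logs-esx01) below are just examples matching the naming used earlier.

# Point the host's logs at its own folder on the shared datastore
esxcli system syslog config set --logdir=/vmfs/volumes/QNAP-NFS01/logs-esx01
# Reload the syslog service so the change takes effect
esxcli system syslog reload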
Installing vCenter
Now we can move on to installing vCenter. For this lab environment we've picked the vCenter Server Appliance, or VCSA. It is to be expected that in time this will be the only flavour of vCenter VMware will maintain. Two huge advantages are that you need neither a Windows VM (and license) for it nor a separate SQL Server installation; the VCSA brings it all along in one handy Linux appliance. Once you've burned or mounted the ISO file (I'm assuming you know what to do with ISO files), open the "vcsa-setup.html" file in the root of the ISO. Now, I've had some bad experiences in the past with Firefox and Chrome here, so I recommend you open it with Internet Explorer.
This has always worked for me. Once you've opened the file in your browser, you should get a popup asking you to install the VMware Client Integration Plugin. This part is essential, as your vCenter installation will not continue until you install this little piece of software. Run through the wizard and once it has finished, reopen or refresh the browser screen with the HTML file. If the installation went well, you should see the blue screen with the option to either install or upgrade your vCenter. As we have no instance yet, click install and a wizard should start. Pick one of the hosts you prepared, accept the certificate notification and enter the username (root) and password. Once you've clicked next, you will be presented with one of the most important questions when installing vCenter: do you want to install vCenter with an integrated Platform Services Controller (PSC), or do you want your PSC configured externally? The PSC, amongst other things, handles single sign-on requests and manages licenses. As our environment is small and will probably not exceed 5 hosts and a few admin users, we can safely select the integrated option here. Larger deployments will opt for a different deployment model.
In the next screens we create our Single Sign-On domain. You can have this integrated with Microsoft's Active Directory, but as our environment will only be small, that might be more complicated than it is worth. Enter a password for the administrator and a name for the SSO domain. This is the name you will have to enter behind the administrator account to log on, so keep it easy to remember; you will be typing it quite a few times. Next, the wizard wants to know how big the environment is going to be. The option you select here decides how big or small the vCenter VM will be once the wizard is finished. As we will be a small environment with fewer than 10 hosts, select the option Tiny. This will result in a vCenter VM with 2 vCPUs and 8 GB of RAM. Next you can select your database model; select the "Embedded Database" option here.
The next screens will ask you for network information. Make sure you document this information carefully: your vCenter is the main management interface to your virtual environment, and not being able to log in to vCenter will cause you a lot of grief. The network (port group) option is pre-selected; you can modify it later. As we've decided to keep it simple, pick IPv4 in the next box and "Static". Below that you can enter the desired IP address as well as the desired host name. Be aware that the installation procedure will generate SSL certificates containing this information; if you want to change it afterwards, you will also need to regenerate the SSL certificates. The last option is time sync. I always opt for NTP, or Network Time Protocol, here and use "pool.ntp.org" as the address. This is an easy way to make sure your vCenter always has the right time, which is important when you need to troubleshoot problems: if hosts are not in sync, it is difficult to work out how one event influenced another when you cannot be sure when they occurred. The last screen is a summary of all the information we've entered. Check it carefully and make sure everything is correct. Once you press Finish, you will see the HTML page again with a status bar that slowly moves forward. Take your time and have a coffee. Once the installation has finished successfully, you can log on to vCenter using the administrator credentials you entered earlier. Make sure you enter the complete domain name behind the administrator account, like "administrator@yourdomain.lan", as the username. Enter the password and you will be presented with a fresh, empty vCenter.
Configuring vCenter
So, the first thing we need to do after logging into vCenter is add the hosts we previously installed ESX on. To do that, we first define a 'virtual' datacenter. In this datacenter you can configure your cluster of hosts. This cluster is important, as it is the boundary within which vMotion can distribute VMs over hosts. vMotion is the technology VMware has developed to move a virtual machine from one host to another without shutting it down or pausing it. Within a cluster, VMs can be moved freely between hosts based on resource usage; the technology that automates this is called DRS, or Distributed Resource Scheduler. DRS also has an option (Distributed Power Management) to put hosts into standby mode when the load is so low that only a few hosts are needed to run the VMs, like at night or during the weekend. This way money can be saved because power is saved.
Another technology is HA, or High Availability. With HA, virtual machines can be protected against a host suddenly failing. HA will make sure the VMs it protects are restarted on another host as soon as it detects the failure.
Last but not least is EVC, or Enhanced vMotion Compatibility. For vMotion to work, every host in the cluster has to present the same CPU feature set. This would mean that in time, when new hosts with newer CPUs arrive, you would not be able to add them to your cluster. EVC fixes that: you pick a level of CPU technology that all the CPUs in your cluster understand, where the oldest CPU in the cluster is the limiter. After EVC is enabled for your cluster, all hosts in that cluster present identical CPU features, ensuring CPU compatibility for vMotion. This way you can move VMs between new and old CPUs. (One remark: this only works with CPUs from the same vendor; even with EVC enabled you cannot mix AMD and Intel.)
Once your cluster is configured, you can finally add your first host to it. Right-click on the cluster and select "Add Host". Enter the IP address and the logon credentials (root plus password) and click Next. Usually you have not entered licensing information yet, so you can skip that entry and stay on the evaluation license for now. Even though it is recommended to enable "Lockdown" mode for your hosts, we will leave it off for now. This is a security risk, as someone can still manage a single host directly with the VI Client next to vCenter, open VM consoles or change network settings, so consider enabling it eventually. Click next at the resource pool screen and you should be ready. Once you click finish, your host will be added to vCenter and you can start managing it. You can now add the other hosts through the same procedure, making sure you select the same cluster. When this task is finished, we can move on to configuring the hosts for vMotion and connecting the storage to all hosts alike.
Setting up Networks
vMotion works over the network. To make sure it's quick, we've decided to use two separate VMkernel ports for vMotion and let another one handle the rest of the traffic. To keep things simple, we picked the same network and IP range, but you can completely separate the traffic if you want. In a nutshell, it looks like the picture on the left: two VMkernel ports handle vMotion traffic only, while the VMkernel port called "Management Network" handles the rest. Of course there are different options, but we stick with this for now. To make this work, click Properties next to standard switch vSwitch0 and click Add. You then have the option to pick Virtual Machine or VMkernel.
Pick VMkernel and click Next. In this screen you can enter a Network Label. Make sure your label is clear and short. We've picked "vMotion1", as we will also be creating a vMotion2. Leave the VLAN ID at its default, but make sure you enable "Use this port group for vMotion".
Click Next. In the next screen you have to enter a unique IP address for this port group. Make sure you keep track of which IPs you are using, as every host will have 2 interfaces (and 2 unique IPs) just for vMotion and they all have to be in the same network to function. When you click next you will get the summary. Check that the information is correct and click finish. Now you have to go back in, so select the VMkernel port you just created and click Edit. Go to the NIC Teaming tab and make sure that only 1 NIC is active and all the others are moved to "Unused". Note carefully which NIC you are assigning to this VMkernel port and repeat the exercise for VMkernel port "vMotion2" with a different IP and NIC. Once that wizard is finished, go into the "Management Network" VMkernel port and move both NICs you just selected for vMotion to "Unused", making sure the other 2 NICs are set to Active. When you're done, your network configuration should more or less look like the first picture of this section.
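For reference, the same vMotion port groups can also be created from the command line on each host. The names, IP address and uplink below are example values that match the layout described above; adjust them per host.

# Create a port group for vMotion on vSwitch0 and add a VMkernel interface to it
esxcli network vswitch standard portgroup add --portgroup-name=vMotion1 --vswitch-name=vSwitch0
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=vMotion1
# Give the new interface its own IP address (example address)
esxcli network ip interface ipv4 set --interface-name=vmk1 --type=static --ipv4=192.168.1.111 --netmask=255.255.255.0
# Enable vMotion on the new VMkernel interface
vim-cmd hostsvc/vmotion/vnic_set vmk1
# Pin the port group to a single active uplink (vmnic1 assumed)
esxcli network vswitch standard portgroup policy failover set --portgroup-name=vMotion1 --active-uplinks=vmnic1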
I'm aware that this is not a best practice setup, but it's one that works very well in a small lab or SMB setup. In larger deployments with more (or converged) network adapters you would assign two NICs per host to every type of traffic, making sure they are in different VLANs and preferably on separate physical networks. In this setup we only have 4 NICs to work with, and being able to move VMs between hosts quickly is more important to us than waiting a second longer for a response. That being said, if you know your way around ESX, you can modify the configuration to better suit your needs.
Rolling out your first VM
Once you've reached this point, your configuration should be ready to receive some work. Make sure every host has your datastore(s) connected so they can all see a VM being placed there. One thing I always like to do is create a separate datastore for ISOs and templates; this way they are always easy to find. You can use the QNAP plugin again to create an ISO datastore and mount it to all hosts. Using vCenter you can then upload your Windows ISOs and use them when you want to create a VM.
When creating a new VM, you can simply select the CD-ROM pull-down menu, pick "Datastore ISO file", browse to the ISO datastore and pick the installation files you need. Make sure you tick "Connect" behind the CD-ROM entry, otherwise the ISO will not be mounted in your VM and nothing will happen when you boot it up. Once you're ready, right-click on the VM or pick the menu from the top and click power on. Your brand new VM should start booting from the ISO you selected and begin installing the OS you picked. And there you go, ready to run whatever you need to run or want to test.
Some Final Words
It was a lot of work sorting everything out for this series of articles. As mentioned earlier, I'm quite aware that we're not following best practices everywhere and that some goals can be reached in several different ways. Then again, this configuration is quite easy to set up compared to a full enterprise deployment. The hardware hardly requires manual intervention, the standard HP vSphere ISO works out of the box and the QNAP configuration is fully GUI supported. Also, from a financial point of view, this lab environment can be put together on a limited budget and still offer all the bells and whistles a full enterprise environment has. I'd like to thank QNAP for sponsoring the NAS used in this series.