First, the setup we used for testing. Our minilab consists of an HP ML310 server, equipped with an Intel Xeon 3065 CPU, 8 GB of RAM and a dual-port Broadcom Gigabit Ethernet NIC. Both ports are linked to an HP ProCurve 1800-24 Gigabit switch, and from that switch two more links run to the two ports on the TS869-Pro. With the settings on both sides, the VMs effectively see a 2 Gbit pipe to the storage.
On the virtual side of things, we tested with both a Windows 7 VM and a Windows 2008 R2 VM. Technically there should not be a difference between the two, as they use the same kernel. Both VMs have 2 GB of RAM. To make the testing interesting, we configured our VMware server to use NFS, but we also attached a 250 GB iSCSI volume inside each VM using Microsoft’s own iSCSI initiator. That way we can compare the performance of both connections.
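For readers who would rather script that attachment than click through the GUI, the Microsoft initiator ships with the iscsicli command-line tool. Here is a minimal sketch in Python; the portal IP and target IQN are made-up placeholders, not the values from our lab:

```python
import subprocess

# Hypothetical values -- replace with your NAS's IP address and the
# IQN shown in the QNAP's iSCSI target list.
PORTAL_IP = "192.168.1.50"
TARGET_IQN = "iqn.2004-04.com.qnap:ts-869:iscsi.vmtest"

def run(args):
    """Run a command, echo its output, and fail loudly on errors."""
    result = subprocess.run(args, capture_output=True, text=True)
    print(result.stdout)
    result.check_returncode()

# Register the NAS as a target portal with the Microsoft initiator.
run(["iscsicli", "QAddTargetPortal", PORTAL_IP])

# Log in to the target. The disk then appears in Disk Management,
# where it still needs to be brought online, initialized and formatted.
run(["iscsicli", "QLoginTarget", TARGET_IQN])
```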
On to the testing then. Both VMs run on our VMware lab server from a VMDK on an NFS volume. In each VM we ran HD-Tune three times against the NFS-backed root disk and three times against the iSCSI-connected volume. First, the NFS results from the Windows 7 machine. Here are the statistics that HD-Tune produced:
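HD-Tune itself is a GUI tool, but if you want a rough, scriptable approximation of its access-time test, something like the sketch below will do. The file path is a placeholder for a large file on the volume under test, and since the reads still go through the OS page cache, the numbers will come out somewhat flattered:

```python
import os
import random
import time

# Minimal random-read probe in the spirit of HD-Tune's access-time
# test. Point TEST_FILE at a large file on the volume you want to
# measure (the path below is a hypothetical example).
TEST_FILE = r"E:\testfile.bin"
BLOCK = 4096        # 4 KB reads
SAMPLES = 500

size = os.path.getsize(TEST_FILE)
latencies = []
with open(TEST_FILE, "rb", buffering=0) as f:
    for _ in range(SAMPLES):
        # Seek to a random offset and time a single small read.
        f.seek(random.randrange(0, size - BLOCK))
        start = time.perf_counter()
        f.read(BLOCK)
        latencies.append(time.perf_counter() - start)

avg_ms = 1000 * sum(latencies) / len(latencies)
print(f"average access time: {avg_ms:.1f} ms")
print(f"effective IOPS: {1000 / avg_ms:.0f}")
```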
As you can see, the average access time is 6.8 ms with a transfer rate of 62.7 MB/s while reading randomly from the volume. The IOPS are certainly not bad considering we are only using two SATA disks in this test. Some caching on the NAS is certainly contributing to values like that, but that is no problem in everyday use. When copying data blocks, speed goes up to about 100 MB/s, which we also saw while copying the ISO to the file share.
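As a sanity check on those numbers: with a single outstanding request, IOPS is simply the inverse of the access time, so 6.8 ms corresponds to roughly 147 operations per second. A quick calculation:

```python
# With one outstanding request, IOPS is the inverse of access time.
access_time_ms = 6.8
iops = 1000 / access_time_ms
print(f"{iops:.0f} IOPS at queue depth 1")   # ~147 IOPS

# Two plain 7200 rpm SATA spindles deliver on the order of
# 150-200 random IOPS combined, so results well above that
# point at cache hits on the NAS.
```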
Now, here are the same tests on iSCSI:
These tests show different values. The average throughput is lower, but so are the access times, and the throughput is more consistent. The dips we see on NFS could also be a switching problem, though.
Next, we performed the same tests on a Windows 2008 R2 Server VM. We didn’t expect to see a big difference here, as the Windows 7 kernel is technically identical to the Windows 2008 R2 kernel. Here are the results for the NFS volume:
On NFS the results are not spectacularly different: throughput is a bit higher and access times are a bit lower. Overall you could say the server edition is slightly more efficient than the workstation edition when it comes to storage. On to the iSCSI statistics:
These values show the exact opposite: here Windows Server seems a bit less efficient, with higher access times and lower throughput. This could be a Windows Server versus Windows workstation issue, or an inefficiency in the iSCSI initiator software. Overall, however, these values are still very nice, considering that two SATA spindles at roughly 80-100 random IOPS each leave us only about 160-200 real IOPS to spend.
Another point about our methodology is that we test with one VM at a time. That is not a real-life scenario, as you would never have just one virtual machine running in your lab or business. So we also checked the resources consumed on the NAS while running our tests: the CPU and RAM used by a single VM under load shows how much headroom is left for others. The results are below:
These values are taken from the internal monitoring program on the QNAP. Our take: with one VM pushing things to the limit, at least two thirds of the RAM is still free, along with more than enough CPU capacity to quadruple the load.
Concluding: if you max out the disks in the unit to provide IOPS (there are 8 slots available), you should be able to run 16 to 20 medium-load VMs from this NAS without any problems. If the load is lighter, you are looking at even more VMs. That will most certainly be enough for any SMB or lab to run its virtual platform on.
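For what it’s worth, here is the back-of-the-envelope version of that estimate. The per-disk and per-VM IOPS figures are common rules of thumb, not measurements from our lab, and a RAID write penalty would shave the result down further:

```python
# Rough VM capacity estimate for a fully populated TS869-Pro.
DISK_SLOTS = 8                # drive bays in the unit
IOPS_PER_SATA_DISK = 80       # typical 7200 rpm spindle, random I/O
IOPS_PER_MEDIUM_VM = 30       # assumed steady-state demand per VM

total_iops = DISK_SLOTS * IOPS_PER_SATA_DISK
vm_count = total_iops / IOPS_PER_MEDIUM_VM
print(f"raw IOPS budget: {total_iops}")     # 640
print(f"medium-load VMs: {vm_count:.0f}")   # ~21, in line with 16-20
```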