HP StorageWorks IO Accelerator
In May 2010 HP introduced the new HP StorageWorks IO Accelerator modules for HP Blade and ProLiant servers.
So what is this IO module? It is a mezzanine card for HP c-Class blades or a PCIe I/O card for HP ProLiant servers.
It is available in three capacities: 80GB, 160GB and 320GB. But most interesting of all: it can deliver 100,000 IOPS.
Because it was not certified by VMware, this IO Accelerator could not be used in ESX implementations. Until now! Today HP released drivers for VMware vSphere 4.0 Update 1.
This opens up outstanding opportunities for virtualizing IO-intensive workloads like VDI. The downside is that the disk is local to the ESX host rather than shared storage. That is not a showstopper, but it does limit this solution to specific use cases.
But what if you combine it with another great HP product, the HP LeftHand Virtual SAN Appliance (VSA)?
Sure, it will introduce a slight performance penalty. HP/LeftHand suggests that the overhead is somewhere around 10%: there is overhead from running the VMware layer, and there is overhead from layering the storage as VMFS -> iSCSI -> VMFS. But a 10% penalty on 100,000 IOPS still leaves 90,000 IOPS, and at 20 IOPS per virtual desktop you can still support 4,500 desktops, provided you have enough storage space.
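The sizing arithmetic above can be sketched as a quick back-of-the-envelope calculation. The figures are the post's assumptions (vendor-claimed IOPS, HP/LeftHand's estimated overhead, a typical per-desktop IOPS budget), not measured values, so adjust them for your own environment:

```python
# Back-of-the-envelope VDI sizing for a single IO Accelerator behind a VSA.
# All inputs are assumptions from the post; substitute your own measurements.

raw_iops = 100_000        # vendor-claimed IOPS for the IO Accelerator
vsa_overhead = 0.10       # ~10% penalty suggested by HP/LeftHand for the VSA layer
iops_per_desktop = 20     # assumed steady-state IOPS per virtual desktop

usable_iops = raw_iops * (1 - vsa_overhead)
desktops = int(usable_iops // iops_per_desktop)

print(f"Usable IOPS after VSA overhead: {usable_iops:,.0f}")  # 90,000
print(f"Desktops supported: {desktops:,}")                    # 4,500
```

Note that this is an IOPS ceiling only; capacity (GB per desktop) or boot-storm peaks may become the limiting factor first.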
I think with this setup you will have a simple, high-performance VDI solution in a box, which can easily be expanded across multiple boxes.
I’m curious: does anyone have experience with the HP StorageWorks IO Accelerator module in an HP ProLiant or c-Class blade?
Been using them in blade-based build servers since their release. We were on the beta for VMware support as well, but have not moved beyond the testing phase.
HP just certified a 1.28 TB version for rack-mount servers. Take a DL58x, place 4 of these inside running the VSA or something like NexentaStor, and you have a killer storage solution.
Thanks for the feedback.
If I may ask. Why did you not move beyond the testing phase for VMware?
Hey Erik, we are waiting for a shared storage solution. In a server virtualization scenario, as opposed to VDI, I don’t think local flash has as strong a play. You could look to it for ESX swap files and consider higher overcommitment ratios because the penalty has been lowered, but then your vMotions will be heavier in terms of server and network usage.
Ran a quick test on a BL460c G7 with a 160GB IO Accelerator: http://i54.tinypic.com/2dmff36.png
(RR IOPS 1 = Round Robin, IOPS 1, EVA 6100)