vSphere and HP Virtual Connect Flex-10
On a regular basis we have info sessions with our most important vendors. Last week we had a session with HP about virtualization in their hardware products, targeted especially at Flex-10. Flex-10 is the way HP breaks a 2 x 10Gb Ethernet pipe into flexible, easy-to-change, smaller Ethernet ports.
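To give an idea of that carving, here is a minimal sketch in Python, assuming the commonly documented Flex-10 behaviour of splitting each 10Gb port into up to four FlexNICs whose speeds together may not exceed 10Gb; the FlexNIC names and speeds below are made up for the example, not taken from a real server profile.

```python
# Minimal sketch of carving one 10Gb Flex-10 port into FlexNICs.
# Assumption: up to four FlexNICs per physical port, combined speed <= 10Gb.
# The layout below is purely illustrative.

PORT_SPEED_GB = 10
MAX_FLEXNICS_PER_PORT = 4

def carve_port(flexnics):
    """Validate and print a proposed FlexNIC layout for a single 10Gb port."""
    if len(flexnics) > MAX_FLEXNICS_PER_PORT:
        raise ValueError("Too many FlexNICs for one physical port")
    total = sum(flexnics.values())
    if total > PORT_SPEED_GB:
        raise ValueError(f"Allocated {total}Gb exceeds the {PORT_SPEED_GB}Gb port")
    for name, speed in flexnics.items():
        print(f"{name}: {speed}Gb")
    print(f"Unallocated headroom: {PORT_SPEED_GB - total}Gb")

# Example layout: VM traffic, VMotion, service console, plus a small spare.
carve_port({
    "vm_traffic": 6.0,
    "vmotion": 2.0,
    "service_console": 1.0,
    "spare": 0.5,
})
```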
Why is this so important for us virtual friends? Of course it is a huge cost saver, not only in hardware but also in management of the environment, but the most important thing is that it opens up a lot of new virtual design opportunities.
One of the coolest things is that we can now make a design for up to 4 blade chassis, each with 16 physical server blades and, let's say, 320 virtual servers in total, where all of the traffic between those servers never leaves the blade chassis; it is all handled within the chassis. All of the vSphere traffic, like VMotion and the service console, can also be handled within the chassis at 10Gb speeds.
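To make the numbers in that design concrete, here is a small back-of-the-envelope calculation; the consolidation ratio is simply derived from the figures above, not from any HP sizing guide.

```python
# Back-of-the-envelope maths for the design described above:
# 4 blade chassis, 16 server blades each, roughly 320 virtual servers.

chassis = 4
blades_per_chassis = 16
virtual_servers = 320

total_blades = chassis * blades_per_chassis      # 64 blades
vms_per_blade = virtual_servers / total_blades   # about 5 VMs per blade

print(f"Total blades: {total_blades}")
print(f"Average VMs per blade: {vms_per_blade:.0f}")
```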
In the near future even vCenter may be able to take actions like dynamically limiting or expanding the NIC speed when it notices that a certain virtual server or desktop needs it.
In the new HP BL G7 line, which is coming in the first quarter of next year, there will be 2 standard NICs that are also capable of transporting Fibre Channel over the same link.
HP has made a nice book about the Virtual Connect Flex-10 technology.
You can get a copy of it at: http://h18000.www1.hp.com/products/blades/virtualconnect/
7 Comments
Flex-10 has a major drawback in the very low number of VLANs permitted per “multiple networks” trunk; if I recall correctly it is still 28 VLANs per trunk. You can run ESX trunk ports in tunnel mode, but those require 1-to-1 mapped (dedicated) uplinks to the network core, which is anything but FLEXible.
Sounds like a great idea, but the VLAN limit would be an issue for me.
Simon
To go with this technology, why don’t we consider FCoE or InfiniBand from Cisco and Xsigo? Just a thought, as they deliver I/O virtualization in a different manner.