VMware vSphere 4.1 released
A few minutes ago VMware released the new version of VMware vSphere, version 4.1.
This new vSphere version contains 150 new features and improves scalability, memory management, DRS, and more.
Besides all the new features, the biggest news is that vSphere 4.1 is the last release to include ESX (with service console). From the next version on there will only be two editions: ESXi Embedded and ESXi Installable.
Below you will find a detailed list of features included with the vSphere 4.1 release:
- Scalable vMotion;
- Wide VM NUMA;
- Storage I/O can be shaped by I/O shares and limits through the new Storage I/O Control quality of service (QoS) feature;
- Network I/O can be partitioned through a new QoS engine that distinguishes between virtual machine, vMotion, Fault Tolerance (FT), and IP storage traffic;
- Memory compression compresses RAM pages instead of swapping them to disk, improving virtual machine performance;
- Distributed Resource Scheduler (DRS) can now enforce affinity rules that define a subset of hosts where a virtual machine may be placed;
- Virtual sockets can now have multiple virtual CPUs, each appearing as a single core to the guest operating system (see the sketch after this list);
- vCenter Server is now supported on 64-bit operating systems only;
- A team of physical network interface cards in a vNetwork Distributed Switch can now dynamically load-balance traffic;
- Health check status and an operational dashboard are available for HA configurations;
- vSphere Client is no longer part of the ESX and ESXi installation packages. At the end of the installation process administrators are redirected to download the client online;
- ESXi installation can be scripted. The script can start from a CD or over a PXE boot source, and can install the hypervisor on local or remote disks;
- ESX can boot from iSCSI targets (support for iBFT);
- NFS performance stats are included in esxtop and vCenter Server, as well as through the vSphere SDK;
- Virtual machine serial ports can now be redirected over the network;
- Support for up to 4 concurrent vMotion live migrations on 1GbE networks and up to 8 on 10GbE networks;
- Support for USB pass-through (virtual machines can use USB devices attached to the ESX/ESXi host);
- Support for administrator password change in Host Profiles;
- Support for FT in DRS clusters with Enhanced vMotion Compatibility (EVC);
- Support for iSCSI TCP/IP Offload Engine (TOE) network interface cards (both 10Gb and 1Gb);
- Support for 8Gb Fibre Channel HBAs;
- Support for IPsec on IPv6 network configurations;
- Support for multiple Data Recovery virtual appliances;
- Support for Microsoft Volume Shadow Service (VSS) in Windows Server 2008 and 2008 R2 guest operating systems for vStorage APIs for Data Protection (VADP);
- Update Manager (VUM) can now patch third-party modules for ESX (such as EMC PowerPath);
- Virtual-to-virtual (V2V) migration for offline Hyper-V virtual machines in vCenter Converter;
- ESX and ESXi direct support for Microsoft Active Directory;
- Support for Intel Xeon 7500 / 5600 / 3600 CPU series (this includes EVC support);
- Support for AMD Opteron 4000 / 6000 CPU series (this includes EVC support).
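About the multi-core virtual sockets item above (the sketch promised in that bullet): KB 1022289 in the list at the end of this article covers the cpuid.coresPerSocket key. Below is a minimal sketch, assuming the open-source pyVmomi Python bindings and a vm object already looked up; it is an illustration, not an official procedure.

```python
from pyVmomi import vim

# Sketch: reconfigure a powered-off 4-vCPU VM into two dual-core sockets
# by setting the cpuid.coresPerSocket extra-config key (see KB 1022289).
# `vm` is assumed to be a vim.VirtualMachine already retrieved through a
# pyVmomi session; error handling is omitted for brevity.
spec = vim.vm.ConfigSpec(
    numCPUs=4,
    extraConfig=[vim.option.OptionValue(key="cpuid.coresPerSocket",
                                        value="2")],
)
task = vm.ReconfigVM_Task(spec=spec)  # returns a vim.Task to wait on
```

The guest would then see two sockets with two cores each instead of four single-core sockets.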
All these new features result in a new list of configuration limits:
- 3,000 virtual machines per cluster (compared to 1,280 in vSphere 4.0);
- 1,000 hosts per vCenter Server (compared to 300);
- 15,000 registered VMs per vCenter Server (compared to 4,500);
- 10,000 concurrently powered-on VMs per vCenter Server (compared to 3,000);
- 120 concurrent vSphere Client connections per vCenter Server (compared to 30);
- 500 hosts per virtual Datacenter object (compared to 100);
- 5,000 virtual machines per virtual Datacenter object (compared to 2,500).
With features like Scalable vMotion, Wide VM NUMA, and Memory Compression, VMware has realized significant performance improvements.
Scalable vMotion
vSphere 4.1 supports up to 8 concurrent virtual machine live migrations, and VMware seems to have renamed the feature Scalable vMotion. The engine has been significantly reworked to reach a throughput of 8Gbps on a 10GbE link, three times the performance of version 4.0.
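As a quick sanity check on those units (plain arithmetic, nothing beyond the 8Gbps claim itself):

```python
# 8Gbps expressed in gigabytes per second and as a share of a 10GbE link.
vmotion_gbps = 8      # reported vMotion throughput, gigabits/s
link_gbps = 10        # 10GbE link capacity, gigabits/s
print(vmotion_gbps / 8, "GB/s on the wire")                       # -> 1.0 GB/s
print(f"{100 * vmotion_gbps / link_gbps:.0f}% link utilization")  # -> 80%
```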
Wide VM NUMA
The vSphere 4.1 NUMA scheduler has been reworked to improve performance when a virtual machine needs more cores than are available on a single NUMA node, provided the server has multiple NUMA nodes. Depending on workload and configuration, the performance improvement is up to 7%.
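To make the "wide" idea concrete, here is a toy calculation (hypothetical host geometry, not VMware code) of how many NUMA nodes a VM must span:

```python
import math

# Hypothetical host: 4 NUMA nodes with 6 cores each. A VM wider than one
# node has to be split across nodes, which is what the reworked scheduler
# now handles more efficiently.
CORES_PER_NODE = 6
NUMA_NODES = 4

def nodes_spanned(vcpus: int) -> int:
    """NUMA nodes a VM of this vCPU count must span on this host."""
    return min(NUMA_NODES, math.ceil(vcpus / CORES_PER_NODE))

for vcpus in (4, 8, 16):
    print(f"{vcpus:>2} vCPUs -> {nodes_spanned(vcpus)} node(s)")
```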
Transparent Memory Compression
vSphere 4.1 introduces a new memory over-commit technique called Transparent Memory Compression (TMC) that compresses, on the fly, virtual memory pages that would otherwise be swapped to disk. Each virtual machine has a compression cache where vSphere stores compressed pages of 2KB or less.
TMC is enabled by default on ESX/ESXi 4.1 hosts, but the administrator can limit the compression cache size or disable TMC completely.
This results in a performance gain of 15% when there is a fair amount of memory over-commitment, and of 25% in case of heavy over-commitment.
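As a sketch of what that tuning could look like through the vSphere API, here is a hedged example using the open-source pyVmomi Python bindings. The option keys Mem.MemZipEnable and Mem.MemZipMaxPct are assumptions based on the host Advanced Settings naming scheme and should be verified on your hosts before use.

```python
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Connect to vCenter (certificate handling and error checking omitted).
si = SmartConnect(host="vcenter.example.com", user="admin", pwd="secret")
try:
    # Grab the first host; a real script would search for a specific one.
    view = si.content.viewManager.CreateContainerView(
        si.content.rootFolder, [vim.HostSystem], True)
    host = view.view[0]
    view.Destroy()

    # Assumed advanced-setting keys for TMC; verify before use.
    host.configManager.advancedOption.UpdateOptions(changedValue=[
        vim.option.OptionValue(key="Mem.MemZipEnable", value=1),   # 0 disables TMC
        vim.option.OptionValue(key="Mem.MemZipMaxPct", value=10),  # cache, % of VM memory
    ])
finally:
    Disconnect(si)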
Storage I/O Control
vSphere 4.1 introduces the capability to define quality-of-service prioritization for the I/O activity of a single host or a cluster of hosts. The prioritization, which can be enabled or disabled per datastore, is enforced through shares and limits.
The ESX/ESXi host monitors the latency of communication with each datastore. As soon as that latency exceeds a defined threshold, the datastore is considered congested, and all VMs accessing it are prioritized according to their shares. The administrator can also cap the number of I/O operations per second (IOPS) that each virtual machine may issue.
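The proportional-share mechanism is easy to illustrate with toy numbers (made-up shares and queue depth, not VMware code):

```python
# Under congestion, each VM's slice of the datastore's device queue is
# proportional to its configured shares. Values below are hypothetical.
shares = {"db-vm": 2000, "web-vm": 1000, "test-vm": 500}
DEVICE_QUEUE_DEPTH = 64          # example host device queue slots

total = sum(shares.values())
for vm, s in shares.items():
    slots = DEVICE_QUEUE_DEPTH * s / total
    print(f"{vm}: {slots:.0f} of {DEVICE_QUEUE_DEPTH} queue slots "
          f"({100 * s / total:.0f}% of I/O)")
```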
VMware reports an improvement up to 36% in certain scenarios.
Additional performance enhancements
vSphere 4.1 introduces additional improvements in other areas like Storage vMotion, thanks to the support for 8Gb Fibre Channel HBAs. VMware reports a performance improvement (in terms of IOPS) of 50% over 4Gb FC HBAs and throughput up to five times higher.
Support for NFS storage has been improved too, with up to 15% lower CPU cost for reads and writes and up to 15% higher throughput.
iSCSI support has been improved too, with new support for iSCSI TCP Offload Engine (TOE) network interface cards. VMware reports improvements of up to 89% in CPU read cost and 83% in CPU write cost.
vSphere 4.1 adds new capabilities to its networking layer with the introduction of support for Large Receive Offload (LRO), which aggregates incoming network packets into buffers larger than the MTU size before handing them to the guest. This benefits only Linux guest operating systems that support LRO. LRO support translates into a 5-30% improvement in throughput and a 40-60% decrease in CPU cost, depending on workload.
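A quick, hedged way to see whether a Linux guest actually has LRO enabled (assumes the ethtool utility and an eth0 vNIC name; adjust for your guest):

```python
import subprocess

# Query offload settings for the guest NIC; "large-receive-offload: on"
# indicates LRO is active. Requires ethtool inside the Linux guest.
out = subprocess.run(["ethtool", "-k", "eth0"],
                     capture_output=True, text=True).stdout
for line in out.splitlines():
    if "large-receive-offload" in line:
        print(line.strip())
```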
vSphere 4.1 also introduces asynchronous transmission of network packets through a new TX worldlets scheduler. This translates into a 2x throughput improvement for traffic between VMs and up to a 10% throughput improvement for VM-to-host traffic.
Last but not least, vSphere 4.1 also introduces better performance for VDI when used in conjunction with VMware View. Creating new virtual desktops is now 60% faster, and they power on 3 to 4 times faster.
If you’re interested and eager to upgrade your existing vSphere 4.0 installation, read VMware KB article 1022104, which describes best practices for upgrading ESX(i) and vCenter to version 4.1.
And here’s a summary of KB articles used in this article:
KB Article: 1022842 – Changes to DRS in vSphere 4.1
KB Article: 1022290 – USB support for ESX/ESXi 4.1
KB Article: 1022263 – Deploying ESXi 4.1 using the Scripted Install feature
KB Article: 1021953 – I/O Statistics in vSphere 4.1
KB Article: 1022851 – Changes to vMotion in vSphere 4.1
KB Article: 1022104 – Upgrading to ESX 4.1 and vCenter Server 4.1 best practices
KB Article: 1023118 – Changes to VMware Support Options in vSphere 4.1
KB Article: 1021970 – Overview of Active Directory integration in ESX 4.1 and ESXi 4.1
KB Article: 1021769 – Configuring IPv6 with ESX and ESXi 4.1
KB Article: 1022844 – Changes to Fault Tolerance in vSphere 4.1
KB Article: 1023990 – VMware ESX and ESXi 4.1 Comparison
KB Article: 1022289 – Changing the number of virtual CPUs per virtual socket in ESX/ESXi 4.1
7 Comments
How do we move from 4.0 to 4.1 without wrecking the place? Any tips, tricks, do's and don'ts? :)
I'm upgrading our OTAP (dev/test/acceptance/production) environment today, so I'll get back on that later.
How about an upgrade if you're still on ESX 3.5? Do you first need to upgrade to 4.0 before going to 4.1?
Build numbers database updated: http://www.vmguru.nl/wordpress/build-numbers/
Here's a VMware KB article with ways to upgrade to 4.1 (or not)
http://kb.vmware.com/selfservice/microsites/sea…
I documented the upgrade. You can find it here: http://www.vmguru.nl/wordpress/2010/07/how-to-u…