
In today’s VMworld keynote, VMware announced the general availability of VMware vSphere 5.5, which introduces many new features and enhancements that further extend the core capabilities of the vSphere platform.

VMware vSphere 5.5 includes the following core vSphere ESXi Hypervisor enhancements:

  • Hot-pluggable SSD PCIe devices
    Solid-state disks (SSDs) are becoming more prevalent in the enterprise datacenter. As with SATA and SAS hard disks, users are now able to hot-add or hot-remove an SSD device while a vSphere host is running, and the underlying storage stack detects the operation.
  • Support for Reliable Memory Technology
    To provide greater resiliency and to protect against memory errors, vSphere ESXi Hypervisor can now take advantage of Reliable Memory Technology, a CPU hardware feature through which a region of memory is reported from the hardware to vSphere ESXi Hypervisor as being more “reliable.” This information is then used to optimize the placement of the VMkernel and other critical components, such as the initial thread, hostd and the watchdog process, and helps guard against memory errors.
  • Enhancements to CPU C-states
    In vSphere 5.1 and earlier, the balanced policy for host power management leveraged only the performance state (P-state), which kept the processor running at a lower frequency and voltage. In vSphere 5.5, the deep processor power state (C-state) also is used, providing additional power savings. Beyond reduced power consumption, deep C-states can also increase performance: turbo mode frequencies on Intel chipsets can be reached more quickly while other CPU cores in the physical package are in deep C-states. A sketch of selecting the balanced policy programmatically follows.
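    As an illustration, the active host power policy can also be selected through the vSphere API. Below is a minimal pyVmomi sketch, assuming `host` is a vim.HostSystem already retrieved from a connected service instance (connection and inventory lookup not shown); the policy whose short name is "dynamic" is the balanced policy.

      from pyVmomi import vim

      # Assumes `host` is a vim.HostSystem retrieved via an existing
      # pyVmomi connection (SmartConnect + inventory lookup not shown).
      power_system = host.configManager.powerSystem

      # Pick the balanced policy, whose short name is "dynamic"; in
      # vSphere 5.5 this policy can use deep C-states as well as P-states.
      for policy in power_system.capability.availablePolicy:
          if policy.shortName == "dynamic":
              power_system.ConfigurePowerPolicy(key=policy.key)
              break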

VMware vSphere 5.5 provides the following virtual machine–related enhancements:

  • Virtual machine compatibility with VMware ESXi 5.5
    vSphere 5.5 introduces a new virtual machine compatibility with several new features, such as LSI SAS support for the Oracle Solaris 11 OS, enablement for new CPU architectures, and a new advanced host controller interface (AHCI). The new virtual SATA controller supports both virtual disks and CD-ROM devices, can connect up to 30 devices per controller, and allows a total of four controllers per virtual machine. This enables a virtual machine to have as many as 120 disk devices, compared to the previous limit of 60. A sketch of adding such a controller via the API follows.
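    For illustration, a virtual SATA (AHCI) controller can be added with a standard device change. A minimal pyVmomi sketch, assuming `vm` is a vim.VirtualMachine already retrieved from a connected service instance and upgraded to virtual hardware version 10:

      from pyVmomi import vim

      # Assumes `vm` is a vim.VirtualMachine retrieved via an existing
      # pyVmomi connection, at ESXi 5.5 compatibility (hardware version 10).
      sata = vim.vm.device.VirtualAHCIController()
      sata.busNumber = 0      # up to four SATA controllers per VM
      sata.key = -1           # temporary negative key for a new device

      dev_spec = vim.vm.device.VirtualDeviceSpec()
      dev_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
      dev_spec.device = sata

      # Each SATA controller accepts up to 30 disk or CD-ROM devices.
      vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[dev_spec]))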
  • Expanded support for hardware-accelerated graphics vendors
    vSphere 5.1 already provided support for hardware-accelerated 3D graphics (vGPU) inside a virtual machine, but that support was limited to NVIDIA-based GPUs. With vSphere 5.5, vGPU support has been expanded to include both Intel- and AMD-based GPUs. Virtual machines with graphics-intensive workloads or applications that typically have required hardware-based GPUs can now take advantage of additional vGPU vendors, makes and models. There are three supported rendering modes for a virtual machine configured with a vGPU: automatic, hardware and software. Virtual machines can still use VMware vSphere vMotion, even across a heterogeneous mix of vGPU vendors, without any downtime or interruptions to the virtual machine.
    – Automatic mode: when a GPU is not available at the destination vSphere host, software rendering automatically is enabled.
    – Hardware mode: when a GPU does not exist at the destination vSphere host, a vSphere vMotion action is not attempted.
    For Windows 7 and 8, vGPU support can be enabled using both the vSphere Web Client and VMware Horizon View. For Fedora 17 or later, Ubuntu 12 or later and Red Hat Enterprise Linux (RHEL) 7, vGPU can be enabled only by using the vSphere Web Client. A sketch of setting the rendering mode via the API follows.
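    For illustration, the rendering mode maps to the 3D settings on the virtual machine's video card in the vSphere 5.5 API. A minimal pyVmomi sketch, assuming `vm` is a vim.VirtualMachine already retrieved from a connected service instance:

      from pyVmomi import vim

      # Assumes `vm` is a vim.VirtualMachine retrieved via an existing
      # pyVmomi connection. Find the VM's video card and request 3D
      # support with the "hardware" renderer ("automatic" and "software"
      # are the other accepted values).
      for device in vm.config.hardware.device:
          if isinstance(device, vim.vm.device.VirtualMachineVideoCard):
              device.enable3DSupport = True
              device.use3dRenderer = "hardware"

              dev_spec = vim.vm.device.VirtualDeviceSpec()
              dev_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.edit
              dev_spec.device = device

              vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[dev_spec]))
              break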
  • Graphic acceleration support for Linux guest operating systems
    With vSphere 5.5, graphic acceleration is now possible for Linux guest OSs. Leveraging a GPU on a vSphere host can help improve the performance and scalability of all graphics-related operations.


In addition, VMware vSphere 5.5 includes the following vCenter Server enhancements:

  • vCenter Single Sign-On Server security enhancements
    vCenter Single Sign-On server 5.5 can now be configured to connect to its Microsoft SQL Server database without requiring the customary user IDs and passwords, as found in previous versions. This enables customers to maintain a higher level of security when authenticating with a Microsoft SQL Server environment that also houses the vCenter Single Sign-On server database. The only requirement is that the virtual machine used for vCenter Single Sign-On server be joined to a Microsoft Active Directory domain.
  • vSphere Web Client platform support and UI improvements
    With vSphere 5.5, full client support for Mac OS X is now available in the vSphere Web Client, including the native remote console for a virtual machine. Fully supported browsers include both Firefox and Chrome. The vSphere Web Client also delivers an improved usability experience, with key new features that give the administrator a more native application feel: drag and drop, filters, and recent items.
  • vCenter Server Appliance configuration maximum increases
    With the release of vSphere 5.5, vCenter Server Appliance uses a reengineered, embedded vPostgres database that can now support as many as 500 vSphere hosts or 5,000 virtual machines. With the new scalability maximums and simplified vCenter Server deployment and management, vCenter Server Appliance offers an attractive alternative to the Windows version of vCenter Server.
  • vSphere App HA application monitoring
    In vSphere 5.5, VMware has simplified application monitoring for vSphere HA with the introduction of vSphere App HA. This new feature works in conjunction with vSphere HA host monitoring and virtual machine monitoring to further improve application uptime. vSphere App HA can be configured to restart an application service when an issue is detected. It is possible to protect several commonly used, off-the-shelf applications. vSphere HA can also reset the virtual machine if the application fails to restart.
  • vSphere DRS virtual machine–virtual machine affinity rule enhancements
    Administrators can configure vSphere DRS affinity rules, which help maintain the placement of virtual machines on hosts within a cluster. Various rules can be configured. One such rule, a virtual machine–virtual machine affinity rule, specifies whether selected virtual machines should be kept together on the same host or kept on separate hosts. A rule that keeps selected virtual machines on separate hosts is called a virtual machine–virtual machine anti-affinity rule and is typically used to manage the placement of virtual machines for availability purposes.
    In versions earlier than vSphere 5.5, vSphere HA did not detect virtual machine–virtual machine anti-affinity rules, so it might have violated one during a vSphere HA failover event. vSphere DRS, if fully enabled, evaluates the environment, detects such violations and attempts a vSphere vMotion migration of one of the virtual machines to a separate host to satisfy the rule. In a large majority of environments, this operation is acceptable and does not cause issues. However, some environments might have strict multi-tenancy or compliance restrictions that require consistent virtual machine separation; another use case is an application with high sensitivity to latency. To address the need for maintaining the placement of virtual machines on separate hosts, without a vSphere vMotion migration, after a host failure, vSphere HA in vSphere 5.5 has been enhanced to conform to virtual machine–virtual machine anti-affinity rules. Application availability is maintained by controlling the placement of virtual machines recovered by vSphere HA without migration. This capability is configured as an advanced option in vSphere 5.5. A sketch of creating such a rule via the API follows.
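    For illustration, a virtual machine–virtual machine anti-affinity rule can be created through the cluster reconfiguration API. A minimal pyVmomi sketch, assuming `cluster` is a vim.ClusterComputeResource and `vm1`/`vm2` are vim.VirtualMachine objects already retrieved; the rule name is a placeholder, and the HA advanced option named in the comment is the one commonly documented for this vSphere 5.5 behavior:

      from pyVmomi import vim

      # Assumes `cluster` is a vim.ClusterComputeResource and vm1/vm2 are
      # vim.VirtualMachine objects retrieved via an existing connection.
      rule = vim.cluster.AntiAffinityRuleSpec(
          name="keep-app-nodes-apart",   # hypothetical rule name
          enabled=True,
          vm=[vm1, vm2],
      )
      rule_spec = vim.cluster.RuleSpec(info=rule, operation="add")
      config = vim.cluster.ConfigSpecEx(rulesSpec=[rule_spec])

      # modify=True merges the change into the existing cluster config.
      cluster.ReconfigureComputeResource_Task(spec=config, modify=True)

      # The vSphere 5.5 HA advanced option that makes failover respect
      # these rules is das.respectVmVmAntiAffinityRules (set to "true").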


VMware vSphere 5.5 also includes the following storage-related enhancements:

  • Support for 62TB VMDK
    VMware is increasing the maximum size of a virtual machine disk file (VMDK) in vSphere 5.5. The previous limit was 2TB minus 512 bytes; the new limit is 62TB. The maximum size of a virtual Raw Device Mapping (RDM) is also increasing, from 2TB minus 512 bytes to 62TB. Virtual machine snapshots also support this new size for delta disks that are created when a snapshot is taken of the virtual machine. A sketch of adding a large virtual disk via the API follows.
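    For illustration, a new virtual disk at the 62TB maximum can be added with a standard device change. A minimal pyVmomi sketch, assuming `vm` is a vim.VirtualMachine on a datastore with sufficient capacity; the controller key and unit number are placeholders for an existing SCSI controller with a free slot:

      from pyVmomi import vim

      # Assumes `vm` is a vim.VirtualMachine retrieved via an existing
      # pyVmomi connection. Controller lookup is not shown; key 1000 is
      # the typical first SCSI controller (placeholder).
      disk = vim.vm.device.VirtualDisk()
      disk.capacityInKB = 62 * 1024 * 1024 * 1024   # 62TB expressed in KB
      disk.controllerKey = 1000
      disk.unitNumber = 1      # placeholder free slot on the controller
      disk.key = -1            # temporary negative key for a new device
      disk.backing = vim.vm.device.VirtualDisk.FlatVer2BackingInfo(
          diskMode="persistent",
          thinProvisioned=True,
          fileName="",   # empty name lets vSphere generate the VMDK path
      )

      dev_spec = vim.vm.device.VirtualDeviceSpec()
      dev_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
      dev_spec.fileOperation = vim.vm.device.VirtualDeviceSpec.FileOperation.create
      dev_spec.device = disk

      vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[dev_spec]))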
  • Microsoft Cluster Services updates
    With vSphere 5.5, VMware is introducing a number of additional features to continue supporting customers that implement MSCS in their vSphere environments. In vSphere 5.5, VMware supports the following features related to MSCS:
    – Microsoft Windows 2012
    – Round-robin path policy for shared storage
    – iSCSI protocol for shared storage
    – Fibre Channel over Ethernet (FCoE) protocol for shared storage
    Historically, only Fibre Channel (FC) based shared storage was supported in MSCS environments. With vSphere 5.5, this restriction has been relaxed to include support for FCoE and iSCSI. With regard to the introduction of round-robin support, a number of changes were made concerning the SCSI locking mechanism used by MSCS when a failover of services occurs. To facilitate the new path policy, changes have been implemented that make it irrelevant which path is used to place the SCSI reservation; any path can free the reservation. A sketch of setting the round-robin path policy via the API follows.
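    For illustration, the round-robin path selection policy can be set per device through the host storage API. A minimal pyVmomi sketch, assuming `host` is a vim.HostSystem and `lun_id` holds the identifier of the shared LUN (both obtained elsewhere):

      from pyVmomi import vim

      # Assumes `host` is a vim.HostSystem retrieved via an existing
      # pyVmomi connection and `lun_id` is the target LUN's id string (as
      # reported in storageDeviceInfo.multipathInfo.lun[...].id).
      storage_system = host.configManager.storageSystem
      policy = vim.host.MultipathInfo.LogicalUnitPolicy(policy="VMW_PSP_RR")
      storage_system.SetMultipathLunPolicy(lunId=lun_id, policy=policy)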
  • 16Gb end-to-end support
    In vSphere 5.0, VMware introduced support for 16Gb FC HBAs; however, these HBAs were throttled down to work at 8Gb. In vSphere 5.1, VMware introduced support to run these HBAs at 16Gb, but there was no support for full, end-to-end 16Gb connectivity from host to array; to get full bandwidth, a number of 8Gb connections had to be created from the switch to the storage array. In vSphere 5.5, VMware introduces 16Gb end-to-end FC support. Both the HBAs and array controllers can run at 16Gb, as long as the FC switch between the initiator and target supports it.
  • Permanent Device Loss AutoRemove
    Permanent device loss (PDL) is a situation that can occur when a disk device either fails or is removed from the vSphere host in an uncontrolled fashion. PDL detects whether a disk device has been permanently removed, that is, whether the device will not return, based on SCSI sense codes. When the device enters this PDL state, the vSphere host can take action to prevent directing any further, unnecessary I/O to it. With vSphere 5.5, a new feature called PDL AutoRemove is introduced. This feature automatically removes a device from a host when it enters a PDL state. PDL AutoRemove occurs only if there are no open handles left on the device; the auto-remove takes place when the last handle on the device closes. If the device recovers, or if it is re-added after having been inadvertently removed, it will be treated as a new device. A sketch of toggling this behavior via a host advanced setting follows.
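    For illustration, PDL AutoRemove can be toggled through a host advanced option. A minimal pyVmomi sketch, assuming `host` is a vim.HostSystem; Disk.AutoremoveOnPDL is the advanced setting name commonly documented for vSphere 5.5 (1 enables, 0 disables):

      from pyVmomi import vim

      # Assumes `host` is a vim.HostSystem retrieved via an existing
      # pyVmomi connection. Disk.AutoremoveOnPDL = 1 (the default) enables
      # PDL AutoRemove; 0 disables it. Some pyVmomi versions may require
      # the value to be passed as a long.
      option_manager = host.configManager.advancedOption
      option_manager.UpdateOptions(changedValue=[
          vim.option.OptionValue(key="Disk.AutoremoveOnPDL", value=1)
      ])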
  • vSphere Replication interoperability
    In vSphere 5.0, there were interoperability concerns with VMware vSphere Replication and VMware vSphere Storage vMotion, as well as with VMware vSphere Storage DRS. There were considerations to be made at both the primary site and the replica site. At the primary site, because of how vSphere Replication works, there are two separate cases of support for vSphere Storage vMotion and vSphere Storage DRS to be considered:
    – Moving a subset of the virtual machine’s disks
    – Moving the virtual machine’s home directory
    The first case, moving a subset of the virtual machine’s disks with vSphere Storage vMotion or vSphere Storage DRS, works fine: from the vSphere Replication perspective, the vSphere Storage vMotion migration is a “fast suspend/resume” operation, which vSphere Replication handles well. The second case, a vSphere Storage vMotion migration of a virtual machine’s home directory, creates the issue with primary site migrations. In this case, the vSphere Replication persistent state files (.psf) are deleted rather than migrated. vSphere Replication detects this as a power-off operation, followed by a power-on of the virtual machine without the .psf files. This triggers a vSphere Replication full sync, wherein the disk contents are read and checksummed on each side, a fairly expensive and time-consuming task.
    vSphere 5.5 addresses this scenario. At the primary site, migrations now move the persistent state files, which contain pointers to the changed blocks, along with the VMDKs in the virtual machine’s home directory, thereby removing the need for a full synchronization. This means that replicated virtual machines can now be moved between datastores, by vSphere Storage vMotion or vSphere Storage DRS, without incurring a penalty on the replication.
    At the replica site, the interaction is less complicated because vSphere Storage vMotion is not supported for the replicated disks. vSphere Storage DRS cannot detect the replica disks: they are simply disks; there is no virtual machine. While the .vmx file describing the virtual machine is there, the replicated disks are not actually attached until a test or failover occurs.


VMware vSphere 5.5 also introduces the following networking-related enhancements:

  • Improved LACP capabilities
    LACP support was introduced in vSphere 5.1. LACP is a standards-based method to control the bundling of several physical network links to form a logical channel for increased bandwidth and redundancy. It dynamically negotiates link aggregation parameters, such as hashing algorithms and the number of uplinks, across the vSphere Distributed Switch and physical access layer switches. In case of link failures or cabling mistakes, LACP automatically renegotiates parameters across the two switches, which reduces the manual intervention required to debug cabling issues. The following key enhancements are available on vSphere Distributed Switch with vSphere 5.5 (see the sketch after this list):
    – Comprehensive load-balancing algorithm support: 22 new hashing algorithm options are available.
    – Support for multiple link aggregation groups (LAGs): 64 LAGs per host and 64 LAGs per VMware vSphere VDS.
    – New workflows to configure LACP: because LACP configuration is applied per host, configuring it can be very time consuming for large deployments. In this release, new workflows to configure LACP across a large number of hosts are made available through templates.
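    For illustration, a LAG can be created on a vSphere Distributed Switch through the enhanced LACP API introduced with version 5.5.0 of the VDS. A minimal pyVmomi sketch, assuming `dvs` is a vim.dvs.VmwareDistributedVirtualSwitch already retrieved; the LAG name and parameter values are placeholders:

      from pyVmomi import vim

      # Assumes `dvs` is a vim.dvs.VmwareDistributedVirtualSwitch
      # retrieved via an existing pyVmomi connection. Creates one
      # two-uplink LAG in active LACP mode with an IP/port/VLAN hash
      # (all values are placeholders).
      lag = vim.dvs.VmwareDistributedVirtualSwitch.LacpGroupConfig(
          name="lag1",
          mode="active",                               # or "passive"
          uplinkNum=2,
          loadbalanceAlgorithm="srcDestIpTcpUdpPortVlan",
      )
      lag_spec = vim.dvs.VmwareDistributedVirtualSwitch.LacpGroupSpec(
          lacpGroupConfig=lag,
          operation="add",
      )
      dvs.UpdateDVSLacpGroupConfig_Task(lacpGroupSpec=[lag_spec])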
  • Traffic filtering
    Traffic filtering is the ability to filter packets based on the various parameters of the packet header. This capability is also referred to as access control lists (ACLs), and it is used to provide port-level security. The VDS supports packet classification, based on the following three different types of qualifiers:
    – MAC SA and DA qualifiers
    – System traffic qualifiers: vSphere vMotion, vSphere management, vSphere FT, and so on
    – IP qualifiers: protocol type, IP SA, IP DA, and port number
    After a qualifier has been selected and packets have been classified, users have the option to either filter or tag those packets. When the classified packets have been selected for filtering, users can filter ingress traffic, egress traffic, or traffic in both directions. A sketch of a simple drop rule follows.
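    For illustration, a drop rule with an IP qualifier can be pushed into a distributed port group's traffic ruleset. A minimal pyVmomi sketch, assuming `pg` is a vim.dvs.DistributedVirtualPortgroup on a 5.5 VDS; the address is a placeholder, and the type paths follow the pyVmomi 5.5 bindings (verify them against your pyVmomi version):

      from pyVmomi import vim

      # Assumes `pg` is a vim.dvs.DistributedVirtualPortgroup retrieved
      # via an existing pyVmomi connection. Drops ingress traffic destined
      # for a placeholder address.
      rule = vim.dvs.TrafficRule(
          description="drop traffic to 10.0.0.5 (placeholder)",
          direction="incomingPackets",   # or "outgoingPackets" / "both"
          action=vim.dvs.TrafficRule.DropAction(),
          qualifier=[vim.dvs.TrafficRule.IpQualifier(
              destinationAddress=vim.SingleIp(address="10.0.0.5"))],
      )

      filter_config = vim.dvs.DistributedVirtualPort.TrafficFilterConfig(
          trafficRuleset=vim.dvs.TrafficRuleset(enabled=True, rules=[rule]))

      port_config = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy(
          filterPolicy=vim.dvs.DistributedVirtualPort.FilterPolicy(
              filterConfig=[filter_config]))

      spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec(
          configVersion=pg.config.configVersion,
          defaultPortConfig=port_config)
      pg.ReconfigureDVPortgroup_Task(spec=spec)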
  • Quality of Service tagging
    Two types of Quality of Service (QoS) marking/tagging common in networking are 802.1p Class of Service (CoS), applied on Ethernet/layer 2 packets, and Differentiated Service Code Point (DSCP), applied on IP packets. The physical network devices use these tags to identify important traffic types and provide QoS based on the value of the tag. VMware has supported 802.1p tagging on VDS since vSphere 5.1. The 802.1p tag is inserted in the Ethernet header before the packet is sent out on the physical network. In vSphere 5.5, the DSCP marking support enables users to insert tags in the IP header. IP header–level tagging helps in layer 3 environments, where physical routers function better with an IP header tag than with an Ethernet header tag.
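    For illustration, DSCP marking uses the same traffic rule machinery as filtering, with a tag action in place of a drop action. A minimal pyVmomi sketch of the rule object only; it would be applied through a port group filter policy as in the previous sketch, and the DSCP value is a placeholder:

      from pyVmomi import vim

      # Tags IP packets in both directions with DSCP 46 (expedited
      # forwarding, used here as a placeholder value) instead of
      # filtering them.
      dscp_rule = vim.dvs.TrafficRule(
          description="mark all IP traffic with DSCP 46 (placeholder)",
          direction="both",
          action=vim.dvs.TrafficRule.UpdateTagAction(dscpTag=46),
          qualifier=[vim.dvs.TrafficRule.IpQualifier()],
      )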
  • SR-IOV Enhancements
    Single-root I/O virtualization (SR-IOV) is a standard that enables one PCI Express (PCIe) adapter to be presented as multiple, separate logical devices to virtual machines. In this release, the workflow of configuring the SR-IOV–enabled physical NICs is simplified. Also, a new capability is introduced that enables users to communicate the port group properties defined on the vSphere standard switch (VSS) or VDS to the virtual functions. The new control path through VSS and VDS communicates the port group–specific properties to the virtual functions. For example, if promiscuous mode is enabled in a port group, that configuration is then passed to virtual functions, and the virtual machines connected to the port group will receive traffic from other virtual machines.
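    For illustration, virtual functions can be enabled on an SR-IOV-capable physical NIC through the host PCI passthrough system. A minimal pyVmomi sketch, assuming `host` is a vim.HostSystem and `pci_id` is the adapter's PCI device ID (both obtained elsewhere); a host reboot is typically required for the change to take effect:

      from pyVmomi import vim

      # Assumes `host` is a vim.HostSystem retrieved via an existing
      # pyVmomi connection and `pci_id` is the SR-IOV-capable NIC's PCI
      # id string (e.g. from host.hardware.pciDevice).
      sriov = vim.host.SriovConfig(
          id=pci_id,
          passthruEnabled=False,   # the PF itself is not passed through
          sriovEnabled=True,
          numVirtualFunction=8,    # placeholder VF count
      )
      host.configManager.pciPassthruSystem.UpdatePassthruConfig(config=[sriov])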
  • 40Gb NIC support
    Support for 40Gb NICs on the vSphere platform enables users to take advantage of higher-bandwidth pipes to their servers. In this release, the functionality is delivered via Mellanox ConnectX-3 VPI adapters configured in Ethernet mode.