vSphere 6 vMotion Enhancements

VMware vSphere vMotion capabilities have been enhanced in this release, enabling users to perform live migration of virtual machines across virtual switches, across vCenter Server systems, and over long distances with round-trip latencies of up to 100 ms.

These new vSphere vMotion enhancements enable greater flexibility when designing vSphere architectures that were previously restricted to a single vCenter Server system due to scalability limits and multisite or metro design constraints. Because vCenter Server scale limits no longer are a boundary for pools of compute resources, much larger vSphere environments are now possible.

vSphere administrators now can migrate across vCenter Server systems, enabling migration from a Windows version of vCenter Server to vCenter Server Appliance or vice versa, depending on specific requirements. Previously, this was a difficult task and caused a disruption to virtual machine management. This can now be accomplished seamlessly without losing historical data about the virtual machine.

Cross vSwitch vMotion

Cross vSwitch vMotion allows you to seamlessly migrate a virtual machine across different virtual switches while performing a vMotion. This means that you are no longer restricted by the networks you created on the vSwitches in order to vMotion a virtual machine.

With this new functionality you can now migrate a virtual machine to a new cluster with a separate vDS without interruption, for instance during a datacenter migration. This further increases agility, reduces the time it takes to replace or refresh hardware, and increases availability during planned maintenance activities.

For this to work, the source and destination portgroups need to share the same L2 network, because the IP address within the VM will not change.

vMotion will work across a mix of switches (standard and distributed). Previously, you could only vMotion from vSS to vSS or within a single vDS. This limitation has been removed. The following Cross vSwitch vMotion migrations are possible:

  • vSS to vSS.
  • vSS to vDS.
  • vDS to vDS.
  • vDS to vSS is not allowed.
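
Cross vSwitch vMotion can also be driven through the vSphere API by including a network device change in the relocate specification. The following pyVmomi sketch is a minimal illustration of that approach; the vm object and the destination portgroup name "VM Network 2" are placeholders rather than values from this text:

    from pyVmomi import vim

    def cross_vswitch_relocate(vm, dest_portgroup_name):
        # Locate the VM's first virtual NIC.
        nic = next(dev for dev in vm.config.hardware.device
                   if isinstance(dev, vim.vm.device.VirtualEthernetCard))

        # Re-point the NIC at a portgroup on the destination standard switch.
        nic.backing = vim.vm.device.VirtualEthernetCard.NetworkBackingInfo(
            deviceName=dest_portgroup_name)
        nic_change = vim.vm.device.VirtualDeviceSpec(
            operation=vim.vm.device.VirtualDeviceSpec.Operation.edit,
            device=nic)

        # Relocate spec carrying only the network change; host and datastore stay put.
        spec = vim.vm.RelocateSpec(deviceChange=[nic_change])
        return vm.RelocateVM_Task(spec=spec)

    # Usage (assuming a connected session and an already-resolved vm object):
    # task = cross_vswitch_relocate(vm, "VM Network 2")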

Cross vCenter vMotion

But Cross vSwitch vMotion is not the only vMotion enhancement. vSphere 6 also introduces support for Cross vCenter vMotion. vMotion can now perform the following changes simultaneously:

  • Change compute (vMotion) – Performs the migration of virtual machines across compute hosts.
  • Change storage (Storage vMotion) – Performs the migration of the virtual machine disks across datastores.
  • Change network (Cross vSwitch vMotion) – Performs the migration of a VM across different virtual switches.
  • Change vCenter (Cross vCenter vMotion) – Moves the VM to a different vCenter Server instance.

All of these types of vMotion are seamless to the guest OS.

Like Cross vSwitch vMotion, Cross vCenter vMotion requires L2 network connectivity since the IP of the VM will not be changed. This functionality builds upon Enhanced vMotion, and shared storage is not required. It targets local (single-site), metro (multiple well-connected sites), and cross-continental deployments.

With vSphere 6 vMotion you can now:

  • Migrate from a VCSA to a Windows version of vCenter Server and vice versa.
  • Replace or retire a vCenter Server without disruption.
  • Pool resources across vCenter Servers in environments where additional instances were deployed due to vCenter scalability limits.
  • Migrate VMs across local, metro, and continental distances.
  • Migrate VMs in public/private cloud environments with several vCenter Servers.

There are several requirements for Cross vCenter vMotion to work:

  • Only vCenter 6.0 and greater is supported. All instances of vCenter prior to version 6.0 will need to be upgraded before this feature will work. For example, migrating between a vCenter 5.5 instance and a vCenter 6.0 instance will not work.
  • Both the source and the destination vCenter servers will need to be joined to the same SSO domain if you want to perform the vMotion using the vSphere Web Client. If the vCenter servers are joined to different SSO domains, it is still possible to perform a Cross vCenter vMotion, but you must use the API (see the sketch after this list).
  • You will need at least 250 Mbps of available network bandwidth per vMotion operation.
  • Lastly, although not technically required for the vMotion operation itself to complete, L2 connectivity between the source and destination portgroups is needed for the VM to stay reachable. When a Cross vCenter vMotion is performed, a Cross vSwitch vMotion is done as well. The virtual machine's portgroups need to share an L2 network because the IP within the guest OS will not be updated.
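
When the source and destination vCenter Server instances are in different SSO domains, the migration is requested through the API by pointing the relocate specification at the destination vCenter with a ServiceLocator. The pyVmomi sketch below shows the general shape; the destination host, resource pool, datastore, URL, thumbprint, and credentials are illustrative placeholders, and in practice the destination objects are retrieved through a session against the destination vCenter:

    from pyVmomi import vim

    def cross_vcenter_relocate(vm, dest_host, dest_pool, dest_datastore,
                               dest_vc_url, dest_vc_uuid, dest_vc_thumbprint,
                               username, password):
        # Describe the destination vCenter Server (the "change vCenter" part).
        service = vim.ServiceLocator(
            instanceUuid=dest_vc_uuid,          # destination vCenter instance UUID
            url=dest_vc_url,                    # e.g. "https://vc02.example.com"
            sslThumbprint=dest_vc_thumbprint,
            credential=vim.ServiceLocatorNamePassword(username=username,
                                                      password=password))

        # Change compute, storage, and vCenter in a single operation.
        spec = vim.vm.RelocateSpec(
            host=dest_host,                     # destination ESXi host
            pool=dest_pool,                     # destination resource pool
            datastore=dest_datastore,           # destination datastore
            service=service)
        return vm.RelocateVM_Task(spec=spec)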

These are some of the features of Cross vCenter vMotion:

  • The VM UUID, or VM instance ID, always remains the same across all vCenter servers in the environment. This is not the same as the Managed ID (MoRef) or the BIOS UUID. The UUID is stored in the .vmx file under the entry “vc.uuid” (see the sketch after this list).
  • The historical data and settings for the VM are retained when it’s migrated using Cross vCenter vMotion. This includes Events, Alarms and Task history. Performance data is only kept on the source vCenter server.
  • Additionally, the HA/DRS settings that persist after the vMotion are:
    • Affinity/Anti Affinity Rules
    • Automation level
    • Start-up priority
    • Host isolation response
  • These are the resource settings that are migrated:
    • Shares
    • Reservations
    • Limits
  • MAC addresses are generated in such a way that they are guaranteed to be unique within a vCenter server. When a VM is migrated off a vCenter server, it keeps the same MAC address at the destination vCenter. Additionally, that MAC address is added to a local blacklist on the source vCenter server to guarantee that server does not reuse the address if it happens to generate the same MAC for a new VM.
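
The instance UUID that stays constant across vCenter Servers can be read through the API as well as from the .vmx file (vc.uuid). A small pyVmomi sketch, assuming an already-resolved vm object, shows it next to the identifiers that do not travel with the VM:

    def print_vm_identifiers(vm):
        # Instance UUID ("vc.uuid" in the .vmx) - preserved across Cross vCenter vMotion.
        print("Instance UUID:", vm.config.instanceUuid)
        # BIOS UUID and MoRef are different identifiers and are not the ones that persist.
        print("BIOS UUID:    ", vm.config.uuid)
        print("MoRef:        ", vm._moId)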

Long Distance vMotion

Long Distance vMotion is an extension of Cross vCenter vMotion, targeted at environments where vCenter servers are spread across large geographic distances and where the latency across sites is 100 ms or less.

Although the migration spans a long distance, all the standard vMotion guarantees are honored. Long Distance vMotion does not require VVols; a VMFS or NFS datastore will work as well.

With Long Distance vMotion you can now:

  • Migrate VMs across physical servers spread over a large geographic distance without interruption to applications.
  • Perform a permanent migration of VMs to another datacenter.
  • Migrate VMs to another site to avoid imminent disaster.
  • Distribute VMs across sites to balance system load.
  • Support follow-the-sun operations.

There are several requirements for Long Distance vMotion to work:

  • The requirements for Long Distance vMotion are the same as for Cross vCenter vMotion, with the additions that the maximum latency between the source and destination sites must be 100 ms or less and that 250 Mbps of available bandwidth is required (see the latency check sketch after this list).
  • To stress the point: the VM network will need to be a stretched L2 because the IP of the guest OS will not change. If the destination portgroup is not in the same L2 domain as the source, you will lose network connectivity to the guest OS. This means that in some topologies, such as metro or cross-continental, you will need a stretched L2 technology in place. The stretched L2 technologies are not specified; any technology that can present the L2 network to the vSphere hosts will work, as ESXi is unaware of how the physical network is configured. Some examples of technologies that would work are VXLAN, NSX L2 Gateway Services, or GIF/GRE tunnels.
  • There is no defined maximum distance that will be supported as long as the network meets these requirements. Your mileage may vary, but you are eventually constrained by the laws of physics.
  • The vMotion network can now be configured to operate over an L3 connection.
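
Since the 100 ms round trip is the hard limit, it is worth measuring site-to-site latency before attempting a Long Distance vMotion. The helper below is a generic Python illustration (not part of any VMware SDK); it times TCP connections to a reachable host at the remote site, and the hostname and port are placeholders:

    import socket
    import time

    def average_rtt_ms(host, port=443, samples=5):
        # Rough RTT estimate: time a TCP handshake to the remote host.
        total = 0.0
        for _ in range(samples):
            start = time.monotonic()
            with socket.create_connection((host, port), timeout=2):
                pass
            total += (time.monotonic() - start) * 1000
        return total / samples

    if __name__ == "__main__":
        rtt = average_rtt_ms("esxi-remote.example.com")  # placeholder hostname
        verdict = "within" if rtt <= 100 else "exceeds"
        print(f"Average RTT: {rtt:.1f} ms ({verdict} the 100 ms limit)")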

Replication-Assisted vMotion

Replication-Assisted vMotion enables customers with active-active replication set up between two sites to perform a more efficient vMotion, resulting in large time and resource savings. With Replication-Assisted vMotion, the migration can be as much as 95 percent more efficient, depending on the size of the data.

Increased vMotion Network Flexibility

In addition to the multiple TCP/IP stacks described below, NFC (Network File Copy) traffic can be isolated from other traffic. This allows operations such as cloning from a template to be sent over a dedicated network rather than sharing the management network as in previous versions. This allows more fine-tuned control of network resources.

ESXi now has multiple TCP/IP stacks. This allows vSphere services to operate with their own:

  • Memory Heap.
  • ARP Tables.
  • Routing Table.
  • Default Gateway.
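
These per-service stacks can be inspected through the host networking configuration. A brief pyVmomi sketch, assuming a connected vim.HostSystem object named host, lists each netstack instance with its own default gateway:

    def print_netstacks(host):
        # Each TCP/IP stack instance carries its own routing configuration.
        for stack in host.config.network.netStackInstance:
            gateway = stack.ipRouteConfig.defaultGateway if stack.ipRouteConfig else None
            print(f"{stack.key:25} default gateway: {gateway}")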

vMotion Network Flex

Previously, ESX had only one networking stack. Having multiple stacks improves scalability and offers flexibility by isolating vSphere services to their own stack. This also allows vMotion to work over a dedicated Layer 3 network.
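
Dedicating vMotion to its own stack comes down to creating a VMkernel adapter bound to the "vmotion" netstack. The following pyVmomi sketch outlines the call, assuming an existing standard switch portgroup named "vMotion-PG" and illustrative IP settings; it is a sketch of the approach rather than a complete configuration workflow:

    from pyVmomi import vim

    def add_vmotion_vmknic(host, portgroup="vMotion-PG",
                           ip="192.0.2.10", netmask="255.255.255.0"):
        # Create a VMkernel NIC bound to the dedicated vMotion TCP/IP stack.
        spec = vim.host.VirtualNic.Specification(
            ip=vim.host.IpConfig(dhcp=False, ipAddress=ip, subnetMask=netmask),
            netStackInstanceKey="vmotion")       # key of the vMotion netstack
        return host.configManager.networkSystem.AddVirtualNic(
            portgroup=portgroup, nic=spec)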