Yesterday was a big day on the release front. A lot of vSphere and related products received updates. I must admit I was pleasantly surprised by the large number of fixes in the various products and components. If you look at the release notes it becomes obvious that a lot of attention went into stability and reliability. The releases mentioned here are available for download.

If you are going to update, always check the Hardware Compatibility List first. It is better to verify up front that everything is supported than to face downtime because you need to do a restore or run into other issues. The sketch below shows one way to pull the hardware details you need for that check.
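If you manage more than a handful of hosts, pulling the hardware details via the API is faster than clicking through each host. Here is a minimal pyVmomi sketch that lists vendor, model, and BIOS version per host so you can look them up on the HCL. It assumes the pyVmomi package is installed; the hostname and credentials are placeholders.

```python
# Minimal sketch: list vendor/model/BIOS per ESXi host for an HCL check.
# Assumes pyVmomi is installed; hostname and credentials are placeholders.
from pyVim.connect import SmartConnectNoSSL, Disconnect
from pyVmomi import vim

si = SmartConnectNoSSL(host='vcenter.example.com',
                       user='administrator@vsphere.local', pwd='secret')
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    hw = host.hardware.systemInfo      # server vendor and model
    bios = host.hardware.biosInfo      # BIOS version and release date
    print(f"{host.name}: {hw.vendor} {hw.model}, BIOS {bios.biosVersion}")
view.Destroy()
Disconnect(si)
```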

Without further ado, let's dive into the updates.


vCenter 6.5 U1

I listed a couple of updates I think are very interesting. For a complete list of updates and fixes, check the Release Notes for vCenter 6.5 U1.

One of them is the demise of the 3rd party virtual switch. It means that customers using 3rd party virtual switches such as the IBM DVS 5000v, HPE 5900v, and Cisco Nexus 1000v will need to migrate off those switches prior to upgrading to vSphere 6.5 U1. It was announced long before, but now it is really, really happening (source).
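If you are not sure whether any third-party switches are still in play, the vendor of each distributed switch is exposed in the API. A minimal pyVmomi sketch, with the same assumptions and placeholder connection details as above:

```python
# Minimal sketch: flag distributed switches not made by VMware, since those
# must be migrated off before upgrading to vSphere 6.5 U1.
# Assumes pyVmomi is installed; hostname and credentials are placeholders.
from pyVim.connect import SmartConnectNoSSL, Disconnect
from pyVmomi import vim

si = SmartConnectNoSSL(host='vcenter.example.com',
                       user='administrator@vsphere.local', pwd='secret')
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.DistributedVirtualSwitch], True)
for dvs in view.view:
    info = dvs.config.productInfo  # vendor, name, and version of the switch
    status = 'OK' if info.vendor.startswith('VMware') else 'MIGRATE FIRST'
    print(f"{dvs.name}: {info.vendor} {info.name} {info.version} -> {status}")
view.Destroy()
Disconnect(si)
```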

  • vCenter Server appliance GUI and CLI installers now work on Microsoft Windows 2012 x64, Microsoft Windows 2012 R2 x64, Microsoft Windows 2016 x64, and macOS Sierra.
  • vCenter Server 6.5 Update 1 supports Guest OS customization for Ubuntu 17.04 OS.
  • VMware vSphere Storage APIs – Data Protection (VADP) now also supports Windows Server 2016 and Red Hat Enterprise Linux RHEL 7.3 as operating systems to perform proxy backup.
  • Update Manager can be used to upgrade the ESXi and vSAN stack.
  • vCenter Server now supports Microsoft SQL Server 2016, Microsoft SQL Server 2016 SP1, and Microsoft SQL Server 2014 SP2.
  • The HTML5-based vSphere Client now supports most content library and OVF deployment operations, as well as operations on roles and permissions, basic customization of the Guest OS, and additions to virtual machine, host, datastore, and network management.
  • The HTML5-based vSphere Client now supports French, Swiss-French, and Swiss-German keyboards.
  • Linked vCenter Server instances now support up to 15 vCenter Server instances, 5,000 ESXi hosts, 50,000 powered on virtual machines, and 70,000 registered virtual machines.
  • You cannot take a file-based backup in proxy enabled mode, even though the backup server is listed in a NO_PROXY list
  • vSphere Data Protection appliance (VDP) 6.1.4 does not support Transport Layer Security (TLS) with version 1.2
  • vCenter Server stops responding and vpxd continuously crashes with multiple QueryHostReplicationCapabilities errors
  • The ldapSchemaTool does not work to configure custom schema mapping for LDAP identity source.
  • Guest customization fails on Linux operating systems
  • Guest Customization fails with error: GUESTCUST_EVENT_NETWORK_SETUP_FAILED
  • Guest Customization Failure: GUESTCUST_EVENT_CUSTOMIZE_FAILED
  • Failure when writing diagnostic logs to the /var/log/vmware/cm/cm.log file
  • During the migration or upgrade to vCenter Server Appliance 6.5, some deployment sizes are not available for selection
  • During the command-line installation, upgrade, and migration processes to vCenter Server Appliance 6.5, no structured status file is provided.
  • During a vCenter Server Appliance upgrade, the upgrade requirement error message does not indicate that the root password has expired
  • When you upgrade vCenter Server Appliance that resides on an ESXi host with the free Hypervisor license, the upgrade fails with an internal error.
  • Upgrade to vCenter Server Appliance 6.5 might fail because of a vpxd-firstboot failure
  • If an older version of the OpenSSL DLLs is installed, upgrading to vSphere 6.5 fails
  • vCenter Server pre-upgrade check fails with duplicate names in a network folder error
  • The temporary log file autodeploy-service.log might grow quite large over time.
  • Affinity rules configured on vCenter Server 5.5 can cause crashes after upgrading to vCenter Server 6.5
  • A multistep upgrade of vCenter Server on a Windows VM fails with error messages in Upgrade runner precheck
  • The file replication status is not updated in vCenter High Availability (HA) when no file replication is going on
  • Direct Console User Interface (DCUI) screen appears garbled
  • When a user performs a query for tags that are attached to several objects, performance issues can occur in vSphere 6.5. The problems can get so bad that the vSphere Web Client freezes.
  • vCenter Server Appliance generates unreadable, encoded email alerts.
  • A vCenter High Availability (HA) cluster might enter a degraded state after 60 days of deployment
  • VM Snapshot Size (GB) alarm is not triggered after the VM is powered on.
  • New alarm configured with status unset fails to work in vCenter Server 6.5
  • The vAPI runtime logs for the VMware Lifecycle Manager API (vmonapi) service are not rotated, causing the logs to be stored in a single large file
  • vCenter HA health monitoring shows that the appliance configuration is in sync, even when the Passive node is down
  • Port mirroring sessions cannot be removed or modified
  • When you add ports to a vSphere Distributed Switch you get an error
  • The vpxd service crashes when you add ports to a newly imported vSphere Distributed Switch
  • vCenter Server crashes due to ODBC error
  • Virtual machines configured to use EFI firmware fail to PXE boot in some DHCP environments
  • IP address or DNS servers configuration fails due to a crash in the network configuration manager code
  • A runtime exception “Unable to retrieve data about the distributed switch” might occur while upgrading vSphere Distributed Switch (vDS) from version 5.0 to 6.5
  • A user with privilege to manage a vCenter object cannot see the object’s advanced performance charts.
  • File-based backups for vCenter Server Appliance are failing over SCP
  • Password masking on prompt and improved usage and error reporting, when updating the service account information on vCenter Server for Windows
  • You cannot use custom ESXi SSL certificates with keys that are longer than 2048 bits (a quick way to check your key length is sketched after this list)
  • Certificate regeneration fails with an error on vCenter Server 6.5
  • Joining of a vCenter Server host to the Disjoint Active Directory domain in vSphere 6.5 can cause a service failure
  • Host configuration might not be available after vCenter Server restarts
  • vSphere Syslog Collector fails to start when you configure the default data directory
  • vSphere Machine SSL certificate replacement fails when the old and new entries in the SubjectAltName field do not match
  • In the process of applying a host profile, the pre-check remediation fails with a general system error
  • When you enable the vSAN feature in the vSphere cluster, you might see a false event message
  • OVF tool fails to upload OVF or OVA files larger than 10 GB
  • The ovftool command-line option --allowAllExtraConfig never worked as designed. In vSphere 6.5 Update 1, this option is no longer supported.
  • After you update vCenter Server to version 6.5.x, you might see the old vSAN name in the vSphere Web Client
  • OVF deployment does not properly import vApp OVF templates that contain macro property references
  • A slash symbol in the inventory object names is displayed as %2f in the vSphere Web Client and the vSphere Client 6.5
  • OVF templates on a web server that is behind a proxy cannot be deployed or uploaded to a content library
  • Networking and storage settings are not visible for some ESXi hosts in the vSphere Web Client
  • Related events panel in vSphere Web Client 6.5 displays an error
  • You cannot edit DRS and HA settings in the vSphere Web Client
  • You might see an error message “Hardware status error: querySpec.metricId” on the Host Summary page of the vSphere Web Client
  • When you clone a virtual machine to a vSAN datastore in a different vCenter Server Instance, the loading progress bar on the Select storage page of the Clone to Virtual Machine wizard doesn’t stop loading
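For the certificate-related fixes above: if you want to verify what key length a host currently presents, you can pull the certificate over TLS and inspect it. A minimal sketch, assuming the third-party cryptography package is installed (pip install cryptography) and using a placeholder hostname:

```python
# Minimal sketch: report the public key size of the certificate an ESXi host
# presents on port 443, relevant to the 2048-bit custom certificate fix above.
# Requires 'pip install cryptography'; the hostname is a placeholder.
import ssl
from cryptography import x509

pem = ssl.get_server_certificate(('esxi01.example.com', 443))
cert = x509.load_pem_x509_certificate(pem.encode())
print(f"Key size: {cert.public_key().key_size} bits")
```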

Back to top

ESXi 6.5 U1

Release notes

ESXi 6.5 U1 contains several driver updates:

  • Cavium qlnativefc driver
  • VMware nvme driver
  • Intel i40en driver with Lewisburg 10G NIC Support
  • Intel ne1000 driver with Lewisburg 1G NIC Support
  • Intel igbn driver
  • Intel ixgben driver
  • Broadcom ntg3 driver

A lot of items are fixed or improved in ESXi 6.5 U1. I grouped them into themes: Purple Screens, Responsiveness, Performance, Network, Storage, Various. Just click one of the themes, or scroll down.

Purple Screens:

  • An ESXi host might fail with purple diagnostic screen when collecting performance snapshots with vm-support due to calls for memory access after the data structure has already been freed.
  • An ESXi host might fail with a purple screen on shutdown if IPv6 mld is used because of a race condition in tcpip stack.
  • An ESXi host might fail with a purple screen and a Spin count exceeded (refCount) - possible deadlock with PCPU error
  • If a pNIC is disconnected and connected to a virtual switch, the VMware NetQueue load balancer must identify it and pause the ongoing balancing work. In some cases, the load balancer might not detect this and access wrong data structures. As a result, you might see a purple screen.
  • For latency-sensitive virtual machines, the netqueue load balancer can try to reserve an exclusive Rx queue. If the driver provides queue preemption, the netqueue load balancer uses this to get an exclusive queue for latency-sensitive virtual machines. The netqueue load balancer holds a lock and executes the queue-preemption callback of the driver. With some drivers, this might result in a purple screen on the ESXi host, especially if the driver implementation involves sleep mode.
  • An ESXi host might fail with a purple screen when you globally turn off IPv6 support and reboot the host.
  • An ESXi host might fail with a purple screen because of a race condition when multiple multipathing plugins (MPPs) try to claim paths.
  • The NFS v3 client does not properly handle a case where NFS server returns an invalid filetype as part of File attributes, which causes the ESXi host to fail with a purple screen.
  • An ESXi host might fail with a purple screen if the virtual machines running on it have large capacity vRDMs and use the SPC4 feature
  • An ESXi host might fail with a purple screen if the VMFS6 datastore is mounted on multiple ESXi hosts, while the disk.vmdk has file blocks allocated from an increased portion on the same datastore
  • An ESXi host might fail with a purple screen because of a CPU heartbeat failure
  • An ESXi host might fail with purple screen if the virtual machine with large virtual disks uses the SPC-4 feature
  • An ESXi host might fail with purple screen when running HBR + CBT on a datastore that supports unmap
  • An ESXi host might fail with a purple screen because the system cannot recognize or reserve resources for the USB device
  • An ESXi host might fail with purple screen when a Fault Tolerance Secondary virtual machine (VM) fails to power on and the host runs out of memory
  • ESXi host fails with purple diagnostic screen when mounting a vSAN disk group
  • Using objtool on a vSAN witness host causes ESXi host to fail with purple diagnostic screen
  • ESXi host fails with purple diagnostic screen due to incorrect adjustment of read cache quota
  • A host in a vSAN cluster fails with a purple diagnostic screen due to internal race condition

Responsiveness

  • The ESXi 6.5 host fails to join the Active Directory domain, and the process might become unresponsive for an hour before returning the Operation timed out error, if the host uses only an IPv4 address and the domain has an IPv6 or mixed IPv4 and IPv6 setup.
  • If you disconnect your ESXi host from vCenter Server and some of the virtual machines on that host are using LAG, your ESXi host might become unresponsive when you reconnect it to vCenter Server after recreating the same LAG on the vCenter Server side.
  • An ESXi host might become unresponsive with no heartbeat NMI state on AMD machines with OHCI USB Host Controllers.
  • An ESXi host might stop responding if a LUN unmapping is made on the storage array side
  • An ESXi host might become unresponsive if the VMFS-6 volume has no space for the journal
  • SSD congestion might cause multiple virtual machines to become unresponsive
  • The virtual machine might become unresponsive due to active memory drop
  • When you take a snapshot of a virtual machine, the virtual machine might become unresponsive.
  • ESXi 5.5 and 6.x hosts stop responding after running for 85 days
  • Virtual Machine stops responding during snapshot consolidation
  • Windows 2012 terminal server running VMware Tools 10.1.0 on ESXi 6.5 stops responding when many users are logged in.
  • The lsi_mr3 driver and hostd process might stop responding due to a memory allocation failure in ESXi 6.5
  • When you hot-add an existing or new virtual disk to a CBT (Changed Block Tracking) enabled virtual machine (VM) residing on a VVOL datastore, the guest operating system might stop responding
  • Installation on TPM 1.2 machine hangs early during boot
  • Entering maintenance mode would time out after 30 minutes, even if the specified timeout is larger than 30 minutes.
  • Virtual machines with a paravirtual RDMA (PVRDMA) device run RDMA applications to communicate with the peer queue pairs. If an RDMA application attempts to communicate with a non-existing peer queue number, the PVRDMA device might wait for a response from the peer indefinitely. As a result, the virtual machine becomes inaccessible if the RDMA application is still running during a snapshot operation or migration.

Performance

  • Wrong NUMA placement of a preallocated virtual machine leading to sub-optimal performance
  • Performance issues on Windows Virtual Machine (VM) might occur after upgrading to VMware ESXi 6.5.0 P01 or 6.5 EP2
  • Resolved the performance drop in Intel devices with stripe size limitation
  • The performance counter cpu.system incorrectly shows a value of 0 (zero)

Network

  • When registering hardware version 3 virtual machines, the ESXi hostd service disconnects from vCenter Server
  • An ESXi host might lose network connectivity when performing a stateless boot from Auto Deploy if the management vmkernel NIC has static IP and is connected to a Distributed Virtual Switch.
  • By default, each ESXi host has one virtual switch, vSwitch0. During the installation of ESXi, the first physical NIC is chosen as the default uplink for vSwitch0. If that NIC's link state is down, the ESXi host might have no network connection, even though the other NICs have their link state up and network access.
  • A loss of network connectivity might occur when the e1000/e1000e driver tells the e1000/e1000e vmkernel emulation to skip a descriptor
  • On some servers, a USB network device is integrated in IMM or iLO to manage the server. When you reboot IMM by using the vSphere Web Client or an IMM or iLO command, the transaction on the USB network device is lost.
  • NICs using ntg3 driver might experience unexpected loss of connectivity. The network connection cannot be restored until you reboot the ESXi host. The devices affected are Broadcom NetXtreme I 5717, 5718, 5719, 5720, 5725 and 5727 Ethernet Adapters.
  • The vmxnet3 device tries to access the memory of the guest OS while the guest memory preallocation is in progress during the migration of virtual machine with Storage vMotion. This results in an invalid memory access and the ESXi 6.5 host failure.
  • The igb native driver on an ESXi host always works in auto-negotiate speed and duplex mode. The auto-negotiate support causes a duplex mismatch issue if a physical switch is set manually to full-duplex mode.
  • A virtual machine configured to use EFI firmware fails to obtain an IP address when trying to PXE boot if the DHCP environment responds by IP unicast. The EFI firmware was not capable of receiving a DHCP reply sent by IP unicast.
  • The original packet buffer can be shared across multiple destination ports if the packet is forwarded to multiple ports (such as a broadcast packet). If VLAN offloading is disabled and you modify the original packet buffer, the VLAN tag will be inserted into the packet buffer before it is forwarded to the guest VLAN. The other port will detect packet corruption and drop the packet.
  • When the physical link status of a vmnic changes, for example the cable is unplugged or the switch port is shut down, the output of the esxcli command might give a wrong link status on Intel 82574L based NICs (Intel Gigabit Desktop CT/CT2). You must manually restart the NIC to get the actual link status (see the sketch after this list).
  • The Windows 2012 domain controller supports SMBv2, whereas the Likewise stack on ESXi supported only SMBv1. With this release, the Likewise stack on ESXi is enabled to support SMBv2.
  • “Couldn't enable keep alive” warnings occur during VMware NSX and partner solutions communication through a VMCI socket (vsock).
  • Intel I218 NIC resets frequently in a heavy traffic scenario
  • A vMotion migration of a virtual machine (VM) gets suspended for some time, and then fails with a timeout
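About the Intel 82574L link-status item above: the manual NIC restart it mentions can be scripted from the ESXi shell, which ships with a Python interpreter. A minimal sketch; the vmnic name is a placeholder for the affected NIC:

```python
# Minimal sketch: bounce a NIC to refresh its reported link status, the
# manual workaround for the Intel 82574L issue above. Run on the ESXi host
# itself; 'vmnic2' is a placeholder for the affected NIC.
import subprocess

NIC = 'vmnic2'
for action in ('down', 'up'):
    subprocess.run(['esxcli', 'network', 'nic', action, '-n', NIC],
                   check=True)
```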

Storage

  • An ESXi host might lose connectivity to VMFS datastore
  • Disabled frequent lookup to an internal vSAN metadata directory (.upit) on virtual volume datastores. This metadata folder is not applicable to virtual volumes
  • Hosts in a vSAN cluster have high congestion which leads to host disconnects
  • Cannot enable vSAN or add ESXi host into a vSAN cluster due to corrupted disks
  • vSAN Configuration Assist issues a physical NIC warning for lack of redundancy when LAG is configured as the active uplink
  • vSAN cluster becomes partitioned after the member hosts and vCenter Server reboot
  • Large File System overhead reported by the vSAN capacity monitor
  • If a vSAN cluster has objects with size of 0 bytes, and those objects have any components in need of repair, CLOMD might crash.
  • vSphere API FileManager.DeleteDatastoreFile_Task fails to delete DOM objects in vSAN
  • When all NFS datastores are disabled in a Host Profile document extracted from a reference host, remediation of the host profile might fail with compliance errors, and existing datastores are removed or new ones added during the remediation.
  • Previously, for a Pure Storage FlashArray device you had to add the SATP rule manually to set the SATP, PSP, and IOPS. A new SATP rule is added to ESXi that sets the SATP to VMW_SATP_ALUA, the PSP to VMW_PSP_RR, and IOPS to 1 for all Pure Storage FlashArray models (the manual equivalent is sketched after this list).
  • Modification of IOPS limit of virtual disks with enabled Changed Block Tracking (CBT) fails with errors in the log files
  • When you hot-add two or more hard disks to a VMware PVSCSI controller in a single operation, the guest OS can see only one of them.
  • If a VVol VASA Provider returns an error during a storage profile change operation, vSphere tries to undo the operation, but the profile ID gets corrupted in the process.
  • Per host Read or Write latency displayed for VVol datastores in the vSphere Web Client is incorrect.
  • When you use vSphere Storage vMotion on vSphere Virtual Volumes storage, the UUID of a virtual disk might change. The UUID identifies the virtual disk and a changed UUID makes the virtual disk appear as a new and different disk. The UUID is also visible to the guest OS and might cause drives to be misidentified.
  • After installation or upgrade certain multipathed LUNs will not be visible
  • The recompose operation in Horizon View might fail for desktop virtual machines residing on NFS datastores with stale NFS file handle errors, because of the way virtual disk descriptors are written to NFS datastores
  • Digest VMDK files are not deleted from the VM folder when you delete a VM
  • vSphere Storage vMotion might fail with an error message if it takes more than 5 minutes
  • Non-Latin characters might be displayed incorrectly in VM storage profile names
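For reference, the manual Pure Storage rule that the new built-in SATP rule replaces looked like the sketch below, run from the ESXi shell. The vendor and model strings follow Pure Storage's published best practice; treat them as an assumption and verify against your array documentation before relying on this.

```python
# Minimal sketch: the manual SATP rule (now built into ESXi 6.5 U1) that sets
# VMW_SATP_ALUA, VMW_PSP_RR, and an IOPS limit of 1 for Pure FlashArray LUNs.
# Run on the ESXi host; vendor/model strings are Pure's published values.
import subprocess

subprocess.run(['esxcli', 'storage', 'nmp', 'satp', 'rule', 'add',
                '-s', 'VMW_SATP_ALUA',   # SATP to use
                '-P', 'VMW_PSP_RR',      # path selection policy: round robin
                '-O', 'iops=1',          # switch paths after every I/O
                '-V', 'PURE',            # array vendor string
                '-M', 'FlashArray',      # array model string
                '-e', 'Pure Storage FlashArray rule'],
               check=True)
```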


Various

  • Virtual machine shuts down automatically with the MXUserAllocSerialNumber: too many locks error
  • Dell 13G Servers use DDR4 memory modules. These modules are displayed with status “Unknown” on the Hardware Health Status Page in vCenter Server.
  • When a keyboard is configured with a layout other than the U.S. default, and is later unplugged and plugged back into the ESXi host, the newly connected keyboard is assigned the U.S. default layout instead of the user-selected layout.
  • The vmswapcleanup jumpstart plugin fails to start with a message in syslog.
  • The SNMP agent reports the same value for both the ifOutErrors and ifOutOctets counters, when they should be different.
  • When using the vSphere Web Client to attempt to change the value of the Syslog.global.logDirUnique option, this option appears grayed out, and cannot be modified.
  • During the boot of an ESXi host, error messages related to execution of the jumpstart plug-ins iodm and vmci are observed in the jumpstart logs.
  • Existing VMs using Instant Clone, and new ones created with or without Instant Clone, lose connection with the Guest Introspection host module. As a result, the VMs are not protected and no new Guest Introspection configurations can be forwarded to the ESXi host. You are also presented with a Guest introspection not ready warning in the vCenter Server UI.
  • When the PVRDMA driver is installed on a guest OS that supports PVRDMA device, the PVRDMA driver might fail to load properly when the guest OS is powered on. You might be stuck with the unavailable device in link down state until you manually reload the PVRDMA driver.
  • In vSphere 6.5 the secure heartbeat feature supported adding ESXi hosts with certificates with exactly 2048-bit keys. If you try to add or replace the ESXi host certificate with a custom certificate with a key longer than 2048 bits, the host gets disconnected from vCenter Server.
  • PCI passthru does not support devices with MMIO located above 16 TB where MPNs are wider than 32 bits.
  • You are prompted for a password twice when connecting to an ESXi host through SSH if the ESXi host is upgraded from vSphere version 5.5 to 6.5 while being part of a domain.
  • In the host profile section, a compliance error on Security.PasswordQualityControl is observed when the PAM password setting in the PAM password profile differs from the advanced configuration option Security.PasswordQualityControl.
  • If the mandatory field in the VMODL object of the profile path is left unset, a serialization issue might occur during the answer file validation for network configuration, resulting in a vpxd service failure.
  • Setting the /Power/PerfBias advanced configuration option is not available. Any attempt to set it to a value returns an error.
  • vSphere 6.5 does not support disjointed Active Directory domain. The disjoint namespace is a scenario in which a computer’s primary domain name system (DNS) suffix does not match the DNS domain name where that computer resides.
  • XHCI-related platform errors are reported in the ESXi VMkernel logs
  • Removed the redundant controller reset when starting the controller
  • The Marvell Console device on the Marvell 9230 AHCI controller is not available
  • Major upgrade of dd image booted ESXi host to version 6.5 by using vSphere Update Manager fails with the Cannot execute upgrade script on host error.
  • The previous software profile version of an ESXi host is displayed in the esxcli software profile get command output after execution of an esxcli software profile update command. Also, the software profile name is not marked Updated in the esxcli software profile get output after an ISO upgrade.
  • Unable to collect a vm-support bundle from an ESXi 6.5 host, because when generating logs in ESXi 6.5 through the vSphere Web Client, the text box for selecting specific logs to export is blank.
  • A host scan operation fails with a RuntimeError in ImageProfile module if the module contains VIBs for a specific hardware combination.
  • A reconfigure operation of a powered-on virtual machine that sets an extraConfig option with an integer value might fail with SystemError
  • vSphere Guest Application Monitoring SDK fails for VMs with vSphere Fault Tolerance enabled
  • The guestinfo.toolsInstallErrCode variable is not cleared on Guest OS reboot when installing VMware Tools
  • You cannot change some ESXi advanced settings such as /Net/NetPktSlabFreePercentThreshold because of the wrong default value.

Back to top

VMware Tools 10.1.10

Full release notes

  • While uninstalling VMware Tools in a Linux guest operating system, VMware Tools uninstaller is unable to stop vmtoolsd service. This issue occurs in Linux distributions such as Ubuntu 15.04 and later, RHEL7 and later, and SLES12 and later.
  • Installing VMware Tools on a 64-bit Windows virtual machine might result in an error
  • Mouse movements in RDP sessions to Windows virtual machines are affected by MKS console mouse movements
  • VMware Tools upgrade on power cycle fails on Windows operating system
  • VMware Tools upgrade fails if /tmp is mounted as noexec
  • Quiesced snapshot fails on a Japanese Windows Server 2008 R2 in vSphere
  • Quiesced snapshots of Windows Server 2012 and Windows Server 2012 R2 virtual machines with VMware Tools 10.1.0 fail with an error
  • VMware Tools re-installation in repair mode triggers a warning
  • Connecting to View fails with a black screen intermittently
  • Upgrading VMware Tools to 10.1.0 in a Windows guest operating system results in system event log
  • WMI performance adapter service fails on Windows guest operating systems

Back to top

vSAN 6.6.1

Full release notes

vSAN has a couple of interesting new items.

Update Manager can scan the vSAN cluster and recommend host baselines that include updates, patches, and extensions. It manages recommended baselines, validates the support status from the vSAN HCL, and downloads the correct ESXi ISO images from VMware. vSAN requires Internet access to generate build recommendations. If your vSAN cluster uses a proxy to connect to the Internet, vSAN can generate recommendations for patch upgrades, but not for major upgrades.

Performance diagnostics analyzes previously executed benchmark tests. It detects issues, suggests remediation steps, and provides supporting performance graphs for further insight. Performance Diagnostics requires participation in the Customer Experience Improvement Program (CEIP).

Gen-9 HPE controllers in pass-through mode now support vSAN activation of locator LEDs. Blinking LEDs help to identify and isolate specific drives.

Before you upgrade, check the release notes. There are some known issues that might prevent you from updating to 6.6.1. For some of them workarounds are available, for others there are not.

Back to top

vSphere Replication 6.5.1

Full release notes for vSphere Replication 6.5.1

With the release of vSphere 6.5 Update 1 there is also a new version of vSphere Replication; in other words, vSphere Replication 6.5.1 is compatible with VMware vSphere 6.5 Update 1. It supports an upgrade migration path from vCenter Server Virtual Appliance 6.0 Update 3 to vCenter Server Virtual Appliance 6.5 Update 1 by delivering a direct upgrade path from vSphere Replication 6.1.2 to vSphere Replication 6.5.1.

vSphere Replication 6.5.1 now supports the following external databases:

  • Microsoft SQL Server 2014 Service Pack 2
  • Microsoft SQL Server 2016 Service Pack 1

Operating system support has been added for the following:

  • Windows Server 2016
  • CentOS 6.9
  • RHEL 7.3.5
  • Ubuntu 17.04 non Long Term Support (LTS)

Back to top

Site Recovery Manager 6.5.1

Full release notes for Site Recovery Manager 6.5.1

VMware Site Recovery Manager 6.5.1 is compatible with VMware vSphere 6.5 Update 1. It provides the following new features:

  • Supports upgrade migration path from vCenter Server Virtual Appliance 6.0 Update 3 to vCenter Server Virtual Appliance 6.5 Update 1 by delivering a direct upgrade path from Site Recovery Manager 6.1.2 to Site Recovery Manager 6.5.1.
  • Site Recovery Manager 6.5.1 now supports the following external databases:
    • Microsoft SQL Server 2014 Service Pack 2
    • Microsoft SQL Server 2016 Service Pack 1
  • Site Recovery Manager 6.5.1 now supports the following guest operating systems:
    • Windows Server 2016
    • CentOS 6.9
    • RHEL 7.3.5
    • Ubuntu 17.04 non Long Term Support (LTS)

Back to top