vMotion issues when using NFS storage with vSphere 5.5 Update 2

When vMotioning a VM residing on an NFS datastore (the issue relates specifically to the .vswp file), you may see the following error.

vMotion fails with the error: remote host IP_Address failed with status Busy

This issue originally occurred in vSphere 4.1 but appears to have reappeared in vSphere 5.5 Update 2.

Luckily there is a workaround for now, until VMware can investigate and resolve the problem.

The workaround is to modify the advanced setting “Migrate.VMotionResolveSwapType” from the default of 1 to 0 on both the source and destination hosts. If you want to solve this for your entire cluster, then every host needs to be modified.

To modify the setting:
  1. Launch the vSphere Client and log in to your vCenter Server.
  2. Select the source ESX host and then click the Configuration tab.
  3. Click Software > Advanced Settings > Migrate.
  4. Under the Migrate options, locate the line containing Migrate.VMotionResolveSwapType. By default, it is set to 1.
  5. Change the value to 0.
  6. Click OK.
  7. Repeat Steps 2 to 6 for all hosts in the cluster.
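
For anyone who would rather script the change across an entire cluster than click through every host, a rough pyVmomi sketch along the lines of the below should do the trick. The vCenter address, credentials and cluster name are placeholders for your own environment, so test it against a single host first.

```python
# Sketch: set Migrate.VMotionResolveSwapType = 0 on every host in a cluster.
# vCenter address, credentials and cluster name below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

VCENTER = "vcenter.example.com"            # placeholder
USER = "administrator@vsphere.local"       # placeholder
PASSWORD = "changeme"                      # placeholder
CLUSTER = "Production-Cluster"             # placeholder

context = ssl._create_unverified_context() # lab only; use valid certs in production
si = SmartConnect(host=VCENTER, user=USER, pwd=PASSWORD, sslContext=context)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    cluster = next(c for c in view.view if c.name == CLUSTER)
    view.Destroy()

    for host in cluster.host:
        opt_mgr = host.configManager.advancedOption
        # Setting the value to 0 applies the workaround; 1 is the default.
        opt_mgr.UpdateOptions(changedValue=[
            vim.option.OptionValue(key="Migrate.VMotionResolveSwapType", value=0)])
        print("Set Migrate.VMotionResolveSwapType=0 on %s" % host.name)
finally:
    Disconnect(si)
```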

The official VMware KB is below.

vMotion fails with the error: remote host IP_Address failed with status Busy (1031636)

Cloning VMs – Why less (I/O & throughput) is better!

I’ve seen the picture below floating around Twitter and LinkedIn, showing a 32GB VM being cloned in just 7 seconds on an All Flash Array (AFA), and it has attracted a lot of attention.

The AFA peaked at over 7000MB/s during this time, showing it is capable of some serious throughput!

At this stage some people may be thinking I’m talking about Nutanix, so I would like to point out the above AFA is not a Nutanix NX-9000 All Flash Node.

So why did I write this post?

I am still surprised that technical people find this sort of test and result impressive. To me, the fact that the AFA used 7000MB/s of bandwidth to perform the clone means it has not performed the clone intelligently: the process has consumed additional capacity while potentially having a high impact on the other workloads using the storage.

At this stage I guess I should explain what I mean by intelligently clone.

An intelligent clone in my mind is where:

a) The clone takes a few seconds to occur
b) The clone is offloaded to the storage layer
c) Uses almost zero I/O & bandwidth to perform the clone
d) Uses almost zero additional space

So in the above example, the solution has cloned the VM in a few seconds, so a) has been satisfied. Since there is no further information provided, I’m going to give it the benefit of the doubt and say the clone was offloaded to the storage layer, so I’m assuming (rightly or wrongly) that b) is also satisfied.

But what about c) and d)?

If the clone uses 7000MB/s of bandwidth, that must have some impact (if not a significant impact) on other workloads running on the storage, even if it is only for 7 seconds.

The clone was also writing data throughout the 7 seconds, so it is also duplicating the data.

So the net result is a fast yet high impact (capacity / performance) clone.

Back in 2012, when I worked at IBM, I wrote this post (Netapp Edge VSA – Rapid Cloning Utility) about intelligent cloning, as a customer was suffering terrible VDI recompose times due to using a big dumb storage solution with no intelligent cloning capabilities. The post shows that even an old IBM x3850 M2 with slow old 4-core processors, running a Virtual Storage Appliance on top of 3 pieces of spinning rust (146GB SAS disks), still completes the task in just 4.73 seconds per clone, in full compliance with the 4 aspects of intelligent cloning I identified (repeated below).

a) The clone takes a few seconds to occur
b) The clone is offloaded to the storage layer
c) Uses almost zero I/O & bandwidth to perform the clone
d) Uses almost zero additional space

The reason intelligent cloning is so much faster is that there is no need to duplicate the VM: the intelligent cloning process simply creates pointers back to the original file (which remains Read Only) and only uses I/O & capacity when new data is created.

The process is actually mostly dependent on vCenter registering the new VM, which is why it takes a couple of seconds; almost no time is spent at the storage layer. The size of the VM being cloned is irrelevant. (Note: In my post from 2012 it was a 10Gb VM, although again the size has no impact on the speed of an intelligent clone.)
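
To make the “pointers” idea concrete, here is a deliberately simplified Python sketch of how a copy-on-write style clone behaves. This is purely an illustration of the general technique, not NetApp’s, Nutanix’s or any other vendor’s actual implementation: the clone is just a thin map over a read-only base, reads fall through to the base image, and I/O and capacity are only consumed when new data is written.

```python
# Toy illustration of a copy-on-write (pointer-based) clone.
# Not any vendor's implementation; just the general idea.

class BaseImage:
    """The original VM's blocks, treated as read-only once cloned."""
    def __init__(self, blocks):
        self.blocks = blocks          # e.g. {block_number: data}

class IntelligentClone:
    """A clone is only a pointer to the base plus a map of changed blocks."""
    def __init__(self, base):
        self.base = base              # pointer created instantly, no data copied
        self.overrides = {}           # new capacity is consumed only here

    def read(self, block_no):
        # Reads fall through to the base image unless the clone has rewritten the block.
        return self.overrides.get(block_no, self.base.blocks.get(block_no))

    def write(self, block_no, data):
        # Only new writes generate I/O and consume additional space.
        self.overrides[block_no] = data

base = BaseImage({0: b"boot", 1: b"os", 2: b"app"})
clone = IntelligentClone(base)        # "cloning" a 32GB or 1TB image costs roughly the same: nothing
clone.write(2, b"app-v2")             # the first change allocates space for one block only
print(clone.read(0), clone.read(2))   # b'boot' b'app-v2'
```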

In the post from 2012, I made the following observation:

Even if you have the world’s fastest array (insert your favorite vendor here), storage connectivity and the biggest and most powerful ESXi hosts, the process of cloning a large number of virtual machines will still:

1. Take more time to complete than an intelligent cloning process like RCU

2. Impact the performance of your ESXi hosts and, more than likely, production VMs

3. Impact the performance of your storage network & array (and anything that uses it, physical or virtual).

So fast forward to 2015: we have lots of really fast All-Flash storage solutions, but for tasks like cloning, even these super-fast solutions can’t outperform the intelligent cloning I demonstrated back in 2012 with a single-controller (2 vCPU) Virtual Storage Appliance running on an old IBM x3850 M2 server in my test lab.

I also recently wrote this article (Is VAAI beneficial with Virtual Storage Appliance (VSA) based solutions?) explaining the benefits of VAAI-NAS and how VAAI-NAS supports intelligent cloning even with Virtual Storage Appliance solutions.

In Summary:

I find a clone taking a few seconds and using next to no throughput and capacity to be impressive. This is a perfect example of less I/O and throughput (to perform the same task) being better!

It’s great if a storage array has the capability to drive many GB/s of throughput, but it’s totally unnecessary for cloning and only demonstrates the storage solution’s lack of intelligent cloning capabilities.

In my opinion it’s much better for a storage solution to use its high performance capability for driving I/O to virtual machines servicing business applications than for tasks like cloning, which can be done intelligently.

To show off the real-world performance capabilities of a storage solution (especially an All-Flash array), the example really has to include multiple workloads with different I/O characteristics. This is something the storage industry (all vendors) continues to fail to provide, and it’s something I would like to be a part of changing, as things like “peak” performance are nowhere near as important as “consistent” performance.

Back on topic though: if cloning is something you or your customers require, for, say, a VDI or Cloud deployment, or just for rapid provisioning of test & development VMs, consider a storage solution with intelligent cloning capabilities such as VAAI-NAS, which integrates with products like Horizon View (VCAI Clones) and vCloud Director (FAST Provisioning).

Virtual machines lose network connectivity after HA fail over (ESXi 5.5)

If you’re running ESXi 5.5 pre Release 2068190 (Update 2), either because you have not upgraded from the GA release or because you have avoided upgrading due to the NFS issue in Update 1, and you are running on a vSphere Distributed Switch (VDS), then read on:

A problem has been discovered when using Static Port Binding, which is the default and recommended port binding setting (see VMware KB 1022312): after an HA event, affected VMs start up with their vmNIC in a “Disconnected” state and cannot connect to the network.

You may receive the following error when trying to reconnect the vmNIC.

[Image: DVSfailure2 – error shown when attempting to reconnect the vmNIC]

If you are having this problem, you can get the environment online by connecting the VMs to a dvPortGroup which uses Ephemeral Binding. Do this for VMs such as Domain Controllers and vCenter (and its dependencies); once these VMs are online, all other VMs will be able to connect to the network normally.

Once everything is back online, I recommend you connect all VMs back to their original dvPortGroup/s with Static Binding.

If you don’t have a dvPortGroup with Ephemeral binding, create a Standard vSwitch, move a single pNIC to it and follow the same process; then migrate the VMs back to the dvPortGroup once they are online and return the pNIC to the dvSwitch.

To future-proof the environment, you may choose to create a dvPortGroup with Ephemeral Binding for the Infrastructure VMs, or simply have an Ephemeral-binding dvPortGroup ready to use just in case.
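
If you want to pre-create that “break glass” ephemeral dvPortGroup rather than build it during an outage, a pyVmomi sketch along these lines should work. The vCenter details, dvSwitch name, portgroup name and VLAN ID are placeholders, so adjust them for your environment and test first.

```python
# Sketch: pre-create a dvPortGroup with Ephemeral binding on an existing dvSwitch.
# vCenter details, switch/portgroup names and VLAN ID below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.DistributedVirtualSwitch], True)
    dvs = next(d for d in view.view if d.name == "dvSwitch01")   # placeholder name
    view.Destroy()

    spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec()
    spec.name = "Infra-Ephemeral"        # placeholder name
    spec.type = "ephemeral"              # ports are created on demand, no static binding
    spec.defaultPortConfig = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy()
    spec.defaultPortConfig.vlan = vim.dvs.VmwareDistributedVirtualSwitch.VlanIdSpec(
        vlanId=10, inherited=False)      # placeholder VLAN

    task = dvs.AddDVPortgroup_Task([spec])
    print("Creating ephemeral dvPortGroup, task: %s" % task.info.key)
finally:
    Disconnect(si)
```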

The good news is VMware have already resolved this issue in ESXi 5.5 Update 2 (Release 2068190), so I recommend bypassing Update 1 (due to the NFS bug) and going straight to Update 2, which means you will avoid the issue altogether.