Bug Life: vSphere 6.0 Network I/O Control & Custom Network Resource Pools

In a previous post, How to configure Network I/O Control (NIOC) for Nutanix (or any IP Storage), I showed just how easy configuring NIOC was back in the vSphere 5.x days.

It was based around the concepts of Shares and Limits, of which I have always recommended Shares, which enable fairness while allowing traffic to burst if/when required. NIOC v2 was a simple and effective solution for sure.

Enter NIOC V3 in vSphere 6.0.

Once you upgrade to NIOC v3 you can no longer use the vSphere C# client, and NIOC now also has the concept of bandwidth reservations, as shown below:

[Image: NIOC overview]

I am not really a fan of reservations in NIOC or for CPU (memory reservations are good though), and in fact I'll go as far as to say NIOC was great in vSphere 5.x and I don't think it needed any changes.

However, with vSphere 6.0 (build 2494585), when attempting to create a custom network resource pool under the "Resource Allocation" menu using the "+" icon (as shown below), you may experience issues.

As shown below, before even pressing the "+" icon to create a network resource pool, the yellow warning box tells us we need to configure a bandwidth reservation for virtual machine system traffic first.

[Image: warning when attempting to create a network resource pool]

So my first thought was: OK, I can do this, but why? I prefer using Shares as opposed to Limits or Reservations because I want traffic to be able to burst when required, and for no bandwidth to be wasted if certain traffic types are not using it.

In any case, I followed the link in the warning and went to set a minimal reservation of 10Mbit/s for Virtual Machine traffic, as shown below.

[Image: configuring a 10Mbit/s reservation for Virtual Machine traffic]

When pressing "OK" I was greeted with the below error saying the "Resource settings are invalid". As shown below, I also tried higher reservations without success.

[Image: "Resource settings are invalid" error]

I spoke to a colleague and had them try the same in a different environment and they also experienced the same issue.

I currently have a call open with VMware Support. They have acknowledged this is an issue and it is being investigated. I will post updates as I hear from them, so stay tuned.

vMotion issues when using NFS storage with vSphere 5.5 Update 2

When vMotioning a VM whose swap file (.vswp) resides on an NFS datastore, you may see the following error.

vMotion fails with the error: remote host IP_Address failed with status Busy

This issue originally occurred in vSphere 4.1 but appears to have reappeared in vSphere 5.5 Update 2.

Luckily there is a workaround for now, until VMware can investigate and resolve the problem.

The workaround is to modify the advanced setting "Migrate.VMotionResolveSwapType" from the default of 1 to 0 on both the source and destination hosts. If you want to solve this for your entire cluster, then every host needs to be modified.

To modify the setting:
  1. Launch the vSphere Client and log in to your vCenter Server.
  2. Select the source ESX host and then click the Configuration tab.
  3. Click Software > Advanced Settings > Migrate.
  4. Under the Migrate options, locate the line containing Migrate.VMotionResolveSwapType. By default, it is set to 1.
  5. Change the value to 0.
  6. Click OK.
  7. Repeat Steps 2 to 6 for all hosts in the cluster.
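If you have many hosts, the same change can be scripted rather than clicked through host by host. Below is a minimal sketch using esxcli from the ESXi shell; it assumes SSH or local shell access is enabled on each host, and uses the esxcli-style option path for the same Migrate.VMotionResolveSwapType setting described above.

```shell
# Check the current value of the setting (the default is 1)
esxcli system settings advanced list -o /Migrate/VMotionResolveSwapType

# Apply the workaround by setting the value to 0.
# Repeat on the source and destination hosts (or every host in the cluster).
esxcli system settings advanced set -o /Migrate/VMotionResolveSwapType -i 0
```

Run the list command again afterwards to confirm the new value took effect.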

The official VMware KB is below.

vMotion fails with the error: remote host IP_Address failed with status Busy (1031636)