Bug Life: vSphere 6.0 Network I/O Control & Custom Network Resource Pools

In a previous post, How to configure Network I/O Control (NIOC) for Nutanix (or any IP Storage), I showed just how easy configuring NIOC was back in the vSphere 5.x days.

It was based around the concepts of Shares and Limits, of which I have always recommended Shares, as they enable fairness while still allowing traffic to burst if/when required. NIOC v2 was a simple and effective solution for sure.

Enter NIOC V3 in vSphere 6.0.

Once you upgrade to NIOC v3 you can no longer use the vSphere C# client, and NIOC now also has the concept of bandwidth reservations, as shown below:

[Image: NIOC v3 overview showing bandwidth reservations for system traffic]
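For those who like to poke at these settings outside the Web Client, the per-traffic-class allocations are also exposed via the vSphere API. The below is a minimal read-only pyVmomi sketch; the vCenter address and credentials are placeholders, and it assumes the vSphere 6.0 infrastructureTrafficResourceConfig property on the VDS config.

```python
# Minimal read-only sketch (pyVmomi): list each NIOC v3 system traffic class
# with its shares, reservation and limit, per distributed switch.
# vCenter address and credentials below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; verify certificates in production
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.DistributedVirtualSwitch], True)
    for dvs in view.view:
        print(dvs.name)
        # infrastructureTrafficResourceConfig holds the NIOC v3 (vSphere 6.0+)
        # per-traffic-class allocations; it is absent on older switches.
        for tc in getattr(dvs.config, "infrastructureTrafficResourceConfig", None) or []:
            alloc = tc.allocationInfo
            print("  %-16s shares=%-4s reservation=%s limit=%s"
                  % (tc.key, alloc.shares.shares, alloc.reservation, alloc.limit))
finally:
    Disconnect(si)
```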

I am not really a fan of reservations in NIOC or for CPU (memory reservations are fine, though), and in fact I’ll go as far as to say NIOC was great in vSphere 5.x and I don’t think it needed any changes.

However, with vSphere 6.0 (build 2494585), when attempting to create a custom network resource pool under the “Resource Allocation” menu using the “+” icon (as shown below), you may experience issues.

As shown below, before even pressing the “+” icon to create a network resource pool, the yellow warning box tells us we need to configure a bandwidth reservation for virtual machine system traffic first.

[Image: warning that a bandwidth reservation must be configured for virtual machine system traffic before a network resource pool can be created]

So my first thought was: OK, I can do this, but why? I prefer using Shares as opposed to Limits or Reservations because I want traffic to be able to burst when required and no bandwidth to be wasted if certain traffic types are not using it.

In any case, I followed the link in the warning and went to set a minimal reservation of 10Mbit/s for Virtual Machine traffic, as shown below.

[Image: setting a 10Mbit/s bandwidth reservation for Virtual Machine traffic]

When pressing “OK” I was greeted with the below error stating “Resource settings are invalid”. As shown below, I also tried higher reservations without success.

[Image: “Resource settings are invalid” error when applying the reservation]

I had a colleague try the same steps in a different environment, and they experienced the same issue.

I currently have a case open with VMware Support. They have acknowledged this is an issue and it is being investigated. I will post updates as I hear from them, so stay tuned.
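In the meantime, if you want to see whether the same validation error occurs outside the Web Client, the reservation can also be attempted via the vSphere API. The below is a hedged pyVmomi sketch rather than a workaround: the VDS name “dvSwitch0” and the credentials are placeholders, it assumes the vSphere 6.0 traffic-class list can simply be modified and re-submitted via the config spec, and I have not confirmed whether this path behaves any differently.

```python
# Hedged sketch (pyVmomi): attempt to set a 10Mbit/s reservation on the
# "virtualMachine" system traffic class of a VDS named "dvSwitch0".
# Names and credentials are placeholders; this re-submits the switch's
# existing traffic-class list with one modified entry via ReconfigureDvs_Task.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.dvs.VmwareDistributedVirtualSwitch], True)
    dvs = next(d for d in view.view if d.name == "dvSwitch0")

    # Reuse the fetched data objects and only change the one value of interest.
    traffic = list(dvs.config.infrastructureTrafficResourceConfig)
    for tc in traffic:
        if tc.key == "virtualMachine":
            tc.allocationInfo.reservation = 10  # assumed to be Mbit/s

    spec = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec(
        configVersion=dvs.config.configVersion,
        infrastructureTrafficResourceConfig=traffic)
    task = dvs.ReconfigureDvs_Task(spec)
    print("Reconfigure task submitted:", task.info.state)
finally:
    Disconnect(si)
```

If the API path fails with the same fault, that at least confirms the validation is server-side rather than a Web Client quirk.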

How to configure Network I/O Control (NIOC) for Nutanix (or any IP Storage)

This video shows how to configure Network I/O Control (NIOC) as per Nutanix best practices; however, this configuration is also applicable to any IP-based storage.

For more information see the Nutanix vNetworking Best Practices Guide.

Related Articles:

1. Network I/O Control Shares/Limits for ESXi Host using IP Storage

2. Network I/O Control for ESXi Host using IP Storage (4x10Gb NICs)

3. Example VMware vNetworking Design for IP Storage

Example VMware vNetworking Design w/ 2 x 10GB NICs (IP based or FC/FCoE Storage)

I have had a large response to my earlier example vNetworking design with 4 x 10Gb NICs, and I have been asked, “What if I only have 2 x 10Gb NICs?”, so the below is an example of an environment which was limited to just two (2) x 10Gb NICs and used IP Storage.

If your environment uses FC/FCoE storage, the below still applies and the IP storage components can simply be ignored.

Requirements

1. Provide high performance and redundant access to the IP Storage (if required)
2. Ensure ESXi hosts could be evacuated in a timely manner for maintenance
3. Prevent significant impact to storage performance by vMotion, Fault Tolerance and Virtual Machine traffic
4. Ensure high availability for all network traffic

Constraints

1. Two (2) x 10Gb NICs

Solution

Use one dvSwitch to support all VMkernel and virtual machine network traffic and use “Route based on Physical NIC Load” (commonly referred to as “Load Based Teaming”).

Use Network I/O Control to ensure that, in the event of contention, all traffic types get appropriate network resources.

Configure the following Network Share Values

IP Storage traffic: 100
ESXi Management: 25
vMotion: 25
Fault Tolerance: 25
Virtual Machine traffic: 50
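For those who script their VDS configuration, the values above can also be applied via the vSphere API. The sketch below is a hedged pyVmomi example assuming a vSphere 6.0 (NIOC v3) style switch; on vSphere 5.x the equivalent settings live on the VDS network resource pools instead. The switch name, credentials and traffic-class keys (e.g. “nfs” vs “iSCSI” for IP Storage) are assumptions to adjust for your environment.

```python
# Hedged sketch (pyVmomi): apply the share values above to the system traffic
# classes of a VDS named "dvSwitch0". Names, credentials and traffic-class
# keys are placeholders/assumptions - verify them in your environment.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

TARGET_SHARES = {          # traffic-class key -> custom share value from the list above
    "nfs": 100,            # IP Storage (use "iSCSI" if your storage is iSCSI)
    "management": 25,
    "vmotion": 25,
    "faultTolerance": 25,
    "virtualMachine": 50,
}

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.dvs.VmwareDistributedVirtualSwitch], True)
    dvs = next(d for d in view.view if d.name == "dvSwitch0")

    # Reuse the switch's existing traffic-class list and only change the shares.
    traffic = list(dvs.config.infrastructureTrafficResourceConfig)
    for tc in traffic:
        if tc.key in TARGET_SHARES:
            tc.allocationInfo.shares = vim.SharesInfo(
                level=vim.SharesInfo.Level.custom,
                shares=TARGET_SHARES[tc.key])

    spec = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec(
        configVersion=dvs.config.configVersion,
        infrastructureTrafficResourceConfig=traffic)
    dvs.ReconfigureDvs_Task(spec)
finally:
    Disconnect(si)
```

Only Shares are set here (no Limits and no Reservations), in line with the design above: shares only come into play under contention, so no bandwidth is wasted when a traffic type is idle.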

Configure two (2) VMkernel ports for IP Storage and set each on a different VLAN and subnet.
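As one example of how this could be scripted, the hedged pyVmomi sketch below adds a single IP Storage VMkernel port on one host, attached to an existing dvPortGroup. The host name, dvPortGroup name and IP addressing are placeholders; you would run it once per VMkernel, against the dvPortGroup for each IP Storage VLAN/subnet.

```python
# Hedged sketch (pyVmomi): add one IP Storage VMkernel port on an ESXi host,
# bound to an existing dvPortGroup. Host, portgroup and addressing are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    host_view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    host = next(h for h in host_view.view if h.name == "esxi01.lab.local")

    pg_view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.dvs.DistributedVirtualPortgroup], True)
    pg = next(p for p in pg_view.view if p.name == "dvPG-IPStorage-A")

    spec = vim.host.VirtualNic.Specification(
        ip=vim.host.IpConfig(dhcp=False,
                             ipAddress="192.168.10.11",
                             subnetMask="255.255.255.0"),
        distributedVirtualPort=vim.dvs.PortConnection(
            switchUuid=pg.config.distributedVirtualSwitch.uuid,
            portgroupKey=pg.key))
    # The portgroup argument is empty because the VMkernel binds to a dvPort instead.
    host.configManager.networkSystem.AddVirtualNic("", spec)
finally:
    Disconnect(si)
```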

Configure VMkernel ports for vMotion (or Multi-NIC vMotion), ESXi Management and Fault Tolerance, and set them to active on both 10Gb interfaces (the default configuration).

All dvPortGroups for Virtual machine traffic (in this example VLANs 6 through 8) will be active on both interfaces.

The above utilizes LBT to load balance network traffic, dynamically moving workloads between the two 10Gb NICs once one or both network adapters reach >=75% utilization.
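For completeness, setting “Route based on Physical NIC Load” on a dvPortGroup can also be done programmatically. The hedged pyVmomi sketch below uses the API value “loadbalance_loadbased”, which corresponds to LBT; the dvPortGroup name is a placeholder.

```python
# Hedged sketch (pyVmomi): set "Route based on Physical NIC Load" (LBT) as the
# teaming policy on an existing dvPortGroup. The portgroup name is a placeholder.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.dvs.DistributedVirtualPortgroup], True)
    pg = next(p for p in view.view if p.name == "dvPG-VM-VLAN6")

    teaming = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortTeamingPolicy(
        policy=vim.StringPolicy(value="loadbalance_loadbased"))  # LBT
    port_config = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy(
        uplinkTeamingPolicy=teaming)
    spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec(
        configVersion=pg.config.configVersion,
        defaultPortConfig=port_config)
    pg.ReconfigureDVPortgroup_Task(spec)
finally:
    Disconnect(si)
```

Repeat (or loop) for each dvPortGroup so all port groups, including the VM traffic VLANs, use LBT.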

[Image: example vNetworking design with 2 x 10Gb NICs]

Conclusion

Even when your ESXi hosts only have two (2) x 10Gb interfaces, VMware provides enterprise-grade features to ensure all traffic (including IP Storage) can get access to sufficient bandwidth to continue serving production workloads until the contention subsides.

This design ensures that, in the event a host needs to be evacuated, even during production hours, the evacuation will complete in the fastest possible time with minimal or no impact to production. The faster your vMotion activity completes, the sooner DRS can get your cluster running as smoothly as possible, and if you are patching, the sooner your maintenance can be completed and the patched hosts returned to the cluster to serve your VMs.

Related Posts

1. Example Architectural Decision – Network I/O Control for ESXi Host using IP Storage (4 x 10 GB NICs)
2. Network I/O Control Shares/Limits for ESXi Host using IP Storage