Example Architectural Decision – Network I/O Control for ESXi Host using IP Storage

Problem Statement

With 10Gb connections, the proposed ESXi hosts will have fewer physical connections, but more bandwidth per connection, than a host with 1Gb NICs. In this case, 4 x 10Gb NICs need to cater for all traffic (including IP storage) for the ESXi hosts.

The design needs to ensure all types of traffic have sufficient burst and sustained bandwidth without negatively impacting other types of traffic.

How can this be achieved?

Assumptions

1. No additional network cards (1Gb or 10Gb) can be supported
2. vSphere 5.0 or later
3. 2 x 48-port 10Gb and 2 x 48-port 1Gb switches exist in the environment
4. ESXi hosts are 4-way servers with 512GB RAM, which are expected to run large numbers of VMs with varying workloads
5. Multi-NIC vMotion is not required due to the use of 10Gb NICs

Motivation

1. When using bandwidth allocation, use “shares” instead of “limits,” as shares allow unused capacity to be redistributed, whereas limits do not
2. Ensure IP Storage (NFS) performance is optimal
3. Ensure vMotion activities (including a host entering maintenance mode) can be performed in a timely manner without impact to IP Storage or Fault Tolerance
4. Fault Tolerance is a latency-sensitive traffic flow, so when using custom shares it is recommended to set the corresponding resource pool shares to a reasonably high relative value

Architectural Decision

Separate VMware infrastructure functions (VMKernel) from virtual machine network traffic by creating two (2) dvSwitches (each with 2 x 10Gb connections): dvSwitch-Admin and dvSwitch-Data.

Enable Network I/O Control, and configure NFS and/or iSCSI traffic with a share value of 100, and vMotion and FT traffic each with a share value of 25.

Configure the two (2) VMKernel ports for IP Storage on dvSwitch-Admin, each set as Active on one 10Gb interface and Standby on the second.

Configure the VMKernel port for vMotion on dvSwitch-Admin as Active on one interface and Standby on the second, and vice versa for FT.

Configure all dvPortGroups for Virtual Machine data on dvSwitch-Data.
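
To make the intended layout more concrete, below is a minimal sketch in Python of how the traffic types map to the two dvSwitches, the Network I/O Control share values and the Active/Standby uplink assignments. It is purely illustrative (not an API call), and the uplink and portgroup names are assumptions, as the decision above does not name them.

# Illustrative model of the proposed vNetworking layout (not an API call).
# Uplink and portgroup names are assumptions; share values and Active/Standby
# placement follow the architectural decision described above.
design = {
    "dvSwitch-Admin": {
        "uplinks": ["10Gb-A", "10Gb-B"],
        "nioc_shares": {"NFS/iSCSI": 100, "vMotion": 25, "FT": 25},
        "vmkernel_portgroups": {
            "IP-Storage-1": {"active": "10Gb-A", "standby": "10Gb-B"},
            "IP-Storage-2": {"active": "10Gb-B", "standby": "10Gb-A"},
            "vMotion":      {"active": "10Gb-A", "standby": "10Gb-B"},
            "FT":           {"active": "10Gb-B", "standby": "10Gb-A"},
        },
    },
    "dvSwitch-Data": {
        "uplinks": ["10Gb-C", "10Gb-D"],
        "portgroups": "all Virtual Machine dvPortGroups",
    },
}

if __name__ == "__main__":
    for switch, config in design.items():
        print(switch, "->", config["uplinks"])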

Justification

1. The share values were chosen to ensure storage traffic is not impacted, as this can have flow-on effects for the environment's performance. vMotion and FT are considered important, but during periods of contention they should not monopolize or impact IP storage traffic.
2. IP Storage is more critical to ongoing cluster and VM performance than vMotion or FT
3. IP storage requires a higher priority than vMotion, which is more of a burst activity and is not as critical to VM performance
4. With a share value of 25, Fault Tolerance still has ample bandwidth to support the maximum of four FT-protected virtual machines per host, even during periods of contention
5. With a share value of 25, vMotion still has ample bandwidth to support multiple concurrent vMotions during contention, and performance should not be impacted on a day-to-day basis. Up to eight concurrent vMotions are supported as it is configured on a 10Gb interface (the limit is four on a 1Gb interface). See the worked bandwidth sketch after this list.
6. The environment required 1Gb switches to accommodate various devices, such as out-of-band management and IP KVM devices; as such, having ESXi management on 2 x 1Gb ports did not add significant cost to the solution
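
To illustrate points 4 and 5, the short Python sketch below shows how the chosen share values translate into a worst-case bandwidth entitlement on a single fully contended 10Gb uplink. This is a simplification: the actual split also depends on which other resource pools are carrying traffic at the time, and unused entitlement is redistributed, so day-to-day throughput will typically be higher.

# Worst-case bandwidth per traffic type on a fully contended 10Gb uplink,
# based on the custom NIOC share values chosen above. Illustrative only.
LINK_GBPS = 10.0
shares = {"NFS/iSCSI": 100, "vMotion": 25, "FT": 25}

def worst_case_gbps(pool, shares, link_gbps=LINK_GBPS):
    """Minimum bandwidth a pool is entitled to when all pools are contending."""
    return link_gbps * shares[pool] / sum(shares.values())

for pool in shares:
    print(f"{pool:10s} >= {worst_case_gbps(pool, shares):.2f} Gb/s")

# Output:
# NFS/iSCSI  >= 6.67 Gb/s
# vMotion    >= 1.67 Gb/s
# FT         >= 1.67 Gb/s

Even in this worst case, roughly 1.67Gb/sec remains for FT (ample for four FT-protected VMs) and for vMotion, while storage retains approximately two thirds of the link.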

Implications

1. In the unlikely event of significant and ongoing contention, reduced bandwidth for vMotion and FT may affect the ability to evacuate a host in a timely manner. This may impact the ability to perform scheduled maintenance.

Alternatives

1. Use all 4 x 10Gb NICs on a single dvSwitch, and use “Active” and “Standby” uplinks to ensure traffic remains on a specified NIC unless there is a failure. Leverage Network I/O Control in a similar manner to the above example to minimize the impact of contention.

See Example VMware vNetworking Design for IP Storage for an overview of the vNetworking design described in this example.

Example VMware vNetworking Design for IP Storage

On a regular basis, I am being asked how to configure vNetworking to support environments using IP Storage (NFS / iSCSI).

The short answer is, as always, that it depends on your requirements, but below is an example of a solution I designed in the past.

Requirements

1. Provide high performance and redundant access to the IP Storage (in this case it was NFS)
2. Ensure ESXi hosts could be evacuated in a timely manner for maintenance
3. Prevent significant impact to storage performance by vMotion / Fault Tolerance and Virtual Machine traffic
4. Ensure high availability for ESXi Management / VMKernel and Virtual Machine network traffic

Constraints

1. Four (4) x 10Gb NICs
2. Six (6) x 1Gb NICs (two onboard NICs and a quad-port NIC)

Note: In my opinion the above NICs are hardly “constraining”, but they are still important to mention.

Solution

Use a standard vSwitch (vSwitch0) for the ESXi Management VMKernel. Configure it with vmNIC0 (Onboard NIC 1) and vmNIC2 (Quad Port NIC – Port 1).

ESXi Management will be Active on vmNIC0 and vmNIC2, although it will only use one path at any given time.

Use a Distributed Virtual Switch (dvSwitch-admin) for IP Storage, vMotion and Fault Tolerance.

Configure vmNIC6 (10Gb Virtual Fabric Adapter NIC 1 Port 1) and vmNIC9 (10Gb Virtual Fabric Adapter NIC 2 Port 2)

Configure Network I/O Control with NFS traffic having a share value of 100, and vMotion and FT each having a share value of 25.

Each VMKernel for NFS will be active on one NIC and standby on the other.

vMotion will be Active on vmNIC6 and Standby on vmNIC9, and Fault Tolerance vice versa.

vNetworking Example dvSwitch-Admin

Use a Distributed Virtual Switch (dvSwitch-data) for Virtual Machine traffic

Configure vmNIC7 (10Gb Virtual Fabric Adapter NIC 1 Port 2) and vmNIC8 (10Gb Virtual Fabric Adapter NIC 2 Port 1)
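
To summarise the NIC-to-traffic mapping described above, here is a simple Python sketch of the teaming per switch. It is illustrative only: the portgroup names are assumptions, and the teaming for Virtual Machine traffic on dvSwitch-data is not specified in the text, so it is shown with both uplinks active.

# Summary of the vSwitch/dvSwitch to physical NIC mapping described above.
# Portgroup names are illustrative; vmnic assignments follow the text.
teaming = {
    "vSwitch0": {
        "Management": {"active": ["vmnic0", "vmnic2"]},  # only one path used at a time
    },
    "dvSwitch-admin": {
        "NFS-VMKernel-1": {"active": ["vmnic6"], "standby": ["vmnic9"]},
        "NFS-VMKernel-2": {"active": ["vmnic9"], "standby": ["vmnic6"]},
        "vMotion":        {"active": ["vmnic6"], "standby": ["vmnic9"]},
        "FT":             {"active": ["vmnic9"], "standby": ["vmnic6"]},
    },
    "dvSwitch-data": {
        # Teaming for VM traffic is not specified in the post; shown as both active.
        "VM-Networks": {"active": ["vmnic7", "vmnic8"]},
    },
}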

Conclusion

While there are many ways to configure vNetworking, and there may be more efficient ways to achieve the requirements set out in this example, I believe the above configuration achieves all the customer requirements.

For example, it provides high performance and redundant access to the IP Storage by using two (2) VMKernel ports, each active on one 10Gb NIC.

IP storage will not be significantly impacted during periods of contention, as Network I/O Control will ensure that in the event of contention the IP Storage traffic receives ~66% of the available bandwidth.

ESXi hosts will be able to be evacuated in a timely manner for maintenance as:

1. vMotion is active on a 10Gb NIC, thus supporting the maximum of eight concurrent vMotions
2. In the event of contention, worst case vMotion will receive just short of 2Gb/sec of bandwidth (~1750Mb/sec). A rough evacuation-time sketch based on these figures follows below.
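
As a very rough, back-of-the-envelope illustration of what “timely” means here, the Python sketch below compares evacuation time at full 10Gb line rate with the worst-case contended figure above. It is illustrative only: it assumes the full 512GB of host RAM (per the assumptions earlier) must be transferred, and it ignores vMotion pre-copy iterations, page dirtying, compression and concurrency.

# Back-of-the-envelope host evacuation time: memory to migrate / vMotion bandwidth.
# Assumes 512GB of VM memory per host and ignores pre-copy, dirtying and compression.
HOST_RAM_GB = 512      # GB of VM memory to evacuate
GBITS_PER_GB = 8       # gigabits per gigabyte

def evacuation_minutes(bandwidth_gbps):
    return HOST_RAM_GB * GBITS_PER_GB / bandwidth_gbps / 60

print(f"Uncontended 10Gb/sec   : ~{evacuation_minutes(10.0):.0f} minutes")
print(f"Worst case ~1.75Gb/sec : ~{evacuation_minutes(1.75):.0f} minutes")

# Output:
# Uncontended 10Gb/sec   : ~7 minutes
# Worst case ~1.75Gb/sec : ~39 minutes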

High availability is ensured as each vSwitch and dvSwitch has two (2) connections from physically different NICs, which connect to physically separate switches.

Hopefully you have found this example helpful. For a related example Architectural Decision, see Example Architectural Decision – Network I/O Control for ESXi Host using IP Storage.

vForum 2012 Sydney – TechTalk – VMware View 5.1 Desktop Deployment Solutions

At this year's vForum in Sydney, I presented a TechTalk on VMware View 5.1 Desktop Deployment Solutions.

Below is the link to the recording. Apologies that the audio is not great, as the TechTalk was done in the vForum lounge.

Josh Odgers – vForum 2012 TechTalk – VMware View 5.1 Desktop Deployment Solutions

As the video is not very clear, I have provided a copy of the presentation below.

vForum VMware View Provisioning Options V0.