What’s .NEXT 2016 – PRISM integrated Network configuration for AHV

As I have previously discussed, AHV is the next-generation hypervisor and brings similar value to traditional hypervisors with much improved management performance and resiliency, while being easier to deploy and scale.

However, one of AHV's weak points has been the visualisation and configuration of the virtual networking (Open vSwitch) from a node perspective.

I am pleased to say that in an upcoming release of AHV, the configuration of virtual networking is integrated into PRISM Element.

The screenshot below shows an example of the Nutanix Controller VM (CVM) and User VMs (UVMs) connected to the underlying bridges/bonds, which in turn connect the virtual machines to the physical network adapters.

[Screenshot NetworkVisual1: CVM and UVMs connected to bridges/bonds and physical adapters]

Next we can see a visualisation of grouped applications (groups of VMs) and which virtual networks they are connected to.

[Screenshot NetworkVisual2: grouped applications and their virtual networks]

Next we can see an end-to-end visualisation of virtual machines (grouped in this example by user), from the AHV host through to the physical network switches and ports.

[Screenshot NetworkVisual3: end-to-end view from VMs to physical switch ports]
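
For those who want to inspect the same bridge/bond layout from the command line today, below is a minimal sketch using Python's standard subprocess module. It assumes shell access to the AHV host and standard Open vSwitch tooling; the bond name "br0-up" is typical for AHV but is an assumption and may differ in your environment.

```python
# Minimal sketch: query Open vSwitch on an AHV host for the bridge/bond
# layout that the new PRISM visualisation renders graphically.
import subprocess

def run(cmd):
    """Run a command and return its stdout, raising on failure."""
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

# Tree view of all bridges, their ports and attached interfaces
# (CVM/UVM taps and physical NICs).
print(run(["ovs-vsctl", "show"]))

# State of the uplink bond: member NICs, link status, active member.
# "br0-up" is a typical AHV bond name; adjust for your environment.
print(run(["ovs-appctl", "bond/show", "br0-up"]))
```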

Stay tuned for upcoming posts with YouTube videos showing how virtual networking is configured and monitored for different use cases.


Nutanix Tech Notes for VMware vSphere

I thought I would put together a single page with links to all the current Nutanix Tech Notes relating to VMware vSphere, as well as a teaser list of upcoming documents.

This will be a living post, updated regularly as new documents are released.

Tech Notes

1. Nutanix Storage Configuration for VMware vSphere

2. VMware vSphere Networking on Nutanix

3. VMware High Availability Configuration for Nutanix (Coming soon)

4. VMware Distributed Resource Scheduler on Nutanix (Coming soon)

5. VMware Storage configuration on Nutanix (Coming soon)

6. VMware vSphere Cluster design with Nutanix (Coming soon)

7. Optimal Virtual Machine design with Nutanix (Coming soon)

8. Monster VM design with Nutanix (Coming soon)

Example Architectural Decision – Jumbo Frames for IP Storage (Do not use Jumbo Frames)

Problem Statement

When using IP-based storage over a converged 10Gb network, should Jumbo Frames be used?

Requirements

1. Fully Supported storage

2. Maximum vSphere environment availability

3. Maximize performance where possible

Assumptions

1. Converged 10Gb network which is highly available

2. Two (or more) 10Gb connections per ESXi host

Constraints

1. No dedicated network for IP storage traffic

Motivation

1. Simplify the environment

Architectural Decision

Do not use Jumbo Frames

Justification

1. Reduce the complexity in the environment for initial implementation

2. Simplify ongoing support / troubleshooting

3. For a Jumbo Frame to be transmitted without fragmentation, all devices end to end must support and be configured for Jumbo Frames

4. While Jumbo Frames can deliver performance benefits for IP storage, the benefit is not seen consistently and depends on the I/O type

5. Ensures IP storage packets are not fragmented or dropped by misconfigured devices or devices that do not support Jumbo Frames

6. Storage performance for the virtual environment will generally be constrained by the storage array controllers, not the storage area network

7. Ensures packet fragmentation does not occur, as all devices support the default MTU of 1500

8. Increasing the MTU decreases the number of packets required for the same bandwidth, but where the bottleneck is throughput (bytes) there is minimal or no benefit (see the worked example after this list)

9. Jumbo Frames will only assist where the network is constrained at the interrupt (per-packet processing) level
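
To put point 8 in concrete terms, the worked example below (illustrative arithmetic only, not a benchmark) compares the packet counts and wire overhead for moving the same payload at MTU 1500 versus MTU 9000:

```python
# Worked example for point 8: packets required to move 1 GB of payload
# at MTU 1500 vs MTU 9000. Header sizes are standard IPv4 (20 bytes) and
# TCP (20 bytes) without options; figures are illustrative only.
PAYLOAD_BYTES = 1_000_000_000  # 1 GB of application data
HEADERS = 40                   # IPv4 + TCP headers per packet

for mtu in (1500, 9000):
    mss = mtu - HEADERS                 # payload carried per packet
    packets = -(-PAYLOAD_BYTES // mss)  # ceiling division
    # + 18 bytes per packet for the Ethernet header and FCS
    wire_bytes = PAYLOAD_BYTES + packets * (HEADERS + 18)
    overhead = wire_bytes / PAYLOAD_BYTES - 1
    print(f"MTU {mtu}: {packets:,} packets, {overhead:.2%} framing overhead")

# Result: roughly 6x fewer packets at MTU 9000, but only about 3% fewer
# bytes on the wire. This is consistent with points 8 and 9: Jumbo Frames
# help when per-packet (interrupt) processing is the bottleneck, not when
# throughput in bytes is the constraint.
```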

Implications

1. In some circumstances, IP storage performance may be lower than what Jumbo Frames could offer

Alternatives

1. Use Jumbo Frames
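
If this alternative is taken, end-to-end Jumbo Frame support should be validated before production storage traffic relies on it (see point 3 of the Justification). The sketch below assumes a Linux host with the standard iputils ping; the target address is a placeholder. On an ESXi host the equivalent check is vmkping with the don't-fragment flag (vmkping -d -s 8972 <target>).

```python
# Minimal sketch: verify a Jumbo Frame survives the path end to end by
# sending a non-fragmentable ICMP payload. Assumes Linux with iputils ping.
import subprocess

TARGET = "192.168.1.10"    # placeholder: storage array or vmkernel IP
# 9000-byte MTU minus 20 bytes (IPv4 header) and 8 bytes (ICMP header)
PAYLOAD = 9000 - 20 - 8    # = 8972

result = subprocess.run(
    ["ping", "-M", "do", "-s", str(PAYLOAD), "-c", "3", TARGET],
    capture_output=True, text=True)

if result.returncode == 0:
    print("Jumbo Frames pass end to end without fragmentation.")
else:
    print("Jumbo Frame path is broken somewhere between the endpoints:")
    print(result.stdout or result.stderr)
```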

Related Articles

1. Example Architectural Decision – Jumbo Frames for IP Storage (Use Jumbo Frames)

Contributors

Thanks to Rob McNab (IBM) and Peter McCrystal (IBM) for their input into this example architectural decision.