What if my VM’s storage exceeds the capacity of a Nutanix node?

I get this question a lot: what if my VM’s storage exceeds the capacity of the node it’s running on? The answer is simple: the storage available to a VM is the entire storage pool, which is made up of all nodes within the cluster, and is not limited to the capacity of any single node.

Let’s take an extreme example: a single VM is running on Node B (shown below) and all other nodes have no workloads. Regardless of whether the nodes are “Storage only” models such as the NX-6035C, or any Nutanix node capable of running VMs (e.g. the NX3060-G4), the SSD and SATA tiers are shared.

[Image: AllSSDhybrid]

The VM will write data to the SSD tier, and only once the entire SSD tier (i.e. all SSDs in all nodes) reaches 75% capacity will ILM tier the coldest data off to the SATA tier. So if the SSD tier never reaches 75% capacity, all data will remain in the SSD tier, both local and remote.
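
To illustrate the tiering decision, here is a minimal Python sketch of the behaviour described above. Only the 75% threshold comes from this post; the function name and data structures are purely illustrative, not Nutanix’s actual ILM implementation.

```python
# Minimal sketch of the ILM down-tiering behaviour described above.
# Only the 75% threshold comes from this post; the names and data
# structures are illustrative, not Nutanix's actual implementation.

SSD_THRESHOLD = 0.75  # ILM only down-tiers past this cluster-wide usage

def ilm_down_tier(ssd_used_gb, ssd_total_gb, extents):
    """Return the coldest extents to move to SATA, if any.

    ssd_used_gb / ssd_total_gb are cluster-wide figures (all SSDs in
    all nodes). 'extents' is a list of (extent_id, size_gb, last_access)
    tuples, where last_access is a sortable timestamp.
    """
    if ssd_used_gb <= SSD_THRESHOLD * ssd_total_gb:
        return []  # under 75%: all data stays in SSD, local and remote

    # Move the coldest data first, until usage is back under the threshold
    to_free_gb = ssd_used_gb - SSD_THRESHOLD * ssd_total_gb
    moved, freed_gb = [], 0.0
    for extent in sorted(extents, key=lambda e: e[2]):  # oldest access first
        if freed_gb >= to_free_gb:
            break
        moved.append(extent)
        freed_gb += extent[1]
    return moved
```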

This means multiple CVMs (Nutanix Controller VMs) will service the I/O, which allows a single VM to achieve scale-up type performance where required.

As the SSD tier exceeds 75% capacity, data is tiered down to SATA, but active data will still reside in the SSD tier across the cluster and be serviced with all-flash performance.

The image below shows a large amount of data in the SATA tier, but ILM is intelligent enough to ensure hot data remains in the SSD tier.

[Image: AllSSDwithColdData]

Now, what about Data Locality? Data Locality is maintained where possible to ensure the overheads of going across the network are minimized, but simply put, if the active working set exceeds the local SSD tier, Nutanix ensures maximum performance by distributing data across the shared SSD tier (not just two nodes, for example) and servicing I/O through multiple controllers.

In the worst case, where the active working set exceeds the local SSD capacity but fits within the shared SSD tier, you will have the same performance as a centralised all-flash array; in the best case, Data Locality will avoid the requirement to traverse the IP network and reads will be serviced locally.

If the active working set exceeds the shared SSD tier, Nutanix also distributes data across the shared SATA tier and services I/O from all nodes within the cluster, as explained in a recent post, “NOS 4.5 Delivers Increased Read Performance from SATA”.

Ideally, I recommend sizing the active working set of VMs to fit within the local SSD tier, but this is not always possible. If you’re running Nutanix, you can find out the active working set of a VM via PRISM (See post here), and if you’re looking to size a Nutanix solution, use my rule of thumb for sizing for storage performance in the new world.
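
To make the sizing logic above concrete, here is a simple sketch. The decision order (local SSD, then shared SSD, then shared SATA) comes from this post; the example capacities are made-up numbers for illustration only, not a recommendation.

```python
# Where a VM's active working set will be serviced, per the logic above.
# The decision order comes from this post; the example capacities below
# are made-up numbers for illustration only.

def expected_tier(working_set_gb, local_ssd_gb, cluster_ssd_gb):
    if working_set_gb <= local_ssd_gb:
        return "local SSD (Data Locality: reads avoid the IP network)"
    if working_set_gb <= cluster_ssd_gb:
        return "shared SSD tier (all-flash performance via multiple CVMs)"
    return "shared SSD + SATA tiers (I/O serviced by all nodes)"

# Example: a 4-node cluster with 1,600 GB of SSD per node
print(expected_tier(working_set_gb=2400,
                    local_ssd_gb=1600,
                    cluster_ssd_gb=4 * 1600))
# -> shared SSD tier (all-flash performance via multiple CVMs)
```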

Example Architectural Decision – Jumbo Frames with IP Storage (Use Jumbo Frames)

Problem Statement

When using IP-based storage over a converged 10Gb network, should Jumbo Frames be used?

Requirements

1. Fully Supported storage

2. Maximum vSphere environment availability

3. Maximize performance where possible

Assumptions

1. Dedicated 10Gb storage network which is highly available

2. Two 10Gb connections per ESXi host dedicated to IP Storage

3. Storage array supports Jumbo Frames

4. Benefit of Jumbo Frames outweighs the complexity to implement/maintain/support

5. Network performance is constrained at an interrupt level

Constraints

1. Maximum of two connections per ESXi host for IP Storage

Motivation

1. Maximum performance and security

Architectural Decision

Use Jumbo Frames

Justification

1. There is a dedicated physical network for IP storage

2. All devices support Jumbo Frames end to end, and this is enabled globally on all switches

3. As only IP storage traffic traverses the dedicated network, a larger MTU will not have any adverse effects on data network traffic.

4. IP storage packets will not be fragmented or dropped, as the storage network has been verified and configured to support Jumbo Frames, thus avoiding costly re-transmits

5. No routing exists (or is required) for the IP storage network; as such, the environment is flat and simple to support

6. IP Storage performance will not be constrained by MTU

7. A standard MTU of 1500 can optionally be configured at the VMkernel layer if performance is negatively impacted by Jumbo Frames, without the need to modify the switch configuration, which will support an MTU of up to 9216

8. Increasing the MTU will decrease the number of packets required for the same bandwidth, helping to prevent the IP storage network from being constrained at an interrupt level (see the sketch below)
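
As a back-of-envelope illustration of justification 8, the sketch below compares packet counts at the standard and jumbo MTUs. Payload sizes ignore header overhead for simplicity, so the figures are approximate.

```python
# Approximate packet counts for the same data at different MTUs.
# Header overhead is ignored for simplicity, so these are estimates.

def packets_required(bytes_to_send, mtu_bytes):
    return -(-bytes_to_send // mtu_bytes)  # ceiling division

one_gb = 10**9
standard = packets_required(one_gb, 1500)   # ~666,667 packets
jumbo = packets_required(one_gb, 9000)      # ~111,112 packets

print(f"MTU 1500: {standard:,} packets")
print(f"MTU 9000: {jumbo:,} packets")
print(f"~{standard / jumbo:.0f}x fewer packets (and interrupts) with Jumbo Frames")
```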

Implications

1. A dedicated network needs to be maintained for IP storage, which reduces consolidation

2. Storage network needs to be configured for Jumbo Frames

3. The Storage controller needs to be configured for Jumbo Frames

4. The VMkernel port/s need to be configured for Jumbo Frames

5. Where the network becomes constrained at either an interrupt or throughput level, any benefit of Jumbo Frames may be reduced or lost and IP storage performance may degrade

Alternatives

1. Do not use Jumbo Frames

2. Use Jumbo Frames in a converged network (i.e. no dedicated IP Storage switches)

Related Articles

1. Example Architectural Decision – Jumbo Frames for IP Storage (Use Jumbo Frames)

Contributors

Thanks to Rob McNab (IBM) and Peter McCrystal (IBM) for their input into this example architectural decision.