Nutanix Implementation of Data Avoidance & Reduction Technologies

While it's not news that the Nutanix Distributed Storage Fabric (NDSF) supports numerous data avoidance & reduction technologies, what is less well known is how these technologies can be enabled/disabled and used.

Before we begin, let me cover the technologies NDSF offers:

Data Avoidance:

  • VAAI-NAS Fast File Clone (for ESXi)
  • View Composer for Array Integration (VCAI) for Horizon View
  • Native NDSF Clones (ESXi, Hyper-V and AHV)
  • ODX Copy Offload (Hyper-V)
  • Crash and Application Consistent snapshots (ESXi, Hyper-V and AHV)

Data Reduction:

  • Compression (In-Line and Post-Process)
  • Deduplication (Fingerprint on Write/In-Line for Performance Tier and/or Capacity Tier)
  • Erasure Coding (EC-X)

Data avoidance is designed to prevent the creation of unnecessary data, which removes the requirement to leverage data reduction technologies. This means less work for the storage layer, which results in more front-end IO being available to service virtual machines.

An example of data avoidance is using VCAI with Horizon View to create Linked Clones near instantly, which not only reduces the space consumed but also speeds up deployment and recompose activities with greatly reduced impact on the environment.

Data avoidance is greatly underrated in my opinion, partly because it results in lower compression/deduplication ratios; data that is never created cannot be deduped or compressed. If Nutanix turned these data avoidance technologies off, the result would be HIGHER compression and dedupe ratios, which sounds great on a marketing slide or in a tweet, but in reality avoiding work for the storage layer is a much better way to do things.

Some vendors report data avoidance such as snapshots in deduplication ratios, and this in my opinion is very misleading and designed to artificially inflate dedupe ratios for competitive purposes. For more information see: Deduplication ratios – What should be included in the reported ratio?

Data Reduction is still a valuable option to have, but in my opinion it's overrated. The reason I think it's overrated is that data reduction does not always work well: whether you see a good data reduction ratio, and whether the overheads (and there is always an overhead) are worth it, depends greatly on your data type.

Let’s now focus on the NDSF implementation of Data Reduction technologies.

Compression:

Compression can be configured on new or existing containers and set to In-Line or Post-Process. For Post-Process, enter a “Delay” value in minutes, e.g. 60 to delay compression for 1 hour, or 1440 for 1 day.

Compression
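For those who prefer to script this rather than use Prism, the same per-container setting can be driven via the Prism REST API or ncli. The sketch below is a minimal Python example only: the endpoint path, the field names (compression_enabled, compression_delay_in_secs) and the cluster address/credentials are assumptions for illustration, so verify them against your AOS/NOS version's API explorer or ncli help before using anything like this.

```python
# Minimal sketch: enable post-process compression on an existing container.
# ASSUMPTIONS: the endpoint path and field names below are illustrative only;
# verify them against your cluster's REST API explorer (or use ncli) before use.
import requests

PRISM = "https://prism.example.local:9440"  # hypothetical cluster address
AUTH = ("admin", "secret")                  # use proper credential handling in practice

def set_compression(container_id: str, delay_minutes: int):
    """Enable compression on a container; delay_minutes=0 means in-line compression."""
    body = {
        "id": container_id,
        "compression_enabled": True,                      # assumed field name
        "compression_delay_in_secs": delay_minutes * 60,  # assumed field name
    }
    resp = requests.put(
        f"{PRISM}/api/nutanix/v2.0/storage_containers",   # assumed endpoint
        json=body,
        auth=AUTH,
        verify=False,  # lab only; verify certificates in production
    )
    resp.raise_for_status()
    return resp.json()

# Example: post-process compression delayed by 1 hour.
# set_compression("<container-uuid>", delay_minutes=60)
```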

Compression can be reconfigured at any time, without the requirement to relocate VMs or reformat the storage. If compression is disabled, data which has already been compressed will be uncompressed by a low-priority background task (known as Curator). This ensures there is low/no impact from changing compression settings, giving customers maximum flexibility.

Because compression is configured per container, you can have VMs or even Virtual Disks running compression alongside VMs or Virtual Disks not running compression within the same NDSF cluster. This helps eliminate silos and ensures mixed workloads with different data types/profiles can co-exist efficiently.

Deduplication:

As with Compression, Deduplication can be configured on new or existing containers and set to dedupe the Performance Tier (SSD) and, optionally, the Capacity Tier (HDD). This means data reduction can be maximised for either or both tiers depending on customer requirements.

dedupeconfig

Again, as with Compression, dedupe can be reconfigured at any time without the requirement to relocate VMs or reformat the storage. If dedupe is disabled, data which has already been deduplicated is rehydrated by the same low-priority background task (Curator), ensuring there is low/no impact from changing dedupe settings and maximum flexibility for customers.

Because dedupe is configured per container, you can have VMs or even Virtual Disks running dedupe alongside VMs or Virtual Disks not running dedupe within the same NDSF cluster. Deduplication is also complementary to Compression, meaning both can be run at the same time to maximise data reduction, further eliminating silos and ensuring mixed workloads can co-exist efficiently.
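To make “Fingerprint on Write” a little more concrete, here is a toy Python sketch of fingerprint-based deduplication: incoming data is split into fixed-size chunks, each chunk is hashed, and only chunks with a previously unseen fingerprint consume space. This is a conceptual illustration only, not NDSF's actual implementation; the 16KB chunk size and SHA-1 hash are assumptions for the example.

```python
# Conceptual illustration of fingerprint-based deduplication (not NDSF internals).
# Chunk size and hash algorithm are assumptions for the example.
import hashlib

CHUNK_SIZE = 16 * 1024  # assume 16KB chunks

def dedupe_stats(data: bytes):
    """Return (logical_bytes, stored_bytes) for a byte stream split into fixed chunks."""
    seen = set()
    stored_bytes = 0
    for offset in range(0, len(data), CHUNK_SIZE):
        chunk = data[offset:offset + CHUNK_SIZE]
        fingerprint = hashlib.sha1(chunk).hexdigest()  # fingerprint on "write"
        if fingerprint not in seen:
            seen.add(fingerprint)
            stored_bytes += len(chunk)                 # only unique chunks consume space
    return len(data), stored_bytes

# Example: 100 identical 16KB blocks dedupe down to a single stored chunk.
logical, stored = dedupe_stats(b"\x42" * CHUNK_SIZE * 100)
print(f"logical={logical} stored={stored} ratio={logical / stored:.1f}:1")
```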

Erasure Coding (EC-X):

As with Compression & Dedupe, EC-X is enabled on a per-container basis and is complementary to both Compression and Dedupe. EC-X is a post-process only form of data reduction designed to work on write-cold data (meaning data which is no longer changing).

EC-X applies to data across both the Performance Tier (SSD) and the Capacity Tier (SATA), which increases the effective SSD capacity, meaning more data can be serviced from SSD and performance is increased.
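As a rough illustration of that SSD-tier benefit: if write-cold data carries an EC-X overhead of roughly 1.25x instead of the 2x of RF2 (the overhead figures are discussed in the Erasure Coding article later in this post), the same physical SSD holds noticeably more logical data. The 4TB of SSD per node used below is a hypothetical figure.

```python
# Rough illustration: logical data that fits in a node's SSD tier under
# RF2 copies (2x overhead) vs EC-X'd write-cold data (~1.25x overhead).
# The 4TB SSD per node figure is hypothetical.
ssd_raw_tb = 4.0
print(f"RF2 copies:   {ssd_raw_tb / 2.0:.2f} TB of logical data per node in SSD")
print(f"EC-X extents: {ssd_raw_tb / 1.25:.2f} TB of logical data per node in SSD")
```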

ecxonoff

As previously discussed, NDSF supports different containers using different combinations of data reduction all within the same NDSF cluster to maximise efficiencies and eliminate unnecessary silos.

Summary:

Nutanix provides multiple technologies to minimise the data being stored on the distributed storage fabric while giving customers the flexibility to enable/disable and tune data reduction settings to suit different data profiles all within the same NDSF cluster.

Remember, “one size does not fit all”, so it is important for the storage layer to be able to treat your workloads differently based on their individual requirements.

Related Articles:

Nutanix – Improving Resiliency of Large Clusters with Erasure Coding (EC-X)

As cluster sizes increase, it is important to understand that the chance of multiple concurrent failures also increases, and to architect solutions to ensure resiliency is maintained.

Because scalability is one of the many strengths of the Nutanix Distributed Storage Fabric, Nutanix supports multiple data protection levels (RF2 and RF3) to ensure resiliency can be scaled with cluster size.

However, using RF3 reduces the usable capacity to approximately 33% of the formatted capacity of the drives within the cluster, which means it is sometimes considered undesirable.

But because some customers require the ability to tolerate multiple concurrent node failures without the chance of data loss or unavailability, RF3 has been required.

Enter Nutanix Erasure Coding (EC-X)!

Now let's say you have a 32-node cluster where each node has 10TB RAW.

With RF3 we would have approximately 3.33TB usable per node, for a total of 106.56TB in the cluster.

With EC-X enabled (assuming EC-X has been applied to all data), the usable capacity would DOUBLE to approximately 6.66TB per node and 213.12TB for the cluster.
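The arithmetic is easy to check. The short Python sketch below reproduces those numbers, using the same simplifying assumption as the example (EC-X applied to all data, roughly doubling usable capacity compared to plain RF3):

```python
# Reproduce the example: 32 nodes, 10TB RAW per node, RF3 vs RF3 + EC-X.
nodes = 32
raw_per_node_tb = 10.0

rf3_per_node = round(raw_per_node_tb / 3, 2)  # three copies of all data -> ~3.33TB usable
ecx_per_node = rf3_per_node * 2               # assumes EC-X on all data roughly doubles usable capacity

print(f"RF3:        {rf3_per_node} TB/node, {rf3_per_node * nodes:.2f} TB usable in the cluster")
print(f"RF3 + EC-X: {ecx_per_node} TB/node, {ecx_per_node * nodes:.2f} TB usable in the cluster")
```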

Here’s how it works.

For RF3, the Nutanix Distributed Storage Fabric writes and maintains three copies of each piece of data. The diagram below shows three copies of data “A” and “B”.

RF3

The diagram below is a simplified example of what the Nutanix Distributed Storage Fabric looks like once EC-X is applied to RF3 data.

RF3plusECX

As you can see, we can now store twice as much data as RF3 in the same RAW capacity while still having dual parity. As a result, using RF3 + EC-X gives customers with large clusters MORE usable capacity than RF2 (~50% of RAW) while providing dual parity (which allows the loss of two nodes without data loss/unavailability).
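To see why parity lets you keep more usable capacity than full copies, here is a deliberately simplified single-parity XOR example in Python. Treat it purely as an illustration of the principle: EC-X itself uses dual parity for the RF3 case shown above, and typically larger strips.

```python
# Toy single-parity illustration of the erasure coding principle.
# (NOT EC-X itself, which uses dual parity for RF3-equivalent protection.)
def xor_blocks(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Two data blocks "A" and "B" plus one parity block, instead of full extra copies.
block_a = b"AAAA"
block_b = b"BBBB"
parity = xor_blocks(block_a, block_b)

# Lose block_a (e.g. a node failure): rebuild it from the surviving block and the parity.
rebuilt_a = xor_blocks(block_b, parity)
assert rebuilt_a == block_a

# Storage used: 3 blocks for 2 blocks of data (1.5x overhead),
# versus 4 blocks under RF2 (2x) or 6 blocks under RF3 (3x).
print("Rebuilt block A:", rebuilt_a)
```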

Not bad for a software only upgrade!

So what do I recommend for customers running 32-node or larger clusters?

1. For customers already running RF3, consider enabling EC-X.
2. For customers running RF2, consider enabling RF3 and EC-X.

What’s .NEXT? – Erasure Coding!

Up to now, Nutanix has used a concept known as “Replication Factor” or “RF” to provide storage-layer data protection, as opposed to older RAID technologies.

RF allows customers to configure either 2 or 3 copies of data depending on how critical the data is.

When using RF2, the usable capacity is 50% of RAW (RAW divided by 2).

When using RF3, the usable capacity is approximately 33% of RAW (RAW divided by 3).

While these sound like large overheads, in reality they are comparable to traditional SAN/NAS deployments, as explained in the two-part post: Calculating Actual Usable capacity? It’s not as simple as you might think!

But enough on existing features, let's talk about an exciting new feature: Erasure Coding!

Erasure coding (EC) is a technology which significantly increases the usable capacity in a Nutanix environment compared to RF2.

The overhead for EC depends on the cluster size, but for clusters of 6 nodes or more it is only 1.25x, compared to 2x for RF2 and 3x for RF3.

For clusters of 3 to 4 nodes the overhead is 1.5x, and for clusters of 5 nodes it is 1.33x.

The following shows a comparison between RF2 and EC for various cluster sizes.

ErasureCoding

As you can see, the usable capacity is significantly increased when using Erasure Coding.
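To put those overhead figures into concrete numbers, here is a quick Python sketch using the factors quoted above (1.5x for 3-4 nodes, 1.33x for 5 nodes, 1.25x for 6 or more) and a hypothetical 10TB RAW per node:

```python
# Usable capacity under RF2 vs Erasure Coding for various cluster sizes,
# using the overhead factors quoted above. 10TB RAW per node is hypothetical.
RAW_PER_NODE_TB = 10.0

def ec_overhead(nodes: int) -> float:
    if nodes < 3:
        raise ValueError("Erasure coding needs at least 3 nodes")
    if nodes <= 4:
        return 1.5
    if nodes == 5:
        return 1.33
    return 1.25  # 6 nodes or more

print(f"{'Nodes':>5} {'RF2 usable (TB)':>16} {'EC usable (TB)':>15}")
for nodes in (3, 4, 5, 6, 8, 16, 32):
    raw = nodes * RAW_PER_NODE_TB
    print(f"{nodes:>5} {raw / 2.0:>16.1f} {raw / ec_overhead(nodes):>15.1f}")
```

For a 6-node cluster that works out to 48TB usable with EC versus 30TB with RF2, which is where the “up to 60% increase” figure mentioned below comes from.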

Now for more good news: in line with the Nutanix Uncompromisingly Simple philosophy, Erasure Coding can be enabled on existing Nutanix containers on the fly, without downtime or the requirement to migrate data.

This means that with a simple one-click upgrade to NOS 4.5, customers can get up to a 60% increase in usable capacity in addition to existing data reduction savings, e.g. Compression.

So there you have it: more usable capacity for Nutanix customers with a non-disruptive, one-click software upgrade… (you're welcome!).

For customers considering Nutanix, your cost per GB just dropped significantly!

Want more? Check out how to scale storage capacity separately from compute with Nutanix!

Related Articles:

1. Nutanix Erasure Coding (EC-X) Deep Dive