What is the performance impact & overheads of Inline Compression on Nutanix?

I’m frequently asked about Nutanix data reduction capabilities such as Deduplication, Erasure Coding and Compression, and one of the most common questions (especially in a competitive situation) is:

“What is the performance impact and the overhead of Inline Compression on Nutanix?”

The short answer is that the pros outweigh the cons, and this has been true for as long as I can remember with the Nutanix platform.

I have been testing various applications, node types, cluster sizes and configurations, and thought I would share some data on the overheads and performance impact of in-line compression, which is what Nutanix (and I) recommend for most deployments, including business critical applications such as Oracle, MS SQL and MS Exchange.

In this case I was testing storage performance for MS Exchange using Jetstress.

Now, without going into the exact configuration of the environment (to avoid competitors’ FUD), the test was simple. I created a Windows Server 2012 VM and configured Jetstress, then performed 3 x 15min runs, each of which ran a database checksum on completion.

Following those 3 runs, I enabled in-line compression and repeated the same 3 tests.
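
For anyone wanting to script this step rather than use the PRISM UI, the sketch below is one way to switch a container to inline compression via the Prism v2.0 REST API. The Prism address, credentials, container name, endpoint path and field names (compression_enabled, compression_delay_in_secs) are my assumptions for illustration, so verify them against the API reference for your AOS version.

```python
# Rough sketch: enable inline compression on a storage container via the
# Prism v2.0 REST API. The endpoint path and field names below are
# assumptions -- check them against the API docs for your AOS version.
import requests

PRISM = "https://prism.example.local:9440"   # hypothetical Prism address
AUTH = ("admin", "password")                 # use proper credential handling

def enable_inline_compression(container_name: str) -> None:
    base = f"{PRISM}/api/nutanix/v2.0/storage_containers"
    # verify=False only because lab Prism instances often use self-signed certs
    containers = requests.get(base, auth=AUTH, verify=False).json()["entities"]
    ctr = next(c for c in containers if c["name"] == container_name)

    # A compression delay of 0 seconds means "inline" (compress on write);
    # a non-zero delay would make it post-process compression instead.
    ctr["compression_enabled"] = True
    ctr["compression_delay_in_secs"] = 0
    requests.put(base, json=ctr, auth=AUTH, verify=False).raise_for_status()

enable_inline_compression("Exchange-Jetstress")   # hypothetical container name
```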

The below chart is a screenshot from the Nutanix PRISM HTML 5 UI showing the cluster-wide IOPS, latency and throughput along with the Controller VM (CVM) CPU utilisation.

[Image: Prism performance summary for the six Jetstress runs]

As we can see, the 6 performance runs are very similar across all metrics, including CVM CPU utilisation. The below table shows each run, including database read latency and log write latency, which are the two key performance metrics for MS Exchange Jetstress testing.

[Image: Jetstress results table, with and without inline compression]

Note: The performance numbers above are not the peak or best performance Nutanix can deliver; they are simply the results from one of the many test scenarios I ran.

We can see the delta between the no compression and inline compression runs is almost zero. This test shows that while inline data reduction inevitably adds overhead to the I/O path, that overhead does not necessarily translate into slower performance for the application.
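
For anyone who wants to quantify “almost zero” from their own Jetstress reports, the arithmetic is trivial; the sketch below shows it with placeholder latency values rather than the actual results from the table above.

```python
# Sketch: percentage delta between "no compression" and "inline compression"
# runs. The values below are placeholders -- substitute the database read
# latency / log write latency reported by your own Jetstress runs.
def pct_delta(baseline: float, test: float) -> float:
    """Relative change of `test` vs `baseline`, as a percentage."""
    return (test - baseline) / baseline * 100.0

baseline_db_read_ms = 10.0   # placeholder: avg DB read latency, no compression
inline_db_read_ms = 10.2     # placeholder: avg DB read latency, inline compression

print(f"DB read latency delta: {pct_delta(baseline_db_read_ms, inline_db_read_ms):+.1f}%")
```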

In this case, Nutanix in-line compression is so efficient that customers can enjoy excellent data efficiencies for applications like MS Exchange with virtually no impact on performance and no additional CPU overhead on the CVM.

Oh and all of this performance on Acropolis Hypervisor (AHV)!

vSphere | PVSCSI Adapters & striped/spanned NTFS volumes

A little while ago I wrote a post titled “Splitting SQL datafiles across multiple VMDKs for optimal VM performance” where I talked about how SQL databases can be split with minimal/no interruption to production to give better performance by spreading the IO load across multiple PVSCSI adapters and virtual machine disks (VMDKs).

In a follow up post titled “SQL & Exchange performance in a Virtual Machine” I mentioned the above article and concluded:

If the DBA is not confident doing this, you can also just add multiple virtual disks (connected via multiple PVSCSI controllers) and create a stripe in guest (via Disk Manager) and this will also give you the benefit of multiple vdisks.

Both posts have been very popular, and one of the comments I received via Twitter was that creating striped or spanned NTFS volumes in-guest was not supported by VMware when using PVSCSI.

This is stated in VMware KB “Configuring disks to use VMware Paravirtual SCSI (PVSCSI) adapters (1010398)” as shown below:

[Image: Excerpt from VMware KB 1010398 regarding spanned volumes]

Prior to writing both posts I was aware of this KB, but having comprehensively tested this numerous times on different platforms over the years (and more recently on Nutanix), and after liaising with many VMware experts (including several VCDXs), I concluded that this was either a legacy recommendation which needed to be updated, or simply a mistake by the author of the KB (which can happen, as we’re all human).

As such, I followed up with VMware by raising an SR on August 14th 2016.

After following up several times I had given up waiting for an answer, but I am pleased to say that today (2nd November 2016) I finally got a reply.

[Image: VMware GSS response to the SR regarding PVSCSI and spanned/striped volumes]

In summary, spanned (and striped) volumes, the latter of which were not mentioned in the KB, are supported and, to quote VMware GSS, “will have no issues”.

One strong recommendation I have is: DO NOT use VMDKs hosted in different failure domains (e.g. different LUNs or SAN/NAS arrays) in the same spanned/striped volume, as this increases the size of the failure domain and the chances of the volume going offline.

So there you have it: if you need to increase performance for an application and you are not confident splitting databases at the application level, you can (typically) get increased IO performance by using striped volumes in-guest, which are quick and easy to set up. The only downside is that you will need to take your DB offline to copy it to the new volume before bringing it back online.
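
If you would rather script the stripe than click through Disk Manager, the sketch below drives diskpart from Python. The disk numbers, drive letter, volume label and 64K allocation unit are illustrative assumptions; verify the disk numbers with “list disk” first, and remember diskpart converts the selected disks to dynamic disks, so test this in a lab before touching production.

```python
# Sketch: create an in-guest striped NTFS volume across several VMDKs
# (each ideally presented on its own PVSCSI adapter) by scripting diskpart.
# Disk numbers, drive letter and label are hypothetical -- confirm them with
# "list disk" first. Requires an elevated (administrator) prompt.
import subprocess
import tempfile

DISKS = [1, 2, 3, 4]     # hypothetical disk numbers of the new VMDKs
LETTER = "S"             # hypothetical drive letter for the stripe

script_lines = []
for d in DISKS:
    # Each member disk must be dynamic before it can join a striped volume.
    script_lines += [f"select disk {d}", "convert dynamic"]
script_lines += [
    f"create volume stripe disk={','.join(str(d) for d in DISKS)}",
    # 64K allocation unit is a common choice for SQL/Exchange data volumes.
    "format fs=ntfs unit=64k quick label=SQLData",
    f"assign letter={LETTER}",
]

with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("\n".join(script_lines))
    script_path = f.name

# diskpart /s runs the script non-interactively.
subprocess.run(["diskpart", "/s", script_path], check=True)
```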

Hope this puts people’s minds at ease about striped volumes with PVSCSI.

Nutanix AHV/AOS Functionality – Removing nodes

A Nutanix ADSF (Acropolis Distributed Storage Fabric) cluster is designed to live forever, meaning that as new nodes are added and older nodes removed, the cluster remains online and, critically, in a fully resilient state at all times.

While this might not sound all that critical, it avoids problems which have plagued legacy (and even many modern) datacenter products, where forklift upgrades/replacements are not only complex, high risk and time consuming, but typically also reduce the resiliency of the platform throughout the process.

A common example of reduced resiliency is where one (of two) SAN/NAS controllers is taken offline during a forklift storage controller upgrade, meaning a single failure can cause the storage to go offline.

Nutanix has now been shipping product for around 5 years, so we have had many customers go through hardware refresh cycles, and many more are about to embark on a HW refresh.

I thought I would quickly demonstrate how easy it is to remove an old node from a cluster and ensure existing and prospective Nutanix customers have the facts about the node removal process.

Firstly, let’s look at the environment the demonstration is performed on.

We have an 8-node AHV environment with a mix of NX3050 and NX6050 nodes spread over 3 blocks, as shown in the Nutanix PRISM UI (below).

[Image: Prism hardware view of the 8-node cluster]

To remove a host, all we need to do is go to the hardware tab in PRISM, click the host we want to remove and select Remove Host as shown below.

[Image: Remove Host option in the Prism Hardware tab]

No preparation tasks are required at all, which also means less planning and change control. Once you select Remove Host, the host enters maintenance mode and begins the tasks required to remove the node, as shown below.

[Image: Node removal tasks in progress]

As you can see, Acropolis OS (AOS) removes each individual disk from the cluster before taking the node out of the cluster. This means the configured Resiliency Factor (RF) remains in compliance throughout, ensuring that data is still available even in the event of a drive or node failure during the process. This can be observed on the PRISM Home screen in the Data Resiliency view shown below.

[Image: Prism Data Resiliency status view]

This process is handled by the Curator function of AOS. Because data is distributed throughout all nodes within the cluster, the process is both lower impact than traditional RAID-based solutions (or solutions using RAID plus replication) and faster, because all nodes, and therefore all CVMs, SSDs and HDDs, participate in the process. Nutanix ADSF does not mirror or replicate data from one node to a single partner node, but to and from all nodes, which eliminates the potential bottleneck of a single node.

The following shows the speed at which the Acropolis Distributed Storage Fabric (ADSF) performs the data migration, even when the majority of data resides on the HDD tier (as in this example).

[Image: Storage pool throughput during node removal]

For a cluster with 20 x 1TB and 20 x 4TB SATA spindles (a total of 100TB of SATA) and just 6.4TB of SSD (approx 6.5%), the node removal rate, which reached >830MBps, is quite impressive, since most of the extents (data) which needed to be re-replicated throughout the cluster were retrieved from the SATA tier.

The rate at which a node can be removed will vary depending on the front-end I/O, node types and cluster size, with larger clusters able to remove nodes faster due to more available controllers (CVMs) and, importantly, more choice of source and destination for extents.
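
To put that throughput into perspective, the back-of-the-envelope sketch below estimates how long re-protecting a given amount of data would take at a sustained rate; the 10TB figure is purely illustrative and is not the amount of data that was on the node in this demonstration.

```python
# Sketch: back-of-the-envelope time to re-replicate a node's data at a
# sustained migration rate. The 10 TB figure is illustrative only.
def migration_hours(data_tb: float, rate_mbps: float) -> float:
    """Hours to move `data_tb` terabytes at `rate_mbps` megabytes per second."""
    data_mb = data_tb * 1_000_000       # TB -> MB (decimal units)
    return data_mb / rate_mbps / 3600   # seconds -> hours

print(f"{migration_hours(10, 830):.1f} hours")   # ~3.3 hours for 10 TB at 830 MB/s
```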

The process can be monitored via the Tasks view (shown earlier) or at a very granular level such as per disk (SSD or HDD).
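
If you prefer to track this programmatically rather than through the UI, a rough sketch along the following lines can poll the Prism REST API. The Prism address, credentials, the v1 progress_monitors endpoint and its field names are assumptions on my part, so adjust them for your AOS version.

```python
# Sketch: poll node-removal progress via the Prism REST API instead of the UI.
# The v1 progress_monitors endpoint and its field names are assumptions --
# verify them against the API reference for your AOS version.
import time
import requests

PRISM = "https://prism.example.local:9440"   # hypothetical Prism address
AUTH = ("admin", "password")

def watch_tasks(poll_secs: int = 30) -> None:
    url = f"{PRISM}/PrismGateway/services/rest/v1/progress_monitors"
    while True:
        # verify=False only for lab Prism instances with self-signed certs
        tasks = requests.get(url, auth=AUTH, verify=False).json().get("entities", [])
        active = [t for t in tasks if t.get("percentageCompleted", 100) < 100]
        for t in active:
            print(f"{t.get('operation')}: {t.get('percentageCompleted')}%")
        if not active:
            break
        time.sleep(poll_secs)

watch_tasks()
```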

The below shows the status of the disk as Migrating Data, and it also shows the drive had a significant amount of data on it, as this was not an empty-cluster demonstration. In fact, this screenshot was taken about halfway through the node removal process.

[Image: Per-disk status showing "Migrating Data"]

So, many of you may be wondering what the CVM CPU utilisation is throughout this process. During the process I took the following screenshot showing the eight Controller VMs, their vCPU configuration (8 vCPUs each) and their CPU utilisation.

[Image: CVM CPU utilisation during node removal]

As we can see, the utilisation ranges from just 6% through to 16%, with an average of just under 10%. It should be noted these nodes use Intel Ivy Bridge processors, so with the latest generation Intel Broadwell chipsets the process would consume a lower percentage of CPU and complete faster (due to higher per-core performance) than on this 3-year-old equipment.

Note: The CVM is not just doing IO processing; it provides the full AHV/AOS management stack, which makes the fact that the CVM is using under 10% CPU even more impressive.

The Remove Host task also resets the configuration of the Controller VM (CVM) back to default, which ensures the node can be quickly and easily added to a new or existing cluster.

The end result is a fully functional 7 node cluster as shown below.

[Image: Prism view of the resulting 7-node cluster]

Summary:

Node removal from a Nutanix cluster (regardless of hypervisor) is a 1-Click, non-disruptive operation which maintains cluster resiliency at all times while remaining a fast and low impact process.

Related Articles:

1. VMware you’re full of it (FUD) : Nutanix CVM/AHV & vSphere/VSAN overheads

2. Why Nutanix Acropolis hypervisor (AHV) is the next generation hypervisor

3. Think HCI is not an ideal way to run mission-critical x86 workloads? Think Again!