Nutanix AOS 5.5 delivers 1M read IOPS from a single VM, but what about 70/30 read/write?

I recently wrote "Nutanix AOS 5.5 delivers 1M IOPS from a single VM, but what happens when you vMotion?", which showed the impact of a vMotion was around -10% for a period of approximately 3 seconds before read performance returned to pre-migration levels.

In this post I will address the question of single-VM performance with a more realistic 70% read / 30% write IO profile, performed using an 8k IO size, and show what the impact is during and after a live migration.
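For readers who want a rough idea of how such an IO profile can be generated themselves, below is a minimal sketch that drives fio with a 70/30 random read/write mix at an 8k block size from a Linux guest and reports the achieved IOPS. The post does not state the exact tool or parameters used for these results, so the device path, queue depth, job count and runtime below are illustrative placeholders only.

```python
# Minimal sketch: generate a 70/30 random read/write, 8k block size workload with fio.
# The target device, queue depth, job count and runtime are placeholders, not the
# exact parameters used for the results discussed in this post.
import json
import subprocess

FIO_ARGS = [
    "fio",
    "--name=randrw_70_30",
    "--filename=/dev/sdb",      # placeholder test device - data on it will be destroyed
    "--rw=randrw",
    "--rwmixread=70",           # 70% reads / 30% writes
    "--bs=8k",
    "--ioengine=libaio",
    "--direct=1",
    "--iodepth=32",
    "--numjobs=8",
    "--time_based",
    "--runtime=300",
    "--group_reporting",
    "--output-format=json",
]

result = subprocess.run(FIO_ARGS, capture_output=True, text=True, check=True)
stats = json.loads(result.stdout)["jobs"][0]
print(f"Read IOPS:  {stats['read']['iops']:,.0f}")
print(f"Write IOPS: {stats['write']['iops']:,.0f}")
```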

While not surprising to Nutanix customers, the result shows a starting baseline of 436k random read and 187k random write IOPS. Immediately following the migration, performance dropped to 359k read and 164k write IOPS, before exceeding the original baseline at 446k read and 192k write IOPS within a few seconds.

So in comparison to 100% random read, which achieved just over 1 million 8k IOPS, the 70/30 mix achieves in the ballpark of 600k combined IOPS (446k read + 192k write ≈ 638k), which is very respectable. Not bad for a platform which Nutanix competitors continue to describe as only being good for VDI. Considering even the largest array from a leading all-flash SAN vendor only advertises random read performance in the hundreds of thousands of IOPS, this shows Nutanix's unique hyper-converged architecture can achieve higher performance from a single VM than a monolithic all-flash array.

This shows that with the unique Nutanix Acropolis Distributed Storage Fabric, very high performance at low latency can be achieved with real-world IO patterns, even during and after live migrating the virtual machine across a distributed platform.

This result is further evidence of the efficiency of the Nutanix Acropolis Hypervisor, AHV (which is included at no additional charge with AOS), as well as of the IO path running in user space (not the much-hyped in-kernel). This is in part thanks to AHV Turbo Mode, announced at .NEXT 2017 in Washington, which optimised the IO path. In addition, these levels of performance can be sustained even when using data protection features such as snapshots, as shown in a recent post I wrote about the Nutanix X-Ray tool, where I used the Snapshot Impact scenario to compare Nutanix AHV with a leading hypervisor and SDS product. If you don't have time to read that post, in short: the competitor's performance degraded as snapshots were taken, while Nutanix AHV's performance remained consistent, which is essential for real-world scenarios, especially business critical applications.

With Nutanix's unique ability to scale out performance using storage-only nodes, even higher performance can be achieved without modification to the virtual machine or applications, which gives Nutanix a further advantage over the competition.

Nutanix data locality ensures optimal performance by ensuring new data is always written local to the VM, cold data can remain remote indefinitely, and only hot data is migrated locally if/when required, at 1MB granularity. This is intelligent data locality, not the brute-force locality it is frequently mistaken to be.
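To illustrate the idea only (this is a toy model, not Nutanix's actual migration logic), the sketch below tracks remote read counts per 1MB extent: writes always land locally, cold remote extents are never touched, and a remote extent is only "migrated" locally once it has been read repeatedly.

```python
# Conceptual illustration only: NOT Nutanix's actual implementation, just a toy
# model of "writes land locally, cold data stays remote, hot data migrates at
# 1MB granularity". The threshold is arbitrary and purely for demonstration.
EXTENT_SIZE = 1 * 1024 * 1024   # 1MB granularity
HOT_THRESHOLD = 3               # arbitrary "hot" threshold for this example

class ExtentMap:
    def __init__(self):
        self.local = set()        # extents stored on the VM's local node
        self.remote_reads = {}    # remote extent id -> read count

    def write(self, offset, length):
        # New writes always land on the local node.
        for ext in range(offset // EXTENT_SIZE, (offset + length - 1) // EXTENT_SIZE + 1):
            self.local.add(ext)

    def read(self, offset, length):
        for ext in range(offset // EXTENT_SIZE, (offset + length - 1) // EXTENT_SIZE + 1):
            if ext in self.local:
                continue  # already local, nothing to do
            # Count remote reads; only demonstrably hot extents get migrated.
            self.remote_reads[ext] = self.remote_reads.get(ext, 0) + 1
            if self.remote_reads[ext] >= HOT_THRESHOLD:
                self.local.add(ext)           # "migrate" the 1MB extent locally
                del self.remote_reads[ext]
```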


What’s .NEXT 2017 – AHV Turbo Mode

Back in 2015 I wrote a series titled "Why Nutanix Acropolis Hypervisor (AHV) is the next generation hypervisor", which covered many of the reasons why AHV was, and would become, a force to be reckoned with.

In short, AHV is the only purpose built hypervisor for hyper-converged infrastructure (HCI) and it has continued to evolve in terms of functionality and maturity while becoming a popular choice for customers.

How popular you ask? Nutanix officially reported 23% adoption as a percentage of nodes sold in our recent third quarter fiscal year 2017 financial highlights.

Over the last couple of years I have personally worked with numerous customers who have adopted AHV, especially for business critical applications such as MS SQL and MS Exchange.

One such example is Shinsegae, a major retailer running 50,000 MS Exchange mailboxes on Nutanix with AHV as the hypervisor. Shinsegae also runs MS SQL workloads on the same platform, which has now become its standard platform for all workloads.

This is just one example of AHV proven in the field and at scale to have the functionality, resiliency and performance to support business critical workloads.

But at Nutanix we’re always striving to deliver more value to our customers, and one area where there is a lot of confusion and misinformation is around the efficiency of the storage I/O path for Nutanix.

The Nutanix Controller VM (CVM) runs on top of multiple hypervisors and delivers excellent performance, but there is always room for improvement. With our extensive experience with in-kernel and virtual machine based storage solutions, we quickly learned that the biggest bottleneck is the hypervisor itself.

[Figure: Traditional hypervisor storage I/O bottleneck]

With technology such as NVMe becoming mainstream and 3D XPoint not far behind, we looked for a way to give customers the best value from these premium storage technologies.

That’s where AHV Turbo mode comes into play.

[Figure: AHV Turbo Mode]

AHV Turbo mode is a highly optimised I/O path (shortened and widened) between the User VM (UVM) and Nutanix stargate (I/O engine).

These optimisations have been achieved by moving the I/O path in-kernel.

V

V

V

V

V

V

V

V

V

V

V

Just kidding! In-kernel being better for performance is just a myth. Nutanix has achieved major performance improvements by doing the heavy lifting of the I/O data path in user space, which is the opposite of the much-hyped "in-kernel".

The below diagram shows that the UVM's I/O path now goes via Frodo (a.k.a. Turbo Mode), which runs in user space (not in-kernel), and on to stargate within the Controller VM.

[Figure: Frodo (Turbo Mode) data path from the UVM to stargate]

Another benefit of AHV and Turbo mode is that it eliminates the requirement for administrators to configure multiple PVSCSI adapters and spread virtual disks across those controllers. When adding virtual disks to an AHV virtual machine, disks automatically benefit from Nutanix SCSI and block multi-queue, ensuring enhanced I/O performance for both reads and writes.

The multi-queue I/O flow is handled by multiple Frodo (Turbo mode) threads and passed on to stargate.

[Figure: Frodo multi-queue I/O flow]

As the above diagram shows, Nutanix with Turbo mode eliminates the bottlenecks associated with legacy hypervisors. One such example is VMFS datastores, which required VAAI Atomic Test and Set (ATS) to minimise the impact of locking as the number of VMs per datastore increased (e.g. >25). With AHV and Turbo mode, every vdisk has always had its own queue (not one per datastore or container), and Frodo enhances this by adding a per-vCPU queue at the virtual controller level.
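As a conceptual illustration of the per-vCPU queue idea (not the actual Frodo implementation), the sketch below gives each vCPU its own request queue and worker thread, so I/O submitted on different vCPUs never serialises behind a single shared queue.

```python
# Conceptual sketch of per-vCPU queueing (not the actual Frodo implementation):
# each vCPU gets its own request queue and worker, so requests from different
# vCPUs do not contend on one shared queue.
import queue
import threading

NUM_VCPUS = 4

def io_worker(q: "queue.Queue"):
    while True:
        request = q.get()
        if request is None:          # shutdown signal
            break
        # In a real system this would be handed to the I/O engine (e.g. stargate);
        # here we simply acknowledge the request.
        print(f"handled {request}")
        q.task_done()

# One queue + worker thread per vCPU.
queues = [queue.Queue() for _ in range(NUM_VCPUS)]
workers = [threading.Thread(target=io_worker, args=(q,), daemon=True) for q in queues]
for w in workers:
    w.start()

def submit(vcpu_id: int, request: str):
    # A request submitted on vCPU N goes straight onto queue N.
    queues[vcpu_id].put(request)

submit(0, "read vdisk0 offset=0 len=8192")
submit(3, "write vdisk0 offset=65536 len=8192")
for q in queues:
    q.join()
```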

How much of a performance improvement, you ask? I ran a quick test which showed impressive improvements even on a more than four-year-old IVB NX3450, which has only 2 x SATA SSDs per node, and with the in-memory read cache disabled (i.e. no reads from RAM).

A quick summary of the findings:

  1. 25% lower CPU usage for similar sequential write performance (2,929MBps vs 2,964MBps)
  2. 27.5% higher sequential read performance (9,512MBps vs 7,207MBps)
  3. A 62.52% increase in random read IOPS (510,121 vs 261,265)
  4. A 33.75% increase in random write IOPS (336,326 vs 239,193)

So with Turbo Mode, Nutanix is using less CPU and RAM to drive higher IOPS & throughput and doing so in user space.

Intel published “Code Sample: Hello World with Storage Performance Development Kit and NVMe Driver” which states “When comparing the SPDK userspace NVMe driver to an approach using the Linux Kernel, the overhead latency is up to 10x lower”.

This is just one of many examples which show that user space is clearly not the bottleneck some people/vendors have tried to claim it is with the "in-kernel is faster" nonsense I have previously written about.

With Turbo mode, AHV is the highest performing (throughput/IOPS) and lowest latency hypervisor supported by Nutanix!

But wait, there's more! Not only is AHV now the highest performing hypervisor, it's also used by our largest customer, who has more than 1,750 nodes running 100% AHV!

[Figure: Largest Nutanix customer node count, running 100% AHV]

What is the performance impact & overheads of Inline Compression on Nutanix?

I'm frequently asked about Nutanix data reduction capabilities such as Deduplication, Erasure Coding and Compression, and one of the most common questions (especially in a competitive situation) is:

“What is the performance impact and the overhead of Inline Compression on Nutanix?”

The short answer is, the pros outweigh the cons and this has been true for as long as I can remember with the Nutanix platform.

I have been testing various applications, node types, cluster sizes and configurations, and thought I would share some data on the overheads and performance impact of in-line compression, which is what Nutanix (and I) recommend for most deployments, including business critical applications such as Oracle, MS SQL and MS Exchange.

In this case I was testing storage performance for MS Exchange using Jetstress.

Without going into the exact configuration of the environment (to avoid competitor FUD), the test was simple. I created a Windows 2012 VM and configured Jetstress, then performed 3 x 15-minute runs, each of which completed a database checksum at the end.

Following the 3 runs, I enabled In-line compression and repeated the same 3 tests.
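For anyone who wants to tabulate similar runs, below is a minimal sketch that averages the two key Jetstress metrics (database read latency and log write latency) per configuration from a CSV of run results. The file name and column names are assumptions for illustration, not an actual Jetstress output format; Jetstress reports these counters in its own result files, so the parsing would need to be adapted.

```python
# Minimal sketch for comparing runs with and without inline compression.
# "jetstress_runs.csv" and its column names are hypothetical - adapt the parsing
# to however you export your Jetstress results.
import csv
from collections import defaultdict

def summarise(path: str):
    sums = defaultdict(lambda: {"db_read_ms": 0.0, "log_write_ms": 0.0, "runs": 0})
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            cfg = row["config"]   # e.g. "no_compression" / "inline_compression"
            sums[cfg]["db_read_ms"] += float(row["db_read_latency_ms"])
            sums[cfg]["log_write_ms"] += float(row["log_write_latency_ms"])
            sums[cfg]["runs"] += 1
    for cfg, s in sums.items():
        n = s["runs"]
        print(f"{cfg}: avg DB read {s['db_read_ms']/n:.2f} ms, "
              f"avg log write {s['log_write_ms']/n:.2f} ms over {n} runs")

summarise("jetstress_runs.csv")
```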

The below chart is a screenshot from the Nutanix PRISM HTML 5 UI showing the Cluster wide IOPS, latency and throughput along with the Controller VM CPU utilisation.

[Figure: Prism performance summary showing cluster-wide IOPS, latency, throughput and CVM CPU utilisation across the 6 runs]

As we can see, the 6 performance runs are very similar across all metrics, including the CVM CPU utilisation. The below table shows each run, including database read latency and log write latency, which are the two key performance metrics for MS Exchange Jetstress testing.

[Figure: Jetstress results per run, with and without inline compression]

Note: The performance numbers above are not the peak or best performance Nutanix can deliver, they are just one of the many test scenarios I ran.

We can see the delta between the No Compression and Inline Compression runs is almost zero. This test shows that while inline data reduction does add overhead to the I/O path, that does not necessarily translate into slower performance for the application.

In this case, Nutanix in-line compression is so efficient that customers can enjoy excellent data efficiencies for applications like MS Exchange, with virtually no impact on performance and no additional CPU overhead on the CVM.

Oh, and all of this performance is on the Acropolis Hypervisor (AHV)!