Benchmark(et)ing Nonsense IOPS Comparisons, if you insist – Nutanix AOS 4.6 outperforms VSAN 6.2

As many of you know, I’ve taken a stand with many other storage professionals to try to educate the industry that peak performance is vastly different to real world performance. I covered this in a post titled: Peak Performance vs Real World Performance.

I have also given a specific example of Peak Performance vs Real World Performance with a Business Critical Application (MS Exchange), where I demonstrate that the first and most significant constraining factor for Exchange performance is compute (CPU/RAM), so achieving more IOPS is unnecessary to achieve the business outcome (which is supporting a given number of Exchange mailboxes/messages per day).

However, vendors (all of them) who offer products that provide storage, whether as a component (such as in HCI) or as a dedicated storage offering, continue to promote peak performance numbers. They do this because the industry as a whole has promoted, and continues to promote, these numbers as if they were relevant, with vendors trying to one-up each other with nonsense comparisons.

VMware and the EMC federation have made a lot of noise about In-Kernel delivering better performance than Software Defined Storage running within a VM, which is referred to by some as a VSA (Virtual Storage Appliance). At the same time, the same companies/people are recommending business critical applications (vBCA) be virtualized. This is a clear contradiction, as I explain in an article I wrote titled In-Kernel versus Virtual Storage Appliance, which in short concludes by saying:

…a high performance (1M+ IOPS) solution can be delivered either In-Kernel or via a VSA, it’s as simple as that. We are long past the days where a VM was a significant bottleneck (circa 2004 w/ ESX 2.x).

I stand by this statement, and the in-kernel vs VSA debate is another example of nonsense comparisons which have little/no relevance in the real world. I will now (reluctantly and quickly) cover off some marketing numbers before getting to the point of this post.

VMware VSAN 6.2

Firstly, congratulations to VMware on this release. I believe you now have a minimally viable product thanks to the introduction of software-based checksums, which are essential for any storage platform.

VMW Claim One: For the VSAN 6.2 release, “delivering over 6M IOPS with an all-flash architecture”

The basic math for a 64 node cluster = ~93,750 IOPS per node, but as I have seen this benchmark from Intel showing 6.7 million IOPS for a 64 node cluster, let’s give VMware the benefit of the doubt and assume it’s an even 7M IOPS, which equates to 109,375 IOPS per node.

Reference: VMware Virtual SAN Datasheet
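To make the arithmetic behind these per-node figures explicit, here is a trivial sketch using the cluster size and totals quoted above (the numbers are the marketing claims under discussion, not my own measurements):

```python
# Per-node IOPS derived from the quoted 64 node cluster totals
# (marketing claims discussed above, not my own measurements).
CLUSTER_NODES = 64

for label, cluster_iops in [
    ("VMware claim (>6M IOPS)", 6_000_000),
    ("Intel benchmark (6.7M IOPS)", 6_700_000),
    ("Benefit of the doubt (7M IOPS)", 7_000_000),
]:
    per_node = cluster_iops / CLUSTER_NODES
    print(f"{label}: ~{per_node:,.0f} IOPS per node")

# VMware claim (>6M IOPS): ~93,750 IOPS per node
# Intel benchmark (6.7M IOPS): ~104,688 IOPS per node
# Benefit of the doubt (7M IOPS): ~109,375 IOPS per node
```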

VMW Claim Two: Highest Performance >100K IOPS per node

The graphic below (pulled directly from VMware’s website) shows their performance claims of >100K IOPS per node and >6 Million IOPS per cluster.

Reference: Introducing you to the 4th Generation Virtual SAN

Now what about Nutanix Distributed Storage Fabric (NDSF) & Acropolis Operating System (AOS) 4.6?

We’re now at the point where the hardware is becoming the bottleneck, as we are saturating the performance of the physical Intel S3700 enterprise-grade solid state drives (SSDs) in many of our hybrid nodes. As such, we have moved on to performance testing of our NX-9460-G4 model, which has 4 nodes running Haswell CPUs and 6 x Intel S3700 SSDs per node, all in 2RU.

With AOS 4.6 running ESXi 6.0 on an NX-9460-G4 (4 x NX-9040-G4 nodes), Nutanix is seeing in excess of 150K IOPS per node, which is 600K IOPS per 2RU (Nutanix Block).

The graph below shows performance per node and how the solution scales in terms of performance up to a 4 node / 1 block solution, which fits within 2RU.

[Graph: AOS 4.6 performance per node, scaling up to a 4 node / 1 block (2RU) solution]

So Nutanix AOS 4.6 provides approx. 36% higher performance than VSAN 6.2.

(>150K IOPS per NX-9040-G4 node compared to <=110K IOPS for an all-flash VSAN 6.2 node)
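For transparency, the ~36% figure is simple arithmetic on the per-node numbers above; a quick sketch using 150K and 110K as stated:

```python
# Comparison using the per-node figures quoted above (illustrative arithmetic only).
nutanix_per_node = 150_000   # >150K IOPS per NX-9040-G4 node (AOS 4.6)
vsan_per_node = 110_000      # <=110K IOPS per all-flash VSAN 6.2 node

advantage_pct = (nutanix_per_node / vsan_per_node - 1) * 100
print(f"Per-node advantage: ~{advantage_pct:.0f}%")                # ~36%
print(f"Per 2RU block (4 nodes): {4 * nutanix_per_node:,} IOPS")   # 600,000 IOPS
```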

It should be noted the above Nutanix performance numbers have already been improved upon in upcoming releases currently going through performance engineering and QA, so this is far from the best you will see.


Enough with the nonsense marketing numbers! Let’s get to the point of the post:

These 4k 100% random read IOPS (and similar) tests are totally unrealistic.

Even assuming the 4k IOPS tests were realistic, to quote my previous article:

Peak performance is rarely a significant factor for a storage solution.

More importantly, SO WHAT if Vendor A (in this case Nutanix) has higher peak performance than Vendor B (in this case VSAN)!

What matters is customer business outcomes, not benchmark(eting)!


Wait a minute: the vendor with the higher performance is telling you peak performance doesn’t matter, instead of bragging about it and trying to make it sound important?

Yes, you are reading that correctly: no one should care who has the highest unrealistic benchmark!

I wrote Things to consider when choosing infrastructure a while back to highlight that choosing the “Best of Breed” product for every workload may not be a good overall strategy, as it will require management of multiple silos, which leads to inefficiency and increased costs.

The key point is that if you can meet all the customer requirements (e.g.: performance) with a standard platform while working within constraints such as budget, power, cooling, rack space and time to value, you’re doing yourself (or your customer) a disservice by not considering a standard platform for your workloads. So if Vendor X has 10% faster performance (even for your specific workload) than Vendor Y, but Vendor Y still meets your requirements, performance shouldn’t be a significant consideration when choosing a product.

Both VSAN and Nutanix are software defined storage, and I expect both will continue to rapidly improve performance through tuning done completely in software. If we were talking about a product which is dependent on offloading to hardware, then sure, performance comparisons would remain relevant for longer, but VSAN and Nutanix are both 100% software and can/do improve performance in software with every release.

In 3 months, VSAN might be slightly faster. Then 3 months later, Nutanix will overtake them again. In reality, peak performance rarely, if ever, impacts real world customer deployments, and with scale-out solutions it’s even less relevant because you can simply scale.

If a solution can’t scale, or only does so in 2-node mirror-type configurations, then considering peak performance is much more critical. I’d suggest that if you’re looking at this (legacy) style of product, you have bigger issues.

Not only does performance in the software defined storage world change rapidly, so does the performance of the underlying commodity hardware, such as CPUs and SSDs. This is why it’s important to consider products (like VSAN and Nutanix) that are not dependent on proprietary hardware, as hardware eventually becomes a constraint. This is why the world is moving towards software defined for storage, networking etc.

If more performance is required, the ability to add new nodes, form a heterogeneous cluster and distribute data evenly across the cluster (like NDSF does) is vastly more important than the peak IOPS difference between two products.

While you might think this blog post is a direct attack on HCI vendors, the same principle holds true for any hardware or storage vendor out there. It is only a matter of time before customers stop getting trapped in benchmark(et)ing wars. They will instead identify their real requirements and readily embrace the overall value of dramatically simpler on-premises infrastructure.

In my opinion, Nutanix is miles ahead of the competition in terms of value, flexibility, operational benefits, product maturity and market-leading customer service, all of which matter far more than peak performance (and Nutanix is the fastest anyway).

Summary:

  1. Focus on what matters and determine whether or not a solution delivers the required business outcomes. Hint: This is rarely just a matter of MOAR IOPS!
  2. Don’t waste your time in benchmark(et)ing wars or proof-of-concept bake-offs.
  3. Nutanix AOS 4.6 outperforms VSAN 6.2
  4. A VSA can outperform an in-kernel SDS product, so let’s put that in-kernel vs VSA nonsense to rest.
  5. Peak performance benchmarks still don’t matter even when the vendor I work for has the highest performance. (a.k.a. My opinion doesn’t change based on my employer’s current product capabilities.)
  6. Storage vendors ALL should stop with the peak IOPS nonsense marketing.
  7. Software-defined storage products like Nutanix and VSAN continue to rapidly improve performance, so comparisons are outdated soon after publication.
  8. Products dependent upon proprietary hardware are not the future.
  9. Put a high focus on the quality of vendor support.

Related Articles:

  1. Peak Performance vs Real World Performance
  2. Peak performance vs Real World – Exchange on Nutanix Acropolis Hypervisor (AHV)
  3. The Key to performance is Consistency
  4. MS Exchange Performance – Nutanix vs VSAN 6.0
  5. Scaling to 1 Million IOPS and beyond linearly!
  6. Things to consider when choosing infrastructure.

Think HCI is not an ideal way to run your mission-critical x86 workloads? Think again! – Part 1

I recently wrote a post called Fight the FUD: Nutanix scale limitations which corrected some misinformation VCE COO Todd Pavone stated about Nutanix scalability in the article COO: VCE converged infrastructure not affected by Dell-EMC.

In the same interview, Todd makes several comments (see quote below) which I can only trust to be accurate for VSPEX Blue, but as he refers more generally to hyper-converged systems, I have to disagree with many of the comments from a Nutanix perspective, and thought it would be good to discuss where I see Nutanix.

Where does VSPEX Blue fit into the portfolio?

Hyper-converged by definition is where you use software to find technology to manage what people like to call a commoditized infrastructure, where there is no external storage. So, the intelligence is in the software, and you don’t require the intelligence in the infrastructure. In the market, everyone has had an appliance, which is just a server with embedded storage or some marketed software, and ideal for edge locations or for single use cases. But you’re not going to put SAP and run your mission-critical business on an appliance. They have scaling challenges, right? You get to a certain number of nodes, and then the performance degrades; you have to then create another cluster, another cluster. It’s just not an ideal way to go run your mission-critical x86 workloads. [It’s] good for an edge, good for a simple form factors, good for single use cases or what I’ll call more simplified workloads.

In this post I will be specifically discussing the Nutanix HCI solution, and while I have experience with and opinions about other products in the market, I will let other vendors speak for themselves.

The following quotes are not in the order Todd mentioned them in the above interview; they have been grouped and ordered to avoid overlapping or repeated comments and to make this blog flow better (hopefully). As such, if any comments appear to be taken out of context, it is not my intention.

So let’s break down what Todd has said:

  • Todd: In the market, everyone has had an appliance, which is just a server with embedded storage or some marketed software, and ideal for edge locations or for single use cases.

I agree that hyper-converged systems such as Nutanix run on commodity servers with embedded storage. I also agree Nutanix is ideal for edge locations and can be successfully used for single use cases, but as my next response will show, I strongly disagree with any implication that Nutanix (as the market’s most innovative leader in HCI according to Gartner, with 52% market share according to IDC) is limited to edge or single use cases.

  • Todd: “It’s just not an ideal way to go run your mission-critical x86 workloads” & “But you’re not going to put SAP and run your mission-critical business on an appliance.”

Interestingly, Nutanix is the only certified HCI platform for SAP.

As an architect, when designing for mission critical workloads, I want a platform which can:

a) Start small and scale as required (for example, as vBCA demands increase)
b) Be highly resilient & self-heal automatically
c) Perform fully automated, non-disruptive (and low impact) maintenance
d) Be easy to manage / scale
e) Deliver the required levels of performance

In addition to the above, the fewer dependencies the better, as there is less to go wrong, less to troubleshoot, fewer potential bottlenecks and so on.

Nutanix HCI delivers all of the above, so why wouldn’t you run vBCA on Nutanix? In fact, the question I would ask is, “Why would you run vBCA on legacy 3-tier platforms?”

With legacy 3-tier, in my experience it’s more difficult to start small and scale. Typical 3-tier solutions have only two controllers (which cannot self-heal in the event of a failure), have complex and time-consuming patching/upgrade procedures, have multiple points of management (not a single pane of glass like Nutanix with the Acropolis Hypervisor), and are typically much more difficult to scale (often requiring rip and replace).

The only thing most monolithic 3-tier products provide (if architected correctly) is reasonable performance.

Here is a typical example of a Nutanix customer upgrade experience compared to a legacy 3-tier product.

[Tweet: customer comparing their Nutanix upgrade experience to a legacy 3-tier upgrade]

Think the above isn’t a fair comparison? I agree! Nutanix vs Legacy is no contest.

When I joined Nutanix in 2013, I was immediately involved with testing of mission critical workloads, and I have no problem saying performance was not good enough for some workloads at the time. Since then, Nutanix has focused on building out a large team (including 3 VCDXs with years of vBCA experience) dedicated to business critical applications, and now applications like SQL, Oracle (including RAC deployments), MS Exchange and SAP are becoming common workloads for our customers who originally started with Test/Dev or VDI.

Think of Nutanix like VMware in 2005: everyone was concerned about performance and resiliency and didn’t run business critical applications on VI3 (later renamed vSphere), but over time everyone (including myself) learned virtualization was in fact not only suitable for vBCA but an ideal platform for it. I’m here to tell everyone: don’t make the same mistake (we all did with virtualization) of assuming Nutanix isn’t suitable for vBCA and waiting 5 years to realise the value. Nutanix is more than ready (and has been for a while) for mission critical applications.

Regarding Todd’s second statement “But you’re not going to put SAP and run your mission-critical business on an appliance.”

If not on an appliance, then what are we supposed to put mission-critical applications on? Regardless of what you think of traditional Converged products, the fact is they are actually just a single SKU for multiple different pre-existing products (generally from multiple different vendors) which have been pre-architected and configured. They are not radically different, nor do they eliminate ongoing operational complexity, which is a strength of HCI solutions such as Nutanix.

If anything, putting mission critical applications on a simple and highly performant/scalable HCI appliance based solution (especially Nutanix) makes more sense than Converged / 3-tier products. Nutanix is no longer the new kid on the block; Nutanix is well proven across all industries and on different workloads, including mission critical. Hell, most US Federal agencies including the Pentagon use Nutanix, how much more critical do you want? (Also, anyone saying VDI isn’t mission critical has rocks in their head! If all your users are offline, how productive is your company and how much use are all your servers?)

Imagine if the sizing of a traditional converged solution is wrong, or a mission critical application outgrows it before its scheduled end of life. With Nutanix, you add one or more nodes (no rip and replace) and vMotion the workload/s, and you’ve scaled completely non-disruptively. In fact, with Nutanix you should intentionally start small and scale in as close to a just-in-time fashion as possible, so your mission-critical application can take advantage of newer HW over the 3-5 year lifespan! Lower CAPEX and better long term performance, sounds like a WIN/WIN to me!

Even if it were true that Converged (or any other product) had higher peak performance than a Nutanix HCI solution (which in the real world has minimal value), so what? Do you really want point solutions (a.k.a. silos) for every different workload? No. I wrote Things to consider when choosing infrastructure, which covers why you want to avoid silos, and I encourage you to read it when considering any new infrastructure.

  • Todd: They have scaling challenges, right? You get to a certain number of nodes, and then the performance degrades; you have to then create another cluster, another cluster.”

My previous post Fight the FUD: Nutanix scale limitations covers this FUD in detail. In short, Nutanix has proven numerous times we can scale linearly; see Scaling to 1 Million IOPS and beyond linearly! for an example (and that video is from October 2013). Note: Ignore the actual IO number; the important factor is the linear scalability, not the peak benchmark numbers, which have little value in the real world as I discuss here: “Peak Performance vs Real World Performance”.

  • Todd:  [It’s] good for an edge, good for a simple form factors, good for single use cases or what I’ll call more simplified workloads.

To be honest, I’m not sure what he means by “good for a simple form factors”, but I can only assume he is talking about how HCI solutions like Nutanix have compact 4-node-per-2RU form factors and use less rack space, power, cooling etc.

As for single use cases, I recommend customers run mixed workloads for several reasons. Firstly, Nutanix is a truly distributed solution, which means the more nodes in a cluster, the more performant & resilient the cluster becomes. Scaling out a cluster also helps eliminate silos, which reduces waste.

I recently wrote this post: Heterogeneous Nutanix Clusters Advantages & Considerations, which covers how mixing node types works in a Nutanix environment. The Nutanix Distributed Storage Fabric has lots of back-end optimisations (run by Curator) which have been developed over the years to ensure heterogeneous clusters perform well. This is an example of technology whose value marketing slides can’t represent, but the real world value is huge.

I have been involved with numerous mission critical application deployments, and there are heaps of case studies for these deployments available on the Nutanix website at http://www.nutanix.com/resources/case-studies/.

A final thought for Part 1: with Nutanix, you can build what you need today and have mission critical workloads benefit from latest-generation HW on a frequent basis (e.g.: annually) by adding new nodes over time and simply vMotioning mission critical VMs to the newer nodes. So over, say, a 5 year lifespan of infrastructure, your mission critical applications could benefit from the performance improvements of 5 generations of Intel chipsets, not to mention the ever-increasing efficiency of the Nutanix Acropolis base software (formerly known as NOS).

Try getting that level of flexibility/performance improvements with legacy 3 tier!

Next up, Part 2

 

PART 2 – Problems with RAID and Object Based Storage for data protection

Following on from Part 1, this post will discuss hyper-converged Distributed File Systems (i.e.: Nutanix) and compare them with traditional SAN/NAS RAID and hyper-converged solutions using Object storage for data protection.

The diagram below shows a 4-node hyper-converged solution using a Distributed File System with the same 4 x 4TB SATA drives, with data protection using replication with 2 copies (Nutanix calls this Resiliency Factor 2).

[Diagram: 4-node hyper-converged DFS with Resiliency Factor 2, replicas distributed across all nodes]

The first difference you may have noticed is that the data is much more granular than in the hyper-converged Object store example in Part 1.

The second, less obvious, difference is that the replicated copies of the data on Node 1 (i.e.: the data shown with purple letters) do not reside on a single other node, but are distributed throughout the cluster.
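To illustrate the placement concept, below is a minimal sketch of my own simplified model (not Nutanix code) of Resiliency Factor 2: the second copy of each extent is placed on a node other than the primary, rotated across the remaining nodes so that no single node holds all of another node’s replicas.

```python
import itertools

# Simplified model of RF2 placement (illustrative only, not Nutanix code):
# each extent gets a second copy on a different node, rotated across the
# remaining nodes rather than mirrored to a single partner node.
NODES = [1, 2, 3, 4]

def rf2_placement(extents_per_node=4):
    placement = {}  # extent id -> (primary node, replica node)
    for primary in NODES:
        other_nodes = itertools.cycle([n for n in NODES if n != primary])
        for i in range(extents_per_node):
            extent = f"N{primary}-E{i}"
            placement[extent] = (primary, next(other_nodes))
    return placement

for extent, (primary, replica) in rf2_placement().items():
    print(f"{extent}: primary on node {primary}, replica on node {replica}")
```

With this kind of rotation, the replicas of Node 1’s extents end up spread across Nodes 2, 3 and 4, which is exactly what allows the whole cluster to participate in a rebuild.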

Now let’s look at a drive failure example:

Here we see Node 1 has lost a drive hosting 8 granular pieces of data, each 1MB in size.

[Diagram: Node 1 drive failure; extents A, B, C, D, E, I, M, P re-replicated across the remaining nodes]

Now the Distributed File System detects that the data represented by A, B, C, D, E, I, M and P has only a single copy within the cluster and starts the restoration process.

Let’s walk through each step, although in practice these steps are completed concurrently.

1. Data “A” is replicated from Node 2 to Node 3
2. Data “B” is replicated from Node 2 to Node 4
3. Data “C” is replicated from Node 3 to Node 2
4. Data “D” is replicated from Node 4 to Node 2
5. Data “E” is replicated from Node 2 to Node 4
6. Data “I” is replicated from Node 3 to Node 2
7. Data “M” is replicated from Node 4 to Node 3
8. Data “P” is replicated from Node 4 to Node 3

Now the cluster has restored resiliency.

So what was the impact on each node?

[Table: read and write I/O per node during the recovery]

The table above shows a simplified representation of the workload of restoring resiliency to the cluster. As we can see, the workload (replicating 8 granular pieces of data) was distributed across the nodes very evenly.
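The same tally can be reproduced in a few lines, based purely on the 8 steps listed above (a simplified model of this example, not Nutanix code):

```python
from collections import Counter

# Tally of the 8 recovery steps above (simplified model of this example, not Nutanix code):
# each tuple is (extent, node it is read from, node it is written to).
recovery_steps = [
    ("A", 2, 3), ("B", 2, 4), ("C", 3, 2), ("D", 4, 2),
    ("E", 2, 4), ("I", 3, 2), ("M", 4, 3), ("P", 4, 3),
]

reads = Counter(src for _, src, _ in recovery_steps)
writes = Counter(dst for _, _, dst in recovery_steps)

for node in (2, 3, 4):
    print(f"Node {node}: {reads[node]} x 1MB reads, {writes[node]} x 1MB writes")

# Node 2: 3 x 1MB reads, 3 x 1MB writes
# Node 3: 2 x 1MB reads, 3 x 1MB writes
# Node 4: 3 x 1MB reads, 2 x 1MB writes
```

No single surviving node does more than a few MB of work, compared to a traditional mirror or RAID set where one source/destination pair would carry the entire rebuild.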

Next, let’s look at the advantages of a hyper-converged solution with a Distributed File System (which Nutanix uses).

  1. Highly granular distribution using 1MB extents, not large Objects.
  2. The work required to restore resiliency after a drive (or node) failure is distributed across all drives and nodes in the cluster, leveraging the capability of every drive/node (i.e.: not constrained to the <100 IOPS of a single drive).
  3. The restoration rebuild is a low impact activity, as the workload is distributed across the cluster and not dependent on a single source/destination pair of drives or nodes.
  4. The rebuild has a low impact on the virtual machines running on the distributed file system, and consistent performance is maintained.
  5. The larger the cluster, the quicker and lower impact the rebuild is, as the workload is distributed across a higher number of drives/nodes for the same amount (GB) of data being restored (see the sketch after this list).
  6. With Nutanix, SSDs are used not only as a read/write cache but as a persistent storage tier. This means recovered data is written to SSD, and where the data being recovered is not in cache (memory or SSD tiers) it may still reside in the persistent SSD tier, which dramatically improves the performance of the recovery.
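To illustrate point 5, here is a rough back-of-the-envelope sketch. The drive count, per-drive rebuild throughput and failed-drive capacity below are hypothetical, illustrative values only (not a Nutanix sizing tool); the point is simply that a fixed amount of data is re-protected faster as the cluster grows.

```python
# Back-of-the-envelope sketch for point 5 (illustrative numbers only, not a sizing tool):
# a distributed rebuild can use every remaining drive, so re-protecting a fixed
# amount of data gets faster as the cluster grows.
DATA_TO_REPROTECT_GB = 4000     # e.g. one failed 4TB drive, assumed full
REBUILD_MBPS_PER_DRIVE = 50     # assumed throughput each drive contributes to the rebuild
DRIVES_PER_NODE = 4

for nodes in (4, 8, 16, 32):
    participating_drives = nodes * DRIVES_PER_NODE - 1  # every drive except the failed one
    aggregate_mbps = participating_drives * REBUILD_MBPS_PER_DRIVE
    hours = (DATA_TO_REPROTECT_GB * 1024) / aggregate_mbps / 3600
    print(f"{nodes} nodes: ~{hours:.1f} hours to restore resiliency")
```

Contrast this with a RAID rebuild, where the time is bounded by a single hot-spare or mirror partner drive regardless of how large the array is.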

Summary:

As discussed in Part 1, traditional RAID used by SAN/NAS and hyper-converged solutions using Object based storage both suffer similar issues when recovering from a drive or node failure.

In contrast, the Nutanix hyper-converged solution, using the Nutanix Distributed File System (NDFS), can restore resiliency following a drive or node failure faster and with lower impact thanks to its highly granular and distributed architecture, meaning more consistent performance for virtual machines.