Dare2Compare Part 7 : HPE provides superior performance to Nutanix

In Part 4, we covered a series of failure scenarios and how the HPE/SVT product responds to each, compared with how Nutanix responds to the same scenarios. The results clearly proved HPE's claim of superior resiliency over Nutanix to be false and, I would argue, highlighted how much more resilient the Nutanix platform is.

Now, in Part 7, I will address two false claims (below): that Nutanix delivers lower performance than HPE SVT, and that Nutanix doesn't publish performance results.

Tweet #1 – HPE SimpliVity 380 provides superior performance to Nutanix

Problem number one with HPE's claim: their URL is dead, so we cannot review the scenario(s) in which they claim HPE/SVT performs better.

[Image: screenshot of the broken URL in HPE's tweet]

Before we discuss Nutanix performance: HPE has repeatedly claimed that Nutanix does not post performance results, and has further complained that there are no 3rd-party published performance testing results.

One recent example of these claims is shown below, which states: “I know you don’t publish performance results”.

Nutanix does in fact publish performance data, which is validated by:

  • 3rd-party partners/vendors such as Microsoft and LoginVSI
  • Independent 3rd parties such as the Enterprise Strategy Group (ESG); and
  • Internally created material

The following are a few examples of published performance data.

1. Nutanix Citrix XenDesktop Validated by LoginVSI

In fairness to HPE, this first one is a recent example, so let’s take a look at Nutanix’s track record with LoginVSI.

[Image: list of Nutanix LoginVSI benchmark publications]

Here we can see six examples dating back to Jan 2013 where Nutanix has made performance results with LoginVSI available.

2. Nutanix Reference Architecture: Citrix Validated Solution for Nutanix

This was a jointly developed solution between Citrix and Nutanix, the first of its kind globally, and was made available in 2014.

3. Microsoft Exchange Solution Reviewed Program (ESRP) – Storage

Nutanix has worked with business-critical applications such as MS Exchange for many years and has published two ESRP solutions.

The first is for 24,000 users on Hyper-V and the second is for 30,000 users on AHV.

[Image: screenshot of the Nutanix ESRP listings]

Interestingly, while HPE/SVT have a reference architecture for MS Exchange, they do not have an ESRP submission for the platform, because they cannot provide a supportable configuration due to their lack of multi-protocol support.

Nutanix, on the other hand, has Microsoft-supportable configurations for ESXi, Hyper-V and AHV.

4. ESG Performance Analysis: Nutanix Hyperconverged Infrastructure

This report is an example of an independent 3rd party validating performance data for VDI, MS SQL and MS Exchange.

As the above examples clearly show, Nutanix does provide, and has long provided, publicly available performance data from many sources, including independent 3rd parties.

Moving on to the topic of Nutanix vs HPE/SVT performance, I feel it’s important to first review my thoughts on this topic, which I detailed in a 2015 article titled Peak Performance vs Real World Performance.

In short, I can take any two products and make one look better than the other simply by designing tests which highlight the strengths or weaknesses of either product. This is why many vendors have a clause in their EULAs preventing the publication of performance data without written permission.

One of the most important factors when it comes to performance is sizing. An incorrectly sized environment will likely not perform at acceptable levels, and this goes for any product on the market.

For next-generation platforms like Nutanix, customers are protected from under-sizing because of the platform’s ability to scale by adding additional nodes. In 2016 I wrote a post titled “Scale out performance testing with Nutanix Storage Only Nodes”, which shows how adding storage-only nodes to a Nutanix cluster increased IOPS by approximately 2x while lowering read and write latency.
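As a rough, generic sketch of how such a before-and-after test could be run (this is not the exact methodology from that post; it uses the open-source fio tool, and the target file, block size, read/write mix and queue depth are all illustrative assumptions):

```python
#!/usr/bin/env python3
"""Generic before/after IOPS test sketch using fio (assumed installed in the guest VM)."""
import json
import subprocess

def run_fio(name: str, target: str = "/data/fio.test") -> dict:
    # 70/30 random read/write at 8KB - a common OLTP-style profile (assumption).
    cmd = [
        "fio", f"--name={name}", f"--filename={target}",
        "--rw=randrw", "--rwmixread=70", "--bs=8k",
        "--ioengine=libaio", "--direct=1", "--iodepth=32",
        "--numjobs=4", "--size=10G", "--runtime=120",
        "--time_based", "--group_reporting", "--output-format=json",
    ]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True)
    job = json.loads(out.stdout)["jobs"][0]
    return {"read_iops": job["read"]["iops"], "write_iops": job["write"]["iops"]}

if __name__ == "__main__":
    # Run once as a baseline, add the storage-only node(s), then run again and compare.
    print(run_fio("baseline"))
```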

What is even more impressive than the performance improvement itself is that it was achieved without any changes to the configuration of the cluster or the virtual machines.

The same test performed on HPE/SVT and other SDS/HCI products would not double IOPS or decrease read/write latency, as the SVT platform is not a distributed storage fabric.

Herein lies a major advantage for Nutanix. In the event Nutanix performance were no longer sufficient, or another platform offered higher performance (say, per node), Nutanix can, if and when required, scale performance without rip-and-replace or reconfiguration to meet almost any performance requirement. Performance per node is not a limiting factor for Nutanix as it is for HPE/SVT and other platforms.

What about performance for customers maximising the ROI of existing physical servers using Acropolis Block Services (ABS)? The benefits just keep coming. A server connected via ABS automatically gains IOPS, latency and throughput improvements when additional nodes are added to the Nutanix cluster, as the Acropolis Distributed Storage Fabric (ADSF) dynamically increases the number of paths so that all Controller VMs in the cluster service ABS traffic, as shown in the tweet below.

As such, regardless of whether workloads are virtual or physical, when using Nutanix, performance can always be improved non-disruptively, without compromising the resiliency of the cluster, by simply adding nodes (which, by the way, is a one-click operation).
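For the physical server (ABS) use case, those extra paths are visible on the client itself. A minimal sketch, assuming a Linux initiator with the standard open-iscsi tools installed, would be to count active iSCSI sessions before and after adding nodes:

```python
#!/usr/bin/env python3
"""Count active iSCSI sessions on a Linux initiator before/after cluster expansion."""
import subprocess

def active_iscsi_sessions() -> int:
    # 'iscsiadm -m session' prints one line per active session;
    # it exits non-zero when there are no sessions at all.
    out = subprocess.run(["iscsiadm", "-m", "session"],
                         capture_output=True, text=True)
    if out.returncode != 0:
        return 0
    return len(out.stdout.strip().splitlines())

if __name__ == "__main__":
    print(f"Active iSCSI sessions: {active_iscsi_sessions()}")
```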

Summary:

  1. Nutanix has been publishing performance results through independent 3rd parties and partners for many years.
  2. Nutanix has validated solutions from Microsoft, LoginVSI and Citrix, to name a few.
  3. Nutanix performance can scale well beyond HPE/SVT for both virtual and physical workloads.
  4. Nutanix provides validated performance data across multiple hypervisors.
  5. HPE/SVT have provided no evidence, scenarios or references for SVT being a higher-performing platform.

Return to the Dare2Compare Index:

Dare2Compare Part 5 : Nutanix can’t claim single screen management w/o extra fees or GUIs

If you’ve not read Parts 1, 2, 3 and 4: we have already proven several claims by HPE SimpliVity regarding Nutanix to be false, and explored the misleading way in which HPE SVT promote data efficiency.

The fun continues, and in Part 5 we will discuss HPE’s claim that Nutanix does not have “single screen management” (by which I assume they mean a single pane of glass) without extra fees or GUIs.

Unfortunately, the URL in the HPE tweet was not working. I responded and made HPE aware of this so I could review specifically what they are claiming, but at the time of writing the link is still not working.

It’s funny that HPE SVT mention this, because Nutanix is the only HCI product with a built-in, distributed, scalable, multi-hypervisor management solution.

The fact that Nutanix has its own interface is a huge advantage, especially because Nutanix is not dependent on any 3rd party (e.g. VMware vCenter) to install, configure and manage the platform. This reduces cost, complexity, risk, operational tasks, and the list goes on.

The Nutanix “PRISM Element” HTML5 GUI is built into every Nutanix solution, regardless of hypervisor or underlying hardware. The screenshot below shows the built-in management capabilities to upgrade the Nutanix Acropolis (AOS) storage layer, the built-in scale-out file server, the hypervisor (ESXi, Hyper-V or AHV), firmware, our container support, and our built-in cluster imaging tool, Foundation.

[Image: PRISM “Upgrade Software” screenshot]

This means that, regardless of hypervisor, many critical tasks can be performed directly within PRISM, with no need for the long-in-the-tooth VMware Update Manager (VUM), which is overdue for an overhaul. In fact, Nutanix supports four (4) hypervisors with our management tool (PRISM), whereas HPE SVT only has GA support for ESXi.

For customers using the Acropolis Hypervisor (AHV), 100% of management can be performed within PRISM Element, and central management of multiple clusters is performed through PRISM Central.

AHV comes with all Nutanix solutions at no extra cost, regardless of hardware choice (including HPE ProLiant). This means customers enjoy the benefits of a next-generation hypervisor designed and built for HCI and the Enterprise Cloud.

Unlike HPE SVT, for example, Nutanix does not have a limit of 8 nodes per datacenter or 32 per “federation”: PRISM Element can support a cluster of any size (currently no support limits), and PRISM Central manages all the clusters.

Nutanix management is not tied to, or more importantly dependent on, VMware vCenter or any other hypervisor management tool, which adds to the resiliency and simplicity of the Nutanix platform. PRISM automatically scales in both performance and resiliency as a cluster expands, ensuring consistent performance for system administrators. This avoids the complexity of designing, installing and maintaining a highly available vCenter solution, which would also consume additional compute and storage resources.

Summary:

  1. The Nutanix PRISM Element GUI is built in and included with every Nutanix deployment.
  2. Nutanix PRISM is not limited in the number of nodes it can manage.
  3. PRISM Central is used to manage multiple Nutanix clusters centrally if required, but is not mandatory.
  4. Nutanix provides, at no cost, the next-generation hypervisor (AHV), for which 100% of management is performed within PRISM GUIs.
  5. AHV eliminates the requirement for hypervisor licensing (e.g. VMware vSphere), which reduces overall costs; this is unique to Nutanix.
  6. PRISM supports four hypervisors (ESXi, Hyper-V, AHV and XenServer), delivering a consistent management interface for multi-hypervisor environments, which are becoming more and more common.

Many of the above points are unique to Nutanix, which has been designed and built as a truly webscale platform, not a ROBO/SMB or <32-node solution. Nutanix can start small and continue to scale to any size, with the PRISM Element management stack automatically scaling to suit as nodes are added.

Return to the Dare2Compare Index:

Expanding Capacity on a Nutanix environment – Design Decisions

I recently saw an article about design decisions around expanding capacity for an HCI platform, which went through the various considerations and made some recommendations on how to proceed in different situations.

While reading the article, it really struck me how much simpler this process is with Nutanix, and how commonly these areas are overlooked when choosing a platform.

Let’s start with a few basics:

The Nutanix Acropolis Distributed Storage Fabric (ADSF) is made up of all the drives (SSD/SAS/SATA etc.) in all nodes in the cluster. Data is written locally on the node where the VM performing the write resides, and replicas are distributed throughout the cluster based on numerous factors; there are no legacy constructs such as node pairing, HA pairs or preferred nodes.

In the event of a drive failure, regardless of which drive (SSD, SAS or SATA) fails, only that drive is impacted, not a disk group or RAID pack.

This is key, as it limits the impact of the failure.

It is important to note that ADSF does not store large objects, nor does the file system require tuning to stripe data across multiple drives/nodes. By default, ADSF distributes data (at 1MB granularity) in the most efficient manner throughout the cluster, while keeping the hottest data local to ensure the lowest overheads and the highest-performing read I/O.
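To make that concrete, here is a deliberately simplified toy model (illustrative only, not ADSF internals): primary 1MB extents land on the local node, while replicas spread across the rest of the cluster, using a least-utilised rule as a crude stand-in for the numerous factors ADSF actually weighs:

```python
EXTENT_MB = 1  # ADSF distributes data at 1MB granularity

def write_vm_data(local_node: str, nodes: list[str], size_mb: int,
                  usage: dict[str, int]) -> None:
    for _ in range(size_mb // EXTENT_MB):
        usage[local_node] += EXTENT_MB  # primary copy stays local to the VM
        # Replica goes to the least-utilised remote node - a crude stand-in
        # for the numerous factors ADSF actually weighs.
        replica = min((n for n in nodes if n != local_node), key=lambda n: usage[n])
        usage[replica] += EXTENT_MB

nodes = ["node-a", "node-b", "node-c", "node-d"]
usage = {n: 0 for n in nodes}
write_vm_data("node-a", nodes, size_mb=4096, usage=usage)  # 4GB write from a VM on node-a
print(usage)  # primary data on node-a; replicas spread evenly across b/c/d
```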

Let’s go through a few scenarios, which apply to both All Flash and Hybrid environments.

  1. Expanding capacity: When adding a node or nodes to an existing cluster, without moving any VMs, changing any configuration or making any design decisions, ADSF will proactively send replicas of new write I/O to all nodes within the cluster, thereby improving performance, while reactively performing disk balancing where a significant imbalance exists within the cluster.

    This might sound odd, but with other HCI products new nodes are not used unless you change the stripe configuration or create new objects (e.g. VMDKs), which means you can have lots of spare capacity in your cluster and still experience an out-of-space condition.

    This is a great example of why ADSF has a major advantage, especially in environments with large I/O and/or capacity requirements.

    The node addition process only requires the administrator to enter the IP addresses, and it is basically a one-click operation: capacity is available immediately and there is no mass movement of data. There is also no need to move data off and recreate disk groups or similar, as these legacy concepts and complexities do not exist in ADSF.

    Nutanix is also the only platform that allows capacity to be expanded via storage-only nodes, and it supports VMs with larger capacity requirements than a single node can provide. Both are supported out of the box, with zero configuration required.

    Interestingly, adding storage-only nodes also increases performance and resiliency for the entire cluster, as well as for the management stack, including PRISM.

  2. Impact & implications of new nodes on data reduction: With ADSF, there are no considerations or implications. Data reduction is truly global throughout the cluster; regardless of hypervisor, and whether you are adding compute+storage or storage-only nodes, the benefits, particularly of deduplication, continue to apply to the environment.

    The net effect of adding more nodes is better performance, higher resiliency, faster rebuilds from drive/node failures and, thanks to global deduplication, a higher chance of duplicate data being found and not stored unnecessarily on physical storage, resulting in a better deduplication ratio.

    No matter what size node(s) are added, and no matter which hypervisor, data reduction features such as deduplication and compression work at a global level.

    What about erasure coding? Nutanix EC-X creates the most efficient stripe based on the cluster size: starting with a small four-node cluster, the stripe is 2+1; expand the cluster to five nodes and the stripe automatically becomes 3+1; expand further to six nodes or more and the stripe becomes 4+1, which is currently the largest stripe supported (see the stripe-sizing sketch after this list).

  3. Drive failures: In the event of a drive failure (SSD, SAS or SATA), as mentioned earlier, only that drive is impacted. Therefore, to restore resiliency, only the data on that drive needs to be repaired, as opposed to something like an entire disk group being marked offline.

    It’s crazy to think that a single commodity drive failure in an HCI product could bring down an entire group of drives, causing a significant impact to the environment.

    With Nutanix, a rebuild is performed in a distributed manner across all nodes in the cluster, so the larger the cluster, the lower the per-node impact and the faster the configured resiliency factor is restored to a fully resilient state (see the rebuild-time sketch below).
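First, the EC-X stripe sizing described in point 2, expressed as a small sketch (the 4/5/6-node mapping comes straight from the text above; treating four nodes as the minimum cluster size for EC-X here is my assumption):

```python
def ecx_stripe(cluster_nodes: int) -> tuple[int, int]:
    """Return (data, parity) for the EC-X stripe per the sizing described above."""
    if cluster_nodes < 4:
        raise ValueError("this sketch assumes a minimum of 4 nodes for EC-X")
    data = min(cluster_nodes - 2, 4)  # 4 nodes -> 2+1, 5 -> 3+1, >=6 -> 4+1 (current max)
    return (data, 1)

for n in (4, 5, 6, 8):
    d, p = ecx_stripe(n)
    print(f"{n} nodes: {d}+{p} stripe, parity overhead {p / d:.0%}")
# 4 nodes: 2+1 (50%), 5 nodes: 3+1 (33%), 6+ nodes: 4+1 (25%)
```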
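And for point 3, a back-of-the-envelope sketch of why distributed rebuilds complete faster as clusters grow (the drive size and the per-node rebuild throughput are purely illustrative assumptions, not measured data):

```python
def rebuild_hours(drive_tb: float, cluster_nodes: int,
                  per_node_mbps: float = 200.0) -> float:
    """Estimate time to re-replicate a failed drive's data when every node
    in the cluster contributes bandwidth (toy model; 200 MB/s is an assumption)."""
    data_mb = drive_tb * 1_000_000
    return data_mb / (per_node_mbps * cluster_nodes) / 3600

for n in (4, 8, 16, 32):
    print(f"{n:>2} nodes: ~{rebuild_hours(8, n):.1f}h to re-protect an 8TB drive")
# The per-node impact also shrinks, as the same work is spread over more nodes.
```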

At this point you’re probably asking: are there any decisions to make?

When adding any node, compute+storage or storage-only, ensure you consider what the impact of a failure of that node would be.

For example, if you add one 15TB storage-only node to a cluster of nodes which each provide only 2TB usable, you would need to ensure 15TB of available space so the cluster can fully self-heal from the loss of the 15TB node. As such, I recommend ensuring your N+1 (or N+2) node(s) are equal to the size of the largest node in the cluster from a capacity, performance and CPU/RAM perspective.

So if your biggest node is an NX-8150 with 44 cores / 512GB RAM and 20TB usable, you should have an N+1 node of the same size to cover the worst-case scenario of an NX-8150 failing, OR have the equivalent resources available within the cluster.
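Here is a minimal sketch of that sizing check (node names and capacities are illustrative; “usable” here means usable capacity after replication):

```python
def can_self_heal(node_usable_tb: dict[str, float], used_tb: float) -> bool:
    """True if all data still fits after losing the largest node
    (the simple N+1 sizing rule described above)."""
    total = sum(node_usable_tb.values())
    largest = max(node_usable_tb.values())
    return used_tb <= total - largest

# The example from above: one 15TB storage-only node among 2TB-usable nodes.
cluster = {"node1": 2.0, "node2": 2.0, "node3": 2.0, "node4": 15.0}
print(can_self_heal(cluster, used_tb=5.0))  # True: 6TB usable remains after losing node4
print(can_self_heal(cluster, used_tb=8.0))  # False: cannot fully re-protect 8TB of data
```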

By following this one simple rule, your cluster will always be able to fully self-heal in the event of a failure, and VMs will fail over and perform at comparable levels to before the failure.

Simple as that! No RAID, disk group, deduplication, compression, failure or rebuild considerations to worry about.

Summary:

The above are just a few examples of the advantages Nutanix ADSF provides compared to other HCI products. The operational and architectural complexity of other products can lead to additional risk, inefficient use of infrastructure, misconfiguration and, ultimately, an environment which does not deliver the business outcome it was originally designed to.