NetApp HCI Versus Nutanix – The Rebuttal

I was made aware of a recent article from Rob Klusman at Netapp titled “Netapp HCI Verses Nutanix” by a Nutanix Technology Champion (NTC) who asked us to respond to the article “cause there’s some b*llsh*t in it”.

** UPDATE **

Netapp have since removed the post; it can now be viewed via Google Cache here:

http://webcache.googleusercontent.com/search?q=cache:https://blog.netapp.com/netapp-hci-vs-nutanix/

I like it when people call it like it is, so here I am responding to the bullshit (article).

The first point I would like to address is the final statement in the article.

NetApp HCI is the first choice, and Nutanix is the second choice. Leading in an economics battle just doesn’t work if performance is lacking.

Rob rightly points out Nutanix leads the economic battle (kudos for that), but he follows up by implying Nutanix performance is lacking. Wisely, Rob does not provide any evidence that could be discredited, so I will just leave you with these three posts from my Scalability, Resiliency & Performance blog series discussing how Nutanix scales performance for single VMs, Monster VMs and physical servers.

Part 3 – Storage Performance for a single Virtual Machine
Part 4 – Storage Performance for Monster VMs with AHV!
Part 5 – Scaling Storage Performance for Physical Machines

Rob goes on to make the claim:

Nutanix wants infrastructure “islands” to spread out the workloads

This is simply incorrect; Nutanix has been recommending mixed workload deployments for many years. Here is an article I wrote in July 2016 titled “The All-Flash Array (AFA) is Obsolete!” where I conclude with the following summary:

MixedWorkloads2016

I specifically state that mixed workloads, including business critical applications, are supported without creating silos. It’s important to note this statement was made in July 2016, before Netapp had even started shipping (Oct 25th, 2017) their 3-tier architecture product, which they continue to incorrectly refer to as HCI.

Gartner supports my statement that the Netapp product is not HCI and states:

“NetApp HCI competes directly against HCI suppliers, but its solution does not meet Gartner’s functional definition of HCI.”

Mixed workloads are nothing new for Nutanix; not only is mixing workloads supported, I frequently recommend it as it increases performance and resiliency, as described in detail in my blog series Nutanix | Scalability, Resiliency & Performance.

Now let’s address the “Key Differences” Netapp claim:

User interface. Both products have an intuitive graphical interface that is well integrated into the hypervisor of choice. But what’s not obvious is that simplicity goes well beyond where you click. NetApp HCI has the most extensive API in the market, with integration that allows end users to automate even the most minute features in the NetApp HCI stack.

The philosophy behind the Nutanix GUI (which Netapp concedes is intuitive) is that every feature in the GUI must also be available via an API. In the PRISM GUI, Nutanix provides the “REST API Explorer” (shown below) where users can easily discover the available operations and automate anything they choose.

RestAPIexplorer

NutanixRESTAPI
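To illustrate, here is a rough sketch (not official sample code) that lists VMs via the PRISM REST API. The v2.0 “vms” endpoint, basic authentication and the field names are assumptions for illustration; the REST API Explorer in PRISM shows the exact paths, parameters and fields available on your AOS version.

```python
# Rough sketch only: listing VMs via the PRISM REST API.
# Endpoint, auth method and field names are assumptions based on the v2.0 API;
# use the REST API Explorer in PRISM to confirm what your AOS version exposes.
import requests

PRISM = "https://prism.example.local:9440"  # hypothetical cluster/Prism address
AUTH = ("admin", "secret")                  # replace with real credentials

resp = requests.get(f"{PRISM}/api/nutanix/v2.0/vms", auth=AUTH, verify=False)
resp.raise_for_status()

for vm in resp.json().get("entities", []):
    print(vm.get("name"), vm.get("num_vcpus"), vm.get("memory_mb"))
```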

Next up we have:

Versatile scale. How scaling is accomplished is important. NetApp HCI scales in small infrastructure components (compute, memory, storage) that are all interchangeable. Nutanix requires growth in specific block components, limiting the choices you can make.

When vendors attack Nutanix, I am always surprised they go after scalability because, if anything, this is one of Nutanix’ strongest areas.

I’ve already referenced my Scalability, Resiliency & Performance blog series, where I go into a lot of detail on these topics, but in short, Nutanix can scale:

  1. Storage Only by adding drives or nodes
  2. Compute Only by adding RAM or nodes
  3. Compute + Storage by adding drives and/or nodes

Back in mid 2013 when I joined Nutanix, Netapp’s claim was true as only one node type (NX-3450) was available. Later that same year the 1000 and 6000 series were released, and flexibility has only continued to increase over the years.

Today the flexibility (or versatility) in scale for Nutanix solutions is second to none.

Performance. Today, it’s an absolute requirement for HCI to have an all-flash solution. Spinning disks are slightly less expensive, but you’re sacrificing production workloads. NetApp HCI only offers an all-flash solution.

Congratulations Netapp, you do all flash, just like everyone else (but you came to the party years later). There are many use cases for bulk storage capacity, be it all flash or hybrid, and Nutanix provides NVMe+SATA-SSD, all SATA-SSD and SATA-SSD+SAS/SATA HDD options to cover them all.

Not only that but Nutanix allows mixing of All Flash and Hybrid nodes to further avoid the creation of silos.

Enterprise ready. This is an important test. One downfall of Nutanix software running on exactly the same CPU cores as your applications is the effect on enterprise readiness. Many of our customers have shifted away from Nutanix once they’ve seen what happens when a Nutanix component fails. It’s easier to move the VM workload off the current Nutanix system (the one that’s failing) than it is to wait for the fix. Nutanix does not run optimally in hardware-degraded situations. NetApp HCI has no such problem; it can run at full workloads, full bandwidth, and full speed while any given component has failed.

It’s a huge claim for Netapp to dispute Nutanix’ enterprise readiness, considering we have many more years of experience shipping HCI product, but then Netapp’s article is proving to be without factual basis at every step.

The beauty of Nutanix is the ability to self heal after failures (hardware or software) and then tolerate subsequent failures. Nutanix also has the ability to tolerate multiple concurrent failures including up to 8 nodes and 48 physical drives (NVMe/SSD/HDD).

Nutanix can also tolerate one or more failures and FULLY self heal without any hardware being replaced. This is critical as I detailed in my post: Hardware support contracts & why 24×7 4 hour onsite should no longer be required.

For more details on these failure scenarios, check out the Resiliency section of my blog series Nutanix | Scalability, Resiliency & Performance.

Workload performance protection. No one should attempt an advanced HCI deployment without workload performance protection. Only NetApp HCI provides such a guarantee, because this protection is built into the native technology.


One critical factor in delivering consistently high performance is data locality. The further data is from the compute layer, the more bottlenecks there are to potentially impact performance. It’s important to evaluate Nutanix’ original & unique implementation of Data Locality to understand that features such as QoS for storage IO are critical with scale-up shared storage (a.k.a. SAN/NAS), but with a highly distributed scale-out architecture, noisy neighbour problems are all but eliminated because there are more controllers and those controllers are local to the VMs.

Storage QoS is added complexity, and it is only required when a product such as a SAN/NAS has no choice but to deal with the IO blender effect, where sequential IO is received as random due to competing workloads. This effect is minimised with the Nutanix Distributed Storage Fabric.

Shared CPU cores. One key technical difference between the Nutanix product and NetApp HCI is the concept of shared CPU cores. Nutanix has processes running in the same cores as your applications, whereas NetApp HCI does not. There is a cost associated with sharing cores when applications like Oracle and VMware are licensed by core count. You actually pay more for those applications when Nutanix runs their processes on your cores. It’s important to do that math.

I’m very happy Rob raised the point regarding VMware’s licensing (part of what I’d call #vTAX); this is one of the many great reasons to move to Nutanix’ next generation hypervisor, AHV (Acropolis Hypervisor).

In addition, for workloads like Oracle or SQL where licensing is an issue, Nutanix offers two solutions which address these issues:

  1. Compute Only Nodes running AHV
  2. Acropolis Block Services (ABS) to provide the Nutanix Distributed Storage Fabric (ADSF) to physical or virtual servers not running on Nutanix HCI nodes.

But what about the Nutanix Controller VM (CVM) itself? It is assigned vCPUs which share physical CPU cores with other virtual machines.

Sharing physical cores is a bad idea, as virtualisation has taught us over many years. Hold on, wait, no that’s not it (LOL!). Virtualisation has taught us we can share physical CPU cores very successfully, even for mission critical applications, when it’s done correctly.

Here is a detailed post on the topic titled: Cost vs Reward for the Nutanix Controller VM (CVM)

Asset fluidity. An important part of the NetApp scale functionality is asset fluidity – being able to move subcomponents of HCI around to different applications, nodes, sites, and continents and to use them long beyond the 3-year depreciation cycle.

This is possibly the weakest argument in Netapp’s post. Nutanix nodes can be removed non-disruptively from a cluster and added to any other cluster, including mixing all flash and hybrid. Brand new nodes can be mixed with any other generation of nodes; I regularly form large clusters using multiple generations of hardware.

Here is a tweet of mine from 2016 showing a 22 node cluster with four different node types across three generations of hardware (G3 being the original NX-8150, G4 and G5).

Data Fabric. The NetApp Data Fabric simplifies and integrates data management across clouds and on the premises to accelerate digital transformation. To plan an enterprise rollout of HCI, a Data Fabric is required – and Nutanix has no such thing. NetApp delivers a Data Fabric that’s built for the data-driven world.

I had to look up what Netapp mean by “Data Fabric” as it sounded to me like a nonsense marketing term, and surprise, surprise, I was right. Here is how Netapp describe “Data Fabric”:

Data Fabric is an architecture and set of data services that provide consistent capabilities across a choice of endpoints spanning on-premises and multiple cloud environments.

It’s a fluffy marketing phrase, but the same could easily be said of the Nutanix Distributed Storage Fabric (ADSF). ADSF is hypervisor agnostic, which straight away delivers a multi-platform solution (cloud or on-premises), including AWS and Azure (below).

CloudSite

Nutanix can replicate and protect data including virtual machines across different hardware, clusters, hypervisors and clouds.

So the claim “Nutanix does not have a Data Fabric” is pretty laughable based on Netapp’s own description of “Data Fabric”.

Now the final point:

Choosing the Right Infrastructure for Your Enterprise

I’ve written about Things to consider when choosing infrastructure and my conclusion was:

ThingtoconsiderSummary

Nutanix has for many years provided a platform which can be your standard for all workloads; with the enhancements we’ve made over the years, the niche workloads that genuinely cannot be supported are now rare.

The best thing about Nutanix is that, with our world class enterprise architect enablement and Nutanix Platform Expert (NPX) certification programmes, we ensure our field SEs, architects and certified individuals who design and implement solutions for customers every day know exactly when to say “No”.

This culture of customer success first, sales last, comes from our former President Sudheesh Nair, who wrote this excellent article during his time at Nutanix:

Quite possibly the most powerful 2-letter word in Sales – No

After addressing all the points raised by Netapp, it’s easy to see that Nutanix has a very complete solution thanks to years of development and experience with enterprise customers and their mission critical applications.

Have you read any other “b*llsh*t” you’d like Nutanix to respond to? If so, don’t hesitate to reach out.

Identifying & Resolving Excessive CPU Overcommitment (vCPU:pCore ratios)

Help! My performance is terrible and my consultant/vendor says it’s due to high/excessive CPU overcommitment! What do I do next?

Question: “How much CPU overcommitment is ok?”.

The answer is of course “It depends”, and there are many factors including, but not limited to, the workload type, the physical CPUs and how complementary the workloads (other VMs) are.

Other common questions include:

“How much overcommitment do I have now?”

&

“How do I know if overcommitment is causing a performance problem?”.

Let’s start with “How much overcommitment do I have now?”.

With Nutanix this is very easy to work out. First, go to the Hardware page in PRISM and click Diagram, then select one of your nodes as shown below.

PRISMHWDiagram

Once you’ve done that, the “Summary” section will show the CPU Model, No. of CPU Cores and No. of Sockets, as shown below.

HostDetailsPRISMCPUHW

In this case we have 2 sockets and 20 cores in total, i.e. 10 physical cores per socket.

If you have multiple node types in your cluster, repeat this step for each different node type in your cluster. Then simply add up the total number of physical cores in the cluster.

In my example, I have three nodes, each with 20 cores for a total of 60 physical cores.

Next we need to find out how many vCPUs we’ve provisioned in the cluster. This can be found on the “VMs” page in PRISM as shown below.

ProvisionedvCPUsPRISM

So we have our 3 node cluster with 60 physical cores (pCores) and we have provisioned 130 vCPUs.

Now we can input the details into my vSphere Cluster Sizing Calculator and work out the overcommitment including our desired availability level (in my case, N+1) and we get the following:

ClusterSizingCalc2

The calculator is designed to be conservative and show information assuming the resources (CPU/RAM) required for the configured availability level are removed from the calculation. Put simply, the vCPU:pCore ratio assumes the N+1 host is not in the cluster which is how I personally size environments, especially for business critical applications.

The calculator shows us we have a 3.25:1 vCPU:pCore ratio.
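For those who prefer to do the math themselves, here is a minimal sketch of the same calculation (assuming homogeneous nodes; for mixed clusters simply sum the cores per node type first):

```python
def vcpu_pcore_ratio(nodes, cores_per_node, provisioned_vcpus, n_plus=1):
    """vCPU:pCore ratio calculated conservatively, i.e. assuming the N+1
    (or N+2) node(s) are not available to run workloads."""
    usable_cores = (nodes - n_plus) * cores_per_node
    return provisioned_vcpus / usable_cores

# The example cluster above: 3 nodes x 20 cores, 130 provisioned vCPUs, N+1
print(round(vcpu_pcore_ratio(3, 20, 130), 2))  # 3.25
```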

For business critical applications like SQL, Exchange, Oracle, SAP etc, I always recommend sizing without CPU overcommitment (so <= 1:1) and ensuring the VMs are right sized to avoid poor performance and wasted resources.

Now that we know our overcommitment ratio, what’s next?

We need to find out if our overcommitment level is consistent with our original design and assess how the Virtual Machines are performing in the current state. A good design should call out the application requirements and critical performance factors such as CPU overcommitment and VM placement (e.g.: DRS Rules).

“How do I know if overcommitment is causing a performance problem?”.

One of the best ways to measure whether a VM is suffering CPU scheduling contention is to look at “CPU Ready” (or “steal time” in the AHV/KVM world).

CPU Ready is basically the delay between when the VM requests to be scheduled onto CPU cores and when it is actually scheduled. One of the easiest ways to present this is as a percentage of total time that the VM spends waiting to be scheduled.
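If you are pulling the raw counters yourself rather than reading a percentage from a GUI, the conversion is straightforward. A minimal sketch, assuming a vSphere-style “summation” counter in milliseconds over a 20 second real-time sample (other intervals and per-vCPU reporting will change the numbers):

```python
def cpu_ready_percent(ready_summation_ms, interval_seconds=20, vcpus=1):
    """Convert an accumulated CPU Ready value (milliseconds over the sampling
    interval) into a percentage; dividing by vCPU count gives a per-vCPU figure."""
    return (ready_summation_ms / (interval_seconds * 1000.0 * vcpus)) * 100

# e.g. a 4 vCPU VM reporting 4000ms of ready time in a 20s real-time sample:
print(cpu_ready_percent(4000, vcpus=4))  # 5.0 (%)
```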

How much CPU Ready is OK? My rule of thumb is:

<2.5% CPU Ready: Generally no problem.

2.5%-5% CPU Ready: Minimal contention that should be monitored during peak times.

5%-10% CPU Ready: Significant contention that should be investigated & addressed.

>10% CPU Ready: Serious contention to be investigated & addressed ASAP!

With that said, the impact of CPU Ready will vary depending on your application, so even 1% should not be ignored, especially for business critical applications.

As CPU Ready is a critical performance metric, Nutanix decided to display this in PRISM on a per VM basis so customers can easily identify CPU scheduling contention.

Below we see the summary of a VM’s performance, which can be found on the VMs page in PRISM after highlighting a VM. At the bottom of the page is a graph showing CPU Ready.

VMPerformanceNTNXPRISM

CPU Ready of <2.5% is unlikely to be causing major issues for the majority of VMs, but for some latency sensitive applications like databases or video/voice, 2.5% could be causing noticeable issues, so never rule out CPU Ready in your troubleshooting.

For a business critical application, even minimal CPU Ready (say >1%) is worth acting on: follow the troubleshooting steps in this article until CPU Ready is <0.5% and measure the performance difference.

Key Point: If you have applications like SQL Always On availability groups, Oracle RAC or Exchange DAGs, one VM suffering CPU Ready will likely have a flow-on impact on the other VMs trying to communicate (or replicate) with it. So ensure none of your VM/app’s “dependencies” are suffering CPU Ready before looking into other areas.

In short, Server A with no CPU Ready can still be impacted when communicating with Server B, because Server B’s high CPU Ready delays its responses.

The reason I bring this up is because it’s important not to get tunnel vision when looking at performance problems.

Now to the fun part, Troubleshooting/Resolutions to CPU Ready!

1. Right size your VMs

Do NOT ignore this step! Regardless of your CPU overcommitment ratio, right sizing will always improve the efficiency and performance of your VMs. The hypervisor overhead of scheduling vCPUs increases with vCPU count, even with no overcommitment, so ensure VMs are not oversized.

A common misconception is that 90% CPU utilisation indicates a bottleneck; in fact it can be a sign of a right sized VM. vCPUs should be sized for peaks, but unless a VM is pinned at 100% CPU for long periods of time, a short spike to 100% is not necessarily a problem.

Here is an example of the benefits of VM right sizing.

Once you have right sized your VMs, move onto step 2.

2. Size or place VMs within NUMA boundaries

First, what is a NUMA boundary? It’s pretty simple: take the number of cores and divide it by the number of sockets. That’s the NUMA boundary, and also the maximum number of vCPUs a VM should have if you wish to benefit from maximum memory performance and optimal CPU scheduling.

The total host RAM is also a factor: divide the total RAM by the number of sockets and that’s the maximum RAM a VM can be assigned without breaching the NUMA boundary and paying an approximately 30% penalty on memory performance.
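As a quick sketch of that rule of thumb (illustrative only; the actual NUMA topology should be confirmed on the host):

```python
def numa_boundary(total_cores, sockets, total_ram_gb):
    """Max vCPUs and RAM (GB) a VM can have while staying within one NUMA node,
    per the simple cores/sockets and RAM/sockets rule of thumb described above."""
    return total_cores // sockets, total_ram_gb // sockets

# e.g. a 2 socket, 20 core, 256GB host: up to 10 vCPUs / 128GB per NUMA node
print(numa_boundary(20, 2, 256))  # (10, 128)
```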

Example: I had a customer running MS Exchange with 12 vCPU / 96GB VMs on Nutanix nodes with 12 cores per socket. Exchange was running poorly (which turned out to be due to an MS bug), but the customer was told the problem was insufficient CPU and was pushed to increase the VM to 18 vCPUs.

This did not solve the performance problem and in fact made it worse, as the VM now suffered much higher CPU Ready: VMs larger than a NUMA boundary can experience significantly more CPU Ready, especially on hosts running other workloads. Moving back to 12 vCPUs relieved the CPU Ready, and Microsoft ultimately resolved the case with a patch.

3. Migrate other VMs off the host running the most critical VM

This is a really easy step to alleviate CPU scheduling contention and allows you to monitor the performance benefit of not having CPU overcommitment.

If the virtual machine’s performance improves, you’ve likely found at least one cause of the performance problem. Now comes the harder part: unless you can afford a single VM per host, you need to identify complementary workloads to migrate back onto the host.

What’s a complementary workload?

I’m glad you asked! Let me give you an example.

Let’s say we have a 10 vCPU / 128GB RAM SQL Server VM which is right sized (of course), and our host is the NX-8035-G4 with 2 sockets of 10 cores each (20 cores total) and 256GB RAM. Being SQL, we’ll also assume it has high IO requirements as it’s the backend for a business critical application.

Being Nutanix we also have a Controller VM using some resources (say 8vCPUs and 32GB RAM). For those who are interested see: Cost vs Reward for the Nutanix Controller VM (CVM)

A complementary workload would have one or more of the following qualities (see the sketch after this list):

a) Less than 96GB RAM (Host RAM 256GB, minus SQL VM 128GB, minus CVM 32GB = 96GB remaining)

b) vCPU requirements <= 2 (This would mean a 1:1 vCPU:pCore ratio)

c) Low vCPU requirements and/or utilization

d) Low IO requirements

e) Low capacity requirements (this would maximise the amount of SQL data which would remain local to the node for maximum read performance with data locality).

f) A workload which uses CPU/Storage at a different time of the day to the SQL workload.

e.g.: SQL might be busy 8am to 6pm, but the workload may drop significantly outside those hours. A VM with high CPU/storage IO requirements that runs from 7pm to midnight would potentially be a very complementary workload, as it would allow higher overcommitment with minimal/no performance impact due to the hours of operation of the VMs not overlapping.
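Here is the sketch mentioned above: a rough headroom check for qualities a) and b), using the NX-8035-G4 example (the CVM size and the 1:1 target for the remaining cores are assumptions for illustration only):

```python
def remaining_headroom(host_cores, host_ram_gb, placed_vms, cvm=(8, 32)):
    """Rough (vCPU, GB RAM) headroom left on a host after the CVM and the
    already-placed VMs, assuming a 1:1 vCPU:pCore target for what remains."""
    used_vcpus = cvm[0] + sum(vcpus for vcpus, _ in placed_vms)
    used_ram = cvm[1] + sum(ram for _, ram in placed_vms)
    return host_cores - used_vcpus, host_ram_gb - used_ram

# NX-8035-G4 example: 20 cores / 256GB, CVM 8 vCPU / 32GB, SQL VM 10 vCPU / 128GB
print(remaining_headroom(20, 256, [(10, 128)]))  # (2, 96) -> 2 pCores, 96GB left
```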

4. Migrate the VM onto a node with more physical cores

This might be an obvious one but a node with more physical cores has more CPU scheduling flexibility which can help reduce CPU Ready. Even without increasing the vCPUs on the VM, the VM has a better chance of getting time on the physical cores and therefore should perform better.

5. Migrate the VM onto a node with a higher CPU clock-rate

Another somewhat obvious one, but it’s very common for vendors and customers to quote a number of vCPUs as a requirement when a “vCPU” is not a unit of measurement. At best, with no overcommitment, a vCPU is equal to one physical core, and it goes downhill from there. Physical cores also vary in clock rate (duh!), so a faster clock rate can have a huge impact on performance, especially for those pesky single threaded applications.

Note: CPUs with higher clock rates typically have fewer cores, so don’t make the mistake of moving a VM to a node where it exceeds the NUMA boundary!

6. Turn OFF advanced power management on the physical server & use “High Performance” as your policy (in ESXi)

Advanced power management settings can save power and in some cases have minimal impact on performance, but when troubleshooting performance problems, especially around business critical applications, I recommend eliminating power management as a potential cause; once the performance problem is resolved, test re-enabling it if you wish.

7. Enable Hyperthreading (HT)

Hyperthreads can provide significant CPU scheduling advantages and in many cases improve performance, despite hyperthreading providing a fairly modest gain (typically 10%-30%) in CPU benchmarks.

Long story short, a VM in a Ready state is doing NOTHING, so enabling HT can allow it to be doing SOMETHING, which is better than NOTHING!

Hypervisors are also pretty smart: they preferentially schedule vCPUs onto physical cores, so the busy VMs will more often than not be on pCores, while VMs with low vCPU requirements can be scheduled onto hyperthreads. Win/win.

Note: Some vendors recommend turning HT off, such as Microsoft for Exchange, but this recommendation is really only applicable to Exchange running on physical servers. For virtualisation, always leave HT enabled and size workloads like Exchange with a 1:1 vCPU:pCore ratio; you will then achieve consistently high performance.

For anyone struggling with a vendor (like Microsoft) who is insisting on disabling HT when running business critical apps, here is an Example Architectural Decision on Hyperthreading which may help you.

Example Architectural Decision – Hyperthreading with Business Critical Applications (Exchange 2013)

8. Add additional nodes to the cluster

If you have right sized your VMs, migrated them to nodes with complementary workloads, ensured optimal NUMA configurations, ensured critical VMs are running on the highest clock-rate CPUs etc., and you’re still having performance problems, it may be time to bite the bullet and add one or more nodes to the cluster.

Additional nodes provide more CPU cores and therefore more CPU scheduling opportunities.

A common question I get is “Why can’t I just use CPU reservations on my critical VMs to guarantee them 100% of their CPU?”

In short, using CPU reservations does not solve CPU Ready. I have also written an article on this topic: Common Mistake – Using CPU reservations to solve CPU ready.

Wildcard: Add storage only nodes

Wait, what? Why would adding storage only nodes help with CPU contention?

It’s actually pretty simple: lower read/write IO latency means less CPU WAIT, which is the time the CPU spends “waiting” for an IO to complete.

e.g.: If an IO takes 1ms on Nutanix but 5ms on a traditional SAN, then moving the VM to Nutanix means 4ms less CPU WAIT per IO, which means the VM can use its assigned vCPUs more efficiently.
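As a simple illustration of that arithmetic (a sketch only, assuming a synchronous workload issuing one IO at a time, so per-IO latency translates directly into vCPU wait time):

```python
# Illustrative numbers from the example above, not benchmark results.
ios = 1000
san_latency_ms, local_latency_ms = 5.0, 1.0

wait_san = ios * san_latency_ms / 1000.0      # seconds spent waiting per 1000 IOs
wait_local = ios * local_latency_ms / 1000.0

print(f"CPU WAIT saved per {ios} IOs: {wait_san - wait_local:.1f}s")  # 4.0s
```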

Adding storage only nodes (even where the additional capacity is not required) will improve the average read/write latency in the cluster allowing VMs to be scheduled onto a physical core, get the work done, and release the pCore for another VM or to perform other work.

Note: Storage only nodes, and the way data is distributed throughout the cluster, are unique capabilities of Nutanix. See the following article for an example of how performance is improved with storage only nodes with NO modification required to the VMs/apps.

Scale out performance testing with Nutanix Storage Only Nodes

Summary:

There are a lot of things we can do to address CPU Ready issues, including thinking outside the box and enhancing the underlying storage with things like storage only nodes.

Other articles on CPU Ready

1. VM Right Sizing – An Example of the benefits

2. How Much CPU Ready is OK?

3. Common Mistake – Using CPU Reservations to solve CPU Ready

4. High CPU Ready with Low CPU Utilization

Expanding Capacity on a Nutanix environment – Design Decisions

I recently saw an article about design decisions around expanding capacity for a HCI platform which went through the various considerations and made some recommendations on how to proceed in different situations.

While reading the article, it really made me think how much simpler this process is with Nutanix and how these types of areas are commonly overlooked when choosing a platform.

Let’s start with a few basics:

The Nutanix Acropolis Distributed Storage Fabric (ADSF) is made up of all the drives (SSD/SAS/SATA etc.) in all nodes in the cluster. Data is written locally on the node where the VM performing the write resides, and replicas are distributed throughout the cluster based on numerous factors, i.e. there is no pairing, no HA pairs and no preferred nodes.

In the event of a drive failure, regardless of what drive (SSD,SAS,SATA) fails, only that drive is impacted, not a disk group or RAID pack.

This is key as it limits the impact of the failure.

It is important to note that ADSF does not store large objects, nor does the file system require tuning to stripe data across multiple drives/nodes. By default, ADSF distributes the data (at 1MB granularity) in the most efficient manner throughout the cluster, while keeping the hottest data local to ensure the lowest overheads and highest read I/O performance.

Let’s go through a few scenarios, which apply to both All Flash and Hybrid environments.

  1. Expanding capacity

    When adding a node or nodes to an existing cluster, without moving any VMs, changing any configuration or making any design decisions, ADSF will proactively send replicas of new write I/O to all nodes within the cluster, improving performance, while reactively performing disk balancing where a significant imbalance exists within the cluster.

    This might sound odd, but with other HCI products new nodes are not used unless you change the stripe configuration or create new objects (e.g. VMDKs), which means you can have lots of spare capacity in your cluster but still experience an out of space condition.

    This is a great example of why ADSF has a major advantage especially when considering environments with large IO and/or capacity requirements.

    The node addition process only requires the administrator to enter the IP addresses and is basically one click; capacity is available immediately and there is no mass movement of data. There is also no need to move data off and recreate disk groups or similar, as these legacy concepts & complexities do not exist in ADSF.

    Nutanix is also the only platform that allows capacity expansion via storage only nodes, and it supports VMs with larger capacity requirements than a single node can provide. Both are supported out of the box with zero configuration required.

    Interestingly, adding storage only nodes also increases performance and resiliency for the entire cluster, as well as for the management stack including PRISM.

  2. Impact & implications to data reduction of adding new nodesWith ADSF, there are no considerations or implications. Data reduction is truely global throughout the cluster and regardless of hypervisor or if you’re adding Compute+Storage or Storage Only nodes, the benefits particularly of deduplication continue to benefit the environment.

    The net effect of adding more nodes is better performance, higher resiliency, faster rebuilds from drive/node failures and, with global deduplication, a higher chance of duplicate data being found and not stored unnecessarily on physical storage, resulting in a better deduplication ratio.

    No matter what size node/s are added & no matter what Hypervisor, the benefits from data reduction features such as deduplication and compression work at a global level.

    What about Erasure Coding? Nutanix EC-X creates the most efficient stripe based on the cluster size: if you start with a small 4 node cluster your stripe would be 2+1, at 5 nodes the stripe automatically becomes 3+1, and at 6 nodes or more it becomes 4+1, which is currently the largest stripe supported (see the sketch after this list).

  3. Drive Failures

    In the event of a drive failure (SSD/SAS or SATA), as mentioned earlier, only that drive is impacted. Therefore, to restore resiliency, only the data on that drive needs to be repaired, as opposed to something like an entire disk group being marked offline.

    It’s crazy to think a single commodity drive failure in a HCI product could bring down an entire group of drives, causing a significant impact to the environment.

    With Nutanix, a rebuild is performed in a distributed manner throughout all nodes in the cluster, so the larger the cluster, the lower the per node impact and the faster the configured resiliency factor is restored to a fully resilient state.
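Here is the sketch referred to in point 2 above, encoding the EC-X stripe sizing rule as described in this post (a simplification of the product behaviour, for illustration only):

```python
def ecx_stripe(cluster_nodes):
    """EC-X stripe width per the rule described above: 2+1 at 4 nodes,
    3+1 at 5 nodes, 4+1 at 6 or more nodes (the largest stripe mentioned here)."""
    if cluster_nodes < 4:
        return None  # below the smallest cluster size discussed in this post
    data_blocks = min(cluster_nodes - 2, 4)
    return f"{data_blocks}+1"

print([ecx_stripe(n) for n in (4, 5, 6, 8)])  # ['2+1', '3+1', '4+1', '4+1']
```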

At this point you’re probably asking: are there any decisions to make?

When adding any node, compute+storage or storage only, ensure you consider what the impact of a failure of that node will be.

For example, if you add one 15TB storage only node to a cluster of nodes which are only 2TB usable each, then you would need to ensure 15TB of available space to allow the cluster to fully self heal from the loss of the 15TB node. As such, I recommend ensuring your N+1 (or N+2) node/s are equal to the size of the largest node in the cluster from a capacity, performance and CPU/RAM perspective.

So if your biggest node is an NX-8150 with 44 cores / 512GB RAM and 20TB usable, you should have an N+1 node of the same size to cover the worst case scenario of an NX-8150 failing, or have the equivalent resources available within the cluster.
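A quick way to sanity check the capacity side of this rule (a sketch only, ignoring CPU/RAM and assuming usable capacities; the same thinking applies to compute headroom):

```python
def can_self_heal(node_capacities_tb, used_tb):
    """True if the cluster keeps at least the usable capacity of its largest
    node free, so it can fully re-protect data after losing that node."""
    free_tb = sum(node_capacities_tb) - used_tb
    return free_tb >= max(node_capacities_tb)

# Illustrative numbers based on the example above (node count assumed):
nodes_tb = [2, 2, 2, 2, 15]
print(can_self_heal(nodes_tb, used_tb=6))   # True  (17TB free >= 15TB)
print(can_self_heal(nodes_tb, used_tb=12))  # False (11TB free < 15TB)
```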

By following this one simple rule, your cluster will always be able to fully self heal in the event of a failure, and VMs will fail over and perform at comparable levels to before the failure.

Simple as that! No RAID, Disk group, deduplication, compression, failure, or rebuild considerations to worry about.

Summary:

The above are just a few examples of the advantages the Nutanix ADSF provides compared to other HCI products. The operational and architectural complexity of other products can lead to additional risk, inefficient use of infrastructure, misconfiguration and ultimately an environment which does not deliver the business outcome it was originally designed to deliver.