Nutanix Resiliency – Part 9 – Self healing

Nutanix has a number of critically important & unique self healing capabilities which differentiate the platform from not only traditional SAN/NAS arrays but other HCI products.

Nutanix can automatically and fully self heal not only from the loss of SSD/HDD/NVMe devices and node failure/s, but can also fully recover the management stack (PRISM), all without user intervention.

First let’s go through the self healing of the data from device/node failure/s.

Let’s take a simple comparison between a traditional dual controller SAN and the average* size Nutanix cluster of eight nodes.

*Average is calculated as the total number of nodes sold divided by the number of customers globally.

In the event of a single storage controller failure, the SAN/NAS is left with no resiliency and is at the mercy of the service level agreement (SLA) with the vendor to replace the component before resiliency (and in many cases performance) can be restored.

Compare that to Nutanix: only one of the eight storage controllers (or 12.5%) is offline, leaving seven to continue serving the workloads and automatically restore resiliency, typically in just minutes as Part 1 demonstrated.

I’ve previously written a blog titled Hardware support contracts & why 24×7 4 hour onsite should no longer be required which covers this concept in more detail, but long story short, if restoring resiliency of a platform is dependent on the delivery of new parts, or worse, human intervention, the risk of downtime or data loss is exponentially higher than with a platform which can self heal back to a fully resilient state without HW replacement or human intervention.

Some people (or competitors) might argue, “What about a smaller (Nutanix) cluster?”.

I’m glad you asked: even a four node cluster can suffer a node failure and FULLY SELF HEAL into a resilient three node cluster without HW replacement or human intervention.

The only scenario where a Nutanix environment cannot fully self heal to a state where another node failure can be tolerated without downtime is a three node cluster. BUT, in a three node cluster, one node failure can be tolerated: data will be re-protected and the cluster will continue to function with just two nodes. A subsequent node failure would result in downtime, but critically, no data loss would occur.

Critically, drive failures can still be tolerated in this degraded state where only two nodes are running.

Note: In the event of a node failure in a three node vSAN cluster, data is not re-protected and remains at risk until the node is replaced AND the rebuild is complete.

The only prerequisite for Nutanix to be able to perform a complete self heal of the data (and even the management stack, PRISM) is that sufficient capacity exists within the cluster. How much capacity, you ask? I recommend N+1 for RF2 configurations, or N+2 for RF3 configurations, assuming two concurrent failures or one failure followed by a subsequent failure.

So the worst case scenario for the minimum size cluster would be 33% free space for a three node RF2 cluster and 40% for a five node RF3 cluster. However, before the competitors break out the Fear, Uncertainty and Doubt (FUD), let’s look at how much capacity is required for self healing as the cluster sizes increase.

The following table shows the percentage of capacity required to fully self heal based on N+1 and N+2 for cluster sizes up to 32 nodes.

Note: These values assume the worst case scenario that all nodes are at 100% capacity, so in the real world the overhead will be lower than the table indicates.

[Table: Capacity reserved for rebuild (N+1 and N+2) for cluster sizes up to 32 nodes]

As we can see, for an average size (eight node) cluster, the free space required is just 13% (rounded up from 12.5%).

If we take N+2 for an eight node cluster, the MAXIMUM free space required to tolerate two node failures and a full rebuild to a resilient state is still just 25%.
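To make the arithmetic behind those percentages explicit, here is a minimal sketch (illustrative only, assuming the worst case where every node is 100% full) showing that the reserved capacity is simply the number of node failures to tolerate divided by the cluster size:

```python
# Illustrative only: worst case free space required to fully self heal,
# assuming every node is 100% full and all of its data must be rebuilt
# across the surviving nodes (N+1 for RF2, N+2 for RF3).
def rebuild_capacity_pct(nodes: int, node_failures_to_tolerate: int) -> float:
    """Percentage of total cluster capacity to keep free for a full rebuild."""
    return 100.0 * node_failures_to_tolerate / nodes

for nodes in (3, 4, 5, 8, 16, 32):
    n1 = rebuild_capacity_pct(nodes, 1)  # RF2 (N+1)
    n2 = rebuild_capacity_pct(nodes, 2)  # RF3 (N+2)
    print(f"{nodes:>2} nodes: N+1 = {n1:6.2f}%   N+2 = {n2:6.2f}%")

# e.g. 8 nodes: N+1 = 12.50%, N+2 = 25.00%, matching the figures above.
```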

It is important to note that thanks to the Nutanix Distributed Storage Fabric (ADSF), the free space does not need to account for large objects (e.g.: 256GB) as Nutanix uses 1MB extents which are evenly distributed throughout the cluster, so there is no wasted space due to fragmentation, unlike less advanced platforms.

Note: The size of nodes in the cluster does not impact the capacity required for a rebuild.

A couple of advantages ADSF has over other platforms are that Nutanix has neither the concept of a “cache drive” nor the construct of “disk groups”.

Using disk groups is a high risk to resiliency as a single “cache” drive failure can take an entire disk group (made up of several drives) offline, forcing a much more intensive rebuild operation than is required. A single drive failure in ADSF is just that, a single drive failure, and only the data on that drive needs to be rebuilt, which is of course done in an efficient distributed manner (i.e.: a “Many to Many” operation as opposed to a “One to One” like other products).
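As a rough illustration of why the “Many to Many” approach matters, here is a simplified model using assumed values for drive size, participating node count and per node rebuild throughput (not measured figures), comparing the time to re-protect a failed 8TB drive when one node does all the work versus when seven surviving nodes contribute:

```python
# Simplified rebuild time model. The drive size, node count and per node
# rebuild throughput are assumed values for illustration, not benchmarks.
DRIVE_TB = 8
PER_NODE_REBUILD_MBps = 500  # assumed rebuild bandwidth each node contributes

def rebuild_hours(data_tb: float, participating_nodes: int) -> float:
    """Hours to re-protect the data when 'participating_nodes' share the work."""
    total_mbps = PER_NODE_REBUILD_MBps * participating_nodes
    return data_tb * 1024 * 1024 / total_mbps / 3600

print(f"One to One   (1 node) : {rebuild_hours(DRIVE_TB, 1):.1f} hours")
print(f"Many to Many (7 nodes): {rebuild_hours(DRIVE_TB, 7):.1f} hours")
```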

The only time a single drive failure causes an issue on Nutanix is with single SSD systems, where it is the equivalent of a node failure, but to be clear this is not a limitation of ADSF, just of the hardware specification chosen.

For production environments, I don’t recommend the use of single SSD systems as the resiliency advantages of a dual SSD system far outweigh the minimal additional cost.

Interesting point: vSAN is arguably always a single SSD system since a “Disk group” has just one “cache drive” making it a single point of failure.

I’m frequently asked what happens after a cluster self heals and another failure occurs. Back in 2013 when I started with Nutanix I presented a session at vForum Sydney where I covered this topic in depth. The session was standing room only and as a result of its popularity I wrote the following blog post which shows how a five node cluster can self heal from a failure into a fully resilient four node cluster and then tolerate another failure and self heal to a three node cluster.

This capability is nothing new, and it remains far and away the most resilient architecture in the market, even compared to newer platforms.

Scale Out Shared Nothing Architecture Resiliency by Nutanix

When you need to allow for failures of constructs such as “Disk Groups”, the amount of free space you need to reserve for failures is much higher, as we can learn from a recent VMware vSAN article titled “vSAN degraded device handling“.

Two key quotes to consider from the article are:

“We strongly recommend keeping 25-30% free “slack space” capacity in the cluster.”

“If the drive is a cache device, this forces the entire disk group offline.”

When you consider the flaws in the underlying vSAN architecture it becomes logical why VMware recommend 25-30% free space in addition to FTT2 (three copies of data).

Next let’s go through the self healing of the Management stack from node failures.

All components which are required to Configure, Manage, Monitor, Scale and Automate are fully distributed across all nodes within the cluster. There is no requirement for customers to deploy additional management components for core functionality (e.g.: unlike vSAN/ESXi which requires vCenter).

There is also no need for users to make the management stack highly available, again unlike vSAN/ESXi.

As a result, there is no single point of failure with the Nutanix/Acropolis management layer.

Let’s take a look at a typical four node cluster:

Below we see four Controller VMs (CVMs) which service one node each. In the cluster we have an Acropolis Master along with multiple Acropolis Slave instances.

[Image: Four node cluster with one Acropolis Master and three Acropolis Slave instances]

In the event the Acropolis Master becomes unavailable for any reason, an election will take place and one of the Acropolis Slaves will be promoted to Master.

This can be achieved because Acropolis data is stored in a fully distributed Cassandra database which is protected by the Acropolis Distributed Storage Fabric.
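Conceptually, the promotion works along the lines of the generic leader election sketch below. This is an illustration of the concept only, using a hypothetical shared store with compare-and-swap and leases, not the actual Acropolis implementation:

```python
import time
import threading

# Generic leader election sketch: a shared, replicated store with an atomic
# compare-and-swap plus a lease the master must keep refreshing. Conceptual
# illustration only, NOT the actual Acropolis implementation.
LEASE_SECONDS = 10

class SharedStore:
    """Stands in for a highly available, replicated metadata store."""
    def __init__(self):
        self._lock = threading.Lock()
        self._data = {}

    def get(self, key):
        with self._lock:
            return self._data.get(key)

    def compare_and_swap(self, key, expected, new):
        # Atomic: only update if the value is still what we last observed.
        with self._lock:
            if self._data.get(key) == expected:
                self._data[key] = new
                return True
            return False

def try_become_master(store, node_name):
    """Promote this node if there is no master or the current lease has expired."""
    current = store.get("acropolis_master")
    if current is None or current[1] < time.time():
        new_lease = (node_name, time.time() + LEASE_SECONDS)
        return store.compare_and_swap("acropolis_master", current, new_lease)
    return False

store = SharedStore()
print(try_become_master(store, "CVM-2"))  # True: no live master, CVM-2 promoted
print(try_become_master(store, "CVM-3"))  # False: CVM-2 still holds its lease
```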

When an additional Nutanix node is added to the cluster, an Acropolis Slave is also added, which allows the workload of managing the cluster to be distributed, therefore ensuring management never becomes a point of contention.

[Image: Five node cluster showing the additional Acropolis Slave instance]

Performance monitoring, stats collection and Virtual Machine console proxy connections are just a few of the management tasks which are serviced by the Master and Slave instances.

Another advantage of Nutanix is that the management layer never needs to be sized or scaled manually. There are no vApp/s, Database Server/s or Windows instances to deploy, install, configure, manage or license, therefore reducing cost and simplifying management of the environment.

Key point:

  1. The Nutanix Acropolis Management stack is automatically scaled as nodes are added to the cluster, therefore increasing consistency, resiliency and performance while eliminating the potential for architectural (sizing) errors which may impact manageability.

The reason I’m highlighting a competitor’s product is because it’s important for customers to understand the underlying differences, especially when it comes to critical factors such as resiliency for both the data and management layers.

Summary:

Nutanix ADSF provides excellent self healing capabilities without the requirement for hardware replacement for both the data and management planes and only requires the bare minimum capacity overheads to do so.

If a vendor led with any of the below statements (all true of vSAN), I bet the conversation would come to an abrupt halt.

  1. A single SSD is a single point of failure and its failure causes multiple drives to concurrently go offline, meaning we need to rebuild all that data
  2. We strongly recommend keeping 25-30% free “slack space” capacity in the cluster
  3. Rebuilds are a slow, One to One operation and in some cases do not start for 60 mins.
  4. In the event of a node failure in a three node vSAN cluster, data is not re-protected and remains at risk until the node is replaced AND the rebuild is complete.

When choosing a HCI product, consider its self healing capabilities for both the data and management layers as both are critical to the resiliency of your infrastructure. Don’t put yourself at risk of downtime by being dependent on hardware replacements being delivered in a timely manner. We’ve all experienced or at least heard of horror stories where vendor HW replacement SLAs have not been met due to parts not being available, so be smart, choose a platform which minimises risk by fully self healing.

Index:
Part 1 – Node failure rebuild performance
Part 2 – Converting from RF2 to RF3
Part 3 – Node failure rebuild performance with RF3
Part 4 – Converting RF3 to Erasure Coding (EC-X)
Part 5 – Read I/O during CVM maintenance or failures
Part 6 – Write I/O during CVM maintenance or failures
Part 7 – Read & Write I/O during Hypervisor upgrades
Part 8 – Node failure rebuild performance with RF3 & Erasure Coding (EC-X)
Part 9 – Self healing
Part 10 – Disk Scrubbing / Checksums

Dare2Compare Part 5 : Nutanix can’t claim single screen management w/o extra fees or GUIs

If you’ve not read Parts 1, 2, 3 and 4, we have already proven several claims by HPE Simplivity regarding Nutanix to be false, as well as explored the misleading way in which HPE SVT promote data efficiency.

The fun continues and in Part 5 we will discuss HPE’s claim that Nutanix does not have a “single screen management” (by which I assume they mean Single Pane of Glass) without extra fees or GUIs.

Unfortunately the URL in the HPE tweet was not working. I responded and made HPE aware of this so I could review specifically what they are claiming, but the link at the time of writing is still not working.

It’s funny HPE SVT mention this because Nutanix is the only HCI product which has a built in, distributed, scalable and multi hypervisor management solution.

The fact Nutanix has its own interface is a huge advantage, especially because Nutanix is not dependent on any 3rd parties (e.g.: VMware vCenter) to install/configure and manage our platform. This reduces cost, complexity, risk, operational tasks and the list goes on.

Nutanix “PRISM Element” HTML 5 GUI is built into every Nutanix solution regardless of hypervisor or underlying hardware. The below screenshot shows the built in management capabilities to upgrade the Nutanix Acropolis (AOS) storage layer, the built in, scale out file server, the hypervisor (ESXi, Hyper-V or AHV), as well as firmware, our container support and our built in cluster imaging tool, Foundation.

[Screenshot: PRISM Upgrade Software menu]

This means regardless of hypervisor, many of the critical tasks can be performed straight within PRISM and do not require the long in the tooth VMware Update Manager (VUM) which is long overdue for an overhaul. In fact, Nutanix supports four (4) hypervisors using our management tool (PRISM) whereas HPE SVT only has GA support for ESXi.

For customers using Acropolis Hypervisor (AHV), 100% of the management can be performed within PRISM Element and central management of multiple clusters is performed through PRISM Central.

AHV comes with all Nutanix solutions at no extra cost regardless of hardware choice (including HPE Proliant). This means customers enjoy the benefits of the next generation hypervisor, designed and built for HCI and Enterprise Cloud.

Unlike HPE SVT for example, Nutanix does not have a limit of 8 nodes per datacenter or 32 per “federation”. PRISM Element can support a cluster of any size (currently no support limits) and PRISM Central manages all the clusters.

Nutanix management is not tied to or, more importantly, dependent on VMware vCenter or any other hypervisor management tool, which adds to the resiliency and simplicity of the Nutanix platform. PRISM automatically scales in both performance and resiliency as a cluster expands to ensure consistent performance for system administrators. This avoids the complexity of designing/installing and maintaining a highly available vCenter solution which also uses additional compute and storage resources.

Summary:

  1. Nutanix PRISM Element GUI is built in and comes included with every Nutanix deployment
  2. Nutanix PRISM is not limited by the number of nodes it can manage
  3. PRISM Central is used to manage multiple Nutanix clusters centrally if required but is not mandatory.
  4. Nutanix provides at no cost the next generation hypervisor (AHV) which has 100% of all management performed within PRISM GUIs.
  5. AHV eliminates the requirement for hypervisor licensing (e.g.: VMware vSphere) which actually reduces overall costs. This is unique to Nutanix.
  6. PRISM supports 4 hypervisors (ESXi, Hyper-V, AHV and XenServer) which delivers a consistent management interface for multi-hypervisor environments which are becoming more and more common.

Many of the above points are unique to Nutanix and have been designed and built to be a truly webscale platform, not a ROBO/SMB or <32 node solution. Nutanix can start small and continue to scale to any size, with the PRISM Element management stack automatically scaling to suit as nodes are added.

Return to the Dare2Compare Index:

Dare2Compare Part 4 : HPE provides superior resiliency than Nutanix?

As discussed in Part 1, we have proven HPE have made false claims about Nutanix snapshot capabilities as part of the #HPEDare2Compare twitter campaign.

In part 2, I explained how HPE/Simplivity’s 10:1 data reduction HyperGuarantee is nothing more than smoke and mirrors and that most vendors can provide the same if not greater efficiencies, even without hardware acceleration.

In part 3, I corrected HPE on their false claim that Nutanix cannot support dedupe without 8vCPUs and in part 4, I will respond to the claim (below) that Nutanix has less resiliency than HPE Simplivity 380.

To start with, the biggest causes of data loss, downtime and outages in my experience are human error: poor design, improper use of a product, poor implementation/validation, and a lack of operational procedures or the discipline to follow them. The number of times I’ve seen properly designed solutions have issues I can count on one hand.

Those rare situations have come down to multiple concurrent failures at different levels of the solution (e.g.: Infrastructure, Application, OS etc), not just things like one or more drive or server failures.

Nonetheless, HPE Simplivity are commonly targeting Resiliency Factor 2 (RF2) and claiming it not to be resilient because they lack a basic understanding of the Acropolis Distributed Storage Fabric, how it distributes data, how it rebuilds from failures and therefore how resilient it is.

RF2 is often mistakenly compared to RAID 5, where a single drive failure takes a long time to rebuild and subsequent failures during rebuilds are not uncommon which would lead to a data loss scenario (for RAID 5).

Let’s talk about some failure scenarios comparing HPE Simplivity to Nutanix.

Note: The below information is accurate to the best of my knowledge, testing and experience with both products.

When is a write acknowledged to the Virtual machine

HPE Simplivity – They use what they refer to as an Omnistack Accelerator card (OAC) which uses “Super capacitors to provide power to the NVRAM upon a power loss”. When a write hits the OAC it is then acknowledged to the VM. It is assumed, or even likely, that the capacitors will provide sufficient power to commit the writes persistently to flash, but the fact is that writes are acknowledged BEFORE they are committed to persistent media. HPE will surely argue the OAC is persistent, but until the data is on something such as a SATA-SSD drive I do not consider it persistent, and I invite you to ask your trusted advisor/s their opinion because this is a grey area at best.

This can be confirmed on Page 29 of the SimpliVity Hyperconverged Infrastructure Technology Overview:

[Screenshot: Page 29 of the SimpliVity Hyperconverged Infrastructure Technology Overview describing OAC behaviour on power loss]

Nutanix – Writes are only acknowledged to the Virtual Machine when the write IO has been checksummed and confirmed written to persistent media (e.g.: SATA-SSD) on the number of nodes/drives based on the configured Resiliency Factor (RF).

Writes are never written to RAM or any other non persistent media and at any stage you can pull the power from a Nutanix node/block/cluster and 100% of the data will be in a consistent state. i.e.: It was written and acknowledged, or it was not written and therefore not acknowledged.
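The sketch below illustrates that write path conceptually. It is a simplified, hypothetical example (the function names, checksum choice and replica paths are assumptions for illustration, not Nutanix code): the acknowledgement only fires once the checksummed data has been durably written on the configured number of replicas.

```python
import hashlib
import os

# Conceptual sketch of an RF aware write path: the guest write is only
# acknowledged once the checksummed data is durably written (fsync'd) on
# 'replication_factor' replicas. Illustrative only, not Nutanix code.
def durable_write(path: str, data: bytes) -> str:
    checksum = hashlib.sha256(data).hexdigest()
    with open(path, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())  # force the data onto persistent media
    return checksum

def replicated_write(data: bytes, replica_paths: list, replication_factor: int = 2) -> bool:
    checksums = [durable_write(p, data) for p in replica_paths[:replication_factor]]
    # Only acknowledge to the VM if every replica persisted and checksums agree.
    return len(checksums) == replication_factor and len(set(checksums)) == 1

ok = replicated_write(b"guest write IO", ["/tmp/replica_a", "/tmp/replica_b"])
print("ACK to VM" if ok else "no ACK")
```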

The fact Nutanix only acknowledges writes when data is written to persistent media on two or more hosts makes the platform compliant with FUA and Write Through, which for HPE SVT is, at best, dependent on power protection (UPS and/or OAC capacitors). This means Nutanix is more resilient (less risk) and has a higher level of data integrity than the HPE SVT product.

Check out “Ensuring Data Integrity with Nutanix – Part 2 – Forced Unit Access (FUA) & Write Through” for more information. It explains how Nutanix is compliant with critical data integrity protocols such as FUA and Write Through, and you can make up your own mind as to whether the HPE product is. Hint: A product is not compliant with FUA unless data is written to persistent media before acknowledgement.

Single Drive (NVMe/SSD/HDD) failure

HPE Simplivity – Protects data with RAID 6 (or RAID 5 on small nodes) + Replication (2 copies). A single drive failure causes a RAID rebuild which is a medium/high impact activity for the RAID group. RAID rebuilds are well known to be slow, which is one reason why HPE chooses (and wisely so) to use low capacity spindles to minimise the impact of RAID rebuilds. But this choice to use RAID and smaller drives has implications around cost/capacity/rack unit/power/cooling and so on.

Nutanix – Protects data with configurable Replication Factor (2 or 3 copies, or N+1 and N+2) along with rack unit (block) awareness. A single drive failure causes a distributed rebuild of the data contained on the failed drive across all nodes within the cluster. This distributed rebuild is evenly balanced throughout the cluster for low impact and faster time to recover. This allows Nutanix to support large capacity spindles, such as 8TB SATA.

Two concurrent drive (NVMe/SSD/HDD) failures *Same Node

HPE Simplivity – RAID 6 + Replication (2 copies) supports the loss of two drives and, as with a single drive failure, causes a RAID rebuild which is a medium/high impact activity for the RAID group.

Nutanix – Two drive failures cause a distributed rebuild of the data contained on the failed drives across all nodes within the cluster. This distributed rebuild is evenly balanced throughout the cluster for low impact and faster time to recover. This allows Nutanix to support large capacity spindles, such as 8TB SATA. No data is lost even when using Resiliency Factor 2 (which is N+1), despite what HPE claims. This is an example of the major advantage Nutanix Acropolis Distributed File System has over the RAID and mirroring type architecture of HPE SVT.

Three concurrent drive (NVMe/SSD/HDD) failures *Same Node

HPE Simplivity – RAID 6 + Replication (2 copies) supports the loss of only two drives per RAID group, so at this stage the RAID group has failed and all data must be rebuilt.

Nutanix – Three drive failures again just cause a distributed rebuild of the data contained on the failed drives (in this case, 3) across all nodes within the cluster. This distributed rebuild is evenly balanced throughout the cluster for low impact and faster time to recover. This allows Nutanix to support large capacity spindles, such as 8TB SATA. No data is lost even when using Resiliency Factor 2 (which is N+1), again despite what HPE claims. This is an example of the major advantage Nutanix Acropolis Distributed File System has over the RAID and mirroring type architecture of HPE SVT.

Four or more concurrent drive (NVMe/SSD/HDD) failures *Same Node

HPE Simplivity – RAID 6 + Replication (2 copies) supports the loss of only two drives per RAID group; any failure of three or more drives results in a failed RAID group and a total rebuild of the data is required.

Nutanix – Nutanix can support N-1 drive failures per node, meaning in a 24 drive system, such as the NX-8150, 23 drives can be lost concurrently without the node going offline and without any data loss. The only caveat is the lone surviving drive for a hybrid platform must be an SSD. This is an example of the major advantage Nutanix Acropolis Distributed File System has over the RAID and mirroring type architecture of HPE SVT.

Next let’s cover off failure scenarios across multiple nodes.

Two concurrent drive (NVMe/SSD/HDD) failures in the same cluster.

HPE Simplivity – RAID 6 protects from 2 drive failures locally per RAID group whereas Replication (2 copies) supports the loss of one copy (N-1). Assuming the RAID groups are intact, data would not be lost.

Nutanix – Nutanix has configurable resiliency (Resiliency Factor) of either 2 copies (RF2) or three copies (RF3). Using RF3, under any two drive failure scenario there is no data loss and it causes a distributed rebuild of the data contained on the failed drives across all nodes within the cluster.

When using RF2 and block (rack unit) awareness, in the event two or more drives fail within a block (which is up to 4 nodes and 24 SSDs/HDDs), there is no data loss. In fact, in this configuration Nutanix can support the loss of up to 24 drives concurrently, e.g.: 4 entire nodes and their 24 drives, without data loss/unavailability.

When using RF3 and block awareness, Nutanix can support the loss of up to 48 drives concurrently e.g.: 8 entire nodes and 48 drives without data loss/unavailability.
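The arithmetic behind the 24 and 48 drive figures is straightforward, assuming the example block of 4 nodes with 6 drives each used above (these block/drive counts are the example values, not fixed limits):

```python
# Arithmetic for the block awareness examples above, assuming a block of
# 4 nodes with 6 drives each (24 drives per block). Illustrative only.
NODES_PER_BLOCK = 4
DRIVES_PER_NODE = 6

def max_concurrent_drive_loss(replication_factor: int) -> int:
    # With block awareness, RF - 1 copies of the data reside outside the
    # failed block/s, so RF - 1 whole blocks worth of drives can be lost.
    blocks_tolerated = replication_factor - 1
    return blocks_tolerated * NODES_PER_BLOCK * DRIVES_PER_NODE

print(max_concurrent_drive_loss(2))  # RF2 + block awareness: 24 drives (1 block)
print(max_concurrent_drive_loss(3))  # RF3 + block awareness: 48 drives (2 blocks)
```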

Under no circumstances can HPE Simplivity support the loss of ANY 48 drives (e.g.: 2 HPE SVT nodes w/ 24 drives each) and maintain data availability.

This is another example of the major advantage Nutanix Acropolis Distributed File System has over the RAID and mirroring type architecture of HPE SVT. Nutanix distributes all data throughout the ADSF cluster, which is something HPE SVT cannot do which impacts both performance and resiliency.

Two concurrent node failures (including all their NVMe/SSD/HDD drives) in the same cluster.

HPE Simplivity – If the two HPE SVT nodes mirroring the data both go offline, you have data unavailability at best, and data loss at worst. As HPE SVT is not a cluster (note the careful use of the term “Federation”), it scales essentially in pairs, and both nodes within a pair cannot fail concurrently.

Nutanix – With RF3 even without the use of block awareness, any two nodes and all drives within those nodes can be lost, with no data unavailability.

Three or more concurrent node failures (including all their NVMe/SSD/HDD drives) in the same cluster.

HPE Simplivity – As previously discussed, HPE SVT cannot support the loss of any two nodes, so three or more makes matters worse.

Nutanix – With RF3 and block awareness, up to eight (yes 8!!) nodes can be lost along with all drives within those nodes, with no data unavailability. That’s up to 48 SSDs/HDDs concurrently failing without data loss.

So we can clearly see Nutanix provides a highly resilient platform and there are numerous configurations which ensure two drive failures do not cause data loss despite what the HPE campaign suggests.

The above tweet would be like me configuring a HPE Proliant server with RAID 5 and complaining HPE lost my data when two drives fail; it’s just ridiculous.

The key point here is, when deploying any technology, to understand your requirements and configure the underlying platform to meet/exceed your resiliency requirements.

Installation/Configuration

HPE Simplivity – Dependent on vCenter.

Nutanix – Uses PRISM which is a fully distributed HTML 5 GUI with no external dependencies regardless of hypervisor choice (ESXi, AHV, Hyper-V and XenServer). In the event any hypervisor management tool (e.g.: vCenter) is down, PRISM is fully functional.

Management (GUI)

HPE Simplivity – Uses a vCenter backed GUI. If vCenter is down, Simplivity cannot be fully managed. Best case, vCenter HA is in use and management suffers only a short interruption.

Nutanix – Uses PRISM which is a fully distributed HTML 5 GUI with no external dependencies regardless of hypervisor choice (ESXi, AHV, Hyper-V and XenServer). In the event any hypervisor management tool (e.g.: vCenter) is down, PRISM is fully functional.

In the event of node failure/s, PRISM, being a distributed management layer, continues to operate.

Data Availability:

HPE Simplivity – RAID 6 (or RAID 60) + Replication (2 copies), Deduplication and Compression for all data. Not configurable.

Nutanix – Configurable resiliency and data reduction with:

  1. Resiliency Factor 2 (RF2)
  2. Resiliency Factor 3 (RF3)
  3. Resiliency Factor 2 with Block Awareness
  4. Resiliency Factor 3 with Block Awareness
  5. Erasure Coding / Deduplication / Compression in any combination across all resiliency types.

Key point:

Nutanix can scale out with compute+storage OR storage only nodes. In either case, resiliency of the cluster is increased as all nodes (or better said, Controllers) in our distributed storage fabric (ADSF) help with the distributed rebuild in the event of drive/s or node/s failures, therefore restoring the cluster to a fully resilient state faster and enabling it to support subsequent failures.

HPE Simplivity – Due to HPE SVT’s platform not being a distributed file system, and working in a mirror style configuration, adding additional nodes up to the “per datacenter” limit of eight (8) does not increase resiliency. As such the platform does not improve as it grows, which is a strength of the Nutanix platform.

Summary:

Thanks to our Acropolis Distributed Storage Fabric (ADSF) and without the use of legacy RAID technology, Nutanix can support:

  1. Equal or more concurrent drive failures per node than HPE Simplivity
  2. Equal or more concurrent drive failures per cluster than HPE Simplivity
  3. Equal or more concurrent node failures than HPE Simplivity
  4. Failure of hypervisor management layer e.g.: vCenter with full GUI functionality

Nutanix also has the following capabilities over and above the HPE SVT offering:

  1. Configurable resiliency and data reduction on a per vDisk level
  2. Nutanix resiliency/recoverability improves as the cluster grows
  3. Nutanix does not require any UPS or power protection to be compliant with FUA & Write Through

HPE SVT is less resilient during the write path because:

  1. HPE SVT acknowledge writes before committing data to persistent media (by their own admission)

Return to the Dare2Compare Index: