Dare2Compare Part 4 : HPE provides superior resiliency to Nutanix?

As discussed in Part 1, we have proven HPE has made false claims about Nutanix snapshot capabilities as part of the #HPEDare2Compare Twitter campaign.

In Part 2, I explained how HPE/Simplivity’s 10:1 data reduction HyperGuarantee is nothing more than smoke and mirrors, and that most vendors can provide the same if not greater efficiencies, even without hardware acceleration.

In Part 3, I corrected HPE on their false claim that Nutanix cannot support deduplication without 8 vCPUs, and in Part 4 I will respond to the claim (below) that Nutanix is less resilient than the HPE Simplivity 380.

To start with, in my experience the biggest causes of data loss, downtime and outages are human error: poor design, improper use of a product, poor implementation/validation, and a lack of operational procedures or the discipline to follow them. The number of times I’ve seen properly designed solutions have issues, I can count on one hand.

Those rare situations have come down to multiple concurrent failures at different levels of the solution (e.g.: infrastructure, application, OS etc), not just things like one or more drive or server failures.

Nonetheless, HPE Simplivity commonly targets Resiliency Factor 2 (RF2) and claims it is not resilient, which shows a lack of basic understanding of the Acropolis Distributed Storage Fabric: how it distributes data, how it rebuilds from failures, and therefore how resilient it actually is.

RF2 is often mistakenly compared to RAID 5, where a single drive failure takes a long time to rebuild and subsequent failures during the rebuild are not uncommon, which would lead to data loss (for RAID 5).

Let’s talk about some failure scenarios, comparing HPE Simplivity to Nutanix.

Note: The information below is accurate to the best of my knowledge, testing and experience with both products.

When is a write acknowledged to the virtual machine?

HPE Simplivity – They use what they refer to as an OmniStack Accelerator Card (OAC), which uses “Super capacitors to provide power to the NVRAM upon a power loss”. When a write hits the OAC, it is then acknowledged to the VM. It is assumed, or even likely, that the capacitors will provide sufficient power to commit the writes persistently to flash, but the fact is that writes are acknowledged BEFORE they are committed to persistent media. HPE will surely argue the OAC is persistent, but until the data is on something such as a SATA-SSD drive I do not consider it persistent, and I invite you to ask your trusted advisor/s their opinion, because this is a grey area at best.

This can be confirmed on Page 29 of the SimpliVity Hyperconverged Infrastructure Technology Overview:

[Image: excerpt from page 29 of the SimpliVity Hyperconverged Infrastructure Technology Overview describing OAC behaviour on power loss]

Nutanix – Writes are only acknowledged to the Virtual Machine when the write IO has been checksummed and confirmed written to persistent media (e.g.: SATA-SSD) on the number of nodes/drives based on the configured Resiliency Factor (RF).

Writes are never written to RAM or any other non-persistent media, and at any stage you can pull the power from a Nutanix node/block/cluster and 100% of the data will be in a consistent state. i.e.: It was written and acknowledged, or it was not written and therefore not acknowledged.

The fact that Nutanix only acknowledges writes when data is written to persistent media on two or more hosts makes the platform compliant with FUA and Write Through, which for HPE SVT is, in the best case, dependent on power protection (UPS and/or OAC capacitors). This means Nutanix is more resilient (lower risk) and has a higher level of data integrity than the HPE SVT product.

Check out “Ensuring Data Integrity with Nutanix – Part 2 – Forced Unit Access (FUA) & Write Through” for more information. It explains how Nutanix is compliant with critical data integrity protocols such as FUA and Write Through, and you can make up your own mind as to whether the HPE product is or not. Hint: A product is not compliant with FUA unless data is written to persistent media before acknowledgement.
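
To make the write path above concrete, below is a minimal sketch (purely illustrative Python, not Nutanix code) of the acknowledgement rule described: a write is checksummed and must be durably committed on the number of replicas dictated by the Resiliency Factor before the VM ever sees an acknowledgement. The `Replica` class and function names are hypothetical.

```python
import hashlib
from dataclasses import dataclass

@dataclass
class Replica:
    """Hypothetical stand-in for a drive on a node that offers durable writes."""
    node_id: str

    def commit_to_persistent_media(self, data: bytes, checksum: str) -> bool:
        # A real implementation would return True only once the data (and checksum)
        # sit on persistent media such as a SATA-SSD, never while the write is only
        # in RAM or in a cache that relies on capacitors/UPS to be flushed.
        return True  # placeholder for illustration

def write_from_vm(data: bytes, replicas: list[Replica], rf: int = 2) -> bool:
    """Acknowledge the VM's write only after `rf` replicas confirm a durable commit."""
    checksum = hashlib.sha1(data).hexdigest()  # checksummed before acknowledgement
    committed = sum(r.commit_to_persistent_media(data, checksum) for r in replicas[:rf])
    return committed == rf  # True -> ACK to the VM; otherwise the write is never acknowledged
```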

Single Drive (NVMe/SSD/HDD) failure

HPE Simplivity – Protects data with RAID 6 (or RAID 5 on small nodes) + Replication (2 copies). A single drive failure causes a RAID rebuild, which is a medium/high impact activity for the RAID group. RAID rebuilds are well known to be slow, which is one reason why HPE chooses (and wisely so) to use low capacity spindles to minimise the impact of RAID rebuilds. But this choice of RAID and smaller drives has implications for cost, capacity, rack units, power, cooling and so on.

Nutanix – Protects data with configurable Replication Factor (2 or 3 copies, or N+1 and N+2) along with rack unit (block) awareness. A single drive failure causes a distributed rebuild of the data contained on the failed drive across all nodes within the cluster. This distributed rebuild is evenly balanced throughout the cluster for low impact and faster time to recover. This allows Nutanix to support large capacity spindles, such as 8TB SATA.
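
To illustrate why a distributed rebuild restores resiliency faster, here is a rough back-of-the-envelope sketch. The 200MB/s per-node rebuild rate is an assumed figure for illustration only, not a published number; the point is simply that the rebuild work is shared by every surviving node, so per-node impact drops and aggregate rebuild throughput grows with the cluster.

```python
def distributed_rebuild_hours(failed_data_tb: float, nodes: int,
                              per_node_rebuild_mbps: float = 200.0) -> float:
    """Rough time to re-protect a failed drive's data when every surviving
    node shares the work. All figures are illustrative assumptions."""
    peers = nodes - 1                                         # surviving nodes share the rebuild
    aggregate_mbps = peers * per_node_rebuild_mbps
    seconds = (failed_data_tb * 1_000_000) / aggregate_mbps   # TB -> MB
    return seconds / 3600

# Re-protecting 8TB of data from a failed large-capacity drive:
print(round(distributed_rebuild_hours(8, nodes=4), 2))    # ~3.7 hours across 3 peers
print(round(distributed_rebuild_hours(8, nodes=16), 2))   # ~0.74 hours across 15 peers
```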

Two concurrent drive (NVMe/SSD/HDD) failures *Same Node

HPE Simplivity – RAID 6 + Replication (2 copies) supports the loss of two drives and, as with a single drive failure, this causes a RAID rebuild, which is a medium/high impact activity for the RAID group.

Nutanix – Two drive failures cause a distributed rebuild of the data contained on the failed drives across all nodes within the cluster. This distributed rebuild is evenly balanced throughout the cluster for low impact and a faster time to recover. This allows Nutanix to support large capacity spindles, such as 8TB SATA. No data is lost even when using Resiliency Factor 2 (which is N+1), despite what HPE claims. This is an example of the major advantage the Nutanix Acropolis Distributed File System has over the RAID and mirroring type architecture of HPE SVT.

Three concurrent drive (NVMe/SSD/HDD) failures *Same Node

HPE Simplivity – RAID 6 + Replication (2 copies) supports the loss of only two drives per RAID group; at this stage the RAID group has failed and all data must be rebuilt.

Nutanix – Three drive failures again just cause a distributed rebuild of the data contained on the failed drives (in this case, 3) across all nodes within the cluster. This distributed rebuild is evenly balanced throughout the cluster for low impact and a faster time to recover. This allows Nutanix to support large capacity spindles, such as 8TB SATA. No data is lost even when using Resiliency Factor 2 (which is N+1), again despite what HPE claims. This is an example of the major advantage the Nutanix Acropolis Distributed File System has over the RAID and mirroring type architecture of HPE SVT.

Four or more concurrent drive (NVMe/SSD/HDD) failures *Same Node

HPE Simplivity – RAID 6 + Replication (2 copies) supports the loss of only two drives per RAID group; a third or subsequent failure results in a failed RAID group and a total rebuild of the data is required.

Nutanix – Nutanix can support N-1 drive failures per node, meaning in a 24 drive system, such as the NX-8150, 23 drives can be lost concurrently without the node going offline and without any data loss. The only caveat is the lone surviving drive for a hybrid platform must be an SSD. This is an example of the major advantage Nutanix Acropolis Distributed File System has over the RAID and mirroring type architecture of HPE SVT.

Next, let’s cover failure scenarios across multiple nodes.

Two concurrent drive (NVMe/SSD/HDD) failures in the same cluster.

HPE Simplivity – RAID 6 protects from 2 drive failures locally per RAID group, whereas Replication (2 copies) supports the loss of one copy (N-1). Assuming the RAID groups are intact, data would not be lost.

Nutanix – Nutanix has configurable resiliency (Resiliency Factor) of either two copies (RF2) or three copies (RF3). Using RF3, there is no data loss under any two-drive failure scenario; the failures simply cause a distributed rebuild of the data contained on the failed drives across all nodes within the cluster.

When using RF2 and block (rack unit) awareness, in the event two or more drives fail within a block (a block being up to 4 nodes and up to 24 SSDs/HDDs), there is no data loss. In fact, in this configuration Nutanix can support the loss of up to 24 drives concurrently, e.g.: 4 entire nodes and 24 drives, without data loss/unavailability.

When using RF3 and block awareness, Nutanix can support the loss of up to 48 drives concurrently e.g.: 8 entire nodes and 48 drives without data loss/unavailability.

Under no circumstances can HPE Simplivity support the loss of ANY 48 drives (e.g.: 2 HPE SVT nodes w/ 24 drives each) and maintain data availability.

This is another example of the major advantage Nutanix Acropolis Distributed File System has over the RAID and mirroring type architecture of HPE SVT. Nutanix distributes all data throughout the ADSF cluster, which is something HPE SVT cannot do which impacts both performance and resiliency.
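
Below is a simplified sketch of what block (rack unit) awareness means in practice: replicas of each piece of data are placed on nodes in different blocks, so losing an entire block never removes every copy. The node/block layout and the placement function are hypothetical examples, not Nutanix internals.

```python
# Hypothetical cluster layout: node -> block (rack unit) it lives in.
NODE_TO_BLOCK = {
    "node1": "blockA", "node2": "blockA", "node3": "blockA", "node4": "blockA",
    "node5": "blockB", "node6": "blockB", "node7": "blockB", "node8": "blockB",
    "node9": "blockC", "node10": "blockC",
}

def place_replicas(local_node: str, rf: int) -> list[str]:
    """Pick `rf` nodes for a piece of data: one copy local, the remaining copies
    on nodes in *different* blocks, so the loss of any single block leaves at
    least one copy available (the behaviour described above as block awareness)."""
    placement = [local_node]
    used_blocks = {NODE_TO_BLOCK[local_node]}
    for node, block in NODE_TO_BLOCK.items():
        if len(placement) == rf:
            break
        if block not in used_blocks:
            placement.append(node)
            used_blocks.add(block)
    return placement

print(place_replicas("node1", rf=2))  # e.g. ['node1', 'node5'] -> copies span 2 blocks
print(place_replicas("node1", rf=3))  # e.g. ['node1', 'node5', 'node9'] -> copies span 3 blocks
```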

Two concurrent node failures in the same cluster.

HPE Simplivity – If the two HPE SVT nodes mirroring the data both go offline, you have data unavailability at best, and data loss at worst. As HPE SVT is not a cluster (note the careful use of the term “Federation”), it essentially scales in pairs, and a pair cannot tolerate both of its nodes failing concurrently.

Nutanix – With RF3 even without the use of block awareness, any two nodes and all drives within those nodes can be lost, with no data unavailability.

Three or more concurrent node failures in the same cluster.

HPE Simplivity – As previously discussed, HPE SVT cannot support the loss of any two nodes, so three or more makes matters worse.

Nutanix – With RF3 and block awareness, up to eight nodes (yes, 8!) can be lost along with all drives within those nodes, with no data unavailability. That’s up to 48 SSDs/HDDs failing concurrently without data loss.

So we can clearly see Nutanix provides a highly resilient platform, and there are numerous configurations which ensure two drive failures do not cause data loss, despite what the HPE campaign suggests.

The above tweet is like me configuring an HPE ProLiant server with RAID 5 and complaining that HPE lost my data when two drives fail; it’s just ridiculous.

The key point here, when deploying any technology, is to understand your requirements and configure the underlying platform to meet/exceed your resiliency requirements.

Installation/Configuration

HPE Simplivity – Dependent on vCenter.

Nutanix – Uses PRISM, which is a fully distributed HTML 5 GUI with no external dependencies regardless of hypervisor choice (ESXi, AHV, Hyper-V and XenServer). In the event any hypervisor management tool (e.g.: vCenter) is down, PRISM is fully functional.

Management (GUI)

HPE Simplivity – Uses a vCenter-backed GUI. If vCenter is down, Simplivity cannot be fully managed. In the best case scenario, vCenter HA is used and management suffers only a short interruption.

Nutanix – Uses PRISM, which is a fully distributed HTML 5 GUI with no external dependencies regardless of hypervisor choice (ESXi, AHV, Hyper-V and XenServer). In the event any hypervisor management tool (e.g.: vCenter) is down, PRISM is fully functional.

In the event of one or more nodes failing, PRISM, being a distributed management layer, continues to operate.

Data Availability:

HPE Simplivity – RAID 6 (or RAID 60) + Replication (2 copies), Deduplication and Compression for all data. Not configurable.

Nutanix – Configurable resiliency and data reduction with the following options (summarised in the sketch after this list):

  1. Resiliency Factor 2 (RF2)
  2. Resiliency Factor 3 (RF3)
  3. Resiliency Factor 2 with Block Awareness
  4. Resiliency Factor 3 with Block Awareness
  5. Erasure Coding / Deduplication / Compression in any combination across all resiliency types.
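
As a quick reference, the sketch below summarises the failure tolerance of these options as described in the scenarios above. The block awareness figures assume blocks of up to 4 nodes / 24 drives, with node losses contained within one (RF2) or two (RF3) blocks respectively; the structure and names are illustrative only.

```python
# Illustrative summary of the failure domains discussed above (not an official matrix).
RESILIENCY_OPTIONS = {
    "RF2":                   {"copies": 2, "node_failures_tolerated": 1},
    "RF3":                   {"copies": 3, "node_failures_tolerated": 2},
    "RF2 + Block Awareness": {"copies": 2, "node_failures_tolerated": 4,   # one whole block
                              "drive_failures_tolerated": 24},
    "RF3 + Block Awareness": {"copies": 3, "node_failures_tolerated": 8,   # two whole blocks
                              "drive_failures_tolerated": 48},
}

def survives(option: str, failed_nodes: int) -> bool:
    """Does the chosen configuration survive this many concurrent node losses
    (within the failure domains assumed above)?"""
    return failed_nodes <= RESILIENCY_OPTIONS[option]["node_failures_tolerated"]

print(survives("RF3 + Block Awareness", 8))  # True, per the scenarios above
```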

Key point:

Nutanix can scale out with compute+storage OR storage only nodes. In either case, the resiliency of the cluster is increased, as all nodes (or, better said, Controllers) in our distributed storage fabric (ADSF) help with the distributed rebuild in the event of drive or node failures. This restores the cluster to a fully resilient state faster, so it is able to support subsequent failures sooner.

HPE Simplivity – Because the HPE SVT platform is not a distributed file system and works in a mirror-style configuration, adding additional nodes up to the “per datacenter” limit of eight (8) does not increase resiliency. As such, the platform does not improve as it grows, which is a strength of the Nutanix platform.

Summary:

Thanks to our Acropolis Distributed Storage Fabric (ADSF) and without the use of legacy RAID technology, Nutanix can support:

  1. Equal or more concurrent drive failures per node than HPE Simplivity
  2. Equal or more concurrent drive failures per cluster than HPE Simplivity
  3. Equal or more concurrent node failures than HPE Simplivity
  4. Failure of the hypervisor management layer (e.g.: vCenter) with full GUI functionality retained

Nutanix also has the following capabilities over and above the HPE SVT offering:

  1. Configurable resiliency and data reduction on a per vDisk level
  2. Nutanix resiliency/recoverability improves as the cluster grows
  3. Nutanix does not require any UPS or power protection to be compliant with FUA & Write Through

HPE SVT is less resilient during the write path because:

  1. HPE SVT acknowledges writes before committing data to persistent media (by their own admission)

Return to the Dare2Compare Index:

Expanding Capacity on a Nutanix environment – Design Decisions

I recently saw an article about design decisions around expanding capacity for an HCI platform, which went through the various considerations and made some recommendations on how to proceed in different situations.

While reading the article, it really made me think how much simpler this process is with Nutanix and how these types of areas are commonly overlooked when choosing a platform.

Let’s start with a few basics:

The Nutanix Acropolis Distributed Storage Fabric (ADSF) is made up of all the drives (SSD/SAS/SATA etc) in all nodes in the cluster. Data is written locally, where the VM performing the write resides, and replicas are distributed throughout the cluster based on numerous factors; i.e.: there is no pairing, no HA pairs and no preferred nodes.

In the event of a drive failure, regardless of what drive (SSD,SAS,SATA) fails, only that drive is impacted, not a disk group or RAID pack.

This is key as it limits the impact of the failure.

It is important to note that ADSF does not store large objects, nor does the file system require tuning to stripe data across multiple drives/nodes. ADSF by default distributes the data (at 1MB granularity) in the most efficient manner throughout the cluster while keeping the hottest data local to ensure the lowest overheads and the highest read I/O performance.

Let’s go through a few scenarios, which apply to both All Flash and Hybrid environments.

  1. Expanding capacity: When adding a node or nodes to an existing cluster, without moving any VMs, changing any configuration or making any design decisions, ADSF will proactively send replicas from write I/O to all nodes within the cluster, thereby improving performance, while reactively performing disk balancing where a significant imbalance exists within a cluster.

    This might sound odd, but with other HCI products new nodes are not used unless you change the stripe configuration or create new objects (e.g.: VMDKs), which means you can have lots of spare capacity in your cluster but still experience an out-of-space condition.

    This is a great example of why ADSF has a major advantage especially when considering environments with large IO and/or capacity requirements.

    The node addition process only requires the administrator to enter the IP addresses and is basically one click; capacity is available immediately and there is no mass movement of data. There is also no need to move data off and recreate disk groups or similar, as these legacy concepts and complexities do not exist in ADSF.

    Nutanix is also the only platform that allows expanding capacity via Storage Only nodes and that supports VMs with larger capacity requirements than a single node can provide. Both are supported out of the box with zero configuration required.

    Interestingly, adding storage only nodes also increases performance and resiliency for the entire cluster, as well as for the management stack, including PRISM.

  2. Impact & implications to data reduction of adding new nodes: With ADSF, there are no considerations or implications. Data reduction is truly global throughout the cluster, and regardless of hypervisor, or whether you’re adding Compute+Storage or Storage Only nodes, the benefits (particularly of deduplication) continue to apply to the environment.

    The net effect of adding more nodes is better performance, higher resiliency, faster rebuilds from drive/node failures and, thanks to global deduplication, a higher chance of duplicate data being found and not stored unnecessarily on physical storage, resulting in a better deduplication ratio.

    No matter what size node/s are added & no matter what Hypervisor, the benefits from data reduction features such as deduplication and compression work at a global level.

    What about Erasure Coding? Nutanix EC-X creates the most efficient stripe based on the cluster size: if you start with a small 4-node cluster your stripe would be 2+1; if you expand the cluster to 5 nodes, the stripe automatically becomes 3+1; and if you expand further to 6 nodes or more, the stripe becomes 4+1, which is currently the largest stripe supported (see the sketch after this list).

  3. Drive Failures: In the event of a drive failure (SSD/SAS or SATA), as mentioned earlier, only that drive is impacted. Therefore, to restore resiliency, only the data on that drive needs to be repaired, as opposed to something like an entire disk group being marked as offline.

    It’s crazy to think a single commodity drive failure in an HCI product could bring down an entire group of drives, causing a significant impact to the environment.

    With Nutanix, a rebuild is performed in a distributed manner throughout all nodes in the cluster, so the larger the cluster, the lower the per node impact and the faster the configured resiliency factor is restored to a fully resilient state.
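
As referenced in point 2 above, here is an illustrative mapping of cluster size to EC-X stripe width based on the behaviour described (2+1 at 4 nodes, 3+1 at 5 nodes, 4+1 at 6 or more nodes). The function and its handling of clusters smaller than 4 nodes are assumptions for illustration only.

```python
def ecx_stripe(cluster_nodes: int) -> str:
    """Map cluster size to the EC-X stripe described above (4+1 being the
    largest stripe mentioned). Purely illustrative, not Nutanix code."""
    if cluster_nodes < 4:
        return "n/a in this illustration"
    data_blocks = min(cluster_nodes - 2, 4)   # 4 nodes -> 2+1, 5 -> 3+1, 6+ -> 4+1
    return f"{data_blocks}+1"

for nodes in (4, 5, 6, 12):
    print(nodes, "nodes ->", ecx_stripe(nodes))   # 2+1, 3+1, 4+1, 4+1
```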

At this point you’re probably asking, Are there any decisions to make?

When adding any node, compute+storage or storage only, ensure you consider what the impact of a failure of that node will be.

For example, if you add one 15TB storage only node to a cluster of nodes which are only 2TB usable, then you would need to ensure 15TB of available space to allow the cluster to fully self heal from the loss of the 15TB node. As such, I recommend ensuring your N+1 (or N+2) node/s are equal to the size of the largest node in the cluster from a capacity, performance and CPU/RAM perspective.

So if your biggest node is an NX-8150 with 44c / 512GB RAM and 20TB usable, you should have an N+1 node of the same size to cover the worst case failure scenario of an NX-8150 failing, OR have the equivalent resources available within the cluster.
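
One simple way to express this rule of thumb is the sketch below: check whether the surviving nodes would have enough free capacity to re-protect the data of the largest node after it fails. It is a deliberately conservative, capacity-only check; real sizing should also consider CPU/RAM and performance headroom as noted above.

```python
def can_self_heal(node_capacities_tb: list[float], used_tb: float) -> bool:
    """Conservative headroom check: after losing the largest node, the survivors
    need at least that node's capacity free so its data can be re-protected.
    Simplified sketch; ignores CPU/RAM and performance headroom."""
    largest = max(node_capacities_tb)
    surviving_capacity = sum(node_capacities_tb) - largest
    free_on_survivors = surviving_capacity - used_tb   # pessimistic: all used data on survivors
    return free_on_survivors >= largest

# The example from the text: one 15TB storage-only node joining nodes of ~2TB usable.
print(can_self_heal([2, 2, 2, 2, 15], used_tb=5))    # False: survivors lack 15TB of headroom
print(can_self_heal([20, 20, 20, 20], used_tb=15))   # True
```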

By following this one simple rule, your cluster will always be able to fully self heal in the event of a failure, and VMs will fail over and be able to perform at levels comparable to before the failure.

Simple as that! No RAID, Disk group, deduplication, compression, failure, or rebuild considerations to worry about.

Summary:

The above are just a few examples of the advantages the Nutanix ADSF provides compared to other HCI products. The operational and architectural complexity of other products can lead to additional risk, inefficient use of infrastructure, misconfiguration and, ultimately, an environment which does not deliver the business outcome it was originally designed to deliver.
