Dare2Compare Part 4 : HPE provides superior resiliency than Nutanix?

As discussed in Part 1, we have proven that HPE made false claims about Nutanix snapshot capabilities as part of the #HPEDare2Compare Twitter campaign.

In Part 2, I explained how HPE/Simplivity's 10:1 data reduction HyperGuarantee is nothing more than smoke and mirrors, and that most vendors can provide the same, if not greater, efficiencies, even without hardware acceleration.

In Part 3, I corrected HPE on their false claim that Nutanix cannot support dedupe without 8 vCPUs, and in Part 4, I will respond to the claim (below) that Nutanix has less resiliency than the HPE Simplivity 380.

To start with, the biggest causes of data loss, downtime and outages in my experience are human error: poor design, improper use of a product, poor implementation/validation, and a lack of operational procedures (or the discipline to follow them). The number of times I've seen a properly designed solution have issues I can count on one hand.

Those rare situations have come down to multiple concurrent failures at different levels of the solution (e.g.: infrastructure, application, OS etc), not just things like one or more drive or server failures.

Nonetheless, HPE Simplivity commonly targets Resiliency Factor 2 (RF2) and claims it is not resilient, because they lack a basic understanding of the Acropolis Distributed Storage Fabric: how it distributes data, how it rebuilds from failures and, therefore, how resilient it is.

RF2 is often mistakenly compared to RAID 5, where a single drive failure takes a long time to rebuild and subsequent failures during the rebuild are not uncommon, leading to a data loss scenario (for RAID 5).

Let's talk about some failure scenarios, comparing HPE Simplivity to Nutanix.

Note: The information below is accurate to the best of my knowledge, testing and experience with both products.

When is a write acknowledged to the virtual machine?

HPE Simplivity – They use what they refer to as the OmniStack Accelerator Card (OAC), which uses "super capacitors to provide power to the NVRAM upon a power loss". When a write hits the OAC, it is acknowledged to the VM. It is assumed, or even likely, that the capacitors will provide sufficient power to commit the writes persistently to flash, but the fact is that writes are acknowledged BEFORE they are committed to persistent media. HPE will surely argue the OAC is persistent, but until the data is on something such as a SATA-SSD drive I do not consider it persistent, and I invite you to ask your trusted advisor/s their opinion, because this is a grey area at best.

This can be confirmed on Page 29 of the SimpliVity Hyperconverged Infrastructure Technology Overview:

[Image: excerpt from page 29 of the SimpliVity Hyperconverged Infrastructure Technology Overview, describing OAC behaviour on power loss]

Nutanix – Writes are only acknowledged to the Virtual Machine when the write IO has been checksummed and confirmed written to persistent media (e.g.: SATA-SSD) on the number of nodes/drives based on the configured Resiliency Factor (RF).

Writes are never written to RAM or any other non-persistent media, and at any stage you can pull the power from a Nutanix node/block/cluster and 100% of the data will be in a consistent state, i.e.: it was written and acknowledged, or it was not written and therefore not acknowledged.

The fact that Nutanix only acknowledges writes when the data is written to persistent media on two or more hosts makes the platform compliant with FUA and Write Through. HPE SVT, in the best case, is dependent on power protection (UPS and/or OAC capacitors), which means Nutanix is more resilient (less risk) and has a higher level of data integrity than the HPE SVT product.

Check out "Ensuring Data Integrity with Nutanix – Part 2 – Forced Unit Access (FUA) & Write Through" for more information. It explains how Nutanix is compliant with critical data integrity protocols such as FUA and Write Through, and you can make your own mind up about whether the HPE product is or not. Hint: a product is not compliant with FUA unless data is written to persistent media before acknowledgement.
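To make the FUA/Write Through point more concrete, here is a minimal sketch (Python, not Nutanix or HPE code; the file path and payload are purely illustrative) of how a guest application requests write-through semantics. A platform honouring FUA/Write Through must not acknowledge such a write until the data is on persistent media; otherwise an acknowledged-but-lost write is possible on power failure.

```python
import os

# Minimal sketch (not vendor code): a guest application requesting
# write-through semantics. O_DSYNC (Linux/Unix) asks that the write only
# completes once the data has reached stable storage, similar in spirit
# to FUA at the protocol level.
def durable_write(path: str, payload: bytes) -> None:
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_DSYNC, 0o644)
    try:
        os.write(fd, payload)
        # Belt and braces: after fsync returns, the data and metadata are
        # expected to be persistent.
        os.fsync(fd)
    finally:
        os.close(fd)

durable_write("/tmp/journal.log", b"transaction committed\n")
```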

Single Drive (NVMe/SSD/HDD) failure

HPE Simplivity – Protects data with RAID 6 (or RAID 5 on small nodes) + Replication (2 copies). A single drive failure causes a RAID rebuild, which is a medium/high impact activity for the RAID group. RAID rebuilds are well known to be slow, which is one reason why HPE chooses (and wisely so) to use low capacity spindles to minimise the impact of RAID rebuilds. But this choice to use RAID and smaller drives has implications for cost, capacity, rack units, power, cooling and so on.

Nutanix – Protects data with configurable Replication Factor (2 or 3 copies, or N+1 and N+2) along with rack unit (block) awareness. A single drive failure causes a distributed rebuild of the data contained on the failed drive across all nodes within the cluster. This distributed rebuild is evenly balanced throughout the cluster for low impact and faster time to recover. This allows Nutanix to support large capacity spindles, such as 8TB SATA.
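To illustrate why a distributed rebuild matters, here is a rough back-of-the-envelope sketch. The 150MB/s per-drive figure and the participant counts are assumptions for illustration only, not benchmarks of either product; the point is simply that rebuild time shrinks as more drives share the work.

```python
# Illustrative only; the 150 MB/s per-drive figure is an assumption,
# not a benchmark of either product.
def rebuild_hours(data_tb: float, per_drive_mbps: float, participants: int) -> float:
    """Hours to re-protect `data_tb` when `participants` drives share the rebuild."""
    total_mbps = per_drive_mbps * participants
    return (data_tb * 1024 * 1024) / total_mbps / 3600

# Traditional RAID: the rebuild is bottlenecked by a single replacement drive.
print(f"RAID-style rebuild of 8TB:  {rebuild_hours(8, 150, 1):.1f} h")

# Distributed rebuild: many drives across the cluster each rebuild a small slice.
print(f"Distributed rebuild of 8TB: {rebuild_hours(8, 150, 32):.1f} h (32 drives participating)")
```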

Two concurrent drive (NVMe/SSD/HDD) failures *Same Node

HPE Simplivity – RAID 6 + Replication (2 copies) supports the loss of two drives and, as with a single drive failure, this causes a RAID rebuild, which is a medium/high impact activity for the RAID group.

Nutanix – Two drive failures cause a distributed rebuild of the data contained on the failed drives across all nodes within the cluster. This distributed rebuild is evenly balanced throughout the cluster for low impact and a faster time to recover. This allows Nutanix to support large capacity spindles, such as 8TB SATA. No data is lost even when using Resiliency Factor 2 (which is N+1), despite what HPE claims. This is an example of the major advantage the Nutanix Acropolis Distributed File System has over the RAID and mirroring type architecture of HPE SVT.

Three concurrent drive (NVMe/SSD/HDD) failures *Same Node

HPE Simplivity – RAID 6 + Replication (2 copies) supports the loss of only two drives per RAID group; at this stage the RAID group has failed and all data must be rebuilt.

Nutanix – Three drive failures again simply cause a distributed rebuild of the data contained on the failed drives (in this case, 3) across all nodes within the cluster. This distributed rebuild is evenly balanced throughout the cluster for low impact and a faster time to recover. This allows Nutanix to support large capacity spindles, such as 8TB SATA. No data is lost even when using Resiliency Factor 2 (which is N+1), again despite what HPE claims. This is an example of the major advantage the Nutanix Acropolis Distributed File System has over the RAID and mirroring type architecture of HPE SVT.

Four or more concurrent drive (NVMe/SSD/HDD) failures *Same Node

HPE Simplivity – RAID 6 + Replication (2 copies) supports the loss of only two drives per RAID group; any failure of three or more drives results in a failed RAID group, and a total rebuild of the data is required.

Nutanix – Nutanix can support N-1 drive failures per node, meaning in a 24-drive system such as the NX-8150, 23 drives can be lost concurrently without the node going offline and without any data loss. The only caveat is that the lone surviving drive on a hybrid platform must be an SSD. This is an example of the major advantage the Nutanix Acropolis Distributed File System has over the RAID and mirroring type architecture of HPE SVT.

Next let’s cover off failure scenarios across multiple nodes.

Two concurrent drive (NVMe/SSD/HDD) failures in the same cluster.

HPE Simplivity – RAID 6 protects from two drive failures locally per RAID group, whereas Replication (2 copies) supports the loss of one copy (N-1). Assuming the RAID groups are intact, data would not be lost.

Nutanix – Nutanix has configurable resiliency (Resiliency Factor) of either two copies (RF2) or three copies (RF3). Using RF3, there is no data loss under any two drive failure scenario; the failures simply cause a distributed rebuild of the data contained on the failed drives across all nodes within the cluster.

When using RF2 and block (rack unit) awareness, in the event two or more drives fail within a block (which is up to 4 nodes and 24 SSDs/HDDs), there is no data loss. In fact, in this configuration Nutanix can support the loss of up to 24 drives concurrently, e.g.: 4 entire nodes and their 24 drives, without data loss/unavailability.

When using RF3 and block awareness, Nutanix can support the loss of up to 48 drives concurrently e.g.: 8 entire nodes and 48 drives without data loss/unavailability.
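The drive counts quoted above are easy to sanity-check with simple arithmetic. The sketch below assumes 4 nodes per block and 6 drives per node, which matches the example configuration discussed here; adjust the constants for other hardware.

```python
# Sanity-check of the drive counts quoted above. Assumes 4 nodes per block and
# 6 drives per node (as in the example configuration discussed here); adjust
# the constants for other hardware.
DRIVES_PER_NODE = 6
NODES_PER_BLOCK = 4

def tolerated_concurrent_drive_failures(rf: int, block_aware: bool) -> int:
    """Drives that can fail at once without data loss, provided the failures are
    confined to the fault domains the Resiliency Factor allows us to lose
    (RF copies of data tolerate the loss of RF-1 copies)."""
    lost_domains = rf - 1
    nodes_per_domain = NODES_PER_BLOCK if block_aware else 1
    return lost_domains * nodes_per_domain * DRIVES_PER_NODE

print(tolerated_concurrent_drive_failures(rf=2, block_aware=True))  # 24 drives (1 block)
print(tolerated_concurrent_drive_failures(rf=3, block_aware=True))  # 48 drives (2 blocks)
```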

Under no circumstances can HPE Simplivity support the loss of ANY 48 drives (e.g.: 2 HPE SVT nodes w/ 24 drives each) and maintain data availability.

This is another example of the major advantage Nutanix Acropolis Distributed File System has over the RAID and mirroring type architecture of HPE SVT. Nutanix distributes all data throughout the ADSF cluster, which is something HPE SVT cannot do which impacts both performance and resiliency.

Two concurrent node failures in the same cluster.

HPE Simplivity – If the two HPE SVT nodes mirroring the data both go offline, you have data unavailability at best and data loss at worst. As HPE SVT is not a cluster (note the careful use of the term "Federation"), it scales essentially in pairs, and both nodes of a pair cannot fail concurrently without data unavailability.

Nutanix – With RF3, even without the use of block awareness, any two nodes and all drives within those nodes can be lost with no data unavailability.

Three or more concurrent node failures in the same cluster.

HPE Simplivity – As previously discussed, HPE SVT cannot support the loss of any two nodes, so three or more makes matters worse.

Nutanix – With RF3 and block awareness, up to eight nodes (yes, 8!) can be lost, along with all drives within those nodes, with no data unavailability. That's up to 48 SSDs/HDDs concurrently failing without data loss.

So we can clearly see Nutanix provides a highly resilient platform, and there are numerous configurations which ensure two drive failures do not cause data loss, despite what the HPE campaign suggests.

The above tweet would be like me configuring an HPE ProLiant server with RAID 5 and complaining that HPE lost my data when two drives fail; it's just ridiculous.

The key point here is that when deploying any technology, you must understand your requirements and configure the underlying platform to meet/exceed your resiliency requirements.

Installation/Configuration

HPE Simplivity – Dependent on vCenter.

Nutanix – Uses PRISM, which is a fully distributed HTML 5 GUI with no external dependencies regardless of hypervisor choice (ESXi, AHV, Hyper-V and XenServer). In the event any hypervisor management tool (e.g.: vCenter) is down, PRISM is fully functional.

Management (GUI)

HPE Simplivity – Uses a vCenter-backed GUI. If vCenter is down, Simplivity cannot be fully managed. In the event vCenter goes down, the best case scenario is that vCenter HA is in use, in which case management will have a short interruption.

Nutanix – Uses PRISM, which is a fully distributed HTML 5 GUI with no external dependencies regardless of hypervisor choice (ESXi, AHV, Hyper-V and XenServer). In the event any hypervisor management tool (e.g.: vCenter) is down, PRISM is fully functional.

In the event of one or more nodes failing, PRISM, being a distributed management layer, continues to operate.

Data Availability:

HPE Simplivity – RAID 6 (or RAID 60) + Replication (2 copies), Deduplication and Compression for all data. Not configurable.

Nutanix – Configurable resiliency and data reduction with:

  1. Resiliency Factor 2 (RF2)
  2. Resiliency Factor 3 (RF3)
  3. Resiliency Factor 2 with Block Awareness
  4. Resiliency Factor 3 with Block Awareness
  5. Erasure Coding / Deduplication / Compression in any combination across all resiliency types (a rough capacity comparison of these options follows below).
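Each of these options trades usable capacity for resiliency, which is part of what you are configuring. The sketch below gives a rough usable-capacity comparison; the EC-X strip sizes (4+1 and 4+2) are typical examples only and vary with cluster size, so treat them as assumptions rather than fixed values.

```python
# Rough usable-capacity comparison of the resiliency options listed above.
# The EC-X strip sizes (4+1 and 4+2) are typical examples only and depend on
# cluster size; treat them as assumptions rather than fixed values.
def replication_usable(copies: int) -> float:
    """Usable fraction of raw capacity when storing `copies` full copies (RF)."""
    return 1 / copies

def erasure_usable(data: int, parity: int) -> float:
    """Usable fraction of raw capacity for an N data + M parity strip."""
    return data / (data + parity)

options = {
    "RF2":              replication_usable(2),
    "RF3":              replication_usable(3),
    "RF2 + EC-X (4+1)": erasure_usable(4, 1),
    "RF3 + EC-X (4+2)": erasure_usable(4, 2),
}
for name, ratio in options.items():
    print(f"{name:17} usable ≈ {ratio:.0%} of raw capacity")
```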

Key point:

Nutanix can scale out with compute+storage OR storage-only nodes. In either case, the resiliency of the cluster is increased, as all nodes (or, better said, controllers) in our distributed storage fabric (ADSF) help with the distributed rebuild in the event of drive or node failures, restoring the cluster to a fully resilient state faster and therefore able to support subsequent failures.

HPE Simplivity – Because the HPE SVT platform is not a distributed file system and works in a mirror-style configuration, adding additional nodes up to the "per datacenter" limit of eight (8) does not increase resiliency. As such, the platform does not improve as it grows, which is a strength of the Nutanix platform.

Summary:

Thanks to our Acropolis Distributed Storage Fabric (ADSF) and without the use of legacy RAID technology, Nutanix can support:

  1. Equal or more concurrent drive failures per node than HPE Simplivity
  2. Equal or more concurrent drive failures per cluster than HPE Simplivity
  3. Equal or more concurrent node failures than HPE Simplivity
  4. Failure of hypervisor management layer e.g.: vCenter with full GUI functionality

Nutanix also has the following capabilities over and above the HPE SVT offering:

  1. Configurable resiliency and data reduction on a per vDisk level
  2. Nutanix resiliency/recoverability improves as the cluster grows
  3. Nutanix does not require any UPS or power protection to be compliant with FUA & Write Through

HPE SVT is less resilient during the write path because:

  1. HPE SVT acknowledges writes before committing data to persistent media (by their own admission)

Return to the Dare2Compare Index:

The ATO 5-day outage, like most outages, was completely avoidable.

A while back I saw news about the Australian Taxation Office (ATO) having a major outage of their storage solution, and recently an article was posted titled "ATO reveals cause of SAN failure" which briefly discusses a few contributing factors to the five-day outage.

The article from ITnews.com.au quoted ATO commissioner Chris Jordan as saying:

The failure of the 3PAR SAN was the result of a confluence of events: the fibre optic cables feeding the SAN were not optimally fitted, software bugs on the SAN disk drives meant stored data was inaccessible or unreadable, back-to-base HPE monitoring tools weren’t activated, and the SAN configuration was more focused on performance than stability or resilience, Jordan said.

Before we get into breaking down the issues, I want to start by saying that while this specific incident was with HPE equipment, it is not isolated to HPE, and every vendor has had customers suffer similar issues. The major failing in this case, and in the vast majority of failures (especially extended outages), comes back to the enterprise architect/s and operations teams failing to do their job. I've seen this time and time again, yet only a very small percentage of so-called architects have a methodology, and an even smaller percentage follow one in any meaningful way on a day-to-day basis.

Now back to the article, let’s break this down to a few key points.

1. The fibre optic cables feeding the SAN were not optimally fitted.

While the statement is a bit vague, cabling issues are a common mistake which can and should be easily discovered and resolved prior to going into production. As per Nutanix Platform Expert (NPX) methodology, an “Operational Verification” document should outline the tests required to be performed prior to a system going into production and/or following a change.

An example of a simple test for a host (server) or SAN dual-connected to an FC fabric is to disconnect one cable and confirm connectivity remains, then replace the cable, disconnect the other cable and again confirm connectivity.

Another simple test is to remove the power from an FC switch and confirm connectivity via the redundant switch, then restore the power and repeat on the other FC switch.
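Below is a hypothetical sketch of how such checks could be automated so the results can be recorded in the Operational Verification document. The hostnames and ports are placeholders, not real addresses; substitute the details of your own fabric and array.

```python
# Hypothetical sketch of automating the redundancy checks described above so
# that the results can be recorded in an Operational Verification document.
# Hostnames and ports are placeholders; substitute your own fabric/array details.
import socket
import time

TARGETS = [
    ("array-controller-a.example.local", 3260),  # e.g. storage portal via fabric A
    ("array-controller-b.example.local", 3260),  # e.g. storage portal via fabric B
]

def reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """True if a TCP connection to the target succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def record_connectivity(test_step: str, duration_s: int = 60, interval_s: int = 5) -> None:
    """Poll all targets for the duration of a deliberate failure (e.g. a cable pull)."""
    end = time.time() + duration_s
    while time.time() < end:
        status = {f"{host}:{port}": reachable(host, port) for host, port in TARGETS}
        print(f"{test_step} | {time.strftime('%H:%M:%S')} | {status}")
        time.sleep(interval_s)

# Run once per test step, e.g. while a cable is disconnected from fabric A.
record_connectivity("Disconnect cable from fabric A")
```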

Had an Operational Verification document been created to an NPX standard, and subsequently followed prior to going live and after any changes, this cabling issue would very likely not have been a contributing factor to the outage.

This is an architectural and operational failure. The reason it's an operational failure is that no engineer worth having would complete a change without an Operational Verification document to follow to validate a successful implementation/change.

2. Software bugs on the SAN disk drives meant stored data was inaccessible or unreadable.

In my opinion this is where the vendor is likely more at fault than the customer; however, customers and their architect/s need to mitigate against these types of risks. Again, an Operational Verification document should have tests which confirm functionality (in this case, simple read operations) from the storage during normal and degraded scenarios, such as drive pulls (simulating SSD/HDD failures) and drive shelf loss (i.e.: the loss of a bulk number of drives in a shelf, typically between 12 and 24).

Failure scenarios should be clearly documented, along with the risk/s, mitigation/s and recovery plan, all of which need to be mapped back to the business requirements, e.g.: Recovery Time Objective (RTO) and Recovery Point Objective (RPO).

Again, this is both an architectural and operational failure as the architect should have documented/highlighted the risks as well as mitigation and recovery strategy, while the engineers should never have accepted a solution into BAU (Business as Usual) operations without these documents.
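To illustrate (with made-up example values, not ATO specifics) what documenting failure scenarios against RTO/RPO might look like in a simple, machine-checkable form:

```python
# Illustrative only: a minimal, machine-checkable way to document failure
# scenarios and map them back to business requirements. All values below are
# made-up examples, not ATO specifics.
from dataclasses import dataclass

@dataclass
class FailureScenario:
    name: str
    risk: str
    mitigation: str
    recovery_plan: str
    expected_recovery_minutes: int
    expected_data_loss_minutes: int

BUSINESS_RTO_MIN = 60   # example business requirements
BUSINESS_RPO_MIN = 15

scenarios = [
    FailureScenario("Single drive failure",
                    "Reduced resiliency until data is re-protected",
                    "Automatic rebuild; spare capacity reserved",
                    "Replace drive; verify resiliency is restored",
                    expected_recovery_minutes=0, expected_data_loss_minutes=0),
    FailureScenario("Drive shelf loss (12-24 drives)",
                    "Bulk loss of one failure domain",
                    "Replica placement across shelves/nodes",
                    "Serve I/O from surviving copies; rebuild; replace shelf",
                    expected_recovery_minutes=30, expected_data_loss_minutes=0),
]

for s in scenarios:
    ok = (s.expected_recovery_minutes <= BUSINESS_RTO_MIN and
          s.expected_data_loss_minutes <= BUSINESS_RPO_MIN)
    print(f"{s.name}: {'meets' if ok else 'BREACHES'} the documented RTO/RPO")
```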

3. “Back-to-base HPE monitoring tools weren’t activated”

There is no excuse for this, and the ATO's architects and, to a lesser extent, the operational team need to take responsibility here. While a vendor should continually be nagging customers to enable these tools, any enterprise architect worth having mandates monitoring tools sufficient to ensure continuous operation of the solution they design. The Operational Verification document would also have steps to test the monitoring tools and ensure the alerting and call-home functionality is working, both before going into production and at scheduled intervals, to ensure continued operation.

This is yet another architectural and operational failure.
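A hypothetical sketch of such a scheduled check is below. The URL and threshold are placeholders for whatever your monitoring stack actually exposes; the point is that call-home/alerting should be tested continuously, not assumed to work.

```python
# Hypothetical sketch of a scheduled check that the alerting/"call home" path
# actually works. The URL is a placeholder for whatever your monitoring stack
# exposes (e.g. a page recording when the last test alert was received).
import time
import urllib.request

LAST_TEST_ALERT_URL = "http://monitoring.example.local/last-test-alert"  # placeholder
MAX_AGE_SECONDS = 24 * 3600  # expect at least one successful test alert per day

def last_test_alert_age_seconds() -> float:
    """Fetch the epoch timestamp of the last received test alert (placeholder API)."""
    with urllib.request.urlopen(LAST_TEST_ALERT_URL, timeout=5) as resp:
        return time.time() - float(resp.read().decode().strip())

if last_test_alert_age_seconds() > MAX_AGE_SECONDS:
    raise RuntimeError("No test alert received within 24h - call-home path may be broken")
print("Alerting/call-home path verified")
```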

4. SAN configuration was more focused on performance than stability or resilience.

This not only doesn't surprise me, but highlights a point I have raised for many years: there is a disproportionately high focus on performance, specifically peak performance, compared to data integrity, resiliency and stability.

In 2015 I wrote "Peak Performance vs Real World Performance" after continuously having these discussions with customers. The post covers the topic in reasonable depth, but some of the key points are:

  1. Peak performance is rarely a significant factor for a storage solution.
  2. Understand and document your storage requirements / constraints before considering products.
  3. Create a viability/success criteria when considering storage which validates the solution meets your requirements within the constraints.

In this case the architect/s who designed the solution had tunnel vision around performance, when the solution likely didn't need to be configured in such a way to meet the requirements, assuming they were well understood and documented/validated.

If the SAN needed to be configured the way it was to meet the performance requirements, then it was simply the wrong solution, because it was not configured to meet the other, vastly more important requirements around availability, resiliency and recoverability. The solution was certainly not validated against any meaningful criteria before going into production, or many of these issues would not have occurred; and in the unlikely event of multiple concurrent failures, the recoverability requirements were clearly not designed for or sufficiently understood.

This is again an architectural and operational failure.

ATO commissioner Chris Jordan also stated:

While only 12 of 800 disk drives failed, they impacted most ATO systems.

This means the solution was designed/configured with a tolerance for just 1.5% of drives to fail before a catastrophic failure would occur. This in my mind is so far from a minimally viable solution it’s not funny. What’s less funny is that this fact is unlikely to have been understood by the ATO, which means the failure scenarios and associated risks were not documented and mitigated in any meaningful way.

As an example, even in a small four-node Nutanix solution with just 24 drives, an entire node's worth of drives (6) can be lost concurrently (that's 25%) without data loss or unavailability. In a 5-node Nutanix NX-8150 cluster with RF3, up to 48 drives (of a total of 120, which is 40%) can be lost without data loss or unavailability, and the system can even self-heal without hardware replacement to restore resiliency automatically so further failures can be tolerated. This kind of resiliency/recoverability is essential for modern datacenters and something that would have at least mitigated, or even avoided, this outage altogether.
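The arithmetic behind those percentages is simple to verify:

```python
# The arithmetic behind the percentages quoted above.
print(f"ATO SAN:         {12 / 800:.1%} of drives failed before a catastrophic outage")   # 1.5%
print(f"4-node example:  {6 / 24:.0%} of drives lost with no data loss/unavailability")   # 25%
print(f"5 x NX-8150 RF3: {48 / 120:.0%} of drives lost with no data loss/unavailability") # 40%
```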

But this isn't a product pitch; this is an example of what enterprise architects need to consider when choosing infrastructure for a project, i.e.: what happens if X, Y and/or Z fails, and how does the system recover (manually, automatically etc)?

Yet another thing which doesn't surprise me is that failure domains do not appear to have been considered, as the recovery tools were located on the very SAN they were required to protect.

Additionally, some of the recovery tools that were required to restore the equipment were located on the SAN that failed.

It is critical to understand failure scenarios!! Wow, I am sounding like a broken record, but the message is simply not getting through to the majority of architects.

Recovery/management tools are of no use to you when they are offline. If they sit on the same infrastructure that requires those tools to be online in order to recover, then your solution's recoverability is at high risk.

Yet another architectural failure followed by an operations team failure for accepting the environment and not highlighting the architecture failures.

In most, if not all, enterprise environments, a separate management cluster using storage from a separate failure domain is essential. It's not a "nice to have"; it's essential. It is very likely the five-day outage would have been shorter, or at least the cause diagnosed much faster, had the ATO had a small, isolated management cluster running the tooling required to diagnose the SAN.

The article concludes with a final quote from ATO commissioner Chris Jordan:

The details are confidential, he said, but the deal recoups key costs incurred by the ATO, and gives the agency new and "higher-grade" equipment to equip it with a "world-class storage network".

I am pleased the vendor (in this case HPE) has taken at least some responsibility, and while the details are confidential, from my perspective higher-grade equipment and a world-class storage network mean nothing without an enterprise architect who follows a proven methodology like NPX.

If the architect/s don't document the requirements, risks, constraints and assumptions, design a solution with supporting documentation that maps back to these areas, and then document comprehensive Operational Verification procedures for moving into production and for subsequent changes (before declaring a change successful), the ATO and other customers in similar positions are destined to repeat the same mistakes.

If anyone from the ATO happens to read this, ensure your I.T team has a solid methodology for the new deployment, and if they don't, feel free to reach out and I'll raise my hand to get involved and lead the project to a successful outcome following the NPX methodology.

In closing, everyone involved in a project must take responsibility. If the architect screws up, the ops team should call it out; if the ops team calls it out and the project manager ignores it, the ops team should escalate. If the escalation doesn't work, document the issues/risks and continue making your concerns known, even after somebody accepts responsibility for the risk. After all, a risk doesn't magically disappear when a person accepts responsibility; it simply creates a CV-generating event for that person when things do go wrong, and the customer is still left up the creek without a paddle.

It's long overdue that so-called enterprise architects live up to the standard at which they are (typically) paid. Every major decision by an architect should be documented to at least the standard shown in my Example Architectural Decision section of this blog, as well as mapped back to specific customer requirements, risks, constraints and assumptions.

For the ATO and any other customers, I recommend you look for architects with proven track records and portfolios of project documentation which they can share (even if redacted for confidentiality), as well as certifications like NPX and VCDX which require panel-style reviews by peers, not multiple-choice exams which are all but a waste of paper (e.g.: MCP/VCP/MCSE/CCNA etc). The skills of a VCDX/NPX are transferable to non-VMware/Nutanix environments, as it's the methodology which forms most of the value; the product experience from these certifications also has value and is transferable, and learning new tech is much easier than finding a great enterprise architect!

And remember, when it comes to choosing an enterprise architect…

[Image: "cheaper"]
