Nutanix Dares2Compare with HPE Simplivity

The following is a list of claims made by HPE about Nutanix as part of the #HPEDare2Compare Twitter campaign, along with a series of blog articles and YouTube videos that disprove these claims and highlight the value of the Nutanix platform.

Dare2Compare Part 1 : HPE/Simplivity’s 10:1 data reduction HyperGuarantee Explained

Dare2Compare Part 2 : HPE/Simplivity’s claim Nutanix snaps take 10x longer

Dare2Compare Part 3 : Nutanix can’t support Dedupe without 8vCPUs

Dare2Compare Part 4 : HPE provides superior resiliency than Nutanix?

Dare2Compare Part 5 : Nutanix can’t claim single screen management w/o extra fees or GUIs

Dare2Compare Part 6 : Nutanix data efficiency stats can’t be found

Dare2Compare Part 7 : HPE provides superior performance to Nutanix


More coming soon! Stay tuned!

Sizing infrastructure based on vendor Data Reduction assumptions – Part 2

In Part 1, we discussed how data reduction ratios can, and do, vary significantly between customers and datasets, and how making assumptions about data reduction ratios, even when vendors provide guarantees, does not protect you from potentially serious problems if those ratios are not achieved.

In Part 2 we will go through an example of how misleading data reduction guarantees can be.

One HCI manufacturer provides a guarantee promising 10:1 data efficiency, which sounds too good to be true, and that's because, quite frankly, it isn't true. The guarantee includes a significant caveat for the 10:1 data reduction:

The savings/efficiency are based on the assumption that you configure a backup policy to take at least one <redacted> backup per day of every virtual machine on every<redacted> system in a given VMware Datacenter with those backups retained for 30 days.

I have a number of issues with this limitation including:

  1. The use of the word “backup” referring directly/indirectly to data reduction (savings)
  2. The use of the word “backup” when referring to metadata copies within the same system
  3. No actual deduplication or compression is required to achieve the 10:1 data reduction because metadata copies (or what the vendor incorrectly calls “backups”) are counted towards deduplication.

It is important to note that I am not aware of any other vendor who claims that metadata copies (snapshots, point-in-time copies, recovery points, etc.) are deduplication. They simply are not.

I have previously written about what should be counted in deduplication ratios, and I encourage you to review this post and share your thoughts, as it is still a hot topic and one where, in my experience, customers are regularly being oversold and misled.

Now let’s do the math on my claim that no actual deduplication or compression is required to achieve the 10:1 ratio.

Let’s use a single 1 TB VM as a simple example. Note: The size doesn’t matter for the calculation.

Take one "backup" (even though we all know this is not a backup!) per day for 30 days and count each copy as if it were a full backup to disk. The data logically stored now equals 31TB (1TB + 30 x 1TB).

The actual size on disk is only a tiny amount of metadata larger than the original 1TB, as the metadata copies are just pointers and don't create any copies of the data, which is another reason they are not backups.

Then, because these metadata copies are counted as deduplication, the vendor reports a data efficiency of 31:1 in its GUI.

Therefore, the effective capacity savings are 96.8% (1 - 1TB/31TB ≈ 0.968), a figure that is rigged to exceed 90% every time.
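To make the arithmetic explicit, here is a minimal sketch of the calculation, assuming (as the vendor's counting does) that each retained snapshot is treated as a full logical copy while adding negligible physical capacity:

```python
# Minimal sketch of the "data efficiency" arithmetic described above.
# Assumption (per the vendor's counting): each retained snapshot is
# treated as a full logical copy, while adding negligible physical capacity.

vm_size_tb = 1.0      # original VM data; the size doesn't affect the ratio
backups_per_day = 1
retention_days = 30

# Logical capacity: the original data plus 30 "full backups"
logical_tb = vm_size_tb * (1 + backups_per_day * retention_days)  # 31 TB

# Physical capacity: roughly the original data (metadata overhead is tiny)
physical_tb = vm_size_tb

efficiency = logical_tb / physical_tb    # 31:1 reported "efficiency"
savings = 1 - physical_tb / logical_tb   # ~96.8% "capacity savings"

print(f"Reported efficiency: {efficiency:.0f}:1")  # -> 31:1
print(f"Reported savings: {savings:.1%}")          # -> 96.8%
```

Note that no deduplication or compression appears anywhere in this calculation; the reported ratio is driven entirely by how many snapshots are retained.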

So the only significant capacity savings that are guaranteed come from "backups", not from actual reduction of the customer's data by capacity-saving technologies.

As every modern storage platform I can think of can create metadata-based point-in-time recovery points, this is not a new or even a unique feature.

So back to our topic: if you're sizing your infrastructure based on the assumption of 10:1 data efficiency, you are in for a rude shock.

Dig a little deeper into the “guarantee” and we find the following:

It’s the ratio of storage capacity that would have been used on a comparable traditional storage solution to the physical storage that is actually used in the <redacted> hyperconverged infrastructure.  ‘Comparable traditional solutions’ are storage systems that provide VM-level synchronous replication for storage and backup and do not include any deduplication or compression capability.

So if, for example, you had a 5-year-old NetApp FAS with deduplication and/or compression enabled, the guarantee would only apply if you turned those features off, allowed the data to be rehydrated, and then compared the results with this vendor's data reduction ratio.

So to summarize, this "guarantee" is so misleading that it lacks integrity. It is worthless to any customer using virtually any enterprise storage platform from the last 5 to 10 years, as the capacity savings from metadata-based copies are, and have been, table stakes for many, many years across multiple vendors.

So what guarantee does that vendor provide for actual compression and deduplication of the customer's data? The answer is NONE, as it's all metadata copies, or what I like to call "smoke and mirrors".

Summary:

"No one will question your integrity if your integrity is not questionable." In this case, the guarantee and the people promoting it have questionable integrity, especially because many customers may not be aware of the difference between metadata copies and actual copies of data, a distinction that is critical when it comes to backups. Many customers don't (and shouldn't have to) know the intricacies of data reduction; they just want an outcome. To any reasonable person, 10:1 data efficiency (savings) sounds like "I need 10x less capacity than I have now", which is clearly not the case with this vendor's guarantee or product.

Apart from a few exceptions that will not apply to most customers, 10:1 data reduction is well outside the ballpark of what is realistically achievable without questionable measurement tactics such as counting metadata copies/snapshots/recovery points.

In my opinion, the delta in data reduction ratio between the major storage vendors for the same dataset is not a significant factor when choosing a platform, because there are countless other, substantially more critical factors to consider. When the topic of data reduction comes up in meetings, I go out of my way to ensure the customer understands this and has covered the other areas, such as availability, resiliency, recoverability, manageability and security, before I, quite frankly, waste their time talking about a table-stakes capability like data reduction.

I encourage all customers to demand nothing less than honesty and integrity from vendors, and in the event a vendor promises you something, hold them accountable to deliver the outcome they promised.

MS Exchange on Nutanix Acropolis Hypervisor (AHV)

While virtualization of MS Exchange is now common across multiple hypervisors, it continues to be a hotly debated topic. The most common objection is cost (CAPEX), the next is complexity (which translates to both CAPEX and OPEX), and the third is that virtualization adds minimal value because MS Exchange provides application-level high availability. The other objection I hear is that virtualization isn't supported, which always makes me laugh.

In my experience, the above objections are typically raised in the context of a dedicated MS Exchange environment, and in that specific context some of the points hold some truth. But the question becomes: how many customers run only MS Exchange? In my experience, none.

Customers I see typically run tens, hundreds, even thousands of workloads in their datacenters, so when we look beyond a single application, architecting silos for each application is what actually leads to cost and complexity.

Since most customers already have virtualization and want to remove silos in favour of a standardized platform, MS Exchange is just another business-critical application which needs to be considered.

Let’s discuss each of the common objections and how I believe Acropolis + Nutanix XCP addresses these challenges:

Microsoft Support for Virtualization

For some reason, there is a huge amount of FUD regarding Microsoft support for virtualization (other than Hyper-V). However, Nutanix + Acropolis is certified under the Microsoft Server Virtualization Validation Program (SVVP) and presents block storage via the iSCSI protocol, so Nutanix + Acropolis is 100% supported for MS Exchange as well as other workloads like SharePoint and SQL.

Cost (CAPEX)

Unlike other hypervisors and management solutions, Acropolis and the Acropolis Hypervisor (AHV) come free with every Nutanix node, which eliminates the licensing cost of the virtualization layer.

Acropolis management components also do not require the purchase or installation of Tier 1 database platforms; all required management components are built into the distributed platform and scale automatically as clusters are expanded. As a result, not even Windows operating system licenses are required.

Nutanix + Acropolis therefore gives Exchange deployments all of the following virtualization features, at no cost:

  • High Availability & Live Migration
  • Hardware abstraction
  • Performance monitoring
  • Centralized management

Complexity (CAPEX & OPEX)

Nutanix XCP + Acropolis can be deployed in a fully optimal configuration, from out of the box to operational, in less than 60 minutes. This includes all required management components, which are automatically deployed as part of the Nutanix Controller VM (CVM). For single-cluster environments, no design or installation work is required for any management components, and for multi-cluster environments, only a single virtual appliance (PRISM Central) is required for single-pane-of-glass management across all clusters.

Acropolis gives Exchange deployments all the advantages of Virtualization without:

  • Complexity of deploying/maintaining database server/s to support management components
  • Deployment of dedicated management clusters to house management workloads
  • Having onsite Subject Matter Experts (SMEs) in Virtualization platform/s

Virtualization adds minimal value

While applications such as Exchange have application-level high availability, virtualization can further improve resiliency and flexibility for the application while making better use of infrastructure investments.

The Nutanix XCP, including Acropolis + the Acropolis Hypervisor (AHV), ensures the infrastructure is completely abstracted from the operating system and application, allowing it to deliver a more highly available and resilient platform.

Microsoft's advice is to limit compute resources per Exchange server to a maximum of 24 CPU cores and 96GB RAM. With CPU core counts continuing to increase, however, this cap may result in larger numbers of servers being purchased and maintained where an application-specific silo is deployed, leading to increased datacenter and licensing costs, not to mention the operational overhead of managing more infrastructure. As a result, being able to run Exchange alongside other workloads in a mixed environment (where contention can easily be avoided) reduces the total cost of infrastructure while providing higher levels of availability to all workloads.
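To illustrate why this matters, here is a hypothetical sizing sketch. The 44-core host is an assumed figure for a modern dual-socket server, not from Microsoft or Nutanix; the 24-core cap is from Microsoft's guidance above.

```python
# Hypothetical sizing sketch: physical Exchange silo vs virtualized Exchange.
# The 44-core host is an assumed figure; the 24-core cap comes from
# Microsoft's sizing guidance quoted above.

cores_per_host = 44          # assumption: modern dual-socket server (2 x 22)
exchange_core_cap = 24       # Microsoft's recommended per-server maximum

# Physical silo: one Exchange server per host; the remaining cores sit idle.
idle_cores = cores_per_host - exchange_core_cap      # 20 cores wasted

# Virtualized: a 24-vCPU Exchange VM shares the host, so those ~20 cores
# can run other workloads instead of sitting idle.
silo_utilization = exchange_core_cap / cores_per_host

print(f"Idle cores per silo host: {idle_cores}")            # -> 20
print(f"Silo CPU utilization cap: {silo_utilization:.0%}")  # -> 55%
```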

Virtualization allows Exchange servers to be sized for the current workload and resized quickly and easily if/when required, which ensures oversizing is avoided.

Some of the benefits include:

  • Minimizing infrastructure in the datacenter
  • Increasing utilization and therefore value for money of infrastructure
  • Removal of application specific silos
  • Ability to upgrade/replace/perform maintenance on hardware with zero impact to application/s
  • Faster deployment of new Exchange servers
  • Increased availability and higher fault tolerance
  • Self-healing capabilities at the infrastructure layer to complement application-level high availability
  • Ability to increase compute/storage resources beyond that of the current underlying physical server (Nutanix node), e.g. adding storage capacity/performance

The Nutanix XCP Advantages (for Exchange)

  • More usable capacity

With features such as in-line compression giving between 1.3:1 and 1.7:1 capacity savings, and Erasure Coding (EC-X) providing up to a further 60% usable capacity, Nutanix XCP can provide more usable capacity than raw while still protecting against SSD/HDD and entire-server failures.

In-line compression also improves the performance of the SATA drives, so it's a win/win. Erasure coding (EC-X) stores data in a more efficient manner, which allows more data to be served from the SSD tier: also a win/win.
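To make the capacity claim concrete, here is a rough sketch of the math. The figures are assumptions for illustration: RF2 keeping two copies of all data, a hypothetical 4+1 EC-X stripe, and the low end of the compression range (1.3:1).

```python
# Rough capacity sketch with assumed figures: RF2 (two copies of all data),
# a hypothetical 4+1 EC-X stripe, and the low-end 1.3:1 compression ratio.

raw_tb = 10.0

# RF2: every write is stored twice, so usable capacity is half of raw.
usable_rf2_tb = raw_tb / 2                   # 5.0 TB

# EC-X (4+1 stripe): 4 data blocks protected by 1 parity block,
# so 4/5 of raw is usable -- 60% more usable capacity than RF2.
usable_ecx_tb = raw_tb * 4 / 5               # 8.0 TB

# 1.3:1 in-line compression multiplies the effective capacity.
effective_tb = usable_ecx_tb * 1.3           # 10.4 TB -- more than raw

print(f"RF2 usable:  {usable_rf2_tb:.1f} TB")
print(f"EC-X usable: {usable_ecx_tb:.1f} TB (+{usable_ecx_tb / usable_rf2_tb - 1:.0%})")
print(f"Compressed:  {effective_tb:.1f} TB effective (> {raw_tb:.0f} TB raw)")
```

Under these assumed figures, the effective capacity (10.4TB) exceeds the raw capacity (10TB) while the data remains protected against drive and node failures.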

  • More Messages/Day and/or Users per physical CPU core

With all write I/O serviced by SSD, CPU WAIT time is significantly reduced, which frees the physical CPU to perform other work rather than waiting for a slow SATA drive to respond. As MS Exchange is CPU intensive (especially from 2013 onwards), this means more messages per day and/or users can be supported per MSR VM compared to physical servers.

  • Better user experience

As Nutanix XCP is a hybrid platform (SSD + SATA), newer/hotter data is serviced by the SSD tier, which means faster response times for users AND less CPU WAIT, further increasing CPU efficiency and again leading to more messages/day and/or users per CPU core.

Summary:

With cost (CAPEX), complexity (CAPEX & OPEX) and supportability concerns well and truly addressed, and with numerous clear value-adds, running a business-critical application like MS Exchange on Nutanix + the Acropolis Hypervisor (AHV) makes a lot of sense for many customers.