Sizing infrastructure based on vendor Data Reduction assumptions – Part 2

In Part 1, we discussed how data reduction ratios can, and do, vary significantly between customers and datasets, and that making assumptions about data reduction ratios, even when vendors provide guarantees, does not protect you from potentially serious problems if those ratios are not achieved.

In Part 2 we will go through an example of how misleading data reduction guarantees can be.

One HCI manufacturer provides a guarantee promising 10:1, which sounds too good to be true, and that’s because, quite frankly, it isn’t true. The guarantee includes a significant caveat for the 10:1 data reduction:

The savings/efficiency are based on the assumption that you configure a backup policy to take at least one <redacted> backup per day of every virtual machine on every<redacted> system in a given VMware Datacenter with those backups retained for 30 days.

I have a number of issues with this limitation, including:

  1. The use of the word “backup” when referring directly/indirectly to data reduction (savings)
  2. The use of the word “backup” when referring to metadata copies within the same system
  3. No actual deduplication or compression is required to achieve the 10:1 data reduction, because metadata copies (or what the vendor incorrectly calls “backups”) are counted towards deduplication.

It is important to note that I am not aware of any other vendor who claims that metadata copies (snapshots / point-in-time copies / recovery points etc.) are deduplication. They simply are not.

I have previously written about what should be counted in deduplication ratios, and I encourage you to review that post and share your thoughts, as it is still a hot topic and one where, in my experience, customers are regularly being oversold and misled.

Now let’s do the math on my claim that no actual deduplication or compression is required to achieve the 10:1 ratio.

Let’s use a single 1 TB VM as a simple example. Note: The size doesn’t matter for the calculation.

Take one “backup” (even though we all know this is not a backup!) per day for 30 days and count each copy as if it were a full backup to disk: the data logically stored now equals 31TB (1TB + 30TB).

The actual size on disk is only a tiny amount of metadata higher than the original 1TB, as the metadata copies are just pointers and don’t create any copies of the data, which is another reason they are not backups.

Then because these metadata copies are counted as deduplication, the vendor reports a data efficiency of 31:1 in its GUI.

Therefore, the effective capacity saving is 96.8% (1TB / 31TB ≈ 0.032, i.e. a saving of 1 - 0.032), a calculation which is rigged to exceed 90% every time.
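
To make the arithmetic explicit, here is a minimal sketch of the calculation described above. It assumes, as the guarantee’s wording implies, that each daily metadata copy is counted at the full logical size of the VM; the tiny metadata overhead figure is an assumption for illustration only.

```python
# Minimal sketch: how counting metadata copies as "backups" inflates the
# reported data efficiency ratio without any real deduplication or compression.

vm_size_tb = 1.0               # the example 1TB VM (the size doesn't matter)
daily_copies = 30              # one "backup" per day, retained for 30 days
metadata_overhead_tb = 0.001   # assumed tiny overhead for the copy pointers

logical_tb = vm_size_tb + daily_copies * vm_size_tb  # 31TB "logically stored"
physical_tb = vm_size_tb + metadata_overhead_tb      # ~1TB actually on disk

ratio = logical_tb / physical_tb        # reported efficiency, ~31:1
saving = 1 - physical_tb / logical_tb   # reported saving, ~96.8%

print(f"Reported efficiency {ratio:.1f}:1, reported saving {saving:.1%}")
# -> Reported efficiency 31.0:1, reported saving 96.8%
```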

So the only significant capacity savings which are guaranteed come from “backups”, not from actual reduction of the customer’s data by capacity saving technologies.

As every modern storage platform I can think of has the capability to create metadata-based point-in-time recovery points, this is not a new or even a unique feature.

So back to our topic: if you’re sizing your infrastructure based on the assumption of 10:1 data efficiency, you are in for a rude shock.

Dig a little deeper into the “guarantee” and we find the following:

It’s the ratio of storage capacity that would have been used on a comparable traditional storage solution to the physical storage that is actually used in the <redacted> hyperconverged infrastructure.  ‘Comparable traditional solutions’ are storage systems that provide VM-level synchronous replication for storage and backup and do not include any deduplication or compression capability.

So if, for example, you had a five-year-old NetApp FAS with deduplication and/or compression enabled, the guarantee would only apply if you turned those features off, allowed the data to be rehydrated, and then compared the results with this vendor’s data reduction ratio.

So to summarize, this “guarantee” lacks integrity because of how misleading it is. It is worthless to any customer using any form of enterprise storage platform from roughly the last 5 to 10 years, as the capacity savings from metadata-based copies are, and have been, table stakes for many, many years from multiple vendors.

So what guarantee does that vendor provide for actual compression and deduplication of the customer’s data? The answer is NONE, as it’s all metadata copies, or what I like to call “Smoke and Mirrors”.

Summary:

“No one will question your integrity if your integrity is not questionable.” In this case the guarantee, and the people promoting it, have questionable integrity, especially when many customers may not be aware of the difference between metadata copies and actual copies of data, which matters critically when it comes to backups. Many customers don’t (and shouldn’t have to) know the intricacies of data reduction; they just want an outcome, and 10:1 data efficiency (saving) sounds to any reasonable person like “I need 10x less than I have now”, which is clearly not the case with this vendor’s guarantee or product.

Apart from a few exceptions which will not be applicable for most customers, 10:1 data reduction is way outside the ballpark of what is realistically achievable without using questionable measurement tactics such as counting metadata copies, snapshots, recovery points, etc.

In my opinion, the delta in data reduction ratio between the major storage vendors for the same dataset is not a significant factor when making a decision on a platform, because there are countless other, substantially more critical factors to consider. When the topic of data reduction comes up in meetings, I go out of my way to ensure the customer understands this and has covered off the other areas, like availability, resiliency, recoverability, manageability, security and so on, before I, quite frankly, waste their time talking about a table-stakes capability like data reduction.

I encourage all customers to demand nothing less of vendors than honesty and integrity, and in the event a vendor promises you something, hold them accountable to deliver the outcome they promised.

MS Exchange on Nutanix Acropolis Hypervisor (AHV)

While virtualization of MS Exchange is now common across multiple hypervisors, it continues to be a hotly debated topic. The most common objection is cost (CAPEX), the next is complexity (which translates to CAPEX & OPEX), and the third is that virtualization adds minimal value because MS Exchange provides application-level high availability. The other objection I hear is that virtualization isn’t supported, which always makes me laugh.

In my experience, the above objections are typically given in the context of a dedicated MS Exchange environment, and in that specific context some of the points hold some truth. But the question becomes: how many customers run only MS Exchange? In my experience, none.

Customers I see typically run tens, hundreds, even thousands of workloads in their datacenters, so architecting silos for each application is what actually leads to cost and complexity once we look beyond a single application.

Since most customers have virtualization and want to remove silos in favour of a standardized platform, MS Exchange is just another Business Critical Application which needs to be considered.

Let’s discuss each of the common objections and how I believe Acropolis + Nutanix XCP addresses these challenges:

Microsoft Support for Virtualization

For some reason, there is a huge amount of FUD regarding Microsoft support for virtualization (other than Hyper-V), but Nutanix + Acropolis is certified under the Microsoft Server Virtualization Validation Program (SVVP) and runs on block storage via the iSCSI protocol, so Nutanix + Acropolis is 100% supported for MS Exchange as well as other workloads like SharePoint & SQL.

Cost (CAPEX)

Unlike other hypervisors and management solutions, Acropolis and the Acropolis Hypervisor (AHV) come free with every Nutanix node, which eliminates the licensing cost for the virtualization layer.

Acropolis management components also do not require the purchase or installation of Tier 1 database platforms; all required management components are built into the distributed platform and scale automatically as clusters are expanded. As a result, not even Windows operating system licenses are required.

Nutanix + Acropolis therefore gives Exchange deployments all of the Virtualization features below, which provide benefits at no cost:

  • High Availability & Live Migration
  • Hardware abstraction
  • Performance monitoring
  • Centralized management

Complexity (CAPEX & OPEX)

Nutanix XCP + Acropolis can be deployed in a fully optimal configuration, from out of the box to operational, in less than 60 minutes. This includes all required management components, which are automatically deployed as part of the Nutanix Controller VM (CVM). For single-cluster environments, no design/installation is required for any management components, and for multi-cluster environments, only a single virtual appliance (PRISM Central) is required for single-pane-of-glass management across all clusters.

Acropolis gives Exchange deployments all the advantages of Virtualization without:

  • Complexity of deploying/maintaining database server/s to support management components
  • Deployment of dedicated management clusters to house management workloads
  • Having onsite Subject Matter Experts (SMEs) in Virtualization platform/s

Virtualization adds minimal value

While applications such as Exchange have application level high availability, Virtualization can further improve resiliency and flexibility for the application while making better use of infrastructure investments.

The Nutanix XCP, including Acropolis + the Acropolis Hypervisor (AHV), ensures the infrastructure is completely abstracted from the operating system and application, allowing it to deliver a more highly available and resilient platform.

Microsoft’s advice is to limit the maximum compute resources per Exchange server to 24 CPU cores and 96GB RAM. However, with CPU core counts continuing to increase, this may result in larger numbers of servers being purchased and maintained where an application-specific silo is deployed, which leads to increased datacenter and licensing costs, not to mention the operational overhead of managing more infrastructure. As a result, being able to run Exchange alongside other workloads in a mixed environment (where contention can easily be avoided) reduces the total cost of infrastructure while providing higher levels of availability to all workloads.
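
As a rough, hypothetical illustration of the point above, the sketch below compares the number of physical servers a dedicated Exchange silo would need when each Exchange server is capped at 24 cores, versus how many hosts’ worth of cores the same workload consumes when virtualized alongside other workloads. The total core requirement and per-host core count are assumed figures for illustration only, not from any real design.

```python
import math

# Hypothetical illustration: the 24-core-per-Exchange-server guidance drives
# server count in a dedicated physical silo, whereas virtualizing Exchange
# alongside other workloads lets the remaining cores do useful work.
# All figures below are assumptions for illustration only.

exchange_cores_required = 120       # assumed total cores the Exchange design needs
max_cores_per_exchange_server = 24  # per-server compute cap from Microsoft's guidance
cores_per_modern_host = 48          # assumed modern two-socket host

# Dedicated silo: one physical server per Exchange server, capped at 24 cores each,
# so any cores above 24 per box sit idle.
silo_servers = math.ceil(exchange_cores_required / max_cores_per_exchange_server)

# Virtualized mixed cluster: Exchange VMs (still <= 24 vCPUs each) share hosts
# with other workloads, so the spare cores are consumed by other VMs.
hosts_worth_of_cores = exchange_cores_required / cores_per_modern_host

print(f"Dedicated silo: {silo_servers} physical servers")
print(f"Mixed cluster: ~{hosts_worth_of_cores:.1f} hosts' worth of cores")
# -> Dedicated silo: 5 physical servers
# -> Mixed cluster: ~2.5 hosts' worth of cores
```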

Virtualization allows Exchange servers to be sized for the current workload and resized quickly and easily if/when required, which ensures oversizing is avoided.

Some of the benefits include:

  • Minimizing infrastructure in the datacenter
  • Increasing utilization and therefore value for money of infrastructure
  • Removal of application specific silos
  • Ability to upgrade/replace/perform maintenance on hardware with zero impact to application/s
  • Faster deployment of new Exchange servers
  • Increase availability and provide higher fault tolerance
  • Self-healing capabilities at the infrastructure layer to complement application-level high availability
  • Ability to increase Compute/Storage resources beyond that of the current underlying physical server (Nutanix node) e.g.: Add storage capacity/performance

The Nutanix XCP Advantages (for Exchange)

  • More usable capacity

With features such as In-Line compression giving between 1.3:1 and 1.7:1 capacity savings, and Erasure Coding providing up to a further 60% usable capacity, Nutanix XCP can provide more usable capacity than RAW while providing protection from SSD/HDD and entire server failures.

In-Line compression also improves performance of the SATA drives, so it’s a win/win. Erasure Coding (EC-X) stores data in a more efficient manner, which allows more data to be served from the SSD tier, also a win/win.
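
As a rough sketch of how “more usable capacity than RAW” can work out, assuming a two-copy resilience factor, a mid-range 1.5:1 compression ratio, and the full 60% EC-X uplift quoted above (actual results vary by dataset and configuration):

```python
# Rough sketch: effective capacity with in-line compression and EC-X.
# Assumptions for illustration only: data is kept as two copies for resilience,
# compression achieves 1.5:1 (mid-point of the 1.3-1.7 range quoted above),
# and EC-X delivers the full "up to 60%" additional usable capacity.

raw_tb = 20.0            # raw cluster capacity
copies = 2               # two copies of each piece of data (assumption)
ecx_uplift = 1.6         # up to 60% more usable capacity from EC-X (assumption)
compression_ratio = 1.5  # 1.5:1 in-line compression (assumption)

usable_tb = raw_tb / copies                             # 10TB usable with two copies
usable_after_ecx_tb = usable_tb * ecx_uplift            # 16TB usable after EC-X
effective_tb = usable_after_ecx_tb * compression_ratio  # 24TB of logical data stored

print(f"Raw: {raw_tb:.0f}TB, effective capacity: {effective_tb:.0f}TB")
# -> Raw: 20TB, effective capacity: 24TB (i.e. more than RAW)
```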

  • More Messages/Day and/or Users per physical CPU core

With all Write I/O serviced by SSD, the CPU WAIT time is significantly reduced, which frees up the physical CPU to perform other activities rather than waiting for a slow SATA drive to respond. As MS Exchange is CPU intensive (especially from 2013 onwards), this means more Messages per Day and/or Users can be supported per MSR VM compared to physical servers.

  • Better user experience

As Nutanix XCP is a hybrid platform (SSD+SATA), newer/hotter data is serviced by the SSD tier, which means faster response times for users AND less CPU WAIT, which also helps further increase CPU efficiency, again leading to more Messages/Day and/or Users per CPU core.

Summary:

With Cost (CAPEX), Complexity (CAPEX & OPEX) and supportability issues well and truly addressed, and with numerous clear value-adds, running a business critical application like MS Exchange on Nutanix + Acropolis Hypervisor (AHV) will make a lot of sense for many customers.

Example Architectural Decision – ESXi Host Hardware Sizing (Example 1)

Problem Statement

What is the most suitable hardware specification for this environment’s ESXi hosts?

Requirements

1. Support Virtual Machines of up to 16 vCPUs and 256GB RAM
2. Achieve up to 400% CPU overcommitment
3. Achieve up to 150% RAM overcommitment
4. Ensure cluster performance is both consistent & maximized
5. Support IP based storage (NFS & iSCSI)
6. The average VM size is 1vCPU / 4GB RAM
7. Cluster must support approximately 1000 average-sized virtual machines on day 1
8. The solution should be scalable beyond 1000 VMs (Future-Proofing)
9. N+2 redundancy

Assumptions

1. vSphere 5.0 or later
2. vSphere Enterprise Plus licensing (to support Network I/O Control)
3. VMs range from Business Critical Application (BCAs) to non critical servers
4. Software licensing for applications being hosted in the environment are based on per vCPU OR per host where DRS “Must” rules can be used to isolate VMs to licensed ESXi hosts

Constraints

1. None

Motivation

1. Create a Scalable solution
2. Ensure high performance
3. Minimize HA overhead
4. Maximize flexibility

Architectural Decision

Use Two Socket Servers w/ >= 8 cores per socket with HT support (16 physical cores / 32 logical cores), 256GB RAM, 2 x 10GB NICs

Justification

1. Two socket, 8 core (or greater) CPUs with Hyper-Threading will provide flexibility for CPU scheduling of large numbers of diverse (vCPU-sized) VMs to minimize CPU Ready (contention)

2. Using Two Socket servers of the proposed specification will support the required 1000 average-sized VMs with 18 hosts, with approximately 11% of cluster resources reserved for HA to meet the required N+2 redundancy (see the sizing sketch after this list).

3. A cluster size of 18 hosts will deliver excellent cluster (DRS) efficiency / flexibility with minimal overhead for HA (Only 11%) thus ensuring cluster performance is both consistent & maximized.

4. The cluster can be expanded with up to 14 more hosts (to the 32 host cluster limit) in the event the average VM size is greater than anticipated or the customer experiences growth

5. Having 2 x 10GB connections should comfortably support the IP Storage / vMotion / FT and network data with minimal possibility of contention. In the event of contention Network I/O Control will be configured to minimize any impact (see Example VMware vNetworking Design w/ 2 x 10GB NICs)

6. RAM is one of the most common bottlenecks in a virtual environment; with 16 physical cores and 256GB RAM, this equates to 16GB of RAM per physical core. For the average-sized VM (1vCPU / 4GB RAM), this meets the CPU overcommitment target (up to 400%) with no RAM overcommitment, minimizing the chance of RAM becoming the bottleneck

7. In the event of a host failure, the number of virtual machines impacted will be up to 64 (based on the assumed average-sized VM), which is minimal when compared to a Four Socket ESXi host, which would see 128 VMs impacted by a single host outage

8. If using Four Socket ESXi hosts, the cluster size would be approximately 10 hosts and 20% of cluster resources would have to be reserved for HA to meet the N+2 redundancy requirement. This cluster size is less efficient from a DRS perspective, and the HA overhead would equate to higher CapEx and, as a result, a lower ROI

9. The solution supports Virtual machines of up to 16 vCPUs and 256GB RAM although this size VM would be discouraged in favour of a scale out approach (where possible)

10. The cluster aligns with a virtualization friendly “Scale out” methodology

11. Using smaller hosts (either single socket, or fewer cores per socket) would not meet the requirement to support virtual machines of up to 16 vCPUs and 256GB RAM, would likely require multiple clusters, and would require additional 10GB and 1GB cabling as compared to the Two Socket configuration

12. The Two Socket configuration allows the cluster to be scaled (expanded) at a very granular level (if required) to reduce CapEx and minimize the waste/unused cluster capacity that would result from adding larger hosts

13. Enabling features such as Distributed Power Management (DPM) is more attractive and lower risk for larger clusters and may result in lower environmental costs (i.e. power/cooling)
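
The host count, HA overhead, RAM-per-core and failure impact figures above can be reproduced with a short back-of-the-envelope sketch using only the stated requirements and the chosen host specification:

```python
import math

# Back-of-the-envelope sizing for the Two Socket option, using only the
# stated requirements and the chosen host specification.

vms = 1000                         # Req 7: ~1000 average-sized VMs on day 1
vcpu_per_vm, ram_per_vm_gb = 1, 4  # Req 6: average VM is 1 vCPU / 4GB RAM
cpu_overcommit = 4.0               # Req 2: up to 400% CPU overcommitment
ha_spare_hosts = 2                 # Req 9: N+2 redundancy

cores_per_host = 16    # 2 sockets x 8 cores
ram_per_host_gb = 256

# Hosts needed for CPU at 4:1 vCPU-to-core, and for RAM with no overcommitment
hosts_for_cpu = math.ceil(vms * vcpu_per_vm / (cores_per_host * cpu_overcommit))  # 16
hosts_for_ram = math.ceil(vms * ram_per_vm_gb / ram_per_host_gb)                  # 16

cluster_size = max(hosts_for_cpu, hosts_for_ram) + ha_spare_hosts  # 18 hosts
ha_overhead = ha_spare_hosts / cluster_size                        # ~11% reserved for HA
ram_per_core = ram_per_host_gb / cores_per_host                    # 16GB RAM per physical core
max_vms_per_host = ram_per_host_gb // ram_per_vm_gb                # up to 64 VMs per host failure

print(cluster_size, f"{ha_overhead:.0%}", ram_per_core, max_vms_per_host)
# -> 18 11% 16.0 64
```

Running the same arithmetic for the Four Socket alternative (32 cores / 512GB per host) gives roughly 10 hosts, a 20% HA reserve and up to 128 VMs impacted per host failure, which is the comparison made in points 7 and 8.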

Alternatives

1. Use Four Socket Servers w/ >= 8 cores per socket, 512GB RAM, 4 x 10GB NICs
2. Use Single Socket Servers w/ >= 8 cores, 128GB RAM, 2 x 10GB NICs
3. Use Two Socket Servers w/ >= 8 cores, 512GB RAM, 2 x 10GB NICs
4. Use Two Socket Servers w/ >= 8 cores, 384GB RAM, 2 x 10GB NICs
5. Have two clusters of 9 hosts with the recommended hardware specifications

Implications

1. Additional IP addresses for ESXi Management, vMotion, FT & Out of band management will be required as compared to a solution using larger hosts

2. Additional out of band management cabling will be required as compared to a solution using larger hosts

Related Articles

1. Example Architectural Decision – Network I/O Control for ESXi Host using IP Storage (4 x 10 GB NICs)

2. Example VMware vNetworking Design w/ 2 x 10GB NICs

3. Network I/O Control Shares/Limits for ESXi Host using IP Storage

4. VMware Clusters – Scale up for Scale out?

5. Jumbo Frames for IP Storage (Do not use Jumbo Frames)

6. Jumbo Frames for IP Storage (Use Jumbo Frames)
