When considering a public cloud offering such as Amazon EC2 (AWS) combined with a product such as Nutanix Clusters or VMware Cloud (VMC), it’s important to understand how the solution works and the resulting total cost of ownership (TCO) and return on investment (ROI).
One simple yet often overlooked factor is how much of the bare metal resources can actually be used.
As I’ve previously highlighted, comparisons between products are often based on marketing material, in many cases “tick-box” style slides, and worse still these are taken at face value, which can lead to incorrect assumptions about critical architectural/sizing considerations such as capacity, resiliency and performance.
Let me give you a simple example using AWS i3.metal instances with VMC:
Here we see VMC delivering 31.1TB of usable capacity.
VMC is based on vSAN, which organises flash devices into “disk groups”, each requiring a dedicated “cache” drive. For VMC, VMware has chosen to use two disk groups per node, which means a ratio of one cache drive to three capacity drives.
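As a rough sketch of how disk groups erode capacity: each i3.metal instance exposes 8 x 1.9TB NVMe devices, and with two disk groups per node, two of those devices are reserved as cache and contribute nothing to usable capacity. The figures below are raw-capacity arithmetic only, not an exact usable-capacity calculation (which also depends on filesystem, slack-space and resiliency overheads on both platforms):

```python
# Raw capacity arithmetic for a 3-node i3.metal cluster (sketch only).
# Assumes AWS i3.metal: 8 x 1.9TB NVMe devices per node.
NODES = 3
DRIVES_PER_NODE = 8
DRIVE_TB = 1.9

# vSAN: 2 disk groups per node => 2 cache drives reserved per node,
# leaving 6 capacity drives per node contributing to usable capacity.
CACHE_DRIVES_PER_NODE = 2
vsan_raw_tb = NODES * (DRIVES_PER_NODE - CACHE_DRIVES_PER_NODE) * DRIVE_TB

# Nutanix AOS: no dedicated cache drives, all devices hold data.
aos_raw_tb = NODES * DRIVES_PER_NODE * DRIVE_TB

print(f"vSAN raw (capacity tier only): {vsan_raw_tb:.1f} TB")
print(f"AOS raw (all drives):          {aos_raw_tb:.1f} TB")
print(f"Raw capacity lost to cache drives: {aos_raw_tb - vsan_raw_tb:.1f} TB")
```

Across three nodes, that is 6 x 1.9TB of raw flash (11.4TB) sitting in the cache tier before any other overheads are applied.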
Now let’s look at the usable capacity for Nutanix Clusters on AWS.
Thanks to Nutanix AOS, 3 x i3.metal nodes deliver 39.8TB of usable capacity, which is roughly a 28% increase over the VMC offering.
Even with a small 3 node environment, it’s clear that Nutanix Clusters provides a much better ROI for the i3.metal instances in AWS.
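The percentage difference between the two usable-capacity figures quoted above can be checked directly:

```python
# Usable capacity reported for 3 x i3.metal nodes (figures from the text above).
vmc_usable_tb = 31.1      # VMC on AWS (vSAN)
nutanix_usable_tb = 39.8  # Nutanix Clusters on AWS (AOS)

extra_tb = nutanix_usable_tb - vmc_usable_tb
increase_pct = extra_tb / vmc_usable_tb * 100

print(f"Additional usable capacity: {extra_tb:.1f} TB "
      f"({increase_pct:.1f}% more than VMC)")
```

That extra capacity comes from the same bare metal instances at the same AWS price, which is where the ROI difference originates.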
This is because Nutanix AOS does not use the outdated and architecturally flawed concept of dedicated cache and capacity drives.
All flash devices in a Nutanix node are used for both writes and reads. This means that regardless of the hardware type or the number of flash devices, hardware efficiency is always maximised, delivering maximum performance without ever compromising resiliency.
VMC, on the other hand (as it’s based on vSAN), needs to compromise capacity for performance, as multiple drives must be reserved for cache both to deliver reasonable levels of performance and to avoid a single cache SSD failure taking an entire node’s storage offline.
With Nutanix, performance and resiliency are optimal by default without compromising usable capacity.
Next up, we’ll discuss how things change as cluster sizes increase, and what impact/benefit capacity efficiency technologies such as Erasure Coding & Compression have on both solutions in AWS.
- Public Cloud Challenges – Part 1 – Network performance
- Public Cloud Challenges – Part 2 – TCO/ROI & Storage Capacity
- Public Cloud Challenges – Part 3 – TCO/ROI & Storage Capacity at scale
- Public Cloud Challenges – Part 4 – Data Efficiency Technologies & Resiliency considerations.
- Public Cloud Challenges – Part 5 – Storage device failures & resiliency implications
- Public Cloud Challenges – Part 6 – Bare Metal Instance failures
- HCI Architecture Matters – Nutanix AOS vs the competition & their Cache Drives & Disk Groups
- Usable Capacity Comparison – Nutanix ADSF vs VMware vSAN
- Deduplication & Compression Comparison – Nutanix ADSF vs vSAN
- Erasure Coding Comparison – Nutanix ADSF vs vSAN
- Scaling Storage Capacity – Nutanix & vSAN
- Drive failure Comparison – Nutanix ADSF vs VMware vSAN
- Heterogeneous Cluster Support – Nutanix vs VMware vSAN
- Write I/O Path Comparison – Nutanix vs VMware vSAN
- Read I/O Path Comparison – Nutanix vs VMware vSAN
- Node Failure Comparison – Nutanix vs VMware vSAN/VxRAIL
- Storage Upgrade Comparison – Nutanix vs VMware vSAN/VxRAIL
- Usable Capacity Comparison PART 2 – Nutanix vs VMware vSAN/VxRAIL
- Memory Usage Comparison – Nutanix vs VMware vSAN/DellEMC VxRAIL
- Network Usage Comparison – Nutanix vs VMware vSAN/DellEMC VxRAIL
- Nutanix | Scalability, Resiliency & Performance
- Nutanix – Erasure Coding (EC-X) Deep Dive
- Performance impact & overheads of Inline Compression on Nutanix?
- My checkbox is bigger than your checkbox! by Hans De Leenheer
- Not all VAAI-NAS storage solutions are created equal.
- Automated Storage Reclaim on Nutanix Acropolis Hypervisor (AHV)