It’s 2017, let’s review Thick vs Thin Provisioning

For a long time, it has been widely considered that thick provisioning is required to achieve maximum storage performance, and for many years this was a good rule of thumb.

Before we get into details, what are Thick and Thin provisioning?

Thick provisioning is where storage allocated to a LUN, NFS mount or Virtual Disk (such as a VMDK in ESXi, VHDX in Hyper-V or vDisk in AHV) is zeroed out and/or fully reserved regardless of how much capacity is actually used.

Thick provisioning saves the storage subsystem from having to zero out a block before writing new data, which is one of the reasons higher performance could be achieved on many storage platforms.

Thin provisioning, on the other hand, is where storage allocated to a LUN or Virtual Disk is allocated and zeroed only as data is written, which allows physical capacity to be overcommitted.
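To make the difference concrete, here is a minimal, hypothetical Python sketch (the pool size, disk sizes and helper function are invented for this example) showing why thin provisioning allows capacity to be overcommitted while thick provisioning consumes the full provisioned size up front.

```python
# Hypothetical illustration only: how thick vs thin provisioning consume
# physical capacity on a simplified storage pool (all sizes are made up).

def physical_usage_gb(provisioned_gb, written_gb, thick):
    """Thick reserves the full provisioned size up front;
    thin only consumes what has actually been written."""
    return provisioned_gb if thick else written_gb

pool_gb = 10_000              # physical capacity of the pool
disks = [(500, 120)] * 30     # 30 virtual disks: 500GB provisioned, 120GB written

thick_used = sum(physical_usage_gb(p, w, thick=True) for p, w in disks)
thin_used = sum(physical_usage_gb(p, w, thick=False) for p, w in disks)

print(f"Thick: {thick_used}GB consumed of a {pool_gb}GB pool")  # 15000GB -> does not fit
print(f"Thin:  {thin_used}GB consumed of a {pool_gb}GB pool")   # 3600GB  -> fits; 15TB logical overcommitted onto 10TB physical
```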

The advantages of Thick provisioning included easier capacity management, or simply put, “what you see is what you get”, as well as maximum performance on most platforms. But even on older storage platforms, the performance advantage was rarely as significant as people claimed.

VMware conducted a Performance Study of VMware vStorage Thin Provisioning back in the ESXi 4.0 days (~2009) which I will briefly summarise.

On page 6 of the performance study, a graph shows the difference in performance between Thin and Thick VMDKs during zeroing and post-zeroing.

As that graph shows, the performance is almost identical.

The disadvantages of Thick provisioning, though, were and remain significant to this day. They include the inability to overcommit storage, meaning physical free space has to be maintained at multiple layers (RAID group, LUN and Virtual Disk), leading to inefficiency.

The advantages of Thin provisioning include the ability to overcommit storage, which results in more flexibility when sizing LUNs & Virtual Disks and less wasted space. The only real downsides were potentially increased capacity management complexity and lower performance.

I have previously written two example architectural decisions regarding using “Thin on Thin“, meaning thin provisioned virtual disks on a thin provisioned LUN or NFS mount, as well as “Thin on Thick”, meaning thin provisioned virtual disks on a thick provisioned LUN or NFS mount. These two examples cover many of the traditional pros and cons of thick versus thin, so I won’t repeat myself here.

I never wrote an example design decision for Thick on Thick, but this was common practice when provisioning storage was time consuming, difficult and involved lengthy delays to engage subject matter experts.

In early 2015, I wrote a two part blog series comparing SAN/NAS versus Nutanix, where I explained that calculating usable capacity is not as simple as you might think. In part 1, I highlighted that the LUN Provisioning Type is one area which can greatly impact the usable capacity of a traditional storage platform.

But fast forward to the era of hyper-converged platforms like Nutanix and some modern storage arrays, and the major downsides of thin provisioning, namely the complexity of capacity management and reduced performance, have not only been reduced but, at least in the case of Nutanix, have been eliminated altogether.

Let’s address Capacity management w/ Nutanix:

Storage utilisation only needs to be monitored in ONE place, the storage summary which lives on the home screen of the Nutanix HTML 5 UI.

[Image: Nutanix storage summary]

No matter how many nodes are in your cluster, or how many containers (which translate to datastores in a VMware environment), virtual machines, virtual disks or physical servers connecting via ABS you have, this is the only place you need to monitor capacity.

There are no RAID groups, Disk Groups, Aggregates, LUNs etc. where capacity needs to be managed. All nodes in a cluster contribute to the capacity of the cluster, and even when one or more virtual machines use more capacity than the node they run on, the Nutanix Acropolis Distributed Storage Fabric (ADSF) takes care of it.

So issue #1, Capacity management, is solved. Now it’s onto the issue of performance.

Thin Provisioning Performance w/ Nutanix:

When running ESXi, Nutanix presents NFS datastores and supports thick provisioning via the VAAI-NAS Space Reservation primitive as discussed in this post. This allows the creation of thick provisioned (Eager Zeroed or Lazy Zeroed Thick) VMDKs, which traditionally NFS datastores did not support.

However, this was only required for Oracle RAC and VMware Fault Tolerance; it was not a performance requirement.

From a performance perspective, Thin provisioning actually outperforms thick on intelligent storage such as Nutanix. In the specific case of Nutanix, random write I/O is serviced by the fastest tier available (e.g. SSD) via the operations log (OPLOG), which takes the random writes, commits them to persistent media, and then coalesces them into sequential I/O before committing it to SSD and, in the case of hybrid nodes, later tiering data off to lower cost storage.

This means the write penalty for overwriting or zeroing blocks before writing new I/O is eliminated.
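To illustrate the idea (this is a toy conceptual sketch only, not Nutanix’s actual OPLOG implementation; the class and offsets are invented), the pattern of acknowledging random writes once staged and later draining them as sorted, merged, sequential extents could look something like this:

```python
# Conceptual sketch only (not Nutanix's actual OPLOG implementation):
# random writes are acknowledged once staged, then drained to the
# persistent tier as larger, sorted, sequential extents.

class WriteCoalescer:
    def __init__(self):
        self.staged = {}                     # offset -> data, acknowledged immediately

    def write(self, offset, data):
        self.staged[offset] = data           # stage (and ack) the random write
        return "ack"

    def drain(self):
        """Return staged writes sorted by offset, merged into contiguous extents."""
        extents = []
        for offset in sorted(self.staged):
            data = self.staged[offset]
            if extents and extents[-1][0] + len(extents[-1][1]) == offset:
                last_off, last_data = extents[-1]
                extents[-1] = (last_off, last_data + data)   # extend the previous extent
            else:
                extents.append((offset, data))
        self.staged.clear()
        return extents

c = WriteCoalescer()
for off in (4096, 0, 8192, 65536):           # random-order 4KB writes
    c.write(off, b"\x01" * 4096)
print([(off, len(buf)) for off, buf in c.drain()])   # [(0, 12288), (65536, 4096)]
```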

In fact, if you configure thick provisioned virtual disks, as the zeros (or whitespace) are being written by the hypervisor, the Nutanix storage fabric acknowledges every I/O and discards the zeros in favour of storing metadata and simply reserving the capacity. In simple terms, this just means Nutanix has to acknowledge a whole bunch of nothing, and the thick provisioning is achieved with a simple reservation as opposed to zeroing out many GBs or TBs of storage.

This means thick provisioning actually delivers lower performance than thin provisioning on Nutanix.
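Conceptually, the zero-discarding behaviour described above looks something like the following toy sketch (purely illustrative; the class and block size are invented, this is not the Nutanix storage fabric):

```python
# Hypothetical sketch: a backend that acknowledges zero-writes without
# persisting them, recording only a capacity reservation in metadata.

ZERO_BLOCK = b"\x00" * 4096

class ZeroAwareBackend:
    def __init__(self):
        self.data_blocks = {}        # offset -> real data actually stored
        self.reserved_bytes = 0      # capacity reserved via metadata only

    def write(self, offset, block):
        if block == ZERO_BLOCK:
            self.reserved_bytes += len(block)   # nothing written, just reserve space
        else:
            self.data_blocks[offset] = block    # real data is persisted
        return "ack"                            # every write is acknowledged either way

backend = ZeroAwareBackend()
# "Eager zeroing" a 1GB thick disk becomes pure metadata updates:
for i in range(1 * 1024**3 // 4096):
    backend.write(i * 4096, ZERO_BLOCK)

print(backend.reserved_bytes)        # 1073741824 bytes reserved
print(len(backend.data_blocks))      # 0 blocks of real data written
```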

With modern, intelligent storage, there are limited, if any, benefits to using thick provisioning. The only example I can think of is to artificially inflate the deduplication ratio, as thick provisioned virtual disks tend to have a lot of zeros, all of which dedupe. I wrote an article titled “Deduplication ratios – What should be included in the reported ratio?” which covers this point in detail, but in short, don’t create unnecessary data (in this case, zeros) just to inflate your dedupe ratio; it just wastes storage controller resources and achieves no additional benefits.

The following is a comprehensive list of the real world advantages of using thick provisioning on Nutanix.

This space is intentionally left blank

Summary:

For the best efficiency and performance when deploying virtual machines or storage for physical servers via ABS on Nutanix, use thin provisioning!

Ignore the nonsense on Twitter: What does “NoSAN” mean?

Every now and again I see nonsense on Twitter which I feel needs to be responded to. The reason I am responding today is to correct misinformation about what Nutanix NoSAN is.

Earlier today a competitor of Nutanix tweeted the following:

[Tweet screenshot]

I responded to the above with the following tweet:

[Tweet screenshot]

To which the person responded with this:

[Tweet screenshot]

I responded with the below and the conversation ended with the following tweet:

[Tweet screenshot]

So before I correct the misinformation, let me briefly explain what “SAN” is:

“SAN” or “Storage Area Network” describes the connectivity between a compute node and a storage device (such as a central storage array or disk system). You can, for example, buy SAN (or FC) switches from companies like Brocade.

However, the IT industry has, for whatever reason, over the years made “SAN” mean “central disk system / storage array”, so for the purpose of this post, “SAN” is a traditional centralized storage array (SAN/NAS).

So let’s correct the misinformation:

Claim 1: With Nutanix there is a SAN that is auto managed.

Fact: There is no centralized storage with Nutanix

Nutanix software running NDFS (Nutanix Distributed File System) logically presents DAS storage as shared storage across 3 or more nodes via NFS or SMB 3.0 to ESXi, Hyper-V or KVM. Note: While Nutanix supports iSCSI, it’s not recommended as it creates unnecessary complexity and has no technical advantages.

All Nutanix nodes have local DAS storage which is presented logically as shared storage, and there are no “central” Nutanix nodes.

Note: Nutanix nodes can connect to traditional central SAN/NAS storage (see: Can I use my existing SAN/NAS storage with Nutanix), but this is not Nutanix’s native architecture.

SANs also have key characteristics such as Zoning, Masking, LUNs and RAID. SANs also typically use Fibre Channel (FC) connectivity over dedicated fabrics, although this is not always the case.

With Nutanix, there is no:

1. Central storage (SAN or NAS based)
2. LUNs
3. LUN masking
4. Zoning
5. Storage Controller “Pairs”
6. Dedicated Storage Fabric
7. Silos of storage capacity
8. RAID

Therefore the statement about Nutanix being a SAN that is “auto managed” is simply incorrect.

If a SAN “auto manages” LUNs, Zoning, Masking etc., it’s just a smarter SAN. The problems with SAN (and NAS) cannot be solved by simply “masking” the complexity. (Pun intended)

Claim 2: NDFS is a distributed storage array.

Fact: NDFS is a file system, not a storage array.

Nutanix Distributed File System (NDFS) makes up part of the Nutanix solution; it is not a storage array, and it is not centralised storage either.

Nutanix is a scale out, shared nothing platform where data is written locally on the node where the VM is running and distributed (not centralized) across nodes.

So what does NoSAN mean to me?

1. No centralized storage array
2. No LUNs, Zoning, Masking, RAID
3. No dedicated storage fabric (e.g.: Fibre Channel Switches)
4. Reduced complexity
5. No Silos of capacity
6. No Storage Controller “Pairs”

I could go on but I think you get the point.

In conclusion, don’t believe what you hear on social media (especially from competitors of a product) and do your own research and validate your findings from multiple sources.

Example Architectural Decision – Datastore (LUN) Sizing with Block Based Storage

Problem Statement

In a vSphere environment, what is the most suitable Datastore (LUN) sizing to use to support both production & development workloads, to ensure minimum storage overhead and optimal performance?

Requirements

1. RTO 4hrs
2. RPO 12hrs
3. Support Production and Test & Development Workloads
4. Ensure optimal storage capacity utilization
5. Ensure storage performance is both consistent & maximized
6. Ensure the solution is fully supported
7. Minimize BAU effort (Monitoring)

Assumptions

1. Business critical applications are excluded
2. Block based storage
3. VAAI is supported and enabled
4. VADP backups are being utilized
5. vSphere 5.0 or later
6. Storage DRS will not be used
7. SRM is in use
8. LUNs & VMs will be thin provisioned
9. Average size VM will be 100GB and be 50% utilized
10. Virtual machine snapshots will be used but not for > 24 hours
11. Change rate of average VM is <= 15% per 24 hour period
12. Average VM has 4GB Ram
13. No Memory reservations are being used
14. Storage I/O Control (SOIC) is not being used
15. Under normal circumstances storage will not be over committed at the storage array level.
16. The average maximum IOPS per VM is 125 at a 16KB I/O size (MBps per VM <= 2)
17. The underlying storage has sufficient performance to cater for the average maximum IOPS per VM
18. A separate swap file datastore will be configured per cluster

Constraints

1. Must use the existing storage solution (Block Based Storage)

Motivation

1. Increase flexibility
2. Ensure physical disk space is not unnecessarily wasted
3. Create a Scalable solution
4. Ensure high performance
5. Ensure high utilization of storage resources by reducing “islands” of unused capacity
6. Provide flexibility in the unit size of partial SRM failovers

Architectural Decision

The standard datastore size will be 3TB and contain up to 25 standard virtual machines.

This is based on the following

25 VMs per datastore x 100GB (assumes no overcommitment) = 2500GB

25 VMs w/ 4GB RAM = 100GB minus 0GB reservation = 100GB vswap space to be stored on the swap file datastore

25 VMs w/ snapshots of up to 15% = 375GB

Total = 2500GB + 375GB = 2875GB

Average capacity used per VM = 115GB
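For clarity, the sizing arithmetic above, plus the queue depth figure used in Justification point 2 below, can be reproduced with a short Python calculation (treating the 3TB datastore as 3072GB):

```python
# Reproduces the datastore sizing arithmetic above using the stated assumptions.
vms_per_datastore = 25
vm_size_gb = 100                     # average VM size, assumes no overcommitment
snapshot_pct = 0.15                  # snapshot growth of up to 15% per VM
datastore_gb = 3 * 1024              # 3TB datastore treated as 3072GB
queue_depth = 32                     # standard LUN queue depth (used in Justification 2)

vm_capacity = vms_per_datastore * vm_size_gb                        # 2500GB
snapshot_capacity = vms_per_datastore * vm_size_gb * snapshot_pct   # 375GB
total = vm_capacity + snapshot_capacity                             # 2875GB

print(f"Worst case usage: {total:.0f}GB")                               # 2875GB
print(f"Average per VM:   {total / vms_per_datastore:.0f}GB")           # 115GB
print(f"Headroom:         {datastore_gb - total:.0f}GB")                # 197GB
print(f"Min queue slots per VM: {queue_depth / vms_per_datastore:.2f}") # 1.28
```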

Justification

1. In the worst case scenario where every VM has used 100% of its VMDK capacity, has 4GB RAM with no memory reservation, and has a snapshot of up to 15% of its size, the 3TB datastore will still have 197GB remaining; as such it will not run out of space.
2. The queue depth is on a per datastore (LUN) basis; as such, having 25 VMs per LUN allows for a minimum of 1.28 concurrent I/O operations per VM based on the standard queue depth of 32, although it is unlikely all VMs will issue I/O concurrently, so the average will be much higher.
3. Thin Provisioning minimizes the impact of situations where customers demand a lot of disk space up front when they only end up using a small portion of the available disk space
4. Using Thin provisioning for VMs increases flexibility as all unused capacity of virtual machines remains available on the Datastore (LUN).
5. VAAI automatically raises an alarm in vSphere if a Thin Provisioned datastore usage is at >= 75% of its capacity
6. SCSI reservations causing performance issues (increased latency) when thin provisioned virtual machines (VMDKs) grow is unlikely to be a problem for 25 low I/O VMs, and with VAAI it is no longer an issue, as the Atomic Test & Set (ATS) primitive alleviates SCSI reservation contention.
7. As the VMs are low I/O it is unlikely that there will be any significant contention for the queue depth with only 25 VMs per datastore
8. The VAAI UNMAP primitive provides automated space reclamation to reduce wasted space from files or VMs being deleted
9. Virtual machines will be Thin provisioned for flexibility; however, they can also be made Thick provisioned, as the sizing of the datastore (LUN) caters for the worst case scenario of 100% utilization while maintaining free space.
10. Having <=25 VMs per datastore (LUN) allows for more granular SRM fail-over (datastore groups)

Alternatives

1. Use larger Datastores (LUNs) with more VMs per datastore
2. Use smaller Datastores (LUNs) with fewer VMs per datastore

Implications

1. When performing an SRM failover, the most granular failover unit is a single datastore, which may contain up to 25 virtual machines.

2. The solution (day 1) does not provide CapEx savings on disk capacity but will allow (if desired) overcommitment in the future

Thanks to James Wirth (VCDX#83) @JimmyWally81 for his contributions to this example decision.

Related Articles

1. Datastore (LUN) and Virtual Disk Provisioning (Thin on Thick)

2. Datastore (LUN) and Virtual Disk Provisioning (Thin on Thin)

3. Virtual Machine vSwap Location
