Nutanix Resiliency – Part 10 – Disk Scrubbing / Checksums

In this series I’ve covered a wide range of topics showing how resilient the Nutanix platform is, including maintaining data integrity for new writes during failures and rebuilding after a failure quickly enough to minimise the risk of a subsequent failure causing problems.

Despite all of this information, competing vendors still try to discredit the data integrity that Nutanix provides with claims such as “rebuild performance doesn’t matter if both copies of data are lost”. This is an overly simplistic way to look at things, since the chance of both copies of data being lost is extremely low, and of course Nutanix supports RF3 for customers who wish to store three copies of data for maximum resiliency.

So let’s get into Part 10, where we cover two critical topics, Disk Scrubbing and Checksums, both of which help ensure RF2 and RF3 deployments are extremely resilient and highly unlikely to experience scenarios where data could be lost.

Let’s start with Checksums: what are they?

A checksum is a small amount of data created during a write operation which can later be read back to verify if the actual data is intact (i.e.: not corrupted).

Disk scrubbing, on the other hand, is a background task which periodically checks data for consistency; if any errors are detected, disk scrubbing initiates an error correction process to fix single correctable errors.
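
To make the checksum concept concrete, here is a minimal, purely illustrative Python sketch of the pattern: compute a checksum at write time, store it alongside the data, and re-verify it at read time. The CRC32 function and the in-memory dictionary standing in for a disk are my own simplifications, not how AOS actually implements this.

```python
import zlib

def write_block(storage: dict, block_id: int, data: bytes) -> None:
    """Store the data together with a checksum computed at write time."""
    storage[block_id] = (data, zlib.crc32(data))

def read_block(storage: dict, block_id: int) -> bytes:
    """Re-compute the checksum on read and fail loudly if the data is corrupted."""
    data, stored_checksum = storage[block_id]
    if zlib.crc32(data) != stored_checksum:
        raise IOError(f"Checksum mismatch on block {block_id}: data is corrupted")
    return data

# Usage: corruption is detected the moment the block is read back.
disk = {}
write_block(disk, 0, b"hello world")
disk[0] = (b"hellX world", disk[0][1])   # simulate bit rot on the stored data
try:
    read_block(disk, 0)
except IOError as err:
    print(err)                           # Checksum mismatch on block 0: ...
```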

Nutanix performs checksums for every write operation (RF2 or RF3) and verifies the checksum for every read operation! This means data integrity is part of the IO path and cannot be skipped or turned off.

Data integrity is the number one priority for any storage platform, which is why Nutanix does not and will never provide an option to turn checksums off.

Since Nutanix performs a checksum on read, data being accessed is always being checked. If any form of corruption has occurred, Nutanix AOS automatically retrieves the data from the RF copy, services the IO, and concurrently corrects the error/corruption to ensure subsequent failures do not cause data loss.
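
As a hedged sketch only (reusing the toy structures from the example above, and performing the repair inline rather than concurrently as described), the read path looks roughly like this:

```python
import zlib

def verify(data: bytes, checksum: int) -> bool:
    return zlib.crc32(data) == checksum

def read_with_repair(primary: dict, replica: dict, block_id: int) -> bytes:
    """Serve the read from a healthy copy and re-protect the corrupted one."""
    data, checksum = primary[block_id]
    if verify(data, checksum):
        return data
    # The primary copy failed its checksum: fetch the RF copy, serve the IO from
    # it, and overwrite the corrupted copy so a later failure cannot cause data loss.
    good_data, good_checksum = replica[block_id]
    if not verify(good_data, good_checksum):
        raise IOError(f"Both copies of block {block_id} are corrupted")
    primary[block_id] = (good_data, good_checksum)
    return good_data

# Usage: the replica still holds a good copy, so the read succeeds and the
# primary is repaired as a side effect.
primary = {7: (b"important daXa", zlib.crc32(b"important data"))}  # corrupted copy
replica = {7: (b"important data", zlib.crc32(b"important data"))}  # healthy copy
print(read_with_repair(primary, replica, 7))   # b'important data'
print(primary[7][0])                           # b'important data' (re-protected)
```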

The speed at which Nutanix can rebuild from a node/drive or extent (1MB block of data) failure is critical to maintaining data integrity.

But what about cold data?

Many environments have huge amounts of cold data, meaning it’s not being accessed frequently, so the checksum-on-read operation won’t be checking that data as often (if at all). So how do we protect that data?

Simple, Disk Scrubbing.

For data which has not been accessed via front end read operations (i.e.: Reads from a VM/app), the Nutanix implementation of disk scrubbing checks cold data once per day.
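Conceptually, a scrubbing pass is just a background re-read and re-verify of every block. The toy loop below is purely illustrative (single-threaded, in-memory), whereas the real implementation runs concurrently across drives and is paced to complete within the 24-hour window discussed below:

```python
import time
import zlib

def scrub_drive(blocks: dict) -> list:
    """One scrubbing pass: re-read every block and return the IDs that fail their checksum."""
    return [block_id for block_id, (data, checksum) in blocks.items()
            if zlib.crc32(data) != checksum]

def scrub_loop(drives: dict, interval_seconds: int = 24 * 3600) -> None:
    """Scrub every drive once per interval; bad blocks would be repaired from the RF copy."""
    while True:
        for drive_id, blocks in drives.items():
            bad = scrub_drive(blocks)
            if bad:
                print(f"Drive {drive_id}: {len(bad)} corrupted blocks found, repairing from replicas")
        time.sleep(interval_seconds)
```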

The disk scrubbing task is performed concurrently across all drives in the cluster, so the chance of multiple concurrent failures occurring, such as a drive failure plus a corrupted extent (1MB block of data) where both drives store the same data, is extremely low, and that’s assuming you’re using RF2 (two copies of data).

The failures would need to be timed so perfectly that no read operation had occurred on that extent in the last 24hrs AND background disk scrubbing had not been performed on both copies of data AND Nutanix AOS predictive drive failure had not detected a drive degrading and already proactively re-protected the data.

Now assuming that scenario arose, the failed drive would also have to be storing the exact same extent as the corrupted data block. Even in a small 4 node cluster such as an NX3460, you have 24 drives, so the probability is extremely low, as illustrated below. The larger the cluster, the lower the chance of this already unlikely scenario, and the faster the cluster can rebuild, as we’ve learned earlier in the series.
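
Purely as a back-of-the-envelope illustration (and assuming, for simplicity, that the replica of an extent is equally likely to sit on any drive in the other nodes), the numbers look like this; the drive counts are just example values:

```python
# Back-of-the-envelope only: assumes uniform replica placement across the drives
# of the other nodes, which is a simplification of real placement logic.
nodes, drives_per_node = 4, 6                       # e.g. a 4-node, 24-drive cluster
candidate_drives = (nodes - 1) * drives_per_node    # RF copies live on other nodes

# Probability that one specific failed drive happens to hold the replica of the
# one extent that was silently corrupted elsewhere:
p = 1 / candidate_drives
print(f"~{p:.1%}")   # ~5.6%, before even considering the timing conditions above
```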

If you still feel it’s too high a risk and believe all those events will line up perfectly, then deploy RF3; you would then need all the stars to align, in addition to three concurrent failures, to experience data loss.

For those of you who have deployed vSAN, disk scrubbing is only performed once a year AND VMware frequently recommend turning checksums off, including in their SAP HANA documentation (which has subsequently been updated after I called them out), putting customers at a high and unnecessary risk of data loss.

Nutanix also has the ability to monitor the background disk scrubbing activity. The below screenshot shows the scan stats for Disk 126, which in this environment is a 2TB SATA drive at around 75% utilisation.

[Screenshot: Disk scrubbing scan stats for Disk 126]

AOS ensures disk scrubbing occurs at a speed which guarantees the scrubbing of the entire disk, regardless of size, is finished every 24 hours. As per the above screenshot, this scan has been running for 48158724ms (roughly 13.3hrs) with 556459ms (0.15hrs) ETA to complete.

[Screenshot: Scan duration and ETA]
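
For reference, the millisecond figures quoted above convert to hours as follows:

```python
def ms_to_hours(ms: int) -> float:
    return ms / (1000 * 60 * 60)

print(f"elapsed: {ms_to_hours(48_158_724):.2f} hrs")   # ~13.38 hrs
print(f"eta:     {ms_to_hours(556_459):.2f} hrs")      # ~0.15 hrs
```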

If you combine the distributed nature of the Acropolis Distributed Storage Fabric (ADSF), where data is dynamically spread evenly based on capacity and performance, with a cluster’s ability to tolerate multiple concurrent drive failures per node, checksums performed on every read/write operation, disk scrubbing completed every day, proactive monitoring of HDD/SSD health which in many cases re-protects data before a drive fails, as well as the sheer speed at which ADSF can rebuild data following failures, it’s easy to see why even Resiliency Factor 2 (RF2) provides excellent resiliency.

Still not satisfied? Change the Resiliency Factor to 3 (RF3) and you add yet another layer of protection for the workloads you choose to enable RF3 for.

When considering your Resiliency Factor, or Failures to Tolerate in vSAN language, do not make the mistake of thinking two copies of data on Nutanix and vSAN are equivalent. Nutanix RF2 is vastly more resilient than FTT1 (2 copies) on vSAN, which is why VMware frequently recommend FTT2 (3 copies of data). This actually makes sense for the following reasons:

  1. vSAN is not a distributed storage fabric
  2. vSAN rebuild performance is slow and high impact
  3. vSAN disk scrubbing is only performed once a year
  4. VMware frequently recommend turning checksums OFF (!!!)
  5. A single cache drive failure takes an entire disk group offline
  6. With all flash vSAN using compression and/or dedupe, a single drive failure brings down the entire disk group

Architecture matters, and anyone who takes the time to investigate beyond the marketing slides of HCI and storage products will see that Nutanix ADSF is the clear leader, especially when it comes to scalability, resiliency & data integrity.

Other companies/products are clear leaders in Marketecture (to be blunt, bullshit like in-kernel being an advantage and 10:1 dedupe) but Nutanix leads where it matters, with a solid architecture which delivers real business outcomes.

Index:
Part 1 – Node failure rebuild performance
Part 2 – Converting from RF2 to RF3
Part 3 – Node failure rebuild performance with RF3
Part 4 – Converting RF3 to Erasure Coding (EC-X)
Part 5 – Read I/O during CVM maintenance or failures
Part 6 – Write I/O during CVM maintenance or failures
Part 7 – Read & Write I/O during Hypervisor upgrades
Part 8 – Node failure rebuild performance with RF3 & Erasure Coding (EC-X)
Part 9 – Self healing
Part 10 – Disk Scrubbing / Checksums

Integrity of I/O for VMs on NFS Datastores – Part 4 – Torn Writes

This is the fourth part of a series of posts covering how the Integrity of Write I/O is ensured for Virtual Machines when writing to VMDK/s (Virtual SCSI Hard Drives) running on NFS datastores presented via VMware’s ESXi hypervisor.

This part will focus on Torn Write I/O.

As a reminder from the first post, this post is not talking about presenting NFS direct to Windows.

Some of you are probably wondering “What is a Torn Write”?

A Torn Write can occur if there is a problem (e.g.: power or hardware failure) while a multi-sector block is being written.

The image below shows what a Torn Write looks like: part of data A and part of data B remain after the torn write, resulting in corrupted data.

[Image: Torn Write]

Image Source: Silent data corruption in disk arrays: A solution

The article Toward I/O-Efficient Protection Against Silent Data Corruptions in RAID Arrays describes a Torn Write (I/O) as:

“Torn write: When a disk write is issued to a chunk, only a portion of sectors in the chunk are successfully updated, and the chunk contains some stale sectors in the end part.”

The issue with a write I/O spanning multiple sectors is that in the event of a power outage impacting the write-back cache, or a hardware issue such as a drive failing, the I/O may be partially written (or “Torn”). This means the data was not fully written, but some of it was written over the existing data, causing corruption.

In this case, if the storage solution provides a write acknowledgement while the data is only partially written (or not written at all) to persistent media, the result is what is known as silent data corruption, as data read back will be part new data and part old data.
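
A toy model of what ends up on disk after such a failure (sector and block sizes are example values only):

```python
SECTOR = 512   # bytes per sector, example value

def torn_write(old_block: bytes, new_block: bytes, sectors_written: int) -> bytes:
    """Model a failure part-way through a multi-sector write: the first sectors
    hold the new data, the remainder still hold the stale data."""
    cut = sectors_written * SECTOR
    return new_block[:cut] + old_block[cut:]

# An 8-sector (4KB) block where only 3 sectors were persisted before the failure.
old = bytes([0xAA]) * 8 * SECTOR   # data "A" already on disk
new = bytes([0xBB]) * 8 * SECTOR   # data "B" being written
torn = torn_write(old, new, sectors_written=3)
print(torn[:SECTOR] == new[:SECTOR])     # True  -> start of block is new data
print(torn[-SECTOR:] == old[-SECTOR:])   # True  -> end of block is stale data
```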

It should be noted RAID does not protect against Torn writes, nor can it help correct the situation once it has occurred.

The next question is: does the issue of Torn Writes impact VMs on ESXi backed by NFS datastores? The answer is yes, because Torn Writes can potentially occur on any storage solution regardless of the abstracted storage protocol.

So do Torn Writes occur for VMs on ESXi backed by NFS datastores? The answer again would be yes but, importantly, this would not be a result of anything at the hypervisor layer; it would be a result of a failure impacting the underlying storage.

Note: This issue equally impacts block and file based storage presented to ESXi, so it is not an NFS-specific issue.

So what is required to provide protection against Torn Writes?

The best method to protect against Torn Writes is to use checksums, specifically block-level checksums, which can verify the integrity of writes that span multiple sectors. In the event of a torn write, the checksum will fail and a write acknowledgement will not be sent. The important fact here is that the underlying storage is responsible for this process, not ESXi, the VMDK or the storage protocol (FC, FCoE, iSCSI, NFS!) presenting the storage to ESXi.
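
Continuing the toy example from earlier in this post, the sketch below shows why a single checksum covering the whole block catches the mix of new and stale sectors left by a torn write (illustrative values only, not any particular vendor’s implementation):

```python
import zlib

SECTOR, SECTORS_PER_BLOCK = 512, 8   # example geometry only

def block_checksum(block: bytes) -> int:
    """One checksum over the entire multi-sector block, not per sector."""
    return zlib.crc32(block)

# Data "B" is written over data "A" but only 3 of the 8 sectors reach the disk.
old = bytes([0xAA]) * SECTORS_PER_BLOCK * SECTOR
new = bytes([0xBB]) * SECTORS_PER_BLOCK * SECTOR
on_disk = new[:3 * SECTOR] + old[3 * SECTOR:]   # the torn result

# The stored checksum covers the whole block, so the torn mix of sectors cannot
# pass verification: the read fails instead of silently returning bad data, and
# no write acknowledgement would have been sent for the incomplete write.
print(block_checksum(on_disk) == block_checksum(new))   # False
print(block_checksum(on_disk) == block_checksum(old))   # False
```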

In summary, Torn Writes are not an issue with VMs running on ESXi backed by NFS datastores where the underlying storage performs Block level checksums.

I have requested VMware create a Knowledge base article on Torn Writes for formal reference and will update this post with the reference if/when this is done.

In part five, I will discuss Data Corruption.

Integrity of Write I/O for VMs on NFS Datastores Series

Part 1 – Emulation of the SCSI Protocol
Part 2 – Forced Unit Access (FUA) & Write Through
Part 3 – Write Ordering
Part 4 – Torn Writes
Part 5 – Data Corruption

Nutanix Specific Articles

Part 6 – Emulation of the SCSI Protocol (Coming soon)
Part 7 – Forced Unit Access (FUA) & Write Through (Coming soon)
Part 8 – Write Ordering (Coming soon)
Part 9 – Torn I/O Protection (Coming soon)
Part 10 – Data Corruption (Coming soon)

Related Articles

1. What does Exchange running in a VMDK on NFS datastore look like to the Guest OS?
2. Support for Exchange Databases running within VMDKs on NFS datastores (TechNet)
3. Microsoft Exchange Improvements Suggestions Forum – Exchange on NFS/SMB
4. Virtualizing Exchange on vSphere with NFS backed storage

Integrity of I/O for VMs on NFS Datastores – Part 2 – Forced Unit Access (FUA) & Write Through

This is the second part of this series and the focus of this post is a critical requirement for many applications, including MS SQL and MS Exchange (which are designed to work with block-based storage), to operate as designed and ensure data integrity: support for Forced Unit Access (FUA) & Write Through.

As a reminder from the first post, this post is not talking about presenting NFS direct to Windows.

The key here is for the storage solution to honour the “write-to-stable-media” intent and not depend on potentially vulnerable caching/buffering solutions using non-persistent media which may require battery backing.

Microsoft have a Knowledge Base article relating to the requirements for SQL Server which details the FUA & Write Through requirements, along with other requirements covered in this series, which I would recommend reading:

Key factors to consider when evaluating third-party file cache systems with SQL Server

Forced Unit Access (FUA) & Write-Through are supported by VMware, but even with this support it is still up to the underlying storage to honour the request, and this process (or even the level of support) may vary from storage vendor to storage vendor.

A key point here is that this process is delivered by the VMDK at the hypervisor level and passed on to the underlying storage, so regardless of whether the protocol is block (iSCSI/FCP) or file based (NFS), it is the responsibility of the storage solution once the I/O is passed to it from the hypervisor.

Where a write cache on non-persistent media (i.e.: RAM) is used, the storage vendor needs to ensure that in the event of a power outage there is sufficient battery backing to enable the cache to be de-staged to persistent media (i.e.: SSD / SAS / SATA).

Some solutions use a Mirrored Write Cache to attempt to mitigate the risk of power outages causing issues, but this could be argued to be non-compliant with FUA, which intends the Write I/O to be committed to stable media BEFORE the I/O is acknowledged as written.

If the solution does not ensure data is written to persistent media, it is not compliant and applications requiring FUA & Write-Through will likely be impacted at some point.
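
As a rough illustration of the write-through intent (not any vendor’s actual implementation, and bearing in mind the device itself must still honour the flush), a portable approximation in Python is to refuse to acknowledge the write until the OS reports the data has been pushed to stable storage:

```python
import os

def write_through(path: str, data: bytes) -> None:
    """Only return (i.e. acknowledge) once the data has been flushed to stable media."""
    with open(path, "wb") as f:
        f.write(data)
        f.flush()                 # flush Python's userspace buffer
        os.fsync(f.fileno())      # block until the kernel reports the data is on stable storage
    # Only now is it safe to acknowledge the write back to the application.

write_through("fua_demo.bin", b"committed before acknowledgement")
```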

As I work for a storage vendor, I won’t go into detail about any other vendor, but I will have an upcoming post on how Nutanix is in compliance with FUA & Write-Through.

In part three, I will discuss Write Ordering.

Integrity of Write I/O for VMs on NFS Datastores Series

Part 1 – Emulation of the SCSI Protocol
Part 2 – Forced Unit Access (FUA) & Write Through
Part 3 – Write Ordering
Part 4 – Torn Writes
Part 5 – Data Corruption

Nutanix Specific Articles

Part 6 – Emulation of the SCSI Protocol (Coming soon)
Part 7 – Forced Unit Access (FUA) & Write Through (Coming soon)
Part 8 – Write Ordering (Coming soon)
Part 9 – Torn I/O Protection (Coming soon)
Part 10 – Data Corruption (Coming soon)

Related Articles

1. What does Exchange running in a VMDK on NFS datastore look like to the Guest OS?
2. Support for Exchange Databases running within VMDKs on NFS datastores (TechNet)
3. Microsoft Exchange Improvements Suggestions Forum – Exchange on NFS/SMB
4. Virtualizing Exchange on vSphere with NFS backed storage