Integrity of I/O for VMs on NFS Datastores – Part 5 – Data Corruption

This is the fifth part of a series of posts covering how the Integrity of Write I/O is ensured for Virtual Machines when writing to VMDK/s (Virtual SCSI Hard Drives) running on NFS datastores presented via VMware’s ESXi hypervisor as a “Datastore”.

This part will focus on Data Corruption.

As a reminder from the first post, this post is not talking about presenting NFS direct to Windows.

So why am I covering data corruption? Simple: because there is a misconception that SCSI commands are not properly supported for VMs running on NFS datastores, which leads to corruption. This was covered in Part 1, so Part 5 will focus on data corruption which is not specific to NFS but can affect all storage platforms: how it occurs, and how storage solutions can mitigate the risk of data corruption issues.

The following is a summary of the data provided in the paper An Analysis of Data Corruption in the Storage Stack.

NetApp conducted a large scale study into data corruption, covering more than 1 million HDDs across tens of thousands of NetApp systems over 41 months (2004 – 2007). Long story short, NetApp detected a level of data corruption which surprised me and seems to disprove commonly quoted figures such as the advertised MTBF for HDDs.

The following shows a breakdown of the problems found.

[Image: NetApp failure analysis – breakdown of failure causes for enterprise grade disks (left) and nearline disks (right)]

The first thing I noticed in the above pie charts is the vast difference between the percentage of failures in Enterprise grade disks (left) and nearline based disks (right).

It also shows physical interconnects to be a large percentage of failures, which highlights the need for simplicity in the storage solution. In addition, one of the more surprising results is the level of storage protocol and performance related failures being the cause of corruption.

Note: In this study, the majority of systems deployed were FC (block storage) based, which highlights that a storage protocol itself, regardless of being block or file based, can have issues if improperly implemented. So regardless of storage protocol, corruption can occur.

The below summary of corruption type and percentage of disks affected shows roughly 10x more issues with SATA drives compared to Enterprise grade drives.

[Image: Corruption types and percentage of disks affected – nearline (SATA) vs Enterprise grade drives]

The above also shows that bit corruptions and Torn Writes affect more disks than lost or misdirected writes, which highlights the importance of Torn I/O Protection (covered in Part 4).

The article summarizes its findings in the following points:

[Image: summary of the article’s findings]

The main takeaways from my perspective are:

1. The requirement to have corruption handling mechanisms for any environment running workloads which require data integrity.
2. Data should be spread out (ideally across disks) to minimize the chance of issues.

The article went on to form these conclusions:

[Image: conclusions from the article]

In Summary:

1. Data corruption can occur on JBOD, enterprise grade storage solutions and everything in between.
2. SATA drives have a much higher rate (~10x) of corruption.
3. Enterprise grade drives are much better from a data integrity perspective.
4. Corruption handling via sector and ideally block based checksums is essential on writes.
5. Using a checksum on Read helps detect corrupted data (see the sketch after this list).
6. Corruption can occur even when no ECC errors are reported by a physical HDD.
7. Any storage protocol implementation can have bugs which can lead to corruption.
8. Backup / Recovery solutions are essential. Reliance solely on primary storage or disk-based application level backups puts your data at risk.
9. Solutions solely dependent on application level data protection on disk are at risk of corrupted data being replicated to other active/passive or backup copies.
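
To illustrate points 4 and 5, below is a minimal Python sketch of per-block checksumming on write and verification on read. The names (write_block, read_block, the in-memory "device" and block size) are hypothetical and purely illustrative, not any vendor's implementation; the point is simply that a checksum stored with the data allows silent corruption to be detected on read instead of being returned to the application.

```python
import zlib

BLOCK_SIZE = 4096  # hypothetical block size used for this sketch


def write_block(device, lba, data, checksum_store):
    """Record a per-block checksum at write time, alongside the data."""
    assert len(data) == BLOCK_SIZE
    checksum_store[lba] = zlib.crc32(data)  # checksum recorded before the write is acknowledged
    device[lba] = data


def read_block(device, lba, checksum_store):
    """Verify the stored checksum on read to catch bit rot, torn or misdirected writes."""
    data = device[lba]
    if zlib.crc32(data) != checksum_store.get(lba):
        # A real array would attempt a repair from parity or a replica;
        # here we simply refuse to return silently corrupted data.
        raise IOError(f"Checksum mismatch on LBA {lba}: possible silent corruption")
    return data


# Usage: a silent corruption on "disk" is caught at read time rather than returned to the application.
device, checksums = {}, {}
write_block(device, 100, b"A" * BLOCK_SIZE, checksums)
device[100] = b"B" + b"A" * (BLOCK_SIZE - 1)  # simulate silent corruption of the stored block
try:
    read_block(device, 100, checksums)
except IOError as err:
    print(err)
```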

My final point: enterprise grade storage solutions which use checksums to verify data integrity on writes and reads have a much lower risk of data corruption, regardless of media type and storage protocol.

JBOD style deployments using SATA drives have a significantly higher risk of data corruption, contributed to by the SATA drives' roughly 10x higher corruption rates and the lack of the enterprise grade checksum features found in some shared storage (SAN/NAS) solutions.

Integrity of Write I/O for VMs on NFS Datastores Series

Part 1 – Emulation of the SCSI Protocol
Part 2 – Forced Unit Access (FUA) & Write Through
Part 3 – Write Ordering
Part 4 – Torn Writes
Part 5 – Data Corruption

Nutanix Specific Articles

Part 6 – Emulation of the SCSI Protocol (Coming soon)
Part 7 – Forced Unit Access (FUA) & Write Through (Coming soon)
Part 8 – Write Ordering (Coming soon)
Part 9 – Torn I/O Protection (Coming soon)
Part 10 – Data Corruption (Coming soon)

Related Articles

1. What does Exchange running in a VMDK on NFS datastore look like to the Guest OS?
2. Support for Exchange Databases running within VMDKs on NFS datastores (TechNet)
3. Microsoft Exchange Improvements Suggestions Forum – Exchange on NFS/SMB
4. Virtualizing Exchange on vSphere with NFS backed storage

Integrity of I/O for VMs on NFS Datastores – Part 3 – Write Ordering

This is the third part of a series of posts covering how the Integrity of I/O is ensured for Virtual Machines when writing to VMDK/s (Virtual SCSI Hard Drives) running on NFS datastores presented via VMware’s ESXi hypervisor as a “Datastore”.

As a reminder from the first post, this post is not talking about presenting NFS direct to Windows.

Write Ordering / Order Preservation

Another common concern when running business critical applications such as MS SQL and MS Exchange is Write Ordering, and whether/how this is handled by the SCSI protocol emulation process.

This requirement is described by Microsoft as:

The order of the I/O operations associated with SQL Server must be maintained. The system must maintain write ordering or it breaks the WAL protocol as described in this paper. (The log records must be written out in correct order and the log records must always be written to stable media before the data pages that the log records represent are written.) After a transaction log record is successfully flushed, the associated data page can then be flushed as well. If the subsystem allows the data page to reach stable media before the log record does, data integrity is breached.

Source: Microsoft SQL Server I/O basics.
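
To make this requirement concrete, below is a minimal, hypothetical Python sketch of write-ahead logging, in which the log record is forced to stable media before the data page it describes is written. The function and file names are illustrative only, not SQL Server's actual implementation.

```python
import os


def commit_transaction(log_path, data_path, log_record, data_page):
    """Hypothetical write-ahead-logging sketch: the log record must reach
    stable media before the data page it describes is written."""
    # 1. Append the log record and force it to stable media
    #    (this is where FUA / write-through, covered in Part 2, matters).
    with open(log_path, "ab") as log:
        log.write(log_record)
        log.flush()
        os.fsync(log.fileno())  # log record is now on stable media

    # 2. Only after the log flush succeeds may the data page be written.
    with open(data_path, "ab") as data:
        data.write(data_page)
        data.flush()
        os.fsync(data.fileno())


# If the storage stack reordered these two steps, a crash in between could leave a data
# page on disk with no log record describing it - exactly the WAL breach described above.
commit_transaction("example.log", "example.data", b"BEGIN;UPDATE...;COMMIT\n", b"\x00" * 8192)
```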

VMware have released a Knowledge Base article specifically on this topic which states the following.

Write ordering and write-through integrity for NFS storage are both satisfied with NFS in an VMware ESX environment.
An NFS datastore, when mounted on an ESX host, goes through virtual SCSI emulation. A virtual machine disk (VMDK) file on an NFS datastore appears as a SCSI disk within the virtual machine’s guest operating system, which is no different than one residing on a VMFS volume over FCP or iSCSI protocol. Therefore, write ordering and write-through integrity are no different than those with block based storage (such as iSCSI or FCP protocol).
The above is the bulk of the article, but the full article can be found below.

Maintaining write ordering and write-through integrity using NFS in an ESX environment (KB1012143)

So, as with Forced Unit Access (FUA) & Write-Through, Write Ordering is supported by VMware. Even with this support, it is still a function of the underlying storage to honour the request, and this process (or even the level of support) may vary from storage vendor to storage vendor.

Again, the point here is that this process is delivered by the VMDK at the hypervisor level and passed on to the underlying storage, so regardless of whether the protocol is block based (iSCSI/FCP) or file based (NFS), it is the responsibility of the storage solution once the I/O is passed to it from the hypervisor.

In part four, I will discuss Torn I/O Protection.

Integrity of Write I/O for VMs on NFS Datastores Series

Part 1 – Emulation of the SCSI Protocol
Part 2 – Forced Unit Access (FUA) & Write Through
Part 3 – Write Ordering
Part 4 – Torn Writes
Part 5 – Data Corruption

Nutanix Specific Articles

Part 6 – Emulation of the SCSI Protocol (Coming soon)
Part 7 – Forced Unit Access (FUA) & Write Through (Coming soon)
Part 8 – Write Ordering (Coming soon)
Part 9 – Torn I/O Protection (Coming soon)
Part 10 – Data Corruption (Coming soon)

Related Articles

1. What does Exchange running in a VMDK on NFS datastore look like to the Guest OS?
2. Support for Exchange Databases running within VMDKs on NFS datastores (TechNet)
3. Microsoft Exchange Improvements Suggestions Forum – Exchange on NFS/SMB
4. Virtualizing Exchange on vSphere with NFS backed storage

Integrity of I/O for VMs on NFS Datastores – Part 1 – Emulation of the SCSI Protocol

This is the first of a series of posts covering how the Integrity of I/O is ensured for Virtual Machines when writing to VMDK/s (Virtual SCSI Hard Drives) running on NFS datastores presented via VMware’s ESXi hypervisor as a “Datastore”.

Note: To be crystal clear, this post is not talking about presenting NFS direct to Windows or any other guest operating system.

This process is patented (US7865663) by VMware and its inventors, and in the patent the process is called “SCSI Protocol Emulation”.

This series will first cover the topics in a vendor agnostic manner, meaning I am talking in general about VMware + any NFS storage on the VMware HCL with NFS support.

Following the vendor agnostic posts, I will follow with a series of posts focusing specifically on Nutanix, as the motivation for the series was to cover off this topic for existing or potential Nutanix customers, some of whom are less familiar with NFS and have asked for clarification, especially around virtualizing Business Critical Applications (vBCA) such as Microsoft SQL and Exchange.

The below diagram shows how storage can be presented to an ESXi host and what this series will focus on.

A VM accesses its .vmx and .vmdk file/s via a datastore the same way, regardless of the underlying storage protocol (DAS SCSI, iSCSI, NFS, FCP).

[Image: VMware diagram – how storage (DAS SCSI, iSCSI, NFS, FCP) is presented to an ESXi host as datastores]

In the case of NFS datastores, SCSI protocol emulation is used to allow the Guest Operating System (OS) and application/s to read and write via SCSI even when the underlying storage (which is abstracted by the hypervisor) is served via NFS which does not natively support the same commands.

Image Source: https://pubs.vmware.com/vsphere-50/index.jsp?topic=%2Fcom.vmware.vsphere.introduction.doc_50%2FGUID-2E7DB290-2A07-4F54-9199-B68FCB210BBA.html

In the following section, and throughout this series, many images shown are from the patent (US7865663) and are the property of the patent owners, not the author of this article.

The areas which I will be focusing on are the ones where there has been the most concern in the industry, especially for business critical applications such as Microsoft SQL and Microsoft Exchange: namely, how the VM’s operating system, application/s and data integrity are impacted when issuing commands while the storage is abstracted by the hypervisor and served via NFS, which does not have I/O commands equivalent to SCSI.

Some example areas of concern around the industry for VMs running on datastores backed by NFS are:

1. SCSI Aborts / Resets
2. Forced Unit Access (FUA) & Write Through
3. Write Ordering
4. Torn I/O (Writes + Reads)

In this first part, we will look at the SCSI Protocol Emulation process and discuss SCSI Aborts and Resets and how the SCSI protocol emulation process deals with these.

Below is a diagram showing the flow of an I/O request for a VM writing SCSI commands to a VMDK (formatted as NTFS) through the SCSI emulation process and through to the NFS storage.

[Image: Patent US7865663 diagram – flow of a guest SCSI command through the SCSI emulation process to NFS storage]

The first few steps are in my opinion fairly self explanatory. Where it gets interesting for me, and one of the points of contention among IT professionals (being SCSI aborts), is described in the box labelled “550”.

If the SCSI command is an abort (which has no equivalent in the NFS protocol), the SCSI emulation process removes the corresponding request from the virtual SCSI request list created in the previous step (box labelled “540“).

The same is true if the SCSI command is a reset (which also has no equivalent in the NFS protocol): the SCSI emulation process removes the corresponding request from the virtual SCSI request list. This process is shown below in the box labelled “560”.

[Image: Patent US7865663 diagram – handling of SCSI abort and reset commands (boxes “550” and “560”)]

Next, let’s look at what happens if the SCSI “abort” or “reset” command is issued after the SCSI emulation process has already passed the command on to the storage, and a reply is now being received for a command which the Guest OS / Application has aborted.

It’s quite simple: the SCSI emulation process receives the reply from the NFS server and looks up the corresponding tag in the virtual SCSI request list. Because the corresponding tag no longer exists, the emulator drops the reply, thereby emulating a SCSI abort command.

The process is shown below, from the box labelled “710” to “720”, finishing at “730”.

[Image: Patent US7865663 diagram – handling of NFS replies for aborted commands (boxes “710” to “730”)]

In the patent, the above process is summed up nicely in the following paragraph.

Accordingly, a faithful emulation of SCSI aborts and resets, where the guest OS has total control over which commands are aborted and retried can be achieved by keeping a virtual SCSI request list of outstanding requests that have been sent to the NFS server. When the response to a request comes back, an attempt is made to find a matching request in the virtual SCSI request list. If successful, the matching request is removed from the list and the result of the response is returned to the virtual machine. If a matching request is not found in the virtual SCSI request list, the results are thrown away, dropped, ignored or the like.
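
To illustrate the logic described above, here is a minimal Python sketch of a virtual SCSI request list. The class and method names are my own and purely illustrative (this is not VMware’s implementation); it simply shows how tracking outstanding commands by tag lets aborts be emulated by dropping late replies.

```python
class VirtualScsiRequestList:
    """Illustrative sketch of the virtual SCSI request list described in the patent:
    outstanding guest SCSI commands are tracked by tag so that aborts and resets,
    which have no NFS equivalent, can be emulated by the hypervisor."""

    def __init__(self):
        self._outstanding = {}  # tag -> original SCSI request

    def issue(self, tag, request):
        # Record the request before it is forwarded to the NFS server.
        self._outstanding[tag] = request

    def abort(self, tag):
        # A SCSI abort or reset simply removes the entry from the list.
        self._outstanding.pop(tag, None)

    def on_nfs_reply(self, tag, response):
        # When the NFS reply arrives, look up the tag: if it is still outstanding
        # the result is returned to the VM, otherwise the reply is dropped,
        # emulating the abort.
        if self._outstanding.pop(tag, None) is None:
            return None  # reply dropped - the guest aborted this command
        return response  # result returned to the virtual machine


# Usage: a reply that arrives after the guest has aborted the command is dropped.
requests = VirtualScsiRequestList()
requests.issue(tag=42, request="WRITE LBA 100")
requests.abort(tag=42)
print(requests.on_nfs_reply(tag=42, response="OK"))  # -> None
```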

So there we have it: that is how VMware’s patented SCSI Protocol Emulation allows SCSI commands not natively supported by NFS to be honoured, therefore allowing applications dependent on block based storage to be run successfully within a VM whose VMDK is backed by NFS storage.

Let’s recap what we have learned so far.

1. The SCSI commands abort and reset have no equivalent in the NFS protocol.
2. The VMware SCSI Emulation process handles SCSI commands not supported natively by NFS thanks to the Virtual SCSI Request List.
3. Guest Operating Systems and Applications running in Virtual Machines on ESXi issue native SCSI commands to the NTFS volume, which is presented to the VM via a VMDK and housed on an NFS datastore.
4. The underlying NFS protocol is not exposed to the Guest OS, Application/s or Virtual Machine.
5. The SCSI commands abort and reset are emulated by the hypervisor by removing these requests from the Virtual SCSI Request List.

In part two, I will discuss Forced Unit Access (FUA) & Write Through.

Integrity of Write I/O for VMs on NFS Datastores Series

Part 1 – Emulation of the SCSI Protocol
Part 2 – Forced Unit Access (FUA) & Write Through
Part 3 – Write Ordering
Part 4 – Torn Writes
Part 5 – Data Corruption

Nutanix Specific Articles

Part 6 – Emulation of the SCSI Protocol (Coming soon)
Part 7 – Forced Unit Access (FUA) & Write Through (Coming soon)
Part 8 – Write Ordering (Coming soon)
Part 9 – Torn I/O Protection (Coming soon)
Part 10 – Data Corruption (Coming soon)

Related Articles

1. What does Exchange running in a VMDK on NFS datastore look like to the Guest OS?
2. Support for Exchange Databases running within VMDKs on NFS datastores (TechNet)
3. Microsoft Exchange Improvements Suggestions Forum – Exchange on NFS/SMB
4. Virtualizing Exchange on vSphere with NFS backed storage