The future of NAS Storage (NFS) for Virtual Environments

I read the article (below) by Howard Marks after seeing it come up on Twitter today, and I found it a very interesting and refreshing read, as it hits the nail on the head.

http://www.networkcomputing.com/storage-networking-management/vmware-has-to-step-up-on-nfs/240163350

For a long time, Network Attached Storage (NAS) has been considered by many (including myself in the past) a second-class citizen, or Tier 3 storage, and not a serious choice for mission-critical virtual environments.

In recent years I have used NFS more and more in vSphere environments. As I went through my VCDX journey, having tried to learn as much as possible about every storage alternative available to vSphere, I formed the strong view that NFS was in fact the best storage protocol for vSphere/vCloud/View environments.

In fact my VCDX design was based on a vCloud solution running on NFS, and this was one area I found quite easy to present and defend during the VCDX defence due to the many advantages of NFS.

In the article, Howard wrote:

“It’s time for VMware to upgrade its support for file storage (as opposed to block storage) and embrace the pioneering vendors who are building storage systems specifically for the virtualization environment.”

I totally agree with this statement, and I think it is in the best interests of VMware, its partners and customers for VMware to go down this path. I think most would agree that NetApp has been leading the charge with NFS-based storage for a long time, and in my opinion rightly so, with some new storage vendors also choosing to build solutions around NFS.

Another comment Howard made was:

“managing vSphere with NFS storage is somewhat simpler than managing an equivalent system on block storage. Even better, a good NFS storage system, because it knows which blocks belong to which virtual machine, can perform storage management tasks such as snapshots, replication and storage quality of service per virtual machine, rather than per volume.”

I totally agree with the above statement, and VMware’s development of features such as View Composer for Array Integration (VCAI), which is only supported on NFS, shows the protocol has significant advantages over block-based storage, especially for deployment speed and reduced workload on the storage. (VCAI uses the Fast File Clone VAAI-NAS primitive to create near-instant, space-efficient Linked Clone desktops.)

I wrote an example architectural decision regarding storage protocol choice for Horizon View (VDI) environments, which covers the advantages of NFS for VDI environments in more depth. The article can be viewed here: Example Architectural Decision – Storage Protocol Choice for Horizon View

NFS also does not suffer from the same challenges as block-based storage: a much larger number of VMs can share an NFS datastore than a VMFS datastore without being negatively impacted by latency caused by SCSI reservations (although this is vastly improved with VAAI) or by contention resulting from limited SCSI queue depths, which is something VAAI still does not address.

These limitations of block storage mean the number of VMs per datastore remains at the old rule of thumb of <25 for non-I/O-intensive workloads, even with VAAI, which some felt would be the magic solution to the issue but sadly was not. (Note: the recommended maximum number of desktop VMs per VMFS datastore is 140 with VAAI, compared to 64 without VAAI, while NFS supports >200.)

Howard went on to write:

“The first step would be for VMware to acknowledge that NFS has advanced in the past decade.”

I think this has been acknowledged by VMware, along with many experts in the industry, which is a positive step forward, and I believe VMware will give more attention to NFS in future versions.

Howard further commented:

“Today vSphere supports version 3.0 of NFS—which is seventeen years old. NFS 4.1 has much more sophisticated security, locking and network improvements than NFS 3.0. The optional pNFS extension can bring the performance and multipathing of SANs with centralized file system management.”

I think VMware adding support for NFS 4.1 in the future will really help cement NFS as the protocol of choice for virtual environments and will be complementary to VMware’s upcoming VSAN offering.

I think that with bolstered NFS support and VSAN, VMware will have a solid storage layer to take virtualization into the future, without requiring storage vendors to immediately support vVols, which in my opinion are being built (at least in part) to solve the challenges of VMFS and block-based storage. NFS (even version 3) addresses most requirements in virtual environments very well today, and NFS 4.1 support will only improve the situation.

Howard’s comment (below) appears to echo these thoughts.

“Better NFS support will empower storage vendors to innovate and strengthen the vSphere ecosystem and fill the gap until vVols are ready. NFS support will also provide an alternative once vVols hit the market.”

 

To finish, I thought Howard’s comment (below) on snapshots and replication being per virtual machine, rather than per volume or LUN, was worth repeating. Several vendors are doing this today, and moving towards NFS 4.1 will help these vendors continue to innovate and provide better, more efficient storage solutions for VMware’s customers, which I think is what everyone wants.

“Even better, a good NFS storage system, because it knows which blocks belong to which virtual machine, can perform storage management tasks such as snapshots, replication and storage quality of service per virtual machine, rather than per volume.”

Competition Example Architectural Decision Entry 2 – Use of RDMs in Standard IaaS Clusters

Name: Chris Jones
Title: Virtualization Architect
Twitter: @cpjones44
Profile: VCP5 / VCAP5-DCD

Problem Statement

VMs require more than 1.9TB in a single disk. The existing virtual environment has LUNs provisioned that are 2TB in size. As these VMs have virtual data disks (VMDKs) that are >1.9TB in size, alarms are being triggered by the infrastructure monitoring solution, raising incident tickets in the Virtual Infrastructure support queue.

Assumptions

1. Data within the OSI must reside within the VM and not on some kind of IP based store (like a NAS share).

2. vSphere datastores are presented through FC and not IP based stores (ie. NFS).

3. vSphere Hypervisor is ESXi 4.1.

4. There is no requirement for the VMs to be performing SAN specific functionality or running SCSI target-based software.

Constraints

1. The implemented monitoring solution cannot be customised with triggers and monitoring policies for individual objects within the environment (ie. having one monitoring policy per individual or sub-group of datastores).

2. Maximum vSphere datastore size in version 4.1 is 2TB minus 512 bytes.

3. Unable to upgrade beyond ESXi 4.1 Update 3.

Motivation

1. Reduce the number of incident tickets being raised, thus improving SLA posture.

2. Reduce the requirement to span single Windows logical volumes across multiple VMDKs.

Architectural Decision

Turn the disk into an RDM (Virtual Compatibility Mode) to remove the level of monitoring from the vSphere layer.

Alternatives

1. Create smaller VMDKs (ie. 1-1.5TB disks) and create a RAID0 volume within the guest OS.

2. Change the level of alerting so that tickets are not raised for alerts that trigger beyond 90%.

3. Turn the disk into an RDM to remove the level of monitoring from the vSphere layer.

4. Thin Provision the virtual disks

5. Store the data within the guest on some kind of IP based storage (NAS/iSCSI target).

Justification

1. Option 5 goes against the assumption that data must be local to the VM, so it was ruled out.

2. Whilst thin provisioning (Option 4) is an attractive solution, this option is ruled out based on a wider infrastructure decision to thick provision all disks in the environment, to reduce the risk of datastores filling up and critical business VMs stopping.

3. Option 1, using smaller VMDKs spread across multiple vSphere datastores, would make these alerts disappear; however, it creates issues when trying to execute a DR recovery of either the individual disks (Active/Passive) or the whole VM (Active/Cold). If even one VMDK is not replicated, or the VMDKs are mounted in the wrong order, the whole Windows volume will be corrupted. Spanning one Windows volume across multiple VMDKs also complicates the recovery of array-based snapshot backups (eg. via SMVI or NetBackup).

4. Option 2 goes against the constraint that the infrastructure monitoring solution is not able to create individual alerting policies for a single datastore or sub-group of datastores in the inventory. Should individualised policies be created, we would need to ensure that the affected VMDKs consuming 90-95% of a datastore remain on that datastore, as moving from one to another (ie. from Tier 2 to Tier 1) would require a change to the monitoring that has been configured. At this stage, the monitoring solution has no way to track these customised policies, which is most of the reason why global, environment-wide policies exist.

5. Option 3 and the use of RDMs in Virtual Compatibility Mode will allow the VM to benefit from the features of VMFS, such as advanced file locking for data protection and vSphere snapshotting. The use of RDMs will also allow for VMs to be managed by DRS (ie. can be vMotion’ed) and protected by vSphere HA.
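To make Option 3 more concrete, below is a minimal, purely illustrative sketch (not part of the original submission) of attaching a LUN to an existing VM as a Virtual Compatibility Mode RDM using the pyVmomi SDK. The vCenter address, credentials, VM name, NAA identifier and capacity figure are all placeholder assumptions and would need to be replaced with values from the real environment.

```python
# Illustrative sketch only: attach a LUN as a virtual compatibility mode RDM.
# Assumptions: pyVmomi is installed, vCenter is reachable, and the placeholder
# host/credentials/VM name/NAA identifier below are replaced with real values.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.local", user="administrator",
                  pwd="password", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# Locate the target VM by name (hypothetical name)
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "bigdata-vm01")
view.DestroyView()

# Pick an existing SCSI controller and a free unit number (7 is reserved)
controller = next(d for d in vm.config.hardware.device
                  if isinstance(d, vim.vm.device.VirtualSCSIController))
used = {d.unitNumber for d in vm.config.hardware.device
        if getattr(d, "controllerKey", None) == controller.key}
unit = next(u for u in range(16) if u != 7 and u not in used)

# Virtual compatibility mode RDM backed by the physical LUN (hypothetical NAA)
backing = vim.vm.device.VirtualDisk.RawDiskMappingVer1BackingInfo()
backing.deviceName = "/vmfs/devices/disks/naa.60000000000000000000000000000001"
backing.compatibilityMode = "virtualMode"
backing.diskMode = "persistent"

disk = vim.vm.device.VirtualDisk()
disk.backing = backing
disk.controllerKey = controller.key
disk.unitNumber = unit
disk.capacityInKB = 2 * 1024 ** 3  # placeholder ~2TB; should match the LUN size

dev_spec = vim.vm.device.VirtualDeviceSpec()
dev_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
dev_spec.fileOperation = vim.vm.device.VirtualDeviceSpec.FileOperation.create
dev_spec.device = disk

# Reconfigure the VM; the RDM mapping file is created in the VM's directory
vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[dev_spec]))
Disconnect(si)
```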

Implications

  1. The RDM mapping will need to be recorded clearly to avoid the lengthy process of discovering from scratch what physical LUN is presented to the virtual machine.

An example of how to record these mappings (see the scripted sketch after this list) would be to:

A)    Record the name of the VM that has the RDM.

B)    Record the NAA number of the physical LUN(s) that are presented to the VM.

C)    Record the virtual device node on the virtual disk controller as to where the RDM is mounted.

D)    Record the Windows drive letter that this RDM is mounted to.

2. Additional paths will be consumed, reducing the total number of vSphere datastores that can be presented to the cluster.
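Implication 1 lends itself to scripting. As a purely illustrative sketch (again with a placeholder vCenter address and credentials, and not part of the original submission), the following pyVmomi snippet walks the inventory and prints the VM name, the backing LUN device name, and the SCSI controller/unit position for every virtual-mode RDM it finds, covering items A to C of the record above; the Windows drive letter (item D) still has to be captured from inside the guest.

```python
# Illustrative sketch only: report RDM mappings across the inventory.
# Assumptions: pyVmomi is installed and the placeholder vCenter address and
# credentials are replaced with real values.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.local", user="administrator",
                  pwd="password", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
for vm in view.view:
    devices = vm.config.hardware.device if vm.config else []
    controllers = {d.key: d for d in devices
                   if isinstance(d, vim.vm.device.VirtualSCSIController)}
    for dev in devices:
        backing = getattr(dev, "backing", None)
        if (isinstance(dev, vim.vm.device.VirtualDisk) and
                isinstance(backing,
                           vim.vm.device.VirtualDisk.RawDiskMappingVer1BackingInfo)):
            ctrl = controllers.get(dev.controllerKey)
            bus = ctrl.busNumber if ctrl else "?"
            # A) VM name, B) physical LUN identifier, C) virtual device node.
            # D) the in-guest drive letter must still be recorded manually.
            print("%s  LUN=%s  node=SCSI(%s:%s)  label=%s" % (
                vm.name, backing.deviceName, bus, dev.unitNumber,
                dev.deviceInfo.label))
view.DestroyView()
Disconnect(si)
```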

Back to Competition Main Page or Competition Submissions