The future of NAS Storage (NFS) for Virtual Environments

I read the article (linked below) by Howard Marks after seeing it come up on Twitter today, and found it an interesting and refreshing read, as it hits the nail on the head.

http://www.networkcomputing.com/storage-networking-management/vmware-has-to-step-up-on-nfs/240163350

For a long time, Network Attached Storage (NAS) has been considered by many (including myself in the past) a second-class citizen, or Tier 3 storage, and not a serious choice for mission-critical virtual environments.

In recent years, I have used NFS more and more in vSphere environments. As I went through my VCDX journey, and after trying to learn as much as possible about every storage alternative available to vSphere, I formed the strong view that NFS was in fact the best storage protocol for vSphere/vCloud/View environments.

In fact my VCDX design was based on a vCloud solution running on NFS, and this was one area I found quite easy to present and defend during the VCDX defence due to the many advantages of NFS.

In the article, Howard wrote:

“It’s time for VMware to upgrade its support for file storage (as opposed to block storage) and embrace the pioneering vendors who are building storage systems specifically for the virtualization environment.”

I totally agree with this statement, and I think it is in the best interests of VMware, its partners and its customers for VMware to go down this path. I think most would agree that NetApp has been leading the charge with NFS-based storage for a long time, and in my opinion rightly so, with some new storage vendors also choosing to build solutions around NFS.

Another comment Howard made was:

“managing vSphere with NFS storage is somewhat simpler than managing an equivalent system on block storage. Even better, a good NFS storage system, because it knows which blocks belong to which virtual machine, can perform storage management tasks such as snapshots, replication and storage quality of service per virtual machine, rather than per volume.”

I totally agree with the above statement, and VMware's development of features such as View Composer for Array Integration (VCAI), which is only supported on NFS, shows the protocol has significant advantages over block-based storage, especially for deployment speed and reduced workload on the storage array. (VCAI uses the Fast File Clone VAAI-NAS primitive to create near-instant, space-efficient Linked Clone desktops.)
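To illustrate why a file-level fast clone is near instant and space efficient, here is a minimal conceptual sketch in Python. This is not the VCAI or array implementation, just a copy-on-write model: the clone shares the parent's blocks and allocates a block of its own only when one is written.

```python
# Conceptual model of a copy-on-write "fast file clone" (not the real
# VCAI/array code): a clone references the parent's base blocks and
# only allocates its own block when that block is written to.

class FastCloneFile:
    def __init__(self, blocks):
        self.blocks = blocks          # shared base blocks (index -> data)
        self.own = {}                 # blocks this clone has rewritten

    def clone(self):
        # Cloning copies no data, just a reference to the base blocks,
        # which is why it completes near instantly.
        return FastCloneFile(self.blocks)

    def read(self, i):
        return self.own.get(i, self.blocks[i])

    def write(self, i, data):
        self.own[i] = data            # copy-on-write: only deltas use space

# A full clone would copy every block up front; a fast clone stores
# only the blocks each desktop changes.
parent = FastCloneFile({i: b"base" for i in range(1000)})
desktop = parent.clone()              # near instant, no data copied
desktop.write(7, b"user profile")     # only this block consumes new space
print(desktop.read(7), desktop.read(8), len(desktop.own))
```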

I wrote an example architectural decision regarding storage protocol choice for Horizon View (VDI) environments, which covers the advantages of NFS for VDI in more depth. The article can be viewed here: Example Architectural Decision – Storage Protocol Choice for Horizon View

NFS also does not suffer from the same challenges as block-based storage: a much larger number of VMs can share an NFS datastore than a VMFS datastore without being negatively impacted by latency from SCSI reservations (although these are vastly improved with VAAI) or by contention resulting from limited SCSI queue depths, which is something VAAI still does not address.

These limitations of block storage mean the number of VMs per datastore remains at the old rule of thumb of <25 for non-I/O-intensive workloads, even with VAAI, which some felt was the magic solution to the issue; sadly, that was incorrect. (Note: for desktop VMs, the recommended maximum per VMFS datastore is 140 with VAAI, compared to 64 without VAAI, while NFS supports >200.)
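The queue depth contention is easy to see with some back-of-envelope arithmetic. The sketch below assumes a LUN queue depth of 32 (a common HBA default); the VM counts are illustrative, not recommendations.

```python
# Back-of-envelope illustration of SCSI queue depth contention on a
# VMFS datastore. A LUN queue depth of 32 is a common HBA default;
# the VM counts below are illustrative only.

LUN_QUEUE_DEPTH = 32   # outstanding I/Os the LUN will accept at once

for vms_on_datastore in (10, 25, 64, 140):
    slots_per_vm = LUN_QUEUE_DEPTH / vms_on_datastore
    print(f"{vms_on_datastore:>3} VMs sharing one LUN -> "
          f"~{slots_per_vm:.2f} outstanding I/Os per VM before queuing")

# An NFS datastore has no per-LUN SCSI queue on the host; concurrency
# is bounded by the network and the array, which is one reason far more
# VMs can share a single NFS datastore.
```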

Howard went on to write:

“The first step would be for VMware to acknowledge that NFS has advanced in the past decade.”

I think this has been acknowledged by VMware, along with many experts in the industry, which is a positive step forward, and I believe VMware will give more attention to NFS in future versions.

Howard further commented:

“Today vSphere supports version 3.0 of NFS—which is seventeen years old. NFS 4.1 has much more sophisticated security, locking and network improvements than NFS 3.0. The optional pNFS extension can bring the performance and multipathing of SANs with centralized file system management.”

I think VMware adding support for NFS 4.1 in the future will really help cement NFS as the protocol of choice for virtual environments, and it will be complementary to VMware's upcoming VSAN offering.
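For readers unfamiliar with pNFS, here is a purely conceptual Python sketch of the idea Howard describes, with hypothetical server names; it is not a protocol implementation. A metadata server hands out a layout, and the client then fetches stripes directly from multiple data servers in parallel, giving SAN-like multipathing with centralised file system management.

```python
# Conceptual sketch of the pNFS idea (not a protocol implementation):
# a metadata server answers "which server holds which stripe", and the
# client reads file data directly from the data servers in parallel.

from concurrent.futures import ThreadPoolExecutor

DATA_SERVERS = {                      # hypothetical data servers
    "ds1": {0: b"AAAA", 2: b"CCCC"},
    "ds2": {1: b"BBBB", 3: b"DDDD"},
}

def get_layout(path):
    # The metadata server is not in the data path; it only hands the
    # client a layout mapping stripes to data servers.
    return [("ds1", 0), ("ds2", 1), ("ds1", 2), ("ds2", 3)]

def read_stripe(server, stripe):
    return DATA_SERVERS[server][stripe]

layout = get_layout("/vol/vm1.vmdk")
with ThreadPoolExecutor() as pool:    # stripes fetched in parallel
    data = b"".join(pool.map(lambda s: read_stripe(*s), layout))
print(data)                           # b'AAAABBBBCCCCDDDD'
```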

I think that with bolstered NFS support and VSAN, VMware will have a solid storage layer to take virtualization into the future, without requiring storage vendors to immediately support vVols, which in my opinion are being built (at least in part) to solve the challenges of VMFS and block-based storage. NFS (even version 3.0) addresses most requirements in virtual environments very well today, and NFS 4.1 support will only improve the situation.

Howard’s comment (below) appears to echo these thoughts.

“Better NFS support will empower storage vendors to innovate and strengthen the vSphere ecosystem and fill the gap until vVols are ready. NFS support will also provide an alternative once vVols hit the market.”

To finish, I thought Howard's comment (below) on snapshots and replication being per virtual machine, rather than per volume or LUN, was worth repeating. Several vendors are doing this today, and moving towards NFS 4.1 will help these vendors continue to innovate and provide better, more efficient storage solutions for VMware's customers, which I think is what everyone wants.

“Even better, a good NFS storage system, because it knows which blocks belong to which virtual machine, can perform storage management tasks such as snapshots, replication and storage quality of service per virtual machine, rather than per volume.”
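As a purely conceptual sketch (hypothetical paths, not any vendor's implementation), the Python below shows why file-level storage makes per-VM operations natural: on a filer each VM is a distinct set of files that can be snapshotted individually, while a block array sees only anonymous blocks for the whole LUN.

```python
# Conceptual sketch of per-VM vs per-volume snapshot granularity.
# On NFS, each VM is a set of files the array can act on individually;
# on a LUN, the array can only capture the whole volume.

volume = {
    "vm1/vm1.vmx": b"config", "vm1/vm1.vmdk": b"disk",
    "vm2/vm2.vmx": b"config", "vm2/vm2.vmdk": b"disk",
}

def snapshot_vm(volume, vm):
    # Per-VM snapshot: copy only the files belonging to this VM.
    return {path: data for path, data in volume.items()
            if path.startswith(vm + "/")}

def snapshot_lun(volume):
    # Per-volume snapshot: block storage captures everything or nothing.
    return dict(volume)

print(sorted(snapshot_vm(volume, "vm1")))   # just vm1's files
print(len(snapshot_lun(volume)))            # the whole volume
```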

9 thoughts on “The future of NAS Storage (NFS) for Virtual Environments”

  1. I think NFS 4.1 would be tied to a major platform release and therefore might be a way off. I don’t have a problem with more than 25 VMs per datastore with a working VAAI-integrated array, provided the performance can be supported. Interestingly, the storage enhancements in View 5.3 are around NFS. But I don’t think we can look at storage protocol in isolation from the rest of the customer requirements and existing investments. Those will also drive protocol decisions. But it’s clear that protocol isn’t really a performance determinant. The different protocols can perform pretty much equally with the right hardware backing them.

    • Totally agree. Performance is no longer tied to a specific storage protocol, as FC, FCoE, iSCSI and NFS performance is very similar in apples-to-apples comparisons. Existing investments, and of course customer requirements/constraints, drive architectural decisions, but for greenfield environments I think it’s difficult to go past NFS for the vast majority of workloads.

      As for NFS 4.1 support, I’m sure it will be tied to a major release as you mention, so we’ll be using NFS 3.0 for a while yet.

        • I don’t have any experience with AoE, nor have I heard of any deployments using it, at least in the virtualization space. I think that’s saying something about where the tech will lead. 🙂

          Although I could be wrong.

  2. Nice article! We’ve been using NFS-based VMware platforms for years without any (performance) problems. Hopefully VMware will start to acknowledge the fact that NFS is great to work with.

  3. I agree NFS looks very interesting to use as vSphere storage. However, the devil is in the detail. I published an NFS vs FC comparison here ( http://blog.igics.com/2013/09/nfs-or-fibre-channel-storage-for-vmware.html ) and reported on NFS performance troubleshooting here ( http://blog.igics.com/2013/09/high-latency-on-vsphere-datastore.html ).

    It is not only about the NFS storage (the NFS server implementation) but also about the NFS client implementation in ESXi. I don’t understand why NFS v4 is not implemented in ESXi 5.x. I hope it will be available at least in ESXi 6.

  4. I would never go for anything other than NFS in a vSphere environment. But NFS v4.1 is certainly something I’m missing. I guess Hyper-V 2012 R2 will make VMware give NFS v4.1 a higher priority.
    My wish is that VMFS will come to an end; I have more confidence in ZFS or WAFL as a file system than VMFS.

    • Microsoft pushing SMB 3 in Hyper-V may just be the push for ESXi to add NFS 4.1, as you said. For the SMB market, a Fibre Channel network is an added layer when they are already facing 10Gb Ethernet costs as well. Settling on a single, fast core network that their hypervisor can reliably run on is a CIO- and CFO-friendly strategy.

      Even enterprise clients are not assuming that FC will be their plan going forward. FCoE is the bridge most are using to get out of separate FC networks. The largest clients with big investments in 8Gb FC seem content to stay with FC, but others are mostly employing options in Nexus.

      Try as they may to push vVols, VMware can’t tell array vendors to simply abolish the limits of SCSI queues for block access to shared storage arrays. Even their parent, EMC, will have a hell of a time updating such a fundamental aspect of storage access across all their product lines: VMAX, VNX, Isilon, etc.