What’s .NEXT 2016 – Acropolis Block Services (ABS)

Acropolis Block Services, or ABS (not to be confused with Anti-lock Braking Systems), is an extension of the In-Guest iSCSI support Nutanix announced at .NEXT 2015.

The original goal of In-Guest iSCSI was to enable support for applications like MS Exchange, which is not supported on NFS, and for use cases such as quorum drives for SQL clustering, and this has been very successful. However, customers have been telling us for a number of years that they want to make Nutanix the standard platform for their datacenters, yet they have not been able to realise this vision for a number of reasons, including:

  • The desire/requirement to re-use existing servers
  • Applications which are not virtual (for many reasons, mostly political)
  • Performance / Scalability of externally connected servers
  • Complexity including operational considerations of external iSCSI

Let’s discuss each of these topics and how ABS solves these challenges.

Re-using existing servers

As it's uncommon for customers to be at exactly the right point in both their server and storage refresh cycles to replace all infrastructure at once, ABS allows customers either to get started with Nutanix by deploying a few nodes/blocks, or to scale their existing environment/s, while using the Acropolis Distributed Storage Fabric (ADSF) to provide storage to both HCI and non-HCI workloads.

A couple of key advantages of ABS compared to the existing In-Guest iSCSI support and traditional SAN/NAS are:

  • ABS load balances and optimizes paths so MPIO and ALUA are not needed
  • New storage is automatically added without requiring client-side changes

The downside to using ABS as a stop-gap until the compute hardware refresh cycle comes around is that it does add complexity, which I discuss in this article from July 2015.

Scaling Hyper-converged solutions – Compute only

However, if the goal is to maximise the return on investment (ROI) of existing infrastructure, ABS is in my opinion a better option than standing up another silo of storage to install, configure and manage, because ABS:

  • Load balances and optimizes paths, so MPIO and ALUA are not needed
  • Automatically adds new storage without requiring client-side changes
  • Removes the requirement for another silo
  • Increases the performance/capacity/resiliency of an existing cluster
  • Allows customers to standardize their infrastructure
  • Gives customers the flexibility to quickly add/remove nodes from a cluster/s to meet requirements

Scalability:

ABS ensures linear and automated scalability by creating virtual targets, so performance is not limited by the iSCSI restriction of one session per initiator and target. This means a single LUN (or Volume Group in Nutanix speak) can be serviced by multiple virtual targets which are spread across all Nutanix CVMs. As a result, multiple network threads are used, which mitigates the risk of any single network thread becoming a bottleneck.

By default 32 virtual targets are used to ensure optimal performance for even the largest and most I/O intensive workloads.

This process is also transparent to the administrator and application to avoid any complexity in implementation and ongoing support.
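
To make the idea of virtual targets more concrete, below is a toy Python sketch of how a Volume Group's vDisks could be spread across 32 virtual targets, which are in turn distributed across CVMs. The names (assign_vdisk, NUM_VIRTUAL_TARGETS) and the hash-based placement are purely illustrative assumptions on my part, not the actual ABS implementation.

    # Toy illustration only - NOT the Nutanix ABS implementation.
    # It demonstrates the concept of sidestepping the one-session-per-
    # initiator/target iSCSI limit by exposing many virtual targets
    # which are spread across the CVMs in the cluster.
    import zlib

    NUM_VIRTUAL_TARGETS = 32  # ABS default mentioned above

    def assign_vdisk(vdisk_id, cvms):
        """Map a vDisk to a virtual target, and that target to a CVM."""
        target_index = zlib.crc32(vdisk_id.encode()) % NUM_VIRTUAL_TARGETS
        owning_cvm = cvms[target_index % len(cvms)]
        return target_index, owning_cvm

    cvms = ["CVM-A", "CVM-B", "CVM-C", "CVM-D"]
    for vdisk in ["vg1-vdisk0", "vg1-vdisk1", "vg1-vdisk2", "vg1-vdisk3"]:
        target, cvm = assign_vdisk(vdisk, cvms)
        print(f"{vdisk} -> virtual target {target:02d} on {cvm}")

The key point is simply that different vDisks end up behind different virtual targets on different CVMs, so no single controller or network thread services all the I/O for a Volume Group.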

The following diagram shows how the data services IP sits in front of the virtual targets (which are on each CVM) and how the vDisks are spread across all controllers for maximum performance.

[Diagram: ABS virtual targets across CVMs]

At .NEXT 2015 Nutanix announced support for scaling storage separately from compute using "Storage Only" nodes, and this capability is fully compatible with ABS. This ensures capacity and performance can be scaled independently of compute for maximum flexibility.

[Diagram: ABS – no iSCSI MPIO required]

Resiliency:

If a vDisk's active CVM goes offline due to failure or planned maintenance, any active sessions against that CVM are disconnected, which triggers a re-logon from the iSCSI client. The re-logon occurs through the external data services IP, which redirects the session to a healthy CVM.

This means things like One-Click rolling AOS upgrades can still be performed as they are with native Nutanix environments.

[Diagram: ABS CVM failure handling]
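
As a simplified model of that redirect behaviour, the Python sketch below shows a stand-in for the external data services IP handing a re-logon to a healthy CVM. The class and method names are hypothetical; a real iSCSI client simply performs a standard re-login against the data services IP and the redirection happens on the Nutanix side.

    # Simplified model of ABS session redirection - illustrative only.
    class DataServicesPortal:
        """Stand-in for the external data services IP."""
        def __init__(self, cvms):
            self.cvm_health = {cvm: True for cvm in cvms}

        def mark_failed(self, cvm):
            # CVM offline due to failure or planned maintenance
            self.cvm_health[cvm] = False

        def redirect_login(self, preferred_cvm):
            """Return a healthy CVM to service the iSCSI re-logon."""
            if self.cvm_health.get(preferred_cvm):
                return preferred_cvm
            healthy = [c for c, ok in self.cvm_health.items() if ok]
            return healthy[0]  # any healthy CVM can take over

    portal = DataServicesPortal(["CVM-A", "CVM-B", "CVM-C"])
    portal.mark_failed("CVM-A")            # e.g. One-Click AOS upgrade
    print(portal.redirect_login("CVM-A"))  # session lands on a healthy CVM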

Functionality:

ABS supports SCSI-3 persistent reservations for shared storage-based Windows clusters, which are commonly used with Microsoft SQL Server and clustered file servers.

As of Acropolis OS (AOS) 4.7, ABS will be supported with physical servers or virtual machines. Support for connecting ESXi via iSCSI is expected to follow in a future release.

ABS supports several use cases, including:

  • iSCSI for Microsoft Exchange Server
  • Shared storage for Linux-based clusters
  • Windows Server Failover Clustering (WSFC)
  • SCSI-3 persistent reservations for shared storage-based Windows clusters
  • Shared storage for Oracle RAC environments
  • Bare-metal environments

[Diagram: ABS overview]

ABS enables server hardware separate from the Nutanix environment to consume the Acropolis DSF resources, so you can leverage existing server hardware investments against Nutanix storage resources. Workloads not targeted for virtualization can also use the DSF.

Supported Client OS & Qualified Applications

  • RHEL 6+
  • Windows 2008 R2 & Windows 2012 R2
  • Oracle RAC
  • Microsoft SQL Server
  • Microsoft Exchange Server

Summary:

Whether you have applications that require shared storage access or environments with separate storage and compute needs, Acropolis Block Services (ABS) simplifies deployment and highlights the dynamic scale out, extreme performance, and high availability of the Nutanix platform. ABS automatically load balances iSCSI clients to take advantage of all resources in the cluster, and failure events are managed seamlessly. The same upgrade, snapshot, and asynchronous replication workflows that customers leverage today work consistently whether you are using VMs or VGs. By enabling VM, file, and block services, Nutanix offers a single platform to consolidate workloads and ease administration, thus reducing risk and enabling organizations to simplify their infrastructure.

Integrity of I/O for VMs on NFS Datastores – Part 1 – Emulation of the SCSI Protocol

This is the first of a series of posts covering how the integrity of I/O is ensured for virtual machines when writing to VMDK/s (virtual SCSI hard drives) residing on NFS storage presented to VMware's ESXi hypervisor as a datastore.

Note: To be crystal clear, this post is not talking about presenting NFS direct to Windows or any other guest operating system.

This process is patented (US7865663) by VMware and its inventors, and in the patent the process is called "SCSI Protocol Emulation".

This series will first cover the topics in a vendor-agnostic manner, meaning I am talking in general about VMware combined with any storage on the VMware HCL with NFS support.

Following the vendor-agnostic posts, I will publish a series focusing specifically on Nutanix, as the motivation for this series was to cover the topic for existing and potential Nutanix customers, some of whom are less familiar with NFS and have asked for clarification, especially around virtualizing Business Critical Applications (vBCA) such as Microsoft SQL and Exchange.

The diagram below shows how storage can be presented to an ESXi host and what this series will focus on.

A VM accesses its .vmx and .vmdk file/s via a datastore the same way, regardless of the underlying storage protocol (DAS SCSI, iSCSI, NFS, FCP).

[Diagram: storage presentation to an ESXi host (source: VMware documentation, linked below)]

In the case of NFS datastores, SCSI protocol emulation is used to allow the Guest Operating System (OS) and application/s to read and write via SCSI even when the underlying storage (which is abstracted by the hypervisor) is served via NFS which does not natively support the same commands.

Image Source: https://pubs.vmware.com/vsphere-50/index.jsp?topic=%2Fcom.vmware.vsphere.introduction.doc_50%2FGUID-2E7DB290-2A07-4F54-9199-B68FCB210BBA.html

In the following section, and throughout this series, many images shown are from the patent (US7865663) and are the property of the patent owners, not the author of this article.

The areas I will focus on are those where there has been the most concern in the industry, especially for business critical applications such as Microsoft SQL and Microsoft Exchange: namely, how the VM operating system, application/s and data integrity are impacted when commands are issued against storage that is abstracted by the hypervisor and served via NFS, which does not have I/O commands equivalent to SCSI.

Some example areas of concern across the industry for VMs running on datastores backed by NFS are:

1. SCSI Aborts / Resets
2. Forced Unit Access (FUA) & Write Through
3. Write Ordering
4. Torn I/O (Writes + Reads)

In this first part, we will look at the SCSI Protocol Emulation process and discuss SCSI Aborts and Resets and how the SCSI protocol emulation process deals with these.

Below is a diagram showing the flow of an I/O request for a VM writing SCSI commands to a VMDK (formatted as NTFS) through the SCSI emulation process and through to the NFS storage.

[Patent US7865663 drawing: flow of an I/O request through the SCSI emulation process]

The first few steps are, in my opinion, fairly self explanatory. Where it gets interesting for me, and one of the points of contention among I.T. professionals (SCSI aborts), is described in the box labelled "550".

If the SCSI command is an abort (which has no equivalent in the NFS protocol), the SCSI emulation process removes the corresponding request from the virtual SCSI request list created in the previous step (box labelled “540“).

The same is true if the SCSI command is a reset (which also has no equivalent in the NFS protocol): the SCSI emulation process removes the corresponding request from the virtual SCSI request list. This process is shown below in the box labelled "560".

[Patent US7865663 drawing: SCSI abort/reset handling]

Next, let's look at what happens if the SCSI "abort" or "reset" command is issued after the SCSI emulation process has already passed the command on to the storage, and a reply then arrives for a command which the Guest OS / application has aborted.

It's quite simple: the SCSI emulation process receives the reply from the NFS server, looks up the corresponding tag in the virtual SCSI request list, and because that tag no longer exists, the emulator drops the reply, thereby emulating a SCSI abort command.

The process is shown below, from the box labelled "710" through "720", finishing at "730".

[Patent US7865663 drawing: handling of replies to aborted commands (boxes 710 to 730)]

In the patent, the above process is summed up nicely in the following paragraph.

Accordingly, a faithful emulation of SCSI aborts and resets, where the guest OS has total control over which commands are aborted and retried can be achieved by keeping a virtual SCSI request list of outstanding requests that have been sent to the NFS server. When the response to a request comes back, an attempt is made to find a matching request in the virtual SCSI request list. If successful, the matching request is removed from the list and the result of the response is returned to the virtual machine. If a matching request is not found in the virtual SCSI request list, the results are thrown away, dropped, ignored or the like.
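
To pull that description together, here is a minimal Python sketch of the virtual SCSI request list behaviour described in the paragraph above. The class and method names are my own and it is a conceptual model only, not VMware's implementation.

    # Conceptual model of the virtual SCSI request list from US7865663.
    # Names and structure are illustrative only.
    class VirtualScsiRequestList:
        def __init__(self):
            self.outstanding = {}  # tag -> original SCSI request

        def issue(self, tag, scsi_request):
            """Box 540: record the request before it is translated to NFS."""
            self.outstanding[tag] = scsi_request

        def abort_or_reset(self, tag):
            """Boxes 550/560: no NFS equivalent, so simply forget the request."""
            self.outstanding.pop(tag, None)

        def on_nfs_reply(self, tag, result):
            """Boxes 710-730: match the reply against the list."""
            request = self.outstanding.pop(tag, None)
            if request is None:
                return None    # request was aborted: drop the reply
            return result      # still outstanding: return result to the VM

    vlist = VirtualScsiRequestList()
    vlist.issue(tag=1, scsi_request="WRITE(10) LBA=2048")
    vlist.abort_or_reset(tag=1)  # guest OS aborts the command
    assert vlist.on_nfs_reply(tag=1, result="OK") is None  # reply dropped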

So there we have it: that is how VMware's patented SCSI protocol emulation allows SCSI commands not natively supported by NFS to be honoured, therefore allowing applications dependent on block-based storage to run successfully within a VM whose VMDK is backed by NFS storage.

Let’s recap what we have learned so far.

1. The SCSI commands abort and reset have no equivalent in the NFS protocol.
2. The VMware SCSI Emulation process handles SCSI commands not supported natively by NFS thanks to the Virtual SCSI Request List.
3. Guest Operating Systems and Applications running in Virtual Machines on ESXi issue native SCSI commands to the NTFS volume, which is presented to the VM via a VMDK and housed on an NFS datastore.
4. The underlying NFS protocol is not exposed to the Guest OS, Application/s or Virtual Machine.
5. The SCSI abort and reset commands are emulated by the hypervisor by removing these requests from the virtual SCSI request list.

In part two, I will discuss Forced Unit Access (FUA) & Write Through.

Integrity of Write I/O for VMs on NFS Datastores Series

Part 1 – Emulation of the SCSI Protocol
Part 2 – Forced Unit Access (FUA) & Write Through
Part 3 – Write Ordering
Part 4 – Torn Writes
Part 5 – Data Corruption

Nutanix Specific Articles

Part 6 – Emulation of the SCSI Protocol (Coming soon)
Part 7 – Forced Unit Access (FUA) & Write Through (Coming soon)
Part 8 – Write Ordering (Coming soon)
Part 9 – Torn I/O Protection (Coming soon)
Part 10 – Data Corruption (Coming soon)

Related Articles

1. What does Exchange running in a VMDK on NFS datastore look like to the Guest OS?
2. Support for Exchange Databases running within VMDKs on NFS datastores (TechNet)
3. Microsoft Exchange Improvements Suggestions Forum – Exchange on NFS/SMB
4. Virtualizing Exchange on vSphere with NFS backed storage?