How to successfully Virtualize MS Exchange – Part 10 – Presenting Storage direct to the Guest OS

Let’s start by listing three common storage types which can be presented directly to a Windows OS:

1. iSCSI LUNs
2. SMB 3.0 shares
3. NFS mounts

Next, let’s discuss these three options.

iSCSI LUNs are a common way of presenting storage direct to the Guest OS even in vSphere environments and can be useful for environments using storage array level backup solutions (which will be discussed in detail in an upcoming post).

The use of iSCSI LUNs is fully supported by VMware and Microsoft as iSCSI meets the technical requirements for Exchange, being Write Ordering, Forced Unit Access (FUA) and SCSI abort/reset commands. iSCSI LUNs presented to Windows are then formatted with NTFS, a journalling file system which also protects against torn I/O.

In vSphere environments nearing the configuration maximum of 256 datastores per ESXi host (and therefore per HA/DRS cluster), presenting iSCSI LUNs to applications such as Exchange can help ensure scalability even where vSphere limits may have been reached.
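As a quick sanity check against this constraint, below is a minimal sketch using the pyVmomi Python bindings (the vCenter hostname and credentials are placeholders, not part of any official tooling) which counts the datastores mounted on each ESXi host so you can see how close the environment is to the 256 datastore maximum:

```python
# Minimal sketch (assumed vCenter hostname/credentials) to count the datastores
# mounted on each ESXi host and compare against the 256 datastore maximum.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com",            # placeholder vCenter
                  user="administrator@vsphere.local",    # placeholder account
                  pwd="VMware1!",                         # placeholder password
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    # host.datastore lists every datastore mounted on this ESXi host
    print("%s: %d of 256 datastores" % (host.name, len(host.datastore)))
view.Destroy()
Disconnect(si)
```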

Note: I would recommend reviewing the storage design and trying to optimize the VMs-per-LUN ratio etc. first before using iSCSI LUNs presented to VMs.

The problem with iSCSI LUNs is they result in additional complexity compared to using VMDKs on Datastores (discussed in Part 11). The complexity is not insignificant: typically multiple LUNs need to be created per Exchange VM, and things like iSCSI initiators and LUN masking need to be configured. Then, when the iSCSI initiator driver is updated (say via Windows Update), you may find your storage disconnected and need to troubleshoot iSCSI driver issues. You also need to consider the vNetworking implications, as the VM now needs IP connectivity to the storage network.

I wrote this article (Example VMware vNetworking Design w/ 2 x 10GB NICs for IP Storage) a while ago showing an example vNetworking design that supports IP storage with 2 x 10GB NICs.

The above article shows NFS in the dvPortGroup name, but the same configuration is also optimal for iSCSI. Each Exchange VM would then need a 2nd vmNIC connected to the iSCSI portgroup (or dvPortgroup), ideally with a static IP address.
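To illustrate the vSphere side of adding that 2nd vmNIC, here is a hedged pyVmomi sketch; the VM name "EXCH-MBX-01", the dvPortgroup name "dvPG-iSCSI" and the vCenter details are purely illustrative placeholders, and the static IP itself would still be configured inside Windows:

```python
# Hedged sketch: attach a second vmxnet3 vNIC to an Exchange VM and connect it
# to the iSCSI dvPortgroup. VM/portgroup/vCenter names are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

def find_by_name(vimtype, name):
    """Return the first managed object of the given type with a matching name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    obj = next(o for o in view.view if o.name == name)
    view.Destroy()
    return obj

vm = find_by_name(vim.VirtualMachine, "EXCH-MBX-01")                  # placeholder VM
pg = find_by_name(vim.dvs.DistributedVirtualPortgroup, "dvPG-iSCSI")  # placeholder PG

nic = vim.vm.device.VirtualVmxnet3()
nic.key = -1  # temporary negative key; vCenter assigns the real key on reconfigure
nic.backing = vim.vm.device.VirtualEthernetCard.DistributedVirtualPortBackingInfo(
    port=vim.dvs.PortConnection(
        portgroupKey=pg.key,
        switchUuid=pg.config.distributedVirtualSwitch.uuid))
nic.connectable = vim.vm.device.VirtualDevice.ConnectInfo(startConnected=True,
                                                          connected=True)

spec = vim.vm.ConfigSpec(deviceChange=[
    vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.add, device=nic)])
vm.ReconfigVM_Task(spec=spec)  # returns a Task; monitor it as appropriate
Disconnect(si)
```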

IP addressing is another complexity added by presenting storage direct to VMs rather than using VMDKs on datastores.

Many system administrators, architects and engineers might scoff at the suggestion that iSCSI is complex. While I don’t find iSCSI at all difficult to design, install, configure and use, in my opinion it is significantly more complex and has many more points of failure than using a VMDK on a Datastore.

One of the things I have learned and seen benefit countless customers over the years is keeping things as simple as possible while meeting the business requirements. With that in mind, I recommend only considering the use of iSCSI direct to the Guest OS in the following situations:

1. When using a Backup solution which triggers a storage level snapshot which is not VM or VMDK based, i.e.: where snapshots are only supported at the LUN level (older storage technologies).
2. Where ESXi scalability maximums are going to be reached and creating a separate cluster is not viable (technically and/or commercially) following a detailed review and optimization of storage for the vSphere environment.
3. When using legacy storage architecture where performance is constrained at a datastore level. e.g.: Where increasing the number of VMs per Datastore impacts performance due to latency created from queue depth or storage controller contention.

Next let’s discuss SMB 3.0 / CIFS shares.

SMB 3.0 or CIFS shares are commonly used to present storage for Hyper-V and also for file servers. However, presenting SMB 3.0 directly to Windows is not a supported configuration for MS Exchange, because SMB 3.0 presented to the Guest OS directly does not meet the technical requirements for Exchange, such as Write Ordering, Forced Unit Access (FUA) and SCSI abort/reset commands.

However, SMB 3.0 is supported for MS Exchange when presented to Hyper-V, where the Exchange database files reside within a VHD which emulates the SCSI commands over the SMB file protocol. This will be discussed in the upcoming Hyper-V series.

The below is a quote from Exchange 2013 storage configuration options outlining the storage support statement for MS Exchange.

All storage used by Exchange for storage of Exchange data must be block-level storage because Exchange 2013 doesn’t support the use of NAS volumes, other than in the SMB 3.0 scenario outlined in the topic Exchange 2013 virtualization. Also, in a virtualized environment, NAS storage that’s presented to the guest as block-level storage via the hypervisor isn’t supported.

The above statement is pretty confusing in my opinion, but what Microsoft mean by this is that SMB 3.0 is supported when presented to Hyper-V with Exchange running in a VM with its databases housed within one or more VHDs. However, to be clear, presenting SMB 3.0 direct to Windows for Exchange files is not supported.

NFS mounts can be used to present storage to Windows, although this is not that common. It’s important to note that presenting NFS directly to Windows is not a supported configuration for MS Exchange and, as with SMB 3.0, presenting NFS to Windows directly does not meet the technical requirements for Exchange, being Write Ordering, Forced Unit Access (FUA) and SCSI abort/reset commands. In contrast, iSCSI LUNs presented to Windows can be formatted with NTFS, a journalling file system which also protects against torn I/O.

As such I recommend not presenting NFS mounts to Windows for Exchange storage.

Note: Do not confuse presenting NFS to Windows with presenting NFS datastores to ESXi as these are different. NFS datastores will be discussed in Part 11.

Summary:

iSCSI is the only supported storage protocol to present storage direct to Windows for storage of Exchange databases.

Let’s now discuss the Pros and Cons of presenting iSCSI storage direct to the Guest OS.

PROS

1. Ability to reduce the overheads of legacy LUN-based snapshot backup solutions by having MS Exchange use dedicated LUN/s, therefore reducing the delta changes that need to be captured/stored (e.g.: NetApp SnapManager for Exchange).
2. Does not impact ESXi configuration maximums for LUNs per ESXi host as storage is presented to the Guest OS and not the hypervisor
3. Dedicated LUN/s per MS Exchange VM can potentially improve performance depending on the underlying storage capabilities and design.

CONS

1. Complexity e.g.: Having to create, present and manage LUN/s per Exchange MBX/MSR VMs
2. Having to manage and potentially troubleshoot iSCSI drivers within a Guest OS
3. Having to design for IP storage traffic to access VMs directly, which requires additional vNetworking considerations relating to performance and availability.

Recommendations:

1. When choosing to present storage direct to the Guest OS, only iSCSI is supported.
2. Where no requirements or constraints exist that require the use of storage presented to the Guest OS directly, use the VMDKs on Datastores option, which is discussed in Part 11.
3. Use a dedicated vmNIC on the Exchange VM for iSCSI traffic
4. Use NIOC to ensure sufficient bandwidth for iSCSI traffic in the event of network congestion. Recommended share values along with justification can be found in Example Architectural Decision – Network I/O Control Shares/Limits for ESXi Host using IP Storage. (A quick way to review the current share values is sketched after this list.)
5. Use a dedicated VLAN for iSCSI traffic
6. Do NOT present SMB 3.0 or NFS direct to the Guest OS for use with Exchange databases!
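Regarding recommendation 4, the following is a small read-only pyVmomi sketch (the vCenter hostname and credentials are placeholders) which lists the NIOC network resource pools configured on each Distributed Switch together with their share values and limits. This can help when validating the environment against the shares recommended in the linked example decision:

```python
# Read-only sketch (assumed vCenter details): list NIOC network resource pools
# and their share values/limits for each vSphere Distributed Switch.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.DistributedVirtualSwitch], True)
for dvs in view.view:
    print(dvs.name)
    # networkResourcePool holds the NIOC pools (e.g. iSCSI, NFS, VM traffic);
    # it may be empty if NIOC is not enabled on the switch.
    for pool in dvs.networkResourcePool or []:
        alloc = pool.allocationInfo
        print("  %s: shares=%s (%s), limit=%s" %
              (pool.name, alloc.shares.shares, alloc.shares.level, alloc.limit))
view.Destroy()
Disconnect(si)
```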

Back to the Index of How to successfully Virtualize MS Exchange.

How to successfully Virtualize MS Exchange – Part 7 – Storage Options

When virtualizing Exchange, we not only have to consider the Compute (CPU/RAM) and Network, but also the storage to provide both the capacity and IOPS required.

However before considering IOPS and capacity, we need to decide how we will provide storage for Exchange as storage can be presented to a Virtual Machine in many ways.

This post will cover the different ways storage can be presented to ESXi and used for Exchange while subsequent posts will cover in detail each of the options discussed.

First, let’s discuss Local Storage.

What I mean by Local Storage is SSDs/HDDs within a physical ESXi host that are not shared (i.e.: not accessible by other hosts).

This is probably the most basic form of storage we can present to ESXi and apart from the Hypervisor layer could be considered similar to a physical Exchange deployment.

[Image: Local storage presented to ESXi]

Next, let’s discuss Raw Device Mappings.

Raw Device Mappings or “RDMs” are where shared storage from a SAN is presented through the hypervisor to the guest as a native SCSI device, enabling the guest to access the LUN directly.

[Image: Raw Device Mappings presented through the hypervisor to the guest]

For more information about Raw Device Mappings, see: About Raw Device Mappings

The next option is Presenting Storage direct to the Guest OS.

It is possible, and sometimes advantageous, to present SAN/NAS storage direct to the Guest OS via NFS, iSCSI or SMB 3.0, bypassing the hypervisor altogether.

[Image: Storage presented direct to the Guest OS]

The final option we will discuss is “Datastores”.

Datastores are probably the most common way to present storage to ESXi. Datastores can be block or file based, and presented via iSCSI, NFS or FCP (FC/FCoE) as of vSphere 5.5.

Datastores are basically just LUNs or NFS mounts. If the datastore is backed by a LUN, it will be formatted with the Virtual Machine File System (VMFS), whereas NFS datastores are simply NFS 3 mounts with no formatting done by ESXi.
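As a simple illustration of that distinction, here is a minimal pyVmomi sketch (vCenter hostname and credentials are placeholders) which lists each datastore together with its backing type, reporting VMFS for LUN-backed datastores and NFS for NFS mounts:

```python
# Minimal sketch (assumed vCenter details): list datastores with their backing type.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.Datastore], True)
for ds in view.view:
    # summary.type is "VMFS" for LUN-backed datastores and "NFS" for NFS mounts
    print("%s: %s, %.0f GB capacity" %
          (ds.summary.name, ds.summary.type, ds.summary.capacity / 1024.0**3))
view.Destroy()
Disconnect(si)
```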

[Image: Storage presented to ESXi via a Datastore]

For more information about VMFS see: Virtual Machine File System Technical Overview.

What do all the above options have in common?

Local storage, RDMs, storage presented to the Guest OS directly and Datastores can all be protected by RAID or be JBOD deployments with no data protection at the storage layer.

Importantly, none of the four options on their own guarantee data protection or integrity, that is, prevent data loss or corruption. Protecting against data loss or corruption is a separate topic which I will cover in a non-Exchange-specific post.

So regardless of the way you present your storage to ESXi or the VM, how you ensure data protection and integrity needs to be considered.

In summary, there are four main ways (listed below) to present storage to ESXi which can be used for Exchange, each with different considerations around Availability, Performance, Scalability, Cost, Complexity and Support.

1. Local Storage (Part 8)
2. Raw Device Mappings (Part 9)
3. Direct to the Guest OS (Part 10)
4. Datastores (Part 11)

In the next four parts, each of these storage options for MS Exchange will be discussed in detail.

Back to the Index of How to successfully Virtualize MS Exchange.

Fight the FUD! – Not all VAAI-NAS storage solutions are created equal.

At a meeting recently, a potential customer who is comparing NAS/Hyper-converged solutions for an upcoming project advised me they only wanted to consider platforms with VAAI-NAS support.

As the customer was considering a wide range of workloads, including VDI and server workloads, the requirement for VAAI-NAS makes sense.

Then the customer advised us they were comparing 4 different Hyper-Converged platforms and a range of traditional NAS solutions. The customer eliminated two platforms due to no VAAI support at all (!), but then said Nutanix and one other vendor both had VAAI-NAS support, so this was not a differentiator.

Having personally completed the VAAI-NAS certification for Nutanix, I was curious what other vendor had full VAAI-NAS support, as it was (and remains) my understanding Nutanix is the only Hyper-converged vendor who has passed the full suite of certification tests.

The customer advised who the other vendor was, so we checked the HCL together and sure enough, that vendor only supported a subset of VAAI-NAS capabilities even though the sales reps and marketing material all claim full VAAI-NAS support.

The customer was more than a little surprised that VAAI-NAS certification does not require all capabilities to be supported.

Any storage vendor wanting its customers to get support for VAAI-NAS with VMware is required to complete a certification process which includes a comprehensive set of tests. There are a total of 66 tests for VAAI-NAS vSphere 5.5 certification, all of which must be passed to gain full VAAI-NAS certification.

However as this customer learned, it is possible and indeed common for storage vendors not to pass all tests and gain certification for only a subset of VAAI-NAS capabilities.

The below shows the Nutanix listing on the VMware HCL for VAAI-NAS, highlighting the 4 VAAI-NAS features which can be certified and supported:

1. Extended Stats
2. File Cloning
3. Native SS for LC
4. Space Reserve

[Image: Nutanix listing on the VMware HCL showing all 4 VAAI-NAS features supported]

This is an example of a fully certified solution supporting all VAAI-NAS features.

Here is an example of a VAAI-NAS certified solution which has only certified 1 of the 4 capabilities. (This is a Hyper-converged platform, although it was not being considered by the customer.)

[Image: HCL listing for a solution certified for only 1 of the 4 VAAI-NAS capabilities]

Here is another example of a VAAI-NAS certified solution which has only certified 2 of the 4 capabilities. (This is a Hyper-converged platform).

[Image: HCL listing for a solution certified for only 2 of the 4 VAAI-NAS capabilities]

So customers using the above storage solution cannot, for example, create Thick Provisioned virtual disks, which prevents the use of Fault Tolerance (FT) or virtualization of business critical applications such as Oracle RAC.

In this next example, the vendor has certified 3 out of 4 capabilities and is not certified for Native SS for LC. (This is a traditional centralized NAS platform).

[Image: HCL listing for a solution certified for 3 of the 4 VAAI-NAS capabilities, missing Native SS for LC]

This solution does not support using storage-level snapshots for the creation of Linked Clones, so things like Horizon View (VDI) or vCloud Director Fast Provisioning deployments will not get the cloning performance or optimal capacity-saving benefits of fully certified/supported VAAI-NAS storage solutions.

The point of this article is simply to raise awareness that not all solutions advertising VAAI-NAS support are created equal and ALWAYS CHECK THE HCL! Don’t believe the friendly sales rep as they may be misleading you or flat out lying about VAAI-NAS capabilities / support.

When comparing traditional NAS or Hyper-converged solutions, ensure you check the VMware HCL and compare the various VAAI-NAS capabilities supported as some vendors have certified only a subset of the VAAI-NAS capabilities.

To properly compare solutions, use the VMware HCL Storage/SAN section and as per the below image select:

Product Release Version: All
Partner Name: All or the specific vendor you wish to compare
Features Category: VAAI-NAS
Storage Virtual Appliance Only: No for SAN/NAS, Yes for Hyper-converged or VSA solutions

[Image: VMware HCL Storage/SAN search criteria]

Then click on the Model you wish to compare, e.g.: NX-3000 Series.

[Image: VMware HCL search results showing the NX-3000 Series]

Then you should see something similar to the below:

[Image: HCL model details page showing the “View” link]

Click the “View” link to show the VAAI-NAS capabilities and you will see the below, which highlights the supported VAAI-NAS features.

Note: if the “View” link does not appear, the product is NOT supported for VAAI-NAS.

[Image: VAAI-NAS features supported, as shown on the HCL]

If the Features do not list Extended Stats, File Cloning, Native SS for LC and Space Reserve, the solution does not support the full VAAI-NAS capabilities.

Related Articles:

1. My checkbox is bigger than your checkbox – @HansDeLeenheer

2. Unchain My VM, And Set Me Free! (Snapshots)

3. VAAI-NAS – Some snapshot chains are deeper than others