MS Exchange on Nutanix now a MS validated ESRP solution

I am pleased to announce that Nutanix has successfully completed the Microsoft Exchange Solution Reviewed Program requirements and is now listed as a validated solution at the following URL:

Exchange Solution Reviewed Program (ESRP) – Storage

The submission describes a dual-site, 24,000 x 1GB mailbox solution running on just 8 NX-8150 nodes. This is a highly resilient solution with N+1 availability at each site, allowing for full self-healing and failover in the event of a node failure.

Nutanix is also the FIRST and only hyper-converged platform to be validated under ESRP, further strengthening our leadership in the market.

The performance testing (using Jetstress) was conducted with the nodes at around 90% capacity (8.5TB of data per node), demonstrating that Nutanix provides great performance even at high utilization and where the working set far exceeds the SSD tier. This is key to a truly enterprise-grade solution for a business-critical application such as Exchange.
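
For context, here is a rough back-of-the-envelope sketch using only the figures quoted above (these are illustrative numbers, not values taken from the ESRP submission itself):

```python
# Rough arithmetic based only on the figures quoted above; illustrative only.
nodes = 8                  # NX-8150 nodes across both sites
data_per_node_tb = 8.5     # data per node during Jetstress testing
utilization = 0.90         # approximate capacity utilization during testing

usable_per_node_tb = data_per_node_tb / utilization  # implied usable capacity per node
total_data_tb = nodes * data_per_node_tb             # total data under test

print(f"Implied usable capacity per node: ~{usable_per_node_tb:.1f} TB")       # ~9.4 TB
print(f"Total data under test across {nodes} nodes: ~{total_data_tb:.0f} TB")  # ~68 TB
```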

The solution runs on Hyper-V with SMB 3.0 on the underlying Nutanix Distributed Storage Fabric. The same solution can also be deployed on vSphere or the Acropolis Hypervisor (AHV) using iSCSI in a fully supported configuration.

The above solution was validated without using Compression or Erasure Coding, both of which improve performance and deliver significant capacity savings, allowing for larger mailboxes. As a result, the Nutanix platform provides even more value than the ESRP submission shows.

If there was any doubt about whether you should virtualize MS Exchange on the Nutanix platform, the fact that Nutanix is now validated by Microsoft should put your mind at ease.

Now you can move one step closer to a fully web-scale datacenter by removing another application-specific silo, enjoying improved resiliency and performance while reducing operational cost and complexity.

Fight the FUD – Support for MS Exchange on Nutanix

It's disappointing that some vendors have so little respect for customers' time that they continue to spread FUD (Fear, Uncertainty and Doubt) about other vendors' products.

This post is for existing and future Nutanix customers to get the facts about support for MS Exchange on Nutanix.

First of all, Nutanix Distributed File System (NDFS) is not Network File System (NFS).

NDFS is a file system which presents storage to hypervisors (ESXi, Hyper-V and the Acropolis Hypervisor) via industry-standard storage protocols, namely iSCSI, SMB 3.0 and NFS.

Currently Microsoft does not support Exchange running within VMDKs hosted on NFS datastores for vSphere deployments. This is due to a support policy which experts from almost every major storage vendor agree is outdated and baseless.

For more information see the below article:

Virtualizing Exchange on vSphere with NFS backed storage?

However, Nutanix has a published KB providing full support for MS Exchange when running within VMDKs on NFS datastores.

Nutanix aims to provide an Uncompromisingly Simple solution for customers which gives them maximum flexibility and choice. As such, when deploying applications such as MS Exchange on Nutanix, how the application is deployed and which storage protocol it uses is ultimately the customer's choice.

Some of the benefits of this approach include:

  1. Running a standard platform and storage protocol for all workloads is a simple model which reduces the unnecessary complexity of multiple protocols and/or in-guest storage configurations.
  2. The customer has the choice to deploy in multiple configurations to suit their specific requirements.

If you want to run a 100% Microsoft supported Exchange configuration on Nutanix, you currently have three options:
  1. vSphere running iSCSI
  2. Hyper-V running SMB 3.0
  3. Acropolis Hypervisor (AHV) running iSCSI (default)

If you understand that NFS datastores are not supported by Microsoft, but accept that they are fully supported by Nutanix, and you want to run Exchange in a VMDK on NFS datastores, then Nutanix will provide full support for MS Exchange and Microsoft will provide commercially reasonable support, either directly or via TSAnet if the case needs to be escalated.

So there you have it: MS Exchange can be run on Nutanix in 100% Microsoft supported configurations, and Nutanix customers have the choice of how they wish to deploy, with full support from Nutanix.
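
As a quick reference, the support positions described above can be summarised in a simple sketch (this merely restates the combinations discussed in this post; refer to the Nutanix KB and Microsoft documentation for the authoritative statements):

```python
# Summary of the deployment options described in this post. "microsoft" reflects
# Microsoft's storage support policy; "nutanix" reflects the Nutanix KB above.
options = [
    {"hypervisor": "Hyper-V", "storage": "SMB 3.0",               "microsoft": "supported", "nutanix": "supported"},
    {"hypervisor": "vSphere", "storage": "iSCSI",                 "microsoft": "supported", "nutanix": "supported"},
    {"hypervisor": "AHV",     "storage": "iSCSI (default)",       "microsoft": "supported", "nutanix": "supported"},
    {"hypervisor": "vSphere", "storage": "VMDK on NFS datastore",
     "microsoft": "commercially reasonable support (direct or via TSAnet)",
     "nutanix": "supported"},
]

for o in options:
    print(f'{o["hypervisor"]:8} + {o["storage"]:23} | Microsoft: {o["microsoft"]} | Nutanix: {o["nutanix"]}')
```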


Related Articles:

How to successfully Virtualize MS Exchange – Part 10 – Presenting Storage direct to the Guest OS

Let’s start by listing three common storage types which can be presented direct to a Windows OS:

1. iSCSI LUNs
2. SMB 3.0 shares
3. NFS mounts

Next let’s discuss these 3 options.

iSCSI LUNs are a common way of presenting storage direct to the Guest OS even in vSphere environments and can be useful for environments using storage array level backup solutions (which will be discussed in detail in an upcoming post).

The use of iSCSI LUNs is fully supported by VMware and Microsoft as iSCSI meets the technical requirements for Exchange, being Write Ordering, Forced Unit Access (FUA) and SCSI abort/reset commands. iSCSI LUNs presented to Windows are then formatted with NTFS which is a journalling file system which also protects against Torn I/O.

In vSphere environments nearing the configuration maximum of 256 datastores per ESXi host (and therefore per HA/DRS cluster), presenting iSCSI LUNs direct to applications such as Exchange can help ensure scalability even where vSphere limits may have been reached.

Note: I would recommend reviewing the storage design and trying to optimize VMs/LUN etc first before using iSCSI LUNs presented to VMs.
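
To illustrate the scalability point above, here is a simple sketch (the existing datastore count and LUNs per VM below are hypothetical assumptions; only the 256 datastore maximum comes from the text above):

```python
# Hypothetical check of a design against the 256 datastores per host maximum.
DATASTORE_MAX_PER_HOST = 256   # vSphere configuration maximum referenced above

existing_datastores = 240      # datastores already presented to the cluster (assumed)
exchange_vms = 4               # Exchange VMs to be added (assumed)
luns_per_exchange_vm = 8       # e.g. separate database and log LUNs per VM (assumed)

# Option A: present the Exchange storage as additional datastores (VMDK approach)
with_vmdks = existing_datastores + exchange_vms * luns_per_exchange_vm

# Option B: present the LUNs via in-guest iSCSI; they do not count against the host maximum
with_in_guest_iscsi = existing_datastores

print(f"VMDK approach:  {with_vmdks} datastores "
      f"({'exceeds' if with_vmdks > DATASTORE_MAX_PER_HOST else 'within'} the maximum of {DATASTORE_MAX_PER_HOST})")
print(f"In-guest iSCSI: {with_in_guest_iscsi} datastores (unchanged)")
```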

The problem with iSCSI LUNs is that they result in additional complexity compared to using VMDKs on Datastores (discussed in Part 11). The complexity is not insignificant: typically multiple LUNs need to be created per Exchange VM, and things like iSCSI initiators and LUN masking need to be configured. Then, when the iSCSI initiator driver is updated (say, via Windows Update), you may find your storage disconnected and need to troubleshoot iSCSI driver issues. You also need to consider the vNetworking implications, as the VM now needs IP connectivity to the storage network.

I wrote this article (Example VMware vNetworking Design w/ 2 x 10GB NICs for IP Storage) a while ago showing an example vNetworking design that supports IP storage with 2 x 10GB NICs.

The above article shows NFS in the dvPortGroup name, but the same configuration is also optimal for iSCSI. Each Exchange VM would then need a second vmNIC connected to the iSCSI portgroup (or dvPortgroup), ideally with a static IP address.

IP addressing is another complexity added by presenting storage direct to VMs rather than using VMDKs on datastores.

Many system administrators, architects and engineers might scoff at the suggestion that iSCSI is complex. While I don’t find iSCSI at all difficult to design, install, configure and use, in my opinion it is significantly more complex and has many more points of failure than using a VMDK on a Datastore.

One of the things I have learned and seen benefit countless customers over the years is keeping things as simple as possible while meeting the business requirements. With that in mind, I recommend only considering the use of iSCSI direct to the Guest OS in the following situations:

1. When using a Backup solution which triggers a storage-level snapshot which is not VM or VMDK based, i.e. where snapshots are only supported at the LUN level (older storage technologies).
2. Where ESXi scalability maximums are going to be reached and creating a separate cluster is not viable (technically and/or commercially) following a detailed review and optimization of storage for the vSphere environment.
3. When using legacy storage architecture where performance is constrained at the datastore level, e.g. where increasing the number of VMs per Datastore impacts performance due to latency created by queue depth or storage controller contention (see the sketch following this list).
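
The following sketch illustrates point 3 with hypothetical numbers: when many VMs share a single datastore/LUN, the device queue depth is effectively shared between them, whereas a dedicated iSCSI LUN presented to the Exchange VM is not contended by other VMs. The queue depth and VM count below are assumptions for illustration only.

```python
# Hypothetical illustration of datastore level queue depth contention (point 3 above).
lun_queue_depth = 64      # per device queue depth (assumed value)
vms_per_datastore = 16    # VMs sharing a single datastore/LUN (assumed value)

# Worst case: with all VMs driving I/O concurrently, the queue is effectively shared
shared_queue_per_vm = lun_queue_depth / vms_per_datastore

# A dedicated LUN presented direct to the Exchange VM is not shared with other VMs
dedicated_queue = lun_queue_depth

print(f"Shared datastore: ~{shared_queue_per_vm:.0f} outstanding I/Os per VM under contention")
print(f"Dedicated LUN:    up to {dedicated_queue} outstanding I/Os for the Exchange VM")
```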

Next let’s discuss SMB 3.0 / CIFS shares.

SMB 3.0 or CIFS shares are commonly used to present storage for Hyper-V and also file servers. However presenting SMB 3.0 directly to Windows is not a supported configuration for MS Exchange because SMB 3.0 presented to the Guest OS directly does not meet the technical requirements for Exchange, such as Write Ordering, Forced Unit Access (FUA) and SCSI abort/reset commands.

However SMB 3.0 is supported for MS Exchange when presented to Hyper-V and where Exchange database files reside within a VHD which emulates the SCSI commands over the SMB file protocol. This will be discussed in the upcoming Hyper-V series.

The below is a quote from Exchange 2013 storage configuration options outlining the storage support statement for MS Exchange.

All storage used by Exchange for storage of Exchange data must be block-level storage because Exchange 2013 doesn’t support the use of NAS volumes, other than in the SMB 3.0 scenario outlined in the topic Exchange 2013 virtualization. Also, in a virtualized environment, NAS storage that’s presented to the guest as block-level storage via the hypervisor isn’t supported.

The above statement is pretty confusing in my opinion, but what Microsoft means is that SMB 3.0 is supported when presented to Hyper-V, with Exchange running in a VM and its databases housed within one or more VHDs. However, to be clear, presenting SMB 3.0 direct to Windows for Exchange files is not supported.

NFS mounts can be used to present storage to Windows, although this is not that common. It’s important to note that presenting NFS directly to Windows is not a supported configuration for MS Exchange and, as with SMB 3.0, it does not meet the technical requirements for Exchange, being Write Ordering, Forced Unit Access (FUA) and SCSI abort/reset commands.

As such I recommend not presenting NFS mounts to Windows for Exchange storage.

Note: Do not confuse presenting NFS to Windows with presenting NFS datastores to ESXi, as these are different. NFS datastores will be discussed in Part 11.

Summary:

iSCSI is the only supported storage protocol to present storage direct to Windows for storage of Exchange databases.

Let’s now discuss the Pros and Cons of presenting iSCSI storage direct to the Guest OS.

PROS

1. Ability to reduce the overheads of legacy LUN-based snapshot backup solutions by having MS Exchange use dedicated LUN/s, therefore reducing the delta changes that need to be captured/stored (e.g. NetApp SnapManager for Exchange).
2. Does not impact ESXi configuration maximums for LUNs per ESXi host, as the storage is presented to the Guest OS and not the hypervisor.
3. Dedicated LUN/s per MS Exchange VM can potentially improve performance depending on the underlying storage capabilities and design.

CONS

1. Complexity, e.g. having to create, present and manage LUN/s per Exchange MBX/MSR VM.
2. Having to manage and potentially troubleshoot iSCSI drivers within a Guest OS
3. Having to design for IP storage traffic to access VMs directly, which requires additional vNetworking considerations relating to performance and availability.

Recommendations:

1. When choosing to present storage direct to the Guest OS, only iSCSI is supported.
2. Where no requirements or constraints exist that require the use of storage presented to the Guest OS directly, use the VMDKs on Datastores option, which is discussed in Part 11.
3. Use a dedicated vmNIC on the Exchange VM for iSCSI traffic
4. Use NIOC to ensure sufficient bandwidth for iSCSI traffic in the event of network congestion. Recommended share values along with justification can be found in Example Architectural Decision – Network I/O Control Shares/Limits for ESXi Host using IP Storage, and the sketch following this list shows how shares translate into bandwidth during congestion.
5. Use a dedicated VLAN for iSCSI traffic
6. Do NOT present SMB 3.0 or NFS direct to the Guest OS and use for Exchange Databases!
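
As a rough illustration of recommendation 4, the sketch below shows how NIOC shares translate into a minimum bandwidth guarantee during congestion on a 10GbE uplink. The share values here are purely illustrative; see the linked Example Architectural Decision post for recommended values and justification.

```python
# Illustrative NIOC share allocation on a 10GbE uplink; share values are examples only.
uplink_gbps = 10
shares = {
    "Virtual Machine traffic": 100,
    "iSCSI (in-guest) traffic": 100,
    "vMotion": 50,
    "Management": 25,
}

total_shares = sum(shares.values())
for traffic_type, share in shares.items():
    # During congestion each traffic type is guaranteed its proportional slice of the uplink
    guaranteed_gbps = uplink_gbps * share / total_shares
    print(f"{traffic_type:25} {share:>3} shares -> ~{guaranteed_gbps:.1f} Gbps minimum during congestion")
```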

Back to the Index of How to successfully Virtualize MS Exchange.