Support for Active Directory on vSphere

I heard something interesting today from a customer: a storage vendor who sells predominantly block storage products was trying to tell them that Active Directory domain controllers are not supported on vSphere when using NFS datastores.

The context was that the vendor was attempting to sell a traditional block-based SAN and was competing against Nutanix. The funny thing is, Nutanix supports block storage too, so it was an uneducated and pointless argument.

Nonetheless, the topic of support for Active Directory on vSphere using NFS datastores is worth clarifying.

There are two Microsoft TechNet articles which cover support for this topic:

  1. Things to consider when you host Active Directory domain controllers in virtual hosting environments
  2. Support policy for Microsoft software that runs on non-Microsoft hardware virtualization software

Note: There is no mention of storage protocols (Block or File) in these articles.

The second article states:

for vendors who have Server Virtualization Validation Program (SVVP) validated solutions, Microsoft will support server operating systems subject to the Microsoft Support Lifecycle policy for its customers who have support agreements when the operating system runs virtualized on non-Microsoft hardware virtualization software.

VMware has validated vSphere as an SVVP solution, which can be verified here: http://www.windowsservercatalog.com/svvp.aspx

The next interesting point is:

If the virtual hosting environment software correctly supports a SCSI emulation mode that supports forced unit access (FUA), un-buffered writes that Active Directory performs in this environment are passed to the host operating system. If forced unit access is not supported, you must disable the write cache on all volumes of the guest operating system that host the Active Directory database, the logs, and the checkpoint file.

Funnily enough, this is the same point that applies to Exchange, but where the Exchange team decided not to support it, the wider organisation has a much more intelligent policy: SCSI emulation (i.e. VMDKs on NFS datastores) is supported as long as the storage ensures writes are not acknowledged to the OS before being committed to persistent media (i.e. not volatile memory such as RAM).

This is a very reasonable support statement and one which has a solid technical justification.
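To make the forced unit access (FUA) requirement concrete, here is a minimal Python sketch using POSIX O_SYNC and fsync as a stand-in for what the FUA bit requests: the write is not acknowledged until the data has reached persistent media, rather than sitting in a volatile cache. The file path is purely illustrative.

```python
import os

# Hypothetical file standing in for the AD database / transaction log.
DB_PATH = "ad-write-test.bin"

def buffered_write(data: bytes) -> None:
    """Normal buffered write: the OS may acknowledge before data is on stable storage."""
    with open(DB_PATH, "wb") as f:
        f.write(data)  # data may still sit in the page cache or a volatile write cache

def forced_unit_access_write(data: bytes) -> None:
    """Un-buffered write: only return once the data is on persistent media."""
    fd = os.open(DB_PATH, os.O_WRONLY | os.O_CREAT | os.O_SYNC, 0o600)
    try:
        os.write(fd, data)  # O_SYNC: the write completes only after reaching stable storage
        os.fsync(fd)        # flush data and metadata as a belt-and-braces measure
    finally:
        os.close(fd)

if __name__ == "__main__":
    forced_unit_access_write(b"transaction log record\n")
```

The storage underneath (VMDK on NFS or block) must honour that semantic for the configuration to be compliant.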

In summary, running Active Directory on vSphere is supported on both block (iSCSI, FC, FCoE) and file (NFS) based datastores, provided the storage vendor complies with the above requirements.

So check with your storage vendor to confirm whether the storage you're using is compliant.

Nutanix complies 100% with these requirements for both block and file storage. For more details see: Ensuring Data Integrity with Nutanix – Part 2 – Forced Unit Access (FUA) & Write Through

For more information about how NFS datastores provide true block-level storage to virtual machines via VMDKs, check out Emulation of the SCSI Protocol, which shows how all native SCSI commands are honoured by VMDKs on NFS.

Related Articles:

  1. Running Domain Controllers in Hyper-V

This post covers the same FUA requirement that applies to vSphere, and recommends the use of a UPS (to ensure write integrity) as well as enterprise-grade drives, both of which are also applicable to vSphere deployments.

Example Architectural Decision – Storage I/O Control for Clusters Protected by SRM (Example 2 – Use SIOC)

Problem Statement

In an environment with one or more clusters containing virtual machines protected by SRM, what is the most appropriate configuration of Storage I/O Control?

Requirements

1. SRM solution must not be impacted

Assumptions

1. vSphere Version 4.1 or later

2. FC (block) based storage or NFS (file) based storage

3. Number of datastores is fairly static

Constraints

1. Storage I/O Control can prevent the unmounting of a datastore during a recovery, which can lead to errors being reported by SRM

Motivation

1. Where possible ensure consistent storage performance for all virtual machines

Architectural Decision

Enable and configure Storage I/O Control for all datastores (see the configuration sketch below).

Set the congestion threshold to 20ms.

Leave the shares values at their defaults.

Add a step to each SRM recovery plan as Step 1, selecting the step placement “Before selected step”.

Configure the step type as “Command on SRM Server” and have it execute the scheduled task that disables SIOC prior to executing an SRM recovery.
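As a rough illustration only, the enable/configure part of this decision could be scripted with pyVmomi along the following lines. The vCenter hostname and credentials are placeholders, and the object and method names used here (IORMConfigSpec, ConfigureDatastoreIORM_Task) should be verified against the vSphere API version in use.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details - substitute your own vCenter and credentials.
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="VMware1!",
                  sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()

    # SIOC configuration: enabled, with a 20ms congestion threshold.
    spec = vim.StorageResourceManager.IORMConfigSpec()
    spec.enabled = True
    spec.congestionThreshold = 20  # milliseconds
    # Note: on later vSphere versions a "manual" congestionThresholdMode may also be required.

    # Apply the spec to every datastore in the inventory; per-VM shares are left at default.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.Datastore], True)
    for ds in view.view:
        content.storageResourceManager.ConfigureDatastoreIORM_Task(datastore=ds, spec=spec)
    view.DestroyView()
finally:
    Disconnect(si)
```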

Justification

1. The benefits of Storage I/O control can still be achieved without impact to the SRM solution

2. SIOC will not impact SRM failover as it will be disabled automatically as part of the SRM recovery plan

3. In the event the protected site is lost, SIOC will not prevent failover

Implications

1. Increased complexity for the SRM solution

2. An additional step to execute a “Command on SRM Server” is required

3. A scheduled task will need to be set up and configured with the setting “Allow task to be run on demand”

4. A script to disable SIOC will need to be prepared and configured for all datastores (a minimal example is sketched below)
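The disable script that the scheduled task runs could look something like the following sketch, which mirrors the enable example above but simply turns SIOC off on every datastore so they can be unmounted cleanly during recovery. Connection details are placeholders and the API names should be checked against your vSphere version.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details - the scheduled task would run this against vCenter.
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="VMware1!",
                  sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()

    # Disable SIOC prior to the SRM recovery plan unmounting datastores.
    spec = vim.StorageResourceManager.IORMConfigSpec()
    spec.enabled = False

    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.Datastore], True)
    for ds in view.view:
        content.storageResourceManager.ConfigureDatastoreIORM_Task(datastore=ds, spec=spec)
    view.DestroyView()
finally:
    Disconnect(si)
```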

Alternatives

1. Enable Storage I/O control and leave default settings

2. Enable storage I/O control and set share values on virtual machines

3. Enable Storage I/O control and set a lower “congestion threshold”

4. Enable Storage I/O control and set a higher “congestion threshold”

5. Disable Storage I/O control

Related Articles

1. Example Architectural Decision – Storage I/O Control for Clusters Protected by SRM (Example 1 – Don’t Use SIOC)


Example Architectural Decision – Storage Protocol Choice for a VMware View Environment using Linked Clones

Problem Statement

What is the most suitable storage protocol for a Virtual Desktop (VMware View) environment using Linked Clones?

Assumptions

1. The storage array supports NFS native snapshot offload (a quick way to check this is sketched after this list)
2. VMware View 5.1 or later
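Assumption 1 can be sanity-checked from the vSphere API. The following is a rough pyVmomi sketch; the nativeSnapshotCapable datastore capability flag is assumed here to reflect VAAI-NAS native snapshot offload support (verify against your array and vSphere/View versions), and the connection details are placeholders.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details - substitute your own vCenter and credentials.
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="VMware1!",
                  sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.Datastore], True)
    for ds in view.view:
        # nativeSnapshotCapable is assumed to indicate array-offloaded (VAAI-NAS)
        # native snapshot support on this datastore.
        capable = getattr(ds.capability, "nativeSnapshotCapable", None)
        print(f"{ds.name}: type={ds.summary.type}, nativeSnapshotCapable={capable}")
    view.DestroyView()
finally:
    Disconnect(si)
```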

Motivation

1. Minimize recompose (maintenance) window
2. Minimize impact on the storage array and HA/DRS cluster during recompose activities
3. Reduce storage costs where possible
4. Simplify the storage design, e.g. number/size of datastores and storage connectivity
5. Reduce the total solution cost, e.g. number of hosts required

Architectural Decision

Use Network File System (NFS)

Justification

1. Using native NFS snapshot (VAAI) offloads the creation of VMs to the array, therefore reducing the compute overhead on the ESXi hosts
2. Native NFS snapshots require much less disk space than traditional linked clones
3. Recomposition times are reduced due to the offloading of the cloning to the array
4. More virtual machines can be supported per NFS datastore than per VMFS datastore (200+ for NFS compared with a recommended maximum of 140 for VMFS, though it is generally recommended to design for much lower numbers, e.g. 64 per VMFS datastore)
5. Recompositions/Refresh activities can be performed during business hours, or at Logoff (for Refresh) with minimal impact to the HA/DRS cluster, thus giving more flexibility to maintain the environment
6. Avoids potential VMFS locking issues – although this issue is not as significant for environments using vSphere 4.1 onwards with VAAI-compatible arrays
7. When sizing your storage array, less capacity is required. Note: Performance sizing is also critical
8. The cost of an FC Storage Area Network can be avoided
9. Fewer ESXi hosts may be required as the compute overhead of driving cloning has been removed

Implications

1. In the current release (5.1), View Storage Accelerator (formerly Content Based Read Cache, or CBRC) is not supported when using native NFS snapshots (VAAI)
2. Also in the current release (5.1), “Use native NFS snapshots (VAAI)” is in “Tech Preview” – this is rumoured to change in View 5.2

Alternatives

1. Use VMFS (block) based datastores and have a larger number of VMFS datastores – Note: recompose activity will be driven by the host, which adds overhead to the cluster.