How to successfully Virtualize MS Exchange – Part 4 – DRS

DRS is a well-known feature of vSphere designed to help load balance virtual environments for optimal performance.

With most virtual workloads, DRS does an excellent job of load balancing, so leaving DRS set to “Fully Automated” without specifying any DRS rules is fine.

The “Migration Threshold” can be adjusted from Conservative to Aggressive in five increments, with the default being “3”, which I recommend.

For more information on this recommendation see: Example Architectural Decision – DRS Automation Level

These two settings are shown below:

[Screenshot: cluster DRS settings – Automation Level and Migration Threshold]
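For those who prefer to script these settings, below is a minimal sketch using pyVmomi (the vSphere Python SDK). The vCenter address, credentials and cluster name are placeholders for illustration, not values from any real environment.

```python
# Minimal pyVmomi sketch: enable DRS in "Fully Automated" mode with the
# default Migration Threshold. All names and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE  # lab use only; validate certificates in production

si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="ExamplePassword!", sslContext=ctx)
content = si.RetrieveContent()

# Locate the target cluster by name.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "Example-Cluster")
view.DestroyView()

spec = vim.cluster.ConfigSpecEx()
spec.drsConfig = vim.cluster.DrsConfigInfo(
    enabled=True,
    defaultVmBehavior=vim.cluster.DrsConfigInfo.DrsBehavior.fullyAutomated,
    vmotionRate=3,  # the middle (default) Migration Threshold setting
)
# modify=True merges this change into the existing cluster configuration.
cluster.ReconfigureComputeResource_Task(spec, modify=True)
Disconnect(si)
```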

However, with MS Exchange VMs being CPU and RAM intensive, it doesn’t make sense to have these VMs moved around automatically if it can be avoided. If an Exchange MBX/MSR VM is vMotioned, the process may take several minutes to complete, and during that time, depending on the vMotion configuration and available bandwidth, performance may be degraded. As a result, avoiding vMotion where possible reduces the risk to Exchange.

Note: I am not saying vMotion does not work, or cannot be configured to work very well for large VMs like MBX/MSR, but if vMotion can be avoided without adding significant complexity or operational cost to an environment, I try to avoid it except during planned maintenance activities.

However, I still recommend enabling DRS in “Fully Automated” mode; by combining it with DRS rules for the MBX/MSR VMs, we can provide both higher and more consistent performance for MS Exchange.

To achieve this I recommend the following:

Create a “Host DRS Group” for each ESXi host in the cluster where Exchange VMs are expected to run, naming each group after its ESXi host to make it easily identifiable.

[Screenshot: Host DRS Group named after the ESXi host]

Next I recommend creating a “VM DRS Group” per Exchange Mailbox VM, naming each VM DRS Group after the Exchange MBX or MSR server, or using another easily identifiable name such as “Exchange DAG Node 1”, shown below.

[Screenshot: VM DRS Group “Exchange DAG Node 1”]

Now that we have our “Host DRS Group/s” and “VM DRS Group/s” created, we set up a DRS “Virtual Machines to Hosts” rule per MBX/MSR VM and ESXi host with the policy “Should run on hosts in group”, as shown below.

[Screenshot: “Virtual Machines to Hosts” rule for Exchange DAG Node 1]

What the above rule does is ensure the MSR or MBX VM runs only on the specified ESXi host unless there is an ESXi host failure, in which case it can automatically restart on another node within the cluster.

[Screenshot: “Should run on hosts in group” rule policy]
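Creating these groups and rules by hand gets tedious as the number of MBX/MSR VMs grows, so they can also be created programmatically. Below is a minimal pyVmomi sketch that creates one Host DRS Group, one VM DRS Group and the matching “should run” rule; the vCenter, cluster, host and VM names are placeholders for illustration, and the sketch would be repeated (or looped) per Exchange VM/host pair.

```python
# Minimal pyVmomi sketch: create a Host DRS Group, a VM DRS Group and a
# non-mandatory "should run on hosts in group" rule for one Exchange VM.
# All names and credentials are placeholders for illustration.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE  # lab use only

si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="ExamplePassword!", sslContext=ctx)
content = si.RetrieveContent()

def find(container, vimtype, name):
    """Return the first managed object of the given type with the given name."""
    view = content.viewManager.CreateContainerView(container, [vimtype], True)
    obj = next(o for o in view.view if o.name == name)
    view.DestroyView()
    return obj

cluster = find(content.rootFolder, vim.ClusterComputeResource, "Example-Cluster")
host = next(h for h in cluster.host if h.name == "esxi-01.example.com")
vm = find(cluster, vim.VirtualMachine, "Exchange-MBX-01")

add = vim.option.ArrayUpdateSpec.Operation.add
spec = vim.cluster.ConfigSpecEx()
spec.groupSpec = [
    # Host DRS Group named after the ESXi host, per the recommendation above.
    vim.cluster.GroupSpec(operation=add, info=vim.cluster.HostGroup(
        name="esxi-01.example.com", host=[host])),
    # VM DRS Group named after the DAG node.
    vim.cluster.GroupSpec(operation=add, info=vim.cluster.VmGroup(
        name="Exchange DAG Node 1", vm=[vm])),
]
spec.rulesSpec = [
    vim.cluster.RuleSpec(operation=add, info=vim.cluster.VmHostRuleInfo(
        name="Exchange DAG Node 1 should run on esxi-01",
        enabled=True,
        mandatory=False,  # "should run", not "must run"
        vmGroupName="Exchange DAG Node 1",
        affineHostGroupName="esxi-01.example.com")),
]
cluster.ReconfigureComputeResource_Task(spec, modify=True)
Disconnect(si)
```

The key detail is mandatory=False, which makes this a “should” rule rather than a “must” rule; a “must” rule would prevent HA from restarting the VM on any other host.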

The below screenshot shows an example of the recommended DRS rules in an environment with four MSR or MBX servers.

[Screenshot: DRS “should run” rules for four Exchange servers]

The above rules will result in the MBX or MSR VMs running on separate hosts as shown below.

[Screenshot: Exchange VMs running one per ESXi host]

Advantages of this DRS configuration:

1. Ensures no compute or network contention between the Exchange VMs
2. Ensures no storage-layer contention between the Exchange VMs at the hypervisor level, such as HBA queue depths or NIC utilization. Note: This will not eliminate storage contention which may exist at the SAN/NAS layer.
3. DRS will not automatically move an MBX or MSR VM, meaning performance will not be impacted by vMotion activity
4. HA is still fully functional
5. vMotion can still be used if required, e.g. prior to host maintenance
6. DRS will still automatically load balance VMs throughout the cluster to ensure optimal performance of all ESXi hosts
7. More efficient than simply using Anti-Affinity rules for the MBX/MSR VMs
8. Ensures two or more DAG members will not be impacted in the event of a single ESXi host failure

Recommendations for DRS:

1. Set the DRS Automation Level to “Fully Automated”
2. Set the DRS “Migration Threshold” to “3” (Default)
3. Set up a “VM DRS Group” per Exchange Mailbox VM
4. Set up a “Host DRS Group” on a 1:1 basis with Exchange MSR or MBX VMs
5. Set up a DRS “Virtual Machines to Hosts” rule with the policy “Should run on hosts in group” on a 1:1 basis with Exchange MSR or MBX VMs and ESXi hosts
6. Disable Distributed Power Management (DPM) for hosts running Exchange MBX/MSR VMs (a scripted sketch follows below)
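Recommendation 6 can also be scripted. The following pyVmomi sketch adds a per-host DPM override disabling power management for the hosts running Exchange VMs; names and credentials are again placeholders, and it assumes no DPM override already exists for those hosts (use the “edit” operation instead of “add” if one does).

```python
# Minimal pyVmomi sketch: add per-host DPM overrides disabling power
# management for the hosts that run Exchange MBX/MSR VMs.
# All names and credentials are placeholders for illustration.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE  # lab use only

si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="ExamplePassword!", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "Example-Cluster")
view.DestroyView()

# Hosts that run Exchange MBX/MSR VMs (placeholder names).
exchange_hosts = {"esxi-01.example.com", "esxi-02.example.com",
                  "esxi-03.example.com", "esxi-04.example.com"}
add = vim.option.ArrayUpdateSpec.Operation.add

spec = vim.cluster.ConfigSpecEx()
spec.dpmHostConfigSpec = [
    # Per-host override: DPM disabled regardless of the cluster-wide setting.
    # Use operation "edit" instead of "add" if an override already exists.
    vim.cluster.DpmHostConfigSpec(
        operation=add,
        info=vim.cluster.DpmHostConfigInfo(key=h, enabled=False))
    for h in cluster.host if h.name in exchange_hosts
]
cluster.ReconfigureComputeResource_Task(spec, modify=True)
Disconnect(si)
```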

Back to the Index of How to successfully Virtualize MS Exchange.

Example Architectural Decision – Site Recovery Manager Server – Physical or Virtual?

Problem Statement

To ensure Production vSphere environment/s can meet/exceed the required RTOs in the event of a declared site failure, what is the most suitable way to deploy VMware Site Recovery Manager: on a physical or a virtual machine?

Requirements

1. Meet/Exceed RTO requirements

2. Ensure solution is fully supported

3. SRM must be highly available, or able to be recovered rapidly, to ensure management/recovery of the virtual infrastructure

4. Where possible, reduce the CAPEX and OPEX for the solution

5. Ensure the environment can be easily maintained in BAU

Assumptions

1. Sufficient compute capacity in the Management cluster for an additional VM

2. SRM database is hosted on an SQL server

3. vSphere cluster (ideally the Management cluster) has N+1 availability

Constraints

1. None

Motivation

1. Reduce CAPEX and OPEX

2. Reduce the complexity of BAU maintenance / upgrades

3. Reduce power / cooling / rackspace usage in datacenter

Architectural Decision

Install Site Recovery Manager on a Virtual machine

Justification

1. Ongoing datacenter costs relating to power, cooling and rackspace are avoided

2. Placing Site Recovery Manager on a virtual machine ensures the application benefits from the availability, load balancing and fault-resilience capabilities provided by vSphere

3. The CAPEX of a virtual machine is lower than that of a physical system, especially when taking into consideration the network/storage connectivity the additional hardware would require were a physical server used

4. The OPEX of a virtual machine is lower than that of a physical system due to no hardware maintenance, minimal/no additional power usage, and no cooling costs

5. Improved scalability and the ability to dynamically add resources where increased resource consumption by the VM requires it. Note: The guest operating system must support Hot Add/Hot Plug, and the feature must be enabled while the VM is shut down. Where these features are not supported, virtual hardware can be added during a short outage.

6. Improved manageability, as the VMware abstraction layer makes day-to-day tasks such as backup/recovery easier

7. The ability to non-disruptively migrate to new hardware where EVC is enabled and configured in a compatible mode between hosts within a vSphere datacenter

Alternatives

1. Place SRM on a physical server

Implications

1. For some storage arrays, the SRM server needs access to admin LUNs, and using a virtual machine may increase complexity due to the requirement for RDMs

I would like to thank James Wirth VCDX#83 (@jimmywally81) for his contribution to this example architectural decision.

Related Articles

1. Site Recovery Manager Deployment Location

2. Swap file location for SRM protected VMs


Example Architectural Decision – Site Recovery Manager Deployment Location

Problem Statement

To ensure Production vSphere environment/s can meet/exceed the required RTOs in the event of a declared site failure and can easily perform scheduled DR testing, VMware Site Recovery Manager will be used to automate failover to the secondary site.

What is the most suitable way to deploy Site Recovery Manager to ensure the environment can be maintained with minimal risk/complexity?

Requirements

1. Meet/Exceed RTO requirements
2. Ensure solution is fully supported

Assumptions

1. vCenter is considered a Tier 1 application
2. vSphere 5.1
3. SRM 5.1
4. A single Windows instance hosts vCenter, SSO and Inventory services and is protected by vCenter Heartbeat

Constraints

1. SRM is not protected by vCenter Heartbeat

Motivation

1. Reduce the complexity for BAU maintenance

Architectural Decision

Install Site Recovery Manager on a dedicated Windows 2008 instance

Justification

1. Installing, upgrading or patching SRM, including its Storage Replication Adapters (SRAs), may require a reboot or troubleshooting, which may impact the production vCenter, including the SSO and Inventory services

2. Having SRM separate to vCenter ensures the failover is not unnecessarily delayed in the event of a disaster due to contention with vCenter on the same VM

3. SRM and vCenter work together in the event of an outage and are both busy at the same time; as such they are less complementary workloads

4. If hosted on the vCenter server, SRM would be subject to the same change windows and would be impacted during any maintenance performed for applications running on the same OS instance

5. The SRM application has different availability requirements than vCenter; if SRM were combined with vCenter, SRM (having a lower availability requirement) would have to be treated with the same change management/care as vCenter, which would complicate BAU maintenance

6. The SRM service has different (business) maintenance requirements to vCenter; as such the two are not suited to being placed on the same VM

7. Having SRM on a dedicated VM aligns with the scale-out recommendation for virtual workloads

8. Having additional components on the same OS instance increases complexity and may reduce the availability of vCenter

Alternatives

1. Place SRM on the vCenter server

Implications

1. One (1) additional Windows 2008 R2 license will be required

2. One (1) additional Windows instance will need to be maintained in BAU

I would like to thank James Wirth VCDX#83 (@jimmywally81) for his contribution to this example architectural decision.

Related Articles

1. VMware Site Recovery Manager, Physical or Virtual machine?

2. Swap file location for SRM protected VMs
