What’s .NEXT 2016 – Acropolis X-Fit

Now that Acropolis Hypervisor (AHV) has been GA for approximately 18 months (with many customers using it in production well before official GA), Nutanix has had a lot of positive feedback about its ease of deployment, management, scaling and performance. However, there has been a common theme: customers have wanted the ability to create rules to separate VMs, and to keep VMs together, much like vSphere’s DRS functionality.

Since the GA of AHV, it has supported some basic DRS-style functionality, including initial placement of VMs and the ability to restore a VM’s data locality by migrating the VM to the node containing the most of its data locally.

These features work very well, so affinity and anti-affinity rules were the main pain point. While AHV is not designed or intended to achieve feature parity with ESXi or Hyper-V, DRS-style rules are one area where similar functionality makes sense, whereas in many other areas AHV is, and will remain, very different from legacy hypervisors.

It should come as no surprise, then, that the AHV scheduler now provides VM/host affinity and anti-affinity rule capabilities which (similar to vSphere DRS) allow for “Should” and “Must” rules to control how the cluster enforces them.

[Image: DRSAffinityAntiAffinity]

Rule types which can be created:

  • VM-VM affinity: Keep VMs on the same host.
  • VM-VM anti-affinity: Keep VMs on separate hosts.
  • VM-Host affinity: Keep a given VM on a group of hosts.
  • VM-Host anti-affinity: Keep a given VM out of a group of hosts.
  • Affinity and Anti-affinity rules are cross-cluster policies.
  • Users can specify MUST as well as SHOULD enforcement of DRS rules (see the sketch after this list).
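
To make the distinction between “Should” and “Must” enforcement concrete, below is a minimal, hypothetical Python sketch of how a placement engine could treat them: “must” rules as hard constraints that can never be violated, and “should” rules as preferences with a fall-back. This is purely illustrative and is not Nutanix’s implementation; the PlacementRule class, host names and DAG VM names are invented for the example.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class PlacementRule:
    enforcement: str                       # "must" (hard) or "should" (soft)
    satisfied: Callable[[str, str], bool]  # (vm, host) -> does this placement honour the rule?

def candidate_hosts(vm: str, hosts: List[str], rules: List[PlacementRule]) -> List[str]:
    # Hard constraints first: hosts violating any "must" rule are excluded outright.
    allowed = [h for h in hosts
               if all(r.satisfied(vm, h) for r in rules if r.enforcement == "must")]
    # Soft constraints next: prefer hosts that also satisfy every "should" rule,
    # but fall back to any allowed host rather than failing the placement.
    preferred = [h for h in allowed
                 if all(r.satisfied(vm, h) for r in rules if r.enforcement == "should")]
    return preferred or allowed

# Example: keep dag1 away from dag2 ("must"), and keep dag1 on host-a or host-b ("should").
current_placement = {"dag2": "host-a"}
anti_affinity = PlacementRule("must", lambda vm, h: current_placement["dag2"] != h)
host_affinity = PlacementRule("should", lambda vm, h: h in {"host-a", "host-b"})
print(candidate_hosts("dag1", ["host-a", "host-b", "host-c"], [anti_affinity, host_affinity]))
# -> ['host-b']
```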

In addition to matching the capabilities of vSphere DRS, the Acropolis X-Fit functionality is tightly integrated with both the compute and storage layers, and works to proactively identify and resolve storage and compute contention by automatically moving virtual machines while ensuring data locality is optimised.
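
As a deliberately simplified illustration of what weighing compute contention, storage contention and data locality together might look like, here is a hypothetical Python sketch. The metrics, weights and node names are invented for the example and do not represent Nutanix’s actual X-Fit algorithm.

```python
from dataclasses import dataclass

@dataclass
class HostStats:
    name: str
    cpu_util: float             # 0.0 - 1.0, current CPU utilisation
    storage_util: float         # 0.0 - 1.0, storage/controller utilisation on the node
    local_data_fraction: float  # fraction of the VM's data already held locally on the node

def placement_score(h: HostStats) -> float:
    # Lower compute and storage contention and higher data locality all improve the score.
    # The weights are arbitrary; the point is that all three factors are considered together.
    return (1.0 - h.cpu_util) * 0.4 + (1.0 - h.storage_util) * 0.3 + h.local_data_fraction * 0.3

hosts = [
    HostStats("node-1", cpu_util=0.85, storage_util=0.70, local_data_fraction=0.90),
    HostStats("node-2", cpu_util=0.40, storage_util=0.35, local_data_fraction=0.20),
]
best = max(hosts, key=placement_score)
print(best.name)  # node-2: less contended overall, even though node-1 holds more local data
```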

[Image: AHVScheduling1]

There are many other exciting load-balancing capabilities to come, so stay tuned: the AHV scheduler has plenty more tricks up its sleeve.


How to successfully Virtualize MS Exchange – Part 4 – DRS

DRS is a well-known feature of vSphere which is designed to help load-balance virtual environments for optimal performance.

With most virtual workloads, DRS does an excellent job of load balancing, so leaving DRS set to “Fully Automated” without specifying any DRS rules is fine.

The “Migration Threshold” can be adjusted from Conservative to Aggressive in 5 increments, with the default being “3”, which I recommend.

For more information on this recommendation see: Example Architectural Decision – DRS Automation Level

These two settings are shown below:

[Image: NoSAN-ClusterDRSsettings]
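
For readers who prefer to script these settings, the same two values can also be applied via the vSphere API. Below is a minimal pyVmomi sketch, assuming a vCenter at vcenter.example.com, placeholder credentials and a cluster named “Exchange-Cluster” (all values are examples only); verify it in a lab before pointing it at production.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details - replace with your own.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

# Locate the cluster by name (placeholder name).
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "Exchange-Cluster")

# DRS set to Fully Automated with the default (middle) Migration Threshold of 3.
drs_config = vim.cluster.DrsConfigInfo(
    enabled=True,
    defaultVmBehavior="fullyAutomated",  # other values: "manual", "partiallyAutomated"
    vmotionRate=3)                       # 1-5; 3 is the default, middle setting

spec = vim.cluster.ConfigSpecEx(drsConfig=drs_config)
task = cluster.ReconfigureComputeResource_Task(spec, modify=True)
# Monitor 'task' for completion as appropriate in your environment.
Disconnect(si)
```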

However, with MS Exchange VMs, which are CPU and RAM intensive, it doesn’t make sense to have these VMs moved around automatically if it can be avoided. If Exchange MBX / MSR VMs are vMotioned, the process may take several minutes to complete, during which time, depending on the vMotion configuration and bandwidth, performance could be degraded. As a result, avoiding vMotion where possible reduces the risk to Exchange.

Note: I am not saying vMotion does not work, or cannot be configured to work very well for large VMs like MBX/MSR, but if vMotion can be avoided without adding significant complexity or operational cost to an environment, I try to avoid it except during planned maintenance activities.

However, I still recommend enabling DRS in “Fully Automated” mode; by combining it with DRS rules for the MBX / MSR VMs we can provide both higher and more consistent performance for MS Exchange.

To achieve this I recommend the following:

Create a “Host DRS Group” for each ESXi host in the cluster where Exchange VMs are expected to run, naming each group after its ESXi host so that it is easily identifiable.

[Image: NoSAN-DRS-HostDRSGroup]

Next, I recommend creating a “VM DRS Group” per Exchange Mailbox VM, naming each VM DRS Group after the Exchange MBX or MSR server, or using another easily identifiable name such as “Exchange DAG Node 1” as shown below.

[Image: NoSANRSGroup-ExchDAG1]
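
If you have several hosts and DAG nodes, creating these groups by hand becomes tedious, so they can also be created programmatically. The following pyVmomi sketch creates one Host DRS Group and one VM DRS Group; the vCenter address, credentials, cluster, host and VM names are placeholders for illustration only.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

def find_by_name(vimtype, name):
    """Return the first managed object of the given type with the given name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    return next(obj for obj in view.view if obj.name == name)

cluster = find_by_name(vim.ClusterComputeResource, "Exchange-Cluster")
host = find_by_name(vim.HostSystem, "esxi01.example.com")
vm = find_by_name(vim.VirtualMachine, "Exchange2013MBX01")

# One Host DRS Group named after the ESXi host, and one VM DRS Group per DAG node.
host_group = vim.cluster.HostGroup(name="esxi01.example.com", host=[host])
vm_group = vim.cluster.VmGroup(name="Exchange DAG Node 1", vm=[vm])

spec = vim.cluster.ConfigSpecEx(groupSpec=[
    vim.cluster.GroupSpec(info=host_group, operation="add"),
    vim.cluster.GroupSpec(info=vm_group, operation="add"),
])
cluster.ReconfigureComputeResource_Task(spec, modify=True)
Disconnect(si)
```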

Now that we have our “Host DRS Group/s” and “VM DRS Group/s” created, we set up a DRS “Virtual Machines to Hosts” rule per MBX/MSR VM and ESXi host with the policy “Should run on hosts in group” as shown below.

[Image: NoSANExch01ShouldRunHost1]

The above rule ensures the MSR or MBX VM runs only on the specified ESXi host unless there is an ESXi host failure, in which case it can automatically restart on another node within the cluster.

[Image: NoSAN-DRSRule-ShouldRunOnHostsInGroup]
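
The same “Should run on hosts in group” rule can be created programmatically as well. The pyVmomi sketch below (again using placeholder names, and assuming the groups created above already exist) builds a non-mandatory VM-to-Host rule; mandatory=False is exactly what makes it a “should” rather than a “must” rule, which is why HA can still restart the VM on another host after a failure.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "Exchange-Cluster")

# "Should run on hosts in group": mandatory=False gives the soft ("should") behaviour;
# mandatory=True would instead create a hard "Must run on hosts in group" rule.
rule = vim.cluster.VmHostRuleInfo(
    name="Exchange DAG Node 1 should run on esxi01",
    vmGroupName="Exchange DAG Node 1",          # VM DRS Group created earlier
    affineHostGroupName="esxi01.example.com",   # Host DRS Group created earlier
    mandatory=False,
    enabled=True)

spec = vim.cluster.ConfigSpecEx(
    rulesSpec=[vim.cluster.RuleSpec(info=rule, operation="add")])
cluster.ReconfigureComputeResource_Task(spec, modify=True)
Disconnect(si)
```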

The below screenshot shows an example of the recommended DRS rules in an environment with four MSR or MBX servers.

[Image: NoSANExchangeShouldRules]

The above rules will result in the MBX or MSR VMs running on separate hosts as shown below.

[Image: NoSAN_ExchangeVMs_OnePerHost]

Advantages of this DRS configuration:

1. Ensures no compute or network contention between the Exchange VMs
2. Ensures no storage-layer contention between Exchange VMs, such as contention for HBA or NIC queue depths. Note: This will not eliminate storage contention which may exist at the SAN/NAS layer.
3. DRS will not automatically move an MBX or MSR VM, meaning performance will not be impacted by vMotion activity
4. HA is still fully functional
5. vMotion can still be used if required. e.g.: Prior to host maintenance.
6. DRS will still automatically load balance VMs throughout the cluster to ensure optimal performance of all ESXi hosts
7. More efficient than simply using Anti-Affinity rules for MBX/MSR VMs
8. Ensures two or more DAG members will not be impacted in the event of a single ESXi host failure.

Recommendations for DRS:

1. Set DRS Automation level to “Fully Automated”
2. Set the DRS “Migration Threshold” to “3” (default)
3. Set up a “VM DRS Group” per Exchange Mailbox VM
4. Set up a “Host DRS Group” on a 1:1 basis with Exchange MSR or MBX VMs
5. Set up a DRS “Virtual Machines to Hosts” rule with the policy “Should run on hosts in group” on a 1:1 basis with Exchange MSR or MBX VMs & ESXi hosts
6. Disable Distributed Power Management (DPM) for hosts running Exchange MBX/MSR VMs.

Back to the Index of How to successfully Virtualize MS Exchange.