Example Architectural Decision – HA Admission Control Policy with Software Licensing Constraints

High Availability Admission Control Setting & Policy with a Software Licensing Constraint

Problem Statement

The customer has a requirement to virtualize “Application X”, which is currently running on physical servers. The customer is licensed for a maximum of 32 cores, and the software vendor has strict licensing restrictions which do not recognize the use of DRS rules to restrict virtual machines to a subset of hosts within a cluster.

The application is Tier 1 and requires maximum availability. A capacity planner assessment has been conducted and found that 32 cores and 256GB RAM are sufficient to run all servers.

The servers' requirements vary greatly, from 1 vCPU / 2GB RAM to 8 vCPU / 64GB RAM, with the bulk of the VMs having 2 vCPUs or fewer and varying RAM sizes.

What is the most suitable hardware configuration and HA admission control policy/setting that complies with the licensing restrictions while ensuring N+1 redundancy and minimizing the chance of poor application performance?

Assumptions

1. None

Constraints

1. Software vendor has strict licensing requirements
2. Only 32 cores are licensed and the customer has no budget for further licenses
3. DRS rules cannot be used to isolate VMs onto one or more hosts due to software licensing agreement

Motivation

1. Ensure maximum availability for the Tier 1 application/s
2. Ensure optimal performance for Tier 1 application/s

Architectural Decision

Purchase a total of three (3) two-way servers, each with 8-core CPUs and 128GB RAM, and form a cluster of three nodes.

For the HA admission control setting, use “Enable – Do not power on virtual machines that violate availability constraints”.

For the HA admission control policy, use “Specify a Failover Host” and select the third host in the cluster (leaving two active hosts in the cluster).
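
As a rough sanity check, here is a minimal Python sketch (not part of the original decision; the figures are taken from the problem statement and the decision above, not from any VMware tooling) confirming that two active 2-way, 8-core, 128GB hosts stay within the 32-core licence while meeting the capacity planner figures:

# Minimal sanity-check sketch (Python) using the figures stated above.
CORES_PER_HOST = 2 * 8        # two sockets x 8 cores
RAM_GB_PER_HOST = 128
ACTIVE_HOSTS = 2              # the third host is the dedicated failover host

LICENSED_CORES = 32           # licensing constraint
REQUIRED_CORES = 32           # from the capacity planner assessment
REQUIRED_RAM_GB = 256         # from the capacity planner assessment

active_cores = ACTIVE_HOSTS * CORES_PER_HOST
active_ram_gb = ACTIVE_HOSTS * RAM_GB_PER_HOST

assert active_cores <= LICENSED_CORES     # only licensed cores are active
assert active_cores >= REQUIRED_CORES     # CPU requirement is met
assert active_ram_gb >= REQUIRED_RAM_GB   # RAM requirement is met
print(f"Active capacity: {active_cores} cores / {active_ram_gb}GB RAM on {ACTIVE_HOSTS} hosts")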

Justification

1. Enabling strict admission control is critical to ensure the required level of availability for the Tier 1 application
2. Ensure maximum CPU scheduling efficiency by having two hosts active within the cluster running virtual machines as opposed to a single large host
3. Having 2 active hosts in the cluster allows DRS some flexibility to load balance to resolve contention compared to using a single large 32 core host
4. N+1 redundancy is achieved, as one host can fail and the failover host will become active and take on the failed host's workloads without performance degradation
5. As only 32 cores (2 servers with 16 cores each) are active at any one time, the solution complies with the licensing constraint
6. Using CPUs with smaller numbers of cores (such as 5 x 2-way servers with 4 cores per socket) would result in larger VMs not fitting within NUMA nodes, potentially impacting memory performance, although with vNUMA in vSphere 5.0 this would be less of an issue
7. All VMs will fit within a NUMA node, giving the VMs maximum performance without requiring vNUMA, which is only available in vSphere 5.0 or later (see the sketch after this list)
8. The compute resource supplied by the proposed cluster is sufficient to run the workloads as per the capacity planner assessment.
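
The NUMA point in items 6 and 7 can be illustrated with a short Python sketch, assuming one NUMA node per socket on the proposed hosts (8 cores and 128GB / 2 = 64GB per node) and using the largest VM from the problem statement:

# Python sketch of the NUMA sizing argument (assumes one NUMA node per socket).
NUMA_NODE_CORES = 8
NUMA_NODE_RAM_GB = 128 // 2   # 64GB per node on the proposed hosts

largest_vm_vcpus = 8          # largest VM from the problem statement
largest_vm_ram_gb = 64

fits = largest_vm_vcpus <= NUMA_NODE_CORES and largest_vm_ram_gb <= NUMA_NODE_RAM_GB
print("Largest VM fits within a single NUMA node:", fits)                     # True
print("Same VM within a 4-core socket's NUMA node:", largest_vm_vcpus <= 4)   # False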

Implications

1. Additional networking and storage ports are required for three hosts, as opposed to a two-host cluster
2. If additional compute is required in the cluster, additional software licenses would need to be purchased. Alternatively, if the application servers were redesigned to use a scale-out methodology (especially for VMs with 4-8 vCPUs), it would likely result in higher overcommitment ratios without significant contention and better utilization of the existing licensed cores
3. One host sits as a hot standby, not servicing customer workloads, and may be considered “waste”

Alternatives

1. Use 2 x 4-way, 8-core ESXi hosts (32 cores per host) and set HA admission control to specify a failover host
2. Use 5 x 2-way, 4-core ESXi hosts (8 cores per host) and set HA admission control to specify a failover host

Below is a basic diagram of the proposed solution.

[Diagram: three-node cluster with the third host designated as the failover host]

*Post updated February 11th to correct an error.

Common Mistake: Inefficient cluster sizes


In my day job, I regularly come across environments which are running poorly and have inefficient designs.

One of the most common issues I see is VMware environments which cannot power on VMs due to being out of compute resources, but not for the reasons you may expect.

While these environments may have less than optimal HA settings/policies, the most common issue I see is customers (for whatever reason) having multiple clusters with only a few nodes each (i.e. 2, 3, or 4).

Some of the time there are corporate policies which may require this type of setup, but a lot of the time you can comply with these policies while still optimizing the environment.

It seems that even with virtualisation having been commonplace for many years, the basics are still misunderstood by a significant percentage of industry professionals. I have heard comments even recently saying you need two-node clusters for maximum HA efficiency. They couldn't be more wrong!

So, why are small clusters a potential problem?

Depending on which HA admission control policy you choose (Host failures cluster tolerates, Percentage of cluster resources reserved for HA, or Failover Host/s), small clusters carry a large amount of “waste”.

What is “Waste”?

“Waste” is the amount of compute power within the cluster that cannot be used to run workloads, because it must be kept free so that in an HA event VMs can be restarted on the remaining hosts.

Now, at this stage, let me point out that some “waste” is a good thing. We need some spare capacity for HA events, but the challenge is to minimize the waste without compromising HA.

So, in a recent environment I reviewed, there were four clusters using similar IBM x3850 servers.

Cluster 1: 2 nodes

Cluster 2: 2 nodes

Cluster 3: 3 nodes

Cluster 4: 2 nodes

In all clusters, HA was enabled (as it should be) and the HA admission control policy was “Percentage of cluster resources reserved for HA” (which I prefer).

The two-node clusters' HA reservation percentage was set to 50%, and the three-node cluster's to 33%, which are the settings I would choose if I had to stick with the four-cluster design.

Because the environment (in its current state) was unable to host any more VMs, the customer wanted to purchase two new hosts and form a new cluster.

At this stage we have the equivalent of four hosts of “waste” within the environment, and with a new two-node cluster we would have five hosts “wasted”.
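
As a rough Python sketch of that estimate (my own illustration, assuming all hosts are of similar size so each cluster's HA reservation percentage translates directly into whole hosts of waste):

# Python sketch of the waste estimate across the four existing clusters.
clusters = [
    {"hosts": 2, "reserved_pct": 50},
    {"hosts": 2, "reserved_pct": 50},
    {"hosts": 3, "reserved_pct": 33},
    {"hosts": 2, "reserved_pct": 50},
]

wasted_hosts = sum(c["hosts"] * c["reserved_pct"] / 100 for c in clusters)
print(round(wasted_hosts, 1))             # ~4.0 hosts of waste today
print(round(wasted_hosts + 2 * 0.50, 1))  # ~5.0 hosts if a new 2-node cluster is added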

Now, after a quick check of the VMware EVC KB 1003212, all CPUs are compatible with EVC and support the “Intel® Merom Generation” EVC mode.

So, we can form a single new cluster using the existing 9 hosts and maintain full cluster functionality by enabling EVC.

Let's assume the hosts are all in a single cluster and we're configuring HA. How do we ensure we have more available compute for the new virtual machines?

Simple: we enable HA (as you always should), enable admission control, and set the HA policy to “Percentage of cluster resources reserved for HA”. But what percentage should we choose?

Well, it depends on what level of redundancy you require.

Generally, I recommend:

<8 hosts = N+1 (Note: if you require N+1 during maintenance, you need N+2)

>8 but <16 hosts = N+2

>16 but <24 hosts = N+3

>24 hosts = N+4

The reason for the above is that as you add more hosts, the chance of a host failure (and of a subsequent host failure) increases. Therefore, the more hosts you have, the more redundancy you need, a similar concept to RAID.

So in this example, we’re right on the line in terms of N+1 or N+2.

Let's be conservative and choose N+2, therefore setting “Percentage of cluster resources reserved for HA” to 22% (N+2 across nine hosts is actually 2/9, roughly 22.2%, but we use round numbers).
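
For clusters of similar hosts, the percentage is simply the number of host failures to tolerate divided by the number of hosts; here is a minimal Python sketch of that arithmetic (my own illustration, not a VMware formula for mixed-size hosts):

# Python sketch of the arithmetic behind the 22% figure, assuming similar-sized hosts.
def reserved_percentage(hosts: int, host_failures_to_tolerate: int) -> float:
    return 100.0 * host_failures_to_tolerate / hosts

print(reserved_percentage(9, 2))  # ~22.2 -> rounded to 22% in the policy
print(reserved_percentage(9, 1))  # ~11.1 if only N+1 were required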

So what have we achieved?

The previous setup had only N+1 redundancy and an average HA overhead of 45.75% (50% + 50% + 50% + 33%, divided by 4).

The new nine-node cluster now has N+2 redundancy and an overhead of only 22%: a net gain of 23.75% of available compute resources without purchasing new hardware.
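
A quick Python sketch (again my own illustration) reproducing the comparison above:

# Python sketch reproducing the old-versus-new overhead comparison.
old_overheads = [50, 50, 50, 33]                  # per-cluster HA reservation %
old_avg = sum(old_overheads) / len(old_overheads) # 45.75
new_overhead = 22                                 # single 9-node cluster at N+2

print(old_avg - new_overhead)                     # 23.75 percentage points regained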

What else do we gain by having a single larger cluster:

1. Increased DRS flexibility

2. Increased redundancy (previously N+1, now N+2)

3. Less chance of contention

4. No need to purchase new hardware!!

The above is a simple example of how to increase efficiency within a VMware environment without purchasing new hardware.

Now, for those of you wanting to know more about HA/DRS, this has been covered in great detail in other blogs. I would recommend you first have a read of the following blog and get a copy of the “vSphere 5.0 Clustering Technical Deepdive” book.

Yellow Bricks (Duncan Epping) – HA Admission control Pros and Cons