Example Architectural Decision – Enhanced vMotion Compatibility

Problem Statement

The virtual infrastructure is required to scale over time as demand for compute and/or availability increases.
When purchasing additional ESXi hosts over an expected ESXi host hardware life of >=3 years, it is unlikely that the exact make/model of server or CPU type will still be available. The solution needs to ensure full functionality across ESXi hosts (specifically vMotion) which may not be identical hardware, although all processors will always be from the same vendor.

How can the vSphere cluster/s be configured for maximum flexibility without significant impact to virtual machine performance?

Assumptions

1. All CPU types will be Intel or AMD but not a mix of the two
2. All CPUs will have a supported EVC mode

Motivation

1. Ensure full functionality between ESXi hosts whose Intel CPUs may not match exactly
2. Prevent having to purchase large volumes of identical hardware at one time
3. Allow vSphere clusters to be expanded over time using similar, but not identical, hardware while maintaining the same CPU vendor.

Architectural Decision

Enable EVC and maintain it at the maximum supported EVC level for all ESXi hosts in each vSphere cluster.
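
For those who prefer to script this, the below is a minimal sketch (not production code) of how the EVC baseline could be applied with pyVmomi. The vCenter address, credentials, cluster name and the "intel-merom" mode key are placeholders, and the EvcManager() / ConfigureEvcMode_Task() calls should be validated against the vSphere API version in use.

```python
# Minimal sketch (not production code): raise the EVC baseline on a cluster
# using pyVmomi. Hostname, credentials, cluster name and EVC mode key are
# placeholders for illustration only.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl._create_unverified_context()  # lab only; use proper certs in production
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=context)
content = si.RetrieveContent()

# Find the target cluster by name
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "Cluster-A")

# Retrieve the cluster's EVC manager and apply the chosen baseline.
# The mode key ("intel-merom" here) must be supported by every host in the cluster.
evc_manager = cluster.EvcManager()
task = evc_manager.ConfigureEvcMode_Task(evcModeKey="intel-merom")
print("EVC reconfiguration task started:", task.info.key)

Disconnect(si)
```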

Justification

1. vMotion is a requirement for the cluster/s to ensure maximum flexibility
2. It is essential to avoid downtime where possible. EVC ensures VMs can be vMotion'd to newer hosts, either to expand a cluster, or to move onto newer hardware so older hardware can be decommissioned without impact to the VMs.
3. The EVC level for the cluster can be increased without downtime
4. With EVC disabled, virtual machines being migrated to new hardware would require downtime where CPU types are not compatible
5. If EVC were not enabled, newer hardware may need to be placed into new (smaller) cluster/s, which would add unnecessary HA overhead as well as reduce the efficiency of DRS

Implications

1. Where the EVC level for a cluster is increased, virtual machines will not leverage new CPU features unmasked by EVC until they are next powered off and on (a full power cycle, not just a guest OS reboot)
2. Where new hardware added to a cluster supports a higher EVC mode, a virtual machine whose workload could benefit from CPU features masked by the existing EVC mode may not perform at its optimal level until the older hardware is removed from the cluster and the EVC mode is increased.

Alternatives

1. Leave EVC disabled and, where CPU types are not compatible for vMotion, shut down the guest OS for migrations.

VMware Clusters – Scale up or out?

I get asked this question all the time: is it better to scale up or out?

The answer is of course, it depends. 🙂

First, let's define the two terms. Put simply:

Scale Up is having larger hosts, and fewer of them.

Scale Out is having more, smaller hosts.

What are the Pros and Cons of each?

Scale Up 

* PRO – More RAM per host will likely achieve higher transparent page sharing (higher consolidation ratio!)

* PRO – Greater CPU scheduling flexibility as more physical cores are available (less chance for CPU contention!)

* PRO – Ability to support larger VMs (ie: The 32vCPU monster VM w/ 1TB RAM)

* PRO – Larger NUMA node sizes for better memory performance. Note: For those of you not familiar with NUMA, I recommend you check out Sizing VMs and NUMA nodes | frankdenneman.nl

* PRO – Uses fewer ports in the Data and Storage networks

* PRO – Less complex DRS simulations to run (these occur every 5 mins)

* CON – Potential for Network or I/O bottlenecks due to larger number of VMs per host

* CON – When a host fails, a larger number of VMs are impacted and have to be restarted on the surviving hosts

* CON – Fewer hosts per cluster leads to a higher HA overhead or “waste”

* CON – Fewer hosts for DRS to effectively load balance VMs across

Scale Out

* CON – Less RAM per host will likely achieve lower transparent page sharing (thus reducing overcommitment)

* CON – Fewer physical cores may impact CPU scheduling (which may lead to contention – CPU Ready)

* CON – Unable to support larger VMs (ie: 8vCPU VMs or the 32vCPU monster VM w/ 1TB RAM)

* CON – Uses more ports in the Data and Storage networks – ie: Cost!

* PRO – Less likely to hit network or I/O bottlenecks due to the smaller number of VMs per host

* PRO – When a host fails, a smaller number of VMs are impacted and have to be restarted on the surviving hosts

* PRO – More hosts per cluster may lead to a lower HA overhead or “waste”

* PRO – Greater flexibility for DRS to load balance VMs

Overall, both Scale out and up have their advantages so how do you choose?

When doing your initial capacity planning exercise, determine how many VMs you will have day 1 (and their vCPU/RAM/Disk/Network/IOPS requirements) and try to start with a cluster size which gives you the minimum HA overhead.

Example: If you have 2 large hosts with heaps of CPU / RAM your HA overhead is 50%, if you have 8 smaller hosts your overhead is 12.5% (both with N+1).
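
To illustrate that point, here is a quick sanity check in plain Python (illustrative numbers only) showing how the N+1 overhead shrinks as the host count grows.

```python
# Illustrative only: N+1 HA overhead as a percentage of cluster capacity
# for different cluster sizes.
def ha_overhead_percent(hosts, reserved_hosts=1):
    """Capacity reserved for HA, as a percentage of total cluster capacity."""
    return 100.0 * reserved_hosts / hosts

for hosts in (2, 4, 8, 16, 24):
    print(f"{hosts} hosts, N+1 -> {ha_overhead_percent(hosts):.1f}% reserved for HA")

# 2 hosts, N+1 -> 50.0% reserved for HA
# 8 hosts, N+1 -> 12.5% reserved for HA
```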

As a general rule, I believe the ideal cluster would be large 4-way hosts with a bucket load of RAM, and around 16-24 hosts. This would, in my opinion, be the best of both worlds. Sadly, few environments meet the requirements (or have the budget) for this type of cluster.

I believe a cluster should ideally start with enough hosts to minimize the initial HA overhead (say <25%) and ensure DRS can load balance effectively, then scale up (eg: add RAM) to cater for additional VMs. If more compute power is required in future, scale out, and then scale up (add RAM) further. I would generally suggest not designing to the 32 node maximum, so up to 24 node clusters.

From an HA perspective, I feel that in a 32 node cluster, 4 hosts' worth of compute should be reserved for HA, or 1 in 8 (a 12.5% HA reservation). This is similar to the RAID-DP concept from NetApp of 14+2 disks in a RAID group.

Tip: Choose hardware which can be upgraded (scaled up). Avoid designing a cluster with hosts whose hardware specs are maxed out on day 1.

There are exceptions to this, such as Management clusters, which may only have (and need) 2 or 3 hosts over their life span (eg: for environments where vCloud Director is used), or environments with static or predictable workloads.

To achieve the above, the chosen hardware needs to be upgradable. ie: If a server's maximum RAM is 1TB, you may consider only half populating it (being careful to choose DIMMs that allow you to expand) to enable you to scale up as the environment's compute requirements grow.

Tip: Know your workloads! Use tools like Capacity Planner so you understand what you're designing for.

It is very important to consider the larger VMs, and ensure the hardware you select has a suitable number of physical cores.

Example: Don’t expect 2 x 8vCPU VMs (highly utilized) to run well together on a 2 way 4 core host.

When designing a new cluster or scaling an existing one, be sure to consider the CPU to RAM ratio, so that you don't end up with a cluster with heaps of available CPU and maxed out memory, or vice versa. This is a common mistake I see.

Note: Typically in environments I have seen over many years, Memory is almost always the bottleneck.

The following is an example where a Scale Out and a Scale Up approach end up with very similar compute power in their respective clusters, but would likely have very different performance characteristics and consolidation ratios.

Example Scenario: A customer has 200 VMs day one, and let's say the average VM size is 1vCPU / 4GB RAM, but they also have 4 highly utilized 8vCPU / 64GB RAM VMs running database workloads.

The expected consolidation ratio is 2.5:1 vCPUs to physical cores and 1.5:1 vRAM to physical RAM.

The customer expects to increase the number of VMs by 25% per year, for the next 3 years.

So our total compute required is:

Day one: 92.8 CPU cores and 704GB RAM.

End of Year 1: 116 CPU cores and 880GB RAM.

End of Year 2: 145 CPU cores and 1100GB RAM.

End of Year 3: 181 CPU cores and 1375GB RAM.
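
For those wanting to check my maths, the figures above can be reproduced from the stated assumptions with a few lines of Python (illustrative only).

```python
# Reproduce the example's compute requirements from the stated assumptions.
general_vms, db_vms = 200, 4
vcpu = general_vms * 1 + db_vms * 8      # 232 vCPUs day one
vram_gb = general_vms * 4 + db_vms * 64  # 1056GB vRAM day one

cpu_ratio, ram_ratio, growth = 2.5, 1.5, 1.25  # consolidation ratios and 25% annual growth

cores, ram = vcpu / cpu_ratio, vram_gb / ram_ratio
for year in range(4):
    label = "Day one" if year == 0 else f"End of Year {year}"
    print(f"{label}: {cores:.1f} cores, {ram:.0f}GB RAM")
    cores, ram = cores * growth, ram * growth

# Day one: 92.8 cores, 704GB RAM ... End of Year 3: 181.2 cores, 1375GB RAM
```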

The day 1 requirements could be achieved in a number of ways, see two examples below.

Option 1 (Scale Out) – Use 9 hosts with 2-way / 6-core / 96GB RAM, with an HA reservation of 12% (~N+1)

Total Cluster Resources = 108 Cores & 864GB RAM

Usable assuming N+1 HA = 96 cores & 768GB RAM

Option 2 (Scale Up) – Use 4 hosts with 4-way / 8-core / 256GB RAM, with an HA reservation of 25% (N+1)

Total Cluster Resources = 128 Cores & 1024GB RAM

Usable assuming N+1 HA = 96 cores & 768GB RAM
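
A quick sanity check of the day 1 usable numbers for both options (plain Python, assuming one host's worth of capacity is reserved for N+1):

```python
# Usable cluster capacity for the two day-one options, assuming N+1
# (one host's worth of CPU and RAM reserved for HA).
def usable(hosts, cores_per_host, ram_per_host_gb, reserved_hosts=1):
    n = hosts - reserved_hosts
    return n * cores_per_host, n * ram_per_host_gb

opt1 = usable(hosts=9, cores_per_host=2 * 6, ram_per_host_gb=96)   # Scale Out
opt2 = usable(hosts=4, cores_per_host=4 * 8, ram_per_host_gb=256)  # Scale Up
print("Option 1 usable:", opt1)  # (96 cores, 768GB RAM)
print("Option 2 usable:", opt2)  # (96 cores, 768GB RAM)
```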

Both Option 1 and Option 2 appear to meet the Day 1 compute requirements of the customer, right?

Well, yes, at a high level, both scale out and scale up appear to provide the required compute resources.

Now let's review how the clusters will scale to meet the End of Year 3 requirements; after all, we don't design just for day 1, do we. 🙂

End of Year 3 Requirements: 181 CPU cores and 1375GB RAM.

Option 1 (Scale Out) would require ~15 hosts (2RU per host) based on CPU and ~15 hosts based on RAM, plus HA capacity of ~12% (N+2, as the cluster is >8 hosts), taking the total to 18 hosts.

Total Cluster Resources = 216 Cores & 1728GB RAM

Usable assuming N+2 HA = 190 cores & 1520GB RAM

Note: At between 16 and 24 hosts, N+3 should be considered (this equates to 1 spare host of compute per 8 hosts).

Option 2 (Scale Up) would require ~6 hosts (4RU per host) based on CPU and ~6 hosts based on RAM, plus HA capacity of ~15% (N+1, as the cluster is <8 hosts), taking the total to 7 hosts.

Total Cluster Resources = 224 Cores & 1792GB RAM

Usable assuming N+1 HA = 190 cores & 1523GB RAM
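
Again, the end of Year 3 usable figures can be sanity checked using the percentage based HA reservations quoted above (plain Python, illustrative only):

```python
# End of Year 3 usable capacity, applying the quoted HA reservation percentages
# to total cluster resources (percentage-based admission control).
def usable_pct(hosts, cores_per_host, ram_per_host_gb, ha_percent):
    factor = 1 - ha_percent / 100.0
    total_cores, total_ram = hosts * cores_per_host, hosts * ram_per_host_gb
    return total_cores * factor, total_ram * factor

print(usable_pct(18, 12, 96, 12))   # Option 1: ~190 cores, ~1520GB RAM
print(usable_pct(7, 32, 256, 15))   # Option 2: ~190 cores, ~1523GB RAM
```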

So, on the raw compute numbers, we have two viable options which scale from Day 1 to the end of Year 3 and meet the customer's compute requirements.

Which option would I choose, I hear you asking? Good question.

I think I could easily defend either option, but I believe Option 2 would be more economically viable and result in better performance. Below are a few reasons for my conclusion.

* Option 2 would give significantly more transparent page sharing compared to Option 1, therefore achieving a higher consolidation ratio.

* Option 2 would likely be much cheaper from a Network / Storage connectivity point of view (fewer connections)

* Option 2 is better suited to host the 4 x 8vCPU highly utilized VMs (as they fit within a NUMA node and will only use 1/4 of the host's cores, as opposed to 2/3 of the 2-way host's cores)

* The 4-way (32-core) host would provide better CPU scheduling due to the larger number of cores

* From a data center perspective, Option 2 would only use 28RU compared to 36RU

Note: A cluster of 7 hosts is not really ideal, but in my opinion it is large enough to get both HA and DRS efficiencies. The 18 node cluster (Option 1) is really in the sweet spot for cluster sizing, but its CPUs did not suit the 8vCPU workloads. Had Option 1 used 8-core processors, that would have made it more attractive.

Happy to hear everyone’s thoughts on the topic.

Common Mistake: Inefficient cluster sizes


In my day job, I regularly come across environments which are running poorly and have inefficient designs.

One of the most common issues I see is VMware environments which cannot power on VMs due to being out of compute resources, but not for the reasons you may expect.

While the environments may have less than optimal HA settings / policies, the most common issue I see is customers (for whatever reason) having multiple clusters with only a few nodes (ie: 2/3/4 etc).

Some of the time there are corporate policies which may require this type of setup, but a lot of the time you can comply with these policies while still optimizing the environment.

It seems that even with virtualisation having been commonplace for many years, the basics are still misunderstood by a significant percentage of industry professionals. I have heard comments even recently saying you need 2 node clusters for maximum HA efficiency. They couldn't be more wrong!

So, why are small clusters a potential problem?

Depending on what HA setting you choose (Host failures cluster tolerates, Percentage of cluster resources reserved for HA, or Failover Host/s), small clusters carry a large amount of “waste”.

What is “Waste”?

“Waste” is the amount of compute power within the cluster that cannot be used for running VMs, because it must be kept free to ensure that in an HA event, VMs can be restarted on the remaining hosts.

Now at this stage, let me point out that some “waste” is a good thing. We need to have some spare capacity for HA events, but the challenge is to minimize the waste without compromising HA.

So, in a recent environment I reviewed, there were 4 clusters using similar IBM x3850 servers.

Cluster 1: 2 Nodes

Cluster 2: 2 Nodes

Cluster 3: 3 Nodes

Cluster 4: 2 Nodes

In all clusters, HA was enabled (as it should be) and the HA admission control setting was “Percentage of Cluster resources reserved for HA” (which I prefer).

The 2 node clusters' HA reservation percentage was set to 50%, and the 3 node cluster's to 33%, which are the settings I would choose if I had to stick with the 4 cluster design.

Because the environment (in its current state) was unable to host any more VMs, the customer wanted to purchase another 2 new Hosts, and form a new cluster.

At this stage we have the equivalent of 4 hosts of “waste” within the environment, and with a new cluster we would have 5 hosts “wasted”.

Now, after a quick check of the VMware EVC KB 1003212, all CPUs are compatible with EVC and support the EVC mode “Intel® “Merom” Generation”.

So, we can form a single new cluster using the existing 9 hosts and maintain full cluster functionality by enabling EVC.

Let's assume the hosts are all in a single cluster and we're configuring HA. How do we ensure we have more available compute for the new virtual machines?

Simple: we enable HA (as you always should), enable admission control, and set the HA policy to “Percentage of Cluster resources reserved for HA”. But what percentage should we choose?

Well, it depends on what level of redundancy you require.

Generally, I recommend the following:

<8 hosts = N+1 – Note: If you require N+1 during maintenance, you need N+2

>8 hosts, <16 hosts = N+2

>16 hosts, <24 hosts = N+3

>24 hosts = N+4

The reason for the above is that as you add more hosts, the chance of a host failure, and of a subsequent host failure, increases. Therefore the more hosts you have, the more redundancy you need. It is a similar concept to RAID.
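
That rule of thumb (roughly one host's worth of failover capacity per 8 hosts) can be expressed as a small helper, purely for illustration:

```python
# Rule-of-thumb only: map cluster size to a suggested N+X level and the
# matching "Percentage of cluster resources reserved" value.
import math

def suggested_redundancy(hosts):
    reserved = max(1, math.ceil(hosts / 8))   # ~1 spare host per 8 hosts
    percent = 100.0 * reserved / hosts
    return reserved, round(percent, 1)

for hosts in (4, 9, 18, 32):
    n_plus, pct = suggested_redundancy(hosts)
    print(f"{hosts} hosts -> N+{n_plus}, reserve ~{pct}%")

# 9 hosts -> N+2, reserve ~22.2%   (the example below rounds this to 22%)
```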

So in this example, we’re right on the line in terms of N+1 or N+2.

Let's be conservative and choose N+2, therefore setting “Percentage of Cluster resources reserved for HA” to 22% (N+2 of 9 hosts is actually 22.2%, but we use round numbers).
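
For those who want to apply this programmatically, the below is a minimal pyVmomi sketch (not production code). The vCenter address, credentials and cluster name are placeholders, and the spec classes used (DasConfigInfo, FailoverResourcesAdmissionControlPolicy) should be validated against the vSphere API version in use.

```python
# Minimal sketch (not production code): set HA admission control to
# "Percentage of cluster resources reserved" (22% CPU and memory) with pyVmomi.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=context)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "Cluster-A")

# Percentage-based admission control policy: reserve 22% CPU and 22% memory.
policy = vim.cluster.FailoverResourcesAdmissionControlPolicy(
    cpuFailoverResourcesPercent=22,
    memoryFailoverResourcesPercent=22)

das_config = vim.cluster.DasConfigInfo(enabled=True,
                                       admissionControlEnabled=True,
                                       admissionControlPolicy=policy)

spec = vim.cluster.ConfigSpecEx(dasConfig=das_config)
task = cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)
print("Cluster reconfiguration task started:", task.info.key)

Disconnect(si)
```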

So what have we achieved?

The previous setup had only N+1 and an average HA overhead of 45.75% (50% + 50% + 50% + 33%, divided by 4).

The new 9 node cluster, now with N+2 redundancy, only has an overhead of 22%. That is a net gain of 23.75% of available compute resources without purchasing new hardware.
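
The before and after waste can be tallied like this (plain Python, illustrative only):

```python
# Compare HA "waste" before (four small clusters) and after (one 9-node cluster).
before = {"Cluster 1": 50, "Cluster 2": 50, "Cluster 3": 33, "Cluster 4": 50}  # % reserved
avg_before = sum(before.values()) / len(before)
after = 22  # single 9-node cluster, ~N+2

print(f"Average reservation before: {avg_before:.2f}%")          # 45.75%
print(f"Reservation after: {after}%")
print(f"Net gain in usable compute: {avg_before - after:.2f}%")   # 23.75%
```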

What else do we gain by having a single larger cluster:

1. Increased DRS flexibility

2. Increased redundancy (previously N+1, now N+2)

3. Less chance of contention

4. No need to purchase new hardware!!

The above is a simple example of how to increase efficiency within a VMware environment without purchasing new hardware.

Now, for those of you wanting to know more about HA/DRS, this has been covered in great detail in other blogs. I would recommend you first have a read of the following blog and get a copy of the “vSphere 5.0 Clustering Technical Deepdive” book.

Yellow Bricks (Duncan Epping) – HA Admission control Pros and Cons