How to successfully Virtualize MS Exchange – Part 3 – Memory

In Part 1 and Part 2, we discussed how to size and configure Exchange VMs to meet CPU requirements. In Part 3 we will focus on virtual memory (vRAM).

Exchange 2013 is quite RAM intensive, and it is not unusual to see memory requirements of >128GB in larger deployments. As such, one of the first things we should consider is Virtual Machine maximums.

Luckily in recent years the maximum VM size in vSphere has increased and is no longer a constraint for virtualizing even the largest of Exchange environments.

The current maximum vRAM configuration per VM is shown below:

vSphere Virtual Machine RAM Maximums

Maximum vRAM per VM: 1TB (vSphere 5.0 or later)
Maximum vRAM per VM: 255GB (vSphere 4.1)

The key point here is that memory is in no way a constraining factor when virtualizing Exchange, even in older vSphere 4.1 deployments.

Memory Sizing

For best memory performance, size the Exchange VM to fit within a single NUMA node. This gives the maximum benefit from NUMA locality, meaning the latency between CPU and RAM is minimized.

In the event the memory requirements exceed the NUMA node, consider scaling out until you have at least four Exchange VMs (across four ESXi hosts) before scaling Exchange VMs up; a quick way to check whether a VM fits within a NUMA node is sketched below. Scaling out first ensures higher resiliency and aligns with a virtualization-friendly scale-out approach. Once the environment has four or more Exchange VMs, scaling up beyond the size of a NUMA node can be a good option to reduce the number of Exchange instances to manage and license without significantly impacting resiliency.
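
To illustrate, here is a minimal Python sketch of that fit check. The host RAM, socket count and vRAM figures below are hypothetical; adjust them for your own hardware:

```python
# Back-of-the-envelope NUMA fit check. A NUMA node's local memory is roughly
# total host RAM divided by the number of CPU sockets (note the hypervisor
# itself also consumes some memory, so leave a little headroom in practice).
host_ram_gb = 256        # physical RAM per ESXi host (assumption)
cpu_sockets = 2          # physical CPU sockets per host (assumption)
exchange_vram_gb = 96    # vRAM assigned to the Exchange VM (assumption)

numa_node_ram_gb = host_ram_gb / cpu_sockets  # 128GB per node in this example

if exchange_vram_gb <= numa_node_ram_gb:
    print(f"{exchange_vram_gb}GB VM fits within a {numa_node_ram_gb:.0f}GB "
          "NUMA node: full NUMA locality")
else:
    print(f"{exchange_vram_gb}GB VM exceeds the {numa_node_ram_gb:.0f}GB "
          "NUMA node: consider scaling out before scaling up")
```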

Memory Overcommitment

ESXi has excellent memory overcommitment capabilities which can work very well depending on the operating system and application running within the guest. However, Exchange is generally considered a Business Critical Application, so overcommitting memory for Exchange is generally not a good idea and should be avoided where possible.

Memory Reservations

For Exchange VMs, I recommend configuring the VM with “All Memory Locked”, or in other words, a 100% memory reservation.

This has two advantages. The first is consistent memory performance for MS Exchange, which is critical to ensure a great end user experience.

The second is the potentially large storage saving, as the vSwap file is eliminated. For example, if an Exchange VM has 128GB RAM and no memory reservation, a 128GB vSwap file will be created by default in the same datastore as the VM's .vmx file, which could impact storage sizing and performance.
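
For reference, below is a minimal pyVmomi (Python) sketch of how the “All Memory Locked” setting can be applied programmatically; in the vSphere API this checkbox corresponds to the memoryReservationLockedToMax flag. The vCenter address, credentials and VM name are hypothetical, and error handling and task-wait logic are omitted for brevity:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Connect to vCenter (lab only: certificate validation disabled here).
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com",            # hypothetical vCenter
                  user="administrator@vsphere.local",
                  pwd="password",
                  sslContext=ctx)

# Locate the Exchange VM by name using a container view.
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "EXCH01")     # hypothetical VM name

# memoryReservationLockedToMax maps to the vSphere UI checkbox
# "Reserve all guest memory (All locked)", i.e. a 100% memory reservation.
spec = vim.vm.ConfigSpec(memoryReservationLockedToMax=True)
vm.ReconfigVM_Task(spec=spec)

Disconnect(si)
```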

ESXi Host / Cluster Sizing Considerations

Exchange VMs are typically larger than the average VM, and as a result they can consume a significant percentage of an ESXi host's memory resources. It is therefore important to size your ESXi hosts with sufficient RAM for the Exchange VMs.

As such, in cases where the Exchange VM is sized to exceed the NUMA node, I recommend sizing ESXi hosts with at least 25% more physical RAM than the vRAM assigned to your Exchange VMs.

Example: If your Exchange VM is assigned 96GB, the ESXi hosts in the cluster should have at least 128GB (25% headroom over 96GB is 120GB, and 128GB is the nearest common memory configuration). This leaves memory for the hypervisor and for other smaller VMs, such as the Domain Controllers which service global catalog requests for Exchange, without contention.
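
The same calculation as a minimal Python sketch; the 96GB figure matches the example above, and the list of common host memory configurations is an assumption:

```python
# Host RAM sizing with 25% headroom over the largest Exchange VM.
largest_exchange_vram_gb = 96
min_host_ram_gb = largest_exchange_vram_gb * 1.25   # 120GB minimum

# Round up to the next common host memory configuration (assumed list).
common_configs_gb = [64, 96, 128, 192, 256, 384, 512, 768, 1024]
host_ram_gb = next(c for c in common_configs_gb if c >= min_host_ram_gb)

print(f"Minimum host RAM: {min_host_ram_gb:.0f}GB, "
      f"suggested configuration: {host_ram_gb}GB")
# -> Minimum host RAM: 120GB, suggested configuration: 128GB
```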

Recommendations:

1. Set “All Memory Locked” (100% Memory Reservation) for Exchange VMs.
2. Where possible, size the Exchange VM's RAM within a NUMA node.
3. Where Exchange RAM requirements exceed that of the NUMA node, ensure ESXi hosts are sized with at least 25% more RAM than the Exchange VM (or the largest vRAM VM in the cluster).
4. Ensure each VM's vRAM is right-sized after deployment to minimize waste (especially considering the recommendation to use memory reservations).

Back to the Index of How to successfully Virtualize MS Exchange.

Free Training – Virtualizing Business Critical Applications (vBCA)

Just found another great free self-paced training course offered by VMware. This one is focused on one of my favourite topics, Virtualizing Business Critical Applications.

The course covers things like which business critical applications can be virtualized efficiently, as well as common customer objections, some of which are FUD or fiction.

It also covers use cases, best practices and value propositions for virtualizing each business-critical application.

One important area the course covers (which can be hard to find reliable information on) is the licensing requirements for applications such as Oracle databases, SAP and the Microsoft suite, e.g. SQL / Exchange / SharePoint.

Kudos to VMware for releasing this training free of charge. The link to access the course is below.

Virtualizing Business-Critical Applications [V5.X] – Customer

Transparent Page Sharing (TPS) Example Architectural Decisions Register

The following is a register of all Example Architectural Decisions related to Transparent Page Sharing on VMware ESXi, following the announcement from VMware that TPS will be disabled by default in future patches and versions.

See The Impact of Transparent Page Sharing (TPS) being disabled by default for more information.

The goal of this series is to give the pros and cons of multiple options for the configuration of TPS across a wide range of virtual workloads, from VDI to Server, Business Critical Apps, Test/Dev and QA/Pre-Production.

Business Critical Applications (vBCA):

1. Transparent Page Sharing (TPS) Configuration for Virtualized Business Critical Applications (vBCA)

Mixed Server Workloads:

1. Transparent Page Sharing (TPS) Configuration for Production Servers (1 of 2)

2. Transparent Page Sharing (TPS) Configuration for Production Servers (2 of 2) – Coming Soon!

Virtual Desktop (VDI) Environments:

1. Transparent Page Sharing (TPS) Configuration for VDI (1 of 2)

2. Transparent Page Sharing (TPS) Configuration for VDI (2 of 2)

Testing & Development:

1. Transparent Page Sharing (TPS) Configuration for Test/Dev Servers (1 of 2) – Coming Soon!

2. Transparent Page Sharing (TPS) Configuration for Test/Dev Servers (2 of 2) – Coming Soon!

QA / Pre-Production:

1. Transparent Page Sharing (TPS) Configuration for QA / Pre-Production Servers

Related Articles:

1. Example Architectural Decision Register

2. The Impact of Transparent Page Sharing (TPS) being disabled by default – @josh_odgers (VCDX#90)

3. Future direction of disabling TPS by default and its impact on capacity planning – @FrankDenneman (VCDX #29)

4. Transparent Page Sharing Vulnerable, Yet Largely Irrelevant – @ChrisWahl (VCDX#104)