How to successfully Virtualize MS Exchange – Part 1 – CPU Sizing

Part 1 will focus on CPU sizing for Exchange Mailbox (MBX) or Multi Server Role (MSR) deployments.

The Exchange 2013 Server Role Requirements Calculator v6.6 should be used to size the VM to ensure sufficient performance for the specific Exchange deployment.

One key input for the calculator is the “SPECint2006 Rate Value”, which can be found on the “Input” tab of the calculator (shown below).

[Image: SPECint2006 Rate Value field on the “Input” tab of the calculator]

To find the SPECint2006 Rate Value for your specific CPU, I recommend using the Exchange Processor Query Tool, which allows you to enter the processor model number of your servers and query the Spec.org database for the rating of your CPU.

Note: This tool is applicable to Exchange 2010 and 2013 deployments despite the tool being titled “Exchange 2010 Processor Query Tool”.

To do this, enter the model number of your CPU (for example, the E5-2697 v2 shown below) and press Query.

[Image: Exchange Processor Query Tool with the CPU model number entered]

The tool will then return the list of tested servers on the right-hand side of the spreadsheet; an example of this is shown below.

[Image: list of tested servers returned by the query tool]

The SPECint2006 result for your CPU is highlighted in orange in the “Result” column.

At this stage, the drop-down box in Step 4 allows you to choose the number of physical cores planned to be used, and the tool will then return the average result of all tested servers.

[Image: Step 4, averaged SPECint2006 result for the selected number of physical cores]

The above result, for example, assumes a dual-socket physical server with 12-core Intel E5-2697 v2 processors.

As we are discussing Virtualizing Exchange, Step 7 is applicable.

Here the tool allows you to enter the overcommitment ratio (vCPU to physical core) and the number of vCPUs (called virtual processors in the spreadsheet), which then produces what the spreadsheet calls the “Virtual mailbox server SPECint2006 Rate Value”, shown below in orange.

[Image: Step 7, Virtual mailbox server SPECint2006 Rate Value]

The calculator makes the assumption that CPU overcommitment of 2:1 degrades performance by 50%, which is not strictly true, but it can be used as general guidance that high levels of CPU overcommitment, which may lead to CPU contention, are not recommended for MS Exchange deployments. It is important to note that CPU overcommitment ≠ CPU contention, although the higher the overcommitment, the higher the possibility of contention (CPU Ready).
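
To make the adjustment transparent, below is a minimal Python sketch (not part of the original tools) of how the “Virtual mailbox server SPECint2006 Rate Value” appears to be derived, assuming the tool scales a per-core value linearly by vCPU count and divides by the overcommitment ratio (consistent with the 2:1 = 50% assumption above):

def virtual_specint_rate(host_rating, physical_cores, vcpus, overcommit_ratio=1.0):
    # Estimate the "Virtual mailbox server SPECint2006 Rate Value"
    per_core = host_rating / physical_cores     # rating contributed by each physical core
    return per_core * vcpus / overcommit_ratio  # higher overcommitment lowers the usable rating

# Intel E5-2697 v2 example: 12 physical cores rated at 479 (as used later in this post)
print(virtual_specint_rate(479, 12, 12, overcommit_ratio=1.0))  # ~479 at 1:1
print(virtual_specint_rate(479, 12, 12, overcommit_ratio=2.0))  # ~239.5 at 2:1 (50% degradation)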

Now that we have the SPECint2006 Rate Value, this can be entered into the Primary and Secondary (if applicable) fields of the Exchange 2013 Server Role Requirements Calculator (shown below).

[Image: Primary and Secondary SPECint2006 Rate Value fields in the calculator]

The SPECint2006 value is for the physical processor; if the processor supports Hyper-threading (HT), the rating includes the performance benefit of HT. The key point here is that sizing using only physical cores means your VM will not get the full performance of the SPECint2006 rating; it will be slightly less. This will be discussed in more detail in Part 2 – vCPU configurations.

The “Processor Cores / Server” field should be populated with the number of physical cores intended to be used.

While the “Processor Cores / Server” value does not impact the CPU utilization calculations, entering it allows the calculator to report the number of Processor Cores Utilized, as shown below from the “Role Requirements” tab.

[Image: Server configuration section of the “Role Requirements” tab]

The number of cores utilized helps calculate the number of vCPUs required for the Exchange VM. If the “Server CPU Utilization” is much lower than 80% (the recommended maximum), the “SPECint2006 Rate Value” and “Processor Cores / Server” can be reduced.

Example: If the calculator reports Server CPU Utilization at 40% and the CPU type is an Intel E5-2697 v2 with 12 physical cores and a SPECint2006 rating of 479, the virtual machine should be sized with 6 vCPUs. To confirm this, the SPECint2006 rating for the Primary and Secondary (if applicable) fields of the Exchange 2013 Server Role Requirements Calculator can be reduced by 50% (from 479 to 239.5), which will result in the calculator reporting Server CPU Utilization at 80%.
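
As a quick sanity check of the above example, here is a small Python sketch (illustrative only), assuming reported utilization scales linearly with the SPECint2006 rate value entered into the calculator:

import math

def scaled_utilization(current_util, current_rating, new_rating):
    # Utilization rises proportionally as the entered rating is reduced
    return current_util * (current_rating / new_rating)

def required_vcpus(physical_cores, reported_util, target_util=80.0):
    # Round up so the VM stays at or below the target utilization
    return math.ceil(physical_cores * reported_util / target_util)

print(scaled_utilization(40.0, 479.0, 239.5))  # 80.0 -> the 80% recommended maximum
print(required_vcpus(12, 40.0))                # 6 -> matches the 6 vCPU sizing above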

Another option is to review the number of Mailbox Servers configured and, where utilization is low (40% in the previous example), choose to “scale up” each of the Exchange VMs. To do this, change the highlighted field on the “Input” tab of the calculator to 4, and you will see the utilization under “Server configuration” (on the “Role Requirements” tab) increase to 80%.

[Image: number of Mailbox Servers per DAG field on the “Input” tab]

Scaling up reduces the number of Windows/Exchange instances, licenses, and the ongoing maintenance (such as patching) required, but it also increases the failure domain and the impact of a failure, so this needs to be not only an architectural/technical decision but a business decision as well.

As a general rule, I recommend customers scale out until they have 4 or more Exchange VMs (across 4 or more ESXi hosts), then scale up (and out) as required. This ensures the impact of a server failure is 25%, compared to 50% for a scaled-up Exchange deployment with only 2 VMs.
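
The arithmetic behind this recommendation is straightforward; the short Python illustration below assumes equally sized Exchange VMs, one per ESXi host:

# With N equally sized Exchange VMs, losing one host removes 1/N of the capacity
for n in (2, 4, 8):
    print(f"{n} Exchange VMs -> {100 / n:.1f}% impact from a single server failure")
# 2 VMs -> 50.0%, 4 VMs -> 25.0%, 8 VMs -> 12.5%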

An important consideration for any business critical application deployment is the scalability of the solution. In this case, when discussing virtualizing one or more Exchange servers, the Virtual Machine maximums are critical.

The below shows the maximum vCPUs supported for a VMware-based virtual machine.

vSphere Virtual Machine CPU Maximums

Maximum vCPUs: 64 (vSphere 5.1 or later)
Maximum vCPUs: 32 (vSphere 5.0)
Maximum vCPUs: 8 (vSphere 4.1)

The above numbers are also dependent on the physical hardware chosen.
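
For example, a VM cannot be assigned more vCPUs than the host has logical processors. A minimal Python sketch (version ceilings taken from the list above; the host example is hypothetical):

# A VM cannot be assigned more vCPUs than the host exposes logical CPUs
VSPHERE_MAX_VCPUS = {"4.1": 8, "5.0": 32, "5.1+": 64}

def effective_max_vcpus(vsphere_version, host_logical_cpus):
    return min(VSPHERE_MAX_VCPUS[vsphere_version], host_logical_cpus)

# Dual-socket E5-2697 v2 host (24 cores, 48 logical CPUs with HT) on vSphere 5.1 or later:
print(effective_max_vcpus("5.1+", 48))  # 48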

Recommendations for CPU sizing:

1. Keep CPU overcommitment below 2:1, and ideally at 1:1, for hosts servicing Exchange workloads. This will be discussed further in this series.

Use the vSphere Cluster Sizing Calculator to confirm overcommitment ratios for your cluster or to validate your design.

2. Size Exchange Server VMs to less than 80% CPU utilization.

This allows for burst activity such as increased load or DAG failovers.

3. Scale up a single Exchange VM per ESXi host as opposed to running multiple smaller Exchange VMs per host.

4. Do not oversize Exchange VMs on Day 1; size for Day 1 demand and scale vCPUs as required (which can be done quickly and easily thanks to the virtual layer).
5. CPU reservations do not solve CPU scheduling contention (a.k.a. CPU Ready) and should not be required in properly sized environments; see the sketch following this list for converting the CPU Ready counter into a percentage.
6. Size Exchange VMs using physical cores and assume no benefit from HT.
7. Leave HT turned on at the ESXi layer.
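
As a companion to recommendations 1 and 5, the small Python helper below converts the vCenter “CPU Ready” summation counter (reported in milliseconds per sample interval) into a percentage, using the commonly referenced formula from VMware KB 2002181; the 20 second interval applies to real-time charts, and the per-vCPU division is an assumption for multi-vCPU VMs:

def cpu_ready_percent(ready_ms, interval_s=20, vcpus=1):
    # (ready time / total sample time) * 100, optionally averaged per vCPU
    return (ready_ms / (interval_s * 1000.0)) * 100.0 / vcpus

# Example: a 4 vCPU Exchange VM reporting 2000 ms of ready time in a 20 s real-time sample
print(f"{cpu_ready_percent(2000, vcpus=4):.1f}% CPU Ready per vCPU")  # 2.5%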

Back to the Index of How to successfully Virtualize MS Exchange.

Virtual machines lose network connectivity after HA failover (ESXi 5.5)

If you’re running ESXi 5.5 prior to Release 2068190 (Update 2), either because you have not upgraded from the GA release or because you have avoided upgrading due to the NFS issue in Update 1, and you are running on a vSphere Distributed Switch (VDS), then read on:

When configured with Static Port Binding, which is the default and recommended port binding setting (see VMware KB 1022312), a problem has been discovered which, after a HA event, prevents VMs from connecting to the network; the VM starts up with its vmNIC in a “Disconnected” state.

You may receive the following error when trying to reconnect the vmNIC.

[Image: error shown when attempting to reconnect the vmNIC]

If you are having this problem, you can connect the VMs to a dvPortGroup which uses Ephemeral Binding to get the environment online. This could be used for VMs such as Domain Controllers and vCenter (and its dependencies); once these VMs are online, all other VMs will be able to connect to the network normally.

Once everything is back online, I recommend you connect all VMs back to their original dvPortGroup/s with Static Binding.

If you don’t have a dvPortGroup with Ephemeral Binding, create a Standard vSwitch and connect a single pNIC to it, follow the same process, then migrate the VMs back to the dvPortGroup once they are online and return the pNIC to the dvSwitch.

To future proof the environment, you may choose to create a dvPortGroup for the Infrastructure VMs and use Ephemeral Binding, or just have a dvPortGroup with Ephemeral binding ready to use just in case.
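
For environments with many affected VMs, the reconnection step can be scripted. The following is a minimal pyVmomi (Python) sketch, not from this post, that moves a VM’s first vNIC to a given dvPortGroup (such as one with Ephemeral Binding) and marks it connected; it assumes you have already authenticated to vCenter and located the vm and pg (portgroup) objects:

from pyVmomi import vim

def move_nic_to_portgroup(vm, pg):
    # Find the VM's first virtual NIC and repoint it at the target dvPortGroup
    for dev in vm.config.hardware.device:
        if isinstance(dev, vim.vm.device.VirtualEthernetCard):
            backing = vim.vm.device.VirtualEthernetCard.DistributedVirtualPortBackingInfo()
            backing.port = vim.dvs.PortConnection(
                portgroupKey=pg.key,
                switchUuid=pg.config.distributedVirtualSwitch.uuid)
            dev.backing = backing
            # Clear the "Disconnected" state on the vmNIC
            dev.connectable = vim.vm.device.VirtualDevice.ConnectInfo(
                connected=True, startConnected=True)
            nic_spec = vim.vm.device.VirtualDeviceSpec(
                operation=vim.vm.device.VirtualDeviceSpec.Operation.edit,
                device=dev)
            return vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[nic_spec]))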

The good news is VMware have already resolved this issue in ESXi 5.5 Update 2 (Release 2068190), so I recommend bypassing Update 1 (due to the NFS bug) and going straight to Update 2, which means you will avoid the issue altogether.

Ignore the nonsense on Twitter: What does “NoSAN” mean?

Every now and again I see nonsense on Twitter which I feel needs to be responded to. The reason I am responding today is to correct misinformation about what Nutanix NoSAN is.

Earlier today a competitor of Nutanix tweeted the following:

[Image: the competitor’s tweet]

I responded to the above with the following tweet:

[Image: my reply]

To which the person responded with this:

[Image: the competitor’s response]

I responded with the below, and the conversation ended with the following tweet:

[Image: the final tweet]

So before I correct the misinformation, let me briefly explain what “SAN” is:

“SAN” or “Storage Area Network” describes the connectivity between a compute node and a storage device (such as a central storage array or disk system). You can, for example, buy SAN (or FC) switches from companies like Brocade.

However, the IT industry has, for whatever reason, come to use “SAN” to mean “central disk system / storage array”, so for the purpose of this post, “SAN” means a traditional centralized storage array (SAN/NAS).

So let’s correct the misinformation:

Claim 1: With Nutanix there is a SAN that is auto managed.

Fact: There is no centralized storage with Nutanix.

Nutanix software running NDFS (Nutanix Distributed File System) logically presents DAS storage as shared storage across 3 or more nodes via NFS or SMB 3.0 to ESXi, Hyper-V or KVM. Note: While Nutanix supports iSCSI, it’s not recommended as it creates unnecessary complexity and has no technical advantages.

All Nutanix nodes have local DAS storage which is presented logically as shared storage, and there are no “central” Nutanix nodes.

Note: Nutanix nodes can connect to traditional central SAN/NAS storage (see: Can I use my existing SAN/NAS storage with Nutanix), but this is not the native Nutanix architecture.

SANs also have key characteristics such as zoning, masking, LUNs, and RAID, and they typically use Fibre Channel (FC) connectivity over dedicated fabrics, although this is not always the case.

With Nutanix, there is no:

1. Central storage (SAN or NAS based)
2. LUNs
3. LUN masking
4. Zoning
5. Storage Controller “Pairs”
6. Dedicated Storage Fabric
7. Silos of storage capacity
8. RAID

Therefore the statement about Nutanix being a SAN that is “auto managed” is simply incorrect.

If a SAN “auto manages” LUNs, zoning, masking, etc., it’s just a smarter SAN; the problems with SAN (and NAS) cannot be solved by simply “masking” the complexity. (Pun intended.)

Claim 2: NDFS is a distributed storage array.

Fact: NDFS is a file system, not a storage array.

Nutanix Distributed File System (NDFS) makes up part of the Nutanix solution; it is not a storage array, nor is it centralized storage.

Nutanix is a scale-out, shared-nothing platform where data is written locally on the node where the VM is running and distributed (not centralized) across nodes.

So what does NoSAN mean to me?

1. No centralized storage array
2. No LUNs, Zoning, Masking, or RAID
3. No dedicated storage fabric (e.g.: Fibre Channel Switches)
4. Reduced complexity
5. No Silos of capacity
6. No Storage Controller “Pairs”

I could go on but I think you get the point.

In conclusion, don’t believe what you hear on social media (especially from competitors of a product); do your own research and validate your findings from multiple sources.