Exchange 2013 & VMware – The latest round.

Recently there has been some back and forth on social media (Twitter & Blogs) around the following article published on the Exchange Team Blog:

Troubleshooting High CPU utilization issues in Exchange 2013
The article covers several topics, including Common Configuration Issues, Over-sizing, and a number of performance metrics that can help Exchange administrators identify and hopefully resolve performance problems.
As a result of these updated recommendations, the Exchange 2013 Server Role Requirements Calculator has also been revised to match.

The calculator can be found here: Exchange 2013 Server Role Requirements Calculator.

One key recommendation from Microsoft is the revised maximum sizing for an Exchange Mailbox (or Multi-Role) server:

Recommended Maximum CPU Core Count: 24

Recommended Maximum Memory: 96 GB

In reply to the above, VMware has released the following article.

A Stronger Case For Virtualizing Exchange Server 2013 – Think “Performance”

Personally, I don’t think the article has had (or will have) the effect VMware and virtualization architects/admins would have liked, due to its negativity. As such, I am going to highlight the important points in this post.

In response to VMware’s article, Tony Redmond, a well-known member of the Exchange community and an MVP, wrote the article below.

VMware tells Microsoft that they don’t know anything about Exchange 2013 performance

In this case, even as a strong Virtualization (and VMware) evangelist, I think Tony has some valid points.

Tony mentions the following regarding the sizing tool:

Most experienced people take the output from any general-purpose sizing tool and cast a cold eye over its recommendations to put them into context with the operational and business requirements for a deployment. In other words, the recommendations are adjusted. And yes, sometimes those recommendations are adjusted to make sure that Exchange 2013 works well when deployed on virtualized servers, either Hyper-V or VMware.

I totally agree! Whether or not the sizing tool recommends more CPUs, experienced architects should make decisions based on many factors, the most important being their real-world experience, using the calculator as a secondary (or tertiary) input and adjusting for the underlying infrastructure, regardless of whether it is physical or virtualized on one of many viable hypervisors.

I also agree with the following point:

Third, wouldn’t it be better use of VMware’s time to publish well-argued and pertinent observations on how you can take the output of Microsoft sizing tools and adjust them for their platform

VMware can be considered the experts in their own technology, and as Tony suggests, they should publish and continue to update documentation on how best to deploy Exchange successfully on vSphere. There is no advantage in an “I told you so” style blog post when Microsoft has actually published recommendations which strengthen the case for virtualizing Exchange (the one point in VMware’s blog post that I do agree with).

At companies like Nutanix, which have numerous virtualization experts across multiple hypervisors including vSphere, Hyper-V and Acropolis (which is fully supported for Windows Server 2012 and Exchange), we should also publish reference architectures and best practice guides on how to deploy Exchange successfully.

Tony also commented in reply to VMware’s post, which was not favourable towards multi-role deployments:

Combined role means a multi-role server, which I think that every reasonable expert working with Exchange has concluded is the only way to go because it increases server utilization and improves the overall resilience of any deployment. But it’s a bad thing in VMware’s world, which is a pity for them because Exchange 2016 only supports multi-role servers, so I guess they will just have to get used to that fact.

VMware’s point here is that scaling out (rather than up) generally means better overall consolidation and performance. To be fair, this is true, but one cannot apply a blanket rule to all applications.

Deploying Exchange 2013 in a multi-role configuration is a good way to simplify the deployment, as well as to ensure consistent performance and resiliency in the event of a server failure, since all servers can service all roles.

As Exchange is generally considered a business-critical application, virtualization architects like myself should consider the recommendations for the application and the criticality of the solution to the customer, and make an informed recommendation for its deployment. For an application like Exchange, I think it’s fair to say the Microsoft Exchange team have solid justification for recommending multi-role (MSR) deployments.

The impact of less scale-out (i.e. MBX + CAS) is simply a design consideration, and not an uncommon one, so in my opinion this is not an issue for virtual deployments.

Key Takeaways:

1. Ensure you size within Microsoft’s revised recommended vCPU/vRAM configuration maximums (24 vCPUs / 96 GB RAM); a simple sizing check is sketched after this list.

2. When sizing Exchange on any hypervisor, start small and scale up vCPU/vRAM as required to avoid over-sizing. (This is a major advantage of virtualization, so don’t be afraid to use it!)

3. Multi-Role Deployments are perfectly fine for Virtual environments. Exchange is a Business Critical Application, so treat it as such in your design phase.
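To make the first takeaway concrete, here is a minimal illustrative sketch (Python, with hypothetical VM configurations) of checking a proposed Exchange 2013 VM against the revised recommended maximums of 24 cores and 96 GB RAM:

```python
# Illustrative only: validate a proposed Exchange 2013 VM sizing against
# Microsoft's revised recommended maximums (24 CPU cores / 96 GB RAM).
# The example VM configurations below are hypothetical.

MAX_CORES = 24    # recommended maximum CPU core count
MAX_RAM_GB = 96   # recommended maximum memory (GB)

def check_sizing(vcpus: int, ram_gb: int) -> bool:
    """Return True if the proposed VM is within the recommended maximums."""
    ok = True
    if vcpus > MAX_CORES:
        print(f"WARNING: {vcpus} vCPUs exceeds the recommended maximum of {MAX_CORES}")
        ok = False
    if ram_gb > MAX_RAM_GB:
        print(f"WARNING: {ram_gb} GB RAM exceeds the recommended maximum of {MAX_RAM_GB} GB")
        ok = False
    if ok:
        print(f"{vcpus} vCPUs / {ram_gb} GB RAM is within the recommended maximums")
    return ok

check_sizing(18, 96)    # e.g. an 18 vCPU / 96 GB MSR VM -> within maximums
check_sizing(32, 128)   # an over-sized design -> both warnings fire
```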

Final thoughts:

With the ever-increasing number of cores per socket (12-18 core CPUs are not uncommon these days), the case for virtualizing Exchange is only strengthened. For example, an 18 vCPU / 96 GB RAM Exchange 2013 MSR VM could be virtualized with zero CPU/RAM overcommitment (as is generally recommended) while still running other VMs on the same host.
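As a quick sanity check of that arithmetic, the sketch below assumes a hypothetical dual-socket host with 18-core CPUs and 256 GB RAM (both figures are illustrative, not a recommendation) and shows the headroom left for other VMs when the Exchange VM is kept at a 1:1 vCPU-to-physical-core ratio:

```python
# Illustrative arithmetic only: headroom on a hypothetical dual-socket,
# 18-core-per-socket host running an 18 vCPU / 96 GB Exchange 2013 MSR VM
# with no CPU or RAM overcommitment.

sockets, cores_per_socket = 2, 18
host_cores = sockets * cores_per_socket        # 36 physical cores
host_ram_gb = 256                              # hypothetical host memory

exchange_vcpus, exchange_ram_gb = 18, 96

spare_cores = host_cores - exchange_vcpus      # 18 cores free for other VMs
spare_ram_gb = host_ram_gb - exchange_ram_gb   # 160 GB free for other VMs

print(f"Cores left for other VMs (1:1 vCPU:pCore): {spare_cores}")
print(f"RAM left for other VMs: {spare_ram_gb} GB")
```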

This helps remove silos within the datacenter and drives up utilization (without creating resource contention), which equates to lower cost, power, cooling and maintenance; the list goes on.

While virtualization does add some complexity and cost, I would argue that with newer technologies such as Nutanix, which replace complex and costly SAN/NAS storage with simple-to-deploy-and-manage scale-out storage, these (storage) challenges will soon be a thing of the past.

Nutanix also allows customers to choose a premium, feature-rich hypervisor such as ESXi, or lower-cost solutions such as Hyper-V or Acropolis, so customers can work with whatever they are comfortable with.

Acropolis, for example, is fully supported by Microsoft (see the Microsoft SVVP), can be deployed in under 30 minutes with just a few clicks, and is included free with Nutanix. The cost and complexity arguments against virtualizing Exchange therefore go out the window, while Exchange still gains the benefits of features such as Virtual Machine High Availability, live migration (vMotion), and so on.

I’d be pleased to hear everyone’s thoughts.

The VCDX candidate’s advantage over the panellists.

As the candidate, you submit a VCDX application based on a project you have worked on from start to finish and, in most cases, led from a technical/architectural perspective.

You therefore should have:

  • Had initial discussions with the customer about requirements
  • Led or been involved in design workshops
  • Considered design decisions
  • Documented the detailed design along with implementation & test plans, etc.
  • Either overseen or been actively involved with the implementation

As a result of the above, you will have spent many hours on the solution (potentially hundreds or thousands, depending on the size of the project) and you will have intimate knowledge of the design.

On the other hand, the VCDX panellists, while experts in the field, are given your application, including the design, and only a very limited amount of time to review it and prepare for the defence.

As a result, the VCDX candidate has a HUGE advantage over the panellists!

So in a defence, who is the expert in the room on the design? The Candidate!

As a result, the candidate should be an expert in the design being presented, and answering questions from the panel about it should not be intimidating.

To all VCDX candidates, understand that you have a major advantage and go defend your design with confidence!

VCDX Defence Essentials – Part 3 – Preparing for the Troubleshooting Scenario

Following on from Part 1 – Preparing for the Design Defence & Part 2 – Preparing for the Design Scenario, Part 3 covers my tips for the final stage of the VCDX defence, the Troubleshooting Scenario.

After completing the 75-minute Design Defence and the 30-minute Design Scenario, if you’re still standing and haven’t retreated at full speed, your final challenge is the 15-minute Troubleshooting Scenario.

As mentioned in the previous parts of this series, I am not an official panellist and I do not know how the scoring works. The below is my advice, based on conducting mock panels, the success rate of the candidates I have run mock panels with, and my own experience of achieving the VCDX on the first attempt.

If you have read Part 2, then you should notice several similarities in both the common mistakes and tips below.

Common Mistakes

1. Trying to guess the solution to the issue

Taking pot-shot guesses at what the problem(s) might be does not prove your expertise. If you don’t methodically work through the issue and just keep making guesses, you’re not doing yourself, or the people trying to assess your expertise, any good.

2. Not documenting the troubleshooting steps you have completed

Assuming you have not made Mistake #1, and you are methodically working through the troubleshooting scenario, a common mistake I see is a candidate getting confused about what they have or have not investigated.

When candidates repeat the same troubleshooting steps because they have lost track, it does nothing but waste time and does not increase their chances of passing.

Fifteen minutes goes by in a flash; you cannot afford to waste time!

3. Going down a rabbit hole

As in the design scenario, I have observed many candidates who are clearly very knowledgeable spend the majority of the time troubleshooting one specific area of the environment, e.g. just the vSphere layer.

Doing this may demonstrate your expertise in one area really well, but it does not help you eliminate as many potential issues as possible within the time constraint.

4. Being Mute!

Again, as in the design scenario, I have seen candidates stand staring at the troubleshooting scenario and the whiteboard for minutes at a time.

 

Tips for the Troubleshooting Scenario

1. Do not try to guess the solution to the issue

If you happen to guess the solution (assuming there is one… hint hint), what expertise have you demonstrated to the panel for them to score you on? The answer is “bugger all” (Australian for “none”).

Talk the panel through your troubleshooting methodology. For example, you might choose to work through the OSI model’s layers, or you might start with networking, then move on to storage, then the application, then vSphere, and so on.

The goal of this section of the defence is to demonstrate your troubleshooting skills, so make sure you explain what you’re trying to eliminate. For example, if a VM has lost connectivity, you might ask the panel to perform a vMotion of VM1 from Host A to Host B. You could explain to the panel that if the ping begins to work following the vMotion, you plan to investigate the networking of Host A; if the ping does not start working, you will continue to investigate a larger networking issue, such as a VLAN-specific problem.

2. Document your troubleshooting steps & findings

Ensure you methodically address each of the key areas of a vSphere solution by writing headings like the following on the whiteboard:

a) Storage/SAN/Protocol

b) Networking/Firewall

c) Compute HW

d) Application/Guest OS

e) vSphere

Ensure you eliminate several (I’d suggest at least three) potential issues in each section, so you cover the entire environment, and record what you have done and the result of each troubleshooting step.

Keep in mind you only have 15 minutes, so roughly one item per minute is required if you are to cover all areas thoroughly.

3. Don’t go down a rabbit hole!

As in the design scenario, I have observed many candidates who are clearly very knowledgeable spend the majority of the time troubleshooting one specific area of a vSphere environment, e.g. storage.

Doing this may demonstrate your expertise in one area really well, but it does not help you eliminate as many potential issues as possible within the time constraint.

Once you have looked into three potential issues in storage, move on to networking, vSphere, and so on.

Do not spend more than 60-90 seconds on any one troubleshooting step, as doing so prevents you from demonstrating the broad expertise the VCDX is designed to assess.

4. Think out Loud!

Again, as in the design scenario, I have seen candidates stand staring at the troubleshooting scenario and the whiteboard, totally silent, for minutes at a time.

Talk the panel through your thought process and expected outcomes for troubleshooting actions.

I cannot give you advice if I don’t know what you’re thinking! The same goes for the panellists: they can’t score you if you don’t verbalize your thought process.

No matter what, keep thinking out loud. If you’re working through options in your mind, that’s what the panel wants to hear, so let them hear it!

Summary

I hope the above tips help you prepare for the VCDX troubleshooting scenario, and best of luck with your VCDX journey. For those who are interested, you can read about My VCDX Journey.

If you have any questions on the VCDX process or the advice given in this series, please leave a comment and I will compile a list of questions and do a Q&A post.