Example Architectural Decision – Transparent Page Sharing (TPS) Configuration for VDI (1 of 2)

Problem Statement

In a VMware vSphere environment, with future releases of ESXi disabling Transparent Page Sharing (TPS) by default, what is the most suitable TPS configuration for a Virtual Desktop Infrastructure (VDI) environment?

Assumptions

1. TPS is disabled by default
2. Storage is expensive
3. Two-socket ESXi hosts have been chosen to align with a scale-out methodology
4. The HA Admission Control policy used is “Percentage of Cluster Resources Reserved for HA”
5. vSphere 5.5 or earlier

Requirements

1. VDI environment must deliver consistent performance
2. VDI environment supports a high percentage of Power Users

Motivation

1. Reduce complexity where possible.
2. Maximize the efficiency of the infrastructure

Architectural Decision

Leave TPS disabled (default) and apply 100% Memory Reservations to VDI VMs and/or Golden Master Image.
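
For illustration, below is a minimal sketch of how such a blanket reservation could be applied with pyVmomi. The vCenter address, credentials and the “VDI-” naming convention are hypothetical assumptions; in practice the reservation would more commonly be set on the Golden Master Image or desktop pool before cloning.

# Minimal sketch (pyVmomi): apply a 100% memory reservation to VDI VMs.
# Assumptions (hypothetical): vCenter at vcenter.example.com, VDI VMs named "VDI-*".
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="VMware1!",
                  sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        if not vm.name.startswith("VDI-"):
            continue
        # Reserve 100% of the VM's configured memory (value is in MB).
        spec = vim.vm.ConfigSpec()
        spec.memoryAllocation = vim.ResourceAllocationInfo(
            reservation=vm.config.hardware.memoryMB)
        vm.ReconfigVM_Task(spec)
    view.DestroyView()
finally:
    Disconnect(si)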

Justification

1. Setting 100% memory reservations ensures consistent performance by eliminating the possibility of swapping.
2. A 100% memory reservation also reduces each VM’s vswap file to zero size (the .vswp file is sized at configured memory minus reservation), which saves capacity on shared storage and removes the storage impact of swapping (see the worked example after this list).
3. RAM is cheaper than Tier 1 storage (which is recommended for vswap placement to minimize the performance impact of swapping), so the increased cost of memory in the hosts is easily offset by the saving in shared storage.
4. Simplicity. Leaving default settings is advantageous from both an architectural and operational perspective. Example: ESXi patching can cause settings to revert to default, which could negate TPS savings and put a sudden, high demand on storage where TPS savings were expected.
5. TPS savings for desktops can be significant; however, with a high percentage of Power Users running desktops with >=4GB RAM and 2 vCPUs, the savings are lower than for Kiosk or Task users, who typically run 1-2GB desktops.
6. Two-socket ESXi hosts and a scale-out approach have been chosen, so the TPS savings per host will be lower than on a four-socket server with double the RAM.
7. When using the “Percentage of Cluster Resources Reserved for HA” admission control policy, HA calculates failover capacity based on the full memory reservation of every VM, so performance will be approximately the same following a failover, leading to more consistent performance under a wider range of circumstances.
8. Lower core count (and lower cost) CPUs will likely be viable, as RAM rather than CPU will be the first constraint on further consolidation.
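
To put justification points 2 and 3 in perspective, here is a short worked example; the desktop count and per-desktop memory figures are hypothetical assumptions, not part of the original decision.

# Worked example (hypothetical figures): vswap capacity avoided by 100% reservations.
# A VM's .vswp file is sized at (configured memory - memory reservation), so a full
# reservation reduces it to zero.
desktops = 2000
memory_per_desktop_gb = 4
reservation_gb = memory_per_desktop_gb          # 100% reservation

vswap_no_reservation_gb = desktops * memory_per_desktop_gb                        # 8000 GB of Tier 1 storage
vswap_full_reservation_gb = desktops * (memory_per_desktop_gb - reservation_gb)   # 0 GB

print(f"Shared storage saved: {vswap_no_reservation_gb - vswap_full_reservation_gb} GB")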

Implications

1. Using 100% memory reservations requires that ESXi hosts and the cluster be sized at a 1:1 ratio of vRAM to pRAM (physical RAM), including N+1 capacity so a host failure can be tolerated (see the sizing sketch after this list).
2. Increased RAM costs
3. No memory overcommitment can be achieved
4. Potential for lower CPU utilization / overcommitment as RAM may become the first constraint.
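
As a back-of-the-envelope illustration of implication 1, the sketch below sizes a cluster at 1:1 vRAM:pRAM with N+1 and derives the matching “Percentage of Cluster Resources Reserved” admission control value. All figures are hypothetical assumptions.

# Sizing sketch: 1:1 vRAM:pRAM with N+1 (illustrative, hypothetical figures).
import math

desktops = 2000
vram_per_desktop_gb = 4
usable_pram_per_host_gb = 512                 # per-host RAM usable for VMs (after hypervisor overhead)

total_vram_gb = desktops * vram_per_desktop_gb                           # 8000 GB, all of it reserved
hosts_for_capacity = math.ceil(total_vram_gb / usable_pram_per_host_gb)  # 16 hosts
hosts_with_n_plus_1 = hosts_for_capacity + 1                             # 17 hosts

# Equivalent HA admission control setting for N+1:
ha_reserved_pct = math.ceil(100 / hosts_with_n_plus_1)                   # 6% of cluster resources

print(hosts_for_capacity, hosts_with_n_plus_1, ha_reserved_pct)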

Alternatives

1. Use 50% reservation and enable TPS
2. Use no reservation, enable TPS and disable large pages

Related Articles:

1. The Impact of Transparent Page Sharing (TPS) being disabled by default – @josh_odgers (VCDX #90)
2. Example Architectural Decision – Transparent Page Sharing (TPS) Configuration for VDI (2 of 2)
3. Future direction of disabling TPS by default and its impact on capacity planning – @FrankDenneman (VCDX #29)
4. Transparent Page Sharing Vulnerable, Yet Largely Irrelevant – @ChrisWahl (VCDX #104)

Enterprise Architecture & Avoiding tunnel vision.

Recently I have read a number of articles and had several conversations with architects and engineers across various specialities in the industry, and I’m finding there is a growing trend of SMEs (Subject Matter Experts) having tunnel vision when it comes to architecting solutions for their customers.

What I mean by “Tunnel Vision” is that the architect only looks at what is right in front of him/her (e.g. the current task/project), and does not consider how the decisions being made for this task may impact the wider IT infrastructure and the customer from a commercial/operational perspective.

In my previous role I saw this all too often, and it was frustrating to know that the solutions being designed and delivered to customers were in some cases quite well designed when considered in isolation, but when taking into account the “Big Picture” (or what I would describe as the customer’s overall requirements), the solutions were adding unnecessary complexity, adding risk and increasing costs, when new solutions should be doing the exact opposite.

Let’s start with an example:

Customer “ACME” needs an enterprise messaging solution, has chosen Microsoft Exchange 2013, and requires that there be no single points of failure in the environment.

The customer engages an Exchange SME, who looks at the requirements for Exchange. He then points to a vendor best practice or reference architecture document and says, “We’ll deploy Exchange on physical hardware with JBOD and no shared storage, and use Exchange Database Availability Groups for HA.”

The SME then attempts to justify his recommendation with “because it’s Microsoft’s best practice”, which most people still seem to blindly accept, but that is a story for another post.

In fairness to the SME, in isolation the decision/recommendation meets the customer’s messaging requirements, so what’s the problem?

If the customer had no existing IT, the messaging system was going to be the only IT infrastructure, and there were no plans to run any other workloads, I would say the proposed solution could be an excellent one. But how many customers only run messaging? In my experience, none.

So let’s consider that the customer has an existing virtual environment running Test/Dev, Production and Business Critical applications, and adheres to a “Virtual First” policy.

The customer has already invested in virtualization and some form of shared storage (SAN/NAS/Web-Scale) and has the operational procedures and expertise to support and maintain this environment.

If we were to add a new “silo” of physical servers, there are many disadvantages to the customer, including but not limited to:

1. Additional operational documentation for new Physical environment.

2. New Backup & Disaster Recovery strategy / documentation.

3. Additional complexity managing / supporting a new Silo of infrastructure.

4. Reduced flexibility / scalability with physical servers vs virtual machines.

5. Increased downtime and/or impact in the event of hardware failures.

6. Increased CAPEX, as physical servers must be sized up front for future requirements because of their scaling challenges.

So what am I getting at?

The cost of deploying the MS Exchange solution on physical hardware could potentially be cheaper (CAPEX) on Day 1 than virtualizing the new workload on the existing infrastructure (which would likely need to be scaled, e.g. with additional disk shelves/nodes), BUT it would likely result in a higher overall TCO (Total Cost of Ownership) due to the increased complexity and operational costs of creating a new silo of resources.

Either a physical or a virtual solution would likely meet/exceed the customer’s basic requirement to serve MS Exchange, but the two may have vastly different results in terms of the big picture.

Another example would be a customer with a legacy SAN which needs to be replaced and is causing issues for a large portion of the customer’s workloads, while the project being proposed only addresses the new enterprise messaging requirements. In my opinion, a good architect should consider the big picture and try to identify where projects can be combined (or a project’s scope increased) to ensure a more cost-effective and better overall result for the customer.

If the architect only looked at Exchange and went with physical servers and JBOD, there would be zero chance of improvement for the rest of the infrastructure, and the physical equipment for Exchange would likely be oversized and underutilized.

In many cases it will be much more economical to combine two or more projects to enable the purchase of new technology or infrastructure components and consolidate the workloads onto shared infrastructure, rather than building two or more silos which add complexity to the environment and will likely result in underutilized infrastructure and a solution inferior to what could have been achieved by combining the projects.

In conclusion, I hope that after reading this article, the next time you or your customers embark on a new project, you as the Architect, Project Manager or Engineer consider the big picture and not just the new requirement, ensuring your customers get the best technical and business outcomes and avoiding silos where possible.

Data Centre Migration Strategies – Part 2 – Lift and Shift

Continuing on from Data Centre Migration Strategies Part 1 – Overview, Part 2 focuses on the “Lift and Shift” method.

I’m sure you’re reading this and already thinking, “this is the least interesting migration strategy, tell me about vMSC and SRM!” and, well, you’re right. BUT it is important to understand the pros and cons, so that if you are ever in a situation where you have to use this method (I have been on numerous occasions), the migration is successful.

So what are the pros and cons of this method?

Pros

1. No need to purchase equipment for the new data centre
2. The environment should perform as it did at the original data centre following relocation
3. The approach is simple from a technical perspective, i.e. no new products are required
4. Low direct cost (Note: see point 8 under Cons)
5. Achieves a Recovery Point Objective (RPO) of zero (0).

Cons

1. The entire environment needs to be fully shut down
2. The outage for the environment starts from when the servers are shut down and lasts until completion of operational verification testing at the new data centre. Note: This may take several days depending on the size of the environment.
3. This method is high risk, as failing back to the original data centre requires all equipment to be physically relocated back. This means the Recovery Time Objective (RTO) cannot be low.
4. The lift and shift method cannot be tested until at least a significant amount of equipment has been physically relocated
5. In the event of an issue during operational verification at the new data centre, a decision needs to be made either to proceed and troubleshoot the issues, or to fail back (and at what point).
6. Depending on your environment, a vendor (e.g. storage) may need to revalidate your environment
7. Your migration (and schedule) is heavily dependent on the logistical side of the relocation, which involves many factors outside your control (e.g. traffic, weather) and may lead to delays or a failed migration.
8. Potentially high indirect costs, e.g. downtime, loss of business and productivity, etc.

When to use this method?

1. When purchasing equipment for the new data centre is not possible
2. When extended outages to the environment are acceptable
3. When you have no other options

Recommendations when using “Lift and Shift”

1. Ensure you have accurate wiring and rack diagrams of your data centre
2. Have your vendor support contact details on hand, as it is common to experience hardware failures following relocation of equipment
3. Ensure you have an accurate Operational Verification document which tests every part of your environment from Layer 1 (Physical) all the way to Layer 7 (Application); a minimal verification sketch follows this list
4. Label EVERYTHING as you disconnect it at the original data centre
5. Prior to starting your data centre migration, discuss and agree on a timeline for the migration and at what point, and under what circumstances, you will initiate a fail-back.
6. Migrate the minimum amount of physical equipment required to bring your environment back online and complete your Operational Verification, then on successful completion migrate the remaining equipment. This allows for a faster fail-back in the event Operational Verification fails.
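
To illustrate recommendation 3, below is a minimal sketch of a scripted network-level reachability check that could form one small part of an Operational Verification run. The hostnames and ports are hypothetical examples; a real Operational Verification document would also cover storage, hypervisor and application (Layer 7) level tests.

# Minimal post-relocation reachability check (network layer only).
# Hostnames and ports are hypothetical; extend per your Operational Verification document.
import socket

CHECKS = [
    ("esxi01.example.com", 443),     # ESXi host management interface
    ("vcenter.example.com", 443),    # vCenter Server
    ("san01.example.com", 3260),     # iSCSI storage target
    ("exch01.example.com", 25),      # SMTP on the mail server
]

def tcp_check(host, port, timeout=5.0):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

failures = [(host, port) for host, port in CHECKS if not tcp_check(host, port)]
for host, port in failures:
    print(f"FAILED: {host}:{port} is not reachable")
print("All network checks passed" if not failures else f"{len(failures)} check(s) failed")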

In Part 3, we discuss Data centre migrations using VMware Site Recovery Manager. (Coming soon)