Example Architectural Decision – Securing vMotion & Fault Tolerance Traffic in IaaS/Cloud Environments

Problem Statement

vMotion and Fault Tolerance logging traffic are unencrypted, and anyone with access to the same VLAN/network could potentially view and/or compromise this traffic. How can the environment be made as secure as possible to ensure isolation between workloads in a multi-tenant/multi-department environment?

Assumptions

1.  vMotion and FT are required in the vSphere cluster/s (although FT is currently not supported for VMs hosted with vCloud Director)
2. IP Storage is being used and vNetworking has 2 x 10Gb NICs for non Virtual Machine traffic such as VMkernel ports & 2 x 10Gb NICs for Virtual Machine traffic (similar to the Example vNetworking Design for IP Storage)
3. vSphere 4.0 or later (required for both FT and the distributed virtual switch)

Motivation

1. Ensure maximum security and performance for vMotion and FT traffic
2. Prevent vMotion and/or FT traffic from impacting production virtual machines

Architectural Decision

vMotion & Fault Tolerance logging traffic will each have a dedicated non-routable VLAN, hosted on a dvSwitch which is physically separate from the virtual machine distributed virtual switch.

Justification

1.  vMotion / FT traffic does not require external (or public) access
2. A VLAN per function ensures maximum security / performance with minimal design / implementation overhead
3. Prevents vMotion and/or FT traffic from impacting production virtual machines (and vice versa), as the traffic does not share a broadcast domain with virtual machine traffic
4. Ensures vMotion/FT traffic cannot leave their respective dedicated VLANs and potentially be sniffed
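The separation rule behind this decision can be expressed as a simple compliance check. The sketch below is a minimal illustration in Python; the port group names, dvSwitch names and VLAN IDs are hypothetical examples, not part of the original design:

```python
# Minimal sketch: check that vMotion/FT port groups follow the decision above,
# i.e. each has a dedicated non-routable VLAN and neither sits on a dvSwitch
# that also carries Virtual Machine traffic. All names/IDs are hypothetical.
from dataclasses import dataclass

@dataclass
class PortGroup:
    name: str
    dvswitch: str
    vlan: int
    routable: bool
    role: str  # "vmotion", "ft" or "vm"

def violations(portgroups):
    problems = []
    vmkernel = [p for p in portgroups if p.role in ("vmotion", "ft")]
    vm_switches = {p.dvswitch for p in portgroups if p.role == "vm"}
    vlans_seen = {}
    for p in vmkernel:
        if p.routable:
            problems.append(f"{p.name}: VLAN {p.vlan} must be non-routable")
        if p.dvswitch in vm_switches:
            problems.append(f"{p.name}: shares dvSwitch {p.dvswitch} with VM traffic")
        if p.vlan in vlans_seen and vlans_seen[p.vlan] != p.role:
            problems.append(f"{p.name}: VLAN {p.vlan} shared with another function")
        vlans_seen[p.vlan] = p.role
    return problems

layout = [
    PortGroup("vMotion-PG", "dvSwitch-VMK", 101, False, "vmotion"),
    PortGroup("FT-PG",      "dvSwitch-VMK", 102, False, "ft"),
    PortGroup("Tenant-A",   "dvSwitch-VM",  200, True,  "vm"),
]
print(violations(layout))  # [] -> layout complies with the decision
```

An empty result confirms the layout matches the decision; putting a vMotion port group on the VM dvSwitch, or making its VLAN routable, would be flagged.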

Implications

1. Two (2) VLANs with private IP ranges must be presented over 802.1q trunks to the appropriate pNICs

Alternatives

1.  vMotion / FT share the ESXi management VLAN – This would increase the risk of traffic being intercepted and “sniffed”
2. vMotion / FT share a dvSwitch with Virtual Machine networks while still running within dedicated non routable VLANs over 802.1q

Example Architectural Decision – Distributed Power Management (DPM) for Virtual Desktop Clusters

Problem Statement

In a VMware View (VDI) environment where the bulk of the workforce work between 8am and 6pm daily, how can vSphere be configured to minimize the power consumption without significant impact to the end user experience?

Assumptions

1. The bulk of the workforce work between 8am and 6pm daily
2. Most users login during a 2 hour window between 7:30am and 9:30am daily
3. Most users logoff during a 2 hour window between 4:30pm and 6:30pm daily
4. VMware View cluster maintains at least N+1 redundancy
5. VMware View cluster only runs desktop workloads
6. VMware View cluster size is >=5
7. VMware View cluster/s are configured with HA admission control policy of “Percentage of cluster resources reserved for HA” to avoid the potentially inefficient slot size calculation preventing hosts going into standby mode
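The sizing arithmetic behind assumptions 4–7 can be sketched briefly: with the percentage-based admission control policy, reserving roughly 100/N percent of cluster resources provides N+1 redundancy, and the number of hosts DPM may place into standby is whatever remains above the fixed minimum kept on. A minimal illustration in Python, with the 5-host cluster size taken from assumption 6 and the other figures as worked examples:

```python
# Sketch of the HA/DPM sizing implied by assumptions 4-7: percentage-based
# admission control reserving ~100/N percent gives N+1 redundancy, and DPM
# can only consider hosts above the configured minimum for standby.
import math

def ha_percentage_for_n_plus_1(hosts: int) -> int:
    """Percentage of cluster resources to reserve so one host failure is tolerated."""
    return math.ceil(100 / hosts)

def dpm_standby_candidates(hosts: int, minimum_on: int) -> int:
    """Hosts DPM may place into standby, given a fixed minimum kept powered on."""
    return max(hosts - minimum_on, 0)

print(ha_percentage_for_n_plus_1(5))    # 20 -> reserve 20% for a 5 host cluster
print(dpm_standby_candidates(5, 3))     # 2 hosts eligible for standby
```

This also shows why the percentage policy suits DPM better than slot-based admission control: the reserved percentage stays valid as hosts enter and exit standby, whereas slot counts shrink with the powered-on host count.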

Motivation

1. Reduce the power consumption
2. Align with Green IT strategies
3. Reduce the datacenter costs
4. Reduce the carbon footprint

Architectural Decision

Configure and enable DPM on all ESXi hosts with power management set to “Automatic” and the DPM threshold set to “Apply priority 3 or higher recommendations”, and set hosts 1, 2 and 3 in the cluster not to enter standby mode.

Justification

1. As the bulk of the users are inactive outside of normal business hours, a significant power saving can be achieved
2. The users do not all login at once, which allows DPM to gradually start ESXi hosts (which were put into standby mode by DPM previously)
3. In the event the workload is unusually low on a given day, power savings can be realized without significant impact to the end user experience
4. Where a large number of users login unexpectedly early one morning, the impact to users will be minimal
5. DPM is configured to ensure a minimum of three (3) ESXi hosts remain on at all times. This number is expected to be able to support all desktops within the environment under low load (ie: 80% of desktops at idle). This number can be adjusted if required.

Implications

1. In the unlikely event a large number of users logon unexpectedly early one morning, users may experience degraded performance for the time it takes one or more ESXi hosts to exit standby mode. This is generally <10mins for most servers.
2. Out of band interfaces such as DRAC / iLO / RSA or IMM interfaces (depending on host hardware type) will need to be configured and be accessible to vCenter and the ESXi hosts to enable DPM to function
3. As the “Percentage of cluster resources reserved for HA” setting is static (not dynamically adjusted by DPM), in the event of a host failure while one or more hosts are in standby mode, a VM which attempts to power on before a host has successfully exited standby mode may fail to power on.
4. Where large memory reservations are used (see Example AD – Memory Reservation for VDI), the ability for DPM to put one or more hosts into standby will be reduced. Where DPM is expected to be used, no more than a 50% memory reservation should be configured to ensure maximum memory overcommitment can be achieved without placing a significant overhead on the shared storage for vSwap files
5. Monitoring solutions may need to be customized/modified not to trigger an alarm for a host that is put into standby mode

Alternatives

1. Set a lower number of hosts to remain on to maximize power savings – This may result in higher impact to users first thing in the morning in the event of high concurrent logins
2. Set a higher number of hosts to remain on – However this will minimize power savings and give less value to the added complexity of setting up DPM (and associated out of band management interfaces)
3. Set the DPM threshold to be more aggressive to maximize power savings – This would likely result in some impact to VMs, due to fewer physical cores being available to the CPU scheduler and less physical memory being available for VMs, which may result in swapping

Example Architectural Decision – Memory Reservation for Virtual Desktops

Problem Statement

In a VMware View (VDI) environment with a large number of virtual desktops, the potential Tier 1 storage requirement for vswap files (*.vswp) can make the solution less attractive from an ROI perspective and impose a high upfront cost for storage. What can be done to minimize the storage requirements for the vswap file, thus reducing the storage requirements for the VMware View (VDI) solution?

Assumptions

1. vSwap files are placed on Tier 1 shared storage with the Virtual machine (default setting)

Motivation

1. Minimize the storage requirements for the virtual desktop solution
2. Reduce the up front cost of storage for VDI
3. Ensure the VDI solution gets the fastest ROI possible without compromising performance

Architectural Decision

Set the VMware View Master Template with a 50% memory reservation so all VDI machines deployed have a 50% memory reservation

Justification

1. Setting 50% reservation reduces the storage requirement for vSwap by half
2. Setting only 50% ensures some memory overcommitment and transparent page sharing can still be achieved
3. Memory overcommitment is generally much lower than CPU overcommitment (around 1.5:1 for VDI)
4. Reserving 50% of a VDI machine's RAM is cheaper than the equivalent shared storage
5. A memory reservation will generally provide increased performance for the VM
6. Reduces/Removes the requirement/benefit for a dedicated datastore for vSwap files
7. Transparent page sharing (TPS) will generally only give up to 30-35% memory savings

Implications

1. Less memory overcommitment will be achieved

Alternatives

1. Set a higher memory reservation of 75% – This would further reduce the shared storage requirement while still allowing for 1.25:1 memory overcommitment
2. Set a 100% memory reservation – This would eliminate the vSwap file but prevent memory overcommitment
3. Set a lower memory reservation of 25% – This would not provide significant storage savings, and as transparent page sharing generally only achieves up to 30-35% memory savings, there would still be a sizable requirement for vSwap storage with minimal benefit
4. Create a dedicated datastore for vSwap files on lower Tier storage
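The storage arithmetic behind the decision and the alternatives above is straightforward: the vSwap file consumes (configured RAM minus reservation) per VM on shared storage. A minimal sketch in Python, where the desktop count and per-desktop RAM are hypothetical example inputs, not figures from the original decision:

```python
# Rough sizing sketch for the decision and alternatives above: each VM's
# vSwap file consumes (configured RAM - memory reservation) on shared
# storage. Desktop count (2000) and RAM per desktop (2 GB) are example
# inputs chosen for illustration only.
def vswap_storage_gb(desktops: int, ram_gb: float, reservation_pct: float) -> float:
    """Total Tier 1 storage consumed by vSwap files at a given reservation level."""
    per_vm = ram_gb * (1 - reservation_pct / 100)
    return desktops * per_vm

for pct in (0, 25, 50, 75, 100):
    print(f"{pct:3d}% reservation -> {vswap_storage_gb(2000, 2, pct):6.0f} GB vSwap")
# 50% reservation halves the Tier 1 vSwap requirement (4000 GB -> 2000 GB),
# and 100% eliminates it entirely, at the cost of all memory overcommitment.
```

This makes the trade-off in the alternatives concrete: each additional 25% of reservation removes a fixed quarter of the vSwap footprint, while progressively reducing the memory overcommitment available to the cluster.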