How to successfully Virtualize MS Exchange – Part 7 – Storage Options

When virtualizing Exchange, we not only have to consider the Compute (CPU/RAM) and Network, but also the storage to provide both the capacity and IOPS required.

However, before considering IOPS and capacity, we need to decide how we will provide storage for Exchange, as storage can be presented to a Virtual Machine in many ways.

This post will cover the different ways storage can be presented to ESXi and used for Exchange while subsequent posts will cover in detail each of the options discussed.

First, let's discuss Local Storage.

What I mean by Local Storage is SSDs/HDDs within a physical ESXi host that are not shared (i.e., not accessible by other hosts).

This is probably the most basic form of storage we can present to ESXi and, apart from the hypervisor layer, could be considered similar to a physical Exchange deployment.

[Diagram: Local Storage presented to ESXi]
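As a quick way to identify such storage, a minimal pyVmomi sketch (the vCenter address and credentials below are placeholders, not from the original post) can list datastores the vSphere API reports as not accessible by multiple hosts:

```python
# Sketch: list datastores that are NOT shared between hosts, i.e. local storage.
# vCenter address and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab convenience; validate certs in production
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.Datastore], True)
for ds in view.view:
    if not ds.summary.multipleHostAccess:  # False/None => only one host can access it
        print(f"Local datastore: {ds.name} ({ds.summary.capacity / 2**30:.0f} GiB)")
view.Destroy()
Disconnect(si)
```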

Next, let's discuss Raw Device Mappings.

Raw Device Mappings or “RDMs” are where shared storage from a SAN is presented through the hypervisor to the guest as a native SCSI device, enabling the guest to access the LUN directly.

[Diagram: Raw Device Mappings]

For more information about Raw Device Mappings, see: About Raw Device Mappings
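If you are unsure whether existing VMs already use RDMs, a minimal pyVmomi sketch along the same lines (placeholder credentials again) can inspect each virtual disk's backing:

```python
# Sketch: report any Raw Device Mappings attached to VMs.
# vCenter address and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
for vm in view.view:
    if not vm.config:
        continue
    for dev in vm.config.hardware.device:
        # An RDM is a VirtualDisk whose backing is a raw disk mapping
        if isinstance(dev, vim.vm.device.VirtualDisk) and isinstance(
                dev.backing, vim.vm.device.VirtualDisk.RawDiskMappingVer1BackingInfo):
            print(f"{vm.name}: {dev.deviceInfo.label} -> {dev.backing.deviceName} "
                  f"({dev.backing.compatibilityMode})")
view.Destroy()
Disconnect(si)
```

The compatibilityMode field distinguishes physical-mode from virtual-mode RDMs.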

The next option is Presenting Storage direct to the Guest OS.

It is possible, and sometimes advantageous, to present SAN/NAS storage direct to the Guest OS via NFS, iSCSI or SMB 3.0, bypassing the hypervisor altogether.

[Diagram: Storage presented direct to the Guest OS]
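Since this approach bypasses the hypervisor, the storage network path must be reachable from inside the Guest OS itself. The short Python sketch below (the storage target address is a placeholder) simply checks that the usual NFS, iSCSI and SMB ports answer from the guest's point of view:

```python
# Sketch: check common in-guest storage protocol ports from inside the Guest OS.
# The storage target address is a placeholder for your SAN/NAS front end.
import socket

TARGET = "storage.example.com"
PORTS = {"NFS": 2049, "iSCSI": 3260, "SMB 3.0": 445}

for proto, port in PORTS.items():
    try:
        with socket.create_connection((TARGET, port), timeout=3):
            print(f"{proto} (tcp/{port}): reachable")
    except OSError as err:
        print(f"{proto} (tcp/{port}): unreachable ({err})")
```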

The final option we will discuss is “Datastores”.

Datastores are probably the most common way to present storage to ESXi. Datastores can be Block or File based, and presented via iSCSI, NFS or FCP (FC/FCoE) as of vSphere 5.5.

Datastores are basically just LUNs or NFS mounts. If the datastore is backed by a LUN, it will be formatted with Virtual Machine File System (VMFS) whereas NFS datastores are simply NFS 3 mounts with no formatting done by ESXi.

[Diagram: Storage presented via a Datastore]

For more information about VMFS see: Virtual Machine File System Technical Overview.
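That VMFS-vs-NFS distinction is visible through the vSphere API: a block-backed datastore reports VMFS details (including the VMFS version), while an NFS datastore reports the NAS host and export path. A minimal pyVmomi sketch (placeholder credentials as before):

```python
# Sketch: show whether each datastore is VMFS (block/LUN) or NFS (file) backed.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.Datastore], True)
for ds in view.view:
    info = ds.info
    if isinstance(info, vim.host.VmfsDatastoreInfo):
        print(f"{ds.name}: VMFS {info.vmfs.version} (LUN backed)")
    elif isinstance(info, vim.host.NasDatastoreInfo):
        print(f"{ds.name}: NFS mount {info.nas.remoteHost}:{info.nas.remotePath}")
view.Destroy()
Disconnect(si)
```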

What do all the above options have in common?

Local storage, RDMs, storage presented to the Guest OS directly and Datastores can all be protected by RAID or be JBOD deployments with no data protection at the storage layer.

Importantly, none of the four options on their own guarantee data protection or integrity, that is, prevent data loss or corruption. Protecting from data loss or corruption is a separate topic which I will cover in a non Exchange specific post.

So regardless of the way you present your storage to ESXi or the VM, how you ensure data protection and integrity needs to be considered.

In summary, there are four main ways (listed below) to present storage to ESXi which can be used for Exchange, each with different considerations around Availability, Performance, Scalability, Cost, Complexity and Support.

1. Local Storage (Part 8)
2. Raw Device Mappings  (Part 9)
3. Direct to the Guest OS (Part 10)
4. Datastores (Part 11)

In the next four parts, each of these storage options for MS Exchange will be discussed in detail.

Back to the Index of How to successfully Virtualize MS Exchange.

Example Architectural Decision – Port Binding Setting for a dvPortGroup

Problem Statement

In a VMware vSphere environment using Virtual Distributed Switches (VDS) where all VMs, including vCenter, are hosted on the VDS, what is the most suitable Port Binding setting for dvPortGroups to ensure maximum performance and availability?

Assumptions

1. Enterprise Plus Licensing
2. vCenter is hosted on the VDS

Requirements

1. The environment must have central management of vNetworking
2. All VMs must be able to be powered on in the event of a vCenter outage
3. Network connectivity must not be impacted if vCenter is down.

Motivation

1. Reduce complexity where possible.
2. Maximize the availability of the infrastructure

Architectural Decision

Use the default dvPortGroup Port Binding setting of “Static Binding”

Justification

1. A dvPortGroup port is assigned and reserved when a VM is connected to the dvPortGroup. This ensures connectivity at all times, including when vCenter is down.
2. Using “Static Binding” ensures the vCenter VM can be powered on and connected to the dvPortGroup even after a failure/outage.
3. “Static Binding” is the default setting and there is no reason to modify this setting; the sketch below shows how to verify the configured binding type.
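As a sanity check, the configured binding type can be read via the vSphere API, where “Static Binding” appears as earlyBinding, “Dynamic” as lateBinding and “Ephemeral” as ephemeral. A minimal pyVmomi sketch (placeholder credentials):

```python
# Sketch: report the port binding, port count and elastic (autoExpand) setting
# of each dvPortGroup. vCenter address and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.dvs.DistributedVirtualPortgroup], True)
for pg in view.view:
    cfg = pg.config
    # cfg.type: 'earlyBinding' (Static), 'lateBinding' (Dynamic, deprecated), 'ephemeral'
    print(f"{pg.name}: binding={cfg.type}, ports={cfg.numPorts}, elastic={cfg.autoExpand}")
view.Destroy()
Disconnect(si)
```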

Implications

1. The number of VMs supported on the dvPortGroup / VDS is limited to the number of ports on the VDS (no overcommitment of ports is possible).
2. Number of ports configured on a dvPortGroup should be greater than the maximum number of VMs required to be supported.
3. Port Allocation should be left at the default of “Elastic” to ensure the number of ports is automatically expanded if/when required.
4. New Virtual machines cannot be powered on and connected to a dvPortGroup (VDS) when vCenter is down.

Alternatives

1. Set dvPortGroup Port Binding to “Dynamic binding”
2. Set dvPortGroup Port Binding to “Ephemeral binding”

Related Articles

1. Distributed vSwitches and vCenter outage, what’s the deal? – @duncanyb (VCDX #007)

2. Choosing a port binding type in ESX/ESXi (1022312)

Example Architectural Decision – Horizon View Desktop Power Policy for Linked Clones (1 of 2)

Problem Statement

In a VMware Horizon View environment using persistent Linked Clones, Disposable disks are being used to redirect transient paging and temporary files to a separate VMDK.

What is the most suitable Desktop Pool setting to ensure storage overheads are reduced?

Assumptions

1. VMware View 4.5 or later
2. Recompose / Refresh cycles are infrequent
3. Desktop Usage concurrency within the pool is less than 100%
4. Memory Reservations are not being used.

Requirements

1. The environment must deliver consistent performance
2. Minimize the cost/utilization of shared storage

Motivation

1. Reduce complexity where possible.
2. Maximize the efficiency of the infrastructure

Architectural Decision

Set the Power Policy for all Linked Clone desktop pools to “Power Off”

Justification

1. Using disposable disks can save storage space by slowing the growth of linked clones and reducing the space used by powered-off virtual machines.
2. Using the “Power Off” policy for the pool means that at user logoff (or shutdown) the disposable disk will be refreshed, thereby reducing the capacity usage at the storage layer.
3. “Powered Off” VMs do not have a Virtual Machine swap (vswap) file, which also reduces storage consumption.

Implications

1. Setting the policy to “Power Off” will result in more frequent power operations which may impact the performance of the storage and vCenter.
2. When a user attempts to log in to a desktop which has been powered off, there will be a delay while the VM is powered on and boots up before the user is logged in.
3. The peak concurrency rate of users will need to be understood to allow accurate storage planning for the vswap file; see the sizing sketch below.
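As a worked illustration of that planning (all figures below are hypothetical), the vswap requirement is roughly the configured RAM minus any reservation, multiplied by the number of desktops powered on at peak:

```python
# Sketch: rough vswap capacity planning for a linked-clone pool.
# All figures are hypothetical; substitute your own pool sizing.
desktops = 500              # desktops in the pool
peak_concurrency = 0.70     # assumption: 70% of desktops powered on at peak
ram_gb = 4                  # configured RAM per desktop
reservation_gb = 0          # no memory reservations (per the assumptions above)

vswap_per_vm_gb = ram_gb - reservation_gb  # vswap file size per powered-on VM
peak_vswap_gb = desktops * peak_concurrency * vswap_per_vm_gb
print(f"Peak vswap capacity required: {peak_vswap_gb:,.0f} GB")  # 1,400 GB in this example
```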

Alternatives

1. Increase the frequency of Recompose / Refresh / Rebalance operations
2. Set the Policy to “Take no power action” and schedule an Administrator task to periodically change the Power Policy to “Power Off” during a maintenance window.
3. Set the Policy to “Ensure desktops are always powered on” and schedule an Administrator task to periodically change the Power Policy to “Power Off” during a maintenance window.
4. Set the Policy to “Suspend” and schedule an Administrator task to periodically change the Power Policy to “Power Off” during a maintenance window; however, this will consume extra storage for the Suspend file.
5. Use Memory Reservations to reduce storage requirements for vswap and leave the Power Policy at “Always On”.

The example architectural decision was contributed to by Travis Wood (@vTravWood) and was inspired by the following article:

1. Understanding View Disposable Disks by @vTravWood (Double VCDX #97 Desktop/Datacenter Virtualization)

Related Articles:

1. Transparent Page Sharing (TPS) Configuration for VDI (1 of 2)

2. Transparent Page Sharing (TPS) Configuration for VDI (2 of 2)