Competition Example Architectural Decision Entry 2 – Use of RDMs in Standard IaaS Clusters

Name: Chris Jones
Title: Virtualization Architect
Twitter: @cpjones44
Profile: VCP5 / VCAP5-DCD

Problem Statement

VMs require more than 1.9TB in a single disk. The existing virtual environment has LUNs provisioned that are 2TB in size. As these VMs have virtual data disks (VMDKs) larger than 1.9TB, the underlying datastores sit close to capacity, so alarms are being triggered by the infrastructure monitoring solution and Incident tickets are being raised to the Virtual Infrastructure support queue.

Assumptions

1. Data within the OSI (operating system instance) must reside within the VM and not on an IP-based store (such as a NAS share).

2. vSphere datastores are presented over FC and not via IP-based storage (e.g. NFS).

3. vSphere Hypervisor is ESXi 4.1.

4. There is no requirement for the VMs to be performing SAN specific functionality or running SCSI target-based software.

Constraints

1. The implemented monitoring solution cannot be customised with triggers and monitoring policies for individual objects within the environment (i.e. one monitoring policy per individual datastore or sub-group of datastores).

2. Maximum vSphere datastore size in version 4.1 is 2TB minus 512 bytes.

3. Unable to upgrade beyond ESXi 4.1 Update 3.

Motivation

1. Reduce the number of incident tickets being raised, thus improving SLA posture.

2. Reduce the requirement to span single Windows logical volumes across multiple VMDKs.

Architectural Decision

Turn the disk into an RDM (Virtual Compatibility Mode) to remove it from datastore capacity monitoring at the vSphere layer.

Alternatives

1. Create smaller VMDKs (e.g. 1-1.5TB disks) and create a RAID 0 volume within the guest OS.

2. Change the alerting level so that tickets are not raised for alerts that trigger beyond 90% utilisation.

3. Turn the disk into an RDM to remove the level of monitoring from the vSphere layer.

4. Thin provision the virtual disks.

5. Store the data within the guest on some kind of IP based storage (NAS/iSCSI target).

Justification

1. Option 5 goes against the assumption that data must be local to the VM, so was ruled out.

2. Whilst thin provisioning (Option 4) is an attractive solution, it was ruled out by a wider infrastructure decision to thick provision all disks in the environment, which reduces the risk of datastores filling up and critical business VMs being halted.

3. Option 1, smaller VMDKs spread across multiple vSphere datastores, would make these alerts disappear, but it creates issues when trying to execute a DR recovery of either the individual disks (Active/Passive) or the whole VM (Active/Cold). If even one VMDK is not replicated, or the VMDKs are mounted in the wrong order, the whole Windows volume is corrupted. Spanning one Windows volume across multiple VMDKs also complicates the recovery of array-based snapshot backups (e.g. via SMVI or NetBackup).

4. Option 2 goes against the constraint that the infrastructure monitoring solution cannot create individual alerting policies for a single datastore or a sub-group of datastores in the inventory. Even if individualised policies could be created, the affected VMDKs that consume 90-95% of a datastore would need to remain on that datastore, as moving them (i.e. from Tier 2 to Tier 1) would require the configured monitoring to change. The monitoring solution has no way to track these customised policies, which is most of the reason why global, environment-wide policies exist.

5. Option 3, the use of RDMs in Virtual Compatibility Mode, allows the VM to benefit from the features of VMFS, such as advanced file locking for data protection and vSphere snapshots. RDMs in this mode also allow the VMs to be managed by DRS (i.e. they can be vMotioned) and protected by vSphere HA.

Implications

1. The RDM mapping will need to be recorded clearly to avoid the lengthy process of discovering from scratch which physical LUN is presented to the virtual machine.

An example of how to record this mapping (a scripted sketch follows this list) is to:

A) Record the name of the VM that has the RDM.

B) Record the NAA number of the physical LUN(s) that are presented to the VM.

C) Record the virtual device node on the virtual disk controller where the RDM is mounted.

D) Record the Windows drive letter that this RDM is mounted to.

2. Additional paths will be consumed, reducing the total number of vSphere datastores that can be presented to the cluster.
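As an illustration only of the mapping record described in Implication 1, the sketch below shows one way points A to C could be collected automatically. It assumes pyVmomi is available and uses a placeholder vCenter hostname, credentials and VM name, none of which come from the original decision; the Windows drive letter (point D) is not visible through the vSphere API and still needs to be recorded from within the guest.

```python
# Hypothetical sketch: list the RDMs attached to a VM so the mapping can be
# recorded (VM name, LUN NAA identifier, controller/unit number).
# The vCenter hostname, credentials and VM name below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator", pwd="password",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# Locate the VM by name (placeholder name).
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "EXAMPLE-VM01")

# Map SCSI LUN uuid -> canonical NAA name using the host the VM runs on.
luns = {lun.uuid: lun.canonicalName
        for lun in vm.runtime.host.config.storageDevice.scsiLun}

for dev in vm.config.hardware.device:
    backing = getattr(dev, "backing", None)
    if isinstance(dev, vim.vm.device.VirtualDisk) and \
       isinstance(backing, vim.vm.device.VirtualDisk.RawDiskMappingVer1BackingInfo):
        print("VM:                   ", vm.name)                       # point A
        print("Physical LUN (NAA):   ", luns.get(backing.lunUuid,
                                                 backing.deviceName))   # point B
        print("Controller key / unit:", dev.controllerKey, "/",
              dev.unitNumber)                                           # point C
        print("Compatibility mode:   ", backing.compatibilityMode)

Disconnect(si)
```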


Example Architectural Decision – Storage I/O Control for Clusters Protected by SRM (Example 2 – Use SIOC)

Problem Statement

In an environment with one or more clusters containing virtual machines protected by SRM, what is the most appropriate configuration of Storage I/O Control?

Requirements

1. SRM solution must not be impacted

Assumptions

1. vSphere Version 4.1 or later

2. FC (block) based storage or NFS (file) based storage

3. Number of datastores is fairly static

Constraints

1. Storage I/O Control can prevent a datastore from being unmounted during a recovery, which can lead to errors being reported by SRM

Motivation

1. Where possible ensure consistent storage performance for all virtual machines

Architectural Decision

Enable and configure Storage I/O Control for all datastores.

Set the congestion threshold to 20ms.

Leave the shares values at their defaults.

Add a step to each SRM recovery plan as Step 1 and select the step placement “Before selected step”.

Configure the step type as “Command on SRM Server” and have it execute the scheduled task that disables SIOC prior to executing an SRM recovery (a scripted sketch follows).
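As a minimal sketch only, assuming pyVmomi and a placeholder vCenter hostname and credentials that are not part of the original decision, enabling SIOC with the 20ms congestion threshold across all datastores could look like the following. In practice each returned task would be monitored for completion, and any local or otherwise unsupported datastores would be filtered out first.

```python
# Hypothetical sketch: enable Storage I/O Control with a 20ms congestion
# threshold on every datastore, per the decision above.
# The vCenter hostname and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator", pwd="password",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.Datastore], True)
for ds in view.view:
    # On vSphere 5.1 or later, congestionThresholdMode may also need to be
    # set to "manual" for an explicit threshold to take effect.
    spec = vim.StorageResourceManager.IORMConfigSpec(enabled=True,
                                                     congestionThreshold=20)
    # Returns a task; a production script would wait for it to complete.
    content.storageResourceManager.ConfigureDatastoreIORM_Task(datastore=ds,
                                                               spec=spec)
    print("Enabling SIOC (20ms threshold) on", ds.name)

Disconnect(si)
```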

Justification

1. The benefits of Storage I/O Control can still be achieved without impacting the SRM solution

2. SIOC will not impact SRM failover as it will be disabled automatically as part of the SRM recovery plan

3. In the event the protected site is lost, SIOC will not prevent failover

Implications

1. Increased complexity for the SRM solution

2. An additional step to execute a “Command on SRM Server” is required

3. A Scheduled Task will need to be set up and configured with the setting “Allow task to be run on demand”

4. A script to disable SIOC will need to be prepared and configured for all datastores (a sketch follows this list)
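A minimal sketch of the disable script referenced in points 3 and 4 above, under the same assumptions as the enable sketch earlier (pyVmomi, placeholder vCenter hostname and credentials). The scheduled task would wrap a script like this, and the recovery plan's “Command on SRM Server” step would run it on demand, for example via schtasks /Run /TN "<task name>" on the SRM server (task name is a placeholder).

```python
# Hypothetical sketch: disable Storage I/O Control on all datastores prior to
# an SRM recovery. Intended to be wrapped by the scheduled task described above.
# The vCenter hostname and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator", pwd="password",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.Datastore], True)
for ds in view.view:
    spec = vim.StorageResourceManager.IORMConfigSpec(enabled=False)
    content.storageResourceManager.ConfigureDatastoreIORM_Task(datastore=ds,
                                                               spec=spec)
    print("Disabling SIOC on", ds.name)

Disconnect(si)
```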

Alternatives

1. Enable Storage I/O control and leave default settings

2. Enable storage I/O control and set share values on virtual machines

3. Enable Storage I/O control and set a lower “congestion threshold”

4. Enable Storage I/O control and set a higher “congestion threshold”

5. Disable Storage I/O control

Related Articles

1. Example Architectural Decision – Storage I/O Control for Clusters Protected by SRM (Example 1 – Don’t Use SIOC)