Example Architectural Decision – Jumbo Frames for IP Storage (Do not use Jumbo Frames)

Problem Statement

When using IP-based storage over a converged 10Gb network, should Jumbo Frames be used?

Requirements

1. Fully Supported storage

2. Maximum vSphere environment availability

3. Maximize performance where possible

Assumptions

1. Converged 10Gb network which is highly available

2. Two (or more) 10Gb connections per ESXi host

Constraints

1. No dedicated network for IP storage traffic

Motivation

1. Simplify the environment

Architectural Decision

Do not use Jumbo Frames

Justification

1. Reduce the complexity in the environment for initial implementation

2. Simplify ongoing support / troubleshooting

3. For a Jumbo Frame to be transmitted without fragmentation, all devices end-to-end must support and be configured for Jumbo Frames

4. While Jumbo Frames can deliver performance benefits for IP storage, these gains are not seen consistently and depend on the I/O types involved

5. Ensure IP storage packets are not fragmented or dropped by misconfigured devices or by devices that do not support Jumbo Frames

6. Storage performance for the virtual environment will generally be constrained by the storage array controllers, not by the storage area network

7. Ensure packet fragmentation does not occur, as all devices support the default MTU of 1500

8. Increasing the MTU decreases the number of packets required to move the same amount of data, but where the bottleneck is throughput (bytes) rather than packet rate there will be minimal or no benefit (see the sketch after this list)

9. Jumbo Frames will only assist where the network is constrained at the interrupt (per-packet processing) level
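
To illustrate points 8 and 9, the following is a minimal sketch (in Python, using purely illustrative figures) of how many frames are required to move the same amount of data at the default 1500-byte MTU versus a 9000-byte Jumbo Frame MTU. It assumes IPv4 and TCP with standard 20-byte headers and no options.

# Rough comparison of frame counts for the same data transfer at MTU 1500 vs 9000.
# Assumes IPv4 + TCP with 20-byte headers each and no options (illustrative only).

IP_HEADER = 20                    # bytes, IPv4 header without options
TCP_HEADER = 20                   # bytes, TCP header without options
TRANSFER_BYTES = 10 * 1024**3     # illustrative 10 GiB storage read

def frames_required(mtu: int, transfer_bytes: int = TRANSFER_BYTES) -> int:
    """Number of frames needed to carry transfer_bytes of payload at a given MTU."""
    payload_per_frame = mtu - IP_HEADER - TCP_HEADER
    return -(-transfer_bytes // payload_per_frame)    # ceiling division

standard = frames_required(1500)   # default MTU
jumbo = frames_required(9000)      # Jumbo Frame MTU

print(f"MTU 1500: {standard:,} frames")
print(f"MTU 9000: {jumbo:,} frames")
print(f"Reduction: {100 * (1 - jumbo / standard):.1f}% fewer frames for the same payload")

The per-frame (and therefore interrupt) overhead drops by roughly a factor of six, but the bytes on the wire are essentially unchanged, which is why a throughput-bound workload sees little benefit while a packet-rate-bound one may.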

Implications

1. IP storage may deliver lower performance in some circumstances than it would with Jumbo Frames enabled

Alternatives

1. Use Jumbo Frames (a verification sketch follows below)
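
If this alternative were chosen, the end-to-end Jumbo Frame support described in justification point 3 above should be verified before production IP storage traffic depends on it. The sketch below is one way to do this from a Linux management host, assuming the iputils ping utility is available; the storage target addresses are hypothetical, and on an ESXi host the equivalent test is typically run with vmkping using its don't-fragment option.

# Minimal end-to-end Jumbo Frame check from a Linux host: send pings with the
# don't-fragment bit set and a payload sized so the full IP packet is 9000 bytes
# (8972 payload bytes + 20-byte IP header + 8-byte ICMP header = 9000 bytes).
import subprocess

STORAGE_TARGETS = ["192.168.10.10", "192.168.10.11"]   # hypothetical storage interfaces

def jumbo_path_ok(target: str, payload: int = 8972) -> bool:
    """Return True if a 9000-byte packet reaches the target without fragmentation."""
    result = subprocess.run(
        ["ping", "-c", "3", "-M", "do", "-s", str(payload), target],
        capture_output=True,
        text=True,
    )
    return result.returncode == 0

for host in STORAGE_TARGETS:
    status = "OK" if jumbo_path_ok(host) else "FAILED (MTU mismatch or unsupported device in path)"
    print(f"{host}: {status}")

A failure for any target indicates a device in the path that is not configured for (or does not support) an MTU of 9000, which is exactly the misconfiguration risk the decision above avoids.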

Related Articles

1. Example Architectural Decision – Jumbo Frames for IP Storage (Use Jumbo Frames)

Contributors

Thanks to Rob McNab (IBM) and Peter McCrystal (IBM) for their input into this example architectural decision.

Example Architectural Decision – Site Recovery Manager Server – Physical or Virtual?

Problem Statement

To ensure the production vSphere environment(s) can meet or exceed the required RTOs in the event of a declared site failure, what is the most suitable way to deploy VMware Site Recovery Manager: on a physical or a virtual machine?

Requirements

1. Meet/Exceed RTO requirements

2. Ensure solution is fully supported

3. SRM must be highly available, or able to be recovered rapidly, to ensure management and recovery of the virtual infrastructure

4. Where possible, reduce the CAPEX and OPEX for the solution

5. Ensure the environment can be easily maintained in BAU

Assumptions

1. Sufficient compute capacity in the Management cluster for an additional VM

2. SRM database is hosted on an SQL server

3. vSphere cluster (ideally the Management cluster) has N+1 availability

Constraints

1. None

Motivation

1. Reduce CAPEX and OPEX

2. Reduce the complexity of BAU maintenance / upgrades

3. Reduce power / cooling / rack space usage in the datacenter

Architectural Decision

Install Site Recovery Manager on a Virtual machine

Justification

1. Ongoing datacenter costs relating to power, cooling, and rack space are avoided

2. Placing Site Recovery Manager on a virtual machine ensures the application benefits from the availability, load-balancing, and fault-resilience capabilities provided by vSphere

3. The CAPEX of a virtual machine is lower than that of a physical system, especially when taking into account the network and storage connectivity that additional hardware would require if a physical server were used

4. The OPEX of a virtual machine is lower than that of a physical system due to the absence of hardware maintenance, minimal or no additional power usage, and no additional cooling costs

5. Improved scalability and the ability to dynamically add resources where the VM's resource consumption increases. Note: the guest operating system must support Hot Add / Hot Plug, and these features must be enabled while the VM is shut down (see the sketch after this list). Where they are not supported, virtual hardware can be added during a short outage.

6. Improved manageability, as the VMware abstraction layer makes day-to-day tasks such as backup and recovery easier

7. The ability to migrate non-disruptively to new hardware where EVC is enabled and configured with a compatible baseline across hosts within the vSphere datacenter
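
As a companion to point 5 above, the following is a minimal sketch of enabling CPU and memory Hot Add on the SRM virtual machine using pyVmomi. The vCenter address, credentials, and VM name are hypothetical placeholders, and the settings can only be changed while the VM is powered off.

# Sketch: enable CPU and memory Hot Add on a powered-off VM using pyVmomi.
# The vCenter address, credentials, and VM name below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl._create_unverified_context()        # lab use only; validate certificates in production
si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=context)
content = si.RetrieveContent()

# Locate the VM by name (hypothetical name for the SRM server VM).
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
vm = next((v for v in view.view if v.name == "srm-server-01"), None)

if vm is None:
    print("VM not found")
elif vm.runtime.powerState != vim.VirtualMachinePowerState.poweredOff:
    # Hot Add settings can only be changed while the VM is powered off.
    print("Power off the VM before enabling Hot Add / Hot Plug.")
else:
    spec = vim.vm.ConfigSpec(cpuHotAddEnabled=True, memoryHotAddEnabled=True)
    vm.ReconfigVM_Task(spec=spec)                 # a real script would wait for task completion

Disconnect(si)

Once the guest operating system supports it and the flags above are enabled, vCPU and memory can subsequently be added to the running SRM VM without an outage.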

Alternatives

1. Place SRM on a physical server

Implications

1. For some storage arrays, the SRM server needs access to admin LUNs, and using a virtual machine may increase complexity due to the requirement for RDMs

I would like to thank James Wirth, VCDX #83 (@jimmywally81), for his contribution to this example architectural decision.

Related Articles

1. Site Recovery Manager Deployment Location

2. Swap file location for SRM protected VMs
