IT Infrastructure Business Continuity & Disaster Recovery (BC/DR) – Coronavirus edition

Back in 2014, I wrote about Hardware support contracts & why 24×7 4 hour onsite should no longer be required. For those of you who haven’t read the article, I recommend doing so prior to reading this post.

In short, the post discussed the typical old-school requirement for expensive 24/7, 2 or 4-hour maintenance contracts and how these become all but redundant when IT solutions are designed with appropriate levels of resiliency and self-healing capabilities capable of meeting the business continuity requirements.

Some of the key points I made regarding hardware maintenance contracts included:

a) Vendors failing to meet SLA for onsite support.

b) Vendors failing to have the required parts available within the SLA.

c) Replacement HW being refurbished (common practice) and being faulty.

d) The more proprietary the HW, the more likely replacement parts will not be available in a timely manner.

All of these are applicable to all vendors and can significantly impact the ability to get the IT infrastructure back online or back to a resilient state where subsequent failures may be tolerated without downtime or data loss.

I thought with the current Coronavirus pandemic, it’s important to revisit this topic and see what we can do to improve the resiliency of our critical IT infrastructure and ensure business continuity no matter what the situation.

Let’s start with “Vendors failing to meet SLA for onsite support.”

At the time of writing, companies the world over are asking employees to work from home and operate on skeleton staff. This will no doubt impact vendors’ ability to provide their typical levels of support.

Governments are also encouraging social distancing – asking people to isolate themselves and avoid unnecessary travel.

We would be foolish to assume this won’t impact vendors’ ability to provide support, especially hardware support.

What about Vendors failing to have the required parts available within the SLA?

Currently, I’m seeing significantly reduced flight schedules, e.g. from the USA to Europe, which will no doubt delay parts shipments and make target service level agreements harder to meet.

Regarding vendors shipping potentially faulty refurbished hardware (a common practice), this risk in itself isn’t increased, but if it does occur, the shipment of alternative/new parts is likely to be further delayed.

Lastly, infrastructure leveraging proprietary HW makes it more likely that replacement parts will not be available in a timely manner.

What are some of the options Enterprise Architects can offer their customers/employers when it comes to delivering highly resilient infrastructure to meet/exceed business continuity requirements?

Let’s start with the assumption that replacement hardware isn’t available for one week, which is likely much more realistic than same-day replacement for the majority of customers considering the current pandemic.

Business Continuity Requirement #1: Infrastructure must be able to tolerate at least one component failure and have the ability to self heal back to a resilient state where a subsequent failure can be tolerated.

By component failure, I’m talking about things like:

a) HDD/SSDs

b) Physical server/node

c) Networking device such as a switch

d) Storage controller (SAN/NAS controllers, or in the case of HCI, a node)

HDDs/SSDs have traditionally been protected using RAID and hot spares, although this is becoming less common due to RAID’s inherent limitations and the high impact of a drive failure.
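
To illustrate why, here’s a rough back-of-the-envelope sketch. All figures (drive size, throughput, number of participating drives) are hypothetical assumptions, not vendor specifications; the point is simply that a hot-spare rebuild funnels all writes into one drive, while a distributed rebuild shares the work.

```python
# Rough sketch (hypothetical figures): hot-spare rebuild vs distributed rebuild.

DISK_CAPACITY_GB = 8000   # assumed 8 TB drive
DISK_WRITE_MBPS = 150     # assumed sustained write throughput per drive

def raid_hot_spare_rebuild_hours(capacity_gb=DISK_CAPACITY_GB,
                                 write_mbps=DISK_WRITE_MBPS):
    """All rebuilt data is written to a single hot-spare drive."""
    return (capacity_gb * 1024) / write_mbps / 3600

def distributed_rebuild_hours(capacity_gb=DISK_CAPACITY_GB,
                              write_mbps=DISK_WRITE_MBPS,
                              participating_drives=40):
    """Rebuilt data is spread across many drives in the cluster."""
    return (capacity_gb * 1024) / (write_mbps * participating_drives) / 3600

print(f"RAID hot-spare rebuild : {raid_hot_spare_rebuild_hours():.1f} hours")
print(f"Distributed rebuild    : {distributed_rebuild_hours():.1f} hours")
```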

For physical servers/nodes, products like VMware vSphere, Microsoft Hyper-V and Nutanix AHV all have “High Availability” functions which allow virtual machines to recover onto other physical servers in a cluster in the event of a physical server failure.

For networking, typically leaf/spine topologies provide a sufficient level of protection with a minimum of dual connections to all devices. Depending on the criticality of the environment, quad connections may be considered/required.

Lastly with Storage Controllers, traditional dual controller SAN/NAS have a serious constraint when it comes to resiliency in that they require HW replacement to restore resiliency. This is one reason why Hyper-Converged Infrastructure (a.k.a HCI) has become so popular: Some HCI products have the ability to tolerate multiple storage controller failures and continue to function and self-heal thanks to their distributed/clustered architecture.

So with these things in mind, how do we meet our Business Continuity Requirement?

Disclaimer: I work for Nutanix, a company that provides Hyper-Converged Infrastructure (HCI), so I’ll be using this technology as my example of how resilient infrastructure can be designed. With that said, the article and the key points I highlight are conceptual and can be applied to any environment regardless of vendor.

For example, Nutanix uses a Scale Out Shared Nothing Architecture to deliver highly resilient, self healing infrastructure. The example (covered in detail in the related article on Scale Out Shared Nothing Architecture Resiliency) uses a small cluster of just 5 nodes: the environment suffers a physical server failure, self heals both the CPU/RAM and storage layers back to a fully resilient state, and then tolerates a further physical server failure.

After the second physical server failure, it’s critical to note the Nutanix environment has self healed back to a fully resilient state and has the ability to tolerate another physical server failure.

In fact the environment has lost 40% of its infrastructure and Nutanix still maintains data integrity & resiliency. If a third physical server failed, the environment would continue to function maintaining data integrity, though it may not be able to tolerate a subsequent disk failure without data becoming unavailable.

So in this simple example of a small 5-node Nutanix environment, up to 60% of the physical servers can be lost and the business would continue to function.
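
As a minimal sketch of the headroom check behind this kind of claim (the per-node capacity, data footprint, replication factor and 3-node minimum below are illustrative assumptions, not Nutanix specifications):

```python
# Sketch (assumed figures): can a small RF2 cluster self heal back to a fully
# resilient state after successive node failures? Re-protection needs enough
# surviving nodes and enough spare capacity to re-create the lost replicas.

NODE_CAPACITY_TB = 20        # assumed usable capacity per node
CLUSTER_USED_TB = 30         # assumed data stored (including both RF2 copies)
MIN_NODES_TO_SELF_HEAL = 3   # assumption: a distributed RF2 cluster needs at
                             # least 3 surviving nodes to re-protect itself

def can_self_heal(total_nodes, failed_nodes,
                  node_capacity_tb=NODE_CAPACITY_TB,
                  used_tb=CLUSTER_USED_TB):
    """True if the surviving nodes can rebuild full resiliency on their own."""
    surviving = total_nodes - failed_nodes
    if surviving < MIN_NODES_TO_SELF_HEAL:
        return False
    return surviving * node_capacity_tb >= used_tb

for failed in range(4):
    print(f"5-node cluster, {failed} node(s) failed -> "
          f"self heal to full resiliency: {can_self_heal(5, failed)}")
```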

With all these component failures, it’s important to note the Nutanix platform self healing was completed without any human intervention.

For those who want more technical detail, check out my post which shows Nutanix Node (server) failure rebuild performance.

From a business perspective, a Nutanix environment can be designed so that the infrastructure can self heal from a node failure in minutes, not hours or days. The platform’s ability to self heal in a timely manner is critical to reduce the risk of a subsequent failure causing downtime or data loss.

Key Point: The ability for infrastructure to self heal back to a fully resilient state following one or more failures WITHOUT human intervention or hardware replacement should be a firm requirement for any new or upgraded infrastructure.

So the good news for Nutanix customers is that during this pandemic or future events, assuming the infrastructure has been designed to tolerate one or more failures and self heal, the potential (if not likely) delays in hardware replacement are unlikely to impact business continuity.

For those of you who are concerned after reading this that your infrastructure may not provide the business continuity you require, I recommend you get in touch with the vendor/s who supplied the infrastructure, go through and document the failure scenarios, the impact each has on the environment, and how the solution recovers back to a fully resilient state.

Worst case, you’ll identify gaps which will need attention, but think of this as a good thing because this process may identify issues which you can proactively resolve.

Pro Tip: Where possible, choose a standard platform for all workloads.

As discussed in “Things to consider when choosing infrastructure”, choosing a standard platform to support all workloads can have major advantages such as:

  1. Reduced silos
  2. Increased infrastructure utilisation (due to reduced fragmentation of resources)
  3. Reduced operational risk/complexity (due to fewer components)
  4. Reduced OPEX
  5. Reduced CAPEX

The article summarises by stating:

“if you can meet all the customer requirements with a standard platform while working within constraints such as budget, power, cooling, rack space and time to value, then I would suggest you’re doing yourself (or your customer) a dis-service by not considering using a standard platform for your workloads.”

What are some of the key factors to improve business continuity?

  1. Keep it simple (stupid!) and avoid silos of bespoke infrastructure where possible.
  2. Design BEFORE purchasing hardware.
  3. Document BUSINESS requirements AND technical requirements.
  4. Map the technical solution back to the business requirements i.e.: How does each design decision help achieve the business objective/s.
  5. Document risks and how the solution mitigates & responds to the risks.
  6. Perform operational verification i.e.: Validate the solution works as designed/assumed & perform this testing after initial implementation & maintenance/change windows.

Considerations for CIOs / IT Management:

  1. Cost of performance degradation, such as reduced sales transactions per minute and/or reduced employee productivity/morale
  2. Cost of downtime, such as a total outage of IT systems, including lost revenue & impact to your brand
  3. Cost of increased resiliency compared to points 1 & 2
    1. I.e.: It’s often much cheaper to implement a more resilient solution than to suffer even a single outage annually (see the sketch after this list)
  4. How employees can work from home and continue to be productive
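
To make point 3 concrete, here is a simple illustrative calculation. Every figure below is a made-up assumption to be replaced with your own numbers, and brand damage isn’t modelled at all:

```python
# Illustrative only: comparing the cost of a single outage with the incremental
# cost of extra resiliency. All figures are hypothetical assumptions.

REVENUE_PER_HOUR = 50_000        # assumed revenue processed per hour
EMPLOYEES_AFFECTED = 500         # assumed staff idled by an outage
COST_PER_EMPLOYEE_HOUR = 60      # assumed loaded hourly cost per employee

def cost_of_outage(hours, severity=1.0):
    """Cost of an outage; severity < 1.0 models partial degradation."""
    revenue_loss = REVENUE_PER_HOUR * hours * severity
    productivity_loss = EMPLOYEES_AFFECTED * COST_PER_EMPLOYEE_HOUR * hours * severity
    return revenue_loss + productivity_loss

EXTRA_RESILIENCY_COST = 150_000  # assumed cost of an additional (N+1) node + licensing

one_outage = cost_of_outage(hours=8)
print(f"Cost of one 8 hour outage : ${one_outage:,.0f}")
print(f"Cost of extra resiliency  : ${EXTRA_RESILIENCY_COST:,.0f}")
print(f"Resiliency cheaper than a single outage: {EXTRA_RESILIENCY_COST < one_outage}")
```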

Here are a few things to ask of your architect/s when designing infrastructure:

  1. Document failure scenarios and their impact on the infrastructure (one possible structured format is sketched after this list).
  2. Document how the environment can be upgraded to provide higher levels of resiliency.
  3. Document the Recovery Time (RTO) and Recovery Point Objectives (RPO) and how the environment meets/exceeds these.
  4. Document under what circumstances the environment may/will NOT meet the desired RPO/RTOs.
  5. Design & Document a “Scalable and repeatable model” which allows the environment to be scaled without major re-design or infrastructure replacement to cater for unforeseen workloads (e.g. a sudden increase in employees working from home).
  6. Avoid creating unnecessary silos of dissimilar infrastructure.
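
One lightweight way to capture the answers to points 1, 3 and 4 is a structured register that can be reviewed and re-tested after every change window. The sketch below is purely illustrative; the component names and RTO/RPO figures are hypothetical placeholders, not recommendations.

```python
# Sketch (hypothetical entries): capturing failure scenarios, impact and
# recovery targets as structured data so they can be reviewed and tested.

from dataclasses import dataclass

@dataclass
class FailureScenario:
    component: str      # what fails
    impact: str         # effect on the environment while degraded
    rto_minutes: int    # target time to restore service
    rpo_minutes: int    # maximum tolerable data loss
    self_heals: bool    # recovers to a resilient state without HW replacement

scenarios = [
    FailureScenario("Single node", "N-1 performance, VMs restart via HA", 15, 0, True),
    FailureScenario("Single SSD/HDD", "Rebuild traffic, no VM impact", 0, 0, True),
    FailureScenario("Leaf switch", "Half of the links lost, no VM impact", 0, 0, True),
    FailureScenario("Full site", "DR failover required", 240, 15, False),
]

for s in scenarios:
    print(f"{s.component:15} RTO={s.rto_minutes:>3} min  RPO={s.rpo_minutes:>3} min  "
          f"self-heals={s.self_heals}")
```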

Related Articles:

  1. Scale Out Shared Nothing Architecture Resiliency by Nutanix
  2. Hardware support contracts & why 24×7 4 hour onsite should no longer be required.
  3. Nutanix | Scalability, Resiliency & Performance | Index
  4. Nutanix vs VSAN / VxRAIL Comparison Series
  5. How to Architect a VSA , Nutanix or VSAN solution for >=N+1 availability.
  6. Enterprise Architecture and avoiding tunnel vision

Hardware support contracts & why 24×7 4 hour onsite should no longer be required.

In recent weeks, I have seen numerous RFQs which have the requirement for 24×7 2 or 4hr onsite HW replacement, and while this is not uncommon, I’ve been wondering why this is the case.

Over my I.T. career, now approaching 15 years, I have in the majority of cases strongly recommended in my designs and Bills of Materials (BoMs) that customers buy 24×7 4-hour onsite hardware maintenance contracts for equipment such as Compute, Storage Arrays, Storage Area Networking and IP network devices.

I have never found it difficult to justify this recommendation, because traditionally if a component in the datacenter fails, such as a Storage Controller, this generally has a high impact on the customer’s business and could cost tens or hundreds of thousands of dollars, or even millions, in revenue depending on the size of the customer.

Not only is losing a Storage Controller generally high impact, it is also high risk, as the environment may no longer have redundancy and a subsequent failure could (and likely would) result in a full outage.

So in this example, where a typical storage solution suffers a Storage Controller failure resulting in degraded performance (due to losing 50% of the controllers) and high impact/risk to the customer, purchasing a 24×7 4-hour, or even 24×7 2-hour, support contract makes perfect sense! The question is, why choose HW (or a solution) which puts you at high risk after a single component failure in the first place?

With technology changing fast, over the last year or so I’ve been involved in many customer meetings where I am asked what I recommend in terms of hardware maintenance contracts (for Nutanix customers).

Normally this question/conversation happens after the discussion about the technology, where I explain various failure scenarios and how resilient a Nutanix cluster is.

My recommendation goes something like this.

If you architect your solution for your desired level of availability (e.g. N+2), there is no need to buy a 24×7 4hr hardware maintenance contract; the default Next Business Day option is perfectly fine.

Justification:

1. In the event of even an entire node failure, the Nutanix cluster will have automatically self healed back to the configured resiliency factor (2 or 3) well before even a 2hr support contract can provide a technician to be onsite, diagnose the issue and replace hardware.

2. Assuming the HW is replaced at the 2hr mark (not typical in my experience), AND assuming Nutanix was not automatically self healing prior to the drive/node replacement, the replacement drive or node would then START the process of self healing. So the actual time to recovery would be greater than 2hrs. In the case of Nutanix, self healing begins almost immediately.

3. If a cluster is sized for the desired level of availability based on business requirements, say N+2, a node can fail, Nutanix will automatically self heal, and the cluster can then tolerate a subsequent failure with the ability to fully self heal back to the configured resiliency factor (2 or 3) again.

4. If a cluster is sized to a customer requirement of only N+1, a node can fail and Nutanix will automatically and fully self heal. Then, in the unlikely (but possible) event of a subsequent failure (i.e. a 2nd node failure before the Next Business Day warranty replaces the failed HW), the Nutanix cluster will still continue to operate.

5. The performance impact of a node failure in a Nutanix environment is N-1, so in a worst case scenario (3 node cluster) the impact is 33%, compared to a 2 controller SAN/NAS where the impact would be 50%. In a 4 node cluster the impact is only 25%, and for customers with, say, 8 nodes, only 12.5% (illustrated in the sketch following this list). The bigger the cluster, the lower the impact. Nutanix recommends N+1 up to 16 nodes, and N+2 up to 32 nodes. Beyond 32 nodes, higher levels of availability may be desired based on customer requirements.
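
The sketch below reproduces those impact figures using a simple 1/N approximation of my own, assuming resources are spread evenly across homogeneous nodes:

```python
# Quick sketch: approximate share of cluster resources lost when one node
# (or one of two SAN/NAS controllers) fails, assuming homogeneous nodes.

def node_failure_impact_pct(nodes, failed=1):
    """Roughly 1/N of the cluster's controllers/capacity per failed node."""
    return 100.0 * failed / nodes

for nodes in (2, 3, 4, 8, 16, 32):
    label = ("dual-controller SAN/NAS" if nodes == 2
             else f"{nodes}-node cluster")
    print(f"{label:25} -> {node_failure_impact_pct(nodes):.1f}% impact")
```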

The risk and impact of the failure scenario/s is key. In the case of Nutanix, because of the self healing capability, and the fact that all controllers and SSDs/HDDs in the cluster participate in the self heal, recovery can be done very quickly and with low impact. So the impact of the failure is low (N-1) and the recovery is quick, meaning the risk to the business is low, dramatically reducing (and in my opinion potentially removing) the requirement for a 24×7 2 or 4hr support contract for Nutanix customers.

In Summary:

1. The decision on what hardware maintenance contract is appropriate is a business level decision which should be based in part on a comprehensive risk assessment done by an experienced enterprise architect, intimately familiar with all the technology being used.

2. If the recommendation from the trusted, experienced enterprise architect is that the risk of HW failure causing high impact or an outage to the business is so high that purchasing 4hr or 2hr onsite HW replacement is required, my advice would be to reconsider whether the proposed “solution” meets the business requirements. Only if you are constrained to that solution should you purchase a 24×7 2 or 4hr support contract.

3. Being heavily dependent on hardware replacement to restore a solution’s resiliency/performance is in itself a high risk to the business.

AND

4. In my experience, it is not uncommon to have problems getting onsite support or hardware replacement regardless of the support contract / SLA. Sometimes this is outside a vendor’s control, but most vendors will experience one or more of the following issues, which I have personally experienced on numerous occasions in previous roles:

a) Vendors failing to meet SLA for onsite support.
b) Vendors failing to have the required parts available within the SLA.
c) Replacement HW being refurbished (common practice) and being faulty.
d) The more proprietary the HW, the more likely replacement parts will not be available in a timely manner.

Note: Support contracts don’t promise a resolution within the 2hr / 4hr window; they simply promise somebody will be onsite, and in some cases only after you have gone through troubleshooting with the vendor on the phone, sent logs for analysis and so on. So the reality is, the 2hr or 4hr part doesn’t hold much value.

If you have accepted the solution being sold to you, OR you’re an architect recommending a solution which is enterprise grade and highly resilient with self healing capabilities, then consider why you need a 24×7 2hr or 4hr hardware maintenance contract if the solution is architected for the required availability level (i.e. N+1 / N+2 etc.).

So with your next infrastructure purchase (or when making your recommendations if you’re an architect), carefully consider what solution you’re investing in (or proposing), and if you feel an aggressive 2hr/4hr HW support contract is required, I would recommend revisiting the requirements, as you may well be buying (or recommending) something that isn’t resilient enough to meet them.

Food for thought.