Nutanix Data Protection Capabilities

There is a lot of misinformation being spread in the HCI space about Nutanix data protection capabilities. One such example (below) was published recently on InfoStore.

Evaluating Data Protection for Hyperconverged Infrastructure

When I see articles like this, it really makes me wonder about the accuracy of content on these types of websites, as it seems articles are published without so much as a brief fact check from InfoStore.

Nonetheless, I am writing this post to confirm what data protection capabilities Nutanix provides.

  • Native In-Built Data Protection

Prior to my joining Nutanix in mid-2013, Nutanix already provided a hypervisor-agnostic, integrated backup and disaster recovery solution with centralised, consumer-grade management through our HTML 5 based PRISM GUI.

The built-in capabilities provide flexible, VM-centric policies to protect virtualized applications with different RPOs and RTOs, with or without application consistency.

The solution also supports local, remote, and cloud-based backups, as well as synchronous and asynchronous replication-based disaster recovery.

Currently supported cloud targets include AWS and Azure as shown below.

[Image: Cloud backup targets – AWS and Azure]

The below video shows, in real time, how to create application-consistent snapshots from the Nutanix PRISM GUI.

Nutanix can also perform one-to-one, one-to-many and many-to-one replication of application-consistent snapshots to onsite or offsite Nutanix clusters as well as cloud providers (AWS/Azure), ensuring choice and flexibility for customers.

Nutanix native data protection can also replicate between, and recover VMs to, clusters running different hypervisors.
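For those who prefer automation over the PRISM GUI, the same protection domain snapshots can be driven through the PRISM REST API. The following Python sketch is illustrative only: the endpoint path, request fields, cluster address, credentials and protection domain name are assumptions, so verify them against the REST API Explorer on your own cluster before relying on anything like this.

```python
# Minimal sketch: request an ad hoc (out-of-band) snapshot of a Nutanix
# protection domain via the PRISM REST API. The endpoint, field names,
# cluster VIP, credentials and PD name below are assumptions for
# illustration only -- check the REST API Explorer for your AOS release.
import requests
from requests.auth import HTTPBasicAuth

PRISM = "https://prism-cluster.example.com:9440"   # hypothetical cluster VIP
PD_NAME = "Exchange-PD"                            # hypothetical protection domain

session = requests.Session()
session.auth = HTTPBasicAuth("admin", "password")  # use a real credential store in practice
session.verify = False                             # lab only; use CA-signed certificates in production

payload = {
    "app_consistent": True,                # request an application-consistent snapshot (assumed field)
    "snapshot_retention_time_secs": 86400, # keep the snapshot for 24 hours (assumed field)
}

resp = session.post(
    f"{PRISM}/api/nutanix/v2.0/protection_domains/{PD_NAME}/oob_schedules",
    json=payload,
    timeout=30,
)
resp.raise_for_status()
print("Snapshot request accepted:", resp.json())
```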

  • Commvault IntelliSnap Integration

Nutanix also provides integration with Commvault IntelliSnap, which allows existing Commvault customers to continue leveraging their investment in the market-leading data protection product and to take advantage of other features where required.

The below shows how agentless backups of virtual machines are supported with the Acropolis Hypervisor (AHV). Note: Commvault is also fully supported with Hyper-V and ESXi.

Because Commvault calls the Nutanix Distributed Storage Fabric (NDSF) directly, snapshots are taken quickly and efficiently without any dependency on the hypervisor.

  • Hypervisor-specific support such as VMware vStorage APIs for Data Protection (VADP)

Nutanix also supports solutions which leverage VADP, allowing customers with existing investments in products such as Veeam and NetBackup to continue with their existing strategy until such time as they want to migrate to Nutanix native data protection or solutions such as Commvault.

  • In-Guest Agents

Nutanix supports the use of in-guest agents, which are typically very inefficient with centralised SAN/NAS storage. However, thanks to data locality and NDSF being a truly distributed platform, in-guest incremental-forever backups perform extremely well on Nutanix, as the traditional choke points such as the network, storage controllers and RAID packs have been eliminated.

Summary:

As one size does not fit all in the world of IT, Nutanix provides customers with choice to meet a wide range of market segments and requirements, with strong native data protection capabilities as well as third-party integration.

Melbourne VMUG Feb 7th 2013 – Optimizing VMware vSphere, vCloud and VDI Environments with Intelligent Storage

Last month I presented a Community Session at the Melbourne VMUG:

“Optimizing VMware vSphere, vCloud and Desktop Environments with Intelligent Storage”

For those who are interested, you can watch the recorded session here.

A special thanks to Craig Waters (@cswaters1), Melbourne VMUG leader, for organizing the Melbourne VMUG and recording/encoding this session for the VMware community.

Example Architectural Decision – Storage DRS Configuration for NFS Datastores

Problem Statement

In a vSphere environment, a NAS array presents thin provisioned NFS mounts (datastores) to the hosts. The storage has deduplication enabled across the datastores being used for the SDRS cluster. What is the most suitable configuration for SDRS to ensure the underlying storage efficiencies are not compromised while maintaining an even distribution of utilized capacity and I/O across all datastores?

Assumptions

1. vSphere 5.0 or later
2. NFS Based storage
3. NFS Mounts (Datastores) are Thin Provisioned
4. Deduplication is enabled on the array
5. VAAI is supported by the array and enabled across the vSphere environment
6. All datastores in a datastore cluster are of the same RAID type and offer similar performance due to having a similar spindle count
7. All datastores are presented to all hosts within the cluster

Motivation

1. Ensure storage efficiencies are not negatively impacted
2. Minimize the vSphere administrator's workload where possible

Architectural Decision

Set the Storage DRS automation setting to “No Automation (Manual Mode)” with the following thresholds (a configuration sketch follows the Advanced Options list below):

  • Set “Utilized Space” threshold to 80%
  • Set “I/O latency” to 15ms
  • I/O metric inclusion – Enabled

Advanced Options

  • No recommendations until utilization difference between source and destination is: 10%
  • Evaluate I/O load every 8 hours
  • I/O imbalance threshold: 3
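The same settings can also be applied programmatically. The pyVmomi sketch below is a minimal illustration of one way to do this: the vCenter address, credentials and the datastore cluster name "NFS-DatastoreCluster" are hypothetical placeholders, and the spec simply mirrors the thresholds chosen above; validate it in a test environment before use.

```python
# Sketch: apply the Storage DRS settings above to an existing datastore cluster
# (StoragePod) via pyVmomi. Hostname, credentials and the cluster name are
# hypothetical placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()            # lab only; validate certificates in production
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

# Locate the datastore cluster (StoragePod) by name.
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.StoragePod], True)
pod = next(p for p in view.view if p.name == "NFS-DatastoreCluster")

pod_spec = vim.storageDrs.PodConfigSpec()
pod_spec.enabled = True
pod_spec.defaultVmBehavior = "manual"             # "No Automation (Manual Mode)"
pod_spec.ioLoadBalanceEnabled = True              # I/O metric inclusion - Enabled
pod_spec.loadBalanceInterval = 480                # evaluate I/O load every 8 hours (value in minutes)

pod_spec.spaceLoadBalanceConfig = vim.storageDrs.SpaceLoadBalanceConfig(
    spaceUtilizationThreshold=80,                 # Utilized Space threshold: 80%
    minSpaceUtilizationDifference=10)             # 10% source/destination utilization difference

pod_spec.ioLoadBalanceConfig = vim.storageDrs.IoLoadBalanceConfig(
    ioLatencyThreshold=15,                        # I/O latency: 15 ms
    ioLoadImbalanceThreshold=3)                   # I/O imbalance threshold: 3

sdrs_spec = vim.storageDrs.ConfigSpec(podConfigSpec=pod_spec)
task = content.storageResourceManager.ConfigureStorageDrsForPod_Task(
    pod=pod, spec=sdrs_spec, modify=True)
print("Reconfigure task submitted:", task.info.key)

Disconnect(si)
```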

Justification

1. Setting Storage DRS to “No Automation (Manual Mode)” ensures that the administrator can confirm the recommendation will not negatively impact the efficiency of deduplication or the thin provisioned NFS mounts
2. When creating a new Virtual Machine, in the “Ready to complete” window, tick the “Show all storage recommendations” check box to review Storage DRS recommendations and override the recommendations where required
3. Where a VM is deduplicated on the source datastore and is moved to the destination datastore, this write activity is considered new data which will be scanned by the post-process deduplication, consuming valuable CPU cycles on the array
4. “XCOPY” is not supported for NFS; as such, Storage vMotion activity can only be offloaded to the array using the “Full File Clone” primitive when a virtual machine is powered off.
5. Array level snapshots cannot be migrated with the VM using Storage DRS. If virtual machines were automatically moved, the array level snapshot relationship with the VM would be broken and it could not be leveraged
6. NFS datastores can be set to autogrow by a predefined size in the event they reach a predefined utilization threshold
7. Where a significant I/O imbalance is detected by SDRS, the vSphere administrator can consider the impact of the Storage vMotion and where suitable apply the SDRS recommendation
8. SDRS still provides valuable “initial placement” for new virtual machines which will help avoid a situation where datastores are unevenly balanced from a capacity perspective
9. Storage DRS will still analyse I/O, and where an imbalance is identified the vSphere administrator can choose to apply the SDRS recommendation to address it

Implications

1. When selecting datastores for the datastore cluster, having VASA enabled allows the “System Capability” column to be populated in the “New Datastore Cluster” wizard to ensure suitable datastores of similar performance, RAID type and features are grouped together
2. A vSphere administrator will need to review SDRS recommendations and apply them where appropriate (a scripted review sketch follows this list)
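Because the datastore cluster runs in manual mode, recommendations must be reviewed before being applied. The pyVmomi sketch below shows one possible way to list pending SDRS recommendations for that review; again, the vCenter address, credentials and cluster name are hypothetical placeholders rather than anything prescribed by this decision.

```python
# Sketch: list pending Storage DRS recommendations for a manually managed
# datastore cluster so an administrator can review the reasons before
# selectively applying them. Names and credentials are hypothetical.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()            # lab only; validate certificates in production
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder, [vim.StoragePod], True)
pod = next(p for p in view.view if p.name == "NFS-DatastoreCluster")

# Ask SDRS to refresh its recommendations, then print them for review.
content.storageResourceManager.RefreshStorageDrsRecommendation(pod=pod)
for rec in pod.podStorageDrsEntry.recommendation:
    print(rec.key, rec.reasonText)

# After reviewing a recommendation, an administrator could apply it by key:
# content.storageResourceManager.ApplyStorageDrsRecommendation_Task(key=[rec.key])

Disconnect(si)
```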

Alternatives

1. Use “Fully Automated”