What’s .NEXT 2016 – Enhanced & Adaptive Compression

There are many “under the covers” capabilities of the Acropolis Distributed Storage Fabric (ADSF) which have been designed and built not for short-term marketing “checkboxes” but with a long-term vision in mind.

As a result, Nutanix has been able to continually innovate and stay ahead of the HCI market while building a next generation platform (including the Acropolis Hypervisor, AHV) for the enterprise cloud.

Nutanix is also 100% software defined, which makes adding new features and enhancing existing ones possible even on hardware that is several years old.

This forward-looking development of ADSF has allowed Nutanix to lead the SDS space with features like Compression, Deduplication and Erasure Coding (EC-X).

In-line Compression is recommended for most workloads, including business-critical applications such as Oracle, SQL and Exchange, and typically provides not only excellent capacity savings but also increased effective SSD capacity, which results in higher performance. Compressing data on the capacity tier (not just the flash tier) also helps improve performance and lowers the cost per GB of storage.

As of the next release, the compression functionality has been enhanced to support compressed and uncompressed slices in the same extent group. For those of you not familiar with ADSF, an “Extent Group” is a group of “Extents” in which data is stored.

In previous generations of ADSF, all data for a virtual disk (vdisk) residing in a container with compression enabled was compressed, regardless of whether ADSF achieved good compression or not. This caused unnecessary overheads, especially where compression savings are minimal, such as for already compressed data like video or image files (e.g. JPG).
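To illustrate the idea at a high level, here is a conceptual sketch only (not ADSF's actual code; the slice size, savings threshold and helper names are hypothetical): each slice is compressed individually and the compressed form is kept only when the saving is worthwhile, so compressed and uncompressed slices can live side by side in the same extent group.

```python
import zlib

# Conceptual sketch only: illustrates per-slice compression with a savings
# threshold. The slice size, threshold and data structures are hypothetical
# and not taken from ADSF internals.
SLICE_SIZE = 32 * 1024   # hypothetical slice size in bytes
MIN_SAVINGS = 0.10       # keep the compressed form only if it is >=10% smaller

def store_slices(data: bytes):
    """Split data into slices and decide per slice whether to keep it compressed."""
    extent_group = []    # compressed and uncompressed slices live side by side
    for offset in range(0, len(data), SLICE_SIZE):
        raw = data[offset:offset + SLICE_SIZE]
        packed = zlib.compress(raw)
        if len(packed) <= len(raw) * (1 - MIN_SAVINGS):
            extent_group.append({"offset": offset, "compressed": True, "bytes": packed})
        else:
            # Already-compressed content (e.g. JPG or video) gains little,
            # so the slice is stored as-is and needs no decompression on read
            extent_group.append({"offset": offset, "compressed": False, "bytes": raw})
    return extent_group
```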

This is one reason why it’s important that data reduction features such as compression (and Dedupe/Erasure Coding) can be turned off for workloads where benefits are minimal.

Previously in ADSF, compressed and uncompressed data was not supported within the same extent group, which resulted in the cluster (Curator) having the added overhead of moving extents from one extent group to another, even for data with little or no compression benefit.

This unnecessary overhead has now been removed, which means fewer background tasks (overheads), resulting in lower CPU utilization by the Nutanix Controller VM (CVM) and better overall compression performance.

Secondly, Nutanix will be moving to the LZ4 family of algorithms, which has two variants: LZ4 and LZ4HC. LZ4HC is really exciting because it achieves nearly as much compression as Zlib at a similar CPU cost, yet decompresses at the speed of LZ4. LZ4 by itself is marginally superior to Snappy in the common case, but LZ4HC makes this a very attractive choice.

This allows ADSF to do tiered compression: cold data initially compressed with LZ4 can be recompressed with LZ4HC, giving higher compression ratios.
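As a rough sketch of what tiered compression with these two variants looks like (using the open-source python-lz4 bindings purely for illustration; this is not Nutanix's implementation), hot data takes the fast default LZ4 path, cold data is recompressed in high-compression (LZ4HC) mode, and reads decompress both formats with the same fast decompressor:

```python
import lz4.block  # pip install lz4

def compress_hot(data: bytes) -> bytes:
    # Fast path for inline (hot) data: default LZ4 mode
    return lz4.block.compress(data)

def recompress_cold(hot_blob: bytes) -> bytes:
    # When data goes cold, decompress and recompress with the LZ4HC variant
    raw = lz4.block.decompress(hot_blob)
    return lz4.block.compress(raw, mode='high_compression', compression=9)

def read(blob: bytes) -> bytes:
    # The same fast decompressor handles both LZ4 and LZ4HC output
    return lz4.block.decompress(blob)

if __name__ == "__main__":
    sample = b"example data " * 10_000
    hot = compress_hot(sample)
    cold = recompress_cold(hot)
    assert read(hot) == read(cold) == sample
    print(f"raw={len(sample)} lz4={len(hot)} lz4hc={len(cold)}")
```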

There is also some good news for existing customers: this enhanced compression will be included in the next major AOS update, which can be deployed via One-Click upgrade without any downtime or the requirement to reformat the drives. That’s true software-defined storage.

Stay tuned for an upcoming blog showing the before and after compression savings on the same dataset.

Summary:

The upcoming releases of Acropolis OS (AOS) will provide:

  1. Higher compression savings
  2. Lower CVM overheads
  3. Dramatically reduced background file system maintenance tasks
  4. Enhanced compression will be included in the next major AOS one-click upgrade!


What’s .NEXT 2016 – Acropolis File Services (AFS)

At .NEXT 2015, Nutanix announced the Scale Out File Server Tech Preview, which was supported for AHV environments only. With the imminent release of AOS 4.7, the Scale Out File Server has been renamed Acropolis File Services (AFS) and will now be GA for both AHV and ESXi.

AFS provides what I personally refer to as an “invisible” file server experience because it can be set up with just a few clicks in PRISM without the need to deploy operating systems.

AFS provides a highly available, distributed single namespace across 3 or more front-end VMs which are automatically deployed and maintained by ADSF. The image below shows a mixed cluster of 10 nodes made up of 8 x NX3060 and 2 x NX6035C nodes, with the AFS UVMs spread across the cluster.

[Image: AFS overview]

Data is then stored on the underlying Acropolis Distributed Storage Fabric (ADSF) in a Container, which can be configured with your desired level of resiliency (e.g. RF2 or RF3) as well as data reduction features such as Compression, Deduplication and Erasure Coding.

AFS inherits all of the resiliency that ADSF natively provides and supports operational tasks such as one-click rolling upgrades of AOS and hypervisor without impacting the availability of the file services.

Functionality

Backups

Nutanix will provide AFS with native support for local recovery points on the primary storage (cluster) and support both Async-DR (60 minute RPO) and Sync-DR (0 RPO) so data can be backed up to a remote cluster.

For customers who employ 3rd party backup tools, AFS can also simply be backed up as an SMB share, which is a common capability amongst backup vendors such as Commvault and NetBackup.

The image below shows, at a high level, what a 3rd party backup solution looks like with AFS.

[Image: AFS with a 3rd party backup solution]

Quotas

AFS also allows administrators to set quotas to help with capacity management, especially in multi-tenant or departmental deployments, to avoid users monopolising capacity in the environment.

Patching/Upgrades

Acropolis File Server can be upgraded and patched separately from AOS and the underlying hypervisor. This ensures that the version of AFS is not dependent on the AOS or hypervisor versions, which also makes QA easier and minimizes the chance of bugs since the AFS layer is abstracted from AOS and the hypervisor.

This is similar to how the AOS version is not dependent on a hypervisor version, ensuring maximum flexibility and stability for customers. It means that as new features and improvements are added, AFS can be upgraded via PRISM without worrying about interoperability and dependencies.

Patches and upgrades are one-click, rolling and non-disruptive, the same as for AOS.

Scaling

As the file-serving workload increases, Acropolis File Server can be scaled out by simply adding instances across which the workload is balanced. If the Nutanix cluster has more nodes than AFS instances, this can be done quickly and easily through PRISM.

If the cluster has, for example, 4 nodes and 4 AFS instances are already deployed, then to scale the performance of the AFS environment the UVMs’ vCPU/vRAM can be scaled up, OR additional nodes can be added to the cluster and the AFS instances scaled out.

When one or more additional AFS instances (UVMs) are added, the workload is automatically balanced across all UVMs in the environment. ADSF will also automatically balance the new and existing file server data across the cluster to ensure even capacity utilization across nodes as well as consistent performance and linear scaling.

So in short, AFS provides both scale up and scale out options.

Interoperability with Storage Only nodes

Acropolis File Server is fully supported in environments using storage-only nodes. As storage-only nodes provide a Nutanix CVM and underlying storage to ADSF, their available capacity and performance is made available to AFS just like it is to any other VM. The only requirement is 3 or more Compute+Storage nodes in the cluster to support the minimum of 3 AFS UVMs.

AFS deployment examples

Acropolis File Services can be deployed on existing Nutanix clusters, which allows file data to be co-located on the same storage pool as existing data from virtual machines, as well as from physical or virtual servers utilising Acropolis Block Services (ABS).

[Image: AFS deployed on an existing cluster]

Acropolis File Services can also be deployed on dedicated clusters, such as storage-heavy and storage-only nodes, for environments which do not have virtual machines or for very large environments, while being centrally managed along with other Nutanix clusters via PRISM Central.

[Image: AFS deployed on a dedicated cluster]

Multi-tenancy

AFS also allows multiple separate instances to be deployed in the same Nutanix cluster to service different security zones, tenants or use cases. The following shows an example of a 4-node Nutanix cluster with two instances of AFS: the first has 4 UVMs and the second has just 3. Each instance can have different data reduction settings (Compression, Dedupe, EC-X) and be scaled independently.
[Image: Multiple AFS file server instances on one cluster]

Summary:

  • AFS supports multiple hypervisors and is deployed in minutes from PRISM
  • Can be scaled both up and out to support more users, capacity and/or performance
  • Interoperable with all OEMs and node types, including storage only
  • Supports non-disruptive one-click rolling upgrades
  • Supports multiple AFS instances on the one cluster for multi-tenancy and security zone support
  • Has native local recovery point support as well as remote backup (Sync and Async) support
  • All data is protected by the underlying ADSF
  • Supports all ADSF data reduction technologies, including Compression, Dedupe and Erasure Coding
  • Eliminates the requirement for a separate silo for file sharing
  • Capacity available to AFS is automatically expanded as nodes are added to the cluster


What’s .NEXT 2016 – All Flash Everywhere!

I am pleased to say that Nutanix and our OEMs are now offering even more flexibility with our “Configure To Order” option (a.k.a. CTO) by allowing any node type, yes ANY node type, to be configured with all flash.

Why is this so cool? Nutanix and our OEMs (Dell XC & Lenovo HX) have a wide range of models for customers to choose from, and for customers who require a large usable capacity of high-performance storage, this is a simple way to get a pre-certified solution with all the flexibility of build-your-own without the risks.

[Image: All Flash Everywhere]

With this increased level of flexibility, the argument for BYO/HCL is all but moot in my opinion.

So let’s think about what this means.

The NX-8150, a 1-node-per-2RU product (which I was heavily involved in the design of), will now support 24 x SSDs!

Even with the currently supported SSDs (1.92TB each), this means >46TB of RAW SSD capacity per node (24 x 1.92TB = 46.08TB), along with dual Broadwell CPUs and up to 768GB RAM.

Note: Higher capacity SSDs are coming soon to provide even more capacity!

Now with 24 x SSDs that is some serious power!

What’s also exciting is that this doesn’t just mean higher flash capacity, it also means higher performance. This is because the Nutanix persistent write buffer (OpLog) is striped across all SSDs in a node, so write performance can benefit from every SSD in the node; in the case of the NX-8150, that’s 24 drives!
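To make the point concrete, here is a tiny conceptual sketch (the round-robin policy, device paths and chunk naming are hypothetical, not ADSF internals) of why striping a write buffer across every SSD lets a burst of writes use all devices in parallel rather than queuing on a few of them:

```python
from itertools import cycle

# Conceptual sketch only: shows that striping a write buffer across every SSD
# spreads a burst of writes over all devices. Paths and chunk names are
# hypothetical, not ADSF internals.
def stripe_writes(chunks, ssd_paths):
    """Assign incoming write-buffer chunks to SSDs in round-robin order."""
    placement = {path: [] for path in ssd_paths}
    for chunk, path in zip(chunks, cycle(ssd_paths)):
        placement[path].append(chunk)
    return placement

ssds = [f"/dev/ssd{i}" for i in range(24)]   # e.g. an all-flash NX-8150
writes = [f"chunk-{n}" for n in range(96)]   # a burst of incoming writes
layout = stripe_writes(writes, ssds)
print({path: len(chunks) for path, chunks in layout.items()})  # 4 chunks per SSD
```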

Combine this with the fact that Nutanix now supports any node as storage only, and customers get near-unlimited flexibility without the risk and complexity of BYO/HCL options.

After all, the hardware is commodity and all the value is in the software, so who cares what hardware it runs on as long as it’s reliable.

Summary:

  • Configure to Order (CTO) now allows any node type to be configured with All Flash
  • All Flash nodes can also be Storage Only nodes
  • Write Performance takes advantage of all SSDs in a node
  • Nutanix Configure to Order (CTO) option makes the argument for BYO/HCL options all but moot.
