What’s .NEXT? – Scale Storage separately to Compute on Nutanix!

Since I joined Nutanix, I have heard from customers that they want to scale storage (capacity) separately from compute, as they have done in traditional SAN/NAS environments.

I wrote an article a while ago about scaling problems with traditional shared storage, which discusses why scaling storage capacity separately can be problematic. I still believe scaling capacity separately is more of a perceived advantage than a real one in most cases, especially with traditional SAN/NAS.

However, here at Nutanix we locked ourselves away and brainstormed how we could scale capacity without degrading performance and without losing the benefits of the Nutanix Hyper-Converged platform, such as Data Locality and linear scalability.

At the same time, we wanted to ensure doing so didn’t add any unnecessary cost.

Introducing the NX-6035c, a new “Storage only” node!

What is it?

The NX-6035c is a 2 node per 2RU block containing two single-socket servers, each with 1 SSD, 5 x 3.5″ SATA HDDs and 2 x 10GbE NICs for network connectivity.

How does it work?

As with all Nutanix nodes, the NX-6035c runs the Nutanix Controller VM (CVM) which presents the local storage to the Nutanix Distributed File System (NDFS).

The main difference between the NX-6035c and other Nutanix nodes is that it is not a member of the hypervisor cluster and therefore does not run virtual machines, but it is a fully functional member of the NDFS cluster.

The diagram below shows a 3 node vSphere or Hyper-V cluster with storage presented by a 5 node NDFS cluster, using 3 x NX-8150 nodes as Compute+Storage and 2 x NX-6035c nodes as Storage only.

[Diagram: NX-6035c storage-only nodes within an NDFS cluster]

Because the NX-6035c does not run VMs, it only receives data via Write I/O replication (Resiliency Factor 2 or 3) and Disk Balancing.

This means that for every NX-6035c added to an NDFS cluster, the Write performance of the cluster increases because of the additional CVM. This is how Nutanix avoids the traditional capacity scaling issues of SAN/NAS.

Rule of thumb: Don’t scale capacity without scaling storage controllers!
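To make that concrete, here is a toy model of RF2 write placement (an illustrative sketch only, not Nutanix's actual placement algorithm; the node names are hypothetical). The primary copy stays on the local node for Data Locality, while the replica goes to the least-loaded other node, which may be a storage-only node — so adding an NX-6035c adds both capacity and a CVM to absorb replica writes.

```python
# Toy model of RF2 replica placement (illustrative only, not NDFS internals).
# The primary copy stays local (Data Locality); the replica goes to the
# least-loaded *other* node, which may be a storage-only node.

def place_write(local_node, nodes, load):
    """Return (primary, replica) node names for one RF2 write."""
    candidates = [n for n in nodes if n != local_node]
    replica = min(candidates, key=lambda n: load[n])  # spread replica traffic
    load[local_node] += 1
    load[replica] += 1
    return local_node, replica

# 3 compute+storage nodes plus 2 storage-only nodes (hypothetical names)
nodes = ["nx8150-a", "nx8150-b", "nx8150-c", "nx6035c-1", "nx6035c-2"]
load = {n: 0 for n in nodes}

placements = [place_write("nx8150-a", nodes, load) for _ in range(8)]
# Every primary stays local; replicas are spread across the other four CVMs,
# including the storage-only nodes.
assert all(primary == "nx8150-a" for primary, _ in placements)
assert all(replica != "nx8150-a" for _, replica in placements)
```

The point of the sketch: the storage-only nodes participate in replica placement like any other node, so capacity never grows without a storage controller growing with it.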

The CVM running on the NX-6035c also provides data reduction capabilities just like other Nutanix nodes, so data reduction can occur with even lower impact on Virtual Machine I/O.

What about Hypervisor licensing?

The NX-6035c runs the CVM on a Nutanix optimized version of KVM which does not require any hypervisor licensing.

For customers using vSphere or Hyper-V, the NX-6035c provides storage performance and capacity to the NDFS cluster which serves the hypervisor.

This results in more storage capacity and performance with no additional hypervisor costs.

Want more? Check out how Nutanix is increasing usable capacity with Erasure Coding!

What’s .NEXT? – Acropolis! Part 1

By now many of you will probably have heard about Project “Acropolis”, the code name for the development effort in which Nutanix set out to create an Uncompromisingly Simple management platform for an optimized KVM hypervisor.

Drawing on Nutanix’s extensive skills and experience with products such as vSphere and Hyper-V, we took on the challenge of delivering similar enterprise grade features in a platform as scalable and performant as NDFS.

Acropolis therefore had to be built into PRISM. The screenshot below shows the Home screen for PRISM in a Nutanix Acropolis environment; it looks pretty much the same as any other Nutanix solution, right? Simple!

[Screenshot: PRISM Home screen in a Nutanix Acropolis environment]

So let’s talk about how you install Acropolis. Since it’s the management platform for your Nutanix infrastructure, it is a critical component, so do I need a management cluster? No!

Acropolis is built into the Nutanix Controller VM (CVM), so it is installed by default when loading the KVM hypervisor (which is actually shipped by default).

Because it’s built into the CVM, Acropolis (and therefore all the management components) automatically scales with the Nutanix cluster, so there is no need to size the management infrastructure. There is also no need to license or maintain operating systems for management tools, further reducing cost and operational expense.

The following diagram shows a 4 node Nutanix NDFS cluster running the Nutanix KVM hypervisor with Acropolis. One CVM per cluster is elected the Acropolis Master and the remaining CVMs are Acropolis Slaves.

[Diagram: 4 node Acropolis cluster with one Master CVM and three Slave CVMs]

The Acropolis Master is responsible for the following tasks:

  1. Scheduler for HA
  2. Network Controller
  3. Task Executors
  4. Collector/Publisher of local stats from Hypervisor
  5. VNC Proxy for VM Console connections
  6. IP address management

Each Acropolis Slave is responsible for the following tasks:

  1. Collector/Publisher of local stats from Hypervisor
  2. VNC Proxy for VM Console connections
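The role split above can be sketched as follows (a hypothetical model for illustration, not Nutanix source code; real clusters would use a distributed consensus service for the election). Every CVM runs the common services, while the single elected Master additionally runs the cluster-wide ones:

```python
# Illustrative model of the Acropolis Master/Slave task split (not actual
# Nutanix code). Slaves run only the common tasks; the Master runs everything.

COMMON_TASKS = ["stats collector/publisher", "VNC proxy"]
MASTER_ONLY_TASKS = ["HA scheduler", "network controller",
                     "task executor", "IP address management"]

def elect_master(cvms):
    """Toy election: pick the CVM with the lowest name. A real system would
    use a distributed consensus/leader-election service here."""
    return min(cvms)

def tasks_for(cvm, master):
    """Master runs common + cluster-wide tasks; slaves run common tasks only."""
    if cvm == master:
        return COMMON_TASKS + MASTER_ONLY_TASKS
    return list(COMMON_TASKS)

cvms = ["cvm-1", "cvm-2", "cvm-3", "cvm-4"]
master = elect_master(cvms)
roles = {cvm: tasks_for(cvm, master) for cvm in cvms}

assert "HA scheduler" in roles[master]          # Master owns cluster-wide tasks
assert roles["cvm-2"] == COMMON_TASKS           # Slaves run only common tasks
```

Because the election picks a new Master from the surviving CVMs, a Master failure simply means another CVM takes over the cluster-wide tasks, which is what makes the design self-healing.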

Acropolis is a truly distributed management platform with no dependency on external database servers. It is fully resilient, with in-built self-healing capabilities, so in the event of node or CVM failures management continues without interruption.

What does Acropolis do? Well, put simply, the things 95% of customers need including but not limited to:

  • High Availability (Think vSphere HA)
  • Load Balancing / Virtual Machine Migrations (Think DRS & vMotion)
  • Virtual machine templates
  • Cloning (Instant and space efficient like VAAI-NAS)
  • VM operations / snapshots / console access
  • Centralised configuration of nodes (think vSphere Host Profiles)
  • Centralised management of virtual networking (think vSphere Distributed Switch)
  • Performance monitoring of physical HW, hypervisor & VMs (think vRealize Operations Manager)

Summary: Acropolis combines a best-of-breed hyperconverged platform with an enterprise grade KVM management solution, dramatically simplifying the design, deployment and ongoing management of datacenter infrastructure.

In the next few parts of this series I will explore the above features and the advantages of the Acropolis solution.

My NPX Journey

I have had an amazing learning experience over the last few months, expanding my skills into a second hypervisor, Kernel-based Virtual Machine (KVM), as well as continuing to enhance my knowledge of the ever increasing functionality of the Nutanix platform itself.

This past week I have been in Miami with some of the most talented guys in the industry, whom I have the pleasure of working with, bootstrapping the Nutanix Platform Expert (NPX) program. Numerous people submitted comprehensive documentation sets which were reviewed, and those who met the (very) high bar were invited to the in-person, panel based Nutanix Design Review (NDR).

I was lucky enough to be asked to be part of the NDR panel as well as being invited to the NDR to attempt my NPX.

Being on the panel was a great learning experience in itself as I was privileged to observe many candidates who presented expert level architecture, design and troubleshooting abilities across multiple hypervisors.

I presented a design based on KVM for a customer I have been working with over the last few months, who is deploying a large scale vBCA solution on Nutanix.

I had an All-Star panel made up entirely of experienced Nutants who all happen to also be VCDXs; it’s safe to say it was not an easy experience.

The Design Review section was 90 minutes, which went by in a heartbeat, where I presented my vBCA KVM design. This was followed by a 30 minute troubleshooting session and a 60 minute design scenario, both based on vSphere.

It’s a serious challenge having to present at an expert level on one hypervisor, then swap into troubleshooting and designing on a second hypervisor, so by the end of the examination it was safe to say I went to the bar.

As this is a bootstrap process, I was asked to leave the room while the panel finalized the scores. Then I was invited back into the room and told:

Congratulations NPX #001

I am over the moon to be part of an amazing company and to be honoured with being #001 of such a challenging certification. I intend to continue to pursue deeper knowledge of multiple hypervisors and everything Nutanix related to ensure I do justice to being NPX #001.

I am also pleased to say we have crowned several other NPXs, but I won’t steal their thunder by announcing their names and numbers.

For more information on the NPX program see http://go.nutanix.com/npx-application.html

Looking forward to .NEXT conference which is on this week!