My NPX Journey

I have had an amazing learning experience over the last few months, expanding my skills to a second hypervisor, Kernel-based Virtual Machine (KVM), as well as continuing to enhance my knowledge of the ever-increasing functionality of the Nutanix platform itself.

This past week I have been in Miami with some of the most talented guys in the industry, whom I have the pleasure of working with. We have been bootstrapping the Nutanix Platform Expert (NPX) program: numerous people submitted comprehensive documentation sets, which were reviewed, and those who met the (very) high bar were invited to the in-person, panel-based Nutanix Design Review (NDR).

I was lucky enough to be asked to be part of the NDR panel, as well as to be invited to the NDR to attempt my own NPX.

Being on the panel was a great learning experience in itself, as I was privileged to observe many candidates who demonstrated expert-level architecture, design, and troubleshooting abilities across multiple hypervisors.

I presented a design based on KVM for a customer I have been working with over the last few months, who is deploying a large-scale vBCA solution on Nutanix.

I had an All-Star panel made up entirely of experienced Nutants who all also happen to be VCDXs, so it's safe to say it was not an easy experience.

The Design Review section was 90 minutes, which went by in a heartbeat, where I presented my vBCA KVM design; this was followed by a 30-minute troubleshooting session and a 60-minute design scenario, both based on vSphere.

It's a serious challenge having to present at an expert level on one hypervisor, then switch to troubleshooting and designing on a second hypervisor, so by the end of the examination it's safe to say I went to the bar.

As this is a bootstrap process, I was asked to leave the room while the panel finalized the scoring; then I was invited back into the room and told:

Congratulations NPX #001

I am over the moon to be a part of an amazing company and to be honoured with being #001 of such a challenging certification. I intend to continue to pursue deeper knowledge of multiple hypervisors and everything Nutanix related, to ensure I do justice to being NPX #001.

I am also pleased to say we have crowned several other NPXs, but I won't steal their thunder by announcing their names and numbers.

For more information on the NPX program, see http://go.nutanix.com/npx-application.html

Looking forward to the .NEXT conference, which is on this week!

MS Exchange Performance – Nutanix vs VSAN 6.0

When I saw a post (20+ Common VSAN Questions) by Chuck Hollis on VMware’s corporate blog claiming (extract below) a “stunning performance advantage (over Nutanix) on identical hardware with most demanding datacenter workloads”, I honestly wondered: where does he get this nonsense?

[Image: FUDfromChuckles – screenshot of the extract from the VMware blog post]

Then, when I saw the Microsoft Applications on Virtual SAN 6.0 white paper released, I thought I would check out what VMware is claiming in terms of this stunning performance advantage for an application I have done a lot of work with lately: MS Exchange.

I have summarized the VMware white paper and the Nutanix testing I personally performed in the table below. These tests were not exactly the same; however, the ESXi host CPU and RAM were identical, and both tests used 2 x 10Gb networking as well as 4 x SSD devices.

The main differences: the VSAN testing used ESXi 6.0 while the Nutanix testing used ESXi 5.5 U2, which I'd say is advantage number one for VMware. Advantage number two is that VMware used two LSI controllers while my testing used one, and VMware had a cluster size of 8 whereas my testing (in this case) used only 3. The larger cluster size is a huge advantage for a distributed platform, especially VSAN since it does not have data locality, so the more nodes in the cluster, the lower the chance of a bottleneck.

Nutanix has one advantage: more spindles. But that advantage largely goes away when you consider they are SATA drives compared to VSAN using SAS. Still, if you really want to kick up a stink about Nutanix having more HDDs, take 100 IOPS per drive (which is much more than you can consistently get from a SATA drive) off the Nutanix Jetstress result, as in the sketch below.
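To make that adjustment concrete, here is a minimal sketch of the normalization described above. Every number in it is a hypothetical placeholder for illustration only, not an actual Jetstress result or drive count from either test, and deducting 100 IOPS for each extra spindle is simply my reading of the sentence above.

# Minimal sketch (Python) of the spindle-count adjustment described above.
# All values are hypothetical placeholders, NOT real test results.

IOPS_PER_SATA_HDD = 100        # the generous per-drive figure quoted above

nutanix_jetstress_iops = 5000  # placeholder for the Nutanix Jetstress result
nutanix_hdd_count = 20         # placeholder drive count for the Nutanix nodes
vsan_hdd_count = 16            # placeholder drive count for the VSAN nodes

# Deduct 100 IOPS for each extra spindle the Nutanix configuration has.
extra_spindles = nutanix_hdd_count - vsan_hdd_count
adjusted_iops = nutanix_jetstress_iops - extra_spindles * IOPS_PER_SATA_HDD

print(f"Adjusted Nutanix Jetstress IOPS: {adjusted_iops}")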

The areas where I feel one vendor is at a disadvantage I have highlighted in red, and the opposing solution in green. Regardless of these opinions, the results really do speak for themselves.

So here is a summary of the testing performed by each vendor and the results:


[Table image: VSAN vs Nutanix test configuration and results]

The VMware white paper did not show the Jetstress report; however, for transparency I have copied the Nutanix Test Summary below.

[Image: Nutanix NX-8150 Jetstress test summary]

Summary: Nutanix has a stunning performance advantage over VSAN 6.0 even on hardware that was identical in some respects and lesser in others, running an older version of ESXi and lower-spec HDDs, while (apparently) having a significant disadvantage by not running in the kernel.