VMware you’re full of it (FUD): Nutanix CVM/AHV & vSphere/VSAN overheads

For a long time now, VMware & EMC have been leading the charge, along with other vendors, in spreading FUD regarding the Nutanix Controller VM (CVM), claiming it uses a lot of resources to drive storage I/O and that being a Virtual Machine (a.k.a. Virtual Storage Appliance / VSA) makes it inefficient / slower than running in-kernel.

A recent example of this FUD comes from an article written by the President of VCE himself, Mr Chad Sakac, who wrote:

… it (VxRAIL) is the also the ONLY HCIA that has a fully integrated SDS stack that is embedded into the kernel – specifically VSAN because VxRail uses vSphere.  No crazy 8vCPU, 16+ GB of RAM for the storage stack (per “storage controller” or even per node in some cases with other HCIA choices!) needed.

So I thought I would put together a post covering what the Nutanix CVM provides and comparing it to what Chad referred to as a fully integrated SDS stack.

Let’s compare the resources required by the Nutanix suite, made up of the Acropolis Distributed Storage Fabric (ADSF) and the Acropolis Hypervisor (AHV), with VMware’s suite, made up of vCenter, ESXi, VSAN and associated components.

This should assist those not familiar with the Nutanix platform to understand the capabilities and value the CVM provides, and correct the FUD being spread by some competitors.

Before we begin, let’s address the default size for the Nutanix CVM.

As it stands today, the CVM is assigned 8 vCPUs and 16GB RAM by default.

What CPU resources the CVM actually uses obviously depends on the customer’s use case(s), so if the I/O requirements are low, the CVM won’t use 8 vCPUs, or even 4 vCPUs, but it is assigned 8 vCPUs.

With the improvements in ESXi CPU scheduling over the years, the impact of assigning more vCPUs than required to a limited number of VMs (such as the CVM) in an environment is typically negligible, but the CVM can also be right-sized, which is common.

The RAM allocation is recommended to be 24GB when using deduplication, and for workloads which are very read intensive, the RAM can be increased to provide more read cache.

However, increasing the CVM RAM for read cache (Extent Cache) is more of a legacy recommendation as the Acropolis Operating System (AOS) 4.6 release achieves outstanding performance even with the read cache disabled.

In fact, the >150K 4K random read IOPS per node which AOS 4.6 achieves on NX-9040-G4 nodes were achieved without the use of the in-memory read cache, as part of engineering testing to see how hard the SSDs could be pushed. As a result, even for extreme levels of performance, increasing the CVM RAM for read cache is no longer a requirement. As such, 24GB RAM will be more than sufficient for the vast majority of workloads, and reducing RAM levels is also on the cards.

Thought: Even if it were true that in-kernel solutions provide faster outright storage performance (which is not the case, as I showed here), this is only one small part of the equation. What about management? VSAN management is done via the vSphere Web Client, which runs in a VM in user space (i.e. not “in-kernel”) and connects to vCenter, which also runs as a VM in user space and commonly leverages an SQL/Oracle database which also runs in user space.

Now think about replication: VSAN uses vSphere Replication, which, you guessed it, runs in a VM in user space. For capacity/performance management, VSAN leverages vRealize Operations Manager (vROM), which also runs in user space. What about backup? The vSphere Data Protection appliance is yet another service which runs in a VM in user space.

All of these products require data to move from kernel space into user space, so for almost every function apart from basic VM I/O, VSAN is dependent on components which run in user space (i.e. not in-kernel).

Let’s take a look at the requirements for VSAN itself.

According to the VSAN Design and Sizing Guide (page 56), VSAN uses up to 10% of host CPU and requires 32GB RAM for full VSAN functionality. The RAM requirement doesn’t mean VSAN is using all 32GB, and the same is true for the Nutanix CVM: if it doesn’t need/use all the assigned RAM, it can be downsized. 12GB is the recommended minimum and 16GB is typical, and for a node with even 192GB RAM, which is small by today’s standards, 16GB is <10%, a minimal overhead for either VSAN or the Nutanix CVM.
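
To put those percentages in context, here’s a quick back-of-the-envelope sketch (the 192GB node size is just the example above; adjust for your own nodes):

```python
# Rough RAM overhead as a percentage of a single node (example figures from above).
node_ram_gb = 192  # a small node by today's standards

allocations_gb = {
    "Nutanix CVM (recommended minimum)": 12,
    "Nutanix CVM (typical)": 16,
    "VSAN (up to, for full functionality)": 32,
}

for component, ram_gb in allocations_gb.items():
    pct = ram_gb / node_ram_gb * 100
    print(f"{component}: {ram_gb}GB of {node_ram_gb}GB = {pct:.1f}% of node RAM")
```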

In my testing, VSAN is not limited to 10% CPU usage, and this can be confirmed in VMware’s own official SQL testing: VMware Virtual SAN™ Performance with Microsoft SQL Server.

In short, the performance testing was conducted with 3 VMs of 4 vCPUs each, on hosts containing dual-socket Intel Xeon E5-2650 v2 processors (16 cores, 32 threads @ 2.6GHz).

So even assuming the VMs were at 100% utilisation, they would only be using 75% of the total cores (12 of 16). As we can see from the graph below, the hosts were almost 100% utilised, so something other than the VMs is using the CPU. Best case, VSAN is using ~20% CPU, with the hypervisor using ~5%; in reality the VMs won’t be pegged at 100%, so the overhead of VSAN will be higher than 20%.

[Image: host CPU utilisation graph from VMware’s SQL Server on Virtual SAN performance testing]

Now, I understand I/O requires CPU, and I don’t have a problem with VSAN using 20% or even more CPU. What I have a problem with is VMware lying to customers that it only uses 10% AND spreading FUD that other vendors’ virtual appliances, such as the Nutanix CVM, are resource hogs.

Don’t take my word for it: do your own testing and read their documents like the one above, where simple maths shows the claim of a 10% maximum is a myth.
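
Here’s a rough sketch of that simple maths, based on the test configuration and the approximate utilisation figures above (treat it as illustrative, not a precise measurement):

```python
# Back-of-the-envelope estimate of VSAN CPU overhead from VMware's SQL Server test.
host_cores = 16          # dual-socket E5-2650 v2 (2 x 8 cores)
vm_count = 3
vcpus_per_vm = 4

# Best case for VSAN: assume the SQL VMs are pegged at 100% of their vCPUs.
vm_pct = (vm_count * vcpus_per_vm) / host_cores * 100   # 75% of host CPU

host_utilisation_pct = 100   # hosts were almost 100% utilised per the graph
hypervisor_pct = 5           # rough allowance for ESXi itself

vsan_pct = host_utilisation_pct - vm_pct - hypervisor_pct
vsan_cores = vsan_pct / 100 * host_cores

print(f"VMs account for at most {vm_pct:.0f}% of host CPU")
print(f"Implied VSAN overhead: ~{vsan_pct:.0f}% (~{vsan_cores:.1f} cores, i.e. roughly 4 vCPUs)")
print("If the VMs are below 100%, the VSAN share is even higher.")
```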

So that’s roughly 4 vCPUs (on a typical dual-socket, 8-core-per-socket system) and up to 32GB RAM required for VSAN, but let’s assume just 16GB RAM on average, as not all systems are scaled to 5 disk groups.

The above testing was not on the latest VSAN 6.2, so things may have changed. One such change is the introduction of software checksums into VSAN. This actually reduces performance (as you would expect) because it provides a layer of data integrity with every I/O. As such, the above performance is still a fair comparison, because Nutanix has always had software checksums, which are essential for any production-ready storage solution.

Now keep in mind, VSAN is really only providing the storage stack, so it’s using ~20% CPU under heavy load for just the storage stack. The Nutanix CVM, by contrast, also provides a highly available management layer with comparable (and in many cases better) functionality/availability/scalability to vCenter, VUM, vROM, vSphere Replication, vSphere Data Protection, vSphere Web Client, the Platform Services Controller (PSC) and the supporting database platform (e.g. SQL/Oracle/Postgres).

So comparing VSAN CPU utilization to a Nutanix CVM is about as far from apples/apples as you could get. Let’s look at the resource requirements of all the vSphere management components and make a fairer comparison.

vCenter Server

Resource Requirements:

Small | Medium | Large

  • 4vCPUs | 8vCPUs | 16vCPUs
  • 16GB | 24GB | 32GB RAM

Reference: http://pubs.vmware.com/vsphere-60/index.jsp#com.vmware.vsphere.install.doc/GUID-D2121DC5-1FC8-48DC-A4BA-C3FD72D0BE77.html

Platform Services Controller

Resource Requirements:

  • 2vCPUs
  • 2GB RAM

Reference: http://pubs.vmware.com/vsphere-60/index.jsp#com.vmware.vsphere.install.doc/GUID-D2121DC5-1FC8-48DC-A4BA-C3FD72D0BE77.html

vCenter Heartbeat (Deprecated)

If we were to compare apples to apples, vCenter would need to be fully distributed and highly available, which it’s not. The now-deprecated vCenter Heartbeat used to somewhat provide this, at the cost of 2x the resources of vCenter, VUM etc., but since it’s deprecated we’ll give VMware the benefit of the doubt and not count the resources required to make their management components highly available.

What about vCenter Linked Mode? 

I couldn’t find its resource requirements in the documentation, so let’s give VMware the benefit of the doubt and say it doesn’t add any overheads. But regardless of overheads, it’s another product to install, validate and maintain.

vSphere Web Client

The Web Client is required for full VSAN management/functionality and has its own resource requirements:

  • 4vCPUs
  • 2GB RAM (at least)

Reference: https://pubs.vmware.com/vsphere-50/index.jsp#com.vmware.vsphere.install.doc_50/GUID-67C4D2A0-10F7-4158-A249-D1B7D7B3BC99.html

vSphere Update Manager (VUM)

VUM can be installed on the vCenter server (if you are using the Windows installation) to save having another management VM and OS to manage; if you are using the vCenter Virtual Appliance, then a separate Windows instance is required.

Resource Requirements:

  • 2vCPUs
  • 2GB

The Nutanix CVM provides the ability to do major and minor patch updates for ESXi, and of course for AHV.

vRealize Operations Manager (vROM)

Nutanix provides built-in analytics, similar to what vROM provides, in PRISM Element, along with centrally managed capacity planning/management and “what if” scenarios for adding nodes to the cluster. As such, including vROM in the comparison is essential if we want to get close to apples/apples.

Resource Requirements:

Small | Medium | Large

  • 4vCPUs | 8vCPUs | 16vCPUs
  • 16GB | 32GB | 48GB
  • 14GB Storage

Remote Collectors: Standard | Large

  • 2vCPUs | 4vCPUs
  • 4GB | 16GB

Reference: https://pubs.vmware.com/vrealizeoperationsmanager-62/index.jsp#com.vmware.vcom.core.doc/GUID-071E3259-625A-437B-AB34-E6A58B87C65B.html

vSphere Data Protection

Nutanix also has built-in backup/recovery/snapshot capabilities which include application consistency via VSS. As with vROM, we need to include vSphere Data Protection in any comparison to the Nutanix CVM.

vSphere Data Protection can be deployed in sizes from 0.5TB to 8TB, as shown below:

[Image: vSphere Data Protection appliance resource requirements by supported backup capacity]

Reference: http://pubs.vmware.com/vsphere-60/topic/com.vmware.ICbase/PDF/vmware-data-protection-administration-guide-61.pdf

The minimum size is 4 vCPUs and 4GB RAM, but that only supports 0.5TB; for even an average-sized node supporting, say, 4TB, 4 vCPUs and 8GB RAM are required.

So, best-case scenario, we need to deploy one VDP appliance per 8TB, which is smaller than some Nutanix (or VSAN Ready) nodes (e.g. NX6035 / NX8035 / NX8150), so that could potentially mean one VDP appliance per node when running VSAN, since the backup capabilities are not built in as they are with Nutanix.
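
As a rough illustration of how that backup overhead stacks up, here is a hypothetical sketch: the node count and capacity are examples only, and the per-appliance figures are the ones quoted above.

```python
import math

# Hypothetical example: a 4-node cluster with ~8TB of capacity per node.
nodes = 4
capacity_per_node_tb = 8
vdp_max_tb = 8                    # largest VDP appliance capacity

# Best case per the text: one VDP appliance per 8TB of capacity.
appliances = math.ceil(nodes * capacity_per_node_tb / vdp_max_tb)

# VDP appliances are 4 vCPUs; RAM grows with capacity (4GB at 0.5TB, 8GB at ~4TB),
# so 8GB+ per appliance is used here as a conservative working figure.
print(f"VDP appliances required: {appliances} (effectively one per node)")
print(f"Backup-only overhead: {appliances * 4}+ vCPUs, {appliances * 8}+ GB RAM")
```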

Now what about if I want to replicate my VMs or use Site Recovery Manager (SRM)?

vSphere Replication

As with vROM and vSphere Data Protection, vSphere Replication provides functionality for VSAN which Nutanix has built into the CVM. So we also need to include vSphere Replication resources in any comparison to the CVM.

While vSphere Replication is fairly light on resource requirements, if all my replication needs to go via the appliance, one VSAN node becomes a hotspot for storage and network traffic, potentially saturating the network/node and acting as a noisy neighbour to any virtual machines on that node.

Resource Requirements:

  • 2vCPUs
  • 4GB RAM
  • 14GB Storage

Limitations:

  • 1 vSphere replication appliance per vCenter
  • Limited to 2000 VMs

Reference: http://pubs.vmware.com/vsphere-replication-61/index.jsp?topic=%2Fcom.vmware.vsphere.replication-admin.doc%2FGUID-E114BAB8-F423-45D4-B029-91A5D551AC47.html

So scaling beyond 2000 VMs requires another vCenter, which means another VUM, another Heartbeat VM (if it were still available) and potentially more databases on SQL or Oracle.

Nutanix doesn’t have this limitation, but again we’ll give VMware the benefit of the doubt for this comparison.

Supporting Databases

Even a small SQL server is typically at least 2 vCPUs and 8GB+ RAM, and if you want to compare apples/apples with Nutanix AHV/CVM, you need to make the supporting database server(s) highly available.

So even in a small environment we would be talking 2 VMs @ 2 vCPUs and 8GB+ RAM each just to support the back-end database requirements for vCenter, VUM, SRM etc.

As the environment grows, so do the vCPU/vRAM and storage (capacity/IOPS) requirements, so keep this in mind.

So what are the approximate VSAN overheads for a small 4-node cluster?

The table below shows the minimum vCPU/vRAM requirements for the various components discussed above to give VSAN comparable (not equivalent) functionality to what the Nutanix CVM provides.

[Table image: minimum vCPU/vRAM requirements for the vSphere/VSAN management components in a 4-node cluster]

The above only covers the minimum requirements for a small, say 4-node, environment. Things like vSphere Data Protection will require multiple instances, SQL should be made highly available using an AlwaysOn Availability Group (AAG) which requires a second SQL server, and as the environment grows, so do the vCPU/vRAM requirements for vCenter, vRealize Operations Manager and SQL.

A Nutanix AHV environment on the other hand looks like this:

[Table image: Nutanix CVM vCPU/vRAM requirements for a 4-node AHV cluster]

So just 32 vCPUs and 64GB RAM for a 4-node cluster, which is 8 vCPUs and 54GB RAM LESS than the comparable vSphere/VSAN 4-node solution.

If we add Nutanix Scale-Out File Server functionality into the mix (which is optionally enabled), this increases to 48 vCPUs and 100GB RAM: just 8 vCPUs more and still 18GB RAM LESS than vSphere/VSAN, while Nutanix provides MORE functionality (e.g. Scale-Out File Services) and comes out of the box with a fully distributed, highly available, self-healing, FULLY INTEGRATED management stack.

The Nutanix vCPU count assumes all vCPUs are in use, which is VERY rarely the case, so this comparison is well and truly in favour of VSAN. Even so, vSphere/VSAN has higher overheads for a typical/comparable solution, while Nutanix provides additional built-in features such as the Scale-Out File Server (another distributed and highly available solution) for only a small amount more resources than vSphere/VSAN, which does not provide comparable native file serving functionality.
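
To make the tally explicit, here’s a simple sketch using the figures above; the vSphere/VSAN totals come from the comparison table, so they are derived here from the stated deltas (32 + 8 = 40 vCPUs, 64GB + 54GB = 118GB RAM):

```python
# Management/storage-stack overhead tally for a 4-node cluster, using this post's figures.
nodes = 4
cvm_vcpus, cvm_ram_gb = 8, 16      # default Nutanix CVM sizing per node

stacks = {
    "Nutanix AHV + CVM": (nodes * cvm_vcpus, nodes * cvm_ram_gb),   # 32 vCPUs, 64GB
    "Nutanix AHV + CVM + Scale-Out File Server": (48, 100),         # optional feature enabled
    "vSphere/VSAN + management components": (32 + 8, 64 + 54),      # derived from the stated deltas
}

for label, (vcpus, ram_gb) in stacks.items():
    print(f"{label}: {vcpus} vCPUs, {ram_gb}GB RAM")
```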

What about if you don’t use all those vSphere/VSAN features and therefore don’t deploy all those management VMs? VSAN overheads are lower, right?

It is a fair argument to say not all vSphere/VSAN features need to be deployed, so this will reduce the vSphere/VSAN requirements (or overheads).

The same, however, is true for the Nutanix Controller VM.

It’s not uncommon, where customers don’t run all features and/or have lower I/O requirements, for the CVM to be downsized to 6 vCPUs. I did exactly this earlier this week for a customer running SQL/Exchange, and the CVM is still only running at ~75%, or approximately 4 vCPUs, and that’s running vBCA with in-line compression.

So the overheads depend on the workloads, and the default sizes can be changed for both vSphere/VSAN components and the Nutanix CVM.

Now back to the whole In-Kernel nonsense.

VMware also like to spread FUD that their own hypervisor has such high overheads that it’s crazy to run any storage through it. I’ve always found this funny, since VMware have been telling the market for years that the hypervisor has a low overhead (which it does), but they change their tune like the weather to suit their latest slideware.

One such example of this FUD comes from VMware’s Chief Technologist, Duncan Epping, who tweeted:

[Image: tweet from Duncan Epping implying that routing storage I/O through a VM is inefficient compared to in-kernel]

The tweet is trying to imply that going through the hypervisor to another Virtual Machine (in this case a Nutanix CVM) is inefficient, which is interesting for a few reasons:

  1. If going from one VM to another via the kernel has such high overheads, why do VMware themselves recommend virtualizing business-critical, high-I/O applications which access data between VMs (and ESXi hosts) all the time? e.g. when a web server VM accesses an application server VM which accesses data from a database. All of this goes from one VM, through the kernel, and into another VM.
  2. Because VSAN has to do exactly this to leverage many of the features it advertises, such as:
  • Replication (via vSphere Replication)
  • vRealize Operations Manager (vROM)
  • vSphere Data Protection (vDP)
  • vCenter and supporting components

Another example of FUD from VMware, in this case from Principal Engineer Jad El-Zein, implies VSAN has low(er) overheads compared to Nutanix (Blocks = Nutanix “Blocks”):

[Image: tweet from Jad El-Zein]

I guess he forgot about the large number of VMs (and resources) required to provide VSAN functionality and basic vSphere management. Any advantage of being in-kernel (assuming you still believe it is in fact an advantage) is well and truly eliminated by the constant traffic across the hypervisor to and from the management VMs, all of which are not in-kernel, as shown below.

[Image: diagram of the vSphere/VSAN management components running as VMs in user space around the hypervisor]

I’d say it’s #AHVisTheOnlyWay and #GoNutanix, since the overheads of AHV are lower than vSphere/VSAN!

Summary:

  1. The Nutanix CVM provides a fully integrated, preconfigured, highly available and self-healing management stack. vSphere/VSAN requires numerous appliances and/or software to be installed.
  2. The Nutanix AHV Management stack (provided by the CVM) using just 8vCPUs and typically 16GB RAM provides functionality which in many cases exceeds the capabilities of vSphere/VSAN which requires vastly more resources and VMs/Appliances to provide comparable (but in many cases not equivalent) functionality.
  3. The Nutanix CVM provides these capabilities built in (with the exception of PRISM Central, which is a separate virtual appliance) rather than being dependent on multiple virtual appliances, VMs and/or 3rd-party database products for various functionality.
  4. The Nutanix management stack is also more resilient/highly available than competing products such as the VMware management components, and comes this way out of the box. As the cluster scales, the Acropolis management stack continues to automatically scale management capabilities to ensure linear scalability and consistent performance.
  5. Next time VMware/EMC try to spread FUD about the Nutanix Controller VM (CVM) being a resource hog or similar, ask them what resources are required for all functionality they are referring to. They probably haven’t even considered all the points we have discussed in this post so get them to review the above as a learning experience.
  6. Nutanix/AHV management is fully distributed and highly available. Ask VMware how to make all the vSphere/VSAN management components highly available and what the professional services costs will be to design/install/validate/maintain that solution.
  7. The next conversation to have would be “How much does VSAN cost compared to Nutanix?”, now that we understand all the resource overheads and the complexity of design/implementation/validation of the VSAN/vSphere environment, not to mention that most management components will not be highly available beyond vSphere HA. But cost is a topic for another post, as the ELA / licensing costs are the least of your worries.

To our friends at VMware/EMC, the Nutanix CVM says,

“Go ahead, underestimate me”.

 

15 thoughts on “VMware you’re full of it (FUD): Nutanix CVM/AHV & vSphere/VSAN overheads”

  1. Thank you for sharing. I have a VSAN 6.2 one cluster node with 3HDD and 1SDD, I’m also running the exact same configuration for Nutanix CE. My RAM utilization and CPU utilization after booting up my VSAN node is = 10GB of RAM and about 10% of CPU. This is just with booting up the host no VMs running on it.

  2. FUD – Fear Uncertainty and Doubt

    Saying Vmware is promoting FUD ,aren’t you doing the same through this article 🙂 ! In a so called ” small ” environment vCenter + PSC can be run in a same machine and if you use vCenter Linux Appliance we are talking about one appliance or max two Appliances. Starting vSphere 6 you do not even have option to run web client separate, and included free PostGRESql DB is fully supported in production and can scale up to max capacity.

    Rest of the products used in this article are “Products” 🙂 !

    So it would be nice and clear,.if you can compare vSan, vCenter and vSphere Hypervisor with Nutanix AHV .

    • I think you may be touching on the fact it’s difficult to compare the two products. That’s because they are very different. It’s like in the old days comparing EMC and Netapp: one Netapp box did everything, and with EMC you needed a dozen boxes to do the same. Similar situation now, VSAN needs lots of components to be fully functional, Nutanix/AHV just needs the CVM. The comparison discusses briefly that not all “products” as you put it may be required/deployed in a VSAN environment (e.g. vSphere Replication, vSphere Data Protection) and that reduces the overheads; the same is true for the Nutanix CVM, the fewer features that are used, the less of the 8 vCPUs assigned will be used, and the CVM can be right-sized. Thanks for the comment, and yes, when responding to FUD, it’s inevitable that correcting the FUD and making statements (however factual) about the competitor’s product may be interpreted as FUD.

  3. Hi Josh, are you biased by any chance? Reading this has put me off recommending Nutanix to my customers in future. My company is a Nutanix partner by the way. Bashing companies in public really doesn’t look good.

    You were previously someone I looked up to as a VCDX holder and have a few of your posts bookmarked as I prepare for my own defence.

    Also, it may be worth asking someone to proof-read your posts for spelling and grammar before publishing.

    • Hi Graeme,

      Are you biased by any chance? (Sounds silly doesn’t it)

      Can you point me in the direction of you calling out VMware & EMC for bashing other vendors?

      On a more serious note, for your VCDX defence, think about this: if you were asked “Why didn’t you choose Product X?” and you responded “Because I didn’t like a blog one of their employees wrote”, how do you think you would score?

      My point here is, if you don’t like me or this post, no problem, you are entitled to your opinion. But if you’re making recommendations to customers, disliking a person and/or blog post should have no influence on the recommendation. The decision should be based purely on the customer’s desired business outcome and their specific requirements/constraints etc.

      I think this tweet really sums up my feeling about your comment.

      https://twitter.com/josh_odgers/status/719428720046374912

      None the less, best of luck with your VCDX, I hope you have a rewarding journey like I did.

      BTW: If you can move past not liking my blog post (*tongue in cheek*) you should consider starting your NPX journey after VCDX as i’m sure you’ll find it to be a great learning experience as well.

      • The biased comment was actually sarcasm… 😉

        In the real world, I’d be a lot more professional with my response. In fact, unless the customer specifically mentioned Nutanix, maybe I wouldn’t even bring it up. I probably wouldn’t bring up VSAN either – that’s just because I don’t work with many customers who use HCI. I’ve done side-by-side quotes and even VSAN is too expensive compared to traditional storage when comparing the costs of GB / £ with like-for-like performance. With this in mind, I might present both proposals to a customer, and no doubt in future this will include Nutanix’s offerings – however when I’m out for beers with these same customers we’ll be talking about vendor wars and who the noisiest “vendor slaggers” are.

        I’m also passionate about the company I work for, and in the hundreds of technical audits I’ve written critiquing the work of our competitors, not once have I bashed their work or choice of product. Of course I might say that the choice of storage was incorrect for the workload it’s running, but I’d have a spreadsheet next to me to back up my claims – I wouldn’t tell them to go run their own numbers and “take my word for it”. I would never publish a post and bash our competitors either. Firstly my employer wouldn’t allow it, secondly that’s just not how I roll.

        I’ll keep an eye out for these posts you speak of, because it really is off-putting. It might please your employer, but it makes the rest of us cringe – well the group of about 10 of us in EMEA that discussed it earlier this morning.

        I have no doubt your VCDX has opened up a lot of doors, maybe it was even one of the reasons you got the job with Nutanix. It certified you as an elite architect, and for that I will always respect you. I just won’t ever take posts like this seriously.

        Thanks for the advice of the VCDX! Working with some great people on it and it’s already made me more aware and better at my job.

        • If you are comparing $/TB then you don’t even know what HCI even is to begin with. HCI at best can be compared with converged and every HCI quote from every vendor is cheaper than converged.

  4. VMware guys (professionals). It can be so hilarious working with them when you have real issues and they still have to defend “why/in progress” solution. Love enterprise meetings with them.

  5. Well,

    Not sure all of this is really interresting. What matter at the end is how reals VM works, not how a Benchmark can validate or not an architecture. People needs to do their own tests and not believe some nice figures.

    Who knows exactly what kind of i/o he is doing ? (Read/write?).

    It’s a fact that in kernel operations will Have less impact on latency than going through any appliance. Will it give VM better performances, i don’t know, but my feeling is in favor of in kernel i/o if i have to choose.

    For other things, talking of extenal solution like vSphere Replication or vSphere Data Protection is a way to present things : all management products on vSphere architecture can be used to manage up to 64 nodes, and some of thems can go further (vRealize Opération Manager, vCenter Server, etc…). None of them need to be replicated on each node, so, for me, speaking of that is doing some FUD, exactly what the article was against at the beginning if i remember well…

    Last things : writer said CVM use less of what is recommended (8vcpus, 24 GB ram), ok, nice, but why asking this is it not necessary ? If Nutanix use less than it recommend, could it be the same for VMware products ?

    Well : do your own test, choose if you want specific hardware or not, try the scalability and more important, the manageability of the solution (how you can upgrade, add, remove some nodes, especially if you use IT automation solutions above, like Horizon View or Cloud portal).

    • I believe the article covered many of the points you raised, I agree not all external solutions need to be replicated per node, however some would such as vDP. If you wanted to scale something like vDP in a linear fashion (like Nutanix) you would want one per node (to avoid creating a hot spot in your cluster). So call that FUD if you wish, or maybe consider it’s a fair point / comparison. You’re correct that vROM & vCenter don’t need to be deployed on every node, but they do need to be scaled up (and additional instances) to support larger environments. The larger the environment, the more instances/resources they uses (e.g.: 16 vCPUs just for vCenter) and the bigger the issue of the management layer not being highly available becomes, which is not an issue with Nutanix AHV. I also covered the fact both VMware and Nutanix resource requirements will vary based on what features are used. The fact Nutanix/AHV is fully distributed and highly available means it SHOULD use more resources than the vSphere/VSAN suite as it does not share the same scale out management layer as the article explains. So unless vSphere/VSAN uses <50% the resources of Nutanix CVM, then its using more CPU for less functionality/availability/resiliency. I 100% agree with you customers should try the scalability and manageability of both solutions. Regarding your comment about choosing if you want specific hardware, you do understand VSAN has a HCL and that HW compatibility is the source of lots of issues for VSAN customers? Don't take my word for it, checkout the VMware KB / Reddit etc. Nutanix Customers currently have a choice of 3 hardware vendors and countless HW options all of which are tested and certified, so if that's the worst thing you can say about Nutanix is we remove the complexity of checking/maintaining compatibility/interoperability of HW components, we're doing well!

  6. I’ve tested both and I have heard from both, and I think that EMC/VMW does spread misconceptoins. Your own testing will show you what is better for performance and also what is better for resilience in failure scenarios. They behave quite differently.

    I do feel there is actually a larger ecosystem for the VCE packages with VSAN, which includes more VMs than listed above. You have a few more points of management, more VMs, and more items to keep current. With NTX, that’s all one console/one point of mgmt provided and runs with/in the CVMs. If you are running NTX on top of VMW, then you need vCenter in both cases.

  7. Disclosure – EMCer (Chad).

    Josh – as always, thank you for adding the dialog.

    Even when I disagree with you, I respect your right to do it, and your furious passion for the architecture you represent. I’m no different – and I would wager it’s because we both believe in it, not because we’re being shills (I hope not!)

    In spite of the obvious error of commenting in your forum (it’s your microphone after all 🙂 – I’ll do it anyway…

    couple things I wanted to put out there:

    1) I don’t think the bulk of my comment was FUD. It was factual. VxRail **IS** the only HCIA that is co-engineered with VMware. VxRail **IS** the only HCIA with single end-to-end support inclusive of the vSphere stack (in all other cases it’s a best effort dual vendor model). VxRail **IS** the only HCIA for VMware that has a vmkernel integrated storage stack.

    Those are not opinions, they are FACTS. Customers can weigh the relative value to other benefits/weaknesses.

    If a customer wants a HCIA for VMware, I think they would be crazy not to evaluate VxRail. If they aren’t using VMware, then it’s simply not the choice for them.

    I’ll also point out that I didn’t refer to another competitor directly, and maybe I shouldn’t have even made the reference. I try like heck to not refer to competitors in my posts (or in general) – though sometimes I fall down.

    It’s interesting that you thought I was talking about you folks when I noted the VSA model – I wasn’t. I was thinking about Springpath. But, I get that you might have thought I was thinking about Nutanix and it’s VSA implementation.

    2) If indeed, those are statements of FACT, then my second comment (on resource consumption of VSAs) falls into an area where opinions and points of view then apply.

    My point of view (and there is data to support) is that there are downsides (and I’m sure upsides) of the transactional storage stack not being in kernel space. A material part of that is during times where the kernel resources are tightly constrained (often not the place where people are doing testing).

    Your point of view disagrees – and that will be the battle that will play out in the marketplace.

    I think I went astray in making the comment about resource consumption – while that’s not good, the resources consumption is a “your mileage may vary” area and a point which is difficult to be definitive. It’s a “it depends” answer.

    My main argument is about behavior of kernel-space vs. user/guest-space for core functions… For another example, we have a ScaleIO customer with 10PB deployed (mostly vSphere, but some KVM) – and until the ScaleIO SDC component was a kernel module for vSphere, it was non-optimal, particularly under kernel contention…. which leads me to the next point…

    3) the examples of other things that consume resources you point out (and compare with Nutanix’s value) I would note are not sensitive about kernel-level behavior, and their failure under very high kernel-resource contention would not be fatal (VUM, VDP, VROps, even vCenter etc). Conversely Networking, persistence – control at the kernel level of these things is important – if they get bumpy VMs stop running.

    It can be argued about whether VMware’s tight control of the vmkernel is good/bad – but fundamentally, it is something that is correlated with the reliability of the vmkernel.

    In any case, the debate will continue, and I never underestimate anyone, certainly not you folks!

    Have a great day – and see you on the battlefield!