Jetstress Performance Testing on Nutanix Acropolis Hypervisor (AHV) – Part 2 – Performance after a VM Migration

This is Part 2 of the Jetstress performance testing on Nutanix Acropolis Hypervisor (AHV) series of videos.

This video shows the following:

  1. Jetstress performance after the VM is migrated from the node where the databases were created to another node in the NDSF cluster.

This test is the first of several showing the performance impact of migrating VMs to other nodes in the same cluster, where not all data is stored locally. This is demonstrated using the advanced reporting capabilities of NDSF.

Note: As with the previous videos, this demonstration does not show the peak performance that can be achieved by Jetstress on Nutanix.

Part 2


Related Articles:

Microsoft Exchange 2013/2016 Jetstress Performance Testing on Nutanix Acropolis Hypervisor (AHV)

Virtualization of business-critical applications has been commonplace for a number of years; however, it is less well known that these business-critical applications are also regularly deployed on Nutanix Hyper-converged Infrastructure (HCI), as I discuss in the following post:

Think HCI is not an ideal way to run your mission-critical x86 workloads? Think again!

I am regularly involved in discussions with customers about how well MS Exchange and other business-critical applications perform on Nutanix, especially during:

  • Storage software upgrades (Acropolis Base Software)
  • Hypervisor upgrades
  • VM migrations (e.g. vMotion)
  • Failure scenarios

Customers also ask how Data Locality works with workloads like Exchange which have large amounts of data, what overheads are there if any, how much data is served local vs remote and so on.

As a result, I have created the following series of videos demonstrating:

  • Setting a baseline for Jetstress performance on Node 1
  • Migrating VM to a 2nd node and repeating the Jetstress performance test
  • Migrating VM to a 3rd node and repeating the Jetstress performance test
  • Migrating VM to a 4th node and repeating the Jetstress performance test
  • Migrating the VM back to the 1st node and repeating the Jetstress performance test
  • Repeating the test on the 2nd, 3rd and 4th nodes (second Jetstress run for comparison)
  • Performing a Jetstress performance test on a VM with the local Nutanix Controller VM (CVM) offline (to simulate a CVM failure, Storage Maintenance or Upgrade scenarios)

Throughout these videos I will show advanced Nutanix Distributed Storage Fabric (NDSF) performance statistics, such as how Write I/O is being served and what percentage of data is being served locally versus remotely.

Enjoy the videos:

Part 1 – Setting a baseline for Jetstress performance on Nutanix AHV

Part 2 – Migrating Jetstress to 2nd node and repeating Jetstress test

Part 3 – Migrating Jetstress to 3rd node and repeating Jetstress test

Part 4 – Migrating Jetstress to 4th node and repeating Jetstress test

Part 5 through 8 – Repeat Jetstress Tests on all four nodes. (Coming soon)

Part 9 – Take the local Nutanix Controller VM (CVM) offline and repeat test (Coming soon)

Part 10 – Scale out Performance Validation (Coming soon)

Related Articles:

vMotion issues when using NFS storage with vSphere 5.5 Update 2

When vMotioning a VM residing on an NFS datastore (specifically, when migrating the VM's .vswp file), you may see the following error:

vMotion fails with the error: remote host IP_Address failed with status Busy

This issue originally occurred in vSphere 4.1 but appears to have reappeared in vSphere 5.5 Update 2.

Luckily, there is a workaround for now, until VMware can investigate and resolve the problem.

The workaround is to modify the advanced setting “Migrate.VMotionResolveSwapType” from the default of 1 to 0 on both the source and destination hosts. To solve this for your entire cluster, every host needs to be modified.

To modify the setting:
  1. Launch the vSphere Client and log in to your vCenter Server.
  2. Select the source ESX host and then click the Configuration tab.
  3. Click Software > Advanced Settings > Migrate.
  4. Under the Migrate options, locate the line containing Migrate.VMotionResolveSwapType. By default, it is set to 1.
  5. Change the value to 0.
  6. Click OK.
  7. Repeat Steps 2 to 6 for all hosts in the cluster.
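Clicking through every host in the vSphere Client is tedious for larger clusters, so the steps above can also be scripted. The following is a minimal sketch using VMware PowerCLI (it assumes PowerCLI is installed and that the vCenter address and cluster name, shown here as placeholders, are replaced with your own); as always with advanced settings, test in a lab before applying to production:

```powershell
# Connect to vCenter first (prompts for credentials).
# "vcenter.example.com" is a placeholder for your vCenter Server.
Connect-VIServer -Server vcenter.example.com

# Set Migrate.VMotionResolveSwapType to 0 on every host in the cluster.
# "MyCluster" is a placeholder cluster name.
Get-Cluster "MyCluster" | Get-VMHost | ForEach-Object {
    Get-AdvancedSetting -Entity $_ -Name "Migrate.VMotionResolveSwapType" |
        Set-AdvancedSetting -Value 0 -Confirm:$false
}
```

When VMware resolves the underlying issue, the same script with `-Value 1` restores the default.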

The official VMware KB article is below.

vMotion fails with the error: remote host IP_Address failed with status Busy (1031636)