Storage I/O Control (SIOC) is a feature introduced in VMware vSphere 4.1, designed to allow prioritization of storage resources during periods of contention across a vSphere cluster. This situation is often described as the "noisy neighbor" problem, where one or more VMs have a negative impact on other VMs sharing the same underlying infrastructure.
For traditional centralized shared storage, enabling SIOC is a "no brainer": even the default settings ensure more consistent performance during periods of storage contention, with few if any downsides. SIOC does this by managing, and potentially throttling, the device queue depth based on the "shares" assigned to each virtual machine, ensuring consistent performance across ESXi hosts.
The diagrams below show the impact on three (3) identical VMs with the same disk "shares" values, with and without SIOC, in a traditional centralized storage environment (i.e., SAN/NAS).
Without Storage I/O Control
With Storage I/O Control
As shown above, without SIOC, VMs with equal share values residing on different ESXi hosts can end up with an undesirable result: one VM has double the available storage queue compared to the VMs residing on another host. In comparison, SIOC ensures VMs with the same share value get equal access to the underlying storage queue.
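To make the imbalance concrete, here is a minimal sketch of the arithmetic behind the diagrams. All numbers (a device queue depth of 32 per host, the host and VM names) are illustrative assumptions, not actual SIOC internals: without SIOC each host splits only its own device queue among its local VMs, while SIOC in effect divides the cluster-wide queue in proportion to VM shares.

```python
# Illustrative sketch only: per-host queue splitting vs. cluster-wide
# share-based allocation. The queue depth of 32 and the host/VM names
# are assumptions for the example, not real SIOC implementation detail.

DEVICE_QUEUE_DEPTH = 32  # assumed per-host device queue depth to the shared LUN


def without_sioc(hosts):
    """Each host divides its own device queue equally among its local VMs."""
    alloc = {}
    for vms in hosts.values():
        for vm in vms:
            alloc[vm] = DEVICE_QUEUE_DEPTH / len(vms)
    return alloc


def with_sioc(hosts, shares):
    """Cluster-wide view: the total queue is divided in proportion to shares."""
    total_queue = DEVICE_QUEUE_DEPTH * len(hosts)
    total_shares = sum(shares.values())
    return {vm: total_queue * shares[vm] / total_shares
            for vms in hosts.values() for vm in vms}


# One VM on the first host, two VMs on the second; all shares "Normal" (1000).
hosts = {"esxi-01": ["vm1"], "esxi-02": ["vm2", "vm3"]}
shares = {"vm1": 1000, "vm2": 1000, "vm3": 1000}

print(without_sioc(hosts))      # vm1: 32.0, vm2: 16.0, vm3: 16.0
print(with_sioc(hosts, shares)) # each VM gets an equal ~21.3 slots
```

With equal shares, the lone VM on the first host would otherwise enjoy twice the queue depth of its neighbors; the share-proportional division evens this out across hosts.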
While SIOC is an excellent feature, it was designed to address a problem which is no longer a significant factor with the Nutanix scale-out, shared-nothing architecture.
The issue of "noisy neighbors" or storage contention in the Storage Area Network (SAN) is all but eliminated, as all datastores (or "containers" in Nutanix terminology) are serviced by every Nutanix Controller VM in the cluster, and under normal circumstances upwards of 95% of read I/O is serviced by the local Controller VM. Nutanix refers to this feature as "Data Locality".
Data Locality ensures data being written and read by a VM remains on the Nutanix node where the VM is running, reducing the latency of accessing data across a Storage Area Network and ensuring that a VM reading data on one node has minimal or no impact on VMs on other nodes in the cluster.
Write I/O is also distributed throughout the Nutanix cluster, which means no single node is monopolized by (write) replication traffic.
Storage I/O Control was designed around the concept of a LUN or NFS mount (from vSphere 5.0 onwards) served by a central storage controller, which has historically been the most typical deployment model for VMware vSphere environments.
As such, by limiting the LUN queue depth, SIOC allows all VMs on the LUN either an equal share of the available queue or, by specifying "share" values on a per-VM basis, prioritization based on importance.
By default, all virtual hard disks have a share value of "Normal" (1000 shares). Therefore, if an individual VM needs higher storage priority, its share value can be increased.
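The share arithmetic on a single LUN can be sketched as follows. The share values mirror the vSphere disk-share presets (Low = 500, Normal = 1000, High = 2000); the queue depth of 32 and the VM names are assumptions for illustration only.

```python
# Illustrative sketch: dividing a single LUN's queue depth in proportion
# to per-VM share values. The queue depth and VM names are assumptions;
# share presets follow vSphere convention (Low=500, Normal=1000, High=2000).

QUEUE_DEPTH = 32  # assumed LUN device queue depth


def allocate_by_shares(vm_shares):
    """Return each VM's slice of the LUN queue, proportional to its shares."""
    total = sum(vm_shares.values())
    return {vm: QUEUE_DEPTH * s / total for vm, s in vm_shares.items()}


# Two "Normal" VMs and one VM bumped to "High" (2000 shares):
print(allocate_by_shares({"app": 2000, "web": 1000, "db": 1000}))
# app gets 16 queue slots; web and db get 8 each
```

Doubling one VM's shares doubles its slice of the queue relative to a "Normal" VM, which is the whole prioritization mechanism in miniature.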
Note: In general, modifying VM Virtual disk share values should not be required.
As Nutanix has one storage controller (CVM) per node, all of which actively service I/O to the datastore, SIOC is not required and provides no benefit.
For more information about SIOC in traditional NAS environments see: “Performance implications of Storage I/O Control – Enabled NFS Datastores in VMware vSphere 5.0”
As such, for Nutanix environments it is recommended that SIOC be disabled, and that DRS "anti-affinity" or "VM to host" rules be used to separate high-I/O VMs.