Storage Capabilities not appearing after Installing & configuring the Netapp VASA Provider 1.0

 

When installing the Netapp VASA provider 1.0 today, I was surprised to find the Storage Capabilities were not being populated (as shown in the screenshot below). As I have installed and configured the Netapp VASA provider numerous times without issue, this was very annoying.

StorageCapabilitiesBlank

 

I had completed the configuration of the VASA provider (see below) and successfully registered the provider.

netappvasaconfig_blank

I confirmed the provider was showing up under “Solution providers” in the vSphere Client.

NetappVASASolutionProviders

I then confirmed the vendor provider had been registered, as shown below.

VendorProviders

After unregistering the provider, then uninstalling and re-installing the Netapp VASA Provider software, the problem was still not solved.
After a quick Google search, much to my surprise, I couldn’t find anything on this issue.

It turns out it was quite a simple “fix”, if I can call it that.

For whatever reason, I had configured the storage system first, then proceeded to complete the other steps.

If you’re having this issue, you can avoid the problem by configuring the VASA provider (shown below) in this order:

1. Configure the username and password for communication with vCenter
2. Configure the vCenter server details and register the provider
3. Register the storage system

 

netappvasaconfig

 

Shortly thereafter, the storage capabilities all appeared successfully.

StorageCapabilitiesWorking

The above contradicts the official Netapp installation guide for the VASA provider, which states on P11 that you can add storage systems at any time, but it solved my problem.

Hope you found this useful.

vCloud Suite 5.1 Upgrade Guide

I just came across an unofficial vCloud Suite 5.1 upgrade guide by Jad El-Zem which covers the steps involved and a few gotchas to watch out for.

VMware Blogs – vCloud Suite 5.1 Solution Upgrade Guide

Example Architectural Decision – Securing vMotion & Fault Tolerance Traffic in IaaS/Cloud Environments

Problem Statement

vMotion and Fault Tolerance logging traffic is unencrypted, and anyone with access to the same VLAN/network could potentially view and/or compromise this traffic. How can the environment be made as secure as possible in a multi-tenant/multi-department environment?

Assumptions

1. vMotion and FT are required in the vSphere cluster/s (although FT is currently not supported for VMs hosted with vCloud Director)
2. IP Storage is being used, and the vNetworking design has 2 x 10Gb NICs for non-Virtual Machine traffic such as VMKernels, and 2 x 10Gb NICs available for Virtual Machine traffic (similar to Example vNetworking Design for IP Storage)
3. VI3 or later

Motivation

1. Ensure maximum security and performance for vMotion and FT traffic
2. Prevent vMotion and/or FT traffic impacting production virtual machines

Architectural Decision

vMotion & Fault Tolerance logging traffic will each have a dedicated non-routable VLAN, hosted on a dvSwitch which is physically separate from the virtual machine distributed virtual switch.

Justification

1. vMotion / FT traffic does not require external (or public) access
2. A VLAN per function ensures maximum security / performance with minimal design / implementation overhead
3. Prevents vMotion and/or FT traffic from impacting production virtual machines (and vice versa), which could occur if the traffic shared one or more broadcast domain/s
4. Ensures vMotion / FT traffic cannot leave their respective dedicated VLAN/s and potentially be sniffed

Implications

1. Two (2) VLANs with private IP ranges are required to be presented over 802.1q trunk connections to the appropriate pNICs
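In the actual design the portgroups live on a dvSwitch and would be created via vCenter; purely as an illustration of the VLAN/VMkernel separation, a standard-vSwitch equivalent on a single ESXi host might look like the following esxcli sketch. The VLAN IDs, IP addresses, and vSwitch/portgroup names are hypothetical examples only.

```shell
# Hypothetical example: dedicated non-routable VLAN for vMotion (VLAN 101)
esxcli network vswitch standard portgroup add --portgroup-name=vMotion --vswitch-name=vSwitch1
esxcli network vswitch standard portgroup set --portgroup-name=vMotion --vlan-id=101

# Repeat for FT logging on its own VLAN (VLAN 102)
esxcli network vswitch standard portgroup add --portgroup-name=FT-Logging --vswitch-name=vSwitch1
esxcli network vswitch standard portgroup set --portgroup-name=FT-Logging --vlan-id=102

# VMkernel interfaces with private (RFC 1918) addresses, one per function
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=vMotion
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=192.168.101.11 --netmask=255.255.255.0 --type=static
esxcli network ip interface add --interface-name=vmk2 --portgroup-name=FT-Logging
esxcli network ip interface ipv4 set --interface-name=vmk2 --ipv4=192.168.102.11 --netmask=255.255.255.0 --type=static
```

vMotion and FT logging would then be enabled on the respective VMkernel ports via the vSphere Client.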

Alternatives

1. vMotion / FT share the ESXi management VLAN – this would increase the risk of traffic being intercepted and “sniffed”
2. vMotion / FT share a dvSwitch with Virtual Machine networks while still running within dedicated non routable VLANs over 802.1q