My Journey to Double-VCDX

It was back in 2011 when I started my journey to VCDX, a fantastic learning experience which has helped improve my skills as an Enterprise Architect.

After achieving VCDX-DCV in May 2012, I continued to put the skills and experience I gained during my VCDX journey into practice, and came to the realization of how little I actually know, and how much more there is to learn.

I was looking for another certification challenge; however, there were no additional VCDX certification tracks at the time. When VCDX-Cloud and VCDX-Desktop were released, I figured I should attempt VCDX-Cloud, since my VCDX-DCV submission was actually based on a vCloud design.

At the time I didn’t have my VCAPs for Cloud, so, as my VCAP-CID Exam Experience and VCAP-CIA Exam Experience posts explain, I formed a study group and sat and passed both exams over a period of a few months.

Next came the VCDX application phase. I prepared my design in a similar fashion to my original application, which basically meant reviewing the VCDX-Cloud Blueprint and ensuring all sections were covered.

The sad part about submitting a second VCDX is that there is no requirement to defend in person again. As a result, I suspect the impression is that achieving a second VCDX is easier. While I think this is somewhat true, as the defence is no walk in the park, the VCDX submission must still be of an expert standard.

I suspect that first-time VCDX applicants may be given the benefit of the doubt if the documentation is not clear, has mistakes or contradicts itself in some areas, as these points can be clarified or tested by the panellists during the design defence.

In the case of subsequent applications, I suspect that Double-X candidates may not get the benefit of the doubt, as these points cannot be clarified. As a result, it could be argued the quality of the documentation needs to be of a higher standard, so that everything in the design is clear and does not require clarification.

My tips for Double-X Candidates:

In addition to the tips in my original VCDX Journey Post:

  1. Ensure your documentation is of a level which could be handed to a competent engineer and implemented with minimal or no assistance.
  2. Ensure you have covered all items in the blueprint to a standard which is higher than your previous successful VCDX submission.
  3. Make your design decisions clear and concise and ensure you have cross referenced relevant sections back to detailed customer requirements.
  4. Treat your Double-VCDX submission equally if not more seriously than your first application. Ensure you dot all your “I”s and cross your “T”s.

I was lucky enough to have existing Double-VCDX and Nutanix colleague Magnus Andersson (@magander3) review my submission and give some excellent advice. So a big thanks, Magnus!

What next?

Well, just like when I completed my first VCDX, I was already looking for another challenge. Luckily, I have already found the next certification and am well on the way to submitting my application for the Nutanix Platform Expert (NPX).

The VCDX-DCV and VCDX-Cloud have both been awesome learning experiences and I think both have proven to be great preparation for my NPX attempt, so stay tuned and with a bit of luck, you’ll be reading my NPX Journey in the not too distant future.

Support for Active Directory on vSphere

I heard something interesting today from a customer: a storage vendor who sells predominantly block storage products was trying to tell them that Active Directory domain controllers are not supported on vSphere when using NFS datastores.

The context was that the vendor was attempting to sell a traditional block-based SAN, and they were trying to compete against Nutanix. The funny thing is, Nutanix supports block storage too, so it was an uneducated and pointless argument.

Nonetheless, the topic of support for Active Directory on vSphere using NFS datastores is worth clarifying.

There are two Microsoft TechNet articles which cover support for this topic:

  1. Things to consider when you host Active Directory domain controllers in virtual hosting environments
  2. Support policy for Microsoft software that runs on non-Microsoft hardware virtualization software

Note: There is no mention of storage protocols (Block or File) in these articles.

The second article states:

for vendors who have Server Virtualization Validation Program (SVVP) validated solutions, Microsoft will support server operating systems subject to the Microsoft Support Lifecycle policy for its customers who have support agreements when the operating system runs virtualized on non-Microsoft hardware virtualization software.

VMware has validated vSphere as an SVVP solution, which can be verified here: http://www.windowsservercatalog.com/svvp.aspx

The next interesting point is:

If the virtual hosting environment software correctly supports a SCSI emulation mode that supports forced unit access (FUA), un-buffered writes that Active Directory performs in this environment are passed to the host operating system. If forced unit access is not supported, you must disable the write cache on all volumes of the guest operating system that host the Active Directory database, the logs, and the checkpoint file.

Funnily enough, this is the same requirement as for Exchange, but where the Exchange team decided not to support it, the wider organisation has a much more sensible policy: SCSI emulation (i.e. VMDKs on NFS datastores) is supported as long as the storage ensures writes are not acknowledged to the OS prior to being written to persistent media (i.e. not volatile memory such as RAM).

This is a very reasonable support statement and one which has a solid technical justification.
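The requirement boils down to a simple ordering rule: persist first, acknowledge second. The sketch below is purely my own illustration (Python, not anything from the Microsoft or VMware articles); the closest POSIX analogue of honouring Forced Unit Access is flushing to stable media with fsync() before the write is acknowledged to the caller.

```python
import os

def fua_write(path, data):
    """Illustrative only: the write is acknowledged to the caller only
    after the data has reached persistent media, mimicking what a
    storage stack must do when it honours the FUA bit."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o644)
    try:
        os.write(fd, data)
        os.fsync(fd)  # flush to stable storage BEFORE acknowledging
    finally:
        os.close(fd)
    return True  # acknowledgement happens only after the flush
```

Active Directory's un-buffered database and log writes rely on exactly this ordering; a storage layer that acknowledged from volatile cache first would break it.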

In summary, running Active Directory is supported on vSphere, including both block (iSCSI, FC, FCoE) and file (NFS) based datastores, where the storage vendor complies with the above requirements.

So check with your storage vendor to confirm the storage you’re using is compliant.

Nutanix complies 100% with these requirements for both block and file storage. For more details see: Ensuring Data Integrity with Nutanix – Part 2 – Forced Unit Access (FUA) & Write Through

For more information about how NFS datastores provide true block level storage to Virtual Machines via VMDKs, check out Emulation of the SCSI Protocol which shows how all native SCSI commands are honoured by VMDKs on NFS.
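To picture how a flat file can present true block-level storage, here is a toy sketch (my own illustration, not VMware's implementation): SCSI-style reads and writes addressed by logical block address (LBA) simply map to byte offsets within a backing file, and a write carrying the FUA bit is flushed before being acknowledged.

```python
import os

BLOCK = 512  # bytes per logical block, as on classic SCSI disks

class FileBackedDisk:
    """Toy model of a file (think: a VMDK on an NFS datastore)
    presenting block-level storage to a guest."""

    def __init__(self, path, blocks):
        self.path = path
        with open(path, "wb") as f:
            f.truncate(blocks * BLOCK)  # pre-size the backing file

    def write(self, lba, data, fua=False):
        with open(self.path, "r+b") as f:
            f.seek(lba * BLOCK)          # LBA -> byte offset in the file
            f.write(data)
            if fua:                      # FUA bit set: persist before ack
                f.flush()
                os.fsync(f.fileno())

    def read(self, lba, count=1):
        with open(self.path, "rb") as f:
            f.seek(lba * BLOCK)
            return f.read(count * BLOCK)
```

The guest sees ordinary block semantics either way; whether the backing store is a LUN or a file on NFS is invisible at the SCSI layer.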

Related Articles:

  1. Running Domain Controllers in Hyper-V

This post covers the requirement for FUA, the same as with vSphere, and recommends the use of a UPS (to ensure write integrity) as well as enterprise-grade drives, both of which are also applicable to vSphere deployments.

NFS Storage and the “Block Dinosaur”

Disclaimer: If you don’t have a sense of humour and/or you just really love block storage, Parental Guidance is recommended.


For as long as I can remember, it has not been uncommon for I.T “professionals” working in the storage industry or in a storage role to make statements about NFS (Network File System) as if it is a 2nd class citizen in the storage world.

I’ve heard any number of statements such as:

  • NFS is slow(er) than block storage
  • NFS (datastores) don’t honour all SCSI commands
  • NFS is not scalable
  • NFS uses significantly more CPU than block storage
  • NFS does not support <insert your favourite technology here>

People making these statements are known as “Block Dinosaurs”.

The definition of “Block Dinosaur” is as follows:

“Block Dinosaur”

 Pronounced: [blok] – [dahy-nuh-sawr]

noun
  1. a Homo sapiens becoming less common in the wild since the widespread use of NFS with vSphere and Hyper-Converged solutions
  2. a species soon to be extinct, which attempts to spread Fear, Uncertainty and Doubt (FUD) about the capabilities of NFS storage
  3. someone who provides storage which is unwieldy in size, inflexible and requires outdated technologies such as “LUNs”, “Zoning” & “Masking”
  4. a person unable to adapt to change who continues to attempt to sell outdated equipment: e.g.: The SAN dinosaur recommended an outdated product that was complicated and cost the company millions to install and operate.
  5. a person who does not understand SCSI protocol emulation and/or has performed little or no practical testing of NFS storage on which to base an informed opinion
  6. a person who drinks from the fire hose of their respective employer or predominantly block storage vendor

Synonyms for “Block Dinosaur”

  1. SAN zombie
  2. Old-School SAN salesman
  3. SAN hugger
Origin of “Block Dinosaur”

Believed to have originated in Hopkinton, MA, USA, but quickly spread to Santa Clara, California and on to Armonk, NY before going global after frequent “parroting” of anti-NAS or anti-NFS statements.

Recent “Block Dinosaur” sightings:

The only cool “Block Dinosaurs” are a different species and can only be found at Lego Land.
Final (and more serious) Thought:

I hope this post came across as light-hearted, as it’s not meant to upset anyone. At the same time, I would really like the ridiculous debate about block vs file storage to be put to bed; it’s 2015, people, there are much more important things to worry about.

The fact is there are advantages to both block and file storage, and reasons why you may use one over the other depending on requirements. At the end of the day both can provide enterprise-grade storage solutions which deliver business outcomes to customers, so there is no need to bash one or the other.