VMware View 5.1 Local Mode Check Out fails with “Internal Error” at 10%

I had an interesting issue today, where a VMware View virtual desktop failed to check out (i.e., Local Mode), giving a “VMware View Client – Internal Error” message (as shown below) at the 10% mark.

[Image: View_LocalMode_InitializingCheckOutPaused_InternalError]

I went into Monitoring > Events and came across the following error while trying to diagnose the issue.

[Image: LocalModeEventError]

Talk about a useless error message!!

However, as it turns out, the solution was actually very simple. All you need to do is the following:

Log in to VMware View Administrator, and under View Configuration, click “Servers”. Select the “Transfer Servers” tab, highlight your Transfer Server (as shown below) and press the “Enter Maintenance Mode” button.

[Image: TransferServerEnterMaintenanceMode]

Once the “Status” of the Transfer Server changes to Maintenance mode, perform a controlled shutdown of the VM.

Next, edit the settings of the virtual machine and remove any/all floppy drives from the VM’s hardware.

Now power the VM back on. While the VM is powering on, you can return to VMware View Administrator, browse back to the “Transfer Servers” tab, highlight your Transfer Server and press the “Exit Maintenance Mode” button. The status of the Transfer Server will be “Pending”, and once the Transfer Server is back online its status will change to “Ready”.

Now retry your Check Out operation. You should see something similar to the below, where the Check Out process proceeds as it did previously, up to the 10% mark.

[Image: View_LocalMode_InitializingCheckOutProcess6%]

Where the Check Out previously failed, it should now proceed past that point, similar to the below example.

[Image: LocalModeCheckingOutProgress]

After the Check Out process has completed, you should see the message “Log on to Local desktop” (as shown below).

[Image: LocalMode_LogOnToLocalDesktop]

So what do we learn from this?

It is always a good idea to remove unnecessary devices from your VMs. 😉

VMware View 5.1 Reference Architecture on NetApp Storage (NFS)

I just saw this reference architecture from VMware and NetApp, and wanted to share it with you.

It gives excellent examples of the benefits of using VCAI (View Composer Array Integration) and the VSA (View Storage Accelerator).

Anyone deploying a significant number of VMware View virtual desktops should review this architecture.

VMware View 5.1 Reference Architecture on NetApp Storage

Example Architectural Decision – Memory Reservation for Virtual Desktops

Problem Statement

In a VMware View (VDI) environment with a large number of virtual desktops, the potential Tier 1 storage requirement for vswap files (*.vswp) can make the solution less attractive from an ROI perspective and drive a high upfront storage cost. What can be done to minimize the storage required for the vswap files, thus reducing the overall storage requirements for the VMware View (VDI) solution?

Assumptions

1. vSwap files are placed on Tier 1 shared storage with the Virtual machine (default setting)

Motivation

1. Minimize the storage requirements for the virtual desktop solution
2. Reduce the upfront cost of storage for VDI
3. Ensure the VDI solution gets the fastest ROI possible without compromising performance

Architectural Decision

Set the VMware View Master Template with a 50% memory reservation so all VDI machines deployed have a 50% memory reservation

Justification

1. Setting 50% reservation reduces the storage requirement for vSwap by half
2. Setting only 50% ensures some memory overcommitment and transparent page sharing can still be achieved
3. Memory overcommitment is generally much lower than CPU overcommitment (around 1.5:1 for VDI)
4. Reserving 50% of a VDI machine’s RAM is cheaper than the equivalent shared storage
5. A memory reservation will generally provide increased performance for the VM
6. Reduces or removes the requirement for (and benefit of) a dedicated datastore for vSwap files
7. Transparent page sharing (TPS) will generally only give up to 30-35% memory savings
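To illustrate points 1 and 4 above, here is a quick back-of-the-envelope sizing sketch. The pool size (2,000 desktops) and per-VM RAM (2 GiB) are illustrative assumptions, not figures from this decision; the formula itself reflects vSphere behaviour, where the .vswp file size equals configured RAM minus the memory reservation.

```python
# Sketch: vswap storage sizing under a memory reservation.
# Assumptions (not from the original post): 2000 desktops, 2 GiB RAM each.
# vSphere behaviour: vswap file size = configured RAM - memory reservation.

def vswap_gib(ram_gib: float, reservation_pct: float) -> float:
    """Size of a single VM's .vswp file in GiB for a given reservation."""
    return ram_gib * (1 - reservation_pct / 100)

desktops = 2000      # assumed pool size
ram_per_vm = 2       # assumed GiB per desktop

no_reservation = desktops * vswap_gib(ram_per_vm, 0)
half_reservation = desktops * vswap_gib(ram_per_vm, 50)

print(f"vswap @ 0% reservation:  {no_reservation:.0f} GiB")   # 4000 GiB
print(f"vswap @ 50% reservation: {half_reservation:.0f} GiB") # 2000 GiB
```

As expected, the 50% reservation halves the Tier 1 vswap footprint for the pool.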

Implications

1. Less memory overcommitment will be achieved

Alternatives

1. Set a higher memory reservation of 75% – This would further reduce the shared storage requirement while still allowing for 1.25:1 memory overcommitment
2. Set a 100% memory reservation – This would eliminate the vSwap file but prevent memory overcommitment
3. Set a lower memory reservation of 25% – This would not provide significant storage savings, and as transparent page sharing generally only achieves up to 30-35%, there would still be a sizable requirement for vSwap storage with minimal benefit
4. Create a dedicated datastore for vSwap files on lower-tier storage
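The trade-off between the alternatives above can be tabulated with a short sketch. It uses the post’s own rule of thumb that a reservation of r% still leaves (100 − r)% available for overcommitment (e.g. 75% reserved allows up to 1.25:1); the pool size and per-VM RAM are illustrative assumptions.

```python
# Sketch comparing the reservation alternatives. Assumptions (not from
# the original post): 2000 desktops, 2 GiB RAM each. Uses the post's
# rule of thumb: max overcommit = 1 + unreserved fraction.

RAM_GIB = 2
DESKTOPS = 2000

def evaluate(reservation_pct: int) -> tuple[float, float]:
    """Return (total pool vswap in GiB, max overcommit ratio)."""
    unreserved = 1 - reservation_pct / 100
    vswap_total = DESKTOPS * RAM_GIB * unreserved
    max_overcommit = 1 + unreserved
    return vswap_total, max_overcommit

for pct in (25, 50, 75, 100):
    vswap, ratio = evaluate(pct)
    print(f"{pct:>3}% reservation: vswap {vswap:>6.0f} GiB, "
          f"overcommit up to {ratio:.2f}:1")
```

The 50% decision sits in the middle: it halves the vswap storage while still permitting roughly the 1.5:1 overcommitment cited in the justification, whereas 100% eliminates vswap but removes overcommitment entirely.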