VUM issues? – Remediating an ESXi 5.x host fails (Error code:15)

I just had a most annoying issue with a freshly installed ESXi 5.1 host, build 1065491.

When remediating the host using VUM, I received the below error:

Remediating an ESXi 5.x host fails with the error: The host returns esxupdate error code:15. The package manager transaction is not successful (2030665)

After a quick Google search I came across this KB.

However, one of the steps (below) required that I copy files from a working 5.1 host to resolve the issue. Since I didn’t have a working 5.1 host in this environment I was stuck, but I decided to proceed and see if I could resolve the issue without this step.

“5. Use WinSCP to copy the folders and files from the /locker/packages/5.0.0/ or /locker/packages/5.1.0/ directory on a working host to the affected host.”

However, if you skip the above step and follow the instructions below, you will still be able to remediate your hosts.

Note: The below is a slightly modified version from the KB listed above.

  1. Put the host in Maintenance Mode. (Optional, although recommended.)
  2. Navigate to the /locker/packages/5.0.0/ folder on the host (or /locker/packages/5.1.0/ on an ESXi 5.1 host).
  3. Rename the folder to /locker/packages/5.0.0.old (or /locker/packages/5.1.0.old on an ESXi 5.1 host).
  4. As the root user, recreate the folder by running the command:

    For ESXi 5.0:

    mkdir /locker/packages/5.0.0/

    For ESXi 5.1:

    mkdir /locker/packages/5.1.0/
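The rename-and-recreate sequence in steps 2–4 can be sketched as a short shell session. On a real ESXi host these commands are run as root over SSH against /locker directly; the LOCKER variable and the initial mkdir below are assumptions added purely so the sequence can be dry-run safely on another machine.

```shell
# Sketch of steps 2-4 above. On a real ESXi host LOCKER would be /locker;
# it defaults to a scratch directory here so the sequence can be dry-run safely.
LOCKER=${LOCKER:-/tmp/locker-demo}
VER=${VER:-5.1.0}   # use 5.0.0 on an ESXi 5.0 host

mkdir -p "${LOCKER}/packages/${VER}"                            # simulate the existing folder
mv "${LOCKER}/packages/${VER}" "${LOCKER}/packages/${VER}.old"  # step 3: rename it out of the way
mkdir -p "${LOCKER}/packages/${VER}"                            # step 4: recreate it empty
```

On the host itself you would skip the first mkdir (the folder already exists) and use the paths exactly as listed in the steps above.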

  • Verify that there is sufficient free space in the root folder using this command:

    vdf -h

  • Check the locker location using this command:

    ls -ltr /

    If the locker is not pointing to a datastore:

    1. Rename the old locker file using this command:

      mv /locker /locker.old

    2. Recreate the symbolic link using this command:

      ln -s /store/locker /locker
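The locker symlink repair can likewise be sketched as shell commands. ROOT stands in for / on the ESXi host; it, and the two mkdir lines that fake a broken locker, are assumptions added so the sequence can be tried safely off-host.

```shell
# Sketch of the locker symlink repair. ROOT stands in for / on a real ESXi host;
# the two mkdir lines simulate a broken setup so this can be tried safely off-host.
ROOT=${ROOT:-/tmp/esxi-demo}
mkdir -p "${ROOT}/store/locker"                 # simulate the datastore-backed locker
mkdir -p "${ROOT}/locker"                       # simulate /locker as a plain directory
mv "${ROOT}/locker" "${ROOT}/locker.old"        # step 1: move the bad locker aside
ln -s "${ROOT}/store/locker" "${ROOT}/locker"   # step 2: recreate the symbolic link
```

On the host the two commands are exactly mv /locker /locker.old and ln -s /store/locker /locker, as listed above.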

Now retry remediating your hosts and you should be successful.



4 thoughts on “VUM issues? – Remediating an ESXi 5.x host fails (Error code:15)”

  1. Hi Josh,

    In item 1, when you say host, do you mean the physical host – potentially hosting dozens or maybe hundreds of VMs? If so, does this spawn a number of VMware subtasks on a busy network, or is it an isolated and quick recovery situation? Would it force an isolation, and therefore a reprovisioning of VMs on other hosts within a given period?


    • Hi Pete,

      In VMware terms, a “host” is a physical x86 server running the ESXi hypervisor, and yes, a “host” is what the VMs run on top of (one, dozens, or hundreds of VMs).

      In this case, before remediating (patching) an ESXi host, the host should be in “Maintenance Mode”, which means all running virtual machines have been evacuated (via vMotion) from that host. Once the VMs are evacuated and the host is in Maintenance Mode, remediation can begin. Because all VMs have been evacuated, if remediation fails, as in this case, there is no impact to virtual machines, the network, or storage.

      The evacuation of the VMs prior to the host entering Maintenance Mode could potentially (or even likely) put significant load on the vMotion network (recommended to be a dedicated, non-routable VLAN).

      Host “isolation” is when an ESXi host cannot reach (via ICMP) its isolation address, which by default is the VMkernel’s default gateway (note: this can be customized depending on requirements). Isolation is essentially unrelated to VMware Update Manager and an ESXi host’s remediation success or failure.

      By “quick recovery” I assume you’re referring to HA, where a VM crashes on one host for whatever reason (e.g. hardware failure) and is automatically restarted on a surviving host. That would also not be the case here, as the VMs were evacuated via vMotion (non-disruptively) prior to the host entering Maintenance Mode for patching.

      Hope that all makes sense.

      P.S. You have my number, don’t hesitate to call.

    • Whether or not to use Auto Deploy would really depend on the customer’s requirements and environment. For BAU-style patching, VUM works fine; this post was about an unusual bug with VUM which I discovered and decided to share, and I’m sure it will be addressed in the near future.