Another vCenter, another ESXi host with problems applying the latest updates.
In this case, it is an HP DL360 G9 with ESXi 6.0 build 3568940.
Scanning with VMware Update Manager shows 17 updates to install, 7 of which are staged (the rest are older versions). When remediating the host, we get this:
Remediate entity esxi721.localdomain. The host returns esxupdate error code: 15. The package manager transaction is not successful.
Check the Update Manager log files and esxupdate log files for more details.
Another issue, then. Time to troubleshoot and find out where the problem is.
Looking at esxupdate.log, there is some information about the locker folder:
2016-04-24T15:11:44Z esxupdate: downloader: DEBUG: Downloading from http://esxi721.localdomain:9084/vum/repository/hostupdate/vmw/vib20/tools-light/VMware_locker_tools-light_6.0.0-2.34.3620759.vib…
2016-04-24T15:12:48Z esxupdate: LockerInstaller: WARNING: There was an error in cleaning up product locker: No such file or directory: '/locker/packages/var/db/locker'
2016-04-24T15:12:48Z esxupdate: esxupdate: ERROR: An esxupdate error exception was caught:
So we need to investigate the ESXi host. The VMware KB for this 'error 15' says to double-check the locker folder/link and the store it points to.
First, check that the locker folder and link both exist and that the link is valid.
[root@esxi721:~] ls /locker/
packages     var          vsantraces
[root@esxi721:~] ls -l /
total 565
lrwxrwxrwx    1 root     root            49 Apr 23 21:23 altbootbank -> /vmfs/volumes/764b33e1-310325dd-1ebb-5399a3f70a03
drwxr-xr-x    1 root     root           512 Apr 23 21:22 bin
lrwxrwxrwx    1 root     root            49 Apr 23 21:23 bootbank -> /vmfs/volumes/2b0d84e8-332418f4-8e2c-e841fe1625cb
-r--r--r--    1 root     root        331579 Feb 19 02:24 bootpart.gz
drwxr-xr-x   15 root     root           512 Apr 24 20:08 dev
drwxr-xr-x    1 root     root           512 Apr 24 19:23 etc
drwxr-xr-x    1 root     root           512 Apr 23 21:22 lib
drwxr-xr-x    1 root     root           512 Apr 23 21:22 lib64
-r-x------    1 root     root         28377 Apr 23 21:19 local.tgz
lrwxrwxrwx    1 root     root             6 Apr 23 21:23 locker -> /store
drwxr-xr-x    1 root     root           512 Apr 23 21:22 mbr
drwxr-xr-x    1 root     root           512 Apr 23 21:22 opt
drwxr-xr-x    1 root     root        131072 Apr 24 20:08 proc
lrwxrwxrwx    1 root     root            22 Apr 23 21:23 productLocker -> /locker/packages/6.0.0
lrwxrwxrwx    1 root     root             4 Feb 19 01:54 sbin -> /bin
lrwxrwxrwx    1 root     root            12 Apr 23 21:23 scratch -> /tmp/scratch
lrwxrwxrwx    1 root     root            49 Apr 23 21:23 store -> /vmfs/volumes/56bb2e57-7933dd24-7e9c-00110a6930e4
drwxr-xr-x    1 root     root           512 Apr 23 21:22 tardisks
drwxr-xr-x    1 root     root           512 Apr 23 21:22 tardisks.noauto
drwxrwxrwt    1 root     root           512 Apr 24 20:08 tmp
drwxr-xr-x    1 root     root           512 Apr 23 21:22 usr
drwxr-xr-x    1 root     root           512 Apr 23 21:22 var
drwxr-xr-x    1 root     root           512 Apr 23 21:22 vmfs
drwxr-xr-x    1 root     root           512 Apr 23 21:22 vmimages
lrwxrwxrwx    1 root     root            17 Feb 19 01:54 vmupgrade -> /locker/vmupgrade
[root@esxi721:~]
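If you only want to verify the link chain without reading the whole listing, you can list just the links themselves (plain busybox ls; the paths below are the ones on this host):

[root@esxi721:~] ls -ld /locker /productLocker /store

This should show locker -> /store, productLocker -> /locker/packages/6.0.0, and store pointing at the locker volume, matching the listing above.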
Check if the store location is correct.
[root@esxi721:~] ls -ltr /
total 565
lrwxrwxrwx    1 root     root            17 Feb 19 01:54 vmupgrade -> /locker/vmupgrade
lrwxrwxrwx    1 root     root             4 Feb 19 01:54 sbin -> /bin
-r--r--r--    1 root     root        331579 Feb 19 02:24 bootpart.gz
-r-x------    1 root     root         28377 Apr 23 21:19 local.tgz
drwxr-xr-x    1 root     root           512 Apr 23 21:22 vmimages
drwxr-xr-x    1 root     root           512 Apr 23 21:22 vmfs
drwxr-xr-x    1 root     root           512 Apr 23 21:22 var
drwxr-xr-x    1 root     root           512 Apr 23 21:22 usr
drwxr-xr-x    1 root     root           512 Apr 23 21:22 tardisks.noauto
drwxr-xr-x    1 root     root           512 Apr 23 21:22 tardisks
drwxr-xr-x    1 root     root           512 Apr 23 21:22 opt
drwxr-xr-x    1 root     root           512 Apr 23 21:22 mbr
drwxr-xr-x    1 root     root           512 Apr 23 21:22 lib64
drwxr-xr-x    1 root     root           512 Apr 23 21:22 lib
drwxr-xr-x    1 root     root           512 Apr 23 21:22 bin
lrwxrwxrwx    1 root     root            49 Apr 23 21:23 store -> /vmfs/volumes/56bb2e57-7933dd24-7e9c-00110a6930e4
lrwxrwxrwx    1 root     root            12 Apr 23 21:23 scratch -> /tmp/scratch
lrwxrwxrwx    1 root     root            22 Apr 23 21:23 productLocker -> /locker/packages/6.0.0
lrwxrwxrwx    1 root     root             6 Apr 23 21:23 locker -> /store
lrwxrwxrwx    1 root     root            49 Apr 23 21:23 bootbank -> /vmfs/volumes/2b0d84e8-332418f4-8e2c-e841fe1625cb
lrwxrwxrwx    1 root     root            49 Apr 23 21:23 altbootbank -> /vmfs/volumes/764b33e1-310325dd-1ebb-5399a3f70a03
drwxr-xr-x    1 root     root           512 Apr 24 19:23 etc
drwxrwxrwt    1 root     root           512 Apr 24 20:42 tmp
drwxr-xr-x    1 root     root        131072 Apr 24 20:42 proc
drwxr-xr-x   15 root     root           512 Apr 24 20:42 dev
[root@esxi721:~]
All is OK, so next check the /locker/packages folder to see if the version folder (in this case, 6.0.0) exists.
[root@esxi721:~] cd /locker/packages/
[root@esxi721:/vmfs/volumes/56bb2e57-7933dd24-7e9c-00110a6930e4/packages] ls
var
The 6.0.0 folder doesn't exist, and neither do the floppies and vmtools folders that hold all the files ESXi and VUM need for the updates. The VMware KB recommends deleting the old folders and links and recreating them. In this case, there is nothing to delete; we just need to recreate the folder and copy the necessary files (using another ESXi host with the same build).
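For reference, the recreate step the KB describes comes down to standard busybox commands. A minimal sketch, to be adapted to your own build number (on this host the symlinks were intact, so only the folder contents were missing):

[root@esxi721:~] mkdir -p /locker/packages/6.0.0
# Only if the locker/productLocker links themselves are missing (not the case here):
# ln -s /store /locker
# ln -s /locker/packages/6.0.0 /productLocker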
We will connect to another host and use SCP to copy all the files to this ESXi host.
First, if the SSH Client is not enabled in the host firewall, you need to allow it before the SCP command will work.
To enable the SSH Client on the source ESXi host:
[root@esxi720:~] esxcli network firewall ruleset set --enabled true --ruleset-id=sshClient
Note: Don't forget to disable the SSH Client again after finishing these tasks.
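Disabling it again is the same esxcli call with the flag flipped:

[root@esxi720:~] esxcli network firewall ruleset set --enabled false --ruleset-id=sshClient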
After you run the SCP command, you will be prompted for the root password of the remote host; once you have authenticated successfully, the files will be copied.
[root@esxi720:/vmfs/volumes/566a92b0-97db4da2-c8be-00110a69322c] scp -r /locker/packages/ root@esxi721:/locker
The authenticity of host 'esxi721 (esxi721)' can't be established.
RSA key fingerprint is SHA256:bkmqdMHuJgAWEA5s96pWOTDJO3B7FxUzgJ0t0BnqFeM.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'esxi721' (RSA) to the list of known hosts.
Password:
pvscsi-Windows2003.flp                      100%  118KB 117.5KB/s   00:00
pvscsi-Windows2008.flp                      100%  122KB 122.0KB/s   00:00
pvscsi-WindowsXP.flp                        100%  114KB 114.0KB/s   00:00
vmscsi.flp                                  100%   42KB  41.5KB/s   00:00
solaris.iso                                 100%   13MB  12.8MB/s   00:01
solaris_avr_manifest.txt                    100%   49     0.1KB/s   00:00
darwin.iso.sig                              100%  256     0.3KB/s   00:00
winPre2k.iso.sig                            100%  256     0.3KB/s   00:00
windows.iso                                 100%   87MB  14.4MB/s   00:06
linux_avr_manifest.txt                      100% 1738     1.7KB/s   00:00
freebsd.iso                                 100%   15MB  15.1MB/s   00:00
netware.iso                                 100%  528KB 528.0KB/s   00:00
winPre2k.iso                                100%   13MB  13.4MB/s   00:01
windows_avr_manifest.txt                    100% 1069     1.0KB/s   00:00
solaris.iso.sig                             100%  256     0.3KB/s   00:00
darwin.iso                                  100% 3022KB   3.0MB/s   00:01
winPre2k_avr_manifest.txt                   100%   49     0.1KB/s   00:00
windows.iso.sig                             100%  256     0.3KB/s   00:00
linux.iso                                   100%   71MB  11.8MB/s   00:06
scp: /locker/packages/packages/6.0.0/vmtools/linux.iso: truncate: No space left on device
scp: /locker/packages/packages/6.0.0/vmtools/linux.iso: No space left on device
scp: /locker/packages/packages/6.0.0/vmtools/netware.iso.sig: No space left on device
scp: /locker/packages/packages/6.0.0/vmtools/freebsd.iso.sig: No space left on device
scp: /locker/packages/packages/6.0.0/vmtools/tools-key.pub: No space left on device
scp: /locker/packages/packages/6.0.0/vmtools/linux.iso.sig: No space left on device
scp: /locker/packages/packages/var: No space left on device
scp: /locker/packages/packages/db: No space left on device
Only when trying to copy the files do we find the real issue: no space left on the device. Nothing in the logs pointed to this, but a space problem is what is preventing the updates from being applied.
So we need to double-check the root space.
[root@esxi721:~] stat -f /
  File: "/"
    ID: 100000000 Namelen: 127 Type: visorfs
Block size: 4096
Blocks: Total: 655532   Free: 455845   Available: 455845
Inodes: Total: 524288   Free: 519299
[root@esxi721:~] vdf -h
Tardisk                  Space      Used
sb.v00                    139M      139M
s.v00                     330M      330M
net_tg3.v00               300K      298K
elxnet.v00                508K      505K
ima_be2i.v00                2M        2M
....
-----
Ramdisk                   Size      Used Available Use% Mounted on
root                       32M      248K       31M   0% --
etc                        28M      240K       27M   0% --
opt                        32M      368K       31M   1% --
var                        48M      740K       47M   1% --
tmp                       256M        5M      250M   2% --
iofilters                  32M        0B       32M   0% --
hostdstats               1303M        2M     1300M   0% --
stagebootbank             250M      191M       58M  76% --
Here I don't see any issues with the ramdisk space, but there are some big Tardisk files.
Checking the filesystems, I see that the one used for the locker is 100% full.
[root@esxi721:~] df -h
Filesystem   Size     Used Available Use% Mounted on
NFS       1000.0G   692.5G    307.5G  69% /vmfs/volumes/vol01
NFS       1000.0G   459.2G    540.8G  46% /vmfs/volumes/vol02
NFS       1000.0G   577.9G    422.1G  58% /vmfs/volumes/vol03
NFS       1000.0G   822.5G    177.5G  82% /vmfs/volumes/vol04
NFS       1000.0G   570.4G    429.6G  57% /vmfs/volumes/vol05
NFS       1000.0G   398.5G    601.5G  40% /vmfs/volumes/vol06
NFS        666.5G   363.4G    303.1G  55% /vmfs/volumes/iso-vol
NFS       1000.0G   519.6G    480.4G  52% /vmfs/volumes/vol07
NFS       1000.0G   692.1G    307.9G  69% /vmfs/volumes/vol08
vfat       249.7M   185.4M     64.3M  74% /vmfs/volumes/2b0d84e8-332418f4-8e2c-e841fe1625cb
vfat       249.7M   185.1M     64.6M  74% /vmfs/volumes/764b33e1-310325dd-1ebb-5399a3f70a03
vfat       285.8M   285.4M    488.0K 100% /vmfs/volumes/56bb2e57-7933dd24-7e9c-00110a6930e4
So the next step is to look for big files, logs, and anything inside /tmp, such as dump files, that could contribute to this issue.
[root@esxi721:~] find / -path "/vmfs" -prune -o -type f -size +50000k -exec ls -l '{}' \;
-r--r--r--    1 root     root      146513728 Apr 23 21:22 /tardisks/sb.v00
-r--r--r--    1 root     root      347061695 Apr 23 21:22 /tardisks/s.v00
-rw-r--r--    1 root     root       97493923 Apr 24 15:10 /tmp/stagebootbank/s.v00
-rw-------    1 root     root    15931539456 Apr 24 20:48 /dev/disks/mpx.vmhba32:C0:T0:L0
-rw-------    1 root     root     2684354560 Apr 24 20:48 /dev/disks/mpx.vmhba32:C0:T0:L0:9
-rw-------    1 root     root      299876352 Apr 24 20:48 /dev/disks/mpx.vmhba32:C0:T0:L0:8
-rw-------    1 root     root      115326976 Apr 24 20:48 /dev/disks/mpx.vmhba32:C0:T0:L0:7
-rw-------    1 root     root      262127616 Apr 24 20:48 /dev/disks/mpx.vmhba32:C0:T0:L0:6
-rw-------    1 root     root      262127616 Apr 24 20:48 /dev/disks/mpx.vmhba32:C0:T0:L0:5
As we can see, there are some big temporary files in the list, so the next step is to delete the ones we no longer need.
Note: Double-check which files you want to delete. Don't delete any log files you may still need for troubleshooting or auditing.
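As an illustration only (verify every path on your own host before removing anything), the cleanup here amounts to removing the partially copied tree that the failed SCP left behind on the locker volume, plus anything else on that volume that is no longer needed:

# The nested packages/packages path is visible in the scp errors above
[root@esxi721:~] rm -rf /locker/packages/packages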
After deleting the files we will not need (including the files partially copied from the other ESXi host) and all the folders inside the locker/store volume, we can check the space again.
[root@esxi721:~] df -h
Filesystem   Size     Used Available Use% Mounted on
vfat       285.8M   160.0K    285.7M   0% /vmfs/volumes/56bb2e57-7933dd24-7e9c-00110a6930e4
Usage is now back to 0%, with plenty of free space. We copy the files again from the other ESXi host, and this time the transfer finishes at 100%.
Using VUM, we scan, stage, and remediate the ESXi host, and the problem is fixed. After the final reboot (triggered by the remediation), the ESXi host is fully updated.
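As a quick sanity check after the reboot (assuming shell access is still enabled), confirm the new build number and that the product locker is populated again:

[root@esxi721:~] vmware -v                    # should report the new build
[root@esxi721:~] ls /locker/packages/6.0.0    # should now contain the floppies and vmtools folders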
I hope this can help.
Share this article if you think it is worth sharing. If you have any questions or comments, comment here or contact me on Twitter.
Same issue here, HP Blade G9 environment with the HP custom VMware image, version ESXi 6.0 build 3568940. Only 1 host had the issue. After recreating the symbolic link, copying the /locker/packages/ folder from a healthy host did the trick.
Thanks for writing this up mate!
Hi Wilbert,
Thanks. Glad that I could help.
PS: Share if you think it can help others.
Thank You
Luciano Patrao
Hi Wilbert,
Only today did I notice that my replies were not sending email notifications to users who comment on my blog, so this is just an FYI pointing you to my earlier reply.
Thank You
Luciano Patrao
I have the same exact problem on a Dell server … BUT in my case /locker/packages is only 29% used when running the df -h command.
When I run find / -path "/vmfs" -prune -o -type f -size +50000k -exec ls -l '{}' \; I don't see anything that makes sense to delete. I do see /tardisks/sb.v00 AND /tardisks/s.v00 BUT I get a busy message when I attempt to delete them. Any ideas? Thank you in advance.
Hi Bob,
What information do you get in your esxupdate.log?
That is important.
But you can delete everything you have in /locker/packages and then copy it from a working ESXi host. Don't forget that it needs to be the same ESXi version.
I had this problem too (about 30% used); there was a dump file sitting in /var/core that needed to be deleted.
Hi Bob,
Only today did I notice that my replies were not sending email notifications to users who comment on my blog, so this is just an FYI pointing you to my earlier reply.
Thank You
Luciano Patrao
Hi, thank you for the help!
Hi Cristiano,
No problem, glad to help.
Hi Cristiano,
Only today did I notice that my replies were not sending email notifications to users who comment on my blog, so this is just an FYI pointing you to my earlier reply.
Thank You
Luciano Patrao
Hello, thank you for the help. The VMware KB isn't relevant to the space issue. Same as Joe, I had to delete a dump file in /var/core to do the trick.
Hi Ben,
Glad to help.
Hi Ben,
Only today did I notice that my replies were not sending email notifications to users who comment on my blog, so this is just an FYI pointing you to my earlier reply.
Thank You
Luciano Patrao