Another quick tip in this vSphere series: removing inaccessible NFS datastores. I will explain how to troubleshoot and remove those inaccessible datastores that are sometimes left behind and typically cannot be removed.
There are some blog posts from other bloggers on how to do this, but the case I had last week was unusual, and I would like to share it with you.
I needed to remove 2 NFS datastores (where ISOs and logs were stored) and add two new PureStorage datastores to replace the old NFS ones.
I removed these datastores from 20 ESXi hosts but had issues on 4 of them (2 each in two different clusters).
Trying to unmount them normally from vCenter was not possible, and unmounting from the command line also failed.
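For context, the normal command-line removal I attempted looks like this (a sketch using one of this environment's volume names; on a healthy host this unmounts the share cleanly):

```
# Standard NFS datastore removal from the ESXi shell; this is the step
# that failed on the 4 problem hosts.
esxcli storage nfs remove -v xx-yy-iso-image
```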
An example:
In the cluster above, there were 2 ESXi hosts from which I could not remove the datastores.
Checking the command line:
```
[root@ESXi-host-04:~] esxcli storage nfs list
```
No results; nothing is shown here.
```
[root@ESXi-host-04:~] esxcfg-nas -l
xx-yy-iso-image is /xx-yy-iso-image from 192.7.28.23 unmounted unavailable
xx-yy-esxi-log is /xx-yy-esxi-log from 192.7.28.23 unmounted unavailable
```
The above command shows the 2 NFS shares, but both are unmounted and unavailable.
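As a side note, when many hosts are affected, the stale entries can be filtered straight out of the `esxcfg-nas -l` output. A minimal sketch, using the sample output above as stand-in data:

```shell
# Hedged sketch: extract the names of NAS volumes reported as
# "unmounted unavailable". The variable below stands in for the real
# `esxcfg-nas -l` output shown above.
nas_list='xx-yy-iso-image is /xx-yy-iso-image from 192.7.28.23 unmounted unavailable
xx-yy-esxi-log is /xx-yy-esxi-log from 192.7.28.23 unmounted unavailable'

# The volume name is the first field of each matching line.
printf '%s\n' "$nas_list" | awk '/unmounted unavailable/ {print $1}'
```

On a real host you would pipe `esxcfg-nas -l` into the same awk filter and feed each resulting name to `esxcfg-nas -d`.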
So I checked the filesystem storage using the command esxcli storage filesystem list. It showed the 2 NFS datastores, but without any extra information.
I didn't expect removing them to be complicated in this case.
```
[root@ESXi-host-04:~] esxcfg-nas -d xx-yy-iso-image
NAS volume xx-yy-iso-image deleted.
[root@ESXi-host-04:~] esxcfg-nas -d xx-yy-esxi-log
NAS volume xx-yy-esxi-log deleted.
```
Rechecking the filesystem storage, I no longer saw the 2 NFS datastores.
But they still showed up in vCenter on those ESXi hosts; the NFS datastores were only fully removed after a reboot.
The second example was more difficult. This time it was not both datastores, but only the one with the ISOs.
I first suspected VMs that still had an ISO mounted from that datastore (I had seen this before), but running a script to check all VMs for mounted ISOs found none.
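The check itself is a short PowerCLI pipeline. A sketch of what my script did (assuming an existing Connect-VIServer session):

```powershell
# List every VM that still has an ISO mounted in a virtual CD-ROM drive,
# together with the ISO path, so stale datastore references stand out.
Get-VM | Get-CDDrive |
    Where-Object { $_.IsoPath -ne $null } |
    Select-Object @{N="VM";E={$_.Parent}}, IsoPath
```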
So, I started to troubleshoot. The commands esxcli storage filesystem list and esxcfg-nas -l showed nothing.
So I needed to go into the vCenter DB and check the records to see where the NFS datastore was stuck and fix the issue.
First, connect to the vCenter Postgres DB. SSH into your vCenter and run:
```
root@vCenter [ ~ ]# /opt/vmware/vpostgres/current/bin/psql -d VCDB -U postgres
psql.bin (13.3 (VMware Postgres 13.3.0-18202630 release))
Type "help" for help.
```
Next, find the ID of this datastore.
```sql
SELECT * FROM vpx_entity WHERE name = 'xx-yy-iso-image';
```
Now that I have the datastore ID, check where it is still used.
```sql
SELECT * FROM vpx_ds_assignment WHERE ds_id=523575;
```
So I got 5 records where this datastore was still in use. The records with a full path can be identified directly, without looking up the entity_id of the ESXi hosts (the issue was still present on 2 ESXi hosts).
So I ran a query to check all the entity_id values.
```sql
SELECT * FROM vpx_entity WHERE id in (549662, 620587, 449581, 517166, 629380);
```
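The two lookups can also be combined into one query. A sketch, assuming vpx_ds_assignment exposes an entity_id column alongside ds_id, as the results above suggest:

```sql
-- Resolve the name of every entity still holding a reference
-- to the stuck datastore in a single query.
SELECT a.ds_id, e.id AS entity_id, e.name
FROM vpx_ds_assignment a
JOIN vpx_entity e ON e.id = a.entity_id
WHERE a.ds_id = 523575;
```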
As we can see above, 2 are ESXi hosts, and the rest are VMs (in this case, templates).
Why templates? Well, when each template was created (by converting a VM to a template), the ISO was not removed first, so the template kept a record pointing to that specific datastore, which is why removing the datastore is impossible.
And this is not possible to check in vCenter: after you convert a VM to a template, you can no longer edit the VM settings, and the template view doesn't show this information.
To double-check this, I looked at one of the templates' .vmx files (RHORacle.vmx):
```
sata0:0.deviceType = "cdrom-image"
sata0:0.fileName = "/vmfs/volumes/a3a4adfe-4149d14c-0000/linux/centos/CentOS-7-x86_64-DVD-1804.iso"
```
As shown above, a Linux ISO is mounted on the template.
So how can we remove this from the templates and free the ISOs and consequently the NFS datastore?
First, we need to convert the template back to a VM, then unmount the ISO from the virtual CD-ROM, and convert the VM back to a template. This does the trick.
In my case, I had only 3 templates to fix, which is a few clicks each, but I created a small PowerCLI script to do it automatically and saved it for future use.
So what this script does is: check all templates, convert each one to a VM, and check whether any ISO is mounted. If yes, it unmounts the ISO and then converts the VM back to a template. If the template has no ISO mounted, it is simply converted back to a template.
```powershell
$temps = Get-Template
Write-Host ".... Finding Templates .... " -ForegroundColor Blue
If ([string]::IsNullOrEmpty($temps)) {
    ## If there are no templates, then exit the script
    Write-Host "... There is no Templates found .... " -ForegroundColor DarkGreen
}
Else {
    Write-Host "Found Templates" -ForegroundColor DarkRed
    Foreach ($temp in $temps) {
        ## Since we cannot check ISO in a Template, we need to convert to VM first
        Write-Host " Template Name --> " $temp -ForegroundColor DarkBlue
        Write-Host " ... Converting to VM" -ForegroundColor Red
        Set-Template -Template (Get-Template $temp) -ToVM -Confirm:$False | Out-Null
        If (Get-VM $temp | Get-CDDrive | select @{N="VM";E="Parent"},IsoPath | where {$_.IsoPath -ne $null}) {
            ## If there is a Template with an ISO, remove the ISO from VM settings, then convert VM back to Template
            Write-Host " ... This template has an ISO image, removing... " -ForegroundColor Red
            Get-VM $temp | Get-CDDrive | where {$_.IsoPath -ne $null} | Set-CDDrive -NoMedia -Confirm:$False | Out-Null
            Write-Host " ... ISO removed, converting VMs back to Template " -ForegroundColor DarkGreen
            Get-VM -Name $temp | Set-VM -ToTemplate -Confirm:$false | Out-Null
        }
        Else {
            ## If there is a Template, but no ISO set, then convert VM to Template and check the next Template
            Write-Host " ... No ISOs found, converting VMs back to Template" -ForegroundColor DarkGreen
            Get-VM -Name $temp | Set-VM -ToTemplate -Confirm:$false | Out-Null
        }
    }
}
```
After running the script, I reran the DB query and got zero results.
I rebooted both ESXi hosts to be on the safe side, and all was good. No more inaccessible NFS datastores.
I hope this small blog post helps you troubleshoot and fix any inaccessible datastores.
Share this article if you think it is worth sharing. If you have any questions or comments, comment here, or contact me on Twitter.