
vSphere removing inaccessible NFS datastores

In another quick tip, vSphere removing inaccessible NFS datastores, I will explain how to troubleshoot and remove those inaccessible datastores that are sometimes left behind and cannot be removed normally.

There are some blog posts from other bloggers on how to do this, but the case I had last week was unusual, and I would like to share it with you.

I needed to remove two NFS datastores (where ISOs and logs were stored) and add two new Pure Storage datastores to replace the old NFS ones.

I removed these datastores from 20 ESXi hosts, but I had issues on four of them (two pairs in different clusters).

I tried to unmount them normally from vCenter, and it was not possible. I then tried to unmount them from the command line, and that was not possible either.
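The command-line attempt can be sketched as follows (the volume label is an example, not the actual datastore name):

```shell
# Try to unmount the inaccessible NFS datastore from the ESXi shell.
# "NFS_ISO" is a placeholder label for one of the stuck datastores.
esxcli storage filesystem unmount -l NFS_ISO
```

On these four hosts, this command failed just like the vCenter unmount did.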

An example:

In the cluster above, there were two ESXi hosts where I could not remove it.

Checking the command line:

No results; nothing is shown here.

The above command shows the two NFS shares, but as unmounted and unavailable.
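The checks shown above can be reproduced with the standard ESXi NFS listing commands (a sketch; both commands take no arguments for a plain listing):

```shell
# List NFS datastores known to this host, with their mount state
esxcli storage nfs list

# Legacy equivalent listing of configured NAS/NFS shares
esxcfg-nas -l
```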

So I checked the filesystem storage using the command esxcli storage filesystem list. It showed the two NFS datastores without any extra information.

For this case, I didn't think it would be complicated to remove them.
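The removal itself can be sketched like this (volume names are examples for the two stuck datastores):

```shell
# Remove the stale NFS mount records from the host
esxcli storage nfs remove -v NFS_ISO
esxcli storage nfs remove -v NFS_LOGS

# Recheck the mounted filesystems afterwards
esxcli storage filesystem list
```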

Rechecking the filesystem storage, I no longer saw the two NFS datastores.

But they still showed in vCenter on those ESXi hosts, so the NFS datastores were only fully removed after a reboot.

The second example was more difficult. It was not both datastores this time, but only the one with the ISOs.

I first suspected VMs that still had an ISO mounted from that datastore (I had run into this in the past), but running a script to check all VMs for mounted ISOs found none.
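Such a check can be done with a short PowerCLI pipeline (a sketch, assuming an active Connect-VIServer session):

```powershell
# List every VM that still has an ISO attached to a virtual CD-ROM drive
Get-VM | Get-CDDrive | Where-Object { $_.IsoPath } |
    Select-Object @{Name = 'VM'; Expression = { $_.Parent.Name }}, IsoPath
```

If this returns nothing, no powered-on or powered-off VM is holding an ISO from the datastore; note that it does not cover templates, which is exactly the gap in this story.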

So, I started to troubleshoot. The commands esxcli storage filesystem list and esxcfg-nas -l showed nothing.

So I needed to go into the vCenter DB and check the records to see where the NFS datastore was stuck and fix the issue.

First, connect to the vCenter Postgres DB.

Connect to your vCenter and run:
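On a vCenter Server Appliance, the embedded Postgres database can be reached like this (a sketch; the psql path is the usual VCSA location and may differ on your build):

```shell
# Open a psql session against the vCenter database (VCDB)
/opt/vmware/vpostgres/current/bin/psql -d VCDB -U postgres
```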

Next, try to find the ID for this datastore.
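A query along these lines does the lookup (table and column names from the VCDB schema; the datastore name is an example):

```sql
-- Find the internal ID of the stuck datastore
SELECT id, name
FROM vpx_entity
WHERE name = 'NFS_ISO';
```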

Now that I have the datastore ID, check where it is still used.
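A sketch of that check, using the datastore-assignment table (4242 stands in for the ID returned by the previous query):

```sql
-- List every entity that still references this datastore
SELECT *
FROM vpx_ds_assignment
WHERE ds_id = 4242;
```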

So I got five records where this datastore was still in use. The records with a full path could be identified without looking up the entity_id; for the rest, I only knew that the issue was still present on two ESXi hosts.

So run the query to check all entity_id.
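A sketch of that lookup (the IDs below are placeholders for the five entity_id values returned by the previous query):

```sql
-- Resolve the entity_id values to entity names
SELECT id, name
FROM vpx_entity
WHERE id IN (1001, 1002, 2001, 2002, 2003);
```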

As we can see above, two are ESXi hosts, and the rest are VMs (in this case, templates).

Why templates? Well, when each template was created (by converting a VM to a template), the ISO was not unmounted first, so the template still has a record pointing to that specific datastore, and that is why it is not possible to remove it.

And this is not possible to check in vCenter, since after a VM is converted to a template you can't edit its settings, and the template doesn't show this information.

To double-check this, I looked at the .vmx file of one of the templates, RHORacle.vmx:

sata0:0.deviceType = "cdrom-image"
sata0:0.fileName = "/vmfs/volumes/a3a4adfe-4149d14c-0000/linux/centos/CentOS-7-x86_64-DVD-1804.iso"

As we can see above, a Linux ISO is mounted on the template.

So how can we remove this from the templates and free the ISOs and consequently the NFS datastore?

First, we need to convert the template back to a VM, then unmount the ISO from the virtual CD-ROM, and finally convert the VM back to a template. This will do the trick.

In my case, I had only three templates to fix, which is a few clicks for each one, but I created a small PowerCLI script to do it automatically and saved it for future use.

So what this script does is: check all templates, convert each one to a VM, check if any ISO is mounted, and if yes, unmount it and then convert back to a template.

If the template doesn’t have any ISO mounted, it converts back to a template.
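A minimal sketch of that script logic (my reconstruction of the steps described, assuming an active Connect-VIServer session; cmdlet parameters may need adjusting for your PowerCLI version):

```powershell
# Walk all templates, strip any mounted ISOs, and convert back to templates
foreach ($template in Get-Template) {
    # Convert the template back to a VM so its hardware can be edited
    $vm = Set-Template -Template $template -ToVM

    # Find CD-ROM drives that still have an ISO attached
    $drives = Get-CDDrive -VM $vm | Where-Object { $_.IsoPath }
    if ($drives) {
        # Unmount the ISO(s) from the virtual CD-ROM drive(s)
        $drives | Set-CDDrive -NoMedia -Confirm:$false
    }

    # Convert the VM back to a template in both cases
    Set-VM -VM $vm -ToTemplate -Confirm:$false
}
```

Converting to a VM first is required because template hardware cannot be edited directly, which is exactly why the stale ISO references were invisible in vCenter.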

After running the script, I reran the query in the DB and got zero results.

I rebooted both ESXi hosts to be on the safe side, and all was good. No more inaccessible NFS datastores.

I hope this small blog post helps you troubleshoot and fix any inaccessible datastores.

Share this article if you think it is worth sharing. If you have any questions or comments, comment here, or contact me on Twitter.

©2022 ProVirtualzone. All Rights Reserved
By | 2022-01-24T15:01:45+01:00 January 17th, 2022|VMware Posts, vSphere|0 Comments

About the Author:

I have over 20 years' experience in the IT industry, working with virtualization for more than 10 years (mainly VMware). I am an MCP, VCP6.5-DCV, VMware vSAN Specialist, Veeam Vanguard 2018/2019, vExpert vSAN 2018/2019, and a vExpert for the last 4 years. My specialties are virtualization, storage, and virtual backups. I work for Elits, a Swedish consulting company, and am allocated to a Swedish multinational networking and telecommunications company as a Tech Lead, acting as a Senior ICT Infrastructure Engineer. I am a blogger and the owner of the blog ProVirtualzone.com.
