Migrating ESXi from an SD Card to a Local Disk: My Experience
Recently, I was experiencing issues with one of my ESXi 8.x hosts, which I had installed on a 25GB SD card. After troubleshooting, I realized the SD card was failing. I had three options: repair it (which I had already done a couple of times, including fresh installations), move the installation to another server, or install ESXi on a local disk. Since I was also running low on storage space on my Synology NAS (only 1TB free), I acquired two 4TB SAS disks for my DL360 G9 and installed the new ESXi on them. This solved the SD card issue (VMware itself recommends avoiding SD cards in favor of local storage installations) and addressed my storage space concerns at the same time.
However, I wanted to avoid configuring the new ESXi installation from scratch. To save time and simplify the process, I decided to use ESXi's built-in backup and restore functionality. In this post, I'll walk you through exactly how I migrated the ESXi installation from an SD card to a local disk.
Step 1: Backing Up the ESXi Configuration
The first step was to back up my existing ESXi configuration. I accessed the ESXi shell and executed:
```shell
vim-cmd hostsvc/firmware/sync_config
vim-cmd hostsvc/firmware/backup_config
```
The ESXi server then provided a download link for the backup bundle:
```
Bundle can be downloaded at: http://*/downloads/525f5072-077a-1468-5886-a5c5c862664e/configBundle-DL360-ESXi02.vmwarehome.lab.tgz
```
I downloaded this bundle to my computer for later use (do not forget to replace the asterisk at the beginning of the path with your host's IP or FQDN).
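As an illustration, the download can be scripted. The host address below is an assumption based on my management IP; substitute your own IP or FQDN:

```shell
# Hypothetical host address: substitute your own ESXi management IP or FQDN.
ESXI_HOST="192.168.1.10"
BUNDLE="downloads/525f5072-077a-1468-5886-a5c5c862664e/configBundle-DL360-ESXi02.vmwarehome.lab.tgz"
URL="http://${ESXI_HOST}/${BUNDLE}"

echo "${URL}"
# Fetch the bundle once the host variable is correct:
# wget "${URL}"
```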
Step 2: Installing ESXi on Local Disks
With the backup secured, I proceeded with a fresh installation of ESXi onto the new local disks. The installation was straightforward, using the standard ESXi installation ISO booted via virtual CD. After the installation completed and ESXi booted up, I was ready to restore my previous configuration.
Step 3: Restoring the ESXi Configuration
The restore process expects the backup file to be named configBundle.tgz, so I renamed the downloaded configBundle-DL360-ESXi02.vmwarehome.lab.tgz accordingly.
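A quick sketch of the rename, using the file names from the backup step, plus an optional sanity check that the archive is readable before uploading it:

```shell
# Rename the downloaded bundle to the name the restore process expects.
mv configBundle-DL360-ESXi02.vmwarehome.lab.tgz configBundle.tgz

# Optional sanity check: confirm the gzip'd tar is intact before uploading.
tar -tzf configBundle.tgz > /dev/null && echo "bundle OK"
```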
I uploaded the renamed backup configuration bundle (configBundle.tgz) to a datastore (datastore1) using the datastore browser.
Once uploaded, I performed the following steps through the ESXi shell.
First, put the server in maintenance mode and reboot:
```shell
vim-cmd hostsvc/maintenance_mode_enter
reboot -f
```
After the server has rebooted, you can start restoring the configuration. Move configBundle.tgz to /tmp/ and run the restore command (the trailing 1 is the force option, which lets the restore proceed even if the backup was taken on a host with a different UUID):
```shell
cd /vmfs/volumes/datastore1/
mv configBundle.tgz /tmp/
cd /tmp/
vim-cmd hostsvc/firmware/restore_config 1
```
ESXi automatically rebooted once the configuration was restored and my previous ESXi IP was set.
Step 4: Verification and Testing
To confirm everything was working correctly, I verified my VMkernel interfaces and connectivity:
```shell
esxcfg-vmknic -l
```
The output:
```
Interface  Port Group/DVPort/Opaque Network  IP Family  IP Address                              Netmask        Broadcast      MAC Address        MTU   TSO MSS  Enabled  Type                 NetStack
vmk0       Management Network                IPv4       192.168.1.10                            255.255.255.0  192.168.1.255  30:e1:71:61:13:80  1500  65535    true     STATIC               defaultTcpipStack
vmk0       Management Network                IPv6       fe80::32e1:71ff:fe61:1380               64             30:e1:71:61:13:80  1500  65535    true     STATIC, PREFERRED    defaultTcpipStack
vmk0       Management Network                IPv6       2a00:6020:a280:700:32e1:71ff:fe61:1380  64             30:e1:71:61:13:80  1500  65535    true     AUTOCONF, PREFERRED  defaultTcpipStack
vmk0       Management Network                IPv6       fd42:9069:45b8:0:32e1:71ff:fe61:1380    64             30:e1:71:61:13:80  1500  65535    true     AUTOCONF, PREFERRED  defaultTcpipStack
vmk1       1                                 IPv4       192.168.10.32                           255.255.255.0  192.168.10.255 00:50:56:62:ed:08  9000  65535    true     STATIC               defaultTcpipStack
vmk1       1                                 IPv6       fe80::250:56ff:fe62:ed08                64             00:50:56:62:ed:08  9000  65535    true     STATIC, PREFERRED    defaultTcpipStack
vmk2       9                                 IPv4       192.168.10.33                           255.255.255.0  192.168.10.255 00:50:56:67:aa:17  9000  65535    true     STATIC               defaultTcpipStack
vmk2       9                                 IPv6       fe80::250:56ff:fe67:aa17                64             00:50:56:67:aa:17  9000  65535    true     STATIC, PREFERRED    defaultTcpipStack
vmk3       267                               IPv4       192.168.20.10                           255.255.255.0  192.168.20.255 00:50:56:6c:f0:2f  9000  65535    true     STATIC               vmotion
vmk3       267                               IPv6       fe80::250:56ff:fe6c:f02f                64             00:50:56:6c:f0:2f  9000  65535    true     STATIC, PREFERRED    vmotion
```
I also tested network connectivity using jumbo frames to ensure performance:
```shell
vmkping -4 -c 20 -d -s 8972 -v 192.168.10.198 -I vmk1
vmkping -4 -c 20 -d -s 8972 -v 192.168.10.198 -I vmk2
vmkping -4 -c 20 -d -s 8972 -v 192.168.20.9 -I vmk3
Interface 'vmk3' not found in the current netstack, use '-S vmotion' to specify netstack.
vmkping -4 -c 20 -d -s 8972 -v 192.168.20.9 -S vmotion
```
The ping responses confirmed successful jumbo frame communication with no packet loss and good latency.
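The -s 8972 payload size isn't arbitrary: it is the 9000-byte jumbo MTU minus the 20-byte IPv4 header and the 8-byte ICMP header, and -d (don't fragment) ensures the ping fails if any hop on the path has a smaller MTU. The arithmetic:

```shell
# Maximum ICMP payload that fits in one 9000-byte jumbo frame without
# fragmentation: MTU minus IPv4 header (20 bytes) minus ICMP header (8 bytes).
MTU=9000
PAYLOAD=$((MTU - 20 - 8))
echo "${PAYLOAD}"   # prints 8972
```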
After checking that everything was okay, I re-added the ESXi host to vCenter. Since I had disconnected the host before the migration, I could simply reconnect it.
After reconnecting without any issues, I double-checked that everything was in order, including virtual switches, VMkernel adapters, and iSCSI network port binding.
Everything checked out, and the vSphere installation now runs from my local disks.
Conclusion
Migrating your ESXi installation from an SD card to local disks is a quick, simple, and effective process. Instead of spending hours reinstalling ESXi and manually reconfiguring your host settings, you can leverage VMware's built-in backup and restore functionality to clone your current setup in just a few steps. This method is especially helpful for anyone moving away from SD card-based installations (something VMware itself advises against) towards more reliable and supported local storage. It's not only the fastest route, but also the safest and most hassle-free, allowing you to preserve your configuration exactly as it was.
One thing to remember: starting with vSphere 7.0 Update 2, if your host has a TPM (Trusted Platform Module) enabled, the configuration may be encrypted with TPM keys. In that case, the force option during restore won't work if you're restoring to a different host: the same TPM that was used during backup must be present during restore, otherwise the process will fail.
Aside from that scenario, this approach works smoothly and is highly recommended for anyone planning this type of migration. If you’re considering the move, go for it—and feel free to reach out if you hit any bumps along the way.
Share this article if you think it is worth sharing. If you have any questions or comments, leave them here or contact me on Twitter (yes, for me it's still Twitter, not X).