Continuing the reconfiguration of my Hyper-V/SCVMM environment, I had dead LUNs in my Hyper-V Cluster that needed cleanup, as well as a new LUN to add. All nodes run Windows Server 2025 Core, so I performed all tasks manually via PowerShell.
In this post, I walk through a real case where a Failover Cluster lost its storage because the old iSCSI LUNs were deleted. The cluster was left with dead disks and failed CSVs. To bring it back, I added a new Synology Block LUN over iSCSI and replaced the old storage.
The process is straightforward: clean up cluster resources, resolve iSCSI sessions, prepare the new disk, and re-integrate it into the cluster. For each step, I explain what I’m doing, why I’m doing it, and the exact one-line commands.
Part 1 — Replace dead storage and add a new LUN
Step 1 — Check cluster prerequisites
I always verify prerequisites first so I don’t chase storage issues caused by missing roles or services. I confirm that Hyper-V, Failover Clustering, and the iSCSI initiator service are enabled before touching the disks.
Commands
```powershell
Get-WindowsFeature -Name Hyper-V, Failover-Clustering
Start-Service MSiSCSI; Set-Service MSiSCSI -StartupType Automatic
```
Result
Features are present, and iSCSI is set to Automatic.
Step 2 — Clean up dead cluster storage
Before adding anything new, I remove any dead storage objects. Old Physical Disk resources or broken CSVs will block arbitration and confuse the cluster.
Commands
```powershell
Remove-ClusterSharedVolume -Name "Cluster Disk 1"
Remove-ClusterResource -Name "Cluster Disk 1" -Confirm:$false
```
Result
The cluster is clean and ready to accept new storage.
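Before removing anything, it can help to enumerate exactly which Physical Disk resources are dead. A minimal sketch (using only standard FailoverClusters cmdlets; the filter values are my assumption about what "dead" looks like in this scenario):

```powershell
# Sketch: list Physical Disk resources that are not Online,
# so you know which names to pass to Remove-ClusterResource.
Get-ClusterResource |
    Where-Object { $_.ResourceType -eq "Physical Disk" -and $_.State -ne "Online" } |
    Format-Table Name, State, OwnerGroup, OwnerNode
```

Anything listed Failed or Offline here is a candidate for removal in this step.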
Step 3 — Fix iSCSI sessions
Every node must connect only to the same Synology target IQN that the cluster will use. Stale targets from the old LUNs must go, or the disk won’t come online.
Commands
```powershell
Get-IscsiSession
Disconnect-IscsiTarget -NodeAddress "iqn.2000-01.com.synology:homelab-storage.default-target.c811e7e2992" -Confirm:$false
```
If a session refuses to disconnect, I log it off by its SessionIdentifier:
```powershell
Get-IscsiSession | Select SessionIdentifier,TargetNodeAddress
iscsicli LogoutTarget <SessionIdentifier>
```
If the node got messy, I just reboot:
```powershell
Restart-Computer -Force
```
Result
Only iqn.2000-01.com.synology:hyper-v remains on each node.
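To confirm this on every node at once instead of logging into each one, a remoting one-liner can help. This is a sketch that assumes PowerShell remoting is enabled between the nodes and uses the target IQN from my environment:

```powershell
# Sketch: check iSCSI sessions on all cluster nodes at once.
# Assumes PS remoting is enabled; the IQN is specific to my lab.
Invoke-Command -ComputerName (Get-ClusterNode).Name -ScriptBlock { Get-IscsiSession } |
    Select-Object PSComputerName, TargetNodeAddress |
    Where-Object { $_.TargetNodeAddress -ne "iqn.2000-01.com.synology:hyper-v" }
```

Any output from this pipeline is a node that still has a stale session to clean up.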
Step 4 — Prepare the new LUN (one node only)
I prepare the disk on a single node: clear read-only, initialize GPT, and create a single NTFS volume with a 64K allocation. Other nodes must not initialize or format the disk.
Commands
```powershell
Get-Disk | Where-Object BusType -eq 'iSCSI' | Set-Disk -IsReadOnly:$false
Get-Disk | Where-Object BusType -eq 'iSCSI' | Initialize-Disk -PartitionStyle GPT
Get-Disk | Where-Object BusType -eq 'iSCSI' | New-Partition -UseMaximumSize -AssignDriveLetter | Format-Volume -FileSystem NTFS -NewFileSystemLabel "ClusterDisk01" -AllocationUnitSize 65536 -Confirm:$false
```
Result
A healthy NTFS volume ClusterDisk01 (64K) ready for clustering.
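Before handing the disk to the cluster, it is worth double-checking that the format actually landed with the intended label and 64K allocation unit. A minimal sketch using the standard Storage cmdlets:

```powershell
# Sketch: verify label, filesystem, and 64K (65536-byte) allocation unit.
Get-Volume -FileSystemLabel "ClusterDisk01" |
    Select-Object FileSystemLabel, FileSystem, AllocationUnitSize, HealthStatus
```

AllocationUnitSize should report 65536 and HealthStatus should be Healthy.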
Step 5 — Add the new disk to the cluster
Once the OS can see the disk, I let the cluster claim it as a Physical Disk resource.
Commands
```powershell
Get-ClusterAvailableDisk | Add-ClusterDisk
```
Result
The disk is added as Cluster Disk 2 (initially Offline).
Step 6 — Bring the disk online on a clean node
If the owner node has any lingering session problems, I move “Available Storage” to a clean node and bring the disk online there.
Commands
```powershell
Move-ClusterGroup -Name "Available Storage" -Node Hyper-V-Node05
Start-ClusterResource -Name "Cluster Disk 2"
```
Result
The resource comes Online successfully.
Step 7 — Convert to Cluster Shared Volume (CSV)
After the disk is online, I convert it to a CSV so every node sees it under C:\ClusterStorage\.
Commands
```powershell
Add-ClusterSharedVolume -Name "Cluster Disk 2"
Get-ClusterSharedVolume | Format-Table Name, State, OwnerNode, @{N='Path';E={$_.SharedVolumeInfo.FriendlyVolumeName}}
```
Result
Cluster Disk 2 is Online, owned by Hyper-V-Node05, and mounted at C:\ClusterStorage\Volume3.
Step 8 — Validate all nodes
Finally, I verify that each node sees the same target, the same disk attributes, and the same CSV path. Non-owner nodes should typically show the disk Offline at the OS layer (the cluster owns it).
Commands
```powershell
Get-IscsiSession
Get-Disk | Where-Object BusType -eq 'iSCSI' | Format-List Number, PartitionStyle, IsOffline, IsReadOnly, OperationalStatus
Get-ClusterSharedVolume
```
Result
Sessions are consistent, disk metadata is correct, and the CSV path is uniform (C:\ClusterStorage\Volume3).
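The CSV path check can also be automated across all nodes in one shot. A sketch assuming PowerShell remoting is enabled and using the Volume3 path from my environment:

```powershell
# Sketch: confirm every node can resolve the CSV mount point.
# Assumes PS remoting; the path matches my cluster's Volume3.
Invoke-Command -ComputerName (Get-ClusterNode).Name -ScriptBlock {
    [pscustomobject]@{ Node = $env:COMPUTERNAME; CsvVisible = Test-Path "C:\ClusterStorage\Volume3" }
}
```

Every node should report CsvVisible as True; a False here means that node can't reach the CSV namespace and needs its iSCSI or cluster state revisited.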
Part 2 — Add a new node from scratch (Windows Server 2025 Core)
Step 1 — Baseline the new node’s networking
Since my Hyper-V hosts utilize three dedicated networks (Management, Cluster, and Storage/iSCSI), I first verify the adapters on the new node and align the IP addresses, gateway, and DNS settings to match the standard. This keeps cluster and iSCSI traffic on their own VLANs, making the node consistent with the rest of the cluster.
Verify
```powershell
Get-NetIPConfiguration | Select InterfaceAlias,@{N='IPv4Address';E={($_.IPv4Address).IPAddress}},@{N='DefaultGateway';E={($_.IPv4DefaultGateway).NextHop}},@{N='DNSServers';E={($_.DNSServer.ServerAddresses -join ", ")}}
```
Configure (only if needed)
```powershell
New-NetIPAddress -InterfaceAlias "Local Network" -IPAddress 192.168.1.122 -PrefixLength 24 -DefaultGateway 192.168.1.254
Set-DnsClientServerAddress -InterfaceAlias "Local Network" -ServerAddresses 192.168.1.131
New-NetIPAddress -InterfaceAlias "Cluster Network" -IPAddress 192.168.2.3 -PrefixLength 24
New-NetIPAddress -InterfaceAlias "Storage Network" -IPAddress 192.168.10.122 -PrefixLength 24
Set-DnsClientServerAddress -InterfaceAlias "Cluster Network" -ResetServerAddresses
Set-DnsClientServerAddress -InterfaceAlias "Storage Network" -ResetServerAddresses
Disable-NetAdapter -Name "Hyper-V Network" -Confirm:$false
```
Step 2 — Ensure Hyper-V and Failover Clustering are installed
Even on a fresh OS, I verify first. If either role is missing, I install it. Verifying first keeps the process idempotent and avoids unnecessary reboots.
Verify
```powershell
Get-WindowsFeature -Name Hyper-V,Failover-Clustering | Format-Table Name,InstallState
```
Install (only if InstallState = Available)
```powershell
Install-WindowsFeature -Name Hyper-V,Failover-Clustering -IncludeManagementTools -Restart
```
Step 3 — Start iSCSI and connect to the Synology cluster target
I enable the iSCSI initiator and connect the node to the exact Synology target the cluster uses, so it sees the same LUN(s) as the existing nodes.
Enable / Discover / Connect
```powershell
Start-Service MSiSCSI; Set-Service MSiSCSI -StartupType Automatic
New-IscsiTargetPortal -TargetPortalAddress 192.168.10.198 -ErrorAction SilentlyContinue
Connect-IscsiTarget -NodeAddress "iqn.2000-01.com.synology:hyper-v" -TargetPortalAddress 192.168.10.198 -IsPersistent $true
Get-IscsiSession | Select TargetNodeAddress,InitiatorPortalAddress,IsPersistent
```
If a stale target appears, I remove it by SessionIdentifier:
```powershell
Get-IscsiSession | Select SessionIdentifier,TargetNodeAddress
iscsicli LogoutTarget <SessionIdentifier>
```
Step 4 — Verify the shared LUN view (don’t initialize here)
On a new node, I don’t initialize or format the disk. I only make sure Windows didn’t mark it read-only. The cluster controls ownership and online state.
Verify / Fix RO (if needed)
```powershell
Get-Disk | Where-Object BusType -eq 'iSCSI' | Format-List Number,FriendlyName,PartitionStyle,IsOffline,IsReadOnly,OperationalStatus,Size
Get-Disk | Where-Object BusType -eq 'iSCSI' | Set-Disk -IsReadOnly:$false
```
Step 5 — Join the node to the existing cluster
Once networking and storage visibility are set up correctly, I add the server to the cluster so it can manage CSVs and run clustered roles.
Add / Verify
```powershell
Add-ClusterNode -Name Hyper-V-Node03 -Cluster HyperV-V2H
Get-ClusterNode
```
Step 6 — Validate CSV access from the new node
I confirm the CSV path under C:\ClusterStorage\... mounts correctly. Optionally, I move the CSV to the new node and back to prove ownership and access.
Check / Optional move
```powershell
(Get-ClusterSharedVolume -Name "Cluster Disk 2").SharedVolumeInfo.FriendlyVolumeName
Move-ClusterSharedVolume -Name "Cluster Disk 2" -Node Hyper-V-Node03
Get-ClusterSharedVolume | Format-Table Name,State,OwnerNode,@{N='Path';E={$_.SharedVolumeInfo.FriendlyVolumeName}}
```
Step 7 — (Optional) Enable MPIO for iSCSI now
If I plan to add a second iSCSI path later, I enable MPIO now. With multiple paths, MPIO provides resiliency and proper path management; enabling it early makes the future expansion seamless.
Install / Claim / Show
```powershell
Install-WindowsFeature -Name Multipath-IO -Restart
Enable-MSDSMAutomaticClaim -BusType iSCSI; mpclaim.exe -s -d
```
(Seeing “No MPIO disks are present” is normal when only one path exists.)
Step 8 — Final quick health snapshot
I conclude with a light check to establish a “known-good” baseline: the CSV is online and owned by an expected node, the cluster core is healthy, and network roles are aligned with my design.
Check
```powershell
Get-ClusterSharedVolume | Format-Table Name,State,OwnerNode,@{N='Path';E={$_.SharedVolumeInfo.FriendlyVolumeName}}
Get-ClusterGroup "Cluster Group" | Format-List Name,State,OwnerNode
Get-ClusterNetwork | Format-Table Name,Role,State,Metric,Address,AddressMask
```
If I need to set network roles later:
```powershell
Import-Module FailoverClusters
# Role values: 1 = cluster traffic only, 3 = cluster and client, 0 = none (storage/iSCSI)
Set-ClusterNetwork -Name "Cluster Network 1" -Role 1
Set-ClusterNetwork -Name "Cluster Network 2" -Role 3
Set-ClusterNetwork -Name "Cluster Network 3" -Role 0
```
Conclusion
What makes this worth writing up isn't that I brought a cluster back to life; that's expected. It's how I did it on Windows Server Core, where there's no GUI safety net and everything lives and dies by the exact PowerShell you type. Clicking through wizards is easy; reconstructing the right sequence of commands months later is not. That's why I built this as a deterministic, copy-pastable runbook: I always verify first, I only change what's missing or wrong, and I keep each action to a single line so it's testable, reversible, and easy to audit.
This step-by-step approach turns “tribal knowledge” into something I can repeat under pressure. Verifying features before installing prevents unnecessary reboots. Cleaning stale iSCSI sessions by SessionIdentifier avoids the usual dead-end errors. Preparing the LUN exactly once on a single node protects the cluster from split-brain formatting. Converting to CSV only after the disk is healthy gives me a stable, uniform path on every node. Those are small choices, but together they’re what make the process predictable.
Doing this in PowerShell also provides me with better troubleshooting capabilities. When a command fails, the error tells me why, and I can adjust one line at a time without breaking the rest of the plan. If a node can’t arbitrate a disk, I move ownership to the clean node and try again. If persistence doesn’t reflect immediately, I let the next reconnect handle it and confirm later. The sequence stays the same, even when the environment doesn’t.
In the end, I didn’t just fix storage; I documented a working method I can trust the next time I’m staring at a blank Core console. The post serves me later as a memory aid and helps anyone else running Hyper-V on Server Core follow the same path: verify before change, clean before add, fix iSCSI consistency first, initialize once, and let the cluster manage ownership. That’s the difference between a rescue and a repeatable practice.
Share this article if you think it is worth sharing. If you have any questions or comments, comment here, or contact me on Twitter (yes, for me it is not X, but still Twitter).