
Replacing Dead LUNs and Adding a New Synology iSCSI LUN to a Windows Server 2025 Hyper-V Cluster (PowerShell-Only)

Continuing the reconfiguration of my Hyper-V/SCVMM environment, I had dead LUNs in my Hyper-V Cluster that needed cleanup, as well as a new LUN to add. All nodes run Windows Server Core 2025, so I performed all tasks manually via PowerShell.

In this post, I walk through a real case where a Failover Cluster lost its storage because the old iSCSI LUNs were deleted. The cluster was left with dead disks and failed CSVs. To bring it back, I added a new Synology Block LUN over iSCSI and replaced the old storage.

The process is straightforward: clean up cluster resources, resolve iSCSI sessions, prepare the new disk, and re-integrate it into the cluster. For each step, I explain what I’m doing, why I’m doing it, and the exact one-line commands.


Part 1 — Replace dead storage and add a new LUN

Step 1 — Check cluster prerequisites

I always verify prerequisites first so I don’t chase storage issues caused by missing roles or services. I confirm that Hyper-V, Failover Clustering, and the iSCSI initiator service are enabled before touching the disks.

Commands
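The one-liners I run here look like this (a sketch; feature names are the Server 2025 Core ones):

```powershell
# Verify the roles are installed
Get-WindowsFeature Hyper-V, Failover-Clustering | Format-Table Name, InstallState

# Confirm the iSCSI initiator service is running and starts automatically
Get-Service MSiSCSI | Select-Object Name, Status, StartType
Set-Service MSiSCSI -StartupType Automatic; Start-Service MSiSCSI
```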

Result
Features are present, and iSCSI is set to Automatic.


Step 2 — Inspect the current cluster state

Before adding anything new, I remove any dead storage objects. Old Physical Disk resources or broken CSVs will block arbitration and confuse the cluster.

Commands
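A sketch of the cleanup, assuming the dead resource is named Cluster Disk 1 (substitute your own failed names from the first two commands):

```powershell
# Inspect what the cluster currently holds
Get-ClusterResource | Where-Object ResourceType -eq 'Physical Disk'
Get-ClusterSharedVolume

# Remove a failed CSV and its underlying Physical Disk resource
Remove-ClusterSharedVolume -Name 'Cluster Disk 1'
Remove-ClusterResource -Name 'Cluster Disk 1' -Force
```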

Result
The cluster is clean and ready to accept new storage.


Step 3 — Fix iSCSI sessions

Every node must connect only to the same Synology target IQN that the cluster will use. Stale targets from the old LUNs must go, or the disk won’t come online.

Commands
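A sketch of the session cleanup and reconnect; the portal IP 10.0.30.10 and the old target IQN are placeholders for my storage VLAN:

```powershell
# See what this node currently targets
Get-IscsiTargetPortal
Get-IscsiTarget | Select-Object NodeAddress, IsConnected

# Drop the stale target left over from the deleted LUNs
Disconnect-IscsiTarget -NodeAddress 'iqn.2000-01.com.synology:old-target' -Confirm:$false

# Point at the Synology portal and connect persistently to the cluster target
New-IscsiTargetPortal -TargetPortalAddress 10.0.30.10
Connect-IscsiTarget -NodeAddress 'iqn.2000-01.com.synology:hyper-v' -IsPersistent $true
```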

If a session refuses to disconnect, I log it off by its SessionIdentifier:
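A sketch of the forced logoff; I first list sessions to find the identifier, then pass it explicitly:

```powershell
# Find the stuck session's identifier
Get-IscsiSession | Select-Object SessionIdentifier, TargetNodeAddress

# Log off that specific session, and remove its persistence so it won't return at boot
Disconnect-IscsiTarget -SessionIdentifier '<id-from-above>' -Confirm:$false
Get-IscsiSession | Where-Object TargetNodeAddress -like '*old-target*' | Unregister-IscsiSession
```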

If the node got messy, I just reboot:
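```powershell
Restart-Computer -Force
```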

Result
Only iqn.2000-01.com.synology:hyper-v remains on each node.


Step 4 — Prepare the new LUN (one node only)

I prepare the disk on a single node: clear read-only, initialize GPT, and create a single NTFS volume with a 64K allocation. Other nodes must not initialize or format the disk.

Commands
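A sketch of the preparation, assuming the new LUN shows up as disk number 2 (always confirm with Get-Disk first):

```powershell
# Identify the new LUN
Get-Disk | Format-Table Number, FriendlyName, Size, OperationalStatus, IsReadOnly

# Clear read-only/offline, initialize GPT, create one NTFS volume with 64K allocation
Set-Disk -Number 2 -IsReadOnly $false
Set-Disk -Number 2 -IsOffline $false
Initialize-Disk -Number 2 -PartitionStyle GPT
New-Partition -DiskNumber 2 -UseMaximumSize |
  Format-Volume -FileSystem NTFS -AllocationUnitSize 65536 -NewFileSystemLabel 'ClusterDisk01'
```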

Result
A healthy NTFS volume ClusterDisk01 (64K) ready for clustering.


Step 5 — Add the new disk to the cluster

Once the OS can see the disk, I let the cluster claim it as a Physical Disk resource.

Commands
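The claim is a single pipeline:

```powershell
# Let the cluster claim every eligible disk it can see, then confirm
Get-ClusterAvailableDisk | Add-ClusterDisk
Get-ClusterResource | Where-Object ResourceType -eq 'Physical Disk'
```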

Result
The disk is added as Cluster Disk 2 (initially Offline).


Step 6 — Bring the disk online on a clean node

If the owner node has any lingering session problems, I move “Available Storage” to a clean node and bring the disk online there.

Commands
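A sketch, using Hyper-V-Node05 as the clean node:

```powershell
# Move Available Storage to a known-good node, then bring the disk online there
Move-ClusterGroup -Name 'Available Storage' -Node 'Hyper-V-Node05'
Start-ClusterResource -Name 'Cluster Disk 2'
Get-ClusterResource -Name 'Cluster Disk 2'
```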

Result
The resource comes Online successfully.


Step 7 — Convert to Cluster Shared Volume (CSV)

After the disk is online, I convert it to a CSV so every node sees it under C:\ClusterStorage\.

Commands
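The conversion is one cmdlet, followed by a check of the mount path:

```powershell
# Promote the disk to a Cluster Shared Volume and verify the mount
Add-ClusterSharedVolume -Name 'Cluster Disk 2'
Get-ClusterSharedVolume | Format-Table Name, State, OwnerNode
```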

Result
Cluster Disk 2 is Online, owned by Hyper-V-Node05, mounted at C:\ClusterStorage\Volume3.


Step 8 — Validate all nodes

Finally, I verify that each node sees the same target, the same disk attributes, and the same CSV path. Non-owner nodes should typically show the disk Offline at the OS layer (the cluster owns it).

Commands
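A sketch of the per-node checks (the SYNOLOGY* friendly-name filter is an assumption; match whatever Get-Disk reports for your LUN):

```powershell
# Run on every node: same target, same disk attributes, same CSV path
Get-IscsiTarget | Select-Object NodeAddress, IsConnected
Get-Disk | Where-Object FriendlyName -like 'SYNOLOGY*' |
  Format-Table Number, Size, IsOffline, IsReadOnly
Get-ChildItem C:\ClusterStorage\
```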

Result
Sessions are consistent, disk metadata is correct, and the CSV path is uniform (C:\ClusterStorage\Volume3).


Part 2 — Add a new node from scratch (Windows Server 2025 Core)

Step 1 — Baseline the new node’s networking

Since my Hyper-V hosts utilize three dedicated networks (Management, Cluster, and Storage/iSCSI), I first verify the adapters on the new node and align the IP addresses, gateway, and DNS settings to match the standard. This keeps cluster and iSCSI traffic on their own VLANs, making the node consistent with the rest of the cluster.

Verify
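```powershell
# Adapters, IPv4 addressing, and DNS on the new node
Get-NetAdapter | Format-Table Name, Status, LinkSpeed
Get-NetIPAddress -AddressFamily IPv4 | Format-Table InterfaceAlias, IPAddress, PrefixLength
Get-DnsClientServerAddress -AddressFamily IPv4
```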

Configure (only if needed)
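A sketch of the alignment; the adapter aliases and addresses below are examples for my three-network design and must match your own plan:

```powershell
# Management gets gateway + DNS; Cluster and iSCSI stay gateway-less on their VLANs
New-NetIPAddress -InterfaceAlias 'Management' -IPAddress 192.168.1.15 -PrefixLength 24 -DefaultGateway 192.168.1.1
New-NetIPAddress -InterfaceAlias 'Cluster' -IPAddress 10.0.20.15 -PrefixLength 24
New-NetIPAddress -InterfaceAlias 'iSCSI' -IPAddress 10.0.30.15 -PrefixLength 24
Set-DnsClientServerAddress -InterfaceAlias 'Management' -ServerAddresses 192.168.1.2
```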


Step 2 — Ensure Hyper-V and Failover Clustering are installed

Even on a fresh OS, I verify first. If either role is missing, I install it. Verifying first keeps the process idempotent and avoids unnecessary reboots.

Verify
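```powershell
Get-WindowsFeature Hyper-V, Failover-Clustering | Format-Table Name, InstallState
```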

Install (only if InstallState = Available)
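```powershell
# One reboot covers both roles; management tools include the PowerShell modules
Install-WindowsFeature Hyper-V, Failover-Clustering -IncludeManagementTools -Restart
```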


Step 3 — Start iSCSI and connect to the Synology cluster target

I enable the iSCSI initiator and connect the node to the exact Synology target the cluster uses, so it sees the same LUN(s) as the existing nodes.

Enable / Discover / Connect
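A sketch, with the same example portal IP as before:

```powershell
# Start the initiator and make it persistent across boots
Set-Service MSiSCSI -StartupType Automatic; Start-Service MSiSCSI

# Discover the Synology portal and connect to the cluster target
New-IscsiTargetPortal -TargetPortalAddress 10.0.30.10
Connect-IscsiTarget -NodeAddress 'iqn.2000-01.com.synology:hyper-v' -IsPersistent $true
Get-IscsiSession | Select-Object TargetNodeAddress, IsPersistent
```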

If a stale target appears, I remove it by SessionIdentifier:
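```powershell
# List sessions, then log off the stale one by its identifier
Get-IscsiSession | Select-Object SessionIdentifier, TargetNodeAddress
Disconnect-IscsiTarget -SessionIdentifier '<stale-session-id>' -Confirm:$false
```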


Step 4 — Verify the shared LUN view (don’t initialize here)

On a new node, I don’t initialize or format the disk. I only make sure Windows didn’t mark it read-only. The cluster controls ownership and online state.

Verify / Fix RO (if needed)
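A sketch, again assuming the shared LUN is disk number 2 on this node:

```powershell
# Look, but don't touch: no Initialize-Disk or Format-Volume on a joining node
Get-Disk | Format-Table Number, FriendlyName, Size, IsOffline, IsReadOnly

# The only fix allowed here is clearing a read-only flag
Set-Disk -Number 2 -IsReadOnly $false
```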


Step 5 — Join the node to the existing cluster

Once networking and storage visibility are set up correctly, I add the server to the cluster so it can manage CSVs and run clustered roles.

Add / Verify
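A sketch; the node and cluster names are placeholders for my environment:

```powershell
# Join the new node, then confirm it shows as Up
Add-ClusterNode -Name 'Hyper-V-Node06' -Cluster 'HVCluster01'
Get-ClusterNode | Format-Table Name, State
```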


Step 6 — Validate CSV access from the new node

I confirm the CSV path under C:\ClusterStorage\... mounts correctly. Optionally, I move the CSV to the new node and back to prove ownership and access.

Check / Optional move
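A sketch of the check and the optional round-trip move (node names are from my lab):

```powershell
# CSV state and the mounted path as seen from the new node
Get-ClusterSharedVolume | Format-Table Name, State, OwnerNode
Get-ChildItem C:\ClusterStorage\Volume3

# Optional: prove the new node can own the CSV, then move it back
Move-ClusterSharedVolume -Name 'Cluster Disk 2' -Node 'Hyper-V-Node06'
Move-ClusterSharedVolume -Name 'Cluster Disk 2' -Node 'Hyper-V-Node05'
```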


Step 7 — (Optional) Enable MPIO for iSCSI now

If I plan to add a second iSCSI path later, I enable MPIO now. With multiple paths it provides resiliency and proper path management, and enabling it early means the second path slots in without disruption.

Install / Claim / Show
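```powershell
# Install the MPIO feature (requires a reboot) and auto-claim iSCSI devices
Install-WindowsFeature Multipath-IO -Restart
Enable-MSDSMAutomaticClaim -BusType iSCSI

# Show claimed MPIO disks; empty output is expected with a single path
mpclaim.exe -s -d
```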

(Seeing “No MPIO disks are present” is normal when only one path exists.)


Step 8 — Final quick health snapshot

I conclude with a light check to establish a “known-good” baseline: the CSV is online and owned by an expected node, the cluster core is healthy, and network roles are aligned with my design.

Check
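```powershell
# Nodes up, CSV online on an expected owner, networks roled, nothing failed
Get-ClusterNode | Format-Table Name, State
Get-ClusterSharedVolume | Format-Table Name, State, OwnerNode
Get-ClusterNetwork | Format-Table Name, Role, Address
Get-ClusterResource | Where-Object State -ne 'Online'
```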

If I need to set network roles later:
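A sketch, using my network names; the Role values are the cluster's own enumeration:

```powershell
# Role 3 = cluster + client, 1 = cluster-only, 0 = none (keep iSCSI out of cluster traffic)
(Get-ClusterNetwork 'Management').Role = 3
(Get-ClusterNetwork 'Cluster').Role = 1
(Get-ClusterNetwork 'iSCSI').Role = 0
```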


Conclusion

What makes this worth writing up isn't that I brought a cluster back to life (that's expected), but how I did it on Windows Server Core, where there's no GUI safety net and everything lives and dies by the exact PowerShell you type. Clicking through wizards is easy; reconstructing the right sequence of commands months later is not. That's why I built this as a deterministic, copy-pastable runbook: I always verify first, I only change what's missing or wrong, and I keep each action to a single line so it's testable, reversible, and easy to audit.

This step-by-step approach turns “tribal knowledge” into something I can repeat under pressure. Verifying features before installing prevents unnecessary reboots. Cleaning stale iSCSI sessions by SessionIdentifier avoids the usual dead-end errors. Preparing the LUN exactly once on a single node protects the cluster from split-brain formatting. Converting to CSV only after the disk is healthy gives me a stable, uniform path on every node. Those are small choices, but together they’re what make the process predictable.

Doing this in PowerShell also provides me with better troubleshooting capabilities. When a command fails, the error tells me why, and I can adjust one line at a time without breaking the rest of the plan. If a node can’t arbitrate a disk, I move ownership to the clean node and try again. If persistence doesn’t reflect immediately, I let the next reconnect handle it and confirm later. The sequence stays the same, even when the environment doesn’t.

In the end, I didn’t just fix storage; I documented a working method I can trust the next time I’m staring at a blank Core console. The post serves me later as a memory aid and helps anyone else running Hyper-V on Server Core follow the same path: verify before change, clean before add, fix iSCSI consistency first, initialize once, and let the cluster manage ownership. That’s the difference between a rescue and a repeatable practice.

Share this article if you think it is worth sharing. If you have any questions or comments, comment here, or contact me on Twitter (yes, for me it is not X, but still Twitter).

©2025 ProVirtualzone. All Rights Reserved
August 27th, 2025 | Hyper-V, Hypervisor, Microsoft, Storage, Synology

About the Author:

I have over 20 years of experience in the IT industry. I have been working with Virtualization for more than 15 years (mainly VMware). I recently obtained certifications, including VCP DCV 2022, VCAP DCV Design 2023, and VCP Cloud 2023. Additionally, I hold VCP6.5-DCV and VMware vSAN Specialist, have been a vExpert vSAN, vExpert NSX, and vExpert Cloud Provider for the last two years, a vExpert for the last 7 years, and am an old MCP. My specialties are Virtualization, Storage, and Virtual Backup. I am a Solutions Architect in the areas of VMware, Cloud, and Backup/Storage, employed by ITQ, a VMware partner, as a Senior Consultant. I am also a blogger, the owner of the blog ProVirtualzone.com, and recently a book author.
