Veeam 12.3 upgrade failed on PostgreSQL and core services — how I fixed it


This week, I attempted to upgrade my Veeam server from version 12.3.0.310 to 12.3.2.3617 and encountered several issues. I saw other admins report the same problem, and some solved it by uninstalling and reinstalling Veeam Server. I did not want to rebuild a working server. My first blocker was PostgreSQL refusing the connection on localhost:5432 with the message that the connection was forcibly closed by the remote host. I resolved the issue by correcting the SSPI mapping so the installer account could connect, and the upgrade proceeded.

This post, “Veeam 12.3 upgrade failed on PostgreSQL and core services — how I fixed it,” demonstrates how I completed the upgrade without needing to reinstall anything. I documented a real-world upgrade where the setup initially failed on the PostgreSQL mapping, and then was unable to start core services, including the Broker on port 9501, the Backup service, and Threat Hunter on port 6175. My goal is to show how I identified each issue, proved it with logs and port checks, applied a targeted fix, and verified the result until the installer completed cleanly.

This is a long post on purpose. I include the exact commands I ran, the log paths I tailed, the service states I checked, and the reasoning behind every change. If you are seeing the same errors, you can follow the same flow and avoid a reinstall. If your messages differ, you can still use the same method. Prove what is actually broken by reading the relevant log, testing the specific port, and then changing only what the evidence supports.

Environment highlights

  • Single VBR server on Windows Server 2022 with multiple NICs for LAN and storage
  • Local workgroup admin account
  • Built-in PostgreSQL 15 for the VBR configuration database

1 Error: “Failed to connect to PostgreSQL 5432” (SSPI mapping)


Symptom (what I saw):
During the 12.3 upgrade, the Configuration Check failed with a connection error to PostgreSQL on localhost:5432, reporting that the connection was forcibly closed by the remote host.

What this usually means in practice:
My running Veeam services could communicate with PostgreSQL, but the upgrade process ran under my current Windows login, which was not authorized to connect to the embedded PostgreSQL instance. The updater account must be approved for the configuration database; otherwise, the check fails.

1) Before touching anything, tail the live installer log

The Veeam suite engine writes rotating logs here: C:\ProgramData\Veeam\Setup\Temp\SuiteEngine_*.log
Tail the newest one live during the upgrade:
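A quick way to do that in PowerShell, picking the newest rotating log by timestamp (a sketch; adjust the path if your ProgramData location differs):

```powershell
# Follow the newest SuiteEngine log live while the upgrade runs
$log = Get-ChildItem 'C:\ProgramData\Veeam\Setup\Temp\SuiteEngine_*.log' |
    Sort-Object LastWriteTime -Descending | Select-Object -First 1
Get-Content $log.FullName -Tail 200 -Wait
```

Leave this window open while you click through the wizard; -Wait keeps streaming new lines as they arrive.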

This shows which product the wizard is on and where it stalls, for example, the start of a service that times out and prompts for Retry or Cancel.


Step 0. Baseline to prove Postgres is up and avoid wild goose chases

Why I did this: I wanted to rule out network or listener issues so I could focus on authentication.

  1. I confirmed the service was running
    Services console → postgresql-x64-15 → Status should be Running
  2. I confirmed the listener was active
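The check I used, against the default embedded PostgreSQL port:

```powershell
Test-NetConnection -ComputerName localhost -Port 5432
```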

What I expected: TcpTestSucceeded : True

If it is False, fix the service or listener first. If it is True, authentication is the prime suspect rather than networking.


Step 1. Get the exact identity PostgreSQL sees and do not guess

Why I did this: The mapping must match the identity string exactly. I extracted the username string from the PostgreSQL log and used it in the mapping in the exact format.

  1. I opened the newest log in:

  2. I looked for a line like:

The value inside quotes is the Windows identity PostgreSQL received. It can be a UPN, such as Admin@DOMAIN, or a down-level name such as MACHINE\Administrator. I mapped that value verbatim.
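For reference, the rejection in the newest file under the PostgreSQL data\log directory typically looks approximately like this (the identity string on your server will differ):

```
LOG:    no match in usermap "veeam" for user "postgres" authenticated as "VEEAMBACKUP12\Administrator"
FATAL:  SSPI authentication failed for user "postgres"
```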

Tip: I also ran in the same elevated session I used to launch the upgrade:
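Namely (whoami /upn only returns a value for domain accounts, so on a workgroup server I rely on the first form):

```powershell
whoami        # down-level form, e.g. machine\user
whoami /upn   # UPN form; domain accounts only
```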

This helped me predict the identity I would see in the log.


Step 2. Confirm the auth method and the map being used

Why I did this: Veeam’s embedded PostgreSQL on Windows uses SSPI with a username map. I confirmed both ends. pg_hba.conf sets the auth method and the map. pg_ident.conf holds the actual map entries.

  1. I opened:

  2. I verified localhost entries used SSPI and the veeam map, for example:
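For the embedded instance, the localhost entries I was checking for look roughly like this (a sketch; your file may scope them to specific databases or users rather than all):

```
# TYPE  DATABASE  USER  ADDRESS       METHOD
host    all       all   127.0.0.1/32  sspi map=veeam
host    all       all   ::1/128       sspi map=veeam
```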

If the file already looked this way, I proceeded.


Step 3. Fix the mapping in pg_ident.conf

Why I did this: This is the crux. I added the exact identity from Step 1 to the veeam map so the upgrade account could connect as the PostgreSQL database user.

  1. I opened:

  2. I appended a line using the exact identity. Example for a local admin:
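The line I appended, using the identity string exactly as my log showed it:

```
# MAPNAME  SYSTEM-USERNAME                PG-USERNAME
veeam      "veeambackup12\Administrator"  postgres
```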

If my log showed a different string, for example Administrator@VEEAM-V12BETA or DOMAIN\AdminUser, I used that exact string instead. Case, slashes, and quotes matter. The map ties an external Windows identity to a PostgreSQL role, and map=veeam in pg_hba.conf activates it.

I reloaded or restarted PostgreSQL.

Services → postgresql-x64-15 → Restart

I could also use pg_ctl reload, but a restart is the simplest option on Windows.

Why this works: SSPI is a Windows single sign-on protocol. PostgreSQL receives the Windows identity and consults the Veeam map to see if that identity can be mapped to PostgreSQL. If the identity is not listed, the connection is rejected.


Step 4. Re-run the upgrade and verify

What I expected now: The Configuration Check would no longer fail on the database connection. If it passes and the installation proceeds, that would confirm that an authentication mismatch was the cause.

If it still fails, these are the fast iterations I used:

  1. I rechecked the newest PostgreSQL log after the retry
    Sometimes the installer spawns with a slightly different identity string, for example UPN versus DOMAIN\User. If I saw a new authenticated as "X" value, I added another line for that identity and restarted PostgreSQL. This issue often appears after a host or domain rename, and mapping the new string resolves it.
  2. Windows Script Host disabled scenario that applies to 12.3
    If my error shifted to PostgreSQL package deployment failing during the upgrade, I checked whether a hardening baseline disabled Windows Script Host. For 12.3, WSH must be enabled during setup. I re-enabled it, completed the upgrade, then re-applied my baseline.
  3. Last resort only if the mapping route does not work
    I backed up my Veeam configuration, uninstalled VBR, and removed the embedded PostgreSQL instance. I then performed a fresh installation and restored the configuration. This clears the stubborn state when the instance was installed under a user that no longer exists or cannot be resolved. I only consider this after exhausting mapping fixes.

Root cause analysis and why this happened in my case

  • My Veeam services had a valid mapping, such as NT AUTHORITY\SYSTEM, so daily operations were fine.
  • I ran the upgrade under a login that was not present in pg_ident.conf. PostgreSQL rejected it, and the Configuration Check failed.
  • The fix was to add a mapping for the exact Windows identity that launched the installer.

Verification checklist I used after the fix

  • The upgrade Configuration Check passed the database step.
  • There were no new 'no match in usermap "veeam"' lines in the newest PostgreSQL log.
  • Veeam services started up cleanly after the upgrade, and jobs ran as expected.

Rollback plan for safe edits

  • Before editing, I copied:
    pg_ident.conf to pg_ident.conf.bak
    pg_hba.conf to pg_hba.conf.bak
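In PowerShell this was a two-liner (the data directory below is the default for the embedded PostgreSQL 15; adjust it if yours differs):

```powershell
# Back up both auth files next to the originals before editing
$dataDir = 'C:\Program Files\PostgreSQL\15\data'
Copy-Item "$dataDir\pg_ident.conf" "$dataDir\pg_ident.conf.bak"
Copy-Item "$dataDir\pg_hba.conf"   "$dataDir\pg_hba.conf.bak"
```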

If anything went wrong, I restored the backups and restarted the PostgreSQL service.


Good practices I will follow to avoid this next time

  • I will run upgrades with the same Windows account that installed Veeam originally, or I will ensure that the account is mapped in pg_ident.conf. Running as SYSTEM via PsExec is also possible, but mapping the installer identity is the clearest approach.
  • I will document the identity strings I have mapped in both UPN and DOMAIN\User forms in my runbook.
  • After a host or domain rename, I will immediately update pg_ident.conf to reflect the new identity.
  • If I use security baselines that disable Windows Script Host, I will enable it during the setup stage that requires it, then reapply the baseline.

What I changed in my case, as a concrete example

  • I captured my login identity:
  • I added this line to pg_ident.conf:
  • I restarted PostgreSQL, re-ran the installer, and the Configuration Check passed.
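Put together, the concrete change was (the whoami output is what I saw on my server):

```powershell
whoami
# veeambackup12\administrator

# Line appended to pg_ident.conf, with the identity quoted exactly:
#   veeam "veeambackup12\Administrator" postgres
```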

Sources I consulted after I fixed it

  • Veeam knowledge articles about updater configuration checks and permissions
  • Veeam knowledge for SSPI authentication errors during upgrades
  • PostgreSQL documentation on SSPI and user name maps on Windows
  • Notes from the Veeam community about updating mappings after hostname changes
  • Veeam notes about Windows Script Host being required during the 12.3 upgrade stage

2 Error: “SSPI authentication for user postgres failed”

After I fixed the above issue, I encountered another issue: SSPI authentication for user postgres failed.


The error message is in German, but it is similar to the previous one.

Symptom
During the upgrade, I saw a dialog reporting, in German, the equivalent of:

“SSPI authentication for user ‘postgres’ failed.”

What this told me
The installer was now reaching PostgreSQL on port 5432, but my Windows identity still did not match any entry in the PostgreSQL user name map. This is an authentication mapping problem, not a network or service problem.


1) I verified the authentication rules that PostgreSQL applies

Why I did this
PostgreSQL uses pg_hba.conf to decide the auth method and which user map to consult. If sspi map=veeam is not applied to localhost, the mapping in pg_ident.conf will never be checked.

What I checked
I opened:

I confirmed these lines were present and active, with no earlier conflicting host rules above them:
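The entries I was confirming look roughly like this (a sketch; your file may scope databases and users differently):

```
host  all  all  127.0.0.1/32  sspi map=veeam
host  all  all  ::1/128       sspi map=veeam
```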

These entries were correct, which meant I needed to fix the identity mapping itself.


2) I added the exact Windows identity to the user map

Why I did this
PostgreSQL must see an exact match between the incoming SSPI identity and an entry in pg_ident.conf. Earlier, I had added veeambackup12\Administrator for the first error. The second error showed that I still needed to make sure every required identity was mapped, especially the one used by the running installer session.

What I changed
I opened:

I ensured I had the exact identity line for my session:

I also kept the LocalSystem mapping, since Veeam services often connect as SYSTEM:
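After the edit, the relevant part of my map looked like this (the SYSTEM string is the UPN-style form PostgreSQL reported for LocalSystem on my host; match whatever your own log shows):

```
veeam  "veeambackup12\Administrator"  postgres
veeam  "SYSTEM@NT AUTHORITY"          postgres
```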

I saved the file.


3) I reloaded PostgreSQL to apply the change

Why I did this
pg_ident.conf changes only take effect after a reload or restart.

What I ran
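Either of these applies the change (paths assume the default embedded PostgreSQL 15 install):

```powershell
# Reload the configuration without dropping connections
& 'C:\Program Files\PostgreSQL\15\bin\pg_ctl.exe' reload -D 'C:\Program Files\PostgreSQL\15\data'

# Or the simpler full restart
Restart-Service postgresql-x64-15
```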


4) I proved SSPI works from my current session before retrying the installer

Why I did this
I wanted a quick proof that my identity could become the postgres role using SSPI, without waiting for the installer.

What I ran
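A one-line proof with psql, assuming the default binary path; with SSPI working, there is no password prompt:

```powershell
& 'C:\Program Files\PostgreSQL\15\bin\psql.exe' -h localhost -p 5432 -U postgres -c 'SELECT version();'
```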

Expected result
PostgreSQL prints its version in one row. I saw the version string, which confirmed that SSPI mapping was fixed for my session.


5) I re-ran the upgrade and watched the right logs in real time

Why I did this
If anything failed again, I wanted the error lines immediately, so I tailed the suite engine log and the PostgreSQL log while the Configuration Check and component installs were running.

Suite engine live tail

PostgreSQL live tail

Installer log quick tail when needed
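The tails I used, each in its own elevated PowerShell window since -Wait blocks (the PostgreSQL log directory is the default for the embedded instance; adjust it to your install):

```powershell
# Suite engine live tail
$suite = Get-ChildItem 'C:\ProgramData\Veeam\Setup\Temp\SuiteEngine_*.log' |
    Sort-Object LastWriteTime -Descending | Select-Object -First 1
Get-Content $suite.FullName -Tail 100 -Wait

# PostgreSQL live tail
$pg = Get-ChildItem 'C:\Program Files\PostgreSQL\15\data\log\*' |
    Sort-Object LastWriteTime -Descending | Select-Object -First 1
Get-Content $pg.FullName -Tail 100 -Wait
```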

Result
The Configuration Check moved past the database step. The suite continued with Enterprise Manager, then Backup and Replication. The logs showed normal progress and final success codes.


6) Why did this happen in my case

  • The PostgreSQL listener and service were functioning properly, so this failure was not related to connectivity.
  • My upgrade session identity was not mapped to the postgres role in pg_ident.conf. PostgreSQL rejected the SSPI handshake.
  • Adding the exact Windows identity resolved the authentication path used by the installer.

7) My verification checklist after the upgrade

All Veeam services were running

  • The console showed the expected 12.3 build under Help, About
  • Enterprise Manager was reachable on https://localhost:9443 and reported Connected servers and a valid license
  • Repositories rescanned successfully, proxies and transport services validated under Properties, Apply
  • A small test backup job completed successfully, and Instant VM Recovery reached the final step without errors

8) Rollback safety for these edits

Before I edited, I backed up the auth files:

If I needed to revert, I would restore the backups and restart the PostgreSQL service.


9) Notes I will follow next time

  • I will run the upgrade with the same Windows account that installed Veeam, or I will ensure that the account is mapped in pg_ident.conf beforehand.
  • After any hostname or domain change, I will add new mappings that match the new identity string, as shown by whoami or by the PostgreSQL log.
  • I will maintain the SYSTEM mapping for service operations and keep the mapping line for the account I actually use for the upgrade.

The exact line that fixed this specific error for me

veeam "veeambackup12\Administrator" postgres

After all the fixes, the installation upgrade of Veeam Enterprise Manager was able to continue and finish properly.


Both errors are documented by Veeam; you can check KB4542.


3 Error: Veeam Backup Service failed to start because Broker on 9501 was down

With Enterprise Manager updated, I started the Veeam Backup and Replication upgrade. That is where I ran into a few errors.

Symptom I saw in SuiteEngine_*.log

Evidence I found in C:\ProgramData\Veeam\Backup\Svc.VeeamBackup.log

What this means and why it happens

The Veeam Backup Service communicates with the Veeam Broker Service over TCP port 9501 on the local host. If the Broker is not running, not bound to a local address, or another process occupies the port, the Backup service startup fails. The “actively refused” message indicates that a listener problem occurred on port 9501 at the time BackupSvc attempted to connect.

How I proved the state before changing anything

I verified service status and the port listener.

Expected outcome when things are healthy:

  • VeeamBrokerSvc is Running
  • Test-NetConnection to 9501 returns TcpTestSucceeded: True

In my case, Broker was not listening on 9501, so BackupSvc could not connect.

The fix I applied

I started the Broker service, gave it a few seconds to bind, then verified the port and retried the installer.
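In PowerShell:

```powershell
Start-Service VeeamBrokerSvc
Start-Sleep -Seconds 10        # give the listener time to bind
Test-NetConnection -ComputerName localhost -Port 9501
```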

When 9501 returned a success, I switched back to the upgrade wizard and selected ‘Retry’. The Veeam Backup Service then started, and the wizard continued.

If Broker still does not bind to 9501

I used these quick checks to find the cause.

  • If another process was using 9501, I identified it by PID from netstat output and stopped or reconfigured that process, then restarted VeeamBrokerSvc.
  • If no listener appeared after starting Broker, I checked the Windows System and Application event logs around the same timestamp for service start failures and dependency issues, then restarted Broker again.
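To identify the owning process without parsing netstat output by hand, Get-NetTCPConnection does the same job (a PowerShell equivalent of the netstat check described above):

```powershell
# Who, if anyone, is listening on 9501 right now
Get-NetTCPConnection -LocalPort 9501 -State Listen |
    ForEach-Object { Get-Process -Id $_.OwningProcess }
```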

Note about the storage NIC and the IP shown in logs

My logs referenced the storage NIC address, for example, 192.168.10.169, which is normal on a multi-homed server. For local Broker communications, the key requirement is that a listener exists on TCP 9501 on the host. It does not need to be bound only to one specific interface as long as the local RPC endpoint can reach the Broker listener. If I hard bind services to particular interfaces, I make sure 127.0.0.1 and the primary management IP can still reach port 9501.

Verification after the fix

  • Test-NetConnection localhost -Port 9501 returned success.
  • Get-Service VeeamBrokerSvc,VeeamBackupSvc showed both services running.
  • The installer’s Retry succeeded and the wizard proceeded.
  • No new connection refused entries appeared in Svc.VeeamBackup.log.

Rollback safety

Before stopping or changing anything, I captured a quick baseline:
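My baseline capture (the output folder is arbitrary; pick any writable path):

```powershell
# Snapshot service states and the 9501 listener before changing anything
Get-Service Veeam* | Select-Object Name, Status | Out-File C:\Temp\veeam-services-before.txt
netstat -ano | findstr ":9501" | Out-File C:\Temp\port9501-before.txt
```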

If I had needed to revert, I would have returned services to their original states and rechecked the port listener to match the baseline.

Root cause in my case

The Broker service was not listening on 9501 at the moment the Backup service attempted to start. Starting the Broker and confirming the listener resolved the failure, allowing the upgrade to continue.


4 Error: Threat Hunter would not start on 6175


Symptom I saw in SuiteEngine_*.log

Service log location I checked
C:\ProgramData\Veeam\Backup\Svc.VeeamThreatHunter.log

Typical healthy lines I expect in that log

What this means and why it happens

The Threat Hunter service exposes a local TCP listener on 6175. If the service does not bind to 6175, or if another process already owns that port, the service start will fail and the installer will stop at the configuration check.

How I proved the state before changing anything

I verified the service status, then confirmed whether 6175 was listening.
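Same pattern as for the Broker:

```powershell
Get-Service VeeamThreatHunterSvc
Test-NetConnection -ComputerName localhost -Port 6175
```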

Expected when healthy

  • VeeamThreatHunterSvc is Running
  • Test-NetConnection to 6175 returns TcpTestSucceeded: True

In my case, the listener was not present, so the service had failed to bind to the port.

The fix I applied

I started the service, waited a few seconds for the listener to appear, then verified the port and the detailed service state.
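In PowerShell:

```powershell
Start-Service VeeamThreatHunterSvc
Start-Sleep -Seconds 10        # wait for the listener to appear
Test-NetConnection -ComputerName localhost -Port 6175
Get-Service VeeamThreatHunterSvc | Format-List Name, Status, StartType
```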

When 6175 returned a success, I switched back to the installer and selected ‘Retry’. The wizard resumed and continued.

If the service still does not bind to 6175

I used these quick checks to find the cause.

  • If a different process had taken 6175, I stopped or reconfigured that process, then started VeeamThreatHunterSvc again.
  • If the service started, then stopped, I checked its log for binding or initialization errors and the Application and System event logs for dependency failures, then retried the start.

Threat Hunter also updates signatures over HTTPS on 443 to vendor endpoints. In egress-restricted environments, I allow those destinations or I stage updates internally. Signature download issues do not block the 6175 listener, but they can cause repeated restarts later, so I verify outbound 443 once the service is running.

Verification after the fix

  • Test-NetConnection localhost -Port 6175 returned success
  • Get-Service VeeamThreatHunterSvc showed Running
  • The installer Retry succeeded, and the wizard proceeded
  • Svc.VeeamThreatHunter.log showed the registration and "started listening" lines without new errors

Rollback safety I used

Before making any changes, I captured a baseline.
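Same approach as for the Broker baseline (the output folder is arbitrary):

```powershell
Get-Service Veeam* | Select-Object Name, Status | Out-File C:\Temp\services-before.txt
netstat -ano | findstr ":6175" | Out-File C:\Temp\port6175-before.txt
```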

If needed, I could return services to their original state and confirm the listener matched the baseline.

Root cause in my case

The Threat Hunter service was not listening on 6175 when the installer checked service health. Starting the service and confirming the listener resolved the failure and allowed the upgrade to continue.


Post-upgrade health checks

What I run right after the installer finishes
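One loop covers the three ports that matter here, followed by a service sweep:

```powershell
foreach ($port in 9501, 6175, 9443) {
    Test-NetConnection -ComputerName localhost -Port $port |
        Select-Object RemotePort, TcpTestSucceeded
}
Get-Service Veeam* | Sort-Object Status | Format-Table Name, Status
```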

How I read the results

  • 9501 should be True for Broker. If False, I start VeeamBrokerSvc and retest 9501.
  • 6175 should be True for Threat Hunter. If False, I start VeeamThreatHunterSvc and retest 6175.
  • 9443 should be True for Enterprise Manager if Enterprise Manager is installed on this server. If False on a box that hosts EM, I check the EM website and service bindings, then retest 9443.

If one of the ports is False, I use a fast fix and verify

Functional checks I do next

  • I open the VBR console and verify repositories, proxies, and jobs are all visible and healthy.
  • For vSphere, I run a Host Discovery or a small ad hoc backup to confirm end-to-end execution.

5 Error: Veeam ONE Reporting Service did not start until ONE was upgraded

This is not an upgrade error. I found it during my post-upgrade sanity checks. The service was down. I am adding this here so you know what to do and are aware of version compatibility.

Symptom I saw
After the VBR upgrade, the only remaining stopped service on this host was VeeamRSS, which is the Veeam ONE Reporting Service. The Veeam ONE server had not been upgraded yet.

What this means and why it happens

Veeam ONE must run a build that is compatible with the VBR version it monitors. When VBR is newer than Veeam ONE, the reporting and integration pieces can hold the service in a stopped state until ONE is brought to a compatible build.

How I proved it before changing anything

I validated the service state and checked recent events.
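The checks I ran (the event filter is a broad sketch; narrow it to the failure window as needed):

```powershell
Get-Service VeeamRSS
# Recent Veeam-related events in the Application log
Get-WinEvent -LogName Application -MaxEvents 100 |
    Where-Object { $_.ProviderName -like '*Veeam*' } |
    Select-Object TimeCreated, LevelDisplayName, Message
```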

Result
VeeamRSS was stopped, with messages indicating version and integration checks.

The fix I applied

I upgraded Veeam ONE to a build compatible with VBR 12.3. After the upgrade completed, I started the reporting service and verified that it bound and stayed running.

Verification after the fix

  • VeeamRSS reported Running and remained stable.
  • The VBR console indicated that the ONE integration was healthy.
  • Reports and dashboards populated without new errors.

If I do not use Veeam ONE on this host

If the host does not require Veeam ONE, I set the service to Manual or I remove the feature to avoid noise.

Root cause in my case

VBR was upgraded first, and Veeam ONE was still on an older build, so the reporting service did not start until I upgraded ONE to a compatible version. After that, the service started, and integration worked as expected.


Commands and snippets that saved me time

Tail the latest SuiteEngine log
I use this to jump straight to the newest installer log and watch the last 200 lines while I retry.
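As a snippet:

```powershell
$log = Get-ChildItem 'C:\ProgramData\Veeam\Setup\Temp\SuiteEngine_*.log' |
    Sort-Object LastWriteTime -Descending | Select-Object -First 1
Get-Content $log.FullName -Tail 200 -Wait
```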

Check the pivotal service logs
I read the most recent Broker and Backup service messages to see why a start failed or a port was not listening.
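For example (Svc.VeeamBackup.log is the file referenced earlier; the Broker log name is my assumption, so confirm it under C:\ProgramData\Veeam\Backup):

```powershell
Get-Content 'C:\ProgramData\Veeam\Backup\Svc.VeeamBackup.log' -Tail 50
Get-Content 'C:\ProgramData\Veeam\Backup\Svc.VeeamBroker.log' -Tail 50   # name may differ
```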

Start Broker and retest 9501
I start the Broker, give it a few seconds to bind, then confirm the listener.
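```powershell
Start-Service VeeamBrokerSvc
Start-Sleep -Seconds 10
Test-NetConnection -ComputerName localhost -Port 9501
```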

Start Threat Hunter and retest 6175
Same idea for Threat Hunter on its local port.
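```powershell
Start-Service VeeamThreatHunterSvc
Start-Sleep -Seconds 10
Test-NetConnection -ComputerName localhost -Port 6175
```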

PostgreSQL quick sanity
I restart the embedded Postgres and confirm it responds locally to a simple query.
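Assuming the default embedded instance and binary path:

```powershell
Restart-Service postgresql-x64-15
& 'C:\Program Files\PostgreSQL\15\bin\psql.exe' -h localhost -U postgres -c 'SELECT 1;'
```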

What I expect

  • 9501 True for Broker
  • 6175 True for Threat Hunter
  • 9443 True for Enterprise Manager if it is installed on this server

After that, I open the VBR console and check repositories, proxies, and jobs. For vSphere, I run a quick Host Discovery or a small ad hoc backup to confirm end-to-end health.


Why did it break during the upgrade, even though the Veeam Server was working properly?

Two practical causes showed up in my logs.

1) Service restart order
The upgrade stops many services for an extended period. If the Broker starts late or fails to rebind while the Backup service is already validating compatibility, the Backup service times out on 9501 and flags a failure to start.

I proved it by checking the status of both services and testing the port.

What fixed it: I started the Broker, waited a few seconds, rechecked 9501, then started the Backup service and clicked Retry in the installer.

2) Multi-NIC resolution
This server has a storage NIC. The logs often show the storage IP for local calls, which is normal. The critical check is whether 9501 is listening locally at all. Once the Broker is up on 9501, the Backup service starts immediately, regardless of which local IP appears in the log.

I proved it by searching for an actual listener on 9501 and confirming the ownership process.

If there was no listener, I started the Broker and verified the bind. If another process owned 9501, I identified it with netstat and stopped it before retrying.


Hardening and sanity tips (optional)

Reserve local ports that Veeam uses.
I keep 9501 for Broker and 6175 for Threat Hunter free from accidental ephemeral allocation so nothing transient squats on them during long upgrades. This prevents the OS from handing these ports out for outbound connections. It does not stop a service that explicitly binds to the port, but it removes a common source of conflicts.
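On Windows this can be done with a port exclusion; it only matters if the dynamic port range has been widened to cover these ports, and it assumes an elevated prompt:

```powershell
netsh int ipv4 add excludedportrange protocol=tcp startport=9501 numberofports=1
netsh int ipv4 add excludedportrange protocol=tcp startport=6175 numberofports=1
```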

Ensure outbound HTTPS for Threat Hunter signatures.
I verify that this host can reach update endpoints over 443. In restricted egress environments, I allow access to the required destinations or stage updates internally, then confirm connectivity.

Keep Veeam ONE aligned with VBR.
I keep Veeam ONE on a build that matches or is newer than VBR so reporting and integration services do not get stuck. If I upgrade VBR first, I plan to upgrade ONE immediately after and confirm that the VeeamRSS service starts.

Live tail the right logs during upgrades
I maintain a live view of SuiteEngine and the key services to pinpoint the root cause without guesswork.

With these in place, my upgrades remain predictable, my recovery steps are obvious, and I do not lose time chasing symptoms that a quick port check or a live tail would have immediately exposed.

Conclusion

I wrote this post to turn a frustrating upgrade into a simple, repeatable flow. My goal is to show exactly what I did, in the order I did it, with proof at every step, so the next time I or anyone else hits these errors, the fix takes minutes instead of hours.

My view after walking through all of this is that most Veeam upgrade failures are not mysterious. They come down to practical things I can test in seconds. The installer account is not mapped in PostgreSQL, a service starts at the wrong moment, a port is not listening, or a companion product is one version behind. The logs already tell the story if I watch the right files and test the right ports.

What I learned and will continue to use is straightforward. Capture the exact Windows identity and map it before retrying. Verify services and ports first, especially Broker on port 9501, Threat Hunter on port 6175, and Enterprise Manager on port 9443 when present. On multi-NIC servers, I do not panic when the storage IP shows up in logs; I only care that the listener exists locally. I keep Veeam ONE aligned with VBR, so reporting is not stuck. I tail the SuiteEngine and the key service logs during the change window because it removes guesswork. I also take quick backups of the PostgreSQL config files before edits, then restart and verify.

This post is not theory; it is a record of what worked for me. If I need to repeat this in six months, I will come back here, follow the same flow, and avoid rediscovering the same fixes.

Share this article if you think it is worth sharing. If you have any questions or comments, comment here or contact me on Twitter (yes, for me it is not X, it is still Twitter).

©2025 ProVirtualzone. All Rights Reserved
August 29th, 2025 | Backups Posts, Veeam

About the Author:

I have over 20 years of experience in the IT industry and have been working with virtualization for more than 15 years (mainly VMware). I recently obtained certifications including VCP DCV 2022, VCAP DCV Design 2023, and VCP Cloud 2023. I also hold VCP6.5-DCV and VMware vSAN Specialist, have been vExpert vSAN, vExpert NSX, and vExpert Cloud Provider for the last two years, a vExpert for the last 7 years, and I am an old MCP. My specialties are virtualization, storage, and virtual backup. I am a Solutions Architect in the VMware, Cloud, and Backup/Storage areas, employed by ITQ, a VMware partner, as a Senior Consultant. I am also a blogger, the owner of ProVirtualzone.com, and recently a book author.
