This week, I attempted to upgrade my Veeam server from version 12.3.0.310 to 12.3.2.3617 and encountered several issues. I saw other admins report the same problem, and some solved it by uninstalling and reinstalling Veeam Server. I did not want to rebuild a working server. My first blocker was PostgreSQL refusing the connection on localhost:5432 with the message that the connection was forcibly closed by the remote host. I resolved the issue by correcting the SSPI mapping, allowing the installer account to connect, and the upgrade proceeded.
This post, “Veeam 12.3 upgrade failed on PostgreSQL and core services — how I fixed it,” demonstrates how I completed the upgrade without needing to reinstall anything. I documented a real-world upgrade where the setup initially failed on the PostgreSQL mapping, and then was unable to start core services, including the Broker on port 9501, the Backup service, and Threat Hunter on port 6175. My goal is to show how I identified each issue, proved it with logs and port checks, applied a targeted fix, and verified the result until the installer completed cleanly.
This is a long post on purpose. I include the exact commands I ran, the log paths I tailed, the service states I checked, and the reasoning behind every change. If you are seeing the same errors, you can follow the same flow and avoid a reinstall. If your messages differ, you can still use the same method. Prove what is actually broken by reading the relevant log, testing the specific port, and then changing only what the evidence supports.
Environment highlights
- Single VBR server on Windows Server 2022 with multiple NICs for LAN and storage
- Local workgroup admin account
- Built-in PostgreSQL 15 for the VBR configuration database
Error 1: “Failed to connect to PostgreSQL 5432” (SSPI mapping)
Symptom (what I saw):
During the 12.3 upgrade, the Configuration Check failed with:
```
Failed to connect to PostgreSQL server localhost:5432.
An existing connection was forcibly closed by the remote host
```
What this usually means in practice:
My running Veeam services could communicate with PostgreSQL, but the upgrade process ran under my current Windows login, which was not authorized to connect to the embedded PostgreSQL instance. The updater account must be approved for the configuration database; otherwise, the check fails.
1) Before touching anything, tail the live installer log
The Veeam suite engine writes rotating logs here: C:\ProgramData\Veeam\Setup\Temp\SuiteEngine_*.log
Tail the newest one live during the upgrade:
```
Get-Content -Wait (Get-ChildItem 'C:\ProgramData\Veeam\Setup\Temp\SuiteEngine_*' -File |
  Sort-Object LastWriteTime -Descending |
  Select-Object -ExpandProperty FullName -First 1)
```
This shows which product the wizard is on and where it stalls, for example, the start of a service that times out and prompts for Retry or Cancel.
Step 0. Baseline to prove Postgres is up and avoid wild goose chases
Why I did this: I wanted to rule out network or listener issues so I could focus on authentication.
- I confirmed the service was running: Services console → postgresql-x64-15 → Status should be Running
- I confirmed the listener was active
```
Test-NetConnection localhost -Port 5432
```
What I expected: TcpTestSucceeded : True
If it is false, first fix the service or listener. If it is true, authentication is the prime suspect rather than networking.
Step 1. Get the exact identity PostgreSQL sees and do not guess
Why I did this: The mapping must match the identity string exactly. I extracted the username string from the PostgreSQL log and used it in the mapping in the exact format.
- I opened the newest log in:
```
C:\Program Files\PostgreSQL\15\data\log\
```
- I looked for a line like:
```
no match in usermap "veeam" for user "postgres" authenticated as "SOMETHING"
```
The value inside quotes is the Windows identity PostgreSQL received. It can be a UPN, such as Admin@DOMAIN, or a down-level name such as MACHINE\Administrator. I mapped that value verbatim.
Tip: I also ran in the same elevated session I used to launch the upgrade:
```
whoami
whoami /upn
```
This helped me predict the identity I would see in the log.
Step 2. Confirm the auth method and the map being used
Why I did this: Veeam’s embedded PostgreSQL on Windows uses SSPI with a username map. I confirmed both ends. pg_hba.conf sets the auth method and the map. pg_ident.conf holds the actual map entries.
- I opened:
1 2 3 4 | C:\Program Files\PostgreSQL\15\data\pg_hba.conf |
- I verified localhost entries used SSPI and the veeam map, for example:
```
host    all    all    127.0.0.1/32    sspi    map=veeam
host    all    all    ::1/128         sspi    map=veeam
```
If the file already appears this way, I proceed.
Step 3. Fix the mapping in pg_ident.conf
Why I did this: This is the crux. I added the exact identity from Step 1 to the veeam map so the upgrade account could connect as the PostgreSQL database user.
- I opened:
```
C:\Program Files\PostgreSQL\15\data\pg_ident.conf
```
- I appended a line using the exact identity. Example for a local admin:
```
veeam    "veeambackup12\Administrator"    postgres
```
If my log showed a different string, for example Administrator@VEEAM-V12BETA or DOMAIN\AdminUser, I used that exact string instead. Case, slashes, and quotes matter. The map ties an external Windows identity to a PostgreSQL role, and map=veeam in pg_hba.conf activates it.
I reloaded or restarted PostgreSQL.
Services → postgresql-x64-15 → Restart
I could also use pg_ctl reload, but a restart is the simplest option on Windows.
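For reference, the reload alternative looks like this. A minimal sketch, assuming the default embedded-instance paths used elsewhere in this post; adjust the binary and -D data directory paths if yours differ:

```
# Reload PostgreSQL configuration without dropping active connections.
# pg_hba.conf and pg_ident.conf changes are picked up on reload.
& "C:\Program Files\PostgreSQL\15\bin\pg_ctl.exe" reload -D "C:\Program Files\PostgreSQL\15\data"
```

A reload avoids interrupting running Veeam services that hold database connections, which is why some admins prefer it over a full restart.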
Why this works: SSPI is a Windows single sign-on protocol. PostgreSQL receives the Windows identity and consults the Veeam map to see if that identity can be mapped to PostgreSQL. If the identity is not listed, the connection is rejected.
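To make the chain concrete, here is how the two files work together. The identity strings below are the examples from this post; the value you map must be the exact string from your own PostgreSQL log:

```
# pg_hba.conf — localhost connections authenticate via SSPI and consult the "veeam" map
host    all    all    127.0.0.1/32    sspi    map=veeam
host    all    all    ::1/128         sspi    map=veeam

# pg_ident.conf — the "veeam" map ties incoming Windows identities to the postgres role
veeam    "NT AUTHORITY\SYSTEM"            postgres
veeam    "veeambackup12\Administrator"    postgres
```

Any Windows identity not listed in the map is rejected, even if the account is a local administrator.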
Step 4. Re-run the upgrade and verify
What I expected now: The Configuration Check would no longer fail on the database connection. If it passes and the installation proceeds, that would confirm that an authentication mismatch was the cause.
If it still fails, these are the fast iterations I used:
- I rechecked the newest PostgreSQL log after the retry. Sometimes the installer spawns with a slightly different identity string, for example, UPN versus DOMAIN\User. If I saw a new authenticated as "X" value, I added another line for that identity and restarted PostgreSQL. This issue often appears after a host or domain rename, and mapping the new string resolves it.
- Windows Script Host disabled, a scenario that applies to 12.3. If the error shifted to the PostgreSQL package deployment failing during the upgrade, I checked whether a hardening baseline had disabled Windows Script Host. For 12.3, WSH must be enabled during setup. I re-enabled it, completed the upgrade, then re-applied my baseline.
- Last resort, only if the mapping route does not work. I backed up my Veeam configuration, uninstalled VBR, and removed the embedded PostgreSQL instance. I then performed a fresh installation and restored the configuration. This clears stubborn state left when the instance was installed under a user that no longer exists or cannot be resolved. I only consider this after exhausting mapping fixes.
Root cause analysis and why this happened in my case
- My Veeam services had a valid mapping, such as NT AUTHORITY\SYSTEM, so daily operations were fine.
- I ran the upgrade under a login that was not present in pg_ident.conf. PostgreSQL rejected it, and the Configuration Check failed.
- The fix was to add a mapping for the exact Windows identity that launched the installer.
Verification checklist I used after the fix
- The upgrade Configuration Check passed the database step.
- There were no new no match in usermap "veeam" lines in the newest PostgreSQL log.
- Veeam services started cleanly after the upgrade, and jobs ran as expected.
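A quick way to scan for fresh usermap rejections without opening the log by hand. A sketch; the log file name pattern assumes the default data directory from this post:

```
# Grep the newest PostgreSQL log for usermap rejections
$log = Get-ChildItem 'C:\Program Files\PostgreSQL\15\data\log\postgresql-*.log' |
  Sort-Object LastWriteTime -Descending | Select-Object -First 1
Select-String -Path $log.FullName -Pattern 'no match in usermap'
```

No output means no new rejections since the fix.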
Rollback plan for safe edits
- Before editing, I copied pg_ident.conf to pg_ident.conf.bak and pg_hba.conf to pg_hba.conf.bak.
- If anything went wrong, I restored the backups and restarted the PostgreSQL service.
Good practices I will follow to avoid this next time
- I will run upgrades with the same Windows account that installed Veeam originally, or I will ensure that the account is mapped in pg_ident.conf. Running as SYSTEM via PsExec is also possible, but mapping the installer identity is the clearest approach.
- I will document the identity strings I have mapped, in both UPN and DOMAIN\User forms, in my runbook.
- After a host or domain rename, I will immediately update pg_ident.conf to reflect the new identity.
- If I use security baselines that disable Windows Script Host, I will enable it during the setup stage that requires it, then reapply the baseline.
What I changed in my case, as a concrete example
- I captured my login identity:

```
whoami
veeambackup12\Administrator
```

- I added this line to pg_ident.conf:

```
veeam    "veeambackup12\Administrator"    postgres
```

- I restarted PostgreSQL, re-ran the installer, and the Configuration Check passed.
Sources I consulted after I fixed it
- Veeam knowledge articles about updater configuration checks and permissions
- Veeam knowledge for SSPI authentication errors during upgrades
- PostgreSQL documentation on SSPI and user name maps on Windows
- Notes from the Veeam community about updating mappings after hostname changes
- Veeam notes about Windows Script Host being required during the 12.3 upgrade stage
Error 2: “SSPI authentication for user postgres failed”
After I fixed the above issue, I encountered another issue: SSPI authentication for user postgres failed.
The error is in German, but is similar to the previous one.
Symptom
During the upgrade, I saw a dialog reporting:
```
SSPI-Authentifizierung für Benutzer „postgres“ fehlgeschlagen
```
In English: “SSPI authentication for user ‘postgres’ failed.”
What this told me
The installer was now reaching PostgreSQL on port 5432, but my Windows identity still did not match any entry in the PostgreSQL user name map. This is an authentication mapping problem, not a network or service problem.
1) I verified the authentication rules that PostgreSQL applies
Why I did this
PostgreSQL uses pg_hba.conf to decide the auth method and which user map to consult. If sspi map=veeam is not applied to localhost, the mapping in pg_ident.conf will never be checked.
What I checked
I opened:
```
C:\Program Files\PostgreSQL\15\data\pg_hba.conf
```
I confirmed these lines were present and active, with no earlier conflicting host rules above them:
```
host    all    all    127.0.0.1/32    sspi    map=veeam
host    all    all    ::1/128         sspi    map=veeam
```
These entries were correct, which meant I needed to fix the identity mapping itself.
2) I added the exact Windows identity to the user map
Why I did this
PostgreSQL must see an exact match between the incoming SSPI identity and an entry in pg_ident.conf. Earlier, I had added veeambackup12\Administrator for the first error. The second error meant I needed to make sure every required identity was mapped, especially the one used by the running installer session.
What I changed
I opened:
```
C:\Program Files\PostgreSQL\15\data\pg_ident.conf
```
I ensured I had the exact identity line for my session:
```
veeam    "veeambackup12\Administrator"    postgres
```
I also kept the LocalSystem mapping, since Veeam services often connect as SYSTEM:
```
veeam    "NT AUTHORITY\SYSTEM"    postgres
```
I saved the file.
3) I reloaded PostgreSQL to apply the change
Why I did this
pg_ident.conf changes only take effect after a reload or restart.
What I ran
```
Restart-Service -Name "postgresql-x64-15" -Force
```
4) I proved SSPI works from my current session before retrying the installer
Why I did this
I wanted a quick proof that my identity could become the postgres role using SSPI, without waiting for the installer.
What I ran
```
& "C:\Program Files\PostgreSQL\15\bin\psql.exe" -U postgres -h 127.0.0.1 -c "select version();"
```
Expected result
PostgreSQL prints its version in one row. I saw the version string, which confirmed that SSPI mapping was fixed for my session.
5) I re-ran the upgrade and watched the right logs in real time
Why I did this
If anything failed again, I wanted the error lines immediately, so I tailed the suite engine log and the PostgreSQL log while the Configuration Check and component installs were running.
Suite engine live tail
```
Get-Content -Wait (Get-ChildItem 'C:\ProgramData\Veeam\Setup\Temp\SuiteEngine_*' -File |
  Sort-Object LastWriteTime -Descending |
  Select-Object -ExpandProperty FullName -First 1)
```
PostgreSQL live tail
```
Get-Content -Wait (Get-ChildItem 'C:\Program Files\PostgreSQL\15\data\log\postgresql-*.log' |
  Sort-Object LastWriteTime -Descending |
  Select-Object -ExpandProperty FullName -First 1)
```
Installer log quick tail when needed
```
$f=(Get-ChildItem 'C:\ProgramData\Veeam\Setup\Temp\*' -File |
  Sort-Object LastWriteTime -Descending |
  Select-Object -ExpandProperty FullName -First 1); $f; Get-Content -Tail 200 $f
```
Result
The Configuration Check moved past the database step. The suite continued with Enterprise Manager, then Backup and Replication. The logs showed normal progress and final success codes.
6) Why did this happen in my case
- The PostgreSQL listener and service were functioning properly, so the first failure was not related to connectivity.
- My upgrade session identity was not mapped to the postgres role in pg_ident.conf, so PostgreSQL rejected the SSPI handshake.
- Adding the exact Windows identity resolved the authentication path used by the installer.
7) My verification checklist after the upgrade
- All Veeam services were running:
```
Get-Service 'Veeam*' | Select Name,Status | Sort Name
```
- The console showed the expected 12.3 build under Help, About
- Enterprise Manager was reachable on https://localhost:9443 and reported connected servers and a valid license
- Repositories rescanned successfully; proxies and transport services validated under Properties → Apply
- A small test backup job completed, and Instant VM Recovery reached the final step without errors
8) Rollback safety for these edits
Before I edited, I backed up the auth files:
```
copy "C:\Program Files\PostgreSQL\15\data\pg_ident.conf" "C:\Program Files\PostgreSQL\15\data\pg_ident.conf.bak"
copy "C:\Program Files\PostgreSQL\15\data\pg_hba.conf" "C:\Program Files\PostgreSQL\15\data\pg_hba.conf.bak"
```
If I needed to revert, I would restore the backups and restart the PostgreSQL service.
9) Notes I will follow next time
- I will run the upgrade with the same Windows account that installed Veeam, or I will ensure that the account is mapped in pg_ident.conf beforehand.
- After any hostname or domain change, I will add new mappings that match the new identity string, as shown by whoami or by the PostgreSQL log.
- I will keep the SYSTEM mapping for service operations and maintain a map line for the account I actually use for upgrades.
The exact line that fixed this specific error for me
```
veeam    "veeambackup12\Administrator"    postgres
```
After all the fixes, the installation upgrade of Veeam Enterprise Manager was able to continue and finish properly.
Both errors are documented by Veeam; see KB4542.
Error 3: Veeam Backup Service failed to start because Broker on 9501 was down
With Enterprise Manager updated, I started the Veeam Backup and Replication upgrade. That is where I ran into a few errors.
Symptom I saw in SuiteEngine_*.log
```
[INFO] Service name: VeeamBackupSvc
[INFO] Starting service...
[ERROR] Failed to start VeeamBackupSvc service.
```
Evidence I found in C:\ProgramData\Veeam\Backup\Svc.VeeamBackup.log
```
No connection could be made because the target machine actively refused it 192.168.10.169:9501
...
tcp://VeeamBackup12:9501/BrokerService
...
```
What this means and why it happens
The Veeam Backup Service communicates with the Veeam Broker Service over TCP port 9501 on the local host. If the Broker is not running, not bound to a local address, or another process occupies the port, the Backup service startup fails. The “actively refused” message indicates that a listener problem occurred on port 9501 at the time BackupSvc attempted to connect.
How I proved the state before changing anything
I verified service status and the port listener.
```
# Check both services
Get-Service 'VeeamBrokerSvc','VeeamBackupSvc' | Select Name,Status

# Test the local listener on TCP 9501
Test-NetConnection -ComputerName localhost -Port 9501
```
Expected outcome when things are healthy:
- VeeamBrokerSvc is Running
- Test-NetConnection to 9501 returns TcpTestSucceeded: True
In my case, Broker was not listening on 9501, so BackupSvc could not connect.
The fix I applied
I started the Broker service, gave it a few seconds to bind, then verified the port and retried the installer.
```
# Start Broker and recheck the port
Start-Service VeeamBrokerSvc -ErrorAction Continue
Start-Sleep -Seconds 5
Test-NetConnection -ComputerName localhost -Port 9501

# If BackupSvc is Stopped, start it after Broker is healthy
Get-Service VeeamBackupSvc | Start-Service -ErrorAction Continue
```
When 9501 returned a success, I switched back to the upgrade wizard and selected ‘Retry’. The Veeam Backup Service then started, and the wizard continued.
If Broker still does not bind to 9501
I used these quick checks to find the cause.
```
# See if anything else is already bound to 9501
Get-NetTCPConnection -LocalPort 9501 -State Listen -ErrorAction SilentlyContinue

# Fallback on older hosts without Get-NetTCPConnection
netstat -ano | findstr :9501

# Check Windows Firewall state for the service process if the port is not reachable
Get-NetFirewallProfile | Select-Object Name,Enabled
```
- If another process was using 9501, I identified it by PID from netstat output and stopped or reconfigured that process, then restarted VeeamBrokerSvc.
- If no listener appeared after starting Broker, I checked the Windows System and Application event logs around the same timestamp for service start failures and dependency issues, then restarted Broker again.
Note about the storage NIC and the IP shown in logs
My logs referenced the storage NIC address, for example, 192.168.10.169, which is normal on a multi-homed server. For local Broker communications, the key requirement is that a listener exists on TCP 9501 on the host. It does not need to be bound only to one specific interface as long as the local RPC endpoint can reach the Broker listener. If I hard bind services to particular interfaces, I make sure 127.0.0.1 and the primary management IP can still reach port 9501.
Verification after the fix
- Test-NetConnection localhost -Port 9501 returned success.
- Get-Service VeeamBrokerSvc,VeeamBackupSvc showed both services running.
- The installer's Retry succeeded, and the wizard proceeded.
- No new connection refused entries appeared in Svc.VeeamBackup.log.
Rollback safety
Before stopping or changing anything, I captured a quick baseline:
```
Get-Service 'VeeamBrokerSvc','VeeamBackupSvc' | Select Name,Status
Get-NetTCPConnection -LocalPort 9501 -State Listen -ErrorAction SilentlyContinue
```
If I had needed to revert, I would have returned services to their original states and rechecked the port listener to match the baseline.
Root cause in my case
The Broker service was not listening on 9501 at the moment the Backup service attempted to start. Starting the Broker and confirming the listener resolved the failure and allowed the upgrade to continue.
Error 4: Threat Hunter would not start on 6175
Symptom I saw in SuiteEngine_*.log
```
[INFO] Service name: VeeamThreatHunterSvc
[INFO] Starting service...
[ERROR] Failed to start VeeamThreatHunterSvc service.
```
Service log location I checked
C:\ProgramData\Veeam\Backup\Svc.VeeamThreatHunter.log
Typical healthy lines I expect in that log
```
[CSrvTcpChannelRegistration] Registering TCP server channel [avsvc] ...
Port: [6175]
Started listening...
```
What this means and why it happens
The Threat Hunter service exposes a local TCP listener on 6175. If the service does not bind to 6175, or if another process already owns that port, the service start will fail and the installer will stop at the configuration check.
How I proved the state before changing anything
I verified the service status, then confirmed whether 6175 was listening.
```
# Check service status
Get-Service VeeamThreatHunterSvc | Select Name,Status

# Check the listener on TCP 6175
Test-NetConnection -ComputerName localhost -Port 6175
```
Expected when healthy
- VeeamThreatHunterSvc is Running
- Test-NetConnection to 6175 returns TcpTestSucceeded: True
In my case, the listener was not present, so the service had failed to bind to the port.
The fix I applied
I started the service, waited a few seconds for the listener to appear, then verified the port and the detailed service state.
```
Start-Service VeeamThreatHunterSvc -ErrorAction Continue
Start-Sleep -Seconds 5

# Verify detailed service properties
Get-CimInstance Win32_Service -Filter "Name='VeeamThreatHunterSvc'" |
  Select Name,State,Status,StartMode,StartName,PathName

# Verify the listener and owning process
Get-NetTCPConnection -LocalPort 6175 -ErrorAction SilentlyContinue |
  Select LocalAddress,LocalPort,State,OwningProcess

# Simple connectivity check
Test-NetConnection -ComputerName localhost -Port 6175
```
When 6175 returned a success, I switched back to the installer and selected ‘Retry’. The wizard resumed and continued.
If the service still does not bind to 6175
I used these quick checks to find the cause.
```
# See if another process already owns 6175
netstat -ano | findstr :6175

# If I get a PID from netstat, map it to a process
Get-Process -Id <PID>

# Check recent service start errors in the Windows logs
Get-WinEvent -FilterHashtable @{LogName='Application'; StartTime=(Get-Date).AddMinutes(-10)} |
  Where-Object {$_.Message -match 'VeeamThreatHunterSvc' -or $_.Message -match '6175'} |
  Select TimeCreated,Id,ProviderName,LevelDisplayName,Message
```
- If a different process had taken 6175, I stopped or reconfigured that process, then started VeeamThreatHunterSvc again.
- If the service started, then stopped, I checked its log for binding or initialization errors and the Application and System event logs for dependency failures, then retried the start.
Threat Hunter also updates signatures over HTTPS on 443 to vendor endpoints. In egress-restricted environments, I allow those destinations or I stage updates internally. Signature download issues do not block the 6175 listener, but they can cause repeated restarts later, so I verify outbound 443 once the service is running.
Verification after the fix
- Test-NetConnection localhost -Port 6175 returned success
- Get-Service VeeamThreatHunterSvc showed Running
- The installer Retry succeeded, and the wizard proceeded
- Svc.VeeamThreatHunter.log showed the channel registration and "Started listening" lines without new errors
Rollback safety
Before making any changes, I captured a baseline.
```
Get-Service VeeamThreatHunterSvc | Select Name,Status
Get-NetTCPConnection -LocalPort 6175 -State Listen -ErrorAction SilentlyContinue
```
If needed, I could return services to their original state and confirm the listener matched the baseline.
Root cause in my case
The Threat Hunter service was not listening on 6175 when the installer checked service health. Starting the service and confirming the listener resolved the failure and allowed the upgrade to continue.
Post-upgrade health checks
What I run right after the installer finishes
```
"`n=== Broker ThreatHunter EM ports ==="
foreach($p in 9501,6175,9443){
  $t = Test-NetConnection localhost -Port $p
  "{0}: TcpTestSucceeded={1}" -f $p,$t.TcpTestSucceeded
}

"=== Veeam services not running ==="
Get-Service 'Veeam*' | Where-Object {$_.Status -ne 'Running'} |
  Select-Object Name,Status | Sort-Object Name
```
How I read the results
- 9501 should be True for Broker. If False, I start VeeamBrokerSvc and retest 9501.
- 6175 should be True for Threat Hunter. If False, I start VeeamThreatHunterSvc and retest 6175.
- 9443 should be True for Enterprise Manager if Enterprise Manager is installed on this server. If False on a box that hosts EM, I check the EM website and service bindings, then retest 9443.
If one of the ports is False, I use a fast fix and verify
```
# Broker
Start-Service VeeamBrokerSvc -ErrorAction Continue

# Threat Hunter
Start-Service VeeamThreatHunterSvc -ErrorAction Continue

Start-Sleep -Seconds 5

# Recheck listeners
9501,6175,9443 | ForEach-Object {
  "{0}: {1}" -f $_,(Test-NetConnection localhost -Port $_).TcpTestSucceeded
}
```
Functional checks I do next
- I open the VBR console and verify repositories, proxies, and jobs are all visible and healthy.
- For vSphere, I run a Host Discovery or a small ad hoc backup to confirm end-to-end execution.
Error 5: Veeam ONE Reporting Service did not start until ONE was upgraded
This is not an upgrade error. I found it during my post-upgrade sanity checks. The service was down. I am adding this here so you know what to do and are aware of version compatibility.
Symptom I saw
After the VBR upgrade, the only remaining stopped service on this host was VeeamRSS, which is the Veeam ONE Reporting Service. The Veeam ONE server had not been upgraded yet.
What this means and why it happens
Veeam ONE must run a build that is compatible with the VBR version it monitors. When VBR is newer than Veeam ONE, the reporting and integration pieces can hold the service in a stopped state until ONE is brought to a compatible build.
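To compare the installed builds without opening each console, I can list Veeam product versions from the registry. This is a generic sketch using the standard Windows uninstall keys, not a Veeam-specific interface:

```
# List installed Veeam products and their build numbers
$keys = 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\*',
        'HKLM:\SOFTWARE\WOW6432Node\Microsoft\Windows\CurrentVersion\Uninstall\*'
Get-ItemProperty $keys -ErrorAction SilentlyContinue |
  Where-Object { $_.DisplayName -like 'Veeam*' } |
  Select-Object DisplayName, DisplayVersion | Sort-Object DisplayName
```

If the Veeam ONE entries show an older build than VBR, the version mismatch explains the stopped reporting service.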
How I proved it before changing anything
I validated the service state and checked recent events.
```
# Service state
Get-Service VeeamRSS | Select Name,Status,StartType

# Recent Application log entries mentioning Veeam ONE or VeeamRSS
Get-WinEvent -FilterHashtable @{LogName='Application'; StartTime=(Get-Date).AddMinutes(-15)} |
  Where-Object { $_.Message -match 'Veeam ONE' -or $_.Message -match 'VeeamRSS' } |
  Select TimeCreated, Id, ProviderName, LevelDisplayName, Message
```
Result
VeeamRSS was stopped, with messages indicating version and integration checks.
The fix I applied
I upgraded Veeam ONE to a build compatible with VBR 12.3. After the upgrade completed, I started the reporting service and verified that it bound and stayed running.
```
Start-Service VeeamRSS -ErrorAction Continue
Start-Sleep -Seconds 5
Get-Service VeeamRSS | Select Name,Status,StartType
```
Verification after the fix
- VeeamRSS reported Running and remained stable.
- The VBR console indicated that the ONE integration was healthy.
- Reports and dashboards populated without new errors.
If I do not use Veeam ONE on this host
If the host does not require Veeam ONE, I set the service to Manual or I remove the feature to avoid noise.
```
# Set to Manual if I want the service present but not started automatically
Set-Service -Name VeeamRSS -StartupType Manual

# Or remove the feature through Programs and Features if Veeam ONE is not needed on this host
```
Root cause in my case
VBR was upgraded first, and Veeam ONE was still on an older build, so the reporting service did not start until I upgraded ONE to a compatible version. After that, the service started, and integration worked as expected.
Commands and snippets that saved me time
Tail the latest SuiteEngine log
I use this to jump straight to the newest installer log and watch the last 200 lines while I retry.
```
$f=(Get-ChildItem 'C:\ProgramData\Veeam\Setup\Temp\*' -File |
  Sort-Object LastWriteTime -Descending |
  Select-Object -ExpandProperty FullName -First 1); $f; Get-Content -Tail 200 $f
```
Check the pivotal service logs
I read the most recent Broker and Backup service messages to see why a start failed or a port was not listening.
```
Get-Content -Tail 200 'C:\ProgramData\Veeam\Backup\Svc.VeeamBackup.log'
Get-Content -Tail 200 'C:\ProgramData\Veeam\Backup\Svc.VeeamBroker.log'
```
Start Broker and retest 9501
I start the Broker, give it a few seconds to bind, then confirm the listener.
```
Start-Service VeeamBrokerSvc -ErrorAction Continue; Start-Sleep -Seconds 5; Test-NetConnection localhost -Port 9501
```
Start Threat Hunter and retest 6175
Same idea for Threat Hunter on its local port.
```
Start-Service VeeamThreatHunterSvc -ErrorAction Continue; Start-Sleep -Seconds 5; Test-NetConnection localhost -Port 6175
```
PostgreSQL quick sanity
I restart the embedded Postgres and confirm it responds locally to a simple query.
```
Restart-Service -Name 'postgresql-x64-15' -Force
& 'C:\Program Files\PostgreSQL\15\bin\psql.exe' -U postgres -h 127.0.0.1 -c 'select version();'
```
What I expect
- 9501 True for Broker
- 6175 True for Threat Hunter
- 9443 True for Enterprise Manager if it is installed on this server
After that, I open the VBR console and check repositories, proxies, and jobs. For vSphere, I run a quick Host Discovery or a small ad hoc backup to confirm end-to-end health.
Why did it break during the upgrade, even though the Veeam Server was working properly?
Two practical causes showed up in my logs.
1) Service restart order
The upgrade stops many services for an extended period. If the Broker starts late or fails to rebind while the Backup service is already validating compatibility, the Backup service times out on 9501 and the setup flags a failure to start.
I proved it by checking the status of both services and testing the port.
```
Get-Service VeeamBrokerSvc,VeeamBackupSvc | Select Name,Status
Test-NetConnection localhost -Port 9501
```
What fixed it: I started the Broker, waited a few seconds, rechecked 9501, then started the Backup service and clicked Retry in the installer.
```
Start-Service VeeamBrokerSvc
Start-Sleep -Seconds 5
Test-NetConnection localhost -Port 9501
Start-Service VeeamBackupSvc
```
2) Multi-NIC resolution
This server has a storage NIC. The logs often show the storage IP for local calls, which is normal. The critical check is whether 9501 is listening locally at all. Once the Broker is up on 9501, the Backup service starts immediately, regardless of which local IP appears in the log.
I proved it by searching for an actual listener on 9501 and confirming the owning process.
```
Get-NetTCPConnection -LocalPort 9501 -ErrorAction SilentlyContinue |
  Select LocalAddress,State,OwningProcess
Test-NetConnection localhost -Port 9501
```
If there was no listener, I started the Broker and verified the bind. If another process owned 9501, I identified it with netstat and stopped it before retrying.
Hardening and sanity tips (optional)
Reserve local ports that Veeam uses.
I keep 9501 for Broker and 6175 for Threat Hunter free from accidental ephemeral allocation so nothing transient squats on them during long upgrades. This prevents the OS from handing these ports out for outbound connections. It does not stop a service that explicitly binds to the port, but it removes a common source of conflicts.
```
# Show current exclusions
netsh int ipv4 show excludedportrange protocol=tcp

# Exclude 9501 and 6175 from the dynamic pool
netsh int ipv4 add excludedportrange protocol=tcp startport=9501 numberofports=1
netsh int ipv4 add excludedportrange protocol=tcp startport=6175 numberofports=1
```
Ensure outbound HTTPS for Threat Hunter signatures.
I verify that this host can reach update endpoints over 443. In restricted egress environments, I allow access to the required destinations or stage updates internally, then confirm connectivity.
```
# Replace with your update proxy or target
Test-NetConnection -ComputerName <update-endpoint-or-proxy> -Port 443
```
Keep Veeam ONE aligned with VBR.
I keep Veeam ONE on a build that matches or is newer than VBR so reporting and integration services do not get stuck. If I upgrade VBR first, I plan to upgrade ONE immediately after and confirm that the VeeamRSS service starts.
Live tail the right logs during upgrades
I maintain a live view of SuiteEngine and the key services to pinpoint the root cause without guesswork.
```
# Live tail the newest SuiteEngine log
$f = Get-ChildItem 'C:\ProgramData\Veeam\Setup\Temp\*' -File |
  Sort-Object LastWriteTime -Descending | Select-Object -First 1
$f.FullName
Get-Content -Wait -Tail 50 $f.FullName

# Live tail pivotal service logs
# (run each tail in its own PowerShell window, since -Wait blocks)
Get-Content -Wait -Tail 50 'C:\ProgramData\Veeam\Backup\Svc.VeeamBroker.log'
Get-Content -Wait -Tail 50 'C:\ProgramData\Veeam\Backup\Svc.VeeamBackup.log'
Get-Content -Wait -Tail 50 'C:\ProgramData\Veeam\Backup\Svc.VeeamThreatHunter.log'
```
With these in place, my upgrades remain predictable, my recovery steps are obvious, and I do not lose time chasing symptoms that a quick port check or a live tail would have immediately exposed.
Conclusion
I wrote this post to turn a frustrating upgrade into a simple, repeatable flow. My goal is to show exactly what I did, in the order I did it, with proof at every step, so the next time I or anyone else hits these errors, the fix takes minutes instead of hours.
My view after walking through all of this is that most Veeam upgrade failures are not mysterious. They come down to practical things I can test in seconds. The installer account is not mapped in PostgreSQL, a service starts at the wrong moment, a port is not listening, or a companion product is one version behind. The logs already tell the story if I watch the right files and test the right ports.
What I learned and will continue to use is straightforward. Capture the exact Windows identity and map it before retrying. Verify services and ports first, especially Broker on port 9501, Threat Hunter on port 6175, and Enterprise Manager on port 9443 when present. On multi-NIC servers, I do not panic when the storage IP shows up in logs; I only care that the listener exists locally. I keep Veeam ONE aligned with VBR, so reporting is not stuck. I tail the SuiteEngine and the key service logs during the change window because it removes guesswork. I also take quick backups of the PostgreSQL config files before edits, then restart and verify.
This post is not theory; it is a record of what worked for me. If I need to repeat this in six months, I will come back here, follow the same flow, and avoid rediscovering the same fixes.
Share this article if you think it is worth sharing. If you have any questions or comments, leave one here or contact me on Twitter (yes, for me it is not X, it is still Twitter).