Insufficient resources to satisfy vSphere HA failover level on cluster

Configuring vSphere HA (High Availability) is not overly complex, but it is essential to follow some guidelines. Failing to do so can leave vSphere HA misconfigured and trigger the error message “Insufficient resources to satisfy vSphere HA failover level on cluster.”

Even if sufficient resources are available for host failovers, a misconfiguration in vSphere HA may cause the “Insufficient resources to satisfy vSphere HA failover level on cluster” error.


In the example above, there are 10 ESXi hosts, 1.25TB of free memory (more than 50% of the cluster's memory), and only 5% CPU usage, and still we get this warning. That should be more than enough resources and ESXi hosts to handle any host failure.

Checking vSphere HA – Configuration Issues in the Monitor tab, we see a warning about an incorrect HA configuration.


Next, we need to check what is wrong with our configuration.

The following images show that the “Failures and Responses” settings have a typical setup. There is nothing special in this section: VMs will be powered off and restarted on one of the available ESXi hosts. Proactive HA is disabled.


Therefore, the issue must be in the Admission Control option.

What is vSphere HA Admission Control?

vSphere HA Admission Control is a vCenter feature that ensures enough resources are available inside a cluster to handle a host failover. It guarantees that, if one or more hosts fail, the remaining hosts can power on the affected VMs and honor their resource reservations.

The basis for vSphere HA admission control is how many host failures your cluster can tolerate and still guarantee failover. The host failover capacity can be set in three ways:

  • Cluster resource percentage
  • Slot policy
  • Dedicated failover hosts

The Host Failures Cluster Tolerates policy (Slot Policy) uses slots to perform Admission Control.

  • It is based on the slot size: the largest CPU reservation + the largest memory reservation among all powered-on VMs = 1 slot.
  • Each host's resources / slot size = the number of slots that host can hold.
  • Admission Control then compares the resulting failover capacity with the configured failover level.

In summary, HA:
– Calculates the slot size.
– Determines how many slots each host in the cluster can hold.
– Calculates the Current Failover Capacity of the cluster.
– Compares this to the Configured Failover Capacity (see the sketch below).
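To make these steps concrete, here is a minimal Python sketch of the Slot Policy math described above. It is only an illustration of the calculation, not a VMware tool, and it ignores details such as default slot sizes, VM memory overhead, and advanced HA options.

```python
# Minimal sketch of the Slot Policy calculation; illustrative only.
# Ignores default slot sizes, VM memory overhead and advanced HA options.

def slot_size(powered_on_vms):
    """Slot size = largest CPU reservation + largest memory reservation
    among all powered-on VMs."""
    cpu_mhz = max(vm["cpu_reservation_mhz"] for vm in powered_on_vms)
    mem_mb = max(vm["mem_reservation_mb"] for vm in powered_on_vms)
    return cpu_mhz, mem_mb

def slots_per_host(host, slot_cpu, slot_mem):
    """A host holds as many slots as its scarcest resource allows."""
    return min(host["cpu_mhz"] // slot_cpu, host["mem_mb"] // slot_mem)

def current_failover_capacity(hosts, powered_on_vms):
    """How many hosts can fail while every powered-on VM still gets a slot."""
    slot_cpu, slot_mem = slot_size(powered_on_vms)
    capacities = sorted((slots_per_host(h, slot_cpu, slot_mem) for h in hosts),
                        reverse=True)
    needed = len(powered_on_vms)          # one slot per powered-on VM
    available = sum(capacities)
    failures = 0
    # Remove the largest hosts first (worst case) while the VMs still fit.
    for cap in capacities:
        if available - cap >= needed:
            available -= cap
            failures += 1
        else:
            break
    return failures
```

The warning appears when the Current Failover Capacity calculated this way drops below the configured number of host failures to tolerate.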

Check HERE for more details on how the Slot Policy works and how it is calculated.

As we have seen in the above images, Admission Control has three control policies.

  • Specify Failover Hosts Admission Control Policy
    Based on dedicated hosts and pool reservations.
  • Percentage of Cluster Resources Reserved Admission Control Policy
    Based on CPU and memory reservations (resource pool reservations are not taken into account).
  • Host Failures Cluster Tolerates Admission Control Policy
    Based on the powered-on VM Slot Policy (considering all powered-on VMs and the CPU and memory used or reserved in pools).

You can read more about vSphere HA Admission Control in the VMware knowledge base.
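If you prefer to check which Admission Control policy a cluster is using from a script instead of the vSphere Client, a small pyVmomi sketch along these lines can help. It assumes pyVmomi is installed, that si is an already-connected ServiceInstance, and the cluster name is just an example; treat it as a starting point rather than a finished tool.

```python
# Sketch: inspect a cluster's HA Admission Control policy with pyVmomi.
# Assumes 'si' is an already-connected vim.ServiceInstance.
from pyVmomi import vim

def find_cluster(si, name):
    """Return the first cluster object with the given name."""
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    try:
        return next(c for c in view.view if c.name == name)
    finally:
        view.DestroyView()

cluster = find_cluster(si, "Production-Cluster")   # example cluster name
policy = cluster.configurationEx.dasConfig.admissionControlPolicy

if isinstance(policy, vim.cluster.FailoverLevelAdmissionControlPolicy):
    print("Slot Policy, host failures tolerated:", policy.failoverLevel)
elif isinstance(policy, vim.cluster.FailoverResourcesAdmissionControlPolicy):
    print("Cluster resource percentage: CPU {}% / Memory {}%".format(
        policy.cpuFailoverResourcesPercent, policy.memoryFailoverResourcesPercent))
elif isinstance(policy, vim.cluster.FailoverHostAdmissionControlPolicy):
    print("Dedicated failover hosts:", [h.name for h in policy.failoverHosts])
```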

Admission Control

In this example, Host failures cluster tolerates is set to 1, and the host failover capacity is defined by “Slot Policy (Powered-on VMs)”.

For a cluster with 10 vSphere hosts, this should be more than enough resources for failover.


In this case, the issue was in the option above.

When you select Slot Policy (Powered-on VMs), Admission Control calculates resources and VM slots. If you have very large VMs or big reservation pools, the slot size will also be large and may not fit in the host failover capacity (especially if the cluster is sized to tolerate only one or two host failures).

In our specific scenario, we have several large VMs with memory sizes ranging from 64 to 128GB. We also have Reservation Pools allocated for these VMs or groups of VMs. However, the Slot Policy calculation reveals that certain reservations cannot guarantee the availability of the reserved resources in case of a host failure. This discrepancy arises due to variations in CPU and memory sizes across different ESXi hosts.
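Plugging illustrative numbers (not the exact values from this cluster) into the earlier Slot Policy sketch shows how a single large reservation collapses the failover capacity:

```python
# Illustrative numbers only: one VM with a 128GB memory reservation forces a
# 128GB slot, so a 512GB host holds just 4 slots.
hosts = [{"cpu_mhz": 60_000, "mem_mb": 512 * 1024} for _ in range(10)]
vms = ([{"cpu_reservation_mhz": 1000, "mem_reservation_mb": 128 * 1024}] +
       [{"cpu_reservation_mhz": 500, "mem_reservation_mb": 4 * 1024} for _ in range(60)])

print(slot_size(vms))                          # (1000, 131072) -> a huge slot
print(current_failover_capacity(hosts, vms))   # 0 -> 61 VMs do not fit in 40 slots
```

With only 40 slots for 61 powered-on VMs, the Current Failover Capacity is 0, which is exactly the situation in which vSphere HA raises the “Insufficient resources to satisfy vSphere HA failover level on cluster” warning.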

The solution here is to change the Admission Control policy to the Cluster Resources Percentage policy and, to be safe, reserve a percentage of CPU and memory for host failover.
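Unlike the Slot Policy, the percentage policy does not care about the single largest reservation; it only checks that, after subtracting the reservations of all powered-on VMs, the configured share of cluster capacity is still free. A simplified sketch of that check (real vSphere HA also accounts for VM overhead and for hosts that are unavailable):

```python
# Simplified view of the "Cluster resource percentage" admission check.
def percentage_admission_ok(total_capacity, reservations, reserved_percent):
    """Admission passes while the free share of the cluster stays at or
    above the percentage reserved for failover."""
    free = total_capacity - sum(reservations)
    return (free / total_capacity) * 100 >= reserved_percent

# Example: 5TB of cluster memory, a few large reservations, 25% reserved.
mem_total_gb = 5 * 1024
mem_reservations_gb = [128, 128, 96, 64, 64]
print(percentage_admission_ok(mem_total_gb, mem_reservations_gb, 25))   # True
```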

We will reserve 25% of the CPU and Memory of this Cluster for the failover.
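The change can be made in the vSphere Client as shown above, or scripted. Below is a pyVmomi sketch that switches the cluster to the percentage policy with 25% CPU and 25% memory reserved; it reuses the cluster object from the earlier sketch and should be reviewed before running against a production cluster.

```python
# Sketch: switch Admission Control to "Cluster resource percentage" (25% / 25%).
# 'cluster' is the vim.ClusterComputeResource retrieved in the earlier sketch.
from pyVmomi import vim

spec = vim.cluster.ConfigSpecEx(
    dasConfig=vim.cluster.DasConfigInfo(
        admissionControlEnabled=True,
        admissionControlPolicy=vim.cluster.FailoverResourcesAdmissionControlPolicy(
            cpuFailoverResourcesPercent=25,
            memoryFailoverResourcesPercent=25)))

task = cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)
# Wait for 'task' to complete (for example with pyVim.task.WaitForTask) and
# then re-check the Monitor tab for HA configuration issues.
```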

After changing the Admission Control policy, vSphere HA is all green without any warnings. As we can see in the following image, we have 1.25TB of free memory in the cluster with a 25% reserve for failover. The cluster's DRS balance is also in good condition.


For the final test, I put two of the cluster's ESXi hosts in maintenance mode to check whether vSphere HA would show any warning about a lack of resources. As we can see in the following image, there are no warnings, we still have 787GB of free memory, and the 25% of resources reserved for failover is still honored.


With this policy change in vSphere HA Admission Control, I fixed the error “Insufficient resources to satisfy vSphere HA failover level on cluster” in the vSphere HA cluster.

Conclusion

As shown above, the vSphere HA Admission Control policy can be set in three different ways, and we need to find the one that best fits our virtual infrastructure: the size of the VMs and the amount of resources in the cluster, but also the amount of reserved resources in the cluster.

vSphere HA is a feature that is not easy to handle, and it has not always been flawless. We all remember the struggles we had with this feature across vSphere versions. So it is essential to know your infrastructure and configure vSphere HA taking into account the existing resources and the level of reserved resources used in the cluster.

I have written several articles on the Vembu BDRSuite blog site regarding this subject in my VMware for Beginners blog series. Please have a look to learn more about vSphere HA, Admission Control, etc.

By following the previous articles, you will gain a comprehensive understanding of how High Availability (HA) functions and acquire the necessary guidance to configure and manage your vSphere HA effectively.

Note: This article was updated on 08/06/2023

Share this article if you think it is worth sharing. If you have any questions or comments, comment here, or contact me on Twitter.


