Resource Pools are often misunderstood, disliked, and untrusted by vSphere Administrators. However, resource pools can be very useful tools for administrators who want to apply resource management settings without having to configure each VM individually, which makes their proper usage worth exploring.
This series of posts examines several scenarios based on actual customer implementations where resource pools were expected to be useful. Some scenarios describe examples of poorly configured pools, and others describe examples of resource pools that were well-configured to obtain a desired result.
Resource Pool – Basic Concepts
Resource pools are containers that can be used to organize VMs, much like folders. What makes resource pools unique is that they can also implement resource controls, namely Shares, Limits, and Reservations on CPU and RAM usage. Limits establish a hard cap on resource usage. For example, a resource pool whose CPU Limit is set to 2 GHz restricts the concurrent CPU usage of all VMs in the pool to a maximum of 2 GHz collectively, even if physical CPU capacity remains unused. Reservations establish a guaranteed minimum of resource usage. Shares establish a relative priority on resource usage that is applied only during periods of resource contention. For example, suppose one VM is configured with 500 CPU Shares and another is configured with four times as many, or 2,000, CPU Shares. These settings are ignored entirely unless CPU contention occurs. During contention, the VM with 2,000 CPU Shares is granted four times as many of the available CPU cycles as the VM with 500 CPU Shares.
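To make that proportional behavior concrete, here is a small Python sketch (not VMware code) that divides a hypothetical amount of contended CPU capacity strictly in proportion to Shares; the VM names and the 10,000 MHz total are arbitrary illustrations:

```python
# Illustrative only: how Shares translate into CPU time during contention.
# The share values mirror the 500- versus 2,000-share example above.
def allocate_under_contention(total_mhz, shares_by_vm):
    """Divide contended CPU capacity in proportion to each VM's Shares."""
    total_shares = sum(shares_by_vm.values())
    return {vm: total_mhz * s / total_shares for vm, s in shares_by_vm.items()}

print(allocate_under_contention(10000, {"vm-a": 500, "vm-b": 2000}))
# {'vm-a': 2000.0, 'vm-b': 8000.0}  -> vm-b receives four times vm-a's cycles
```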
Scenario One – Prioritizing VMs
One recurring issue that has plagued many administrators is the attempt to assign higher CPU or RAM priority to a set of VMs. For example, an administrator created two resource pools, one named “High Priority” and one named “Low Priority,” and configured CPU and RAM Shares on each pool to correspond to its name. The CPU and RAM Shares on the “High Priority” pool are set to High, and the Shares on the “Low Priority” pool are set to Low. The administrator understood that Shares apply a priority only when the corresponding resources are under contention, so he expected that under normal conditions, all VMs get the CPU and RAM resources they request. But he expected that if processor or memory contention occurred, each VM in the High Priority pool would get a greater share of the resources than each VM in the Low Priority pool. He expected that during contention, the performance of the High Priority VMs would remain relatively steady, while the Low Priority VMs would noticeably slow down. Unfortunately, the opposite occurred: the VMs in the Low Priority pool actually ran faster than the VMs in the High Priority pool.
The real cause of this problem was that CPU and RAM Shares were configured on the resource pools without the administrator fully understanding their impact. In this case, the administrator assumed that setting High CPU Shares on a resource pool containing 50 VMs is equivalent to setting High CPU Shares on each VM individually. That assumption is incorrect. To understand how Shares are actually applied, consider the following example.
The administrator creates two resource pools:
- High Priority Pool – CPU Shares are set to High.
- Low Priority Pool – CPU Shares are set to Low.
- The High Priority Pool contains 100 VMs, each with one virtual CPU and default resource settings.
- The Low Priority Pool contains 10 VMs, each with one virtual CPU and default resource settings.
Naturally, under normal conditions, where no CPU contention exists, these Share settings are ignored and have no impact on the performance of the VMs. Whenever CPU contention does occur, where one or more VMs actively compete with other VMs for available CPU time, the Shares are applied relative to one another, based on the number of Shares assigned to each object. Shares are applied first among sibling objects at the same level of the inventory hierarchy, and then within each container among its child objects.
In this example, the only two objects at the top level of the cluster are the resource pools, High Priority and Low Priority. The Shares value of the High Priority pool is set to High, which is the equivalent of 8,000 shares. The Shares value of the Low Priority pool is set to Low, which is the equivalent of 2,000 shares. This guarantees that under CPU contention, the High Priority pool receives at least 80 percent of the available CPU resources, while the Low Priority pool receives at least 20 percent. Because the High Priority pool contains 100 VMs, each VM in that pool competes for the CPU time assigned to the pool: each VM receives 1 percent of the CPU time assigned to the High Priority pool, which is only 0.8 percent of the cluster's available CPU. Because the Low Priority pool contains just ten VMs, each VM in that pool competes for the CPU time assigned to the entire pool: each VM receives 2 percent of the cluster's available CPU, which is 250 percent of the allocation of each High Priority VM.
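The two-level arithmetic above can be reproduced with a short Python sketch (not from the original paper); the pool names, share values, and VM counts simply restate the example, and all VMs are assumed to carry default, equal per-VM Shares:

```python
# Illustrative only: pools split contended CPU first, then each pool's slice
# is split evenly among its VMs (default per-VM Shares assumed).
def per_vm_percentages(pools):
    """pools: {name: (pool_shares, vm_count)} -> percent of cluster CPU per VM."""
    total_pool_shares = sum(shares for shares, _ in pools.values())
    result = {}
    for name, (shares, vm_count) in pools.items():
        pool_pct = 100.0 * shares / total_pool_shares
        result[name] = pool_pct / vm_count
    return result

print(per_vm_percentages({"High Priority": (8000, 100),
                          "Low Priority": (2000, 10)}))
# {'High Priority': 0.8, 'Low Priority': 2.0}  -> each "low" VM gets 2.5x more
```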
The main issue is that the administrator intended to give higher priority and greater resource access to a set of VMs, but actually wound up providing lower priority to each of these VMs. The High Priority pool did, in fact, receive four times as much CPU as the Low Priority pool, or 80 percent of the available CPU. But because the High Priority pool contained ten times as many VMs as the Low Priority pool, its CPU allocation had to be divided evenly among all of its VMs, resulting in a lower per-virtual-CPU allocation. Notice that the use of the High Priority and Low Priority pools in this example would actually work nicely if each pool contained roughly the same number of VMs.
Recommendation
In this scenario, a better approach may be for the administrator to avoid creating any resource pools. Instead, simply configure each individual VM with either High or Low Shares. This plan requires one extra configuration step while deploying each new VM, but it is likely the best method.
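If the per-VM approach is chosen, the extra configuration step can also be scripted. The following is a minimal sketch, assuming the pyVmomi library and a vim.VirtualMachine object that has already been retrieved from a live connection; the helper name set_vm_shares is hypothetical and not from the original paper:

```python
# Minimal pyVmomi sketch (assumption: connection and VM lookup happen elsewhere).
# Sets CPU and RAM Shares directly on one VM instead of relying on resource pools.
from pyVmomi import vim

def set_vm_shares(vm, level="high"):
    """Set CPU and memory Shares to a named level ('low', 'normal', 'high') on a VM."""
    spec = vim.vm.ConfigSpec()
    # The numeric shares value is only consulted when level is 'custom'.
    spec.cpuAllocation = vim.ResourceAllocationInfo(
        shares=vim.SharesInfo(level=level, shares=0))
    spec.memoryAllocation = vim.ResourceAllocationInfo(
        shares=vim.SharesInfo(level=level, shares=0))
    # ReconfigVM_Task runs asynchronously; the caller can wait on the returned task.
    return vm.ReconfigVM_Task(spec=spec)
```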
Another option to consider is to customize the Shares on the two pools to account for the fact that the High Priority pool contains ten times as many VMs as the Low Priority pool. In other words, modify the CPU and RAM Shares on the High Priority pool from High (8,000 shares) to a custom value of 80,000. This would give each High Priority VM an effective 800 CPU Shares, or four times the effective CPU Shares of each Low Priority VM. Note that this plan requires ongoing attention: if the ratio of High Priority VMs to Low Priority VMs changes significantly, the Shares values should be adjusted accordingly.
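That back-of-the-envelope check can be scripted so it is easy to re-run whenever VM counts drift. This Python sketch is illustrative only, and the numbers restate the example above:

```python
# Illustrative only: effective per-VM weight is the pool's Shares divided by
# its VM count (default per-VM Shares assumed within each pool).
def effective_per_vm_shares(pool_shares, vm_count):
    return pool_shares / vm_count

high = effective_per_vm_shares(80000, 100)   # 800 per High Priority VM
low = effective_per_vm_shares(2000, 10)      # 200 per Low Priority VM
print(high / low)                            # 4.0 -> the intended 4:1 ratio

# If 100 more VMs land in the High Priority pool, the ratio silently drops:
print(effective_per_vm_shares(80000, 200) / low)  # 2.0
```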
Reproduced from Global Knowledge White Paper: Recommended Use of Resource Pools in VMware vSphere DRS Clusters
Related Courses
VMware vSphere: Install, Configure, Manage [V5.1]
VMware vSphere: Fast Track [V5.1]
VMware vSphere: Optimize and Scale [V5.1]
Recommended Uses of VMware Resource Pools Series
- VMware Resource Pools: Prioritizing VMs