vCPUs and hypervisor performance
Perhaps counter-intuitively for someone accustomed to working in a non-virtualized environment, more vCPUs don't mean more processing power.
vCPUs and hypervisor overload
Multiple vCPUs are useful for managing guest activities, but they don't add processor cycles. In fact, the opposite may be true: too many vCPUs may degrade system performance.
A vCPU is a VM thread (see cpu in the VM Configuration Reference chapter). These vCPUs appear to a guest just like physical CPUs (pCPUs). A guest's scheduling algorithm can't know that when it migrates execution between vCPUs it is switching threads, not pCPUs.
This thread switching can degrade the performance of individual guests and of the overall system. It is especially likely when VMs are configured with more vCPUs than there are pCPUs on the hardware.
Specifically, if the hypervisor host has more threads (including vCPU threads) ready to run than there are pCPUs available to run them, the hypervisor host scheduler must use the thread priorities and apply its scheduling policies (round-robin, FIFO, etc.) to decide which threads to run. These policies may employ preemption and time slicing to manage threads competing for pCPUs.
Every preemption requires a guest exit, a context switch and restore, and a guest entrance. Thus, contrary to what usually holds for pCPUs, reducing the number of vCPUs in a VM can improve overall performance: fewer threads compete for time on the pCPUs, so the hypervisor isn't obliged to preempt threads (with the attendant guest exits) as often.
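As a rough illustration of this effect, the toy model below counts how many preemptions a strict round-robin scheduler would incur per time slice as the number of runnable vCPU threads grows past the pCPU count. The function name and the simplified round-robin assumption are ours, not part of any hypervisor's API:

```python
# Toy round-robin model: count preemptions (each one a guest exit,
# context switch and restore, and guest entrance) per scheduling
# time slice. Illustrative sketch only -- real hypervisor host
# schedulers are far more complex.

def preemptions_per_slice(runnable_threads: int, pcpus: int) -> int:
    """Number of pCPUs whose running thread is preempted at a slice
    boundary so that waiting threads get a turn."""
    waiting = max(runnable_threads - pcpus, 0)
    # Each waiting thread displaces a running one, up to one
    # preemption per pCPU per slice.
    return min(waiting, pcpus)

# 4 pCPUs, VM configured with 4 vCPUs: no competition, no preemptions.
print(preemptions_per_slice(4, 4))   # 0
# Same host, VM configured with 8 vCPUs: every pCPU is preempted
# at every slice boundary.
print(preemptions_per_slice(8, 4))   # 4
```

The model makes the trade-off concrete: once runnable threads exceed pCPUs, every additional vCPU adds preemption overhead without adding any processor cycles.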
In brief, fewer vCPUs in a VM may sometimes yield the best performance. You can run fewer vCPUs than there are pCPUs, including just one vCPU.
Multiple vCPUs sharing a pCPU
When configuring your VM, it is prudent not to assume that the guest will always do the right thing. For example, multiple vCPUs pinned to a single pCPU may cause unexpected behavior in the guest: timeouts or delays might not behave as expected, or spin loops might never return. Assigning different priorities to the vCPUs can exacerbate the problem; for example, a lower-priority vCPU might never be allowed to run.
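The spin-loop hazard can be sketched with a toy strict-priority scheduler (a hypothetical simulation, not hypervisor code): two vCPUs share one pCPU, and the higher-priority vCPU spins on a flag that only the lower-priority vCPU can set.

```python
# Toy model of two vCPUs sharing one pCPU under strict-priority
# scheduling (hypothetical sketch, not real hypervisor code).
# vcpu0 (high priority) spins on a flag; vcpu1 (low priority)
# would set the flag -- if it ever got to run.

def simulate(max_slices: int = 1000) -> str:
    flag = {"set": False}

    def vcpu0() -> str:          # high priority: spin until flag is set
        return "done" if flag["set"] else "spin"

    def vcpu1() -> str:          # low priority: sets the flag, then done
        flag["set"] = True
        return "done"

    runnable = [vcpu0, vcpu1]    # ordered by priority, highest first
    for _ in range(max_slices):
        # One pCPU: the scheduler always picks the highest-priority
        # runnable vCPU. vcpu0 never blocks while spinning, so vcpu1
        # is starved and the flag is never set.
        if runnable[0]() == "done":
            return "spin completed"
    return "livelock"

print(simulate())  # livelock: the spin loop never returns
```

On a real system the guest would simply hang or hit watchdog timeouts rather than return. Giving the two vCPUs equal priority (so time slicing applies) or pinning them to different pCPUs avoids the starvation.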
In short, when assembling a VM, consider carefully how the guest will run on it.
For more information about scheduling in a hypervisor system, see Scheduling in the Understanding Virtual Environments chapter.