cpu

Create a new vCPU in the VM

Synopsis:

cpu [options]*

Options:

cluster cluster_name
Allow the vCPU to run only on the physical CPUs (pCPUs, or cores) within the specified cluster; this is known as core pinning. A cluster is a group of associated pCPUs. The startup code defines the clusters on the system, including their names and CPU mapping. More details about clusters are given in the Processor affinity, clusters, runmasks, and inherit masks section of the QNX OS Programmer's Guide.
Assigning vCPUs to clusters involves important design choices. If a vCPU is assigned to a cluster with multiple pCPUs, it is said to be floating, which means the vCPU may migrate among the pCPUs in that cluster. Migration is useful on some systems, because the vCPU can move to whichever core in its cluster is free. However, for some realtime guests, assigning a vCPU to a core that isn't shared with other vCPUs can improve realtime determinism.
The default for a vCPU is no restriction to a particular cluster (i.e., no core pinning), which is another case of floating. QNX recommends that you begin your design with floating vCPUs, then restrict or prohibit migration as required (see Examples 1 and 3 below).
sched priority[r | f | o]
sched normal_priority,low_priority,max_replacements,replacement_period,initial_budgets
Set the vCPU's scheduling priority and scheduling algorithm (or policy). The algorithm can be any of the ones supported by QNX OS: round-robin (r), FIFO (f), or sporadic (s, selected by appending s to the five-parameter form). The other (o) algorithm is reserved for future use; currently it is equivalent to round-robin.
By default, vCPUs use round-robin scheduling. QNX testing indicates that most guests respond best to this algorithm, because it allows a guest that has its own internal scheduling policies to operate efficiently. A brief sketch of the one-letter forms follows this description.
For more information about sporadic scheduling, see Configuring sporadic scheduling below.
Note that the vCPU's priority is independent of the priority of any guest thread in the VM; hence, thread scheduling in the host isn't affected by the relative importance of different guest threads. For further explanation, see Controlling guest behavior through vCPUs in the Configuration chapter.
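For illustration, here is a sketch of the one-letter sched forms; the priority values are arbitrary examples, not recommendations:
cpu sched 10   # priority 10, default algorithm (round-robin)
cpu sched 10r  # priority 10, round-robin, stated explicitly
cpu sched 10f  # priority 10, FIFO (consult QNX engineering support first)
For the five-parameter sporadic form, see Configuring sporadic scheduling below.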

Description:

The cpu option creates a new vCPU in the VM. Every vCPU is a thread, so a vCPU can be assigned to a cluster to restrict the pCPUs on which it runs. Similarly, standard (QNX OS) scheduling priorities and algorithms can be applied to the vCPU. Note that vCPU threads are threads in the hypervisor host domain.

If no cpu option is specified, the qvm process instance creates a single vCPU. Determining the optimal number of vCPUs for the VM (and hence, the guest) depends on a few factors, such as performance considerations, as described in vCPUs and hypervisor performance in the Performance Tuning chapter, and the hardware and OS limits, as explained in the next section.

For information about how priorities for hypervisor threads and guest threads are handled, see Scheduling in the Understanding QNX Virtual Environments chapter.

Maximum vCPUs per guest

Each hypervisor VM can host only one guest OS, so the maximum number of vCPUs that can exist in a VM is also the maximum number that may be defined for the guest OS running in that VM. This limit is the lower of two values determined by the following factors:

Hardware
On supported AArch64 (ARMv8) and x86-64 platforms, the hardware currently allows a maximum of 254 vCPUs on the board. This number may change with newer hardware.
CAUTION:
QNX strongly recommends that you don't give a guest more vCPUs than there are pCPUs on the underlying hardware platform; doing so can result in unpredictable behavior.
Specific hardware components may further limit the number of vCPUs per guest. For example, on x86-64 boards, LAPIC virtualization limits a guest to a maximum of 15 vCPUs. On AArch64 boards, the maximum number of vCPUs per guest is limited by the version of the GIC vdev; for a vdev running in GICv3 mode, the maximum is eight (8) vCPUs per guest.
Guest OS
Current QNX OSs support a maximum of 64 CPUs. This limit also applies to vCPUs, because a guest makes no distinction between a pCPU and a vCPU. For other guest OSs, check their latest documentation to learn the maximum number of CPUs they support.
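For example, for a QNX guest on an x86-64 board, the LAPIC virtualization limit (15 vCPUs) is lower than the QNX CPU limit (64), so that guest can have at most 15 vCPUs.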

Configuring sporadic scheduling

Note:
Before using FIFO or sporadic scheduling, consult with QNX engineering support.

For sporadic scheduling, use the second form of the sched option (with s appended to the final parameter) and specify the following five parameters (a sketch follows the list):

  • normal_priority – the normal priority value
  • low_priority – the low priority value
  • max_replacements – the maximum number of times the vCPU's budget can be replenished due to blocking
  • replacement_period – the number of nanoseconds that must elapse before the vCPU's budget can be replenished after being blocked, or after overrunning max_replacements
  • initial_budget – the number of nanoseconds to run at normal_priority before being dropped to low_priority
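Putting these together, the following is a sketch only; the values are illustrative, not recommendations:
cpu sched 10,8,5,100000000,20000000s
This sets a normal priority of 10, a low priority of 8, a maximum of 5 replenishments, a replacement period of 100 ms (100000000 ns), and an initial budget of 20 ms (20000000 ns); the trailing s selects sporadic scheduling.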

Examples:

Example 1: pinned vCPU, set scheduling priority and algorithm

The following creates a vCPU that is permitted to run only on the _cpu-3 cluster, which is a predefined cluster that contains only pCPU 3:
cpu cluster _cpu-3 sched 8r

Because the vCPU is pinned to a single pCPU, it is not floating and may not migrate. The priority of the vCPU thread is 8, and the scheduling algorithm is round-robin.

Example 2: non-pinned vCPUs, set scheduling priority

The following creates four vCPUs (0, 1, 2, 3), all with priority 10:
cpu sched 10
cpu sched 10
cpu sched 10
cpu sched 10

The cluster option isn't specified, so the default of no core pinning is used. The microkernel scheduler can run each vCPU thread on whatever available pCPU it deems most appropriate.

Example 3: two pinned vCPUs, default scheduling

The following creates four vCPUs (0, 1, 2, 3), with restrictions on pCPUs for only the first two:
cpu cluster _cpu-2,_cpu-3  # vCPU 0 may run only on cores 2 or 3.
cpu cluster _cpu-2,_cpu-3  # vCPU 1 may run only on cores 2 or 3.
cpu  # vCPU 2 may run on any pCPU.
cpu  # vCPU 3 may run on any pCPU.

For vCPUs 0 and 1, the cluster option restricts each vCPU to two single-pCPU clusters, _cpu-2 and _cpu-3, so they may run only on pCPUs 2 and 3; they won't migrate to pCPU 0 or 1 even if either is idle. No cluster option is specified for vCPUs 2 and 3, so they use the default of no pinning, meaning they can run on any available pCPU (including 2 and 3).

No scheduling options are configured, so the default scheduling priority and algorithm are used.
