Create a new vCPU in the VM
Options:
- partition name
- If adaptive partitioning (APS) is implemented in the hypervisor host domain,
run the vCPU in the host domain APS partition specified by name. If the
partition option isn't specified, the vCPU thread will run in the partition
where the qvm process was started.
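- For example, a minimal sketch that runs the vCPU in a host domain APS
partition (the partition name guests is hypothetical; the partition must
already exist in the host domain):
cpu partition guests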
- runmask cpu_number{,cpu_number}
- Allow the vCPU to run only on the specified physical CPUs (this is known as
core pinning). CPU numbering is zero-based. The default is no restrictions
(floating).
- The association of a thread with specific processors is known as
processor affinity.
- Assigning runmasks to vCPUs involves important design choices. If a vCPU
is allowed to float (i.e., no runmask is set, or the runmask includes
multiple CPUs), the vCPU may migrate between physical CPUs.
Migration is useful on some systems because it lets a vCPU move to a free
core, if the runmask permits it. However, for some realtime guests,
assigning a vCPU to a core that isn't shared with other vCPUs can improve
realtime determinism.
- QNX recommends that you first use floating vCPUs in your design, then move
to restrict or prohibit migration as required.
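- For example, a design might begin with a floating vCPU and later pin it to
a specific core; both forms are sketched below (the choice of pCPU 3 is
purely illustrative):
cpu sched 10            # floating: may run on any pCPU
cpu runmask 3 sched 10  # pinned: may run only on pCPU 3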
- sched priority[r | f | o]
- sched high_priority,low_priority,max_replacements,replacement_period,initial_budget s
- Set the vCPU's scheduling priority and scheduling algorithm. The algorithm
can be round-robin (r), FIFO (f), or sporadic (s). The other (o) algorithm is
reserved for future use; currently it is equivalent to r.
- By default, vCPUs use round-robin scheduling. Our testing has indicated
that most guests respond best to this algorithm; it allows a guest with its
own internal scheduling policies to operate efficiently.
- See Configuring sporadic scheduling
below for more information about sporadic scheduling.
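- For example, the following sketch gives a vCPU priority 20 with FIFO
scheduling (as noted in Configuring sporadic scheduling below, consult QNX
engineering support before using FIFO or sporadic scheduling):
cpu sched 20f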
Description:
The cpu option creates a new vCPU in the VM. Every vCPU is a thread,
so a runmask can be used to restrict the vCPU to specific physical CPUs.
Standard thread scheduling priorities and algorithms can be applied to the vCPU.
Note that vCPU threads are threads in the hypervisor host domain.
If no cpu option is specified, the qvm process
instance creates a single vCPU.
For more information, see vCPUs and hypervisor performance in the Performance Tuning chapter.
Configuring sporadic scheduling
Note: You should consult with QNX engineering support before using a
scheduling algorithm such as FIFO or sporadic.
For sporadic scheduling, you need to specify the following five parameters:
- high_priority – the high priority value
- low_priority – the low priority value
- max_replacements – the maximum number of times the
vCPU's budget can be replenished due to blocking
- replacement_period – the number of nanoseconds that
must elapse before the vCPU's budget can be replenished after being blocked, or
after overrunning max_replacements
- initial_budget – the number of nanoseconds to run at
high_priority before being dropped to
low_priority
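For example, a minimal sketch of a sporadic entry, assuming the s algorithm
letter is appended after the parameter list as with the other algorithms, and
using purely illustrative values (high priority 10, low priority 8, 5
replacements, a 100 ms replacement period, and a 10 ms initial budget):
cpu sched 10,8,5,100000000,10000000s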
Maximum vCPUs per guest
The maximum number of vCPUs that may be defined for each guest running in a
hypervisor VM is limited by a number of factors:
Note: Presently a VM can host only one guest, so this limit is also the
maximum number of vCPUs that can exist in a VM.
- Hardware
- On supported AArch64 (ARMv8) and x86-64 platforms, the hardware currently
allows a maximum of 254 vCPUs on the board. This number may change with
newer hardware.
- Specific hardware components may also limit the number of vCPUs per guest.
For example, on x86-64 boards, LAPIC virtualization limits a guest to a
maximum of 15 vCPUs. Similarly, on AArch64 boards, the maximum number of
vCPUs per guest is limited by the GIC version in which the GIC vdev is
running: with the GIC vdev running in GICv2 mode, the maximum number of vCPUs
that can be assigned to a guest is eight (8).
Note: QNX recommends that you don't give a guest more vCPUs than there are
physical CPUs on the underlying hardware platform.
- Guest OS
- Current QNX OSs support a maximum of 32 CPUs (except on ARM boards with
GICv2, for which the limit is 8 CPUs). This limit also applies to vCPUs,
since a guest OS makes no distinction between a CPU and a vCPU.
- Check the latest documentation for your guest OSs
(QNX Neutrino and Linux)
for more information about the maximum number of CPUs they support.
Examples:
Example 1: pin vCPU, set scheduling priority
The following creates a vCPU that is permitted to run only on physical CPU 3
(numbering is zero-based):
cpu runmask 3 sched 8r
The priority is 8. The scheduling algorithm is round-robin.
Example 2: floating vCPUs, set scheduling priority
The following creates four vCPUs (0, 1, 2, 3), all with priority 10:
cpu sched 10
cpu sched 10
cpu sched 10
cpu sched 10
The runmask option isn't specified, so the default of no
restrictions (floating) is used.
Since no processor affinity has been specified for any of the vCPU threads,
the hypervisor microkernel scheduler can run each vCPU thread on whatever available
physical CPU it deems most appropriate.
Example 3: two vCPUs pinned to physical CPUs, default scheduling
The following creates four vCPUs (0, 1, 2, 3), with restrictions on physical CPUs
for only the first two:
cpu runmask 2,3 # vCPU 0 may run only on pCPU 2 or 3.
cpu runmask 2,3 # vCPU 1 may run only on pCPU 2 or 3.
cpu # vCPU 2 may run on any pCPU.
cpu # vCPU 3 may run on any pCPU.
For vCPUs 0 and 1, their runmask options are set to pin them to
pCPUs 2 and 3. This allows them to run only on these pCPUs; they won't migrate to
pCPU 0 or 1 even if these pCPUs are idle. No runmask option is
specified for vCPUs 2 and 3, so they will use the default (no restrictions).
They can run on any available physical CPU (including pCPUs 2 and 3).
For information about how priorities for hypervisor threads and guest threads are
handled, see Scheduling in the Understanding QNX Virtual Environments chapter.
For more information about processor affinity and scheduling, see the
Processor affinity, clusters, runmasks, and inherit masks topic in the
Multicore Processing chapter of the QNX Neutrino
Programmer's Guide.