Supported architectures and guest OSs
Supported hardware architectures
This release of the hypervisor uses a 64-bit kernel and supports the following 64-bit architectures:
- AArch64 on ARMv8 hardware
- x86-64 on x86 hardware
CPU privilege levels
A CPU privilege level controls the access that the program currently running on a processor has to resources such as memory regions, I/O ports, and special instructions. A guest runs at a lower privilege level than the QNX OS microkernel, and applications running in that guest run at a lower privilege level still. This architecture provides hardware-enforced protection against untrusted software components.
See also Exception Level (EL) and Ring in the Terminology appendix.
PCI support
The QNX PCI vendor ID is 7173 (0x1C05).
For more information about PCI vendor IDs, see the PCI SIG website at https://pcisig.com/. For more information about the QNX PCI vendor ID, contact your QNX representative.
Supported guest OSs
This hypervisor release supports QNX OS and Linux guests for the hardware architectures specified above. The supported guests include:
- QNX OS 8.0
- Ubuntu Linux 20.04
Other guest OSs (e.g., Android, older QNX releases) are also possible. For information about supporting these guests, contact QNX Engineering Services.
Guest OSs must be compiled for the hardware architecture on which the hypervisor host is running. For example, AArch64 guests can run on ARMv8 hardware only.
For both ARM and x86 platforms, the hypervisor host domain requires 64-bit hardware and supports 64-bit guests. Guests may run as single-core or multi-core; that is, a guest may run in a VM configured with a single virtual CPU (vCPU), or in a VM configured with multiple vCPUs.
The number of vCPUs in a VM affects performance. Adding vCPUs adds vCPU threads to the qvm process instance for the VM hosting the guest. Although the number of vCPUs is independent of the number of hardware CPU cores, the number a system designer chooses is often related to the number of physical cores. Increasing the number of vCPUs can improve performance, but in some circumstances additional vCPUs may reduce it. This is explained further in the vCPUs and hypervisor performance section.
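As a sketch of how a VM's vCPU count is set, a qvm configuration file typically adds one vCPU thread per `cpu` entry. The fragment below is illustrative only: the `cpu` and `runmask` option names and their exact syntax are assumptions based on common qvm configuration conventions, so check the VM configuration reference for your release.

```
# Hypothetical qvm configuration fragment (syntax assumed, not verified):
# each "cpu" entry adds one vCPU thread to this VM's qvm process.
cpu runmask 0x2    # vCPU 0, restricted to physical core 1
cpu runmask 0x4    # vCPU 1, restricted to physical core 2
```

Pinning each vCPU to a distinct physical core, as sketched here, is one way a system designer can relate the vCPU count to the number of physical cores, per the guidance above.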