Terminology

The following terms are used throughout the QNX Hypervisor documentation.

CPU privilege level
A CPU privilege level controls the access of the program currently running on a processor to resources such as memory regions, I/O ports, and special instructions (see CPU privilege levels in the QNX Virtual Environments chapter).
DMA
Direct Memory Access
Exception Level (EL)
With ARMv8, ELs have replaced the ARMv7 Privilege Levels (PLs). ELs are ARM-defined levels of permissions at which processes can run, with EL0 having the least privileges, and EL3 having the most (used for security).
Emulation
Emulation refers to the capability to mimic a particular type of hardware for an operating system regardless of the underlying host operating system. In a QNX environment, emulation refers to emulated hardware devices that a guest will see as real hardware.
In a virtualized environment, a hypervisor includes virtual devices that emulate physical devices. From the perspective of a guest, these vdevs look like the physical devices they emulate. The guest needs drivers to manage them, and receives interrupts from and sends signals to the vdevs, just as it would to a physical device in a non-virtualized system.
For example, the hypervisor host domain typically emulates an interrupt controller for each guest. The emulation code may communicate with the underlying hardware, but the guest doesn't need to know if this is the case. In fact, the physical device being emulated doesn't even have to exist on the hardware (see Virtual devices in the QNX Virtual Environments chapter).
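Purely as an illustration of the idea (the vdev name and address below are placeholders, not verified syntax; see Virtual devices for the actual options), an emulated device is given to a guest by naming a vdev in that guest's qvm configuration:

    # Hypothetical qvm configuration fragment: give the guest an emulated
    # interrupt controller that it programs as if it were real hardware.
    vdev gic loc 0x10480000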
EOI
End of Interrupt
FIQ
Fast interrupt request: a feature of some ARM processors. FIQs are higher-priority interrupt requests; while an FIQ is being serviced, IRQs and other FIQ handlers are disabled, so no other interrupts are processed until servicing of the active FIQ is complete.
Guest
A guest is an OS running in a QNX Hypervisor qvm process; this process presents the virtual machine (VM) in which the guest runs (see A note about nomenclature in the QNX Virtual Environments chapter).
Guest-physical address
A memory address in guest-physical memory (see “Guest-physical memory” below).
Guest-physical memory
The memory assembled for a VM by the qvm process that creates and configures the VM. ARM calls this assembled memory intermediate physical memory; Intel calls it guest physical memory. For simplicity, regardless of the platform, we will use the term Bugnion, Nieh and Tsafrir use in Hardware and Software Support for Virtualization (Morgan & Claypool, 2017): “guest-physical memory”, and the corresponding term: “guest-physical address”.
Host
Either the development host (the desktop or laptop, which you can connect to your target system to load a new image or debug), or the hypervisor host domain. Unless otherwise specified (as in the instructions for building a hypervisor system or transferring it to a target), in the hypervisor documentation host refers to the hypervisor host domain (see below).
Host-physical address
A memory address in host-physical memory (see “Host-physical memory” below).
Host-physical memory
The physical memory; this is the memory seen by the hypervisor host, or by any other entity running in a non-virtualized environment (see “Guest-physical memory” above).
Hypervisor
A microkernel that includes virtualization extensions. In a QNX environment, these extensions are enabled by adding the module=qvm directive in a QNX buildfile.
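As a rough sketch only (the board startup binary and procnto variant shown here are illustrative assumptions, not requirements), the directive appears in the bootstrap section of a buildfile alongside the microkernel:

    # Illustrative bootstrap section for a generic ARMv8 target;
    # [module=qvm] loads the virtualization extension with the microkernel.
    [virtual=aarch64le,raw] .bootstrap = {
        startup-armv8-fm
        [module=qvm] PATH=/proc/boot procnto-smp-instr
    }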
Hypervisor host domain
The hypervisor and all its components. The hypervisor host domain (or simply host domain) is the lowest layer of the software stack; it boots first after a board power-up or reset, and must be running before any guests can be launched.
The hypervisor host domain includes drivers that support para-virtualization, the qvm processes that present the VMs in which guests run, and components that monitor VMs and perform other system monitoring and maintenance tasks.
Hypervisor microkernel
The QNX OS microkernel to which virtualization extensions and libraries are added to create a hypervisor.
IOMMU
Input/Output Memory Management Unit. A memory management unit (MMU) that connects a DMA-capable I/O bus to the main memory. Like a traditional MMU, which translates CPU-visible virtual addresses to physical addresses, an IOMMU maps device-visible addresses (also called device addresses or I/O addresses in this context) to physical addresses. This mapping ensures that DMA devices cannot interact with memory outside their bounded areas.
Intel Virtualization Technology for Directed I/O (VT-d) is an Intel IOMMU implementation. Both x86 (VT-d) and ARMv8 (SMMU) provide hardware support for IOMMU. In a QNX environment, IOMMU is enabled when using pass-through mode in a guest (see your hardware documentation).
IPI
Inter-processor interrupt. Used by the hypervisor to communicate between the cores it runs on, or to interrupt VMs.
NUC
Next Unit of Computing, an Intel embedded platform.
Para-virtualization
A virtualized environment in which a guest knows that it is a guest, and acts accordingly; for example, the guest uses a para-virtualized device (for which no hardware equivalent exists) instead of a virtual device (that emulates a hardware device).
Pass-through
Pass-through is a technique for giving a guest direct access to hardware. Pass-through allows a device driver located in a guest to control a hardware interface directly (see Pass-through devices in the QNX Virtual Environments chapter).
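As an informal illustration only (the addresses, length, and interrupt number are invented placeholders, and the option syntax can differ by release; see the qvm reference), a qvm configuration might hand a device's register block and interrupt directly to a guest:

    # Hypothetical pass-through entries: map the device's registers into the
    # guest and route its physical interrupt into the VM.
    pass loc 0xe6055000,0x1000
    pass intr gic:166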
pCPU
Physical CPU. Each pCPU corresponds to one hardware core. For example, a quad-core SoC has four pCPUs (compare with “vCPU” below).
PPI
Per-processor interrupt
Privilege level
An ARMv7 term; now exception level (see “Exception Level (EL)” above, “Ring” below, and CPU privilege levels in the QNX Virtual Environments chapter).
PSCI
Power State Coordination Interface: an ARM API used to coordinate power control among supervisory systems running concurrently on an ARM board.
QNX Hypervisor
The running instance of a QNX Neutrino OS plus a virtualization extension (libmod_qvm.a or libmod_qvm-qos_capable.a).
Ring
x86 levels of permissions at which processes can run, with Ring3 having the least privileges, and Ring0 having the most.
SMP
Symmetric Multiprocessing.
vCPU
Virtual CPU: a hypervisor qvm process thread that emulates a physical CPU (pCPU). The guest hosted in the qvm process sees the vCPU as a pCPU. A QNX virtualized environment may have more vCPUs than there are pCPUs on the hardware. As the hypervisor schedules VMs to run, it can attach vCPUs to a pCPU or detach them from a pCPU. You can configure your vCPUs to float between pCPUs, leaving the decision of where the vCPU should run to the hypervisor, or you can pin vCPUs to specific pCPUs (see “pCPU” above).
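As a loose sketch of the difference (the option names are assumptions drawn from typical qvm configurations and may differ in your release), a VM's configuration can leave a vCPU floating or pin it to a pCPU:

    # Floating vCPU: the hypervisor decides which pCPU runs it.
    cpu
    # Pinned vCPU: restrict this vCPU to pCPU 1 (runmask is an assumed option name).
    cpu runmask 1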
vdev
Virtual device (see Devices).
VirtIO
A standard for virtual devices that “by design ... look like physical devices to the guest within the virtual machine” (see the OASIS specification: Virtual I/O Device (VIRTIO) Version 1.0). A guest's VirtIO device driver is aware that it is running in a virtual environment, and cooperates with the hypervisor to allow the sharing of an interface.
The hypervisor and the guest must both be configured for VirtIO support. Typical use cases for VirtIO devices involve sharing (between guests) of Ethernet, of block devices (storage), of memory, and of consoles. A VirtIO driver in the host domain can also be shared between the host domain and the guest, if required.
The QNX virtualized environment supports versions 0.95 and 1.0 of VirtIO.
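For illustration only (the guest address, interrupt, and host device below are assumptions for the sketch, not verified values), a VirtIO console backed by a host-domain character device might be declared in a qvm configuration along these lines:

    # Hypothetical virtio-console vdev: the guest sees a VirtIO console whose
    # data is carried over a character device in the hypervisor host domain.
    vdev virtio-console loc 0x1c0b0000 intr gic:43 hostdev /dev/serv0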
Virtualization extension
An extension to an operating system to provide support for virtualization. In a QNX environment, these extensions can be added to a standard QNX microkernel to enable it to support virtualization.
Virtualized system
A software system that includes a QNX Hypervisor hosting one or more guests.
VM
Virtual machine. In a QNX Hypervisor virtualized environment, the hypervisor creates a qvm process instance for each guest it hosts. Each qvm process presents a VM to its guest.
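As a minimal sketch (the path and file name are placeholders), each VM is started by launching a qvm process with the configuration file that describes that guest's VM:

    # One qvm process per guest; each reads its VM configuration from a file.
    qvm @/vm/config/guest1.qvmconf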