Terminology

The following terms are used throughout the QNX Hypervisor documentation.

Blob
The guest as seen by the hypervisor host. The term blob is used to emphasize the fact that the hypervisor can't know any more about its guests than a board can know about the OS running on its hardware.
CPU privilege level
A CPU privilege level controls the access of the program currently running on a processor to resources such as memory regions, I/O ports, and special instructions (see CPU privilege levels in the Understanding QNX Virtual Environments chapter).
DMA
Direct Memory Access
Exception Level (EL)
With ARMv8, ELs have replaced the ARMv7 Privilege Levels (PLs). ELs are ARM-defined levels of permissions at which processes can run, with EL0 having the least privileges, and EL3 having the most (used for security).
Emulation
Emulation refers to the capability to mimic a particular type of hardware for an operating system regardless of the underlying host operating system. In a QNX environment, emulation refers to emulated hardware devices that a guest will see as real hardware.
In a virtualized environment, a hypervisor includes virtual devices that emulate physical devices. From the perspective of a guest, these vdevs look like the physical devices they emulate. The guest needs drivers to manage them, and receives interrupts from and sends signals to the vdevs, just as it would with a physical device in a non-virtualized system.
For example, the hypervisor host domain typically emulates an interrupt controller for each guest. An emulated device may communicate with the underlying hardware, but the guest doesn't need to know whether this is the case. In fact, the physical device being emulated doesn't even have to exist on the hardware (see Virtual devices in the Understanding QNX Virtual Environments chapter).
EOI
End of Interrupt
Execute, Execution
The act of completing an instruction on a physical CPU. Contrast with Run below. In a QNX hypervisor system, the hypervisor host runs directly on the hardware, while a guest OS runs in a virtual machine (VM) which has virtual CPUs (vCPUs). Ultimately, however, both the hypervisor and the guest execute on the physical CPUs (see also Lahav Line below).
FIQ
Fast interrupt request: a feature of some ARM processors. FIQs are higher-priority interrupt requests; while an FIQ is being serviced, IRQs and other FIQs are masked, so no other interrupts are processed until handling of the active FIQ is complete.
Guest
A guest is an OS running in a QNX hypervisor qvm process; this process presents the virtual machine (VM) in which the guest runs (see A note about nomenclature in the Understanding QNX Virtual Environments chapter).
Guest-physical address
A memory address in guest-physical memory (see “Guest-physical memory” below).
Guest-physical memory
The memory assembled for a VM by the qvm process that creates and configures the VM. ARM calls this assembled memory intermediate physical memory; Intel calls it guest physical memory. For simplicity, regardless of the platform, we will use the term Bugnion, Nieh and Tsafrir use in Hardware and Software Support for Virtualization (Morgan & Claypool, 2017): “guest-physical memory”, and the corresponding term: “guest-physical address”.
Host
Either the development host (the desktop or laptop, which you can connect to your target system to load a new image or debug), or the hypervisor host domain. Unless otherwise specified (as in the instructions for building a hypervisor system or transferring it to a target), in the hypervisor documentation host refers to the hypervisor host domain (see below).
Host-physical address
A memory address in host-physical memory (see “Host-physical memory” below).
Host-physical memory
The physical memory; this is the memory seen by the hypervisor host, or by any other entity running in a non-virtualized environment (see “Guest-physical memory” above).
Hypervisor
A microkernel that includes virtualization extensions. In a QNX environment, these extensions are enabled by adding the module=qvm directive in a QNX buildfile.
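As an illustrative sketch only (the startup binary, paths, and procnto variant below are placeholders; consult your buildfile reference for the exact form), the directive appears as an attribute on the kernel line of the OS buildfile:

```
# Hypothetical buildfile fragment: the [module=qvm] attribute loads
# the virtualization extension alongside the microkernel.
[virtual=aarch64le,raw] .bootstrap = {
    startup-armv8_fm
    [module=qvm] PATH=/proc/boot procnto-smp-instr
}
```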
Hypervisor host domain
The hypervisor and all its components. The hypervisor host domain (or simply host domain) is the lowest layer of the software stack that boots first after a board power-up or reset, and must be running before any guests can be launched.
The hypervisor host domain includes drivers that support para-virtualization, the qvm processes that present VMs in which guests run, and components that monitor VMs and perform other system monitoring and maintenance tasks.
Hypervisor microkernel
The QNX OS microkernel to which virtualization extensions and libraries are added to create a hypervisor.
IOMMU
Input/Output Memory Management Unit. A memory management unit (MMU) that connects a DMA-capable I/O bus to the main memory. Like a traditional MMU, which translates CPU-visible virtual addresses to physical addresses, an IOMMU maps device-visible virtual addresses (also called device addresses or I/O addresses in this context) to physical addresses. This mapping ensures that DMA devices can't access memory outside the regions assigned to them.
Intel Virtualization Technology for Directed I/O (VT-d) is an Intel IOMMU implementation. Both x86 (VT-d) and ARMv8 (SMMU) provide hardware support for IOMMU. In a QNX environment, IOMMU is enabled when using pass-through mode in a guest (see your hardware documentation).
IPI
Inter-processor interrupt. Used by the hypervisor to communicate between its own cores, or to interrupt VMs.
Lahav Line
A line that describes how a hypervisor host runs directly on the hardware, while a guest runs in a hypervisor virtual machine with virtual CPUs, but both execute on physical CPUs (see Two representations of a QNX hypervisor system).
NUC
Next Unit of Computing, an Intel embedded platform.
Para-virtualization
A virtualized environment in which a guest knows that it is a guest, and acts accordingly; for example, the guest uses a para-virtualized device (for which no hardware equivalent exists) instead of a virtual device (that emulates a hardware device).
Pass-through
Pass-through is a technique for giving a guest direct access to hardware. Pass-through allows a device driver located in a guest to control a hardware interface directly (see Pass-through devices in the Understanding QNX Virtual Environments chapter).
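A hedged sketch of what this can look like in a qvm configuration (the directive names, address, length, and interrupt number below are illustrative assumptions, not verified syntax; see the qvm reference for the actual options):

```
# Hypothetical qvm configuration fragment: give the guest direct
# (pass-through) access to a memory-mapped device region and its
# interrupt. Address, length, and interrupt number are made up.
pass loc 0xe6e88000,0x1000,rw
pass intr gic:196
```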
pCPU
Physical CPU. Each pCPU corresponds to one hardware core; for example, a quad-core SoC has four pCPUs (compare with “vCPU” below).
PPI
Private Peripheral Interrupt (sometimes expanded as per-processor interrupt): an ARM GIC interrupt that is private to a single processor core.
Privilege level
An ARMv7 term; now exception level (see “Exception Level (EL)” above, “Ring” below, and CPU privilege levels in the Understanding QNX Virtual Environments chapter).
PSCI
Power State Coordination Interface: an ARM API used to coordinate power control among supervisory systems running concurrently on an ARM board.
QHS
QNX Hypervisor for Safety
QNX Hypervisor
A running instance of a QNX Neutrino OS or a QNX OS for Safety, plus a virtualization extension (libmod_qvm.a).
QNX Hypervisor for Safety (QHS)
A safety-certified variant of a QNX hypervisor.
QNX OS for Safety (QOS)
A safety-certified variant of a QNX Neutrino OS.
QOS
QNX OS for Safety
Ring
x86-defined levels of permissions at which processes can run, with Ring 3 having the least privileges, and Ring 0 having the most.
Run
In a QNX hypervisor system, the hypervisor host runs directly on the hardware, while a guest OS runs in a virtual machine (VM) which has virtual CPUs (vCPUs). Ultimately, however, both the hypervisor and the guest execute on the physical CPUs (see also Lahav Line above).
SMP
Symmetric multiprocessing.
vCPU
Virtual CPU: a hypervisor qvm process thread that emulates a physical CPU (pCPU). The guest hosted in the qvm process sees the vCPU as a pCPU. A QNX virtualized environment may have more vCPUs than there are pCPUs on the hardware. As the hypervisor schedules VMs to run, it can attach vCPUs to a pCPU or detach them from a pCPU. You can configure your vCPUs to float between pCPUs, leaving the decision of where the vCPU should run to the hypervisor, or you can pin vCPUs to specific pCPUs (see “pCPU” above).
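To illustrate, a qvm configuration might declare floating and pinned vCPUs along these lines (the runmask option name and mask values are assumptions for illustration only; check the qvm reference for the exact syntax):

```
# Hypothetical qvm configuration fragment: four vCPUs; the first
# two float across pCPUs, the last two are pinned by runmask
# (bit n set = may run on pCPU n).
cpu
cpu
cpu runmask 0x4    # pinned to pCPU 2
cpu runmask 0x8    # pinned to pCPU 3
```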
vdev
Virtual device (see Devices).
VirtIO
A standard for virtual devices that “by design ... look like physical devices to the guest within the virtual machine” (see the OASIS specification: Virtual I/O Device (VIRTIO) Version 1.0). A guest's VirtIO device driver is aware that it is running in a virtual environment, and cooperates with the hypervisor to allow the sharing of an interface.
The hypervisor and the guest must both be configured for VirtIO support. Typical use cases for VirtIO devices involve sharing Ethernet, block devices (storage), memory, and consoles between guests. A VirtIO driver in the host domain can also be shared between the host domain and a guest, if required.
The QNX virtualized environment supports versions 0.95 and 1.0 of VirtIO.
Virtualization extension
An extension to an operating system to provide support for virtualization. In a QNX environment, these extensions can be added to a standard QNX microkernel to enable it to support virtualization.
Virtualized system
A software system that includes a QNX hypervisor hosting one or more guests.
VM
Virtual machine. In a QNX hypervisor virtualized environment, the hypervisor creates a qvm process instance for each guest it hosts. Each qvm process presents a VM to its guest.