Hypervisor security

Updated: April 19, 2023

Because the QNX hypervisor is built as an extension of the QNX Neutrino microkernel, it inherits the security features of the microkernel itself, as well as the secure execution environment that the microkernel creates. It also adds layers that are purpose-built for secure virtual machine operation.

The hypervisor provides the following security features:
Static partitioning of CPUs and pinning

The hypervisor can enforce a static runmask for a virtual CPU (vCPU); a vCPU cannot change its runmask after it is created, because it is never given the ability to do so. This enforcement includes one-to-one pinning of a vCPU to a physical CPU.
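
A minimal sketch, assuming a QNX Neutrino target, of the runmask mechanism itself: it pins the calling thread to physical CPU 0 with ThreadCtl(). This is not the hypervisor's internal code; the qvm applies an equivalent, fixed runmask to each vCPU thread and never exposes an interface for changing it afterward.

    #include <stdint.h>
    #include <stdio.h>
    #include <sys/neutrino.h>

    int main(void)
    {
        /* Pin the calling thread to physical CPU 0 (bit 0 of the runmask).
         * The hypervisor applies a fixed runmask of this kind to each vCPU
         * thread when the qvm instance is created. */
        unsigned runmask = 0x1;

        if (ThreadCtl(_NTO_TCTL_RUNMASK, (void *)(uintptr_t)runmask) == -1) {
            perror("ThreadCtl(_NTO_TCTL_RUNMASK)");
            return 1;
        }
        return 0;
    }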

Static resource allocation to avoid sharing

The QNX hypervisor needs to virtualize only the CPU domain (including PCI, if it exists). All other devices, including graphics, disks, networking, audio, Bluetooth, and so on, can be given to guests; no sharing of these devices is required.

Bounding of virtual CPU execution

Because virtual CPUs are threads of execution within a qvm instance, they honor the system limits set by the system designer (via the rlimit structure).
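
A minimal sketch of the rlimit mechanism, using the standard POSIX setrlimit() call; the specific limit (RLIMIT_CPU) and the value chosen here are assumptions for illustration, not values set by the hypervisor itself.

    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
        /* Example only: cap total CPU time for this process (and therefore
         * for all of the vCPU threads it contains) at 60 seconds. A real
         * system design would choose limits that match its CPU budget. */
        struct rlimit rl = { .rlim_cur = 60, .rlim_max = 60 };

        if (setrlimit(RLIMIT_CPU, &rl) == -1) {
            perror("setrlimit(RLIMIT_CPU)");
            return 1;
        }
        return 0;
    }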

In addition, the QNX Adaptive Partitioning Scheduler can be used to give one or more vCPUs a maximum percentage of CPU time that is enforced when the target system is completely busy. When the system is not 100% busy, the set of vCPUs can use more than this maximum.

See the Adaptive Partitioning chapter of System Architecture.

Support for ARM TrustZone via hypercall and virtual device

A TrustZone implementation involves the following layers:
  1. The hypervisor host owns the TrustZone hardware and provides the service to the guest.
  2. Host applications can use TrustZone based on their security policy and permissions.
  3. A guest can make a hypercall to request a TrustZone service. The hypercall is trapped by a custom virtual device and then passed to the host for servicing (see the sketch after this list).
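
The guest-side sketch below shows the general shape of such a hypercall on an AArch64 guest: the HVC instruction traps to the hypervisor, where the custom virtual device receives the request and forwards it to the host's TrustZone service. The function name, function ID, and argument layout are hypothetical placeholders, not the actual QNX hypercall interface.

    #include <stdint.h>

    /* Hypothetical wrapper: hvc_call() and its ID/argument layout are
     * placeholders for illustration only. The HVC instruction causes a
     * trap to the hypervisor, where the custom virtual device that traps
     * it passes the request to the host for servicing. */
    static inline uint64_t hvc_call(uint64_t fn_id, uint64_t arg0)
    {
        register uint64_t x0 __asm__("x0") = fn_id;
        register uint64_t x1 __asm__("x1") = arg0;

        __asm__ volatile("hvc #0"
                         : "+r"(x0)
                         : "r"(x1)
                         : "memory");
        return x0;    /* result is returned in x0 */
    }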

Restricting code running at high privilege

The QNX hypervisor is divided into a kernel component and a virtual machine manager (qvm) component. The virtual machine manager is designed to run as little code as possible at the highest privilege level (EL2); when a high-privilege operation is needed, it invokes the kernel component through a private, secure interface. If there is a bug or security hole in the qvm, there is less exposure for the entire system.

Virtual machine does not start unless all supporting services are available

The virtual machine does not start the guest until all supporting services are verified as available, including the bounding of pass-through regions, virtual device connections, and the availability of RAM, CPU cores, and so on.

This behavior prevents an attack that changes a configuration in an attempt to cause the virtual machine to halt in an unknown state.
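
A minimal sketch of the fail-closed pattern this behavior implies; the check functions are assumptions used for illustration, not the qvm implementation.

    #include <stdbool.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Illustrative stubs: a real implementation would verify the actual
     * pass-through regions, virtual device connections, RAM, and cores. */
    static bool passthrough_regions_bounded(void) { return true; }
    static bool vdev_connections_ready(void)      { return true; }
    static bool ram_and_cores_available(void)     { return true; }

    int main(void)
    {
        /* Every supporting service is verified before the guest runs; if
         * any check fails, the guest is never started (fail closed). */
        if (!passthrough_regions_bounded() ||
            !vdev_connections_ready() ||
            !ram_and_cores_available()) {
            fprintf(stderr, "supporting service missing; guest not started\n");
            return EXIT_FAILURE;
        }
        printf("all supporting services available; starting guest\n");
        return EXIT_SUCCESS;
    }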

Virtual machine cannot be changed after launch

Only a very small subset of the configuration can be changed after the virtual machine is launched. This subset includes an option that allows the guest to create shared memory regions after launch (this ability can be disabled) and options that change the core pinning, priority, and scheduling of the virtual CPUs.

All other configuration is static after launch. For example, the guest cannot increase its RAM or its number of CPUs, add more virtual devices, or create new pass-through regions.
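
A minimal sketch of the post-launch policy described above; the enum values and function name are assumptions used for illustration, not the qvm implementation.

    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical change types, for illustration only. */
    enum vm_change {
        CHANGE_GUEST_SHMEM_CREATE,   /* this ability can also be disabled */
        CHANGE_VCPU_PINNING,
        CHANGE_VCPU_PRIORITY,
        CHANGE_VCPU_SCHEDULING,
        CHANGE_ADD_RAM,
        CHANGE_ADD_VCPU,
        CHANGE_ADD_VDEV,
        CHANGE_ADD_PASSTHROUGH
    };

    static bool allowed_after_launch(enum vm_change change)
    {
        switch (change) {
        case CHANGE_GUEST_SHMEM_CREATE:
        case CHANGE_VCPU_PINNING:
        case CHANGE_VCPU_PRIORITY:
        case CHANGE_VCPU_SCHEDULING:
            return true;     /* the small permitted subset */
        default:
            return false;    /* everything else is static after launch */
        }
    }

    int main(void)
    {
        printf("add RAM after launch allowed? %s\n",
               allowed_after_launch(CHANGE_ADD_RAM) ? "yes" : "no");
        return 0;
    }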

For more information, see Hypervisor security in the “Security features for system integrators” section.