QNX Technical Articles
QNX® Hypervisor 2.2: Release Notes
Date of this edition: October 13, 2022
Changes to these notes since January 27, 2021 are highlighted below.
QNX Hypervisor 2.2 is a non-safety release of the QNX hypervisor, built for QNX SDP 7.1 development and target systems.
QNX hypervisors are available in two variants: QNX Hypervisor, and QNX Hypervisor for Safety (QHS). The QNX Hypervisor variant, which includes the present release, is not a safety-certified product. It must not be used in a safety-related production system. If you are building a safety-related system, you must use the QHS variant that has been built and approved for use in the type of system you are building, and you must use it only as specified in its Safety Manual. If you have any questions, contact your QNX representative.
Be sure to install this commercial release in a new, separate installation for your QNX Hypervisor 2.2 work. You should not install this release in a QNX SDP 7.1 installation that includes any Early Access version of this product; Early Access versions may conflict with this commercial software.
Target platforms
QNX Hypervisor 2.2 supports AArch64 (ARMv8) and x86-64 hardware architectures. For information about support for specific boards, please contact your QNX representative.
Guest OSs
You can run the following guest OSs on a QNX Hypervisor 2.2 system:
- QNX Neutrino 7.1 (for AArch64 and x86-64 targets)
- QNX Neutrino 7.0 (for AArch64 targets only)
- Linux Ubuntu 18.04 or 16.04
- Android version 6.0 (Marshmallow) or later
Development host OSs
You can work with the following development hosts:
- Linux — Red Hat Enterprise Linux 7 64-bit, or Ubuntu Desktop 18.04 LTS 64-bit
- macOS version 10.14 or 10.15
- Microsoft Windows 10 Professional 64-bit, or Windows 8.1 Professional 64-bit
Build environments
You will need the appropriate environments to modify and build the hypervisor host domain, and you may also need build environments for the guest OSs you plan to run in the hypervisor's virtual machines (VMs):
- QNX SDP 7.1 for the hypervisor host domain and for QNX Neutrino 7.1 guests
Note: When you set up your QNX SDP 7.1 environment, you should update your installation with any available QNX SDP 7.1 patches.
- Appropriate build environments for Linux or Android guest OSs
Although QNX Neutrino 7.0 guests are supported, this support is intended only as a short-term measure for running existing guests from that legacy release on this newer QNX Hypervisor release. Any new QNX guests that you want to run in the hypervisor should be built with QNX SDP 7.1.
Throughout this document, you may see reference numbers associated with particular issues, changes, etc. When corresponding with our Technical Support staff about a given issue, please quote the relevant reference number. You might also find the reference numbers useful for tracking issues as they become fixed.
Contents
- What's in this release?
- Installation
- Usage
- Fixed issues
- Known issues
- Getting started with the documentation
- Technical support
What's in this release?
Packaging
The following packages are installed when you install QNX Hypervisor 2.2:
- com.qnx.qnx710.target.hypervisor.core_0.0.3.00288T202101091505S
- com.qnx.qnx710.target.hypervisor.group_2.2.0.00288T202101091505S
- com.qnx.qnx710.target.hypervisor.vdev.devel_0.0.3.00288T202101091505S
- com.qnx.qnx710.target.hypervisor.libhyp_0.0.3.00687T202012081527S
- com.qnx.qnx710.target.hypervisor.vdevpeer_0.0.3.00687T202012081527S
- com.qnx.qnx710.target.driver.virtio_0.0.3.00687T202012081527S
- com.qnx.qnx710.bsp.hypervisor_guest_armv7_0.0.2.00009T202012021915E
- com.qnx.qnx710.bsp.hypervisor_guest_x86_0.0.2.00005T202012021935E
- com.qnx.qnx710.bsp.hypervisor_host_0.0.3.00040T202012041003E
- com.qnx.qnx710.target.hypervisor.docs.plugins_0.0.3.00047T202012221022S
To see a list of the contents of a package, right-click it in the QNX Software Center, choose Properties, then click Package Contents. Due to a known issue with the QNX Software Center, attempting to view the contents of the BSP packages (com.qnx.qnx710.bsp.*) displays an error instead.
Functionality updates
QNX Hypervisor 2.2 includes the following new features and improvements:
Interrupt handling
Interrupt handling now includes two new pass intr attributes, t and u. These are specified after the guest_intr argument and before vector_number if this value is specified:
intr guest_intr[,{t|u}][=vector_number]
- The default method for handling interrupts has been changed to use ISR rather than IST, reducing the work required to handle interrupts. However, the new pass intr t attribute allows you to override the default and configure your system to use IST.
Note that, currently, ISRs are supported only if the targeted guest's interrupt controller is the gic vdev (i.e., on ARM boards); ISRs aren't supported on x86 boards, so the t attribute has no effect on these boards.
- The new pass intr u attribute permits immediate unmasking of interrupts to a pass-through device, eliminating the need for a guest exit to unmask the interrupt at EOI.
Note that incorrect implementation of this feature might create a race condition and result in the loss of the device from the system.
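For illustration only, here is a minimal sketch of how these attributes might appear in a qvm configuration file. The interrupt identifiers and the vector number are hypothetical, and the gic: form assumes an ARM target whose interrupt controller is the gic vdev; check the VM Configuration Reference chapter for the exact pass intr syntax for your board.
pass intr gic:58,t        # override the ISR default and use IST for this interrupt (hypothetical interrupt)
pass intr gic:59,u        # unmask immediately, avoiding a guest exit at EOI (hypothetical interrupt)
pass intr gic:60,u=0x40   # attribute placed before an explicit vector_number (placeholder values)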
Configuration support and troubleshooting
Configuration support and troubleshooting now include:
- The new qvm dryrun option can be specified in the VM configuration to initialize the complete environment in which a guest will run, then exit without actually starting the guest. If the VM configuration isn't valid, the qvm process instance logs an error before exiting. Error logging uses the mechanisms configured by the guest (see logger in the VM Configuration Reference chapter of the QNX Hypervisor 2.2 User's Guide). This option is especially useful when working with FDTs, and can be used together with the set option's fdt-dump-file argument.
- The set option's new fdt-dump-file argument allows you to instruct a qvm process instance to write the VM's FDT to a specified file. You can then examine the FDT to better understand any customizations you may need to make to it (see FDTs (ARM) in the VM Configuration Reference chapter of the User's Guide). If you use this argument with the dryrun option, you can get the guest's FDT without starting the guest. For example, starting a qvm process instance as follows:
qvm @mainline-guest.qvmconf set fdt-dump-file /tmp/fdt.dtb dryrun
would cause the process instance to write the guest's FDT to /tmp/fdt.dtb, then exit with a message like this:
FDT saved to '/tmp/fdt.dtb'
Exiting: dryrun completed
Device hand-off
The QNX Engineering Services team has technology to hand off pass-through devices from the QNX Hypervisor host to a guest. For example, a rearview camera can be displayed by drivers and applications running in the hypervisor host immediately after bootup, and the rearview camera hardware can then be handed over to a guest. At this point, the guest driver takes ownership of the device.
Handoff mechanisms vary depending on the hardware and the guest configuration. One common technique is to use PCI probing from the guest (which is detected by the hypervisor) to signal the transition. If this technology interests you, please contact your QNX representative.
Installation
The QNX Software Center has all of the components you will need for your QNX Hypervisor 2.2 release. To get and install your QNX Hypervisor 2.2 system, follow the instructions below.
The QNX Hypervisor 2.2 development environment
Add a new, separate installation for your QNX Hypervisor 2.2 work, even if you already have a QNX SDP 7.1 installation on your development host.
QNX Hypervisor 2.2 is built on QNX SDP 7.1. The hypervisor files are uniquely named, so installing the QNX Hypervisor 2.2 packages won't overwrite existing files in your QNX SDP 7.1 target or host directories. However, in some cases a board support file may require updating in order to work properly in a hypervisor system. The change may be one you make yourself, or one recommended to you by QNX Engineering.
It is best to prepare for such possible changes when you first install QNX Hypervisor 2.2 on your development host. To do this, in the QNX Software Center:
- On the Welcome page, choose + Add Installation (top option in the left-side list) to create a new installation.
- From the Available tab, expand the QNX Hypervisor 2.2 group and then choose the QNX Hypervisor 2.2 package to add a new installation.
BSPs
The QNX Hypervisor 2.2 packages that you download from the QNX Software Center include the BSPs for the hardware platforms supported by this release. However, if you will modify and build hypervisor systems, you may need to get additional BSPs. Remember the following:
- Your hypervisor host needs the board-specific BSP for the hardware platform on which it will run.
- QNX guests need guest OS release-specific and architecture-specific hypervisor guest BSPs; for example: QNX SDP 7.1 Hypervisor guest for generic ARM virtual machines.
- Even if you already have a BSP for your board, get the latest BSP.
- NXP i.MX8QM requires some special BSP configuration and setup for running the hypervisor. Please contact the QNX sales team for details.
- For the Qualcomm ADP family (8155/6155/8195), the BSP software is usually distributed as pre-built archives or chipcode repositories. We always recommend following the hardware vendor's process to install BSP components.
Usage
Startup and shutdown
The Booting and Shutting Down chapter of the QNX Hypervisor 2.2 User's Guide has instructions on how to:
- boot and shut down the hypervisor host
- start a qvm process instance
- boot and shut down guests in a hypervisor VM
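As a quick sketch only (the configuration file path is hypothetical), starting and stopping a guest from the host typically looks like the following; see the Booting and Shutting Down chapter for the complete, supported procedures:
qvm @/path/to/qnx-guest.qvmconf   # start a qvm process instance with its VM configuration (hypothetical path)
slay -s SIGQUIT qvm               # one way to stop a guest's qvm process instance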
Building a system
When you are ready to build your own system, just follow the instructions in the Building a QNX Hypervisor System chapter of the User's Guide.
Developing virtual devices
Developing custom virtual devices (vdevs) that use the Virtualization API (libhyp.a) or building front ends for vdevs may require additional documentation. If you're working with the Virtual Device Developer's Guide and you require details on Virtualization API functions and features, please contact the QNX support team.
Optimizing how the hypervisor sets up system RAM for the guest
The Performance Tuning
chapter of the User's Guide discusses how to configure RAM for the guest.
However, a walkthrough of the setup choices makes them easier to understand.
There are two ways to configure system RAM for the guest (the RAM that the guest uses for its main operations) in the guest configuration file; a minimal sketch follows the list:
- Use the ram option:
This is simplest because the hypervisor can provide memory to the guest from any RAM areas available to the host. There is no need to know where the memory is coming from in the host. The hypervisor must zero this memory before giving it to the guest.
- Use the pass option:
This can be very useful in embedded system design because you can explicitly define what memory from the host is given to the guest. The hypervisor does not zero this pass memory and assumes you have already ensured it is ready for use by the guest.
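As a minimal sketch of the ram approach (the guest location and size are placeholders that follow the form used in the Known issues section below), a guest configuration file might contain a line like the following; the exact syntax of the pass approach depends on your board and release, so see the VM Configuration Reference chapter:
ram 4096M,2048M   # 2 GB of zeroed guest RAM starting at the 4G guest-physical boundary (placeholder values)
# To give the guest specific host memory instead, use the pass option; see the VM Configuration Reference for the syntax.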
For a working example of how to use these options, see the QNX Hypervisor Public Forum at: https://community.qnx.com/sf/sfmain/do/viewProject/projects.qnx_hypervisor_public_forum.
You need a myQNX account to log into this site, and your account needs to be granted free access by your QNX Sales team. Please contact your QNX Sales team for details.
Fixed issues
The following issues have been fixed:
- Running a guest that uses symmetric multiprocessing (SMP) on a target system where the hypervisor host doesn't use SMP no longer causes the host to crash. (Ref# J2887883)
- QNX guests now support GICv3 Locality-specific Peripheral Interrupts (LPIs) or Interrupt Translation Services (ITS). Accordingly, the num-lpis and its options can now be used with QNX guests. (Ref# J2790132)
- ARM. The devc-virtio driver in QNX guests now properly unmasks MSIs from the virtio-console vdev if this vdev is configured to use PCI instead of MMIO. (Ref# J2795609)
- x86. When booting up on Denverton platforms, the PCI server no longer presents the following error (Ref# J2553131):
ERROR,0,0,3 The HW module could not be loaded. Either the environment variable is not set or the module is not accessible
- Textual substitution is now supported for configuration information passed to a qvm process instance through the command line, as well as through configuration files. (Ref# J2718608)
- ARM. The calculation for the mask used for LPI configuration has been corrected. (Ref# J2795882)
- ARM. We have updated our coding of the cache attributes for Stage 2 page tables so that a guest's erroneous interpretation of attributes for non-cacheable memory regions passed through to it won't create conditions that might cause the guest to become unresponsive. (Ref# J2791590)
- ARM. We now check that the provided Interrupt Collection ID (ICID) is valid before attempting to find the guest CPU collection pointer. (Ref# J2798229)
- ARM. We have corrected the definition of the IRM bit used when interpreting guest writes of an incorrect value to the GICD_IROUTERn registers. (Ref# J2798222)
- x86. Problems with the accuracy of information in the DSDT PCI interrupt routing table have been corrected. (Ref# J2797187)
Known issues
This release contains the following known issues:
- In the QNX Hypervisor 2.2 User's Guide, the ram VM configuration option has an inaccurate example for mapping PCI pass-through devices on x86 boards. The given example does not fully avoid the 2G to 4G range as claimed; consequently, the guest in the VM won't boot if you use this configuration. (Ref# J2924179)
Workaround: Use most of the configuration as it's specified but for the last line, use this setting instead of what's shown in the user documentation:
ram 4096M,3000M # Get 3 GB. Starts above the 4G boundary.
- Do not use fdt pass loc. This causes qvm to terminate with SIGILL under most circumstances. (Ref# J2893679)
- Guests that are repeatedly shut down and restarted become, at some point, unable to obtain an IP address, and therefore their networking doesn't function. This was seen when repeatedly booting guests via an SSH console, but wasn't seen when using a serial one. (Ref# J2890244)
Workaround: To recover guest networking:
- Slay the guest; for example, using shutdown or slay -s SIGQUIT qvm.
- Do the following network configuration steps on the host to fix the issue:
ifconfig vp0 destroy
ifconfig vp1 destroy
mount -Tio-pkt -o peer=/dev/qvm/qnx-guest-1/guest_to_host,bind=/dev/vdevpeers/vp0 devnp-vdevpeer-net.so
mount -Tio-pkt -o peer=/dev/qvm/qnx-guest-2/guest_to_host,bind=/dev/vdevpeers/vp1 devnp-vdevpeer-net.so
ifconfig vp0 -ip4csum -tcp4csum -udp4csum -tcp6csum -udp6csum
ifconfig vp0 up
ifconfig vp1 -ip4csum -tcp4csum -udp4csum -tcp6csum -udp6csum
ifconfig vp1 up
brconfig bridge0 add vp0 up add vp1 up
- Boot the guest and start the networking service in it.
- Observe that vt0 now has a valid IP address.
- For power-aware virtualized systems where virtual CPUs are being suspended, a guest
could wake up in an improper state. (Ref# J2889062)
Workaround: For QNX Hypervisor 2.2, don't use a power-aware guest.
- ARM. If an SERROR is generated while a guest is running, this causes the qvm process that hosts the guest to immediately terminate. (Ref# J2888038)
- The threads option for the virtio-blk vdev
is useful for guests that will have multiple streams of block requests to the
host-based block driver. The documented maximum value that can be set is 32 threads.
However, setting a high value within this threshold can cause more throttling and
overhead than expected. (Ref# J2877688)
Workaround: Set this value to match the total number of large bulk streams that the guest will use simultaneously. This is typically 1 to 4 streams in a Linux or Android guest; therefore, limiting the threads value to a maximum of 4 is recommended for best performance.
- x86. The guest may experience throughput rates that vary more than expected when connected to the outside world through the host. This issue is being investigated. If your design requires fixed throughput rates from guest to host to an external network, please discuss with the QNX Sales Support team. (Ref# J2872938)
- ARM. Support for PCI in QNX guests is marked as Experimental. To use this feature, download from the QNX Software Center and install the following package: com.qnx.sdp.target.connectivity.pci.hw.pci_hw_fdt. This package contains pci_hw-fdt.so.2, which you need for PCI support in your QNX guest. PCI in Linux and Android guests is officially supported and marked as Stable. (Ref# J2790132)
. (Ref# J2790132) - If you attempt to pass through to the guest some PCI devices with a specific memory
layout, these devices will cause the qvm process to fail with
the message
msix table and pending bits array are on the same page
. (Ref# J2769737) An oversubscribed QNX guest can cause undesired behavior. (Ref# J2648397)
Sharing physical cores is supported. However, undesired behavior in terms of timing and performance may be seen if any hypervisor vCPU thread can't run when it is ready due to CPU usage by a competing thread, whether it's another vCPU or a host thread.
In other words, be aware of oversubscribing in your design. This can occur if, for example:
- a multicore guest is given only one vCPU
- a guest has more vCPUs than there are physical cores (there's contention for many cores)
- two guests share the same core (oversubscribing of the core)
- host threads pre-empt a vCPU thread (oversubscribing of the system)
Workaround: Avoid contention for physical CPUs in your design and ensure that any pre-emption is accounted for (see the Performance Tuning chapter of the User's Guide).
- ARM. GDB debugging: single-stepping fails if the instruction at the breakpoint throws an exception (e.g., a page fault). (Ref# J2502869)
- The hypervisor's use of the MSI capability means that a guest driver may find
that the MSI vectors are initially masked. We have observed that some drivers
(e.g., QNX Neutrino 6.6 Ethernet) do not unmask the MSI vectors as expected,
and are thus not able to receive interrupts from the device.
(Ref# J2250729).
Note: QNX Neutrino 6.6 guests aren't officially supported for this release. If you want to support such guests, contact the QNX support team.
Workaround: In the guest experiencing problems, clear the masking bits in the MSI capability before starting the driver. For example, for the Ethernet device on a QNX Neutrino 6.6 guest, use the pci-tool utility as follows:
pci-tool -dB:D:F --write CFG:0x60=0
where B is the bus, D is the device, and F is the function, as seen by the guest.
Note that the offset used above is device-specific; it may change for other devices.
- ARM. GDB debugging from the host isn't supported for 32-bit guests. (Ref# J2204919)
Getting started with the documentation
After you've installed the QNX Hypervisor 2.2 documentation package, you'll find HTML documentation in the help system of the QNX Momentics Integrated Development Environment (IDE). For instructions on starting the IDE, see the QNX SDP release notes.
The individual documents include:
- QNX Hypervisor 2.2 User's Guide — Provides instructions for installing and running a QNX Hypervisor system, changing system components and configuration, and using hypervisor features such as virtual devices.
- QNX Hypervisor 2.2 Virtual Device Developer's Guide — Provides instructions for developing your own virtual devices (vdevs) for a QNX Hypervisor system, as well as samples of source code for some simple vdevs.
- QNX Hypervisor 2.2 Virtual Device Developer's API Reference — Describes the API for developing virtual devices (vdevs) for a QNX Hypervisor system.
You can also access the documentation using a myQNX account that has been licensed for QNX Hypervisor 2.2, as follows:
- Log into our website with your myQNX account at: https://www.qnx.com/account/login.html
- Go to the download folder for this release: https://www.qnx.com/download/group.html?programid=56048
- Download the PDF files for the documents.
Technical support
For information about technical support for any QNX product, go to the Support area on our website: https://blackberry.qnx.com/en/support. You'll find a range of support options.
You can also contact your QNX representative.