About Writing Virtual Devices

This chapter explains the concepts you need to understand before you start writing virtual devices (vdevs) for a QNX hypervisor system.

Types of vdevs

There are three fundamental types of vdevs:
  • boot vdev — intervenes to change the boot process of a guest; in a QNX hypervisor system, this intervention would occur when the vdev received the VDEV_CTRL_GUEST_STARTING callback (see Lifecycle of a vdev)
  • instruction trap vdev — emulates a special guest instruction that was trapped by the hardware virtualization extensions
  • emulation vdevs and para-virtualized devices — the types of vdevs discussed in this guide: these either emulate physical devices, or provide functionality that physical devices might provide in a non-virtualized environment, but without the constraints of emulating hardware

You can write boot and instruction trap vdevs to use in a QNX hypervisor system. Emulation vdevs and para-virtualized devices are far more common and are therefore the focus of this guide. Unless otherwise specified, in this document vdev refers to emulation vdevs and para-virtualized devices.

About QNX virtual environments and vdevs

QNX hypervisors include a few implicit vdevs; these are built into the hypervisor code itself and don't need to be explicitly specified in a virtual machine (VM) configuration.

The code also includes many optional vdevs that you can add to a VM configuration to make them available to the guest that runs in that VM. These optional vdevs are shared objects, which you must explicitly add to the configuration for each VM that will need them. VMs are created by qvm process instances.

When you write a vdev, you create a shared object, which you must make available to the hypervisor host because that is where it will run. This is an important point to remember: a vdev belongs to the hypervisor host domain, not to the guest. The guest uses a vdev in the same way that it uses a hardware device.

The distinction between implicit and explicit vdevs is an implementation detail: implicit vdevs are included in the hypervisor code; explicit vdevs are shared objects that must be added to a VM's configuration.
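In practice, the distinction looks like this: the fragment below sketches what adding an explicit vdev to a qvm configuration file might look like. The guest-physical location and interrupt values are illustrative placeholders, not taken from a real board configuration; check your qvm reference for the exact option syntax.

```
# Hypothetical qvm configuration fragment: load the ser8250 vdev
# shared object and map it into the guest (values are examples only)
vdev ser8250
    loc 0x1c090000
    intr gic:37
```

An implicit vdev needs no such entry, because it is part of the hypervisor code itself.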

A more important difference between vdevs is based on what they do and how they do it:
  • An emulation vdev emulates, in whole or in part, a physical device. The guest can interact with the vdev just as it does with a physical device. The vdev may emulate the device in full (e.g., the timer8254 vdev, which emulates a timer chip), or emulate only some aspects of it. An emulation vdev may also act as an intermediary between the guest and a physical device (e.g., the ser8250 vdev).
  • A para-virtualized device is a vdev that has no hardware equivalent. To use it, a guest must be aware that it is running in a VM; it needs an appropriate driver and interface. Para-virtualized devices can be efficient in a hypervisor system, providing important functionality without the constraint of having to fully emulate a hardware device.

For more discussion of virtual devices and of devices in general in a QNX hypervisor system, see Devices in the User's Guide.

Public header files

The vdevs described in this guide use functions declared in the public vdev header files.
CAUTION:

Many QNX hypervisor functions take pointers to data structures not defined in the public header files. Do not try to use the internal data fields in these structures.

These structures include:
  • struct guest_cpu: encapsulates a vCPU of the guest
  • struct guest_timer: encapsulates a single time-counting entity that can have a callback function invoked after a certain amount of guest time has elapsed

Device drivers

If you write an emulation vdev, your guest should be able to use the driver it would use to work with the corresponding hardware device in a non-virtualized system. If you plan to write a para-virtualized device, you may need to write a corresponding driver for your guest OS.

Note:
Linux includes modules for the more common types of VIRTIO devices, so you may only need to make sure these modules are loaded into your Linux kernel.
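For example, a guest kernel that uses VIRTIO block, network, and console devices typically enables options like the following in its kernel configuration; which of these you need depends on the vdevs in your VM configuration:

```
# VIRTIO transport and common device drivers in the Linux kernel config
CONFIG_VIRTIO=y
CONFIG_VIRTIO_PCI=y
CONFIG_VIRTIO_MMIO=y
CONFIG_VIRTIO_BLK=m
CONFIG_VIRTIO_NET=m
CONFIG_VIRTIO_CONSOLE=m
```

On a running guest, lsmod | grep virtio shows which VIRTIO modules are loaded, and modprobe virtio_net (for example) loads one that was built as a module.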

Terminology

Hypervisors are relatively new, and the terms used to describe them, their components, and what they do aren't always used consistently. For example, the terms guest and VM are often used interchangeably, when in fact they indicate very different things.

Conversely, different writers may use different terms for the same thing. For example, ARM calls the memory assembled for a VM intermediate physical memory, but Intel calls it guest physical memory.

We try to be precise and consistent in our use of terminology across our own documentation. For more information about how we use hypervisor and virtualization terminology, see A note about nomenclature and Terminology in the User's Guide.
