Configuring guests
Guest images in a QNX virtualized environment are configured in the same way as in a non-virtualized environment. For example, for QNX guests, use a buildfile.
Configuration of any resources such as devices and memory regions that a guest needs, the interrupts delivered to the guest, and the host-domain vCPU threads that run guest code is all done through the VM configuration file. The sections that follow explain how to configure each of these components.
Information about building guest images is given in Building guests in the Building a QNX Hypervisor System chapter. Information about starting guests is provided in Starting a guest in the Booting and Shutting Down chapter.
Guest resource types
Each resource that a guest uses is located in an address space (e.g., the guest-physical memory space, the x86 I/O space, or the PCI configuration space). This space defines the resource type. In the VM configuration, the location entries consist of the common vdev option loc followed by a resource type identifier and other parameters specific to the resource type:
- mem:address ...
  The location is a guest-physical address (intermediate physical address). The first parameter gives the physical address as understood by the guest. For a full explanation and examples of all parameters used with this identifier, see the first form of the pass loc option in the VM Configuration Reference chapter.
- io:port ...
  The location is in the x86 I/O space. The first parameter is the port number. For an explanation of all parameters used with this identifier, see the first form of the pass loc option.
- pci:{pci_vid/pci_did|pci_bus:pci_dev[.pci_func] ...}
  The location is a PCI device. The device can be specified either by the vendor ID and device ID, or by the bus, device, and (optionally) the function number. For explanations and examples of these two different ways of specifying the PCI device, see the second form and the third form of the pass loc option.
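For example, location entries for the three resource types might look like the following sketch. The addresses, port number, and PCI identifiers here are illustrative values only, not taken from the reference:

    pass loc mem:0x1c090000
    pass loc io:0x3f8
    pass loc pci:1af4/1000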
Default resource types
If no resource type identifier is specified in the device configuration, a suitable default is chosen. The most common default resource type is mem:, but the default changes according to the vdev. For example, the default resource type for VIRTIO devices is pci:, though these devices can also be specified as mem: resources. See the descriptions of individual devices for each device's default resource type.
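As a sketch of this behavior, a VIRTIO device could rely on its pci: default or be placed explicitly in guest-physical memory. The vdev name and all values below are illustrative assumptions, not confirmed entries from the reference:

    vdev virtio-console
        loc 1af4/1044

    vdev virtio-console
        loc mem:0x1c0c0000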
Guest interrupts
A guest interrupt is specified by an intr entry in the VM configuration. Depending on the platform, this entry can have two parts:
- the guest device interrupt controller name, which is specified by the vdev name property
- the interrupt controller input line that is asserted when the device wishes to raise an interrupt, which is specified by the number given as part of the intr argument
Information about these two properties is given in Common vdev options.
On x86 platforms, the Local Advanced Programmable Interrupt Controller (LAPIC) hardware is automatically supplied. There is no need to specify a vdev for it, and guest vdevs that feed interrupts to the LAPIC hardware should simply specify apic for their interrupts; no input line number needs to be stated.
    vdev ioapic
        intr apic
        name myioapic

    vdev ser8250
        hostdev ...
        intr myioapic:4
On ARM platforms, the Generic Interrupt Controller (GIC) hardware is automatically supplied; it is not necessary to specify this vdev. You can still specify it if you want to change its option values, including the input line that gets asserted (for details, see the vdev gic reference). The default name for guest devices that feed interrupts to the GIC is gic, but you can use the vdev's name property to change this.
    vdev pl011
        loc 0x1c090000
        intr gic:37
        name mygic
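The name property can also be applied to the interrupt controller itself. The following sketch (the option placement is an assumption, not taken from the reference) renames the GIC and has the pl011 refer to it by the new name:

    vdev gic
        name mygic

    vdev pl011
        loc 0x1c090000
        intr mygic:37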
Controlling guest behavior through vCPUs
Many OSs auto-detect functionality offered by the underlying CPUs. For guest OSs in a hypervisor system, you should usually configure a vCPU to run only on pCPUs (cores) of the same type. For details on doing so, see the cpu cluster option. If you want to run a vCPU on different core types, ensure that you know which CPU features the guest will use, and restrict the vCPU to a cluster of pCPUs that support these features.
You can set the vCPU priority in the VM configuration; the underlying qvm process applies this priority when starting the guest. After startup, qvm provides no mechanism for changing it. In the hypervisor host, however, you can adjust the priority of any thread, including a vCPU thread, via QNX OS mechanisms such as ThreadCtl(). It is best practice that the vCPUs in a given VM be configured with the same thread priority, whether at startup or any time afterwards.
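As a sketch of these two settings together, the cpu cluster option mentioned above restricts a vCPU to a cluster of pCPUs, and a thread priority can be assigned at startup. The cluster name and the sched option shown here are illustrative assumptions, not confirmed syntax; see the VM Configuration Reference for the exact options:

    # "big_cores" and the sched option are assumptions for illustration
    cpu cluster big_cores
        sched 10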
Also, note that thread priorities within a guest are distinct from those visible to the host scheduler. If a host thread, even a low-priority one, is ready to run and its priority is higher than a vCPU thread's priority, the host thread runs first; it does not matter that the vCPU is running a high-priority thread inside the guest. For further explanation, see the Scheduling section in the Understanding QNX Virtual Environments chapter.