QNX Neutrino system considerations

Why doesn't QNX Neutrino allow a partition to be deleted?

It's safer and much more efficient not to delete a partition. A suggested alternative is to set the partition's budget to zero.
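For example, you can reduce a partition's budget to zero at run time with the SCHED_APS_MODIFY_PARTITION command to SchedCtl(). The sketch below assumes you already have the partition's ID (e.g. saved from SCHED_APS_CREATE_PARTITION or obtained with SCHED_APS_LOOKUP); the function name starve_partition() is made up for illustration, and the sched_aps_modify_parms member names should be checked against <sys/sched_aps.h> for your QNX release:

  #include <stdio.h>
  #include <sys/neutrino.h>
  #include <sys/sched_aps.h>

  /* Sketch: "retire" a partition by starving it instead of deleting it.
   * Assumes the caller already knows the partition's ID.  Verify the
   * structure member names against <sys/sched_aps.h> for your release. */
  int starve_partition(int partition_id)
  {
      sched_aps_modify_parms modify;

      APS_INIT_DATA(&modify);
      modify.id = partition_id;
      modify.new_budget_percent = 0;      /* no guaranteed CPU budget    */
      modify.new_critical_budget_ms = 0;  /* no critical budget either   */

      if (SchedCtl(SCHED_APS_MODIFY_PARTITION, &modify, sizeof(modify)) == -1) {
          perror("SCHED_APS_MODIFY_PARTITION");
          return -1;
      }
      return 0;
  }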

To delete a partition, QNX Neutrino would have to locate all the threads in the partition (or verify that there are none) and move them to some other partition.

Threads are mapped to their partitions with a single pointer. There is no back pointer from a partition to its threads; adding one would require a linked list through all of a partition's threads to implement the many-to-one mapping.

In addition, QNX Neutrino would need extra kernel memory for a two-way queue through all the thread_entry structures, and it would have to perform two-way queue extractions every time it inherited partitions (e.g. on message sending) while avoiding races with the simultaneous destruction of other threads.
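The following fragment is purely illustrative (it is not the actual kernel source) and sketches the trade-off: with only a forward pointer per thread, a partition can't enumerate its members, whereas a deletable partition would need two-way queue links in every thread entry:

  /* Illustrative only -- not the actual kernel data structures. */

  /* Today: each thread carries a single forward pointer to its partition.
   * The partition has no list of its threads, so deleting it would mean
   * scanning every thread in the system.                                  */
  struct thread_entry {
      struct partition *partition;        /* many-to-one, forward only */
      /* ...other per-thread state... */
  };

  /* What deletion would require: two-way queue links in every thread
   * entry (extra kernel memory), plus queue extraction on every
   * partition inheritance (e.g. message sending), done without racing
   * against the destruction of other threads.                          */
  struct thread_entry_deletable {
      struct partition *partition;
      struct thread_entry_deletable *next_in_partition;
      struct thread_entry_deletable *prev_in_partition;
  };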

How does the thread scheduler plug into procnto?

See the next answer.

Is the classic scheduler still present when the thread scheduler is active?

The adaptive partitioning scheduler is part of the kernel.

It is shipped as a library module (libmod) that is built into the image along with procnto. Procnto still contains the code for the classic QNX Neutrino scheduler; however, when the thread scheduler module is present, procnto initializes the thread scheduler instead of the classic scheduler. The thread scheduler then points a set of function pointers, one for each primitive scheduling operation (such as ready(), block(), etc.), at its own functions. Finally, it creates the system partition, which it returns to procnto.
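Conceptually, the hookup resembles the sketch below. All of the names here are hypothetical (procnto's internals aren't public); it only illustrates the mechanism described above, a table of per-primitive function pointers that the module repoints during initialization:

  #include <stdio.h>

  /* Illustrative sketch only -- hypothetical names, not procnto source. */

  struct thread_entry;   /* opaque for this sketch */

  /* One function pointer per primitive scheduling operation. */
  struct sched_ops {
      void (*ready)(struct thread_entry *thp);
      void (*block)(struct thread_entry *thp);
  };

  /* Classic scheduler implementations (stubs for the sketch). */
  static void classic_ready(struct thread_entry *thp) { (void)thp; puts("classic ready()"); }
  static void classic_block(struct thread_entry *thp) { (void)thp; puts("classic block()"); }

  /* Thread scheduler (APS) implementations (stubs for the sketch). */
  static void aps_ready(struct thread_entry *thp) { (void)thp; puts("APS ready()"); }
  static void aps_block(struct thread_entry *thp) { (void)thp; puts("APS block()"); }

  /* The kernel calls its scheduling primitives through this table. */
  static struct sched_ops sched_ops = { classic_ready, classic_block };

  /* When the thread scheduler module is built into the image, its init
   * code repoints each primitive at its own function, then creates the
   * system partition (omitted here).                                    */
  static void aps_module_init(void)
  {
      sched_ops.ready = aps_ready;
      sched_ops.block = aps_block;
  }

  int main(void)
  {
      sched_ops.ready(NULL);   /* classic ready() */
      aps_module_init();
      sched_ops.ready(NULL);   /* APS ready()     */
      return 0;
  }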

Does the thread scheduler inhibit I/O interrupts?

Yes. The thread scheduler calls InterruptDisable() for slightly longer than the time required to call ClockCycles() each time it must microbill. Note that it doesn't inhibit interrupts to get mutual exclusion between the clock interrupt handler, the scheduling algorithm, retrieving partition statistics, or changing budgets.
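The cost per microbilling operation is therefore roughly one ClockCycles() call made with interrupts off. A user-space sketch of the same pattern is shown below (illustrative only, not the scheduler's code; the function name sample_cycles() is made up, and InterruptDisable() requires I/O privileges, e.g. via ThreadCtl(_NTO_TCTL_IO, 0)):

  #include <stdint.h>
  #include <sys/neutrino.h>

  /* Illustrative only: user-space sketch of the microbilling pattern,
   * not the thread scheduler's actual code.  The calling thread must
   * have I/O privileges to call InterruptDisable().                   */
  static uint64_t sample_cycles(void)
  {
      uint64_t now;

      InterruptDisable();    /* interrupts off...                    */
      now = ClockCycles();   /* ...just long enough to read the      */
      InterruptEnable();     /* free-running cycle counter           */

      return now;
  }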


Is there a performance limitation on how often I can call SchedCtl(SCHED_APS_PARTITION_STATS,...) to get statistics?

Other than the cost of the SchedCtl() kernel call, the answer is no.

Getting statistics doesn't inhibit interrupts, or delay window rotations or the scheduling algorithm (on other SMP processors). Consistent retrieval of statistics is accomplished by detecting collisions and having the API back off and retry. Note that the call to SchedCtl(SCHED_APS_PARTITION_STATS,...) fails with EINTR only in the unlikely case of three consecutive collisions. In general, this can occur only if the user has set the clock period to such a short value that it's likely unsafe for the rest of the system.
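In practice, a caller can simply retry when the call does return EINTR. Here is a minimal sketch, assuming you want statistics for the first few partitions; the function name fetch_partition_stats() and MAX_PARTS are made up, and the exact sched_aps_partition_stats layout should be checked in <sys/sched_aps.h> for your QNX release:

  #include <errno.h>
  #include <stdio.h>
  #include <sys/neutrino.h>
  #include <sys/sched_aps.h>

  #define MAX_PARTS 8   /* report on up to the first 8 partitions (assumption) */

  /* Sketch: fetch per-partition statistics, retrying on the rare EINTR
   * that signals repeated collisions.  Verify the structure members
   * against <sys/sched_aps.h> for your release.                        */
  int fetch_partition_stats(sched_aps_partition_stats stats[MAX_PARTS])
  {
      int rc;

      do {
          APS_INIT_DATA(&stats[0]);
          stats[0].id = 0;   /* start reporting at partition 0 (assumed usage) */
          rc = SchedCtl(SCHED_APS_PARTITION_STATS, stats,
                        MAX_PARTS * sizeof(stats[0]));
      } while (rc == -1 && errno == EINTR);   /* collision: just call again */

      if (rc == -1) {
          perror("SCHED_APS_PARTITION_STATS");
      }
      return rc;
  }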