Programming Issues

Scheduling

The main difference in scheduling between QNX 4 and QNX Neutrino is that scheduling is done by thread, not by process. In QNX Neutrino, the highest-priority thread is chosen to run, regardless of what process it's in.

This has some interesting ramifications. For instance, from a QNX 4 perspective, a process can preempt itself! Of course, this is minimized when you first migrate to QNX Neutrino, since all your processes will be single-threaded.

Priority range

QNX Neutrino extends the priority range (from 0 to 31) to 0 through 63. Higher numbers still represent higher priority. The sched_get_priority_min() and sched_get_priority_max() calls simply return the minimum and maximum priority; the special idle thread (in the process manager) has priority 0.

At each priority, the threads in QNX Neutrino are scheduled as in QNX 4, with the exception that there's no longer an adaptive scheduling policy. The available policies are FIFO and round-robin, both of which operate the same as in QNX 4.

Remember that these policies come into play only when there's more than one thread ready to run at the same priority.


Note: According to POSIX, there's a third scheduling algorithm, SCHED_OTHER, whose meaning is left up to the OS vendor. Currently in QNX Neutrino, SCHED_OTHER is the same as SCHED_RR (round-robin), but that may change some day, so we don't recommend using SCHED_OTHER.

QNX Neutrino supports the getprio() and setprio() function calls from QNX 4. But because the scheduling in QNX Neutrino is by thread, not by process, there's a caveat here: When attempting to set the priority of a process by calling setprio(), thread number 1 in the process has its priority changed, not all threads in the process.

If the process ID given to setprio() is zero, indicating the current process, it's the calling thread within that process whose priority will be set. Since QNX 4 code ported to QNX Neutrino would likely contain only one thread anyway, this is just what you'd want to have happen.
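
For example, in a ported single-threaded process, these calls behave just as they did under QNX 4 (a minimal sketch; whether you actually bump the priority is up to you):

#include <sched.h>

int prio;

prio = getprio (0);     /* the calling thread's priority */
setprio (0, prio + 1);  /* changes the calling thread only */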

Because of its increased number of synchronization primitives, as well as the inclusion of threads, QNX Neutrino has more states than QNX 4. Besides the familiar READY, SEND-blocked, RECEIVE-blocked, and REPLY-blocked states, there are additional states (such as MUTEX, CONDVAR, and JOIN) that were either slightly different or not present at all in QNX 4.

The QNX Neutrino pidin utility lets you see the thread state under the "STATE" column. It's roughly analogous to ps -ef in QNX 4. In cases where a thread is blocked awaiting some other thread (e.g. waiting for the reply to a write on a serial port), the "Blocked" column shown by pidin indicates the thread ID that the thread is blocked on.

Process issues

Process creation

The qnx_spawn() function and the qnx_spawn_options structure are no longer supported. The spawn*() family of functions (spawnl(), spawnve(), ...) still exists. There's also a new function called spawn() that provides much of the functionality of the qnx_spawn_options structure.


Note: POSIX has a new function called posix_spawn(), which we don't support as of the time of this writing.

Something similar to the io vector (the iov parameter to qnx_spawn() and the iov member of qnx_spawn_options) is available via the fd_map parameter to spawn(). Note, however, that the FDs you pass in fd_map will be the only ones open for the child.
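
Here's a sketch of using fd_map (the program name and FD numbers are made up; passing NULL for the inherit and envp arguments assumes the defaults are acceptable):

#include <spawn.h>

int    fd_map[3] = { 0, 1, 2 };  /* child's FDs 0..2 map to these FDs of ours */
char  *args[]    = { "child_prog", NULL };
pid_t  pid;

/* these three FDs will be the only ones open in the child */
pid = spawn ("/usr/bin/child_prog", 3, fd_map, NULL, args, NULL);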

When calling the spawn() function, note that there are some undocumented SPAWN_* flags. They're undocumented because they're mainly intended for people migrating from QNX 4; you'll find them -- including some readily recognizable ones -- in the <spawn.h> header file.

QNX Neutrino also supports fork(), but there's the restriction that fork() can't be used within a multithreaded process. For this case, you might consider vfork() instead.

Process flags

The issues involved with the most frequently used flags that you can set in QNX 4 using the qnx_pflags() function -- such as _PPF_SIGCATCH, _PPF_PRIORITY_REC, and _PPF_PRIORITY_FLOAT -- are discussed in the Messages section later in this chapter.

Native QNX networking

The equivalent of FLEET, the QNX 4 native networking, is Qnet. From the command line, instead of using node IDs ("nids"), you use node names. From code, instead of using nids, you use node descriptors. For detailed information on these issues, see the sections called Qnet Networking in the Programmer's Guide and System Architecture.

I/O Managers vs Resource Managers

QNX Neutrino has the same concept as QNX 4 I/O managers, but they're called resource managers instead. Unless you used the I/O manager framework available for QNX 4 as free software, you'll have to rewrite most of your I/O manager from scratch.

In QNX Neutrino, a resource manager library is provided for you as part of the regular libraries. This library hides a lot of the gory details, allowing you to concentrate on code that's specific to your application while still presenting a POSIX front end to the client.

The downside to this is that migrating I/O managers will likely be the one set of code that involves the most work, because this is where there are the most differences. The work is least if you used the I/O manager framework that's available for QNX 4 as free software, since your process will be architecturally similar.

There's a chapter in the Programmer's Guide called "Writing a Resource Manager" that goes into detail on how to write these.

Messages

Connection-oriented philosophy

The QNX Neutrino OS still uses the send/receive/reply model. A receiver still blocks on some receive function call, a sender still sends via some send function call and blocks until the receiver replies.

The functions involved are MsgSend*(), MsgReceive*(), and MsgReply*(), which replace the QNX 4 Send*(), Receive*(), and Reply() calls.

There's still the multipart message option -- the functions whose names end in "v" are the multipart message versions (e.g. MsgSendv() is analogous to Sendmx()).

There's also an additional function for replying called MsgError(). It takes an errno value as a parameter and causes the MsgSend*() to return with -1 and errno set to this value. The MsgReply*() also has an interesting new status parameter. Whatever you pass for this will be what the MsgSend*() returns.
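
For example, a receiver's reply path might look like this sketch (my_reply_t, ok, and the status value 27 are made up for illustration):

#include <sys/neutrino.h>
#include <errno.h>

my_reply_t reply;   /* your own reply structure */
int        ok;
...
if (ok)
    MsgReply (rcvid, 27, &reply, sizeof(reply));  /* sender's MsgSend*() returns 27 */
else
    MsgError (rcvid, ENOMEM);   /* sender's MsgSend*() returns -1, errno = ENOMEM */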

Channel IDs vs process IDs

There are also some fundamental differences. Under QNX 4, the sender sent to a process via a process ID, but this no longer works when there could be multiple threads within the receiving process.

Under QNX Neutrino, some thread in the receiving process creates a channel (via ChannelCreate()). Then whichever thread or threads want to receive messages from that channel call a MsgReceive*() function, passing it the channel ID (chid). So, in QNX Neutrino, you receive using channel IDs, not process IDs.

Some thread in the sending process then creates a connection to that channel (usually via ConnectAttach()). Then whichever thread or threads want to send a message will send using the connection ID (coid) via a MsgSend*() function.

So, in QNX Neutrino, you send through a connection, not to a process ID. The MsgReceive*() function returns a receive ID (rcvid) and passes this to the MsgReply*(). So in QNX Neutrino, you reply to a receive ID, not a process ID.

Notice that this is connection-oriented, unlike in QNX 4 where any sender could send to any receiver just by passing the receiver's process ID to the send function call. In QNX Neutrino, the receiver must deliberately advertise its channel ID before any sender can create a connection to it and send.
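
In skeletal form, the two sides look like this (error checking omitted; how the sender learns the receiver's pid and chid is the subject of the next section):

#include <sys/neutrino.h>

/* receiver: create a channel, then serve it */
int chid = ChannelCreate (0);
for (;;) {
    my_msg_t msg;                   /* your own message structure */
    int rcvid = MsgReceive (chid, &msg, sizeof(msg), NULL);
    /* ... process the message ... */
    MsgReply (rcvid, 0, NULL, 0);   /* unblocks the sender */
}

/* sender: connect to the receiver's channel, then send */
int coid = ConnectAttach (0, pid, chid, _NTO_SIDE_CHANNEL, 0);
MsgSend (coid, &msg, sizeof(msg), &reply, sizeof(reply));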

How should the receiver be written?

There are various ways to write the receiver. You have the same options as under QNX 4 -- from a plain receive loop all the way to resource managers (called I/O managers in QNX 4). The difference is that in QNX Neutrino applications, we recommend you write receivers as resource managers.

One of the reasons for this recommendation is that the resource manager library takes care of many of the details for you. This is even more important in QNX Neutrino, as there are now more details. For example, in QNX 4, when receiving a message from a sender on another node of the native QNX network, the number of bytes received was always the smaller of what the sender was sending and what the receiver was asking to receive -- never less.

Under QNX Neutrino, however, the receiver could receive less than both of those amounts, depending on the packet size of the protocol used. The resource manager library handles this detail for you. (If you're not using the resource manager library, see the section "Receiving from across the network" in this chapter.)

Although we recommend writing resource managers, we also recognize that people migrating from QNX 4 who used a simple Receive() loop may not want to make the many changes required to convert to resource managers. As such, this section will go into some detail on doing proper MsgReceive() handling.

How does the sender find the receiver?

We said above that the server creates a channel and advertises the channel ID. The sender then somehow determines that channel ID and connects to it. But how does the server advertise the channel ID?

  1. If you're willing to rewrite your receivers, or if you previously wrote them as I/O managers, then you could write receivers as resource managers. In this case, the ChannelCreate() is done by the resource manager library and the ConnectAttach() is done by the open(). The sender finds the receiver by calling open():
    fd = open(_name_receive_registered, ...);
    ...
    MsgSend(fd, ...);

    One thing that falls out very nicely from this is that to connect to a server on another node of the native QNX network, you need only put node information at the front of the name you pass to open() (e.g. fd = open("/net/console/dev/robotarm", ...)).

    Note that a resource manager is the equivalent of an I/O manager in QNX 4. If you wrote your I/O managers using the iomanager framework in /usr/free (or had a similar one of your own) then, although you'll have to use different function calls, architecturally the resource manager library and the I/O manager framework are very similar.

  2. In QNX Neutrino, there's a set of functions, including name_attach() and name_open(), that do the job that qnx_name_attach() and qnx_name_locate() do in QNX 4 (see the sketch after this list). Note that global names are supported via the gns process.
  3. If the receiver is the parent and the sender is the child, then the channel ID can be passed in the spawn argument list.
  4. If you have a starter process that starts the above two processes, then starter could create a channel and pass the channel ID to its children via command-line args. The children would then send their respective channel IDs to starter (effectively registering with starter) and request each other's channel IDs from starter.
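
Here's a minimal sketch of the name_attach()/name_open() approach mentioned in option 2 (error checking omitted; "myname" is a made-up name, and real code must also handle the system pulses that name_attach() channels receive):

#include <sys/dispatch.h>
#include <sys/neutrino.h>

/* receiver: attach a name; receive on the channel it creates */
name_attach_t *attach = name_attach (NULL, "myname", 0);
for (;;) {
    my_msg_t msg;               /* your own message structure */
    int rcvid = MsgReceive (attach->chid, &msg, sizeof(msg), NULL);
    if (rcvid > 0) {            /* rcvid == 0 would be a pulse */
        /* ... process the message ... */
        MsgReply (rcvid, 0, NULL, 0);
    }
}

/* sender: look the name up, then send as usual */
int coid = name_open ("myname", 0);
MsgSend (coid, &msg, sizeof(msg), &reply, sizeof(reply));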

The method that you choose depends on how much migrating you wish to do. If you want to do as little as possible, then the migration library is the starting place. If you don't mind rewriting a little of your Send code, then name_attach()/name_open() might be the way to go. If you're already using I/O managers or want to migrate to resource managers, then go ahead and do so.


Note: This solution is deprecated in QNX Neutrino 6.3.0.

Receiving messages in a resource manager

When looking at writing a resource manager, one of the first things you'll wonder about is how to send messages to it and get replies back. There are various ways:

  1. The POSIX way is for the sender to use the POSIX devctl() function call to send the message. The resource manager would have an io_devctl handler registered for processing the message. The only disadvantage to this approach is that there's only one message-buffer parameter and one size parameter in the devctl() call. This means that if you want to send a 10-byte message and get a 1000-byte reply back, you must provide a 1000-byte buffer and specify a size of 1000 bytes. Even though the buffer contains only 10 bytes of data for the send message, the entire 1000-byte buffer will be sent (see the devctl() sketch below).
  2. Another way is to use message_attach() to register a range of message types and a handler to be called whenever a message is received whose type is in that range. With this method, the sending can be done via MsgSend*() and the entire contents of the send message and the reply message are in your control.
  3. You can send messages using MsgSend*() by putting a header of type io_msg_t at the front of your message. Set the type member to _IO_MSG. The resource manager would register a msg handler in the resmgr_io_funcs_t structure, and when a message of type _IO_MSG arrives, your msg handler will be called with the message. The reply can be anything.

Don't forget that the above approaches show how to send a message and get a reply back. If all your client wants to do is send some data, then the write() function call may be all you need. The read() function can be used for the opposite direction.
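
As a sketch of the devctl() approach from the client side (the device name, command, and data type here are all made up; in practice the command and type live in a header shared with the resource manager):

#include <devctl.h>
#include <fcntl.h>

/* hypothetical command and data type */
typedef struct {
    int param;
    int result;
} my_data_t;
#define MY_CMD_GET   __DIOTF(_DCMD_MISC, 0x01, my_data_t)  /* data flows both ways */

int       fd;
my_data_t data;

fd = open ("/dev/sample", O_RDWR);   /* made-up resource manager name */
data.param = 5;
devctl (fd, MY_CMD_GET, &data, sizeof(data), NULL);
/* data.result now holds the resource manager's answer */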

_PPF_SIGCATCH

Just as QNX 4 has the _PPF_SIGCATCH flag, QNX Neutrino has the _NTO_CHF_UNBLOCK flag for the same reasons. In QNX Neutrino, the flag is set for the channel that's being received on -- it's passed to ChannelCreate(). One difference is that in QNX 4 this affected only REPLY-blocked senders that were hit with a signal. In QNX Neutrino, it still affects REPLY-blocked senders that are hit with a signal, but it also affects senders that want to time out (via TimerTimeout() or timer_timeout()).

The _NTO_CHF_UNBLOCK flag is automatically set for resource managers. If the receiver is a resource manager, then when the REPLY-blocked sender wants to unblock, the resource manager library will call an io_unblock handler. If you don't provide an io_unblock handler, then default handling will be done for you, possibly resulting in your client not unblocking when it wants to. The rcvid member of the resmgr_context_t structure (ctp->rcvid -- you'll learn about this structure when you write resource managers) is the one you would reply to and/or use to look up the sender in a list of blocked senders.

If you're calling MsgReceive*() directly, then a pulse will arrive from the kernel (pulses are discussed later). The code member of the pulse message will be _PULSE_CODE_UNBLOCK, and the value member will be the receive ID from the time that the MsgReceive*() received the sender's message. This receive ID is the one you would reply to and/or use to look up the sender in a list of blocked senders. Note that it's generally not a good idea to keep the receive ID around once the reply has been done, because after that point its value is recycled.
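
The shape of that code might be as follows (a sketch; the union and the choice of EINTR are up to your application):

#include <sys/neutrino.h>
#include <errno.h>

/* the receive buffer is a union so it can hold either a pulse
   or one of our own messages */
union {
    struct _pulse pulse;
    my_msg_t      msg;      /* your own message structure */
} msg;
int rcvid;

rcvid = MsgReceive (chid, &msg, sizeof(msg), NULL);
if (rcvid == 0 && msg.pulse.code == _PULSE_CODE_UNBLOCK) {
    /* value.sival_int is the rcvid from when we received the
       sender's message; unblock the sender with an error */
    MsgError (msg.pulse.value.sival_int, EINTR);
}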

Message priority

In QNX 4, the _PPF_PRIORITY_REC flag would be used to have messages be received in the order in which they were sent. In QNX Neutrino, this is automatically the behavior of the receiver.

Priority floating

The behavior that you get in QNX 4 by setting _PPF_PRIORITY_FLOAT is now the default in QNX Neutrino. To have your receiver's priority not float, set the _NTO_CHF_FIXED_PRIORITY flag in the call to ChannelCreate().

Receiving from across the network

As mentioned above, when receiving a message from a sender on another node of the native QNX network, it's possible that the number of bytes received could be smaller than both what the sender was sending and what the receiver was asking for. The fourth parameter passed to your MsgReceive*() is an info parameter of type struct _msg_info. It has a member called msglen, which contains the number of bytes that were actually copied into your receive buffer. It also has a member called srcmsglen which will contain the number of bytes that the sender wants to send, but only if you pass the _NTO_CHF_SENDER_LEN flag when calling ChannelCreate(). So a code snippet for handling this situation would be:

int              chid, rcvid;
struct _msg_info info;
my_msg_t         msg;   // your application's own message structure

chid = ChannelCreate (_NTO_CHF_SENDER_LEN);

for (;;) {
    rcvid = MsgReceive (chid, &msg, sizeof(msg), &info);
    if (rcvid > 0 && info.srcmsglen > info.msglen &&
            info.msglen < sizeof(msg)) { // didn't get it all?
        int nbytes;

        if ((nbytes = MsgRead_r(rcvid, (char *) &msg + info.msglen,
                sizeof(msg) - info.msglen, info.msglen)) < 0) {
            MsgError(rcvid, -nbytes); // nbytes contains an errno value
            ...
        }
        ...
    }
    // now we have it all
    ...
}

Events

Later in this chapter we'll look at pulses, a replacement for QNX 4 proxies that also lets you pass a little information along. We'll also examine POSIX signals. Because these (and other) primitives are similar, there's an underlying mechanism, called an event, that handles them all.

An event in QNX Neutrino is a form of notification that can come from a variety of places: a timer, an interrupt handler, another thread, etc. An event can contain a signal or a pulse, among other things. For example, a user hitting Ctrl-C on a keyboard causes an event containing a signal to be delivered, while an expiring timer could deliver an event containing a pulse.

A thread delivers an event to another thread by calling MsgDeliverEvent(). We'll see an example of this when we talk about pulses. This function takes a receive ID and an event structure of type struct sigevent. The latter contains several fields, including:

sigev_notify
The type of the event, whether it's a signal, a pulse, or whatever.
sigev_priority
The priority of the event; higher numbers mean higher priority.
sigev_code and sigev_value
The code and value fields for a pulse.
sigev_signo
The signal number for a signal.

There are macros in the include file <sys/siginfo.h> that make it simple for you to fill in the fields in this structure.
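
For instance (a sketch; coid is a connection you've already created, and the signal, code, and value are whatever your application needs):

#include <sys/siginfo.h>
#include <sys/neutrino.h>
#include <signal.h>

struct sigevent event;

/* fill in the event as a signal ... */
SIGEV_SIGNAL_INIT (&event, SIGUSR1);

/* ... or as a pulse to be delivered on the connection coid */
SIGEV_PULSE_INIT (&event, coid, SIGEV_PULSE_PRIO_INHERIT,
                  _PULSE_CODE_MINAVAIL, 0);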

Proxies vs pulses

QNX 4 proxies have disappeared from QNX Neutrino. They've been replaced by pulses. A pulse is like a QNX 4 proxy in one critical way -- it's asynchronous, so the sending thread doesn't block. But the data for QNX 4 proxies was "canned data" that couldn't be changed from one Trigger() call to the next. With pulses, the data can be different from one "trigger" to the next.

Each QNX Neutrino pulse carries with it two items of information: a small (8-bit) code and a 32-bit value.

Although the "code" is a signed quantity, you should use only values in the range _PULSE_CODE_MINAVAIL to _PULSE_CODE_MAXAVAIL. The remaining code values are reserved for the OS.

Pulses are received by having your resource manager register a pulse-handler function. This is done by calling pulse_attach(). When the pulse arrives, the handler will be called.

If you're not writing a resource manager, then pulses can also be received by the MsgReceive*() and MsgReceivePulse*() functions. The return value will be zero and the message buffer will contain a message of type struct _pulse.

In QNX 4, at setup time, the receiver would typically attach a proxy and send the proxy id to the process doing the triggering. Whenever necessary, the triggerer would trigger the proxy. In QNX Neutrino, you'd do something very similar. At setup time, the receiver would fill an event structure with a pulse and send it to the process doing the delivering. Whenever necessary, the deliverer would deliver the event using MsgDeliverEvent(). As a result, the receiver would receive a pulse message.

There's another function for sending a pulse called MsgSendPulse(). When migrating from QNX 4, you would use this to replace Trigger() in cases where the triggering process had attached the proxy to the receiver (instead of the receiver attaching the proxy to itself). With MsgSendPulse() there's no event structure to fill in.
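
A sketch of that case (the value 42 is arbitrary; coid is an existing connection):

#include <sys/neutrino.h>
#include <sched.h>

/* like Trigger(), this doesn't block the caller */
MsgSendPulse (coid, getprio (0), _PULSE_CODE_MINAVAIL, 42);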

Example of pulses with a resource manager

The following code snippets illustrate sending and receiving pulses using resource managers. The deliverer is a resource manager called pulsesnd -- it registers the name /dev/pulsesnd.

First of all, since we have two processes communicating with each other, we have the following in a common header file:

// _IOMGR_PULSESND identifies the pulsesnd resource manager
#define _IOMGR_PULSESND				_IOMGR_PRIVATE_BASE

#define PULSESND_SUBTYPE_GIVE_EVENT	1

typedef struct {
    struct _io_msg  hdr;   // standard header for _IO_MSG messages
    struct sigevent event; // the event to deliver
} pulsesnd_io_msg_give_event_t;

// the reply for PULSESND_SUBTYPE_GIVE_EVENT is empty

Next, we have the receiver process that will be receiving the pulse. The following is the code that registers the pulse handler, fills in an event structure, and sends the event structure to the deliverer:

int our_pulse_code;     // pulse code allocated for us by pulse_attach()

int main (void)
{
    pulsesnd_io_msg_give_event_t msg;
    int fd, coid;       // dpp is created in the elided setup code below

    ... // setup code for the resource manager goes here

    // register our pulse handler, note that this call will figure out
    // a pulse code for us, pulse_handler() will be called whenever
    // the pulse arrives

    our_pulse_code = pulse_attach (dpp, MSG_FLAG_ALLOC_PULSE,
                                   0, pulse_handler, NULL);
	
    // send a pulse event structure to pulsesnd, pulsesnd is the
    // process (another resource manager) that will deliver this pulse
    // event when data is available.  When it does, pulse_handler()
    // will be called.

    fd = open ("/dev/pulsesnd", O_RDONLY); // find pulsesnd

    // create a connection to the channel that our resource manager is
    // receiving on

    coid = message_connect (dpp, MSG_FLAG_SIDE_CHANNEL);

    // fill message buffer with an _IO_MSG type message (for this
    // example)

    msg.hdr.type = _IO_MSG;
    msg.hdr.combine_len = sizeof(msg.hdr);
    msg.hdr.mgrid = _IOMGR_PULSESND; /* resmgr identifier */
    msg.hdr.subtype = PULSESND_SUBTYPE_GIVE_EVENT;

    // this macro fills in the event structure portion of the message

    SIGEV_PULSE_INIT(&msg.event, coid, getprio (0), our_pulse_code, 0);

    // send it to pulsesnd so that it can deliver it when it wants to

    MsgSend (fd, &msg, sizeof(msg), NULL, 0);	

    ...
}

//
// pulse_handler - Will be called when the pulse is delivered
//
int
pulse_handler (message_context_t *ctp, int code, unsigned flags,
               void *handle)
{
    if (code == our_pulse_code) {
        // we got a pulse (we're not expecting any others; this check
        // is for example only)
    }
    return code;
}

Next we have the code where the deliverer receives the event structure. Note that it is really a msg type handler that the resource manager library calls whenever a message of type _IO_MSG arrives. The _IO_MSG that arrives is the message sent to us in the snippet above. This msg type handler is registered with the resource manager library just as you would register a read or write handler.

// saved here for later delivery via MsgDeliverEvent()
static struct sigevent pulse_event;
static int             pulserec_rcvid;

int
io_msg (resmgr_context_t *ctp, io_msg_t *msg, RESMGR_OCB_T *ocb)
{
    pulsesnd_io_msg_give_event_t pmsg;

    // go get the message again to make sure we got it all

    MsgRead(ctp->rcvid, &pmsg, sizeof(pmsg), 0);

    // we need to store away the event and the rcvid if we are to
    // deliver the pulse later.  This is the same idea as saving away
    // a proxy id in QNX 4.

    pulse_event = pmsg.event;
    pulserec_rcvid = ctp->rcvid;

    MsgReply(ctp->rcvid, 0, NULL, 0);
	
    return (_RESMGR_NOREPLY);
}

Lastly, when the deliverer process wants to wake up the receiver, it delivers the event. This is analogous to Trigger(proxy) in QNX 4.

// here is where we send the pulse message.  Note that pulserec_rcvid
// and pulse_event were saved away above.

MsgDeliverEvent (pulserec_rcvid, &pulse_event);

Signal services

Here are the fundamental changes to signals:

  1. There are a whole bunch of new user-defined signals. These range in value from SIGRTMIN to SIGRTMAX (defined in <signal.h>). According to POSIX, these can carry data and can be queued.
  2. The traditional UNIX signals (SIGINT, SIGHUP, etc.) still exist and in fact are a part of POSIX. According to POSIX, these cannot carry data and cannot be queued. Note, however, that QNX Neutrino doesn't enforce this restriction, so they can carry data and be queued.
  3. As mentioned above, signals can now be queued. Just as in QNX 4, if a signal is set on a process and that process has the signal blocked (or masked), then the signal is made pending. Unlike QNX 4, however, if the same signal is set on the process a second time while the signal is still blocked, QNX Neutrino can remember that the same signal is now pending twice. When the signal is unblocked (or unmasked), the signal action will take place twice. This is a queued signal. Queuing is enabled on a signal-by-signal basis and is done at the process level.

    The default is that a signal is not queued (i.e. just as in QNX 4). If the signal is set on the process multiple times while the signal is blocked, when it's unblocked the signal will take effect only once. To indicate that a signal is to be queued, set the SA_SIGINFO flag in the sa_flags member of the struct sigaction structure when passing it to sigaction().

  4. Signals can also carry data. As with kill(), sigqueue() can be used to set a signal on a process. Unlike kill(), sigqueue() has a value parameter. This is data that will be passed to your signal handler. To access that data, your handler function will now have a parameter of type siginfo_t, which has a member called si_value. This will contain the value passed to sigqueue().

    Because of the different handler parameters, you must register your handler using sigaction(). In QNX 4, you'd put the address of your handler in the sa_handler member of the struct sigaction structure. There's now a new member called sa_sigaction; this is where you put the address of a handler that wants to get data (see the sketch after this list).

  5. You can no longer set SIGKILL and SIGSTOP to be ignored, handled, or blocked (masked).
  6. There are issues with multithreaded processes (see the next section).
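
Here's a sketch of the queued, data-carrying case from item 4 (the value 42 and the use of SIGRTMIN are just for illustration):

#include <signal.h>
#include <unistd.h>

/* the handler gets a siginfo_t instead of just a signal number */
void handler (int signo, siginfo_t *info, void *other)
{
    int data = info->si_value.sival_int;   /* 42, from sigqueue() below */
    /* (only async-signal-safe work belongs here) */
}

int main (void)
{
    struct sigaction act;
    union sigval     val;

    sigemptyset (&act.sa_mask);
    act.sa_sigaction = handler;
    act.sa_flags = SA_SIGINFO;            /* queued and data-carrying */
    sigaction (SIGRTMIN, &act, NULL);

    val.sival_int = 42;
    sigqueue (getpid (), SIGRTMIN, val);  /* like kill(), but with data */

    return 0;
}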

The signal(), sigaction(), kill(), sigaddset(), sigdelset(), sigemptyset(), sigfillset(), sigismember(), siglongjmp(), sigpending(), sigprocmask(), sigsetjmp(), and sigsuspend() functions are used just as in QNX 4. The sa_flags member of the struct sigaction structure that is passed to sigaction() now has some flags: SA_NOCLDSTOP and SA_SIGINFO. SA_NOCLDSTOP tells the system not to call the handler if the child is stopped via SIGSTOP (only relevant for the SIGCHLD signal). SA_SIGINFO states that the signal is a queued signal.

There are also some new functions, such as sigqueue(), sigwaitinfo(), and sigtimedwait().

Signals and threads

Having threads affects how signals are handled. If, for instance, a given process contains six threads, and a signal arrives for the process, which thread is the recipient?

Here are the rules for delivering a signal to a process that has many threads: a signal that's targeted at the process is delivered to a thread that doesn't have the signal masked; if all threads have the signal masked, the signal remains pending on the process until some thread unmasks it. A signal that's targeted at a specific thread (e.g. via pthread_kill()) is delivered to that thread alone.

Because of these rules, an easy approach is to dedicate one thread as the "signal-handling thread" and mask signals in all threads except that one. This signal-handling thread could then call sigwaitinfo() so as to not consume CPU time while waiting for the signal.
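
A sketch of that approach, using SIGUSR1 as the example signal (error checking omitted):

#include <signal.h>
#include <pthread.h>

/* the dedicated signal-handling thread */
void *signal_thread (void *arg)
{
    sigset_t  set;
    siginfo_t info;

    sigemptyset (&set);
    sigaddset (&set, SIGUSR1);
    for (;;) {
        sigwaitinfo (&set, &info);   /* sleeps until SIGUSR1 arrives */
        /* ... handle it; we're in normal thread context here ... */
    }
    return NULL;
}

int main (void)
{
    sigset_t  set;
    pthread_t tid;

    /* mask the signal before creating threads so they all inherit the mask */
    sigemptyset (&set);
    sigaddset (&set, SIGUSR1);
    pthread_sigmask (SIG_BLOCK, &set, NULL);

    pthread_create (&tid, NULL, signal_thread, NULL);
    /* ... rest of the application ... */
    return 0;
}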

Shared memory

The QNX Neutrino interface to shared memory uses shm_open(), ftruncate(), mmap(), and so on. It's almost the same as the interface used by newer QNX 4 applications. One major difference is that ftruncate() is used where in QNX 4 you would have used ltrunc(). Another difference is that, to conform to POSIX and to appear in /dev/shmem, the name of a shared memory object must begin with a slash (/) character and contain only that one slash. If a name doesn't begin with a slash, the object will appear in the current directory. The shm_ctl() function is also available for setting additional attributes.

If you're used to calling the qnx_segment*() functions, then they'll need to be converted to shm_open(), ftruncate(), mmap() for the new OS. The qnx_segment*() functions are no longer supported.
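
Here's the shape of the sequence under QNX Neutrino (a sketch with error checking omitted; the object name and size are made up):

#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

int   fd;
char *ptr;

fd = shm_open ("/my_shmem", O_RDWR | O_CREAT, 0666); /* appears in /dev/shmem */
ftruncate (fd, 4096);                      /* was ltrunc() in QNX 4 */
ptr = mmap (NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
close (fd);                                /* the mapping stays valid */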

Semaphores

The function calls for semaphores -- mainly sem_init(), sem_wait(), sem_trywait(), sem_post(), and sem_destroy() -- are supported in QNX Neutrino with no changes necessary. However, there's the addition of named semaphores.

In QNX 4, a semaphore is typically placed into a shared memory area so that two processes can share it. The processes must agree in advance on where the semaphore is in memory ("there's a semaphore at offset such-and-such in the shared memory named XYZ"). With QNX Neutrino and multiple threads, it might make sense for an application to have a semaphore declared locally to a process.

Named semaphores

To make sharing semaphores between processes easier, QNX Neutrino supports POSIX named semaphores. These are semaphores that can be accessed by a name instead of having to be placed in shared memory. For named semaphores to work, you must run the mqueue process. Named semaphores are created and cleaned up using sem_open(), sem_close(), and sem_unlink().

Note that sem_wait() and sem_post() with an unnamed semaphore use kernel calls to do their work, whereas the same functions with a named semaphore work by sending messages to the mqueue process and will be considerably slower.
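
For example (a sketch; the name is made up, and mqueue must be running):

#include <semaphore.h>
#include <fcntl.h>

sem_t *sem;

/* two unrelated processes can share this by name alone */
sem = sem_open ("/my_sem", O_CREAT, 0666, 1);  /* initial count of 1 */
sem_wait (sem);
/* ... critical section ... */
sem_post (sem);
sem_close (sem);
sem_unlink ("/my_sem");   /* remove the name when done */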

POSIX Message Queues

The QNX Neutrino interface for POSIX message queues is almost the same as in QNX 4, though there are a few differences that you'll need to be aware of.

Timers

The timing functions have changed very little from QNX 4.

Timeouts

Timeouts can now be done using one of two new functions: TimerTimeout() or timer_timeout(). The only difference between the two is the types of the parameters.

Under QNX 4, a timeout could be achieved by having your blocking function be unblocked by a signal after a certain amount of time elapsed. A problem arises if you're preempted for longer than the timeout. In that case, when your process gets to run again, your signal handler would be called and then you'd enter the blocking function (with no timeout in place, if you didn't use a repeating timer).

Here's a code snippet for a timeout on a MsgSend(). Note that we're passing timer_timeout() the possible states for MsgSend() that we want to time out.

struct sigevent event;
struct timespec timeout;

event.sigev_notify = SIGEV_UNBLOCK;
timeout.tv_sec = 10;
timeout.tv_nsec = 0;
timer_timeout (CLOCK_REALTIME,
               _NTO_TIMEOUT_SEND | _NTO_TIMEOUT_REPLY, 
               &event, &timeout, NULL );
MsgSend (coid, NULL, 0, NULL, 0); 

Unfortunately, this still isn't perfect, because the timing is relative to the call to timer_timeout(). If you're preempted between the timer_timeout() and MsgSend() calls for longer than the timeout period, the timeout will still take place -- immediately, in that case. You also don't have to cancel the timeout, since that's done automatically before the MsgSend() returns.

Interrupts

The only significant change to writing QNX Neutrino interrupt handlers is that the task has been simplified slightly.

You'll find a chapter in the Programmer's Guide entitled "Writing an Interrupt Handler." There's also a section in the "Writing a Resource Manager" chapter of that book entitled "Handling interrupts" that shows how to do interrupt handlers from within a resource manager.

Instead of calling qnx_hint_attach(), you would call InterruptAttach(). The last parameter for qnx_hint_attach() was the data segment selector of your process. You no longer need to provide this. The interrupt handler will simply be using the data segment of the process that the handler is in.

In QNX 4, your handler was limited to waking up the process via a proxy. In QNX Neutrino, your handler can return an event containing a pulse, a signal, or a SIGEV_INTR notification. In the latter case, the attaching thread would block on InterruptWait().

There's also a new way of handling interrupts -- with InterruptAttachEvent(). In this case, you fill in an event containing a pulse, a signal, or a SIGEV_INTR notification. When the interrupt is generated, the kernel masks the interrupt and delivers the event, thereby waking up a thread. The thread then does the required work and unmasks the interrupt. This masking by the kernel is necessary for handling level-sensitive interrupts.
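
Here's a sketch of that approach, using a SIGEV_INTR event and InterruptWait() (INTR_LEVEL is a made-up interrupt number; you must be root, as noted below):

#include <sys/neutrino.h>
#include <sys/siginfo.h>

#define INTR_LEVEL 5          /* made-up interrupt level */

struct sigevent event;
int             id;

ThreadCtl (_NTO_TCTL_IO, NULL);           /* need I/O privileges */
SIGEV_INTR_INIT (&event);                 /* wake up InterruptWait() */
id = InterruptAttachEvent (INTR_LEVEL, &event, 0);

for (;;) {
    InterruptWait (0, NULL);              /* block until the interrupt */
    /* ... service the device ... */
    InterruptUnmask (INTR_LEVEL, id);     /* the kernel masked it for us */
}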

As in QNX 4, you're limited as to which functions you can call from within an interrupt handler. When you look at a function in the library reference manual, one of the areas under the "Classification" heading shows whether or not you can safely call the function from an interrupt handler.

Note that just as in QNX 4 you needed I/O privileges to register an interrupt handler, you still need them under QNX Neutrino. Under the new OS, you actually need them for any Interrupt*() function except InterruptWait().

To get I/O privileges under QNX 4, you would link with -T1. To get I/O privileges under QNX Neutrino, you call ThreadCtl(_NTO_TCTL_IO, NULL). Note that you must be root in order to make this call to ThreadCtl().

There are now functions that can be called from both a thread and the interrupt handler for masking and unmasking interrupts: InterruptMask() and InterruptUnmask().

Hardware I/O

Port I/O

Port I/O on x86 is done using special machine instructions. On some other platforms, such as PowerPC and MIPS, it's done by mapping in and accessing memory. As such, there's one extra function you need to call: mmap_device_io(). It basically works out to a NOP on x86, but does the mapping on platforms like PowerPC and MIPS. You pass it the number of consecutive ports you want to access and the address of the first port, and it returns the base address to use (on x86, this is simply the port address you gave it). From then on, you use functions such as in8(), out8(), and in16(), passing them the value returned by mmap_device_io() (the base port) plus some offset from that base.

Note that just as in QNX 4 you needed I/O privileges to do port I/O, you still need I/O privileges under QNX Neutrino.

To get I/O privileges under QNX 4, you would link with -T1. To get I/O privileges under QNX Neutrino, you call ThreadCtl(_NTO_TCTL_IO, NULL). Note that you must be root in order to make this call to ThreadCtl().

The following is a short example of doing port I/O in QNX Neutrino:

#define SERIAL_BASE_PORT 0x2f8
...
#define R_IE    1   /* interrupt enable */
...
#define R_LS    5   /* line status */
...
#define NPORTS  8   /* no. of ports from base port */

uintptr_t iobase;   /* base of io memory (io ports) */

/* initialization, need to do only once */
ThreadCtl (_NTO_TCTL_IO, NULL);
iobase = mmap_device_io (NPORTS, SERIAL_BASE_PORT);

...

/* wait for the transmit holding register to be empty */
while ((in8(iobase + R_LS) & 0x20) == 0)
    ;
                    
/* Enable just the modem status as an interrupt source */
out8 (iobase + R_IE, 0x08);

Memory-mapped I/O

When programming for QNX 4, you occasionally need to access physical memory. Typically, this is done for memory-mapped devices (e.g. the PC video RAM). Under QNX Neutrino, the situation is slightly different from QNX 4, but no more complex. There are, moreover, several ways to map physical memory.

The simplest method is to call the QNX Neutrino mmap_device_memory() function:

virtual_address = mmap_device_memory( NULL, length,
                     PROT_READ | PROT_WRITE | PROT_NOCACHE,
                     MAP_SHARED | MAP_PHYS, physical_address );

The above call to mmap_device_memory() just does the following:

virtual_address = mmap( 0, length,
                        PROT_READ | PROT_WRITE | PROT_NOCACHE,
                        MAP_PHYS | MAP_SHARED, NOFD, physical_address );

Note that in neither case do you have to call shm_open() as you do in QNX 4.

Memory for DMA

DMA requires that the OS allocate some memory for use by your driver and the DMA controller. You need the virtual address of this memory and the controller needs the physical address. This can all be done using the following code:

virtual_address = mmap( 0, length,
                        PROT_READ | PROT_WRITE | PROT_NOCACHE,
                        MAP_PHYS | MAP_ANON, NOFD, 0 );
mem_offset( virtual_address, NOFD, length, &physical_address, 0);

Your driver code would use the virtual_address and would give physical_address to the controller.

PCI functions

QNX 4 has a set of functions whose names begin with _CA_PCI_. The analogous functions for QNX Neutrino are called pci_*(). Note that for QNX Neutrino you must also run a PCI server process (e.g. pci-bios). There are no special compile options or stack issues. You also need to call pci_attach() to connect to the PCI server before making any other pci_*() calls.
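
In outline, the setup looks something like this (a sketch; it assumes pci-bios or another PCI server is already running):

#include <hw/pci.h>

int phandle;

/* connect to the PCI server before any other pci_*() call */
phandle = pci_attach (0);
if (phandle == -1) {
    /* no PCI server running? */
}
/* ... pci_attach_device(), pci_read_config(), etc. ... */
pci_detach (phandle);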

Getting system information

The qnx_osinfo() function is no longer available. Instead, the information can be gathered from a number of places, although not everything that qnx_osinfo() supplied is still available or relevant. See the source for the qnx_osinfo() function in the migration library to see how to get the information that is available. Note that to get a QNX Neutrino-style nodename (the nodename member of the struct _osinfo structure), you can call netmgr_ndtostr().

Getting process information

Under QNX 4, this was done by repeated calls to qnx_psinfo(). In QNX Neutrino, the process manager is itself a resource manager that makes the proc filesystem (or procfs) visible. If you have a look at the contents of /proc, you'll see some numbers; these are the process IDs of the processes that are currently executing. To get information on a process, you open its entry and then make devctl() calls on the resulting file descriptor. See the source for the qnx_psinfo() function in the migration library to see how to do this for a specific process or how to walk through all processes. Keep in mind that some information that was process-related under QNX 4 (e.g. state, blocked on) is now thread-related.
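
For example, here's roughly how you'd get basic information about one process (a sketch with error checking omitted; process 1 is just an example):

#include <fcntl.h>
#include <unistd.h>
#include <devctl.h>
#include <sys/procfs.h>
#include <stdio.h>

int          fd;
procfs_info  info;    /* a typedef for debug_process_t */

fd = open ("/proc/1/as", O_RDONLY);
devctl (fd, DCMD_PROC_INFO, &info, sizeof(info), NULL);
printf ("pid %d has %d thread(s)\n", (int) info.pid, (int) info.num_threads);
close (fd);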

The term_() Functions

The QNX 4 functions such as term_delete_char(), which originated with QNX 2.1, are not supported under QNX Neutrino.

A program using these would need to be reimplemented to use something like curses. Also, the Watcom text calls such as _gettextcursor() and the graphics calls like _pg_initchart() are not supported. Basically, anything in the Watcom Graphics Library Reference is not available in QNX Neutrino.
