
Signals, Timeouts, and Interrupts

This chapter includes:

Handling client unblocking due to signals or timeouts
Handling interrupts
Sample code for handling interrupts

Handling client unblocking due to signals or timeouts

Another convenient service that the resource manager library does for us is unblocking.

When a client issues a request (e.g., read()), this translates (via the client's C library) into a MsgSend() to our resource manager. MsgSend() is a blocking call, so if the client receives a signal while the MsgSend() is outstanding, our resource manager needs some indication of this so that it can abort the request.

Because the library set the _NTO_CHF_UNBLOCK flag when it called ChannelCreate(), we'll receive a pulse whenever the client tries to unblock from a MsgSend() that we have MsgReceive()'d.
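If you were creating the channel by hand, the relevant call would look like the sketch below. (The library makes this call for you; this isn't the library's actual code.)

/* A channel created with _NTO_CHF_UNBLOCK delivers a
   _PULSE_CODE_UNBLOCK pulse whenever a REPLY-blocked client
   attempts to unblock (e.g., because of a signal or a timeout): */
chid = ChannelCreate(_NTO_CHF_UNBLOCK);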

As an aside, recall that in the Neutrino messaging model, the client can be in one of two states as a result of calling MsgSend(). If the server hasn't yet received the message (via the server's MsgReceive()), the client is in a SEND-blocked state — the client is waiting for the server to receive the message. When the server has actually received the message, the client transits to a REPLY-blocked state — the client is now waiting for the server to reply to the message (via MsgReply()).

When the client tries to unblock and the pulse is generated, the resource manager library handles the pulse message and synthesizes an _IO_UNBLOCK message.

Looking through the resmgr_io_funcs_t and the resmgr_connect_funcs_t structures (see the QNX Neutrino Library Reference), you'll notice that there are actually two unblock message handlers: one in the I/O functions structure and one in the connect functions structure.

Why two? Because we may get an abort in one of two places. We can get the abort pulse right after the client has sent the _IO_OPEN message (but before we've replied to it), or we can get the abort during an I/O message.
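To handle both cases, point the unblock members of the two function tables at your own handlers after initializing the defaults. For example (my_connect_unblock() and my_io_unblock() are hypothetical names for your handlers):

iofunc_func_init(_RESMGR_CONNECT_NFUNCS, &connect_funcs,
                 _RESMGR_IO_NFUNCS, &io_funcs);
connect_funcs.unblock = my_connect_unblock;  /* abort during _IO_OPEN */
io_funcs.unblock = my_io_unblock;            /* abort during an I/O message */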

Once we've handled the _IO_CONNECT message, the I/O functions' unblock member is used to service an unblock pulse. Therefore, if you're supplying your own io_open handler, be sure to set up all relevant fields in the OCB before you call resmgr_open_bind(); otherwise, your I/O functions' version of the unblock handler may get called with invalid data in the OCB. (Note that this issue of abort pulses arriving during message processing arises only if there are multiple threads running in your resource manager. If there's only one thread, the messages are serialized by the library's MsgReceive() function.)
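For example, a custom io_open handler might defer the bind until the OCB is fully set up, as in this sketch. (The my_ocb_t type and its blocked_rcvid field are hypothetical, and the usual iofunc_open() permission checks and attribute bookkeeping are omitted for brevity.)

#include <errno.h>
#include <stdlib.h>
#include <sys/iofunc.h>
#include <sys/dispatch.h>

extern resmgr_io_funcs_t io_funcs;   /* the I/O table set up in main() */

/* A hypothetical extended OCB; the iofunc OCB must come first: */
typedef struct {
    iofunc_ocb_t hdr;
    int          blocked_rcvid;  /* read by our I/O unblock handler */
} my_ocb_t;

int
my_io_open(resmgr_context_t *ctp, io_open_t *msg,
           RESMGR_HANDLE_T *handle, void *extra)
{
    my_ocb_t *ocb;

    if ((ocb = calloc(1, sizeof(*ocb))) == NULL)
        return ENOMEM;

    /* Set up every field the unblock handler reads BEFORE the
       bind makes the OCB visible to it: */
    ocb->blocked_rcvid = -1;

    /* (A real handler would also call iofunc_open() for the
       permission checks and fill in ocb->hdr.attr, etc.) */
    return resmgr_open_bind(ctp, ocb, &io_funcs);
}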

The effect of this is that if the client is SEND-blocked, the server doesn't need to know that the client is aborting the request, because the server hasn't yet received it.

Only in the case where the server has received the request and is performing processing on that request does the server need to know that the client now wishes to abort.

For more information on these states and their interactions, see the MsgSend(), MsgReceive(), MsgReply(), and ChannelCreate() functions in the QNX Neutrino Library Reference; see also the chapter on Interprocess Communication in the System Architecture book.

If you're overriding the default unblock handler, iofunc_unblock_default(), you should always call the default handler to process any generic unblocking cases first (which it does by calling iofunc_unblock()). For example:

if((status = iofunc_unblock_default(...)) != _RESMGR_DEFAULT) {
    return status;
}

This ensures that any client waiting on a resource manager list (such as an advisory lock list) will be unblocked if possible.

Then you'll need some way to walk your table of blocked client rcvids, looking for a match, and unblocking the matching client. You do that by replying to it; you aren't replying to the unblock request as such, but to the original client call (so reply with whatever data has been partially read, or with an EINTR error, as appropriate).

Your handler should confirm that the unblock is still pending by calling MsgInfo() and then checking for the _NTO_MI_UNBLOCK_REQ flag. This avoids the race condition where the client has already been replied to by the time you get the unblock pulse, and the rcvid now indicates another client. If the flag isn't set, or you can't find a matching client, you can ignore the unblock request by returning _RESMGR_NOREPLY:

/* Check if rcvid is still valid and still has an unblock
   request pending. */
if (MsgInfo(ctp->rcvid, &info) == -1 ||
    !(info.flags & _NTO_MI_UNBLOCK_REQ)) {
    return _RESMGR_NOREPLY;
}
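Putting these pieces together, an I/O unblock handler might look like the following sketch. The blocked[] table and MAX_CLIENTS are hypothetical bookkeeping that we assume you fill in wherever you leave a client REPLY-blocked:

#include <errno.h>
#include <sys/iofunc.h>
#include <sys/dispatch.h>
#include <sys/neutrino.h>

#define MAX_CLIENTS 16
static int blocked[MAX_CLIENTS];    /* rcvids we still owe a reply */

int
my_io_unblock(resmgr_context_t *ctp, io_pulse_t *msg,
              RESMGR_OCB_T *ocb)
{
    struct _msg_info info;
    int status, i;

    /* Let the default handler deal with the generic cases first: */
    if ((status = iofunc_unblock_default(ctp, msg, ocb))
            != _RESMGR_DEFAULT) {
        return status;
    }

    /* Check if rcvid is still valid and still has an unblock
       request pending: */
    if (MsgInfo(ctp->rcvid, &info) == -1 ||
        !(info.flags & _NTO_MI_UNBLOCK_REQ)) {
        return _RESMGR_NOREPLY;
    }

    /* Walk our table; unblock a match by replying to the client's
       original call with EINTR: */
    for (i = 0; i < MAX_CLIENTS; i++) {
        if (blocked[i] == ctp->rcvid) {
            MsgError(ctp->rcvid, EINTR);
            blocked[i] = -1;
            break;
        }
    }
    return _RESMGR_NOREPLY;
}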

If you don't provide an unblock handler, the client thread is simply left REPLY-blocked on the server. This is expected behavior: the server has to be given a chance to clean up its client data structures when a client terminates.

Handling interrupts

Resource managers that manage an actual hardware resource will likely need to handle interrupts generated by the hardware. For a detailed discussion on strategies for interrupt handlers, see the chapter on Writing an Interrupt Handler in the QNX Neutrino Programmer's Guide.

How do interrupt handlers relate to resource managers? When a significant event happens within the interrupt handler, the handler needs to inform a thread in the resource manager. This is usually done via a pulse (discussed in the Handling private messages and pulses section of the Handling Other Messages chapter), but it can also be done with the SIGEV_INTR event notification type. Let's look at this in more detail.
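If you take the pulse approach, the dispatch layer delivers the notification to one of your pool threads. Here's a minimal sketch, assuming a dispatch handle dpp; INTNUM, my_pulse_handler(), attach_interrupt_pulse(), and the variable names are our assumptions, not part of the library (error checking omitted):

#include <sys/dispatch.h>
#include <sys/neutrino.h>
#include <sys/siginfo.h>

#define INTNUM 0            /* assumed interrupt source */

static int int_id;          /* from InterruptAttachEvent() */

/* Runs in a resource-manager thread, so it's safe to touch the
   manager's shared structures here: */
static int
my_pulse_handler(message_context_t *ctp, int code,
                 unsigned flags, void *handle)
{
    /* ... service the device ... */
    InterruptUnmask(INTNUM, int_id);   /* re-enable the interrupt */
    return 0;
}

void
attach_interrupt_pulse(dispatch_t *dpp)
{
    struct sigevent event;
    int             pulse_code, coid;

    /* Let the library allocate a pulse code and route that pulse
       to my_pulse_handler(): */
    pulse_code = pulse_attach(dpp, MSG_FLAG_ALLOC_PULSE, 0,
                              my_pulse_handler, NULL);

    /* Get a connection to our own channel for the pulse to
       travel over: */
    coid = message_connect(dpp, MSG_FLAG_SIDE_CHANNEL);

    SIGEV_PULSE_INIT(&event, coid, SIGEV_PULSE_PRIO_INHERIT,
                     pulse_code, 0);

    ThreadCtl(_NTO_TCTL_IO, 0);        /* obtain I/O privileges */
    int_id = InterruptAttachEvent(INTNUM, &event, 0);
}

You'd call attach_interrupt_pulse() before starting the thread pool, for the ordering reasons described next.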

When the resource manager starts up, it transfers control to thread_pool_start(). This function may or may not return, depending on the flags passed to thread_pool_create() (if you don't pass any flags, the function returns after the thread pool is created). This means that if you're going to set up an interrupt handler, you should do so before starting the thread pool, or use one of the strategies we discussed above (such as starting a thread for your entire resource manager).

However, if you're going to use the SIGEV_INTR event notification type, there's a catch — the thread that attaches the interrupt (via InterruptAttach() or InterruptAttachEvent()) must be the same thread that calls InterruptWait().

Sample code for handling interrupts

Here's an example that includes relevant portions of the interrupt service routine and the handling thread:

#include <stdio.h>
#include <stddef.h>
#include <stdlib.h>
#include <string.h>
#include <pthread.h>
#include <sys/iofunc.h>
#include <sys/dispatch.h>
#include <sys/neutrino.h>

#define INTNUM 0    /* the interrupt source to attach to */

static resmgr_connect_funcs_t   connect_funcs;
static resmgr_io_funcs_t        io_funcs;
static iofunc_attr_t            attr;

void *
interrupt_thread (void * data)
{
    struct sigevent event;
    int             id;

    /* fill in "event" structure */
    memset(&event, 0, sizeof(event));
    event.sigev_notify = SIGEV_INTR;

    /* Obtain I/O privileges */
    ThreadCtl( _NTO_TCTL_IO, 0 );

    /* INTNUM is the desired interrupt source */
    if ((id = InterruptAttachEvent (INTNUM, &event, 0)) == -1) {
        perror ("InterruptAttachEvent");
        return NULL;
    }

    /*... insert your code here ... */

    while (1) {
        InterruptWait (NULL, NULL);
        /*  do something about the interrupt,
         *  perhaps updating some shared
         *  structures in the resource manager 
         *
         *  unmask the interrupt when done
         */
        InterruptUnmask(INTNUM, id);
    }
}

int
main(int argc, char **argv) {
    thread_pool_attr_t    pool_attr;
    resmgr_attr_t         resmgr_attr;
    dispatch_t            *dpp;
    thread_pool_t         *tpp;
    int                   id;


    if((dpp = dispatch_create()) == NULL) {
        fprintf(stderr,
                "%s: Unable to allocate dispatch handle.\n",
                argv[0]);
        return EXIT_FAILURE;
    }

    memset(&pool_attr, 0, sizeof pool_attr);
    pool_attr.handle = dpp; 
    pool_attr.context_alloc = dispatch_context_alloc; 
    pool_attr.block_func = dispatch_block;  
    pool_attr.unblock_func = dispatch_unblock; 
    pool_attr.handler_func = dispatch_handler; 
    pool_attr.context_free = dispatch_context_free;
    pool_attr.lo_water = 2;
    pool_attr.hi_water = 4;
    pool_attr.increment = 1;
    pool_attr.maximum = 50;

    if((tpp = thread_pool_create(&pool_attr, 
                                 POOL_FLAG_EXIT_SELF)) == NULL) {
        fprintf(stderr, "%s: Unable to initialize thread pool.\n",
                argv[0]);
        return EXIT_FAILURE;
    }

    iofunc_func_init(_RESMGR_CONNECT_NFUNCS, &connect_funcs,
                     _RESMGR_IO_NFUNCS, &io_funcs);
    iofunc_attr_init(&attr, S_IFNAM | 0666, 0, 0);
        
    memset(&resmgr_attr, 0, sizeof resmgr_attr);
    resmgr_attr.nparts_max = 1;
    resmgr_attr.msg_max_size = 2048;

    if((id = resmgr_attach(dpp, &resmgr_attr, "/dev/sample", 
                           _FTYPE_ANY, 0,
                 &connect_funcs, &io_funcs, &attr)) == -1) {
        fprintf(stderr, "%s: Unable to attach name.\n", argv[0]);
        return EXIT_FAILURE;
    }

    /* Start the thread that will handle interrupt events. */
    pthread_create (NULL, NULL, interrupt_thread, NULL);

    /* Never returns (we passed POOL_FLAG_EXIT_SELF) */
    thread_pool_start(tpp);

    return EXIT_SUCCESS;    /* not reached */
}

Here the interrupt_thread() function uses InterruptAttachEvent() to bind the interrupt source (INTNUM) to the event (passed in event), and then waits for the event to occur.

This approach has a major advantage over using a pulse. A pulse is delivered as a message to the resource manager, which means that if the resource manager's message-handling threads are busy processing requests, the pulse will be queued until a thread does a MsgReceive().

With the InterruptWait() approach, if the thread that's executing the InterruptWait() is of sufficient priority, it unblocks and runs immediately after the SIGEV_INTR is generated.
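For the interrupt thread to preempt the message-handling threads, you have to actually give it that priority. Here's a sketch of starting it explicitly, using the standard pthread attribute calls; the priority value 21 and the SCHED_RR policy are arbitrary assumptions to adapt to your system design:

#include <pthread.h>
#include <sched.h>

pthread_attr_t     tattr;
struct sched_param param;

pthread_attr_init(&tattr);
pthread_attr_setinheritsched(&tattr, PTHREAD_EXPLICIT_SCHED);
pthread_attr_setschedpolicy(&tattr, SCHED_RR);
param.sched_priority = 21;    /* assumed; higher than the pool threads */
pthread_attr_setschedparam(&tattr, &param);
pthread_create(NULL, &tattr, interrupt_thread, NULL);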