ChannelCreate(), ChannelCreate_r(), ChannelCreatePulsePool()

Updated: May 06, 2022

Create a communications channel


#include <sys/neutrino.h>

int ChannelCreate( unsigned flags );

int ChannelCreate_r( unsigned flags );

int ChannelCreatePulsePool( unsigned flags,
                            struct nto_channel_config  const* config );


flags
Flags that request notification pulses from the kernel or changes in the default behavior; a bitwise OR of zero or more of the channel flags described below.

config
(ChannelCreatePulsePool() only) A pointer to a struct nto_channel_config that specifies the attributes of the private pulse pool; see "Creating a private pulse pool," below.



Use the -l c option to qcc to link against this library. This library is usually included automatically.


The ChannelCreate() and ChannelCreate_r() kernel calls create a channel that can be used to receive messages and pulses. Once created, the channel is owned by the process and isn't bound to the creating thread. These functions are identical, except in the way they indicate errors. See the Returns section for details.

ChannelCreatePulsePool() is similar to ChannelCreate() but creates a private pulse pool for the channel; see "Creating a private pulse pool," below.

Threads wishing to communicate with the channel attach to it by calling ConnectAttach(). The threads may be in the same process, or in another process on the same node (or a remote node if the network manager is running).

Note: If a process wants other processes to communicate with it, it typically uses name_attach() to create a channel and associate a name with it, and the sender process uses name_open() to locate that name and create a connection to it.
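This pattern can be sketched as follows with the name_attach() and name_open() calls from <sys/dispatch.h>; the name "my_server" is illustrative:

```c
#include <sys/dispatch.h>

/* Server side: create a channel and register a name for it. */
int server_channel(void) {
    name_attach_t *att = name_attach(NULL, "my_server", 0);
    if (att == NULL) {
        return -1;              /* couldn't create or register the channel */
    }
    return att->chid;           /* pass this channel ID to MsgReceive() */
}

/* Client side: look up the name and connect to the server's channel. */
int client_connection(void) {
    int coid = name_open("my_server", 0);
    if (coid == -1) {
        return -1;              /* server not found */
    }
    return coid;                /* pass this connection ID to MsgSend() */
}
```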

Once attached, these threads use MsgSendv() or MsgSendPulse() to enqueue messages and pulses on the channel. Messages and pulses are enqueued in priority order.

To dequeue and read messages and pulses from a channel, use MsgReceive(). Any number of threads may call MsgReceive() at the same time, in which case they block and queue (if no messages or pulses are waiting) for a message or pulse to arrive. A multithreaded server typically creates multiple threads and has them all RECEIVE-blocked on the channel.

The return value of ChannelCreate() is a channel ID that identifies the newly created channel, which is owned by the calling process. Most managers use a single channel for most, if not all, of their communications with clients.

By default, when a message is received from a channel, the thread priority of the receiver is set to match that of the thread that sent the message. This basic priority inheritance prevents priority inversion. If a message arrives at a channel and there's no thread waiting to receive it, the system boosts (if necessary) all threads in the process that have received a message from the channel in the past. This boost prevents a priority inversion of the client in the case where all threads are currently working on behalf of other clients, perhaps at a lower priority. For more information, see Server boost in the Interprocess Communication chapter of the System Architecture guide.

Priority inheritance can be disabled by setting _NTO_CHF_FIXED_PRIORITY in the flags argument. In this case a thread's priority isn't affected by messages it receives on a channel.

A manager typically involves the following loop. There may be one or more threads in the loop at a time. Typically your program calls ChannelCreate() only once, and all threads block on that channel.

chid = ChannelCreate(flags);
for(;;) {
    rcvid = MsgReceive(chid, &msg, sizeof(msg), &info);

    if(rcvid == -1) {
        /* handle error */
    } else if(rcvid == 0) {
        /* handle pulse */
    } else {
        /* handle message; msg is filled in by MsgReceive() and is
           typically identified by a message type */
        switch(msg.type) {
            /* process the message by type ... */
        }
        MsgReply(rcvid, status, &reply, sizeof(reply));
    }
}

Some of the channel flags in the flags argument request changes from the default behavior; others request notification pulses from the kernel. The pulses are received by MsgReceive() on the channel and are described by a _pulse structure.

The channel flags and (where appropriate) associated values for the pulse's code and value are described below:

_NTO_CHF_COID_DISCONNECT
Deliver a pulse to this channel for each connection that belongs to the calling process when the channel that the connection is attached to is destroyed.

In this scenario, your process is acting as a client to a server. If the server's channel is destroyed, a pulse is sent to your new channel (the one you're creating with this call) for each connection from your process to the server's channel.

Pulse code: _PULSE_CODE_COIDDEATH
Pulse value: Connection ID (coid) of a connection that was attached to the destroyed channel

Note that if a server exits or closes its channel at more or less the same time that the client closes a connection to the channel, the kernel might or might not send a _PULSE_CODE_COIDDEATH pulse to the client. If the client then opens a new connection to another server before getting the pulse, the pulse will seem to indicate that it's the new server that has died. Your code for handling the _PULSE_CODE_COIDDEATH pulse needs to include something like this:

void got_pulse(struct _pulse *pulse) {
    if(pulse->code == _PULSE_CODE_COIDDEATH) {
        int coid = pulse->value.sival_int;

        if(ConnectServerInfo(0, coid, NULL) != coid) {
            /* server's really gone, so clean up the connection state */
        } else {
            /* stale pulse; we can probably ignore it */
        }
    }
}
_NTO_CHF_DISCONNECT
Deliver a pulse when all connections from a client process to this channel are detached (e.g., by close(), ConnectDetach(), or name_close()).

In this scenario, your process is acting as a server. If the client detaches all of its connections to the channel (the one you're creating with this call), a pulse is sent to the channel.

Pulse code: _PULSE_CODE_DISCONNECT
Pulse value: None; the server connection ID is delivered in the scoid member of the _pulse structure

If a process dies without detaching all its connections, the kernel detaches them for it. When this flag is set, the server must call ConnectDetach(scoid), where scoid is the server connection ID in the pulse message. Failure to do so leaves an invalid server connection ID that can't be reused; over time, the server may run out of available IDs. If this flag isn't set, the kernel removes the server connection ID automatically, making it available for reuse.
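A server that sets this flag might handle the disconnect pulse along these lines (a sketch: cleanup_client_state() is a hypothetical helper, and the pulse is assumed to arrive with code _PULSE_CODE_DISCONNECT):

```c
/* Inside the server's receive loop, when MsgReceive() returns 0
   and has filled in a struct _pulse named pulse: */
if (pulse.code == _PULSE_CODE_DISCONNECT) {
    /* All of this client's connections are gone; release any
       per-client state keyed by the server connection ID. */
    cleanup_client_state(pulse.scoid);   /* hypothetical helper */

    /* Required when this flag is set, so the kernel can reuse
       the server connection ID. */
    ConnectDetach(pulse.scoid);
}
```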

_NTO_CHF_FIXED_PRIORITY
Suppress priority inheritance when receiving messages: receiving threads won't change their priorities to those of the sending threads. If you're using adaptive partitioning, the receiving threads also won't run in the sending threads' partitions.
_NTO_CHF_INHERIT_RUNMASK
(QNX Neutrino 7.0.4 or later) When a thread receives a message on the channel, it inherits the sender's runmask; when it receives a pulse on the channel, it reverts to its default runmask. If the receiving thread is running on a processor that the runmask excludes, the thread is rescheduled.
_NTO_CHF_MSG_PAUSING
Inform the kernel that the server supports message pausing by implementing the pause/resume protocol, which resolves page faults in clients' message-passing buffers.

Pausing a message allows the kernel to avoid deadlock. When the kernel resolves things (possibly by loading an appropriate page of memory), it sends a _PULSE_CODE_RESTART to the server, which can then try again to read, write, or reply to the message. For more information, see the entry for MsgPause().

_NTO_CHF_NET_MSG
Reserved for the io_pkt* resource manager.
_NTO_CHF_PRIVATE
Mark the channel as private. There are some cases where it's useful to have a channel for a process's internal use, but you don't want other processes to be able to attach to it. If another process calls ConnectAttach() to attach to your process's private channel, the call fails with an error code of EPERM.

In order to create a public channel (i.e., without _NTO_CHF_PRIVATE set), your process must have the PROCMGR_AID_PUBLIC_CHANNEL ability enabled. For more information, see procmgr_ability().

_NTO_CHF_THREAD_DEATH
Deliver a pulse on the death of any thread in the process that owns the channel.
Pulse code: _PULSE_CODE_THREADDEATH
Pulse value: ID of the thread that died
_NTO_CHF_UNBLOCK
Deliver a pulse when a thread that's REPLY-blocked on the channel attempts to unblock before its message is replied to; that is, between the server's MsgReceive() and MsgReplyv(). The sending thread may be unblocked because of a signal or a kernel timeout.
Pulse code: _PULSE_CODE_UNBLOCK
Pulse value: Receive ID (rcvid)
Note: In most cases, you'll set the _NTO_CHF_UNBLOCK flag.

If the sending thread unblocks, MsgReplyv() fails. The manager may not be in a position to handle this failure. It's also possible that the client will die because of the signal and never send another message. If the manager is holding onto resources for the client (such as an open file), it may want to receive notification that the client wants to break out of its MsgSendv().

Setting the _NTO_CHF_UNBLOCK bit in flags prevents a thread that's in the REPLY-blocked state from unblocking. Instead, a pulse is sent to the channel, informing the manager that the client wishes to unblock. In the case of a signal, the signal will be pending on the client thread. When the manager replies, the client is unblocked and at that point, any pending signals are acted upon. From the client's point of view, its MsgSendv() will have completed normally and any signal will have arrived on the opcode following the successful kernel call.

When the manager receives the pulse, it can do one of these things:

  • If it believes that it will be replying shortly, it can discard the pulse, resulting in a small latency in the unblocking, or it can signal the client. A short blocking request to a filesystem often takes this approach.
  • If the reply is going to take some time or an unknown amount of time, the manager should cancel the current operation and reply back with an error or whatever data is available at this time in the reply message to the client thread. A request to a device manager waiting for input would take this approach.
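The second approach can be sketched as follows, assuming the server uses MsgError() to unblock the client with an error; abort_client_operation() is a hypothetical helper:

```c
/* Inside the server's receive loop, when MsgReceive() returns 0
   and has filled in a struct _pulse named pulse: */
if (pulse.code == _PULSE_CODE_UNBLOCK) {
    int client_rcvid = pulse.value.sival_int;

    /* Cancel whatever we were doing on this client's behalf... */
    abort_client_operation(client_rcvid);  /* hypothetical helper */

    /* ...and unblock the client with an error. Any signal pending
       on the client thread is acted on once it's unblocked. */
    MsgError(client_rcvid, EINTR);
}
```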

Creating a private pulse pool

Under normal circumstances, pulses are allocated from a global pool when they're delivered and there isn't a thread waiting to receive them. Doing the allocation in this context may be undesirable as pulses are meant to be small, low-overhead messages. Privileged and unprivileged processes share the same global pool, and unprivileged processes can interfere by receiving and not handling large numbers of pulses. Additionally, pulse delivery may fail in low-memory situations.

Creating a private pulse pool for a channel avoids both of these problems.

The ChannelCreatePulsePool() function lets you create channels with fixed pools of pulses that ensure well-behaved servers can receive pulses under most circumstances. The pool itself is allocated when the kernel channel object is created.

When a pulse is sent to a channel that has its own pulse pool, and there's no thread available to receive the pulse, the pulse pool is used. If there are no available pulses in the pool, by default the channel owner is terminated with a SIGKILL. To specify the attributes of the private pulse pool, including what to do when there are no available pulses, use the struct nto_channel_config that the config argument to ChannelCreatePulsePool() points to:

struct nto_channel_config {
    struct sigevent event;       /* event to deliver when a pulse can't be obtained */
    unsigned num_pulses;         /* number of pulses in the pool */
    unsigned rearm_threshold;    /* flow control for the event */
    unsigned options;            /* _NTO_CHO_* flags */
    unsigned reserved[3];
};

The members include the following:

event
A sigevent that you want delivered when a pulse can't be obtained from the pool. The event type must be SIGEV_SEM or SIGEV_NONE. To use this event, you must set _NTO_CHO_CUSTOM_EVENT in the options member.

Providing a sigevent of type SIGEV_NONE allows a channel to silently drop pulses, but dropping pulses can leave the system in an inconsistent state. For example:

  • Dropping a pulse from an interrupt sigevent could cause a driver to leave an interrupt masked.
  • Dropping a disconnect pulse could leak a scoid or client entry in a server (e.g., a resource manager).
num_pulses
The number of pulses in the pool.
rearm_threshold
A basic flow-control mechanism for the dropped-pulse notification. Once the event is delivered, no further event is delivered until the pool utilization drops below the threshold:
  • If rearm_threshold is 0, the notification fires once and never rearms.
  • If it's from 1 through num_pulses, the notification fires, then rearms when the pool utilization drops below the value of rearm_threshold.
  • If it's greater than num_pulses, the notification is permanently armed. Be careful not to overwhelm the system.
options
A bitwise OR of zero or more of the following flags:
  • _NTO_CHO_CUSTOM_EVENT — use the sigevent specified in the event member to notify the channel's owner when a pulse can't be allocated from the pool. If you don't set this flag and a pulse can't be allocated, a SIGKILL is delivered to the channel owner.
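Putting this together, a minimal sketch of creating a channel with a private pulse pool; the pool size, threshold, and semaphore-based event are illustrative, and the SIGEV_SEM_INIT() macro is assumed to be available:

```c
#include <string.h>
#include <semaphore.h>
#include <sys/neutrino.h>

static sem_t pool_low_sem;

int create_pool_channel(void) {
    struct nto_channel_config config;

    memset(&config, 0, sizeof(config));
    config.num_pulses = 100;        /* illustrative pool size */
    config.rearm_threshold = 50;    /* rearm when utilization drops below 50 */
    config.options = _NTO_CHO_CUSTOM_EVENT;

    /* Deliver a semaphore event instead of a SIGKILL when the pool
       is exhausted; a monitoring thread can sem_wait() on it. */
    sem_init(&pool_low_sem, 0, 0);
    SIGEV_SEM_INIT(&config.event, &pool_low_sem);

    return ChannelCreatePulsePool(0, &config);
}
```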

Blocking states

These calls don't block.


The channel ID of the newly created channel. If an error occurs:

  • ChannelCreate() and ChannelCreatePulsePool() return -1 and set errno.
  • ChannelCreate_r() returns the negative of a value from the Errors section and doesn't set errno.


EAGAIN
All kernel channel objects are in use.
EINVAL
(ChannelCreatePulsePool() only) An argument was invalid.
ENOMEM
(ChannelCreatePulsePool() only) There wasn't enough memory to create the pulse pool.
EOVERFLOW
(ChannelCreatePulsePool() only) An arithmetic overflow occurred because num_pulses was too large.
EPERM
The process tried to create a public channel without having the required permission; see procmgr_ability().
ENOTSUP
Unsupported channel flag requested.


QNX Neutrino

Cancellation point No
Interrupt handler No
Signal handler Yes
Thread Yes