Dinkum Threads Library

A C or C++ program can call on a number of functions from the Dinkum Threads Library, a portable library for managing multiple threads of control.

Threads Table of Contents

"Dinkum/threads/threads.h" · "Dinkum/threads/xtimec.h"

"Dinkum/threads/condition" · "Dinkum/threads/exceptions" · "Dinkum/threads/mutex" · "Dinkum/threads/once" · "Dinkum/threads/recursive_mutex" · "Dinkum/threads/tss" · "Dinkum/threads/xtime"

C Interface · C++ Interface

Overview · Memory Visibility · Condition Variables · Mutexes · Once Functions · Thread-specific Storage


Overview

A thread is a separate flow of execution within an application. On a multi-processor system, threads can execute simultaneously on different processors. On a single-processor system, and on a multi-processor system with fewer available processors than active threads, two or more threads must share a processor. The details of switching a processor from one thread to another are handled by the operating system.

The Dinkum Threads Library lets you create and control multiple threads, and synchronize the sharing of data between these threads. It consists of compatible and complementary interfaces for programming in either C or C++. The C Interface is very similar to the thread support interface defined in the Posix Standard (also known as pthreads), while the C++ Interface is very similar to the boost.threads library for C++.

When a C or C++ program begins execution it runs in a single thread, executing the main function. The program can create additional threads as needed. Each thread has its own copy of all auto variables, so auto values in one thread are independent of auto values in the other threads. Data with static storage duration is accessible to all threads, so those values are shared. However, changes to values in shared data often are not immediately visible in other threads. A multi-threaded application uses condition variables, mutexes, and once functions to coordinate the use of shared data by its threads, in order to ensure that shared data is not made inconsistent by simultaneous changes from more than one thread, that changes to shared data are visible to a thread when needed, and that a thread that needs data that is being created by another thread can be notified when that data becomes available.

Memory Visibility

Changes made by one thread to values in shared data often are not immediately visible in other threads. For example, consider a system with two processors, each running one thread, where the two processors simply share main memory. If one processor writes a data value while the other reads the same value, the reader could see a value that the writer has only partially changed. To avoid this inconsistency, one processor has to lock the other one out until it finishes. This locking is usually done in hardware and is known as a bus lock. Bus locks are unavoidable, but they slow the processors down. To minimize the effect of this slowdown, multi-processor systems give each processor its own cache memory, which holds a copy of some of the data in main memory. When a processor writes data it writes to its cache; sometime later the changes made to the cache are written to main memory. Thus, the processors can each have a different value for a data item in their caches, and those values can differ from the value in main memory.

There are three times at which changes made to memory by one thread are guaranteed to be visible in another thread: when a thread locks a mutex, it sees the changes made by threads that previously held that mutex; when a thread returns from a function that waits for a condition variable, it sees the changes made by threads that called a wait function or a notify function before the return; and when a thread returns from call_once, it sees the changes made by the once function.

In practice this means that a thread should hold a mutex while it writes to shared data, and should hold the same mutex while it reads that data, so that the reader is guaranteed to see the changes made by earlier writers.

Note, however, that locking a mutex to prevent modification of shared data while it is being read also prevents other threads from locking the mutex in order to read the data. Such critical sections should be kept as short as possible to avoid blocking other threads any longer than necessary.

Condition Variables

A condition variable is used by a thread to wait until another thread notifies it that a condition has become true. Code that waits for a condition variable must also use a mutex: before calling any of the functions that wait for the condition variable, the calling thread must lock the mutex, and when the called function returns, the mutex will again be locked. During the time that a thread is blocked waiting for the condition to become true, the mutex is not locked.

Spurious wakeups occur when threads waiting for condition variables become unblocked without appropriate notifications. Code that waits for a condition to become true should explicitly check that condition when returning from a wait function to recognize such spurious wakeups. This is usually done with a loop:

while (condition is false)
    wait for condition variable;

The condition variable functions use a mutex internally; when a thread returns from a wait function any changes made to memory by threads that called a wait function or a notify function before the return will be visible to the caller.


Mutexes

A mutex is used to ensure that only one thread executes a region of code, known as a critical section, at any one time. On entry into the critical section the code locks the mutex; if no other thread holds the mutex, the lock operation succeeds and the calling thread holds the mutex. On exit from the critical section the code unlocks the mutex. If another thread holds the mutex when a thread tries to lock it, the thread that tried to lock the mutex blocks until the mutex is unlocked. When more than one thread is blocked waiting for the mutex, an unlock releases one of the blocked threads.

A mutex can be recursive or non-recursive. When a thread that already holds a recursive mutex attempts to lock it again, the thread does not block; the thread must then unlock the mutex as many times as it locked it before any other thread will be permitted to lock the mutex. When a thread that already holds a non-recursive mutex attempts to lock it again, the thread blocks. Since the blocked thread can never unlock the mutex, the result is a deadlock. Non-recursive mutexes are usually smaller and faster than recursive mutexes, so a properly written program that uses non-recursive mutexes can be faster than one that uses recursive mutexes.

A mutex supports test and return if it provides a lock call that does not block if the mutex is already locked. Such a lock call returns a value that indicates whether the mutex was locked as a result of the call.

A mutex supports timeout if it provides a lock call that blocks until no later than a specified time waiting for the mutex to be unlocked. Such a lock call returns a value that indicates whether the mutex was locked as a result of the call.

Once Functions

A once function is a function that should only be called once during a program's execution. Once functions are typically used to initialize data that is shared between threads: the first thread that needs the data initializes it by calling the once function, and later threads that need the data do not call the once function. Each once function should have an associated once flag, statically initialized to indicate that the function has not been called. Code that needs to ensure that the once function has been called calls call_once, passing the flag and the address of the once function. The code in call_once atomically checks the flag and, if the flag indicates that the function has not been called, calls the once function and sets the flag to indicate that the function has been called.

The function call_once uses a mutex internally; when it returns any changes made to memory by the once function will be visible to the caller.

Thread-specific Storage

Thread-specific storage is global data that can hold a distinct value for each thread that uses it. This permits functions executing in a single thread to share data without interfering with the data shared by the same functions when executing in other threads.

See also the Table of Contents and the Index.

Copyright © 1992-2013 by Dinkumware, Ltd. Portions derived from work copyright © 2001 by William E. Kempf. All rights reserved.