Operating periodically

So how does the diagnostics system “operate periodically”? You can imagine some process in the car's CPU that does something similar to the following:

// Diagnostics Process

#include <stdlib.h>     // for EXIT_SUCCESS
#include <unistd.h>     // for sleep()

extern void perform_diagnostics (void);    // one round of diagnostics

int
main (void)     // ignore arguments here
{
    for (;;) {
        perform_diagnostics ();
        sleep (15);     // snooze for 15 seconds
    }

    // You'll never get here.
    return (EXIT_SUCCESS);
}

Here we see that the diagnostics process runs forever. It performs one round of diagnostics and then goes to sleep for 15 seconds, wakes up, goes through the loop again, and again, ...

Way back in the dim, dark days of single-tasking, when one CPU was dedicated to one user, these sorts of programs were implemented by having the sleep (15); call do a busy-wait loop. You'd calculate how fast your CPU was and then write your own sleep() function:

void
sleep (int nseconds)
{
    long    i;

    while (nseconds--) {
        // CALIBRATED_VALUE is the number of iterations you measured
        // this particular CPU to take to burn up one second.
        for (i = 0; i < CALIBRATED_VALUE; i++) ;
    }
}

In those days, since nothing else was running on the machine, this didn't present much of a problem, because no other process cared that you were hogging 100% of the CPU in the sleep() function.

Note: Even today, we sometimes hog 100% of the CPU to do timing functions. Notably, the nanospin() function is used to obtain very fine-grained timing, but it does so at the expense of burning CPU at its priority. Use with caution!
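For example, to burn through a delay too short for the tick-based timers, you hand nanospin() a timespec describing how long to spin (QNX-specific; the helper name and the 50-microsecond value here are just for illustration):

#include <time.h>

void
tiny_delay (void)       // hypothetical helper
{
    struct timespec when;

    when.tv_sec = 0;
    when.tv_nsec = 50000;   // 50 microseconds

    nanospin (&when);       // busy-waits; burns CPU at this thread's priority
}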

If you did have to perform some form of “multitasking,” it was usually done via an interrupt routine that would hang off the hardware timer or be performed within the “busy-wait” period, somewhat affecting the calibration of the timing. This usually wasn't a concern.

Luckily we've progressed far beyond that point. Recall, from “Scheduling and the real world” in the Processes and Threads chapter, what causes the kernel to reschedule threads:

  • a hardware interrupt
  • a kernel call
  • a fault (exception)

In this chapter, we're concerned with the first two items on the list: the hardware interrupt and the kernel call.

When a thread calls sleep(), the C library contains code that eventually makes a kernel call. This call tells the kernel, “Put this thread on hold for a fixed amount of time.” The call removes the thread from the running queue and starts a timer.
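As a sketch of that path, here's how a sleep()-style function might be written in terms of nanosleep(), a POSIX call that ends up in the kernel. (The name my_sleep() is hypothetical, chosen so it doesn't shadow the real sleep().)

#include <time.h>

unsigned
my_sleep (unsigned nseconds)
{
    struct timespec ts, remaining;

    ts.tv_sec = nseconds;   // whole seconds to hold the thread
    ts.tv_nsec = 0;

    // nanosleep() is where the kernel call happens; the kernel takes
    // this thread off the running queue and starts a timer.
    if (nanosleep (&ts, &remaining) == -1) {
        return ((unsigned) remaining.tv_sec);   // interrupted early; seconds left
    }
    return (0);
}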

Meanwhile, the kernel has been receiving regular hardware interrupts from the computer's clock hardware. Let's say, for argument's sake, that these hardware interrupts occur at exactly 10-millisecond intervals.

Let's restate: every time one of these interrupts is handled by the kernel's clock interrupt service routine (ISR), it means that 10 ms have gone by. The kernel keeps track of the time of day by incrementing its time-of-day variable by an amount corresponding to 10 ms every time the ISR runs.
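In sketch form (made-up names, not actual kernel source), that bookkeeping amounts to:

#define NSEC_PER_TICK   10000000ULL         // 10 ms per tick, per our example

static unsigned long long time_of_day_nsec; // the kernel's time-of-day variable

void
clock_isr (void)
{
    // Another clock interrupt was handled, so another 10 ms have gone by.
    time_of_day_nsec += NSEC_PER_TICK;
}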

So when the kernel implements a 15-second timer, all it's really doing is:

  1. Setting a variable to the current time plus 15 seconds.
  2. In the clock ISR, comparing this variable against the time of day.
  3. When the time of day is the same as (or greater than) the variable, putting the thread back onto the READY queue.
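Here's that three-step recipe as code (hypothetical names again, building on the time_of_day_nsec and NSEC_PER_TICK sketch above):

extern void *sleeping_thread;               // the thread that called sleep()
extern void make_ready (void *thread);      // hypothetical: put a thread on READY

static unsigned long long expiry_nsec;      // step 1's variable

void
arm_timer (void)
{
    // Step 1: current time plus 15 seconds.
    expiry_nsec = time_of_day_nsec + 15ULL * 1000000000ULL;
}

void
clock_isr (void)
{
    time_of_day_nsec += NSEC_PER_TICK;      // another 10 ms have gone by

    // Steps 2 and 3: compare, and wake the thread once we're at
    // (or past) the expiry time.
    if (time_of_day_nsec >= expiry_nsec) {
        make_ready (sleeping_thread);
    }
}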

When multiple timers are outstanding, as would be the case if several threads all needed to be woken at different times, the kernel simply queues the requests, sorted by expiry time: the nearest one sits at the head of the queue, and so on. The variable that the ISR looks at is the one at the head of this queue.
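A sketch of that queue (hypothetical structures; the real kernel's internals are more elaborate):

#include <stddef.h>     // for NULL

struct timer_entry {
    unsigned long long   expiry_nsec;   // absolute expiry time
    struct timer_entry  *next;          // next-soonest timer
};

static struct timer_entry *timer_queue; // head == nearest expiry

void
enqueue_timer (struct timer_entry *t)
{
    struct timer_entry **pp = &timer_queue;

    // Walk forward until we find an entry that expires later than ours...
    while (*pp != NULL && (*pp)->expiry_nsec <= t->expiry_nsec) {
        pp = &(*pp)->next;
    }
    t->next = *pp;  // ...and splice ourselves in just before it.
    *pp = t;
}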

Note: At most 50 timer events are generated per clock tick, to limit the consumption of system resources. If the system has a large number of timers set to expire in the same clock period, then at most 50 of the timer events will actually trigger on time; the rest will be handled on the next clock tick (subject to the same limitation of at most 50 timer events).
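Continuing the sketch, the ISR's comparison becomes a short drain loop over the head of the queue, with the 50-event cap applied:

#define MAX_EXPIRIES_PER_TICK   50

extern void fire_timer (struct timer_entry *t);     // hypothetical wakeup

void
expire_timers (void)    // called from the clock ISR, after the tick is counted
{
    int fired = 0;

    while (timer_queue != NULL
           && timer_queue->expiry_nsec <= time_of_day_nsec
           && fired < MAX_EXPIRIES_PER_TICK) {
        struct timer_entry *t = timer_queue;

        timer_queue = t->next;  // pop the head
        fire_timer (t);         // e.g., make the waiting thread READY
        fired++;
    }
    // Anything still expired is picked up on the next tick.
}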

That's the end of the timer five-cent tour.

Actually, there's a little bit more to it than first meets the eye.