Obituaries are quite simple. Recall that a client creates a connection to a server. The client then asks the server to perform various tasks, and the client blocks until the server receives the message and replies. When the server replies, the client unblocks.

One way of receiving an obituary is to have the client send the server a message, stating “please do not reply to this message, ever.” While the server is running, the client thread is blocked, and when the server faults, the kernel will automatically unblock the client with an error. When the client thread unblocks, it has implicitly received an obituary message.

A similar mechanism works in the opposite direction to notify the server of the client's death. In this case, the client calls open() to open the server, and never closes the file descriptor. If the client dies, the kernel will synthesize a close() message, which the server can interpret as an obituary. The kernel's synthetic close() looks just like the client's close()—except that the client and server have agreed that the close() message is an obituary message, and not just a normal close(). The client would never issue a normal close() call, so if the server gets one, it must mean the death of the client.

Putting this into perspective, in the warm and hot standby cases, we can arrange to have the two processes (the primary and the standby) work in a client/server relationship. Since the primary will always create the standby (so the standby can take over in the event of the primary's death), the standby can be a client of, or a server for, the primary. Using the methods outlined above, the standby can receive an instant obituary message when the primary dies.

Should the standby be a client or a server? That depends on the design of your system. In most cases, the primary process will be a server for some other, higher-level processes. This means that the standby process had better be a server as well, because it will need to take over the server functionality of the primary. Since the primary is a server, we need to look at the warm and hot standby cases separately.

In the warm standby case, we want the secondary to start up, initialize, and then go to sleep, waiting for the primary to fail. The easiest way to arrange this is for the secondary to send a message to the primary telling it to never reply to the message. When the primary dies, the kernel unblocks the secondary, and the secondary can then proceed with becoming a primary.

In the hot standby case, we want the secondary to start up, initialize, and then actively receive updates from the primary, so that it stays synchronized. Either method will work (the secondary can be a client of the primary, as in the warm standby case, or the secondary can be a server for the primary).

Implementing the secondary as a client of the primary is done by having the secondary make requests like “give me the next update,” and block, until the primary hands over the next update. Then, the secondary digests the update, and sends a message asking for the next one.

Implementing the secondary as a server for the primary means that the secondary will be doing almost exactly the same work as it would as primary—it will receive requests (in this case, only updates) from the primary, digest them, and then reply with the result. The result could be used by the primary to check the secondary, or it could simply be ignored. Either way, the secondary does need to reply in order to unblock the primary.

If the secondary is a client, it won't block the primary, but it does mean that the primary needs to keep track of transactions in a queue somewhere in case the secondary lags behind. If the secondary is a server, it blocks the primary (potentially causing the primary's clients to block as well), but it means that the code path that the secondary uses is the same as that used when it becomes primary.

Whether the secondary is a client or a server is your choice; this is one aspect of HA system design you will need to think about carefully.