Problems with existing disk filesystems
Although existing disk filesystems are designed to be robust and reliable, there's still the possibility of losing data, depending on what the filesystem is doing when a catastrophic failure (such as a power failure) occurs.
For example:
- Each sector on a hard disk includes a 4-byte error-correcting code (ECC) that the drive uses to catch hardware errors. If the drive is writing to the disk when the power fails, the heads are retracted to prevent them from crashing into the surface, leaving the sector half-written with the new content. The next time you try to read that sector, the inconsistent ECC causes the read to fail, so you lose both the old and the new content (see the first sketch after this list). You can get hard drives that offer atomic sector updates and guarantee that either all of the old data or all of the new data in the sector will be readable, but these drives are rare and expensive.
- Some filesystem operations require updating multiple on-disk data structures. For example, if a program calls unlink(), the filesystem has to update a bitmap block, a directory block, and an inode, which means it has to write three separate blocks. If the power fails between writing these blocks, the filesystem is left in an inconsistent state on the disk. Critical filesystem data, such as updates to directories, inodes, extent blocks, and the bitmap, are written synchronously to the disk in a carefully chosen order to reduce, but not eliminate, this risk (see the second sketch after this list).
- If the root directory, the bitmap, or the inode file (all in the first few blocks of the disk) gets corrupted, you can't mount the filesystem at all. You might be able to repair the filesystem manually, but you'd need to be very familiar with the details of its on-disk structure.
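The first sketch models the torn-sector case. Real drives compute the ECC in firmware (typically a much stronger code than the plain CRC-32 used here, which is an assumption for illustration only); the point is simply why a half-written sector comes back as a read error rather than as a mix of old and new data:

```c
/* Simplified model of per-sector integrity checking: a torn write
 * leaves data that no longer matches the stored code, so the read
 * fails and both the old and the new content are lost. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define SECTOR_SIZE 512

struct sector {
    uint8_t  data[SECTOR_SIZE];
    uint32_t ecc;               /* stored alongside the data on the media */
};

/* Stand-in for the drive's ECC: a plain CRC-32. */
static uint32_t crc32(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int b = 0; b < 8; b++)
            crc = (crc >> 1) ^ (0xEDB88320u & -(crc & 1u));
    }
    return ~crc;
}

/* The drive verifies the code on every read; a mismatch is reported
 * as a media error, so the host sees neither old nor new content. */
static int read_sector(const struct sector *s, uint8_t *out)
{
    if (crc32(s->data, SECTOR_SIZE) != s->ecc)
        return -1;              /* unreadable: ECC mismatch */
    memcpy(out, s->data, SECTOR_SIZE);
    return 0;
}

int main(void)
{
    struct sector s;
    memset(s.data, 'A', SECTOR_SIZE);
    s.ecc = crc32(s.data, SECTOR_SIZE);   /* consistent: old content */

    /* Power fails mid-write: half the sector now holds new content,
     * but the stored code still describes the old content. */
    memset(s.data, 'B', SECTOR_SIZE / 2);

    uint8_t buf[SECTOR_SIZE];
    if (read_sector(&s, buf) != 0)
        puts("read failed: torn write detected; old and new data both lost");
    return 0;
}
```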
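The second sketch models the unlink() case. The block layout, write order, and helper names here are hypothetical, not any particular filesystem's actual code; it only illustrates why three separate synchronous writes leave windows in which a power failure produces an inconsistent (though repairable) on-disk state:

```c
/* Hypothetical sketch of unlink(): three on-disk structures must
 * change, and the write order is chosen so that a crash between
 * writes does as little damage as possible. */
#include <stdio.h>

/* Pretend synchronous block write; a real filesystem would call the
 * driver and wait for the data to reach the media before returning. */
static void write_block_sync(const char *what)
{
    printf("sync write: %s\n", what);
    /* <-- a power failure here leaves the disk partially updated */
}

/* Remove a file: three separate blocks must be rewritten. */
static void fs_unlink(const char *name)
{
    printf("unlink(\"%s\")\n", name);

    /* 1. Clear the directory entry first, so a crash leaves an
     *    orphaned (but structurally valid) inode rather than a
     *    directory entry pointing at freed blocks. */
    write_block_sync("directory block (entry removed)");

    /* 2. Mark the inode free. */
    write_block_sync("inode block (link count -> 0)");

    /* 3. Return the file's blocks to the free-space bitmap last;
     *    a crash before this point only leaks space, which a
     *    filesystem check can later reclaim. */
    write_block_sync("bitmap block (data blocks freed)");
}

int main(void)
{
    fs_unlink("example.txt");
    return 0;
}
```

Ordering the writes this way makes the worst crash outcome leaked space rather than dangling references, but no ordering can make the three-block update atomic, which is the risk the text describes.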