"Write hole" in filesystems

Unsurprisingly, filesystems are also susceptible to damage when a power failure occurs during a write. The simplest example is deleting a file. If the clusters are deallocated (marked free) first, and a power failure occurs before the file record is removed, we end up with a file whose data is stored in clusters marked free on the disk. If a new file is subsequently created and uses the same clusters, a cross-link situation occurs, potentially leading to data loss.
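To make the crash window concrete, here is a minimal toy model in C. The bitmap, file record, and function names are all invented for illustration; this is not any real filesystem's layout, just the unsafe ordering reduced to two steps.

#include <stdio.h>

#define NCLUSTERS 16

/* Toy model: a cluster allocation bitmap and a single file record. */
static int cluster_used[NCLUSTERS] = { [3] = 1, [4] = 1, [5] = 1 };
static int file_clusters[] = { 3, 4, 5, -1 };  /* clusters of the file */
static int file_exists = 1;                    /* the file record      */

/* Unsafe ordering: clusters are freed BEFORE the record is removed. */
static void delete_file_unsafe(int crash_midway)
{
    for (int i = 0; file_clusters[i] != -1; i++)
        cluster_used[file_clusters[i]] = 0;    /* step 1: free clusters   */
    if (crash_midway)
        return;                                /* simulated power failure */
    file_exists = 0;                           /* step 2: drop the record */
}

int main(void)
{
    delete_file_unsafe(1);   /* the crash strikes between the two steps */
    if (file_exists && !cluster_used[3])
        printf("hazard: live file record points at free cluster 3;\n"
               "a new file allocating it would become cross-linked\n");
    return 0;
}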

There are several ways around this problem.

Careful write ordering. The sequence of operations can be arranged so that the damage caused by an incomplete write is predictable, easy to repair, and confined to a single file. This is the cheapest option: it requires no change to the on-disk structures, so it can be implemented on an existing filesystem. A sketch of this idea follows below.
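The same toy model as above, with the order of the two steps reversed (again, an illustrative sketch, not real filesystem code):

#include <stdio.h>

#define NCLUSTERS 16

static int cluster_used[NCLUSTERS] = { [3] = 1, [4] = 1, [5] = 1 };
static int file_clusters[] = { 3, 4, 5, -1 };
static int file_exists = 1;

/* Careful ordering: drop the record first, free the clusters second.
   A crash between the steps only leaks clusters (allocated but
   unreferenced), which a consistency checker can later reclaim;
   no cross-link is possible. */
static void delete_file_ordered(int crash_midway)
{
    file_exists = 0;                           /* step 1: drop the record */
    if (crash_midway)
        return;                                /* simulated power failure */
    for (int i = 0; file_clusters[i] != -1; i++)
        cluster_used[file_clusters[i]] = 0;    /* step 2: free clusters   */
}

int main(void)
{
    delete_file_ordered(1);  /* the crash strikes between the two steps */
    printf("record removed; cluster 3 %s\n",
           cluster_used[3] ? "leaked (repairable, no cross-link)" : "freed");
    return 0;
}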

Multisector transfer protection (used e.g. in NTFS). If several sectors are to be written out as a group, each sector in the group stores a specific signature. When the group is later read, the driver verifies the signatures in all sectors of the group. Should the signatures not match, the data is rejected as corrupt. This provides error detection, but not correction.
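A rough sketch of the mechanism, loosely modeled on NTFS update sequence numbers but simplified: the same sequence value is stamped into the tail of every sector before the group is written, so a torn write leaves mismatched stamps. (Real NTFS also saves the overwritten tail bytes in an update sequence array so they can be restored on read; that part is omitted here.)

#include <stdio.h>
#include <stdint.h>
#include <string.h>

#define SECTOR_SIZE 512
#define SECTORS_PER_GROUP 4

/* Stamp the same sequence value into the last two bytes of each sector. */
static void stamp_group(uint8_t *buf, uint16_t seq)
{
    for (int s = 0; s < SECTORS_PER_GROUP; s++)
        memcpy(buf + (s + 1) * SECTOR_SIZE - sizeof seq, &seq, sizeof seq);
}

/* Verify that all sectors in the group carry the same stamp. */
static int verify_group(const uint8_t *buf)
{
    uint16_t first, cur;
    memcpy(&first, buf + SECTOR_SIZE - sizeof first, sizeof first);
    for (int s = 1; s < SECTORS_PER_GROUP; s++) {
        memcpy(&cur, buf + (s + 1) * SECTOR_SIZE - sizeof cur, sizeof cur);
        if (cur != first)
            return 0;   /* torn write: reject the group as corrupt */
    }
    return 1;
}

int main(void)
{
    uint8_t group[SECTOR_SIZE * SECTORS_PER_GROUP] = {0};

    stamp_group(group, 7);
    printf("intact group valid: %d\n", verify_group(group));

    /* Simulate a power failure after only two sectors reached the disk:
       the last two sectors still carry the previous sequence value. */
    uint16_t old = 6;
    memcpy(group + 3 * SECTOR_SIZE - sizeof old, &old, sizeof old);
    memcpy(group + 4 * SECTOR_SIZE - sizeof old, &old, sizeof old);
    printf("torn group valid:   %d\n", verify_group(group));
    return 0;
}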

Intention logging is the most complex option, similar to database transaction logging. The filesystem driver ensures the so-called atomicity of certain operations, meaning that an operation either completes entirely or no change occurs at all. This approach is implemented in most modern high-capacity filesystems, the most widespread being NTFS and ext3/ext4.
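A minimal intention-log sketch, assuming an invented single-slot log rather than any real journal format: the intention is recorded (and, on real hardware, flushed to stable storage) before the metadata is touched, so recovery after a crash can replay the operation and complete it.

#include <stdio.h>

enum { LOG_EMPTY, LOG_INTENT, LOG_DONE };

struct journal {
    int  state;
    char op[64];            /* description of the pending operation */
};

static struct journal log_rec;   /* stands in for the on-disk log    */
static int metadata_applied;     /* stands in for the real change    */

static void do_delete(const char *name, int crash_after_log)
{
    /* 1. Record the intention; a real driver flushes this to disk
          before proceeding. */
    snprintf(log_rec.op, sizeof log_rec.op, "delete %s", name);
    log_rec.state = LOG_INTENT;
    if (crash_after_log)
        return;                  /* simulated power failure */
    /* 2. Apply the actual metadata change. */
    metadata_applied = 1;
    /* 3. Mark the transaction complete. */
    log_rec.state = LOG_DONE;
}

static void recover(void)
{
    if (log_rec.state == LOG_INTENT) {
        /* Replay the logged operation so it completes entirely. */
        printf("recovery: replaying '%s'\n", log_rec.op);
        metadata_applied = 1;
        log_rec.state = LOG_DONE;
    }
}

int main(void)
{
    do_delete("report.doc", 1);  /* crash between log and apply      */
    recover();                   /* next mount: the log is replayed  */
    printf("metadata applied: %d\n", metadata_applied);
    return 0;
}

Either way the atomicity guarantee holds: had the crash hit before the intent record reached the disk, recovery would simply find no pending transaction and the operation would appear never to have started.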
