CPU Utilization As a Function of Number of Checksum Words
The performance of disk I/O integrity checking is an explicit tradeoff between the amount of data sampled and CPU utilization. As the number of words per disk block used to generate a checksum increases, the probability of detecting bit, byte, and byte-string corruption increases, but with diminishing returns. The additional computation required to generate more thorough checksum values also increases CPU utilization.
Because it is not possible to know in advance how many words must be checked per disk block to balance corruption detection against CPU utilization, the number of words to check is user-tunable. You can specify a sample count ranging from 0 to 64 of the 64-bit words in each disk block, at both the system and table levels. The default sample count is one word per disk block.
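The sampling tradeoff can be illustrated with a minimal sketch. The block size, word layout, and XOR-fold combining step below are assumptions chosen for illustration, not the product's actual checksum algorithm; the point is only that a higher sample count covers more of the block at proportionally higher CPU cost, while a count of 0 disables checking.

```python
import struct

BLOCK_SIZE = 4096  # bytes; hypothetical disk block size
WORD_SIZE = 8      # each sampled word is 64 bits

def sampled_checksum(block: bytes, sample_count: int) -> int:
    """XOR-fold `sample_count` evenly spaced 64-bit words from `block`.

    sample_count = 0 disables checking (returns a constant).
    Higher counts sample more of the block, improving the odds of
    catching corruption, at the cost of more word reads per block.
    """
    if sample_count == 0:
        return 0
    total_words = len(block) // WORD_SIZE
    sample_count = min(sample_count, total_words)
    step = total_words // sample_count  # spread samples across the block
    checksum = 0
    for i in range(sample_count):
        offset = i * step * WORD_SIZE
        (word,) = struct.unpack_from("<Q", block, offset)
        checksum ^= word
    return checksum
```

A corrupted byte changes the checksum only if the word containing it happens to be sampled, which is why detection probability rises with the sample count.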