Filling a Drive with Random Data: urandom, dd and Patience

Filling a block device with random data is a good first step when setting up disk encryption, particularly where there will be no partition table or the partition table itself is to be encrypted. The objective is to make encrypted data indistinguishable from free space, obscuring metrics like file and partition sizes. A single pass is not enough to completely obliterate every trace of previously existing files, but tools like shred can be used where that matters.

The Linux kernel provides two sources of random numbers: /dev/random and /dev/urandom. /dev/random is the more conservative of the two; it draws on hardware entropy sources such as input devices and interrupt timings, so its output is considered unpredictable even to an attacker who knows the generator. Consequently, random only produces output as entropy becomes available, which rules it out for anything intensive like overwriting a hard drive. Urandom is a software pseudo-random generator: it stretches the kernel's entropy pool through deterministic algorithms, so in theory its output is reproducible given complete knowledge of its internal state. The upside is that it is relatively fast while providing arguably more than enough randomness for disk encryption camouflage.
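Both interfaces are ordinary character devices, so you can sample them directly to see the difference in behaviour (a quick illustration; on older kernels the /dev/random read may stall):

```shell
# Both devices hand back raw bytes on read; hex-dump a small sample.
head -c 16 /dev/urandom | od -An -tx1    # returns immediately
# head -c 16 /dev/random                 # may block on old kernels until
                                         # enough entropy has accumulated
```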

# dd if=/dev/urandom of=/dev/sdb bs=1M

dd is a low-level file Swiss army knife. The command above instructs dd to read from input file /dev/urandom and write to output file /dev/sdb (our second SATA hard drive). The default block size of 512 bytes is too small to make efficient use of a hard drive's write bandwidth, so be sure to add bs=1M or more if you don't want the job to take weeks.

Unfortunately, urandom is computationally intensive enough that it is unlikely to pump out data as fast as your drive can take it without maxing out a CPU core. On multi-core systems you can run multiple instances of dd in parallel, each covering a different region of the drive via the seek flag.
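One way that trick can look (a sketch, written against a scratch file so it can be tried safely; for a real run you would set TARGET to the device, e.g. /dev/sdb, and HALF to half the drive's size in MiB):

```shell
# Scratch-file stand-in for the drive; TARGET and HALF are placeholders.
# A 2 TB drive like the one above is about 1907728 MiB, so HALF would
# be roughly 953864 there.
TARGET=$(mktemp)
HALF=2    # MiB per dd instance - tiny, for demonstration only

# Instance one covers blocks 0..HALF-1; instance two seeks past them on
# the output and writes the next HALF blocks. Each instance gets its own
# CPU core to run urandom on. conv=notrunc stops the second dd from
# truncating the scratch file (it is a no-op on a real block device).
dd if=/dev/urandom of="$TARGET" bs=1M count="$HALF" conv=notrunc status=none &
dd if=/dev/urandom of="$TARGET" bs=1M seek="$HALF" count="$HALF" conv=notrunc status=none &
wait
```

With two instances you roughly double throughput as long as two cores are free; more instances follow the same pattern with larger seek offsets.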

To give you an idea of the time frame you are looking at: completely overwriting a 2TB drive on a 2.4GHz Core 2 Quad kept one core at 99% utilization for 49 and a half hours:

# dd if=/dev/urandom of=/dev/sdd bs=1M
1907728+1 records in
1907727+1 records out
2000397885440 bytes (2.0 TB) copied, 177862 s, 11.2 MB/s
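Incidentally, you don't have to wait for the run to finish to see statistics like these: GNU dd prints a transfer summary whenever it receives SIGUSR1. A harmless illustration using /dev/zero and /dev/null (on a real run you would signal the dd that is writing to your drive and leave it going):

```shell
# Start a long-running dd, then ask it for progress without stopping it.
dd if=/dev/zero of=/dev/null bs=1M count=2000000 & pid=$!
sleep 1
kill -USR1 "$pid"   # dd reports records in/out and throughput to stderr
kill "$pid"         # done with the demo; a real run would be left alone
wait "$pid" 2>/dev/null || true
```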

Contrast 11.2MB/s with this drive's typical write speed of 70MB/s or more. It should be noted that both shred and openssl can be used instead if you would prefer to trade off randomness for speed, but 2 days for 2TB isn't all that bad - in my most humble of opinions - and it doubles as a sort of burn-in test for new drives.
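For completeness, here is one shape the openssl variant can take (a sketch, not a canonical invocation): encrypt an endless stream of zeroes with AES-256-CTR under a throwaway key, producing cryptographically strong pseudorandom output considerably faster than urandom. TARGET and the 4 MiB count are placeholders for demonstration; in practice you would write to the device (e.g. /dev/sdb) with no count at all:

```shell
# Throwaway key: 30 random bytes, base64-encoded so it survives as a
# shell string. The keystream, not the key, is what lands on the disk.
KEY=$(head -c 30 /dev/urandom | base64)
TARGET=$(mktemp)    # stand-in for the real device, e.g. /dev/sdb

# AES-256-CTR over /dev/zero yields an endless pseudorandom stream;
# dd meters it onto the target. Drop count= to fill a whole device.
openssl enc -aes-256-ctr -pbkdf2 -pass pass:"$KEY" < /dev/zero \
    | dd of="$TARGET" bs=1M count=4 iflag=fullblock status=none
```

Since the key is discarded afterwards, nobody - including you - can distinguish the result from random noise, which is exactly the property the camouflage needs.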
