Documentary for Dinner: Vice Fringes: Africa's Moonshine Epidemic (2012)
Vice travels to Uganda, the "drunkest place on earth" according to UN alcohol consumption statistics, to taste-test traditional Ugandan moonshine, waragi.
One of the oldest and simplest ways to back up UNIX-like systems is with rsync. Automated remote backups are a simple matter of setting up rsync as a daemon on the system you want to back up and running an appropriate rsync command as a cron job on the backup target.
Let's create /etc/rsyncd.conf on the machine to be backed up:
[etc]
path = /etc
read only = true
hosts allow = 192.168.0.0/24
hosts deny = *
uid = 0
exclude = ssh/
Any number of modules can be defined with the [module_name] convention so that different settings can be applied to different locations on the file system. Module names are used in the rsync:// protocol format instead of full path names. This block says, in English: serve the contents of /etc/, except /etc/ssh/, to any host on the 192.168.0.0/24 subnet at rsync://<server address>/etc, reading the files as root but allowing no modifications to the local file system.
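Once the daemon is running (see below) you can sanity-check the module from any allowed host by listing its contents; note that the module name, not the full path, appears in the URL. The address here is the same stand-in server used in the cron job further down:
# rsync rsync://192.168.0.10/etc/
Anything excluded in the module definition, like ssh/, should be absent from the listing.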
Read the rsyncd.conf man page for more configuration options. Your rsync package probably comes with a compatible init script; be sure to enable it for your default runlevel.
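On an OpenRC-based distribution like Gentoo (where default runlevels work as described) that amounts to the following; the service name may differ on your system:
# rc-update add rsyncd default
# /etc/init.d/rsyncd start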
On the backup server we'll drop this little bash script into /etc/cron.hourly:
#!/bin/bash
rsync -a -u --delete rsync://192.168.0.10/etc /mnt/backups/serverxyz/etc
Now, every hour, the backup target will connect to the rsync daemon and download any files which have changed or do not yet exist in the backup tree, deleting files which no longer exist on the server to save space.
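Before trusting it to cron it is worth running the same command by hand with -v added, to confirm the module is reachable and the transfer touches only what you expect:
# rsync -a -u -v --delete rsync://192.168.0.10/etc /mnt/backups/serverxyz/etc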
Recovering from a disk failure in a software RAID is a straightforward process. If your server, chipset and kernel module allow, it may be possible to replace the offending drive without downtime. This is generally not the case with cheap SATA setups, where the server will have to be powered down for the drive replacement. Although it might be safe to remove a SATA drive while the system is running, your kernel module may not support recognising the new drive without reloading or rebooting.
The failed drive will have to be identified. This is easy with hardware controllers (typically an indicator light will flash on the failed drive) but not quite as simple with software RAID. The surefire way to identify a drive is by serial number: find the device node (through cat /proc/mdstat or dmesg), then run either:
# hdparm -i /dev/{disk node}
# smartctl -a /dev/{disk node}
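For example, smartctl's identity output includes the serial number directly (/dev/sda is a stand-in for whichever node you identified):
# smartctl -i /dev/sda | grep -i serial
Match the serial against the label printed on the physical drive so you are certain to pull the right one.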
If your configuration does not support hot swapping power down the machine and replace the bad drive. If it does, you may need to remove other partitions on the target disk from their respective arrays:
# mdadm -f /dev/{md node} /dev/{partition node}
will mark the partition as failed; it can then be removed from the set:
# mdadm -r /dev/{md node} /dev/{partition node}
If you are running swap space on the failed drive be sure to disable it with swapoff before removing the disk.
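A minimal sketch, assuming the failed disk's swap partition is /dev/sdb3 (check /proc/swaps to be sure):
# cat /proc/swaps
# swapoff /dev/sdb3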
It should be possible to boot the machine off the remaining set, but if you run into trouble it is just as easy to perform these operations from a livecd.
First we need to copy the partition table from one of the healthy disks exactly. Let sda represent a healthy disk and sdb represent the new one:
# sfdisk -d /dev/sda | sfdisk /dev/sdb
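It doesn't hurt to verify the copy took before proceeding:
# sfdisk -l /dev/sdb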
Now we need to add the new partition(s) to our existing RAID set. If you are booting off a livecd and your sets were not automatically configured on boot (they should have been) use mdadm --assemble to assemble them. Then:
# mdadm /dev/md0 -a /dev/sdb1
for each set and partition which needs to be added.
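If you did have to assemble the sets by hand on a livecd, it looks something like this (the device names are examples):
# mdadm --assemble --scan
OR
# mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1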
We can watch the resynchronization status with:
# watch cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md1 : active raid1 sdb2[2] sda2[1]
      243922816 blocks [2/1] [_U]
      [================>....]  recovery = 83.8% (204426880/243922816) finish=58.3min speed=11271K/sec
md0 : active raid1 sdb1[0] sda1[1]
      272960 blocks [2/2] [UU]
The resync process will slow down considerably if there is heavy disk I/O at the same time, and you can expect below-average performance until recovery has completed.
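If you would rather get the rebuild over with at the expense of foreground performance, the kernel's minimum resync speed can be raised (the value is in KiB/s per device; 50000 is just an example):
# echo 50000 > /proc/sys/dev/raid/speed_limit_min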
Al Jazeera documentary series 101 East explores China's environmental issues, one-child policy, rapidly growing economy and changing political tides.
dm-crypt is a part of the modern Linux device-mapper system which allows a broad range of block ciphers to be applied transparently to a virtual block device. The virtual block device is configured with the cryptsetup command and can point to a real block device (i.e. a physical hard drive or partition) or to a file which has been attached to a loop device as the underlying source.
There are a lot of great reasons to use LUKS (Linux Unified Key Setup), not the least of which is the ability to encrypt the host operating system's partition or change the encrypted volume's passphrase. In this article however, we will simply be covering the mundane encryption of block devices with dm-crypt.
One of the advantages of encrypting a physical hard drive from head to toe is that there is no partition table around to leak metadata; if you followed Filling a Drive with Random Data: urandom, dd and Patience, your encrypted file system will span the entire device and any cryptographic boundaries should be undetectable.
If you will be working with a file instead of a real block device, it will be necessary to create the file and set it up on a loop device before proceeding. Just as with wiping a disk, it is recommended to initialize the file from /dev/urandom instead of /dev/zero, though you may get much the same benefit in far less time by simply creating a sparse file (please see Managing Raw Disk/File System Image Files for more details).
# dd if=/dev/urandom of=encrypted.img bs=1M count=1000
OR
# dd if=/dev/zero of=encrypted.img seek=1000 bs=1M count=0
THEN
# losetup /dev/loop0 encrypted.img
Now we're going to run the device through dm-crypt using 256-bit AES and SHA256 ESSIV. ESSIV is a method of generating initialization vectors which are difficult to predict; this helps protect against watermarking attacks. You will be asked to provide a passphrase; the longer and more complex, the better.
# cryptsetup -c aes-cbc-essiv:sha256 create encryptedVolume /dev/loop0 (or /dev/sdd, etc.)
Enter passphrase:
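You can confirm the mapping was created and inspect its cipher parameters with:
# cryptsetup status encryptedVolume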
Alternatively, you may prefer to use a large chunk of random data stored in a file, perhaps on a USB stick.
# dd if=/dev/urandom of=/mnt/usb/passphrase.key bs=1K count=4
# cat /mnt/usb/passphrase.key | cryptsetup -c aes-cbc-essiv:sha256 create encryptedVolume /dev/loop0
This method provides excellent protection against brute force attacks but may add a physical security dilemma. Consider a case where law enforcement agents have a warrant to search and seize your property; if they find the USB stick and figure out that it contains the key to your encrypted drive, they don't have to pressure you for your passphrase to use it. On the other hand, depending on where and with whom the key is stored, this approach could have benefits in a rubber-hose attack situation, as 4K of random data is virtually impossible to memorize.
Our new virtual block device is located under /dev/mapper. Now we can create the filesystem of our choice on it:
# mke2fs -j /dev/mapper/encryptedVolume
Once the filesystem is in place the device can be mounted and used like any regular block or loop device:
# mkdir /mnt/encrypted
# mount /dev/mapper/encryptedVolume /mnt/encrypted
As long as the device is available through device mapper, the contents of the encrypted volume are vulnerable to the same kinds of attacks as any other part of your system: malware, viruses, cockpit error and so on. When not in use, be sure to unmount the file system and destroy the device-mapper entry:
# umount /mnt/encrypted
# cryptsetup remove encryptedVolume
If your volume is file-backed it is now safe to unhitch it from the loop device:
# losetup -d /dev/loop0
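If you attach and detach a file-backed volume often, the whole dance can be wrapped in a small script. This is only a sketch; the script name is made up, and the paths, loop device and volume name are simply the examples used above:
#!/bin/bash
# cryptmount.sh - attach or detach the example file-backed encrypted volume
# Usage: cryptmount.sh {up|down}
case "$1" in
    up)
        losetup /dev/loop0 encrypted.img
        cryptsetup -c aes-cbc-essiv:sha256 create encryptedVolume /dev/loop0
        mount /dev/mapper/encryptedVolume /mnt/encrypted
        ;;
    down)
        umount /mnt/encrypted
        cryptsetup remove encryptedVolume
        losetup -d /dev/loop0
        ;;
    *)
        echo "Usage: $0 {up|down}" >&2
        exit 1
        ;;
esac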