Snapshot Backup using DNS-321, Review and HOWTO

After many years of faithful service, it was time to replace my backup server.  I settled on a D-Link DNS-321 with a Western Digital 2 TByte hard disk for a mere $200.

The D-Link DNS-321 (or its cousins DNS-323 and CH3SNAS) is marketed as a Network Attached Storage (NAS) solution.  It features two drive bays, a little ARM processor, 64 MByte of memory and a GigE interface.  There is an active support community for tweaking this box at the wiki [dead link to dns323.kood.org] and the forum dsmg600.info.

This NAS will be reshaped to serve as a backup server for the Linux and Windows machines around my house and my web hosting service.  My requirements:

  • backups are pulled from the client machines running a variety of operating systems
  • plain files on a standard file system (ext3)
  • multilevel rotating snapshots (daily, weekly and monthly)
  • snapshots should be read-only accessible using CIFS
  • snapshots should be spread over two physical drives
  • efficient use of disk space
  • preserve file metadata, such as dates
  • an interrupted backup shouldn’t force the subsequent backup to start all over again
  • initial backup of 1 TByte should finish in a week

This document describes how to make the DNS-321 a full fledged Linux server.  It continues by reviewing several popular backup tools and their configuration.

Adding the usual Linux bells and whistles

Under the hood, the DNS-321 runs Linux and is very easy to expand.  It almost begs to be used as a plain Linux box.  So that is what we will do, but first things first: we need to configure the box and format the hard drives.

If you are using an advanced format drive (4k sectors) such as the Western Digital EARS, you need to manually partition as described in this article.
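The reason alignment matters: a 4k-sector drive emulates 512-byte sectors, so a partition that does not start on a multiple of 8 logical sectors straddles physical sectors, and every write becomes a read-modify-write. A quick check, using a hypothetical `aligned` helper and the start sectors from the parted example later in this document:

```shell
# a partition is 4k-aligned when its start (in 512-byte sectors) divides by 8
aligned() { [ $(( $1 % 8 )) -eq 0 ] && echo aligned || echo misaligned ; }

aligned 64    # → aligned    (start sector of the swap partition below)
aligned 63    # → misaligned (the classic msdos default, straddles 4k sectors)
```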

Basic setup

Get the DNS-321 up and running with a hard disk in the right bay.

  1. Power-on
  2. Upgrade the firmware to 1.02 or later (to get ext3 file system support)
  3. Shutdown
  4. Slide the front panel up, and insert a SATA hard disk in the right bay
  5. Power-on
  6. Connect to it using a web browser. The address is http://dlink-xxxxxx, where xxxxxx are the last 6 digits of the MAC address. The user name is admin with no password.
  7. Finish the configuration in the web browser.
    1. configure the hard disk as standard ext3
    2. setup > device > workgroup = your workgroup name
    3. setup > device > name = backup
    4. setup > device > description = D-link DNS-321 Backup Server
    5. [save settings]
    6. tools > admin password > user name = admin
    7. tools > admin password > password = yourpassword
    8. [save settings]
    9. tools > time > time zone = pacific time
    10. tools > time > server = time.vonk
    11. [save settings]
    12. tools > email alerts > user name = username
    13. tools > email alerts > password = youremailpassword
    14. tools > email alerts > SMTP server = smtp.gmail.com
    15. tools > email alerts > sender email = youremailaddr
    16. tools > email alerts > receiver email = youremailaddr
    17. [save settings]
  8. Verify that the disk can be accessed from a Windows client, e.g. as \\backup\volume_1

Extend the functionality

The boot sequence is:

  1. The boot loader loads the Linux kernel and the rootfs into a ramdisk.
  2. Linux starts the /etc/inittab script, that in turn calls /etc/rc.sh. This does the usual housekeeping and mounts the main hard drive partition on /mnt/HD_a2
  3. It then calls /mnt/HD_a2/fun_plug.

Adding functionality to this box is straightforward using the popular Fonz’ fun_plug:

  • Copy the fun_plug script and the corresponding fun_plug.tgz archive to \\backup\volume_1 using CIFS/SMB (or FTP).
  • Reboot by either
    • holding the front button for 5 seconds, or
    • using the web interface Tools > System > Restart.

This allows the boot sequence to continue:

  1. On first invocation, this fun_plug script extracts the tar-ball to /mnt/HD_a2/ffp;
  2. it creates a symbolic link /ffp to /mnt/HD_a2/ffp.
  3. /ffp/etc/rc starts the daemons in /ffp/start.

Secure Shell (SSH)

Out of the box, Fonz’ fun_plug (ffp) has a Telnet daemon enabled with no password.  The first time you install the fun_plug it will take about 5 minutes before the telnetd is up.  Use this service to access the box, and set the root password.

[user@linux]$ telnet backup
    passwd                       # set the root password
    mkdir -p /ffp/home/root      # make home directory
    pwconv                       # convert passwd to shadow (ignore error msgs)
    login                        # test by logging in as root
    exit                         # exit from login shell

Enable SSH (details at nas-tweaks.net). First, move root’s home directory to non-volatile storage (these two steps need to be repeated after installing new firmware).

[root@backup]$                    # while still telnet'ed in ..
    store-passwd.sh               # copy to flash (/dev/mtdblock0, /dev/mtdblock1)
    chmod a+x /ffp/start/sshd.sh
    sh /ffp/start/sshd.sh start   # generate the host key pair for the box
    exit                          # from initial telnet

Enable SSH public key authentication (details at nas-tweaks.net)

[user@linux]$ ssh root@backup     # make sure your ~/.ssh/config doesn't interfere
    cd /ffp/etc/ssh
    mv sshd_config sshd_config~ ; sed < sshd_config~ 's/^#Pubkey/Pubkey/' > sshd_config
    mkdir ~/.ssh
    cd ~/.ssh
    scp user@linux:~/.ssh/id_rsa .   # generated using ssh-keygen -t rsa
    scp user@linux:~/.ssh/id_rsa.pub .
    cp id_rsa.pub authorized_keys
    chmod -R go-rwx .
    exit  # from ssh

Login again:

[user@linux]$ ssh root@backup             # should not ask for a passwd this time
    chmod -R go-rwx /ffp/start/telnetd.sh # disable telnet daemon
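The chmod -R go-rwx step matters because sshd silently ignores key files that are group- or world-accessible. A quick local sanity check of what that chmod does to a private key (using a scratch directory, not the real /ffp path):

```shell
# create a scratch key file, then strip group/other permissions,
# mirroring the "chmod -R go-rwx" step above
mkdir -p /tmp/sshdemo/.ssh
echo "dummy-key" > /tmp/sshdemo/.ssh/id_rsa
chmod -R go-rwx /tmp/sshdemo/.ssh
stat -c %a /tmp/sshdemo/.ssh/id_rsa   # → 600, strict enough for sshd
rm -rf /tmp/sshdemo
```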

Create a dedicated “backup” user

The backup will run as user backup.

[user@linux]$ ssh root@backup
    useradd -c "Rsync/SSH based backup" -m -d /ffp/home/backup -s /ffp/bin/sh -r backup
    passwd backup         # set Linux passwd
    smbpasswd -A backup   # add a samba password while we are at it
    store-passwd.sh       # copy password files to non-volatile storage (/dev/mtdblock[01])
    touch /ffp/start/fixhomes.sh  # see below for content
    chmod +x  /ffp/start/fixhomes.sh
    mkdir -p /ffp/home/backup/.ssh
    cp -R ~root/.ssh/* /ffp/home/backup/.ssh  # use the same certificates as root
    chmod -R go-rwx /ffp/home/backup/.ssh
    mkdir -p /mnt/HD_a2/backup
    chown -R backup /ffp/home/backup /mnt/HD_a2/backup

For some odd reason, the NAS boot sequence resets some of the home directories in /etc/passwd.  I work around this using a little script /ffp/start/fixhomes.sh.  Remember to give the script execute permissions (chmod +x).

# during boot, the home directory gets reset; workaround this
/ffp/sbin/usermod -d /ffp/home/backup backup 2>/dev/null
# set the time zone while we're at it
echo "PST8PDT" > /etc/TZ

In case you are going to use rsync authentication on the clients, store the clients’ rsync password in a file:

    cp /dev/tty /ffp/home/backup/.rsync.passwd   # type the password, end with ^D
    chmod 600 /ffp/home/backup/.rsync.passwd

Add more packages

Install whatever pre-compiled packages we can get our hands on.

[user@linux]$ ssh root@backup
    # the bare minimum for the tool chain would be
    #   gcc-4.1-2.tgz uclibc-0.9.29-7.tgz kernel-headers-
    #   binutils- make-3.81-3.tgz distcc-3.0-1.tgz
    mkdir -p /ffp/home/backup/packages
    cd /ffp/home/backup/packages
    rsync -avP inreto.de::dns323/fun-plug/0.7/oabi/packages/* .
    funpkg -i *.txz

Review of backup tools

Now that the box has the usual Linux bells and whistles, it is time to start sifting through the multitude of backup solutions. The table below shows how each tool scores against our requirements.

                         rdiff-backup        rsnapshot          BackupPC            snapback2   backup-using-rsync
server pulls backups     yes                 yes                yes                 yes         yes
multi-level snapshots    yes                 yes                yes                 yes         yes
plain files on disk      only last backup    all                no                  all         all
spreads over disks       no                  no                 no                  no          yes
preserve metadata        yes                 no                 ?                   no          no
efficient use of disk    very                yes                yes                 yes         yes
continues after failure  starts all-over     yes                ?                   yes         yes

Additional info
reporting                statistics only     -                  yes, web interface  -           yes, using logwatch
throughput               10 Mbit/s           33 Mbit/s          14 Mbit/s           33 Mbit/s   10 Mbit/s
                         (rsynclib/ssh)      (rsyncd)           (rsyncd)            (rsyncd)    (rsync/ssh)
web interface            premature           no                 yes, nice           no          no
language                 python and C        perl               perl                bash        perl
transport mechanism      rsynclib/ssh        rsyncd, rsync/ssh  smb, tar/ssh,       rsyncd      rsync/ssh
                                                                rsyncd, rsync/ssh
delete random snapshots  only before a date  yes                yes                 yes         yes
last update              2009-03             2005-01            2007-11             2005-03     2007-08
issues with ‘ ‘-char in filenames


  • I use one of my computers for HD video production. With 1 TByte of data to protect, an initial backup at 10 Mbit/s would take about 12 days!
  • For the initial backup, I save time by connecting the hard drive from the NAS to the client computer using eSATA. I use the Ext2/Ext3 file system driver for Windows to copy the files over. (Remember to activate the service if you do this.)
  • Over time, I will add another hard disk, so that daily backups can alternate between the two disks. This doubles the number of snapshots and makes the setup nearly as reliable as RAID1 without the hassle of rebuilding a RAID.  Note that the error rate for TByte drives is about 3% in the first years, but climbs steeply in the years following [ref].
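The 12-day estimate follows directly from the throughput numbers in the table above; a back-of-the-envelope sketch (the helper name is made up for illustration):

```shell
# pure transfer time for a full backup, ignoring protocol overhead
days_for_full_backup() {   # $1 = TBytes of data, $2 = throughput in Mbit/s
    echo $(( $1 * 1000000000000 * 8 / ( $2 * 1000000 ) / 86400 ))
}

days_for_full_backup 1 10   # → 9 (days of raw transfer; ~12 in practice)
days_for_full_backup 1 33   # → 2 (days, at rsyncd speeds)
```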

Some history:

  1. Art Mulder pioneered the use of hard links and rsync in his initial script (2001).
  2. Mike Rubel built upon this and created rsync_snapshots (2001-2004)
  3. From there it forked into three:
    • Mike Heins’ snapback2 (2004)
    • Nathan Rosenquist’s rsnapshot (2003-2005)
    • Coert Vonk’s (the author of this document) backup-using-rsync (2003-2005).
  4. Ben Escoto took a different approach and built rdiff-backup upon the rsync library (2001-2009)
  5. Craig Barratt took yet another approach with BackupPC that detects identical files in different backups (2001-2007)

The remainder of this document describes the installation and configuration of these backup tools. In doing so, it refers to the following machines:

  • backup.vonk is the brand new shiny D-Link DNS-321
  • ws.vonk is a Windows 8 system (or any other client system)
  • linux.vonk is a Fedora Linux development system

Choosing the ideal backup tool depends on your requirements. For what it is worth, I still use my own backup-using-rsync.  I could polish it further, but I write bash and perl scripts to create solutions; scripting itself is not my passion.

This article describes my old but proven script.  Detailed descriptions of alternate methods can be found in Automated backup using D-Link DNS-321 details.

My old but proven script, backup-using-rsync

Many years ago, I stumbled across Art Mulder’s snapback script. At the time, my backup solution lacked a bash shell, so I massaged the script to run under ash. I also extended the snapshot rotation mechanism, and made it more robust against interrupted backups. Like so many other tools, it uses “snapshot”-style backups with hard links to create the illusion of multiple, full backups without much of the space or processing overhead.
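The hard-link trick can be seen with plain GNU cp -al, the same command the script uses internally: the “copy” shares inodes with the original, so unchanged files cost no extra disk space. A minimal sketch in a scratch directory:

```shell
# snapshot-style copy: directories are created anew, files are hard-linked
mkdir -p /tmp/snapdemo/daily.1
echo "unchanged file" > /tmp/snapdemo/daily.1/file
cp -al /tmp/snapdemo/daily.1 /tmp/snapdemo/daily.0
stat -c %h /tmp/snapdemo/daily.0/file   # → 2 (both snapshots share one inode)
# rsync unlinks before writing, so daily.1 keeps the old content when daily.0 changes
rm -rf /tmp/snapdemo
```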

Configure the server

Create some place holders for the scripts and configuration files.

[user@linux]$ ssh root@backup
    mkdir /ffp/etc/backup
    touch /ffp/bin/backup
    touch /ffp/bin/backup-using-rsync
    chown backup:backup /ffp/etc/backup /ffp/bin/backup /ffp/bin/backup-using-rsync

From the backup account (ssh backup@backup), create /ffp/bin/backup, and give it execute-permissions (chmod 755)

# GPL $Id$
# note: the paths below follow the setup earlier in this document
LOGGER=/ffp/bin/logger
LS=/ffp/bin/ls
DF=/ffp/bin/df
AWK=/ffp/bin/awk
CONFIG_DIR=/ffp/etc/backup
BACKUP_DIR=/mnt/HD_a2/backup
PASSWD_FILE=/ffp/home/backup/.rsync.passwd

echo "start cron job" | $LOGGER -s -t backup

echo "Starting $0 ..."
echo "Starting $0 ..." | $LOGGER -s -t backup

# every file <host>/<module> under $CONFIG_DIR names a client and module to
# back up; the file itself holds the exclude patterns for that module
IFS=$(echo -en "\n\b")
FILES=`( cd ${CONFIG_DIR} ; $LS --quoting-style=shell */* )`
for node in $FILES ; do
    echo ${node}
    /ffp/bin/backup-using-rsync \
        --password-file=${PASSWD_FILE} \
        --exclude-from=${CONFIG_DIR}/${node} \
        $* \
        ${node} \
        ${BACKUP_DIR}/${node} 2>&1 | $LOGGER -s -t backup
done

$DF -h  ${BACKUP_DIR} 2>&1 | $LOGGER -s -t backup
$DF -hi ${BACKUP_DIR} 2>&1 | $LOGGER -s -t backup
( cd ${BACKUP_DIR} ; $LS -dl --time-style=long-iso */*/* | $AWK '{ printf("stored backups: %08s %s\n", $6, $8) }' | $LOGGER -s -t backup )

echo "Done $0 .."

From the backup account, create /ffp/bin/backup-using-rsync, and give it execute-permissions (chmod 755)

# GPL $Id$
# ----------------------------------------------------------------------
# rotating-filesystem-snapshot utility using 'rsync'
# inspired by http://www.mikerubel.org/computers/rsync_snapshots
# ----------------------------------------------------------------------
# probably still runs under /ffp/bin/ash if you want ..

#set -o nounset  # do not allow uninitialized variables
#set -o errexit  # exit if any statement returns an error value

# ------------- file locations -----------------------------------------

SNAPSHOT_DEV=/dev/sda2    # device and mount point holding the snapshots
SNAPSHOT_DIR=/mnt/HD_a2
LOCKFILE=/mnt/HD_a2/ffp/home/backup/`basename $0`.pid

# ------------- system commands used by this script --------------------

ECHO=/ffp/bin/echo
DATE=/ffp/bin/date
SED=/ffp/bin/sed
CUT=/ffp/bin/cut
AWK=/ffp/bin/awk
PS=/ffp/bin/ps
RSYNC=/ffp/bin/rsync

# after parsing the command line parameters, the following commands
# will be prefixed with $DRY (see the end of "main" below)

# ------------- other local variables ----------------------------------

PROGRAM=`basename $0`
USAGE="Usage: $PROGRAM [--parameters] SRC DST
    --verbose              - increase verbosity
    --quiet                - decrease verbosity
    --exclude=PATTERN      - exclude files matching PATTERN
    --exclude-from=FILE    - exclude patterns listed in FILE
    --include-from=FILE    - don't exclude patterns listed in FILE
    --dry-run              - do not start any file transfers,
                             just report the actions it would have taken
    --remove-last-daily    - remove the last backup
    --version              - show revision number
example:
    $PROGRAM --verbose --exclude-from=/ffp/etc/backup/hostname/module rsync://hostname/module $SNAPSHOT_DIR/hostname/module"

# ------------- the script itself --------------------------------------

usage() {
    $ECHO "$USAGE"
}

case "$1" in
    --help | -h)
        usage
        exit 0
        ;;
    --version)
        REVISION=`$ECHO '$Revision 0.1$' | tr -cd '0-9.'`
        $ECHO "$PROGRAM version $REVISION"
        exit 0
        ;;
esac

# ------ print the error message to stderr, and remount r/o-------------

die() {
    $ECHO "$PROGRAM: $*"
    $ECHO "use '$PROGRAM --help' for help"
    #$MOUNT -t ext3 -o remount,ro $SNAPSHOT_DEV $SNAPSHOT_DIR
    exit 1
}

# ------ execute a command, and exit on error --------------------------

checkExit() {
    $* || die "ERROR: $*"
}

# ----- remove the lockfile if the process that created it died --------

removeOldLock() {
    if [ -e ${LOCKFILE} ] ; then
        a=`cat ${LOCKFILE}`
        if ! `$PS | $AWK "\\$1 == \"$a\" { exit 1 }"` ; then
            $ECHO "$PROGRAM:isLocked: WARNING cleaning old lockfile"
            rm -f $LOCKFILE
        fi
    fi
}

isLockedOBSOLETE() {
    if [ ! -e $LOCKFILE ] ; then
        return 0
    fi

    # if the process that created the lock is dead, then cleanup its lockfile
    a=`cat ${LOCKFILE}`
    if ! `$PS | $AWK "\\$1 == \"$a\" { exit 1 }"` ; then
        $ECHO "$PROGRAM:isLocked: WARNING cleaning old lockfile"
        rm -f $LOCKFILE
        return 0
    fi

    return 1
}

# ------- cleanup TERM, EXIT and INT traps -----------------------------

cleanup() {
    trap - INT TERM EXIT

    if [ -e $LOCKFILE ] ; then
        if [ "$$" = "$LOCKFILE_PROCID" ] ; then
            $RM -f $LOCKFILE
        else
            $ECHO "$PROGRAM: Can't remove lockfile ($LOCKFILE)"
            $ECHO "(process $LOCKFILE_PROCID created the lock, while this process is $$)"
        fi
    fi
    exit $1
}

# ----- print to stdout when the debug level $VERB >= $1 ---------------

verbose() {
    local LEVEL="$1"
    [ ! -z "$LEVEL" ] || die "verbose: unspecified LEVEL"

    if [ $VERB -ge $LEVEL ] ; then
        shift
        echo "$PROGRAM: $*"
    fi
}
# ------ prints directory, if debug level $VERB >= $1 ------------------

verbose_ls() {
    [ $VERB -lt $1 ] || ( shift ; ls -l $*/ )
}

# --- returns 0 if rsyncd is running on host $1, 1 otherwise -----------

rsyncRunningOnRemote() {
    local SOURCE="$1"
    local HOSTNAME

    [ ! -z "$SOURCE" ] || die "rsyncRunningOnRemote: unspecified source"

    if [ -z "$SSH" ] ; then
        HOSTNAME=`$ECHO $SOURCE | $CUT -d/ -f3`:
    else
        HOSTNAME=`$ECHO $SOURCE | $CUT -d: -f1`
    fi
    echo $HOSTNAME >&2
    if $RSYNC $SSH $PWDFILE $HOSTNAME: 2>/dev/null >/dev/null ; then
        return 0
    fi
    return 1
}

# ------ returns the name of the oldest daily/weekly backup directory --

findOldest() {
    local TYPE="$1"
    local ALL_DAILY
    local OLDEST_DAILY

    [ ! -z "$TYPE" ] || die "findOldest: unspecified duration {daily|weekly}"

    ALL_DAILY=`ls -d -r $DST/$TYPE.* 2>/dev/null`
    OLDEST_DAILY=`$ECHO $ALL_DAILY | $SED "s,^$DST/,," | $CUT -d' ' -f1`

    echo $OLDEST_DAILY
}

# ----- returns 0 if weekly backup should be made, 1 otherwise ---------

shouldMakeWeeklyBackup() {
    local OLDEST_DAILY

    OLDEST_DAILY=`findOldest daily`

    # ran out of inodes? uncomment the next line to skip weekly backups
    #return 1

    # no point in making a weekly backup, if there is no daily one
    if [ -z $OLDEST_DAILY ] ; then
        return 1
    fi

    # only make a weekly backup if the oldest daily backup is at least 7 days old
    TODAY_DAY=`$DATE +%j | $SED 's/^0*//g'` # a leading 0 would mean octal
    TODAY_YEAR=`$DATE +%Y`
    OLDEST_DAILY_DAY=`$DATE -r $DST/$OLDEST_DAILY +%j | $SED 's/^0*//g'`
    OLDEST_DAILY_YEAR=`$DATE -r $DST/$OLDEST_DAILY +%Y`

    if [ $TODAY_YEAR -ne $OLDEST_DAILY_YEAR ] ; then
        let TODAY_DAY+="365 * ($TODAY_YEAR - $OLDEST_DAILY_YEAR)"
    fi

    DAY_OF_FIRST_WEEKLY=`expr $OLDEST_DAILY_DAY + 7`
    if [ $TODAY_DAY -lt $DAY_OF_FIRST_WEEKLY ] ; then
        verbose 2 "No weekly backup, $TODAY_DAY -lt $DAY_OF_FIRST_WEEKLY"
        return 1
    fi

    # make a weekly backup, if the last weekly backup was >= 14 days ago, or
    # there was no last weekly backup.
    TODAY_DAY=`$DATE +%j | $SED 's/^0*//g'`
    if [ ! -d $DST/weekly.0 ] ; then
        return 0
    fi
    LAST_WEEKLY_DAY=`$DATE -r $DST/weekly.0 +%j | $SED 's/^0*//g'`
    LAST_WEEKLY_YEAR=`$DATE -r $DST/weekly.0 +%Y`

    if [ $TODAY_YEAR -ne $LAST_WEEKLY_YEAR ] ; then
        let TODAY_DAY=$TODAY_DAY+365
    fi

    DAY_OF_NEXT_WEEKLY=`expr $LAST_WEEKLY_DAY + 14`
    if [ $TODAY_DAY -ge $DAY_OF_NEXT_WEEKLY ] ; then
        verbose 2 "Weekly backup, today($TODAY_DAY) -ge next($DAY_OF_NEXT_WEEKLY)"
        return 0
    else
        verbose 2 "No weekly backup, today($TODAY_DAY) -lt next($DAY_OF_NEXT_WEEKLY)"
        return 1
    fi
}

# ----- renumber the $1 {daily,weekly} backups, starting at $2 ---------

renumber() {
    local TYPE="$1"
    local START="$2"

    [ ! -z "$TYPE" ] || die "renumber: missing TYPE"
    [ ! -z "$START" ] || die "renumber: missing START"

    [ "$TYPE" = "daily" ] || [ "$TYPE" = "weekly" ] || die "renumber: incorrect TYPE"

    LIST=`ls -d $DST/$TYPE.* 2>/dev/null`
    for ITEM in $LIST ; do
        $MV $ITEM $ITEM.tmp
    done

    COUNT=$START
    for ITEM in $LIST ; do
        ITEM_NEW=$DST/$TYPE.$COUNT
        $MV $ITEM.tmp $ITEM_NEW
        let COUNT++
    done
}

# ----- create the backup ----------------------------------------------

backup() {
    local OLDEST_DAILY

    $MKDIR -p $DST || die "backup: $MKDIR -p $DST"

    verbose 2 "STEP 0: the status quo"
    verbose_ls 2 $DST

    if shouldMakeWeeklyBackup ; then

        verbose 2 "STEP 1: delete weekly.2 backup, if it exists"
        if [ -d $DST/weekly.2 ] ; then
            $RM -rf $DST/weekly.2
        fi
        verbose_ls 2 $DST

        verbose 2 "STEP 2: shift the middle weekly backup(s) back by one,"\
                  "if they exist"
        renumber weekly 1
        verbose_ls 2 $DST

        OLDEST_DAILY=`findOldest daily`

        verbose 2 "STEP 3: make a hard-link-only (except for dirs) copy of"\
                  "$OLDEST_DAILY, into weekly.0"
        if [ -d $DST/$OLDEST_DAILY ] ; then
            $CP -al $DST/$OLDEST_DAILY $DST/weekly.0
        fi
        verbose_ls 2 $DST

        # note: do *not* update the mtime of weekly.0; it will reflect
        # when daily.7 was made, which should be correct.
    else
        verbose 2 "STEP 1: no weekly backup needed, skipping STEP 2 and 3"
    fi

    verbose 2 "STEP 4: delete daily.7 backup, if it exists"
    if [ -d $DST/daily.7 ] ; then
        $RM -rf $DST/daily.7
    fi
    verbose_ls 2 $DST

    verbose 2 "STEP 5: shift the middle backup(s) back by one, if they exist"
    renumber daily 1
    verbose_ls 2 $DST

    verbose 2 "STEP 6: make a hard-link-only (except for dirs) copy of the"\
              "latest backup, if that exists"
    if [ -d $DST/daily.1 ] ; then
        $CP -al $DST/daily.1 $DST/daily.0
    else
        $MKDIR -p $DST/daily.0
        $CHMOD 755 $DST/daily.0
    fi
    verbose_ls 2 $DST

    verbose 2 "STEP 7: rsync from $SRC to $DST/daily.0"

    # (notice that rsync behaves like cp --remove-destination by default, so
    # the destination is unlinked first.  If it were not so, this would copy
    # over the other backup(s) too!)

    verbose 1 "$RSYNC $SSH $PWDFILE --archive --delete --delete-excluded $PARAM $SRC $DST/daily.0"
    verbose 0 "$SRC"

    echo ============================================================
    echo $DRY $RSYNC $SSH $PWDFILE --archive --delete --delete-excluded $PARAM $SRC $DST/daily.0
    echo ============================================================

    # --compress
    $DRY $RSYNC $SSH $PWDFILE --archive --delete --delete-excluded $PARAM $SRC $DST/daily.0

    verbose 1 "$RSYNC done"
    verbose 2 "STEP 8: update the mtime of daily.0 to reflect the backup time"

    $TOUCH $DST/daily.0

    # at the end of the week, the oldest daily backup becomes last week's
    # backup

    verbose_ls 2 $DST

    verbose 1 "STEP 9: done"
}

# ----- remove the last daily backup -----------------------------------

removeLastDaily() {
    verbose 2 "STEP 1: renumbering daily backups starting at ($DST/daily.0)"
    renumber daily 0

    verbose 2 "STEP 2: deleting the newest backup, if it exists"
    if [ -d $DST/daily.0 ] ; then
        $RM -rf $DST/daily.0

        verbose 2 "STEP 3: renumbering daily backups starting at ($DST/daily.0)"
        renumber daily 0
    fi
}

# ----- remount the file system ----------------------------------------

remount() {
    local MOUNT_MODE="$1"
    [ ! -z "$MOUNT_MODE" ] || die "remount, missing MOUNT_MODE"

    $MOUNT -t ext3 -o remount,$MOUNT_MODE $SNAPSHOT_DEV $SNAPSHOT_DIR
}

# ------------- main ---------------------------------------------------


VERB=0
PARAM=""
SRC=""
DST=""
DRY=""
SSH=""
PWDFILE=""
REMOVE_LAST_DAILY=""

while [ -n "$1" ] ; do
    case $1 in
        --verbose)
            let VERB++
            PARAM="$PARAM $1"
            ;;
        --quiet)
            [ $VERB -eq 0 ] || let VERB--
            ;;
        --help | -h)
            usage
            exit 1
            ;;
        --dry-run)
            DRY="echo"
            ;;
        --remove-last-daily)
            REMOVE_LAST_DAILY=1
            ;;
        --password-file=*)
            PWDFILE=$1
            ;;
        --exclude=* | --exclude-from=* | --include-from=*)
            PARAM="$PARAM $1"
            ;;
        *)
            if [ -z $SRC ] ; then
		# is SRC on the LAN?
	        HD=backup.vonk ; HD=${HD##*.}  # the box doesn't have "hostname -f"
	        SD=${1%%/*}    ; SD=${SD##*.}
	        if [[ ${SD} == ${HD} ]] ; then
                  # use the rsync protocol to backup hosts on the LAN
                  PARAM="$PARAM --chmod=u=rwx"	# make everything accessible
                  SRC=rsync://$1/
                  echo RSYNC PARAM=$PARAM SRC=$SRC
                else
              	  # use rsync over SSH to backup remote hosts
              	  # assume that ~/.ssh/config contains the connection info
              	  #   such as port# and keys to use.
                  SSH="-e '/ffp/bin/ssh'"
                  PWDFILE=""			# ignore "--password-file"
                  SRC=${1/\//:}/		# replace / with :
                  echo RSYNCoSSH SSH=$SSH SRC=$SRC
                fi
            elif [ -z $DST ] ; then
                DST=$1
            else
                die "ignoring parameter '$1'"
            fi
            ;;
    esac
    shift
done
RSYNC_VERS=`$RSYNC --version | $AWK '$1 == "rsync" && $2 == "version" { print $3 }'`

[ ! -z $SRC ] || die "source not specified"
[ ! -z $DST ] || die "destination not specified"

# [ `id -u` = 0 ] || die "only root can do that"

trap 'cleanup' TERM EXIT INT  # catch kill, script exit and ^C

echo testing for lock
if [ -z $DRY ] ; then
    mkdir -p /var/lock

    echo removing old lock
    removeOldLock

    echo creating new lock
    if ( set -o noclobber ; echo "$$" > $LOCKFILE ) 2> /dev/null ; then
        LOCKFILE_PROCID="$$"
        trap 'cleanup' TERM EXIT INT  # clean up lockfile at kill, script exit or ^c
        echo got the lock
    else
        die "Failed to acquire lock: $LOCKFILE held by $(cat $LOCKFILE)"
    fi
fi

MOUNT="checkExit $DRY /ffp/bin/mount"
MKDIR="checkExit $DRY /ffp/bin/mkdir"
CHMOD="checkExit $DRY /ffp/bin/chmod"
RM="checkExit $DRY /ffp/bin/rm"
MV="checkExit $DRY /ffp/bin/mv"
CP="checkExit $DRY /ffp/bin/cp"
TOUCH="checkExit $DRY /ffp/bin/touch"

verbose 2 "Backup '$SRC' -> '$DST'"
verbose 2 "parameters: '$PARAM'"

RET=0
if [ ! -z $REMOVE_LAST_DAILY ] ; then
    removeLastDaily
    exit 0
fi

if rsyncRunningOnRemote $SRC ; then
    remount rw
    backup
    remount ro
else
    $ECHO "RSYNC daemon not running on '$SRC'"
    RET=1
fi

exit $RET

The configuration file names indicate which machines should be backed up. The contents of each file dictate what files should be excluded from the backup. For example, the file /ffp/etc/backup/ws.vonk/users tells the /ffp/bin/backup script that module “users” on client “ws.vonk” should be backed up.

Create a configuration file: /ffp/etc/backup/ws.vonk/users

- /User Name/Virtual Machines/
- /User Name/Documents/.emacs.d/auto-save-list/
- /User Name/Documents/.bash_history
- /User Name/Documents/Other/Not Backed Up/
- /User Name/Videos/Layout/releases/*/*.iso

Create another configuration file: /ffp/etc/backup/ws.vonk/settings

- /Default/
- /Default User/
- /HelpAssistant/
- /LocalService/
- /NetworkService/
- /All Users/Microsoft/Windows Defender/
- /All Users/Microsoft/Search/
- /All Users/Microsoft/RAC/
- /All Users/Microsoft/eHome/
- /All Users/Microsoft/Windows NT/MSFax/
- /*/ntuser.dat*
- /*/AppData/
- /*/Application Data/
- /*/Local Settings/
- /*/My Documents/
- /*/NetHood/
- /*/PrintHood/
- /*/Recent/
- /*/Searches/

The backup script relies on syslog to communicate with the outside world. Enable the syslog daemon; the syslog server configuration is described further down.

cd /ffp/start/
chmod a+x syslogd.sh
vi syslogd.sh
# add the line: syslogd_flags="-m 0 -R linux"
# change the line: klogd_flags="-c 8" # was -c 3

./syslogd.sh start

parted (fun-plug 0.5 only)

For fun-plug, refer to inreto.de.

This section is for those who want to manually create the partitions:

  • insert the disk in the right bay, and release it (stop the services that use it)
    • smb stop ; nfs stop
    • umount /mnt/HD_b?
    • swapoff /dev/sdb1
  • get parted
    • wget http://www.inreto.de/dns323/fun-plug/0.5/packages/parted-1.8.8-1.tgz
    • funpkg -i parted-1.8.8-1.tgz # install parted funpack
  • partition (see forum)
    • parted
      • (parted) mklabel msdos
      • (parted) unit s
      • (parted) p
        • Model: WDC WD30EZRX-00MMMB0 (scsi)
          Disk /dev/sdb: 5860533168s
          Sector size (logical/physical): 512B/512B
          Partition Table: msdos
          #    Start         End        Size    Type  File system
          1      64s    1060287s    1060224s primary  linux-swap
          4 1060288s    3164799s    2104512s primary  ext3
          2 3164800s 4294967294s 4291802495s primary  ext3
      • mkswap /dev/sdb1
      • mke2fs -j -m 0 -T largefile4 /dev/sdb2 # or specify the bytes per inode
      • mke2fs -j /dev/sdb4
      • hd_verify -w
      • reboot # move the disk from the /dev/sdb to /dev/sda location
      • df -h ; df -hi
      • sudo /ffp/sbin/tune2fs -l /dev/sda2

A word on inodes: the number of inodes on an ext3 file system can only be changed by reformatting the partition. As a worst case, assume each daily and weekly backup uses an inode for every file (files change between snapshots, and directories are never hard-linked). For example, if the systems being backed up have an average file size of 7 MByte, and the backup device has a 2 TByte disk and keeps 7 daily and 3 weekly backups, then you need about 2T/7M*(7+3-1), almost 3M inodes. That corresponds to about 800,000 bytes per inode. Note that GPT partitions are not supported by this kernel version.
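The arithmetic above, spelled out (numbers match the example: 2 TByte disk, 7 MByte average file size, 7 daily plus 3 weekly snapshots, with the oldest daily shared as weekly.0):

```shell
DISK=$(( 2 * 1000 * 1000 * 1000 * 1000 ))   # 2 TByte
FILE=$(( 7 * 1000 * 1000 ))                 # 7 MByte average file size
SNAPS=$(( 7 + 3 - 1 ))                      # daily + weekly, minus the shared one

INODES=$(( DISK / FILE * SNAPS ))
echo "$INODES"                  # → 2571426, i.e. almost 3M inodes
echo "$(( DISK / INODES ))"     # → 777778, i.e. about 800,000 bytes per inode
```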

Install on the client

Install rsyncd on the client system as described in “Prepare the client::rsyncd” earlier in this document.

Run some tests, and start a backup

[backup@backup]$ rsync rsync://ws.vonk/users/     # test rsync (requires passwd)

    backup --verbose --verbose
    # meanwhile ..
    tail -f /var/log/messages

Schedule a daily backup

Create the file /ffp/start/backup.sh and give it execute permission; it adds the backup job to the crontab each time the DNS-321 boots.

# note: the crontab path below is an assumption; adjust for your firmware
CRONTAB=/usr/bin/crontab
CRON=/tmp/crontab.$$

# set the time zone, while we are at it
echo "PST8PDT" > /etc/TZ

# schedule backup to run every day at 12:15
# (during daylight saving time, commands appear to execute 1 hour earlier)
# work around "initgroups: operation not permitted"
chmod 4755 ${CRONTAB}
${CRONTAB} -u backup -l > ${CRON}
/bin/echo "15 12 * * * /ffp/bin/backup --verbose" >> ${CRON}
${CRONTAB} ${CRON} -u backup
/bin/rm ${CRON}

This first time, run the script by hand to add the backup to the crontab:

[root@backup]$ /ffp/start/backup.sh

Backing up remote files

I recently made some changes to backup-using-rsync to use rsync over SSH for remote sites. This is what I use to back up my web site. On the web site itself, I use a script to dump the databases to files so that they become part of the backup.

# BKBASE and FILESTOKEEP are site specific; the values shown are examples
BKBASE=${HOME}/backups
FILESTOKEEP=7

FNBASE=${BKBASE}/`date +%Y-%m-%d`
echo ${FNBASE}
for ii in mydbase1 mydbase2 mydbase3 ; do
    echo "${FNBASE}_${ii}"
    mysqldump -u myname_${ii} --password=mypasswd myname_${ii} | gzip > ${FNBASE}_${ii}.sql.gz
    # keep only the newest $FILESTOKEEP dumps of each database
    ls -c1 ${BKBASE}/*${ii}.sql.gz | tail --lines=+$((FILESTOKEEP+1)) | xargs rm 2>/dev/null
done
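The tail --lines=+N idiom above deletes all but the newest dumps. A sketch with dummy files (this demo sorts by mtime with ls -t, where the script above sorts by ctime with ls -c, since ctime can't be set for a demo):

```shell
mkdir -p /tmp/keepdemo
cd /tmp/keepdemo
# four dumps with increasing dates
for i in 1 2 3 4 ; do
    touch -d "2020-01-0$i" dump$i.sql.gz
done
# list newest first, skip the first 2, delete the rest
ls -t *.sql.gz | tail --lines=+3 | xargs rm
ls *.sql.gz     # → dump3.sql.gz dump4.sql.gz
cd / ; rm -rf /tmp/keepdemo
```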

Configure the Linux Syslog server

We will use logwatch as included in the Fedora distribution.  Logwatch is set up to run once a day and generates a single email with the backup log analysis. To let logwatch check the backup-using-rsync logs on a Fedora (or other Linux) system, install the script and configuration files below.

On the syslog server, define which log files should be analyzed (vi /etc/logwatch/conf/logfiles/backup.conf)

# GPL $Id$
# defines which log files should be analyzed for service backup

LogFile = messages
Archive = messages-*

On the syslog server, define the service (vi /etc/logwatch/conf/services/backup.conf).

# GPL $Id$
# defines the backup service for logwatch

Title = "BACKUP script (backup)"

LogFile = messages
*OnlyService = backup
*OnlyHost = backup
*RemoveHeaders =

On the syslog server, create the logwatch script (vi /etc/logwatch/scripts/services/backup), and give it execute permission.

# GPL $Id$
# logwatch script for service backup

# example:
#  export show_only_server=ws.vonk
#  logwatch --archives --range yesterday \
#           --hostname backup.vonk --service backup --mailto root

$ShowOnlyServer    = $ENV{'show_only_server'}    || "";
$ShowSuccess       = $ENV{'show_successful'}     || 1;
$ShowFailed        = $ENV{'show_failed'}         || 1;
$ShowIOerror       = $ENV{'show_io_error'}       || 1;
$ShowVanishedFiles = $ENV{'show_vanished_files'} || 1;
$ShowFailedFiles   = $ENV{'show_failed_files'}   || 1;
$ShowDiskFree      = $ENV{'show_disk_free'}      || 1;
$ShowStored        = $ENV{'show_stored'}         || 1;
$ShowUnmatched     = $ENV{'show_unmatched'}      || ( $ShowOnlyServer eq "" );

sub showServer {
    my($server) = @_;
    return ( length($ShowOnlyServer) == 0 or ( $ShowOnlyServer eq $server ) );
}

while (defined($ThisLine = <STDIN>)) {

    if ( ($Server,$Service) =
         ($ThisLine =~ /RSYNC daemon not running on \'rsync:\/\/(.*?)\/(.*?)\'/i ) ) {
        if ( showServer($Server) ) {
            $Failed->{$Server}->{$Service}++;
        }
    } elsif ( ($Server,$Service) =
              ($ThisLine =~ /backup-using-rsync: rsync:\/\/(.*?)\/(.*?)$/i ) ) {
        if ( showServer($Server) ) {
            $Success->{$Server}->{$Service}++;
        }
    } elsif ( ($FileName,$Service) =
              ($ThisLine =~ /file has vanished: \"(.*?)\" \(in (.*?)\).*$/i ) ) {
        if ( showServer($Server) ) {
            $VanishedFiles->{$Server}->{$Service}->{$FileName}++;
        }
    } elsif ( ($FileName,$Service) =
              ($ThisLine =~ /rsync: read errors mapping \"(.*?)\" \(in (.*?)\):.*$/i ) ) {
        if ( showServer($Server) ) {
            $FailedFiles->{$Server}->{$Service}->{$FileName}++;
        }
    } elsif ( ($ThisLine =~ /IO error encountered -- skipping file deletion/ ) ) {
        if ( showServer($Server) ) {
            $IOerror->{$Server}->{$Service}++;
        }
    } elsif ( ($Date,$Server,$Service,$Period) =
              ($ThisLine =~ /stored backups: (.*?) (.*?)\/(.*?)\/(.*?)$/i )) {
        if ( showServer($Server) ) {
            $StoredBackup->{$Server}->{$Service}->{$Period} = $Date;
        }
    } elsif ( ($ThisLine =~ /ERROR: file corruption in/ ) or
              ($ThisLine =~ /rsync error: some files could not be transferred/ ) or
              ($ThisLine =~ /rsync: failed to connect to nis.vonk/ ) or
              ($ThisLine =~ /rsync error: error in socket IO \(code 10\) at clientserver.c/ ) or
              ($ThisLine =~ /--help/ ) or
              ($ThisLine =~ /backup-using-rsync: ERROR:/ ) ) {
        # ignore

    } elsif ( ($ThisLine =~ /Filesystem/ ) or
              ($ThisLine =~ /\/dev\/md0/ ) ) {
        push @DiskFreeList,$ThisLine;

    } else {
        # Report any unmatched entries...
        push @OtherList,$ThisLine;
    }
}

if ($ShowSuccess) {
    if (keys %{$Success}) {
        print "\nSuccessful Backups:\n";
        foreach $Server (sort {$a cmp $b} keys %{$Success}) {
            foreach $Service (sort {$a cmp $b} keys %{$Success->{$Server}}) {
                print "\t" . $Server . "/" . $Service;
                $count = $Success->{$Server}->{$Service};
                if ( $count > 1 ) {
                    print " (" . $count . " times)";
                }
                print "\n";
            }
        }
    }
}

if ($ShowFailed) {
    if (keys %{$Failed}) {
        print "\nFailed Backups:\n";
        foreach $Server (sort {$a cmp $b} keys %{$Failed}) {
            foreach $Service (sort {$a cmp $b} keys %{$Failed->{$Server}}) {
                print "\t" . $Server . "/" . $Service;
                $count = $Failed->{$Server}->{$Service};
                if ( $count > 1 ) {
                    print " (" . $count . " times)";
                }
                print "\n";
            }
        }
    }
}

if ($ShowFailedFiles) {
    if (keys %{$FailedFiles}) {
        print "\nFiles skipped due to file locking:\n";
        foreach $Server (sort {$a cmp $b} keys %{$FailedFiles}) {
            foreach $Service (sort {$a cmp $b} keys %{$FailedFiles->{$Server}}) {
                print "\t" . $Server . "/" . $Service . "\n";
                foreach $FileName (sort {$a cmp $b} keys %{$FailedFiles->{$Server}->{$Service}}) {
                    print "\t\t";
                    my $len = length($FileName);
                    if ( $len > 40 ) {
                        print ".." . substr( $FileName, $len - 38, 38);
                    } else {
                        print $FileName;
                    }
                    $count = $FailedFiles->{$Server}->{$Service}->{$FileName};
                    if ( $count > 1 ) {
                        print " (" . $count . " times)";
                    }
                    print "\n";
                }
            }
        }
    }
}

if ($ShowIOerror) {
    if (keys %{$IOerror}) {
        print "\nOld files not deleted as a precaution for an IO error:\n";
        foreach $Server (sort {$a cmp $b} keys %{$IOerror}) {
            foreach $Service (sort {$a cmp $b} keys %{$IOerror->{$Server}}) {
                print "\t" . $Server . "/" . $Service;
                $count = $IOerror->{$Server}->{$Service};
                if ( $count > 1 ) {
                    print " (" . $count . " times)";
                }
                print "\n";
            }
        }
    }
}

if ($ShowVanishedFiles) {
    if (keys %{$VanishedFiles}) {
        print "\nFiles that vanished:\n";
        foreach $Server (sort {$a cmp $b} keys %{$VanishedFiles}) {
            foreach $Service (sort {$a cmp $b} keys %{$VanishedFiles->{$Server}}) {
                print "\t" . $Server . "/" . $Service . "\n";
                foreach $FileName (sort {$a cmp $b} keys %{$VanishedFiles->{$Server}->{$Service}}) {
                    print "\t\t";
                    my $len = length($FileName);
                    if ( $len > 40 ) {
                        print ".." . substr( $FileName, $len - 38, 38);
                    } else {
                        print $FileName;
                    }
                    $count = $VanishedFiles->{$Server}->{$Service}->{$FileName};
                    if ( $count > 1 ) {
                        print " (" . $count . " times)";
                    }
                    print "\n";
                }
            }
        }
    }
}

if ($ShowStored) {
    if (keys %{$StoredBackup}) {
        print "\nStored Backups:\n";
        foreach $Server (sort {$a cmp $b} keys %{$StoredBackup}) {
            foreach $Service (sort {$a cmp $b} keys %{$StoredBackup->{$Server}}) {
                print "\t" . $Server . "/" . $Service . "\n";
                foreach $Period (sort {$a cmp $b} keys %{$StoredBackup->{$Server}->{$Service}}) {
                    print "\t\t" . $StoredBackup->{$Server}->{$Service}->{$Period} .
                          " (" . $Period . ")\n";
                }
            }
        }
    }
}

if (($ShowDiskFree) and ($#DiskFreeList >= 0)) {
    print "\nDisk Space:\n\n";
    print @DiskFreeList;
}

if (($#OtherList >= 0) and ($ShowUnmatched)) {
    print "\n**Unmatched Entries**\n";
    print @OtherList;
}
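The patterns the script keys on are ordinary syslog lines written by backup-using-rsync. As a quick sanity check from the shell (the host and module names here are just examples), the success pattern can be exercised with grep:

```shell
# A "successful backup" syslog line as the script expects it; grep -oE
# extracts the rsync URL much like the Perl regex does.
line='Jan 10 02:00:01 backup backup-using-rsync: rsync://ws.vonk/users'
echo "$line" | grep -oE 'rsync://[^/ ]+/[^ ]+'
# prints: rsync://ws.vonk/users
```

If this does not match your actual log lines, adjust the regular expressions in the script accordingly.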


On the syslog server, make sure that sendmail forwards to your mail provider.  See this document for forwarding to gmail.

On the syslog server, try the logwatch report

logwatch --archives --hostlimit backup --service backup | less

On the syslog server, schedule a cronjob to execute logwatch for this service (as root, crontab -e)

0 2 * * * /usr/sbin/logwatch --archives --range yesterday --hostlimit backup --service backup --mailto you@hostname.com
0 1 * * * export show_only_server=ws.vonk ; /usr/sbin/logwatch --archives --range yesterday --service backup --hostlimit backup --mailto you@hostname.com

Prepare the client

The installation and configuration examples in this section focus on Windows 8, but can easily be adapted to other Windows flavors. Most Linux users will already have the programs installed, and can simply use the configuration examples listed below. I assume the same is true for OS X users.

MS Windows does not include native versions of the popular rsync, rsyncd or ssh. Depending on the backup tool used, we will have to install one or more of these programs. The backup tool installation notes (further down) will tell you which programs are needed on the client computer.


The rsyncd daemon runs as a service on the client and waits for the backup server to connect. I like to use it for the initial backup, because it is about three times as fast as rsync-over-ssh. The weak authentication and lack of encryption, however, make it unsuitable for backing up remote systems.

Remember to disable rsyncd when you switch over to the secure rsync/ssh setup (right-click Computer > Manage > Services and Applications > Services > RsyncServer, then stop and disable).

To install:

  1. download and install cwRsyncServer_4.0.2_Installer.zip to “C:\Program Files (x86)\rsyncd”
  2. create a new service account “backup”

Create a secrets file (notepad “C:\Program Files (x86)\rsyncd\rsyncd.secrets”). Not exactly a gem of authentication, but it beats a total lack of authentication.
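The secrets file holds one "user:password" pair per line, matching the "auth users" entry in rsyncd.conf. The password below is a placeholder; a sketch from a Unix-like shell (on Windows, just type the same line into notepad):

```shell
# rsyncd.secrets format: one "user:password" pair per line.
# "CHANGE_ME" is a placeholder password -- pick your own.
printf 'backup:CHANGE_ME\n' > rsyncd.secrets
cat rsyncd.secrets
# prints: backup:CHANGE_ME
```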


Configure rsyncd (notepad “C:\Program Files (x86)\rsyncd\rsyncd.conf”). Each exported path lives in its own [module] section; the module names below are examples, pick your own. For full restores, I like to have the option of a writable directory on the client. When using cwRsyncServer, you will need to prepare to allow the service to write to this directory (Start > cwRsyncServer > Prepare a Dir for Upload).

use chroot = false
strict modes = false
hosts allow = backup.vonk
auth users = backup
secrets file = rsyncd.secrets

[users]
path = /cygdrive/u/Users/
read only = yes
list = yes

[settings]
path = /cygdrive/c/Users/
read only = yes
list = yes

# [recovered]
# path = /cygdrive/u/Users/recovered/
# read only = no
# list = no


A far more secure approach is to run rsync over an ssh-encrypted tunnel. The sshd daemon runs as a service on the client and awaits connections from the backup server. Authentication uses public keys. From there on, rsync communicates over the ssh tunnel.

  1. Install copSSH
    • download copSSH 3.0.1 and install to “C:\Program Files (x86)\sshd\”
    • create a “backup” account, run as a service, do not generate user keys
  2. Copy rsync.exe from cwRsync_4.0.1_Installer.zip to “C:\Program Files (x86)\sshd\bin”

Copy the public key to the client

[backup@backup]$ scp ~/.ssh/id_rsa.pub backup@ws.vonk:.ssh/authorized_keys2
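This assumes the backup server already has a keypair in ~/.ssh. If not, generate one first; the empty passphrase (an assumption here) lets unattended cron backups use the key:

```shell
# Generate an RSA keypair without a passphrase so cron-driven backups can
# authenticate unattended; the private key never leaves the backup server.
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
```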

Create shortcuts to often-used directories

[backup@backup]$ ssh ws.vonk
ln -s /cygdrive/c/Users settings
ln -s /cygdrive/u/Users users



Another approach is to run rdiff-backup over an ssh-encrypted tunnel. Again, the sshd daemon waits for the server to connect, and authenticates using a public key. But in this case rdiff-backup communicates with its peer over the ssh tunnel.

  1. Install copSSH
    • download copSSH 3.0.1 and install to “C:\Program Files (x86)\sshd\”
    • create a “backup” account, run as a service, do not generate user keys
  2. Download rdiff-backup-1.2.8-win32.zip and copy the .exe to “C:\Program Files (x86)\sshd\bin\”

Copy the public key to the client

[backup@backup]$ scp ~/.ssh/id_rsa.pub backup@ws.vonk:.ssh/authorized_keys2

Notes / feedback / questions

  • Feedback and questions can be addressed through the DNS-323 forum.
  • The built-in fdisk dumps core. The forum discussion points to an alternate fdisk.
  • My Windows 7 system was leaking Nonpaged Kernel Memory. The typical error message was “rsync: writefd_unbuffered failed to write 4092 bytes to socket [sender]: No buffer space available (105)”.  This turned out to be caused by Sun/Oracle’s VirtualBox.  An upgrade of this software addressed the problem.
Coert Vonk

Independent Firmware Engineer at Los Altos, CA