Describes how to make daily and weekly backups using rsync, while preventing file duplication between the different backups.
It uses “snapshot”-style backups with hard links to create the illusion of multiple, full backups without much of the space or processing overhead.
Scripts and configuration examples are given for:
- Linux clients
- Windows clients (using cygwin)
- Synology server
Twenty years ago, I stumbled across Art Mulder’s snapback script. At the time, my backup solution lacked a bash shell, so I massaged the script to run under ash. I also extended the snapshot rotation mechanism and made it more robust against interrupted backups. Like so many other tools, it relies on hard-linked “snapshot”-style backups to keep multiple full backups without paying for them in space or processing time.
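The core idea is small enough to sketch in a few lines of shell. This is only an illustration of the hard-link technique, not the scripts themselves; SRC and DST are placeholders.
# minimal sketch of a hard-link snapshot rotation (illustration only)
SRC=/home/user/          # hypothetical source directory
DST=/volume1/backup      # hypothetical snapshot directory
mkdir -p "$DST"
rm -rf "$DST/daily.1"                                            # drop the previous snapshot
[ -d "$DST/daily.0" ] && cp -al "$DST/daily.0" "$DST/daily.1"    # hard-link copy: no file data is duplicated
rsync --archive --delete "$SRC" "$DST/daily.0"                   # rsync unlinks changed files, so daily.1 keeps the old versions
touch "$DST/daily.0"                                             # mtime records when this snapshot was taken
The full scripts below add the daily/weekly rotation, locking, logging and error handling around this same idea.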
The code is available at
Server side
Using the Synology control panel, create the user “backup” with administrator rights, and make sure you can ssh into your box. Here we’ll assume the box is called backup.
Scripts
From the backup account (ssh backup@backup), create ~/bin/backup, and give it execute permissions (chmod 755).
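For example (the ~/config and ~/data directories are the ones the script expects as CONFIG_DIR and BACKUP_DIR):
[backup@backup]$ mkdir -p ~/bin ~/config ~/data
[backup@backup]$ vi ~/bin/backup              # paste the source code below
[backup@backup]$ chmod 755 ~/bin/backup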
Source code ~/bin/backup
#!/bin/bash
# GPL $Id$
echo "start cron job" | /bin/logger -s -t backup -p error
LOGGER=/bin/logger
CONFIG_DIR=${HOME}/config
RSYNC_BACKUP=${HOME}/bin/backup-using-rsync
BACKUP_DIR=${HOME}/data
AWK=/bin/awk
DF=/bin/df
PASSWD_FILE=${HOME}/.rsync.passwd
LS=/bin/ls
SED=/bin/sed
echo "Starting $0 ..."
echo "Starting $0 ..." | $LOGGER -s -t backup -p error
echo $USER
whoami
if [ `whoami` != "backup" ]; then
echo "This script must be run as backup" 1>&2
exit 1
fi
IFS=$(echo -en "\n\b")
pushd ${CONFIG_DIR}
for node in */* ; do
echo "${node}"
$RSYNC_BACKUP \
--password-file=${PASSWD_FILE} \
--exclude-from="${CONFIG_DIR}/${node}" \
$* \
"${node}" \
"${BACKUP_DIR}/${node}" 2>&1 | $LOGGER -s -t backup -p error
done
popd
$DF -h ${BACKUP_DIR} 2>&1 | $LOGGER -s -t backup -p error
$DF -hi ${BACKUP_DIR} 2>&1 | $LOGGER -s -t backup -p error
( cd ${BACKUP_DIR} ; $LS -dl --quote-name --time-style=long-iso */*/* | awk '{ printf("stored backups: %08s ", $6); for (i=8;i<=NF;i++){printf "%s ", $i}; printf "\n"; }' ) | $LOGGER -s -t backup -p error
echo "Done $0 .."
From the backup account (ssh backup@backup), create ~/bin/backup-using-rsync, and give it execute permissions (chmod 755).
Source code ~/bin/backup-using-rsync
#!/bin/bash
# GPL $Id$
# ----------------------------------------------------------------------
# rotating-filesystem-snapshot utility using 'rsync'
#
# inspired by http://www.mikerubel.org/computers/rsync_snapshots
# ----------------------------------------------------------------------
# probably still runs under /bin/ash if you want ..
#set -o nounset # do not allow uninitialized variables
#set -o errexit # exit if any statement returns an error value
# ------------- file locations -----------------------------------------
#SNAPSHOT_DEV="/dev/sda2"
SNAPSHOT_DIR=~backup/data
LOCKFILE=~backup/`basename $0`.pid
# ------------- system commands used by this script --------------------
ECHO=/bin/echo
CUT=/bin/cut
PING=/bin/ping
GREP=/bin/grep
SED=/bin/sed
AWK=/bin/awk
PS=/bin/ps
DIRNAME=/bin/dirname
DATE=/bin/date
# after parsing the command line parameters, the following commands
# will be prefixed with $DRY
#MOUNT=/bin/mount
MKDIR=/bin/mkdir
CHMOD=/bin/chmod
RM=/bin/rm
MV=/bin/mv
CP=/bin/cp
TOUCH=/bin/touch
RSYNC=/bin/rsync
# ------------- other local variables ----------------------------------
PROGRAM=`basename $0`
USAGE="
Usage: $PROGRAM [--parameters] SRC DST
--verbose - increase verbosity
--quiet - decrease verbosity
--exclude=PATTERN - exclude files matching PATTERN
--exclude-from=FILE - patterns listed in FILE
--include-from=FILE - don't exclude patterns listed in FILE
--dry-run - do not start any file transfers
just report the actions it would have taken
--remove-last-daily - remove the last backup
--version - shows revision number
Example:
$PROGRAM --verbose --exclude-from=/etc/backup/hostname/module rsync://hostname/module $SNAPSHOT_DIR/hostname/module
"
# ------------- the script itself --------------------------------------
usage() {
$ECHO "$USAGE"
}
case "$1" in
--help|"")
usage
exit 0
;;
--version)
REVISION=`$ECHO '$Revision 0.1$'|tr -cd '0-9.'`
$ECHO "$PROGRAM version $REVISION"
exit 0
;;
esac
# ------ print the error message to stderr, and remount r/o-------------
die() {
$ECHO "$PROGRAM: $*"
$ECHO "use '$PROGRAM --help' for help"
#$MOUNT -t ext3 -o remount,ro $SNAPSHOT_DEV $SNAPSHOT_DIR
exit 1
}
# ------ execute a command, and exit on error --------------------------
checkExit() {
# $* || die "ERROR: $*"
"$1" "$2" "$3" "$4" "$5" "$6" "$7" "$8" "$9" || die "ERROR: $*"
}
# ----- returns 0 if $LOCKFILE exists, 1 otherwise ---------------------
removeOldLock() {
if [ -e ${LOCKFILE} ] ; then
a=`cat ${LOCKFILE}`
if ! `$PS | $AWK "\\$1 == \"$a\" { exit 1 }"` ; then
$ECHO "$PROGRAM:isLocked: WARNING cleaning old lockfile"
rm -f $LOCKFILE
fi
fi
}
isLockedOBSOLETE() {
if [ ! -e $LOCKFILE ] ; then
return 0
fi
# if the process that created the lock is dead, then cleanup its lockfile
a=`cat ${LOCKFILE}`
if ! `$PS | $AWK "\\$1 == \"$a\" { exit 1 }"` ; then
$ECHO "$PROGRAM:isLocked: WARNING cleaning old lockfile"
rm -f $LOCKFILE
return 0;
fi
return 1;
}
# ------- cleanup TERM, EXIT and INT traps -----------------------------
cleanup() {
trap - EXIT HUP INT QUIT TERM
if [ -e $LOCKFILE ] ; then
LOCKFILE_PROCID=`cat $LOCKFILE`
if [ "$$" = "$LOCKFILE_PROCID" ] ; then
$RM -f $LOCKFILE
else
$ECHO "$PROGRAM: Can't remove lockfile ($LOCKFILE)"
$ECHO "process $LOCKFILE_PROCID created the lock, while this process is $$)"
fi
fi
exit $1
}
# ----- print to stdout when the debug level $VERB >= $1 ---------------
verbose() {
local LEVEL="$1"
[ ! -z "$LEVEL" ] || die "verbose: unspecified LEVEL"
if [ $VERB -ge $LEVEL ] ; then
shift
echo "$PROGRAM: $*"
fi
}
# ------ prints directory, if debug level $VERB >= $1 ------------------
verbose_ls() {
[ $VERB -lt $1 ] || ( shift ; ls -l "$*/" )
}
# --- returns 0 if rsyncd is running on host $1, 1 otherwise -----------
rsyncRunningOnRemote() {
local SOURCE=$1
local HOSTNAME
[ ! -z "$SOURCE" ] || die "rsyncRunningOnRemote: unspecified source"
# was if $ECHO $SOURCE | grep '^rsync://' 2>/dev/null >/dev/null ; then
if [ -z "$SSH" ] ; then
HOSTNAME=`$ECHO "$SOURCE" | $CUT -d/ -f3`:
else
HOSTNAME=`$ECHO "$SOURCE" | $CUT -d: -f1`
fi
echo $HOSTNAME >&2
if $RSYNC $SSH $PWDFILE $HOSTNAME: 2>/dev/null >/dev/null ; then
return 0
else
return 1
fi
# else
# return 1
# fi
}
# ------ returns the name of the oldest daily/weekly backup directory --
findOldest() {
local TYPE="$1"
local ALL_DAILY
local OLDEST_DAILY
[ ! -z "$TYPE" ] || die "findOldest: unspecified duration {daily|weekly}"
ALL_DAILY=`ls -d -r "$DST/$TYPE".* 2>/dev/null`
OLDEST_DAILY=`$ECHO $ALL_DAILY | $SED "s,^$DST/,," | $CUT -d' ' -f1`
echo $OLDEST_DAILY
}
# ----- returns 0 if weekly backup should be made, 1 otherwise ---------
shouldMakeWeeklyBackup() {
local OLDEST_DAILY
local TODAY_DAY TODAY_YEAR
local OLDEST_DAILY_DAY OLDEST_DAILY_YEAR
OLDEST_DAILY=`findOldest daily`
# no point in making a weekly backup, if there is no daily one
if [ -z $OLDEST_DAILY ] ; then
return 1
fi
# only make a weekly backup if the oldest daily backup is at least 7 days old
TODAY_DAY=`$DATE +%j | $SED 's/^0*//g'` # leading 0 would represent Octal
TODAY_YEAR=`$DATE +%Y`
OLDEST_DAILY_DAY=`$DATE -r "$DST/$OLDEST_DAILY" +%j | $SED 's/^0*//g'`
OLDEST_DAILY_YEAR=`$DATE -r "$DST/$OLDEST_DAILY" +%Y`
#$DATE -r "$DST/$OLDEST_DAILY" +%j | $SED 's/^0*//g' >&2
#echo OLDEST_DAILY_DAY=${OLDEST_DAILY_DAY} >&2
#echo OLDEST_DAILY_YEAR=${OLDEST_DAILY_YEAR} >&2
DAY_OF_FIRST_WEEKLY=$((OLDEST_DAILY_DAY+7))
if [ $TODAY_YEAR -ne $OLDEST_DAILY_YEAR ] ; then
TODAY_DAY=$((TODAY_DAY+365*(TODAY_YEAR-OLDEST_DAILY_YEAR))) # add 365 days per elapsed year
fi
if [ $TODAY_DAY -lt $DAY_OF_FIRST_WEEKLY ] ; then
verbose 2 "No weekly backup, $TODAY_DAY -lt $DAY_OF_FIRST_WEEKLY"
return 1
fi
# make a weekly backup, if the last weekly backup was >= 14 days ago, or
# there was no last weekly backup.
TODAY_DAY=`$DATE +%j | $SED 's/^0*//g'`
TODAY_YEAR=`$DATE +%Y`
if [ -d "$DST/weekly.0" ] ; then
LAST_WEEKLY_DAY=`$DATE -r "$DST/weekly.0" +%j | $SED 's/^0*//g'`
LAST_WEEKLY_YEAR=`$DATE -r "$DST/weekly.0" +%Y`
else
LAST_WEEKLY_DAY=0
LAST_WEEKLY_YEAR=0
fi
DAY_OF_NEXT_WEEKLY=$((LAST_WEEKLY_DAY+14))
if [ $TODAY_YEAR -ne $LAST_WEEKLY_YEAR ] ; then
TODAY_DAY=$((TODAY_DAY+365))
fi
if [ $TODAY_DAY -ge $DAY_OF_NEXT_WEEKLY ] ; then
verbose 2 "Weekly backup, today($TODAY_DAY) -ge next($DAY_OF_NEXT_WEEKLY)"
return 0
else
verbose 2 "No weekly backup, today($TODAY_DAY) -ge next($DAY_OF_NEXT_WEEKLY)"
return 1
fi
}
# ----- renumber the $1 {daily,weekly} backups, starting at $2 ---------
renumber() {
local TYPE="$1"
local START="$2"
[ ! -z "$TYPE" ] || die "renumber: missing TYPE"
[ ! -z "$START" ] || die "renumber: missing START"
[ "$TYPE" = "daily" ] || [ "$TYPE" = "weekly" ] || die "renumber: incorrect TYPE"
echo RENUMBER
for item in "$DST/$TYPE".* ; do
$MV "$item" "$item.tmp"
done
COUNT=$START
for item in "$DST/$TYPE".* ; do
ITEM_NEW=`$DIRNAME "$item"`/$TYPE.$COUNT
$MV "$item" "$ITEM_NEW"
COUNT=$((COUNT+1))
done
}
# ----- create the backup ------------------------------------ ---------
backup() {
local OLDEST_DAILY
#echo 1
#echo \"$DST\"
# echo $MKDIR -p "$DST" || die "backup: $MKDIR -p $DST"
#echo 2
verbose 2 "STEP 0: the status quo"
verbose_ls 2 "$DST"
if shouldMakeWeeklyBackup ; then
verbose 2 "STEP 1: delete weekly.2 backup, if it exists"
if [ -d "$DST/weekly.2" ] ; then
$RM -rf "$DST/weekly.2"
fi ;
verbose_ls 2 "$DST"
verbose 2 "STEP 2: shift the middle weekly backups(s) back by one,"\
"if they exist"
renumber weekly 1
verbose_ls 2 "$DST"
OLDEST_DAILY=`findOldest daily`
#echo OLDEST_DAILY=${OLDEST_DAILY}
verbose 2 "STEP 3: make a hard-link-only (except for dirs) copy of"\
"$OLDEST_DAILY, into weekly.0"
if [ -d "$DST/$OLDEST_DAILY" ] ; then
#echo $CP -al "$DST/$OLDEST_DAILY" "$DST/weekly.0"
$CP -al "$DST/$OLDEST_DAILY" "$DST/weekly.0"
fi
verbose_ls 2 "$DST"
# note: do *not* update the mtime of weekly.0; it will reflect
# when daily.7 was made, which should be correct.
else
verbose 2 "STEP 1: no weekly backup needed, skipping STEP 2 and 3"
fi
verbose 2 "STEP 4: delete daily.7 backup, if it exists"
if [ -d "$DST/daily.7" ] ; then
$RM -rf "$DST/daily.7"
fi
verbose_ls 2 "$DST"
verbose 2 "STEP 5: shift the middle backups(s) back by one, if they exist"
renumber daily 1
verbose_ls 2 "$DST"
verbose 2 "STEP 6: make a hard-link-only (except for dirs) copy of the"\
"latest backup, if that exists"
if [ -d "$DST/daily.1" ] ; then
$CP -al "$DST/daily.1" "$DST/daily.0"
else
$MKDIR -p "$DST/daily.0"
$CHMOD 755 "$DST/daily.0"
fi;
verbose_ls 2 "$DST"
verbose 2 "STEP 7: rsync from $SRC to $DST/daily.0"
# (notice that rsync behaves like cp --remove-destination by default, so
# the destination is unlinked first. If it were not so, this would copy
# over the other backup(s) too!
verbose 1 "$RSYNC $SSH $PWDFILE --archive --delete --delete-excluded $PARAM $SRC $DST/daily.0"
verbose 0 "$SRC"
echo ============================================================
echo $DRY $RSYNC $SSH $PWDFILE --archive --delete --delete-excluded $PARAM --exclude-from=\"$EXCLUDEFROM\" \"$SRC\" \"$DST/daily.0\"
echo ============================================================
# --compress
$DRY $RSYNC $SSH $PWDFILE --archive --delete --delete-excluded $PARAM --exclude-from="$EXCLUDEFROM" "$SRC" "$DST/daily.0"
verbose 1 "$RSYNC done"
verbose 2 "STEP 8: update the mtime of daily.0 to reflect the backup time"
$TOUCH "$DST/daily.0"
# at the end of the week, the oldest daily backup, becomes last weeks
# backup
verbose_ls 2 "$DST"
verbose 1 "STEP 9: done"
}
# ----- remove the last daily backup -----------------------------------
removeLastDaily() {
verbose 2 "STEP 1: renumbering daily backups starting at ($DST/daily.0)"
renumber daily 0
verbose 2 "STEP 2: deleting the newest backup, if it exists "\
"($DST/daily.0)"
if [ -d "$DST/daily.0" ] ; then
$RM -rf "$DST/daily.0"
verbose 2 "STEP 3: renumbering daily backups starting at "\
"($DST/daily.0)"
renumber daily 0
fi
}
# ----- remount the file system ----------------------------------------
remount() {
local MOUNT_MODE="$1"
[ ! -z "$MOUNT_MODE" ] || die "remount, missing MOUNT_MODE"
#$MOUNT -t ext3 -o remount,$MOUNT_MODE $SNAPSHOT_DEV $SNAPSHOT_DIR
}
# ------------- trap errors --------------------------------------------
function err_trap_handler()
{
SCRIPTNAME="$0"
LASTLINE="$1"
LASTERR="$2"
die "${SCRIPTNAME}: line ${LASTLINE}: exit status of last command: ${LASTERR}"
}
# ------------- main ---------------------------------------------------
PARAM=
VERB=0
DRY=
REMOVE_LAST_DAILY=
SSH=
PWDFILE=
SRC=
DST=
EXCLUDEFROM=
# trap commands with non-zero exit code
trap 'err_trap_handler ${LINENO} $?' ERR
while [ -n "$1" ] ; do
case $1 in
--verbose)
shift
VERB=$((VERB+1))
[ $VERB -ge 2 ] && PARAM="$PARAM --verbose"
;;
--quiet)
PARAM="$PARAM $1"
shift
[ $VERB -eq 0 ] || VERB=$((VERB-1))
;;
--help | -h)
shift;
usage
exit 1;
;;
--dry-run)
PARAM="$PARAM $1"
shift;
DRY="$ECHO"
;;
--remove-last-daily)
shift;
REMOVE_LAST_DAILY=y
;;
--password-file*)
PWDFILE="$1"
shift
;;
--exclude-from*)
EXCLUDEFROM=${1:15}
shift
;;
-*)
PARAM="$PARAM $1"
shift
;;
*)
if [ -z "$SRC" ] ; then
if [[ "$1" == *\.ssh* ]] ; then
# use rsync over SSH to backup remote hosts
# assumes that ~/.ssh/config contains the connection info
# such as port# and keys to use.
SSH="-e '/bin/ssh'"
PWDFILE="" # ignore "--password-file"
SRC=${1/\//:}/ # replace / with :
echo RSYNCoSSH SSH=$SSH SRC=$SRC
else
# use the rsync protocol to backup hosts on the LAN
SRC=rsync://backup@$1
PARAM="$PARAM --chmod=u=rwx" # make everything accessible
echo RSYNC PARAM=$PARAM SRC=$SRC
fi
else
if [ -z "$DST" ] ; then
DST=$1
else
die "ignoring parameter '$1'"
fi
fi
shift
;;
esac
done
RSYNC_VERS=`$RSYNC --version | $AWK '$1 == "rsync" && $2 == "version" { print $3 }'`
[ ! -z "$SRC" ] || die "source not specified"
[ ! -z "$DST" ] || die "destination not specified"
# [ `id -u` = 0 ] || die "only root can do that"
#was:
#trap 'cleanup' TERM EXIT INT # catch kill, script exit and int
# The 1st trap removes the lock at the end of the script. The 2nd trap causes the
# script to terminate after receiving one of the specified signals. Before the
# script terminates, the trap for "signal EXIT" is executed, effectively removing
# the lock.
trap 'cleanup' EXIT
trap 'exit 2' HUP INT QUIT TERM
echo testing for lock
if [ -z $DRY ] ; then
mkdir -p /var/lock
echo removing old lock
removeOldLock
echo creating new lock
if ( set -o noclobber ; echo "$$" > $LOCKFILE ) 2> /dev/null ; then
trap 'cleanup' TERM EXIT INT # clean up lockfile at kill, script exit or ^c
else
die "Failed to acquire lock: $LOCKFILE held by $(cat $LOCKFILE)"
fi
echo got the lock
fi
verbose 2 "Backup '$SRC' -> '$DST'"
verbose 2 "parameters: '$PARAM'"
if [ ! -z $REMOVE_LAST_DAILY ] ; then
removeLastDaily
exit 0
fi
if rsyncRunningOnRemote "$SRC" ; then
remount rw
backup
RET=$?
remount ro
else
$ECHO "RSYNC daemon not running on '$SRC'"
RET=1
fi
exit $RET
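The backup script above calls backup-using-rsync once per configuration file, but it can also be invoked by hand; a dry run is a handy first test. The example below mirrors the way the wrapper calls it (hostname and users are placeholders for your own client and module):
[backup@backup]$ ~/bin/backup-using-rsync --verbose --verbose --dry-run \
      --password-file=${HOME}/.rsync.passwd \
      --exclude-from=${HOME}/config/hostname/users \
      hostname/users ${HOME}/data/hostname/users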
Configuration files
The configuration file names indicate which machines should be backed up. The contents of each file dictate which files should be excluded from the backup.
For Windows clients
For example, the file ~/config/hostname/users tells the script that module "users" on client "hostname" should be backed up, and lists the exclude patterns:
- /User Name/Documents/.emacs.d/auto-save-list/
- /User Name/Documents/.bash_history
- /User Name/Videos/Layout/releases/*/*.iso
Create a configuration file: ~/config/hostname/settings
- /Default/
- /Default User/
- /HelpAssistant/
- /LocalService/
- /NetworkService/
- /All Users/Microsoft/Windows Defender/
- /All Users/Microsoft/Search/
- /All Users/Microsoft/RAC/
- /All Users/Microsoft/eHome/
- /*/ntuser.dat*
- /*/AppData/
- /*/Application Data/
- /*/Local Settings/
- /*/My Documents/
- /*/NetHood/
- /*/PrintHood/
- /*/Recent/
- /*/Searches/
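Once the client's rsync daemon is reachable (see the MS Windows client setup below), a dry run is a quick way to check that the patterns exclude what you expect; ~/.rsync.passwd is the same password file the backup script uses:
[backup@backup]$ rsync --dry-run --archive --verbose \
      --password-file=${HOME}/.rsync.passwd \
      --exclude-from=${HOME}/config/hostname/settings \
      rsync://backup@hostname/settings/ /tmp/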
For Linux clients
We will connect to Linux clients over secure shell (SSH). We assume you are familiar with configuring SSH.
Install the server's private key as ~/.ssh/remote.key and create ~/.ssh/config with the connection details. The alias of the remote system must end in .ssh; this lets the script know to establish an SSH connection.
Host remote.ssh
    IdentityFile ~/.ssh/remote.key
    Port 22
    User username
    Hostname remote.com
    PasswordAuthentication no
While password authentication is still enabled, copy the public key to the client:
[backup@backup]$ scp ~/.ssh/remote backup@remote.com:~/.ssh/authorized_keys
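With the key in place, verify that the alias works before relying on it (the exact output depends on the remote host):
[backup@backup]$ ssh remote.ssh 'hostname ; whoami'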
Create a matching configuration file, ~/config/remote.ssh/users, as shown earlier.
Client side
We will use plain rsync for Windows clients, but run it over SSH for Linux clients.
Linux
Make sure rsync and the SSH daemon are installed, and that you can ssh in from the server using its alias (from ~/.ssh/config).
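A quick check of both, sketched here for a Debian-style client (package names differ per distribution):
# on the Linux client
$ sudo apt-get install rsync openssh-server
# from the backup account on the server, check the alias and the client's rsync
[backup@backup]$ ssh remote.ssh 'rsync --version | head -1'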
SQL databases
To back up a SQL database, first dump it to a file on the host, so that the dump becomes part of the backup.
#!/bin/bash
/bin/whoami
set -x
FILESTOKEEP=3
BKBASE=${HOME}/sql_backup
FNBASE=${BKBASE}/`date +%Y-%m-%d`
mkdir -p ${BKBASE}
echo ${FNBASE}
for ii in dbase1 dbase2 ; do
    echo "${FNBASE}_${ii}"
    mysqldump --defaults-extra-file=${BKBASE}/.${ii}.cnf my_${ii} | gzip > ${FNBASE}_${ii}.sql.gz
    # keep only the newest ${FILESTOKEEP} dumps per database
    ls -tr ${BKBASE}/*${ii}.sql.gz | head -n -${FILESTOKEEP} | xargs --no-run-if-empty rm 2>/dev/null
done
Create a configuration file for each SQL database, at the path the script expects. For example, ~/sql_backup/.dbase1.cnf:
[client]
user=my_dbase1_user
password="your_dbase1_passwd"
no-tablespaces
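The dump is only useful if it runs before the nightly backup collects the files. Assuming you saved the dump script above as ~/bin/sql_backup and made it executable, a crontab entry on the database host along these lines will do (adjust the time so it finishes before your backup window):
# crontab -e on the database host (path and time are examples)
0 1 * * * $HOME/bin/sql_backup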
MS Windows
MS Windows does not include a native version of rsync or its daemon. We will install a Cygwin-based rsync:
- Download and install cwRsyncServer_4.1.0_Installer.zip to "C:\Program Files (x86)\rsyncd". Let it create a new service account "backup".
- Download and install cwRsync_5.5.0_Installer. Use these binaries to overwrite the ones in "C:\Program Files (x86)\rsyncd".
Create a secrets file (C:\Program Files (x86)\rsyncd\rsyncd.secrets). Yeah, yeah, I know .. but it beats no authentication.
backup:your_rsync_password
To hide the backup user from the Windows login screen, change or add the registry value HKEY_LOCAL_MACHINE > SOFTWARE > Microsoft > Windows NT > CurrentVersion > Winlogon > SpecialAccounts > UserList > "backup" and set it to dword:00000000.
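If you prefer to script that registry change, the standard reg add tool can create the value from an elevated command prompt (just an alternative to editing the registry by hand):
reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\SpecialAccounts\UserList" /v backup /t REG_DWORD /d 0 /f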
Configure the rsync daemon (C:\Program Files (x86)\rsyncd\rsyncd.conf):
use chroot = false
strict modes = false
hosts allow = backup localhost
auth users = backup
secrets file = rsyncd.secrets
log file = rsyncd.log

[users]
path = /cygdrive/c/Users/
read only = yes
list = yes

[settings]
path = /cygdrive/c/Users/
read only = yes
list = yes
Make sure the service starts automatically (Computer Management > Services and Applications > Services > RsyncServer). Change the startup type to automatic, apply and start the service. I had to add the startup parameter "--start RsyncServer".
Verify that your PC is listening (netstat -na | Select-String "873"), and that the firewall allows the incoming connection to rsync.exe.
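You can also ask the daemon to list its modules locally on the Windows client; this exercises rsyncd.conf and the secrets file without involving the firewall (the path assumes the default install location used above). It should list the users and settings modules.
C:\> "C:\Program Files (x86)\rsyncd\rsync.exe" rsync://localhost/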
Back on the server
Run some tests, and start a backup
[backup@backup]$ rsync rsync://hostname/users/   # test rsync (requires passwd)
[backup@backup]$ backup --verbose --verbose
# meanwhile ..
[backup@backup]$ tail -f /var/log/messages
Schedule a daily backup
Using the Synology control panel, use the Task Scheduler to schedule /var/services/homes/backup/bin/backup -v to run daily.
Reporting
The backup script relies on syslog to communicate with the outside world. We will use logwatch to dig through the syslog files. You can download logwatch from its project site. Just run the install script on the server, and point it to your perl binary (/bin/perl).
Setup
Logwatch is set up to run once a day and generates a single email gathering the backup log analysis. To allow logwatch to check the backup-using-rsync logs, you need to install the script and configuration files shown below. First, define which log files should be analyzed (vi /etc/logwatch/conf/logfiles/backup.conf):
# GPL $Id$
# defines which log files should be analyzed for service backup
LogFile = messages
Archive = messages-*
On the syslog server, define the service and the script by creating the following files:
Logwatch configuration
/etc/logwatch/conf/services/backup.conf
# GPL $Id$
# defines the service logwatch
Title = "BACKUP script (backup)"
LogFile = messages
*OnlyService = backup
*OnlyHost = backup
*RemoveHeaders =
Logwatch script
/etc/logwatch/scripts/services/backup
#!/bin/perl
# GPL $Id$
# logwatch script for service backup
# example:
# export show_only_server=truus.lan.vonk
# logwatch --archives --range yesterday \
# --hostname back.vonk --service backup --mailto root
$ShowOnlyServer = $ENV{'show_only_server'} || "";
$ShowSuccess = $ENV{'show_successful'} || 1;
$ShowFailed = $ENV{'show_failed'} || 1;
$ShowIOerror = $ENV{'show_io_error'} || 1;
$ShowVanishedFiles = $ENV{'show_vanished_files'} || 1;
$ShowFailedFiles = $ENV{'show_failed_files'} || 1;
$ShowDiskFree = $ENV{'show_disk_free'} || 1;
$ShowStored = $ENV{'show_stored'} || 1;
$ShowUnmatched = $ENV{'show_unmatched'} || ( $ShowOnlyServer eq "" );
sub showServer {
my($server) = @_;
return ( length($ShowOnlyServer) == 0 or ( $ShowOnlyServer eq $server ) );
}
while (defined($ThisLine = <STDIN>)) {
if ( ($Server,$Service) =
($ThisLine =~ /RSYNC daemon not running on \'rsync:\/\/(.*?)\/(.*?)\'/i ) ) {
$CurrServer="";
$CurrService="";
if ( showServer($Server) ) {
$Failed->{$Server}->{$Service}++;
}
} elsif ( ($Server,$Service) =
($ThisLine =~ /rsync-backup: rsync:\/\/(.*?)\/(.*?)$/i ) ) {
$CurrServer=$Server;
$CurrService=$Service;
if ( showServer($Server) ) {
$Success->{$Server}->{$Service}++;
}
} elsif ( ($FileName,$Service) =
($ThisLine =~ /file has vanished: \"(.*?)\" \(in (.*?)\).*$/i ) ) {
if ( showServer($Server) ) {
$VanishedFiles->{$CurrServer}->{$Service}->{$FileName}++;
}
} elsif ( ($FileName,$Service) =
($ThisLine =~ /rsync: read errors mapping \"(.*?)\" \(in (.*?)\):.*$/i ) ) {
if ( showServer($Server) ) {
$FailedFiles->{$CurrServer}->{$Service}->{$FileName}++;
}
} elsif ( ($ThisLine =~ /IO error encountered -- skipping file deletion/ ) ) {
if ( showServer($Server) ) {
$IOerror->{$CurrServer}->{$CurrService}++;
}
} elsif ( ($Date,$Server,$Service,$Period) =
($ThisLine =~ /stored backups: (.*?) (.*?)\/(.*?)\/(.*?)$/i )) {
if ( showServer($Server) ) {
$StoredBackup->{$Server}->{$Service}->{$Period} = $Date;
}
} elsif ( ($ThisLine =~ /ERROR: file corruption in/ ) or
($ThisLine =~ /rsync error: some files could not be transferred/ ) or
($ThisLine =~ /rsync: failed to connect to nis.vonk/ ) or
($ThisLine =~ /rsync error: error in socket IO \(code 10\) at clientserver.c/ ) or
($ThisLine =~ /--help/ ) or
($ThisLine =~ /rsync-backup: ERROR:/ ) ) {
# ignore
} elsif ( ($ThisLine =~ /Filesystem/ ) or
($ThisLine =~ /\/dev\/md0/ ) ) {
push @DiskFreeList,$ThisLine;
} else {
# Report any unmatched entries...
push @OtherList,$ThisLine;
}
}
if ($ShowSuccess) {
if (keys %{$Success}) {
print "\nSuccessful Backups:\n";
foreach $Server (sort {$a cmp $b} keys %{$Success}) {
foreach $Service (sort {$a cmp $b} keys %{$Success->{$Server}}) {
print "\t" . $Server . "/" . $Service;
$count = $Success->{$Server}->{$Service};
if ( $count > 1 ) {
print " (" . $count . " times)";
}
print "\n";
}
}
}
}
if ($ShowFailed) {
if (keys %{$Failed}) {
print "\nFailed Backups:\n";
foreach $Server (sort {$a cmp $b} keys %{$Failed}) {
foreach $Service (sort {$a cmp $b} keys %{$Failed->{$Server}}) {
print "\t" . $Server . "/" . $Service;
$count = $Failed->{$Server}->{$Service};
if ( $count > 1 ) {
print " (" . $count . " times)";
}
print "\n";
}
}
}
}
if ($ShowFailedFiles) {
if (keys %{$FailedFiles}) {
print "\nFiles skipped due to file locking:\n";
foreach $Server (sort {$a cmp $b} keys %{$FailedFiles}) {
foreach $Service (sort {$a cmp $b} keys %{$FailedFiles->{$Server}}) {
print "\t" . $Server . "/" . $Service . "\n";
foreach $FileName (sort {$a cmp $b} keys %{$FailedFiles->{$Server}->{$Service}}) {
print "\t\t";
my $len=length($FileName);
if ( $len > 40 ) {
print ".." . substr( $FileName, $len - 38, 38);
} else {
print $FileName;
}
$count = $FailedFiles->{$Server}->{$Service}->{$FileName};
if ( $count > 1 ) {
print " (" . $count . " times)";
}
print "\n";
}
}
}
}
}
if ($ShowIOerror) {
if (keys %{$IOerror}) {
print "\nOld files not deleted as a precaution for an IO error:\n";
foreach $Server (sort {$a cmp $b} keys %{$IOerror}) {
foreach $Service (sort {$a cmp $b} keys %{$IOerror->{$Server}}) {
print "\t" . $Server . "/" . $Service;
$count = $IOerror->{$Server}->{$Service};
if ( $count > 1 ) {
print " (" . $count . " times)";
}
print "\n";
}
}
}
}
if ($ShowVanishedFiles) {
if (keys %{$VanishedFiles}) {
print "\nFiles that vanished:\n";
foreach $Server (sort {$a cmp $b} keys %{$VanishedFiles}) {
foreach $Service (sort {$a cmp $b} keys %{$VanishedFiles->{$Server}}) {
print "\t" . $Server . "/" . $Service . "\n";
foreach $FileName (sort {$a cmp $b} keys %{$VanishedFiles->{$Server}->{$Service}}) {
print "\t\t";
my $len=length($FileName);
if ( $len > 40 ) {
print ".." . substr( $FileName, $len - 38, 38);
} else {
print $FileName;
}
$count = $VanishedFiles->{$Server}->{$Service}->{$FileName};
if ( $count > 1 ) {
print " (" . $count . " times)";
}
print "\n";
}
}
}
}
}
if ($ShowStored) {
if (keys %{$StoredBackup}) {
print "\nStored Backups:\n";
foreach $Server (sort {$a cmp $b} keys %{$StoredBackup}) {
foreach $Service (sort {$a cmp $b} keys %{$StoredBackup->{$Server}}) {
print "\t" . $Server . "/" . $Service . "\n";
foreach $Period (sort {$a cmp $b} keys %{$StoredBackup->{$Server}->{$Service}}) {
print "\t\t" . $StoredBackup->{$Server}->{$Service}->{$Period} .
" (" . $Period . ")\n";
}
}
}
}
}
if (($ShowDiskFree) and ($#DiskFreeList >= 0)) {
print "\nDisk Space:\n\n";
print @DiskFreeList;
}
if (($#OtherList >= 0) and ($ShowUnmatched)) {
print "\n**Unmatched Entries**\n";
print @OtherList;
}
exit(0);
Give it a spin
On the syslog server, generate a logwatch report
logwatch --archives --hostlimit backup --service backup | less
Run it every day
On the syslog server, schedule a cronjob to execute logwatch for this service (as root, crontab -e)
0 2 * * * /usr/sbin/logwatch --archives --range yesterday --hostlimit backup --service backup --mailto you@domain.com
0 1 * * * export show_only_server=hostname ; /usr/sbin/logwatch --archives --range yesterday --service backup --hostlimit backup --mailto you@domain.com