2020-02-16
Here we describe how to quickly set up efficient, automatic rolling backups on Linux. The backup at any particular week is simply a folder and can be accessed with your usual file manager. Only the files that changed from the previous week occupy any space. The backups can be made remotely, or locally.
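The space saving comes from hard links: a hard link is a second directory entry pointing at the same file data, so an unchanged file in a new backup costs no extra disk space. A minimal demonstration (the paths under /tmp are assumptions for illustration):

```shell
# Create a file and a hard link to it (demo paths, not real backups).
mkdir -p /tmp/hardlink-demo
echo "unchanged content" > /tmp/hardlink-demo/original
ln -f /tmp/hardlink-demo/original /tmp/hardlink-demo/linked

# Both names refer to the same inode; the link count is now 2,
# but the data exists on disk only once.
stat -c '%h' /tmp/hardlink-demo/original    # prints 2
```

This is exactly what rsync's --link-dest does for every file that has not changed since the previous backup.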
Here’s a quick overview of the process: suppose we already have backups from the last two days in backup/2020-02-14 and backup/2020-02-15. To make today’s backup in backup/2020-02-16, we hard link any file that hasn’t changed from the previous backups, and then copy any file that has changed.

To make the backup itself, you just need to run the command:
rsync -xa -HAX --delete-after --fuzzy --fuzzy --delete-excluded \
    --exclude pat1 --exclude pat2 \
    --link-dest=../2020-02-14 --link-dest=../2020-02-15 \
    source-dir/ dest-dir/2020-02-16/
For a full explanation of the options consult the rsync manual page. The useful ones for our purpose are:
--exclude pat1
: Patterns of file / directory names you want excluded from the backup.

--link-dest=...
: All old backup directories need to be specified as paths relative to the destination backup directory.

dest-dir is the destination directory, which can be a remote folder using a variety of supported protocols.

As time progresses you need to manually delete older backup directories. All of this is a simple enough task that you can hack your own script quickly. Or you can download my perl script here. My script keeps:
If your target directory is on a remote host, then my script also requires you be able to ssh in without having to type your password.
Once you have tuned your backup script to your liking, you can automate it easily using systemd as follows. (The older method to do this was using cron. Using systemd is simpler and more reliable.)
First setup a service that will notify you by email.
Put the following in ~/.config/systemd/user/notify-email@.service:
[Unit]
Description=Send email

[Service]
Type=oneshot
ExecStart=sh -c 'systemctl --user status %i | \
    mailx -s "[SYSTEMD %i] Fail" gautam'
(Replace gautam with your username of course.)
Now setup a service that will do the backups.
Put the following in ~/.config/systemd/user/backup.service:
[Unit]
OnFailure=notify-email@%i.service

[Service]
Type=oneshot
ExecStart=/home/gautam/bin/backup \
    /home/gautam LOCAL_DST_DIR/backups -- -v
ExecStart=/bin/bash -c '\
    eval "$(keychain --noask --agents ssh -Q --quiet --eval)"; \
    ~/bin/backup ~ REMOTEHOST:backups -- -v'
Replace /home/gautam with the directory you want to back up.
If you have a second hard drive, and want to do backups locally, then use the first ExecStart command. If you want to do backups remotely, then use the second ExecStart command.
For this, you need a way to log in remotely without typing a password.
The most common way of doing this is using ssh with a public/private key pair. If you have this set up, then your private key is typically encrypted and you only decrypt it in memory using ssh-agent.
In order to let your backup run, you need to pass this information to systemd.
I do it using keychain in the snippet above, but other ways are possible too.
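If you don't have a key pair yet, the usual setup looks roughly like this (a sketch: the key path under /tmp and REMOTEHOST are placeholders; in practice you would use ~/.ssh/id_ed25519, your real host, and a passphrase that ssh-agent/keychain decrypts in memory):

```shell
# Generate an ed25519 key pair (demo path; -N "" means no passphrase,
# which you would NOT do for a real key).
rm -f /tmp/demo_ed25519 /tmp/demo_ed25519.pub
ssh-keygen -q -t ed25519 -f /tmp/demo_ed25519 -N ""

# Install the public key on the remote host (needs the real host,
# so it is left commented out here):
# ssh-copy-id -i /tmp/demo_ed25519.pub you@REMOTEHOST

# After that, this should log in without a password prompt:
# ssh -i /tmp/demo_ed25519 you@REMOTEHOST
```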
Finally setup a timer that runs the backup service.
Put the following in ~/.config/systemd/user/backup.timer:
[Unit]
Description="Automatically run backups"

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
Test it using
systemctl --user start backup.service
If there is a problem it should print an error message on the screen, and email you. You can also view the logs using
journalctl --user-unit=backup.service
Once it is working, enable it so it runs automatically:
systemctl --user enable backup.timer
Alex (2022-12-17 01:43:30 EST)
I have a script that works fine, as listed below. The only problem is that I want it to write only "Backup OK" with the date and time into the log file, or note a problem. At the moment it is listing all the files that it backed up. Any suggestions?
Many thanks, Alex
dow=$(/usr/bin/date "+%a")
ldow=${dow,,}
/usr/bin/rsync -av /home/alexe/afolders/ /media/alexe/Elements/$ldow/ > /home/alexe/Desktop/afolders.txt