Installing duplicity and duply

The actual backup logic is implemented by duplicity. It creates full backups, calculates and executes incremental backups and applies encryption (e.g. GPG).
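Under the hood a single duplicity invocation already does all of this; a rough sketch of a direct call (the host, path, password and key id are placeholders):

```shell
# full or incremental backup of /etc, encrypted with the given GPG key;
# duplicity decides on its own whether an incremental backup is possible
FTP_PASSWORD='secret' duplicity --encrypt-key ABCD1234 /etc ftp://backupuser@backup.example.com/etc-backup
```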

Duply is a wrapper around duplicity that makes it easier to configure, run and especially schedule duplicity backup jobs.

The following command installs everything necessary for regular backups to an FTP server.

apt-get install duply ncftp

Note that duplicity can use other mechanisms than FTP, too. The necessary changes are minimal (for details see the duplicity documentation).
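For example, duplicity also speaks sftp:// and file:// URLs, so switching backends usually means nothing more than a different TARGET in the profile (the users, hosts and paths below are placeholders):

```shell
# SFTP instead of FTP (user, host and path are placeholders)
TARGET='sftp://<user>@<host>/<path>'
# a locally mounted backup disk
TARGET='file:///mnt/backup'
```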

Creating a GPG key pair

In order to encrypt your backups with GPG you first need to create a key pair.

gpg --gen-key

Use 4096 bit keys just to be safe.

<note tip>If your server is a virtual server it might have trouble generating enough entropy (random data) for the key generation. If this happens, generate the keys on your desktop and upload them to the server.</note>
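To move a key pair generated on your desktop to the server, export it, copy it over and import it there (the key id and server name are placeholders):

```shell
# on the desktop
gpg --armor --export ABCD1234 > serverbackup-public.asc
gpg --armor --export-secret-keys ABCD1234 > serverbackup-secret.asc
scp serverbackup-*.asc root@<server>:

# on the server
gpg --import serverbackup-public.asc serverbackup-secret.asc
```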

Creating a duply profile

A duply profile is a set of configuration options that tells duply what to back up and how.

Create a profile called serverbackup. You can add more profiles later if you need them.

duply serverbackup create

Now tell duply what to back up and where the backup goes in the file /root/.duply/serverbackup/conf. The important lines are listed below. Search the file for the configuration keys; some of them need to be uncommented.

GPG_KEY='<eight digit GPG key id>'
GPG_PW='<GPG key password>'
TARGET='ftp://<FTP user>@<host>/<path>'
TARGET_PASS='<FTP password>'
# set MAX_AGE to 12 months, else purging will not work
MAX_AGE=12M
# force a full backup after one month
MAX_FULLBKP_AGE=1M
DUPL_PARAMS="$DUPL_PARAMS --full-if-older-than $MAX_FULLBKP_AGE "
# increase volsize to 200MB (default: 25MB)
VOLSIZE=200
DUPL_PARAMS="$DUPL_PARAMS --volsize $VOLSIZE "
# log added/modified files
DUPL_PARAMS="$DUPL_PARAMS --verbosity info "

Replace the placeholders with the actual values.
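The eight-digit key id that GPG_KEY expects can be read off the pub line of:

```shell
gpg --list-keys --keyid-format short
```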

Alternative: Duply backup to Amazon S3

Create a backup bucket.

aws s3 mb "s3://backup.$DOMAIN"

Create an IAM user for the S3 backup.

aws iam create-user --user-name serverbackup

Note down the access key for the newly created backup user!
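If no key was displayed during creation, one can be generated for the new user (this assumes the AWS CLI is already configured with administrative credentials):

```shell
aws iam create-access-key --user-name serverbackup
```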

aws iam put-user-policy --user-name serverbackup --policy-name S3DuplyServerBackup --policy-document '{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": ["arn:aws:s3:::backup.<domain>"]
        },
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
            "Resource": ["arn:aws:s3:::backup.<domain>/*"]
        }
    ]
}'

Replace <domain> with the domain used for the bucket.

In the configuration file set the following parameters:

TARGET='s3://s3.<region>.amazonaws.com/backup.<domain>'
TARGET_USER='<AWS access key ID>'
TARGET_PASS='<AWS secret access key>'

Don't forget to replace the region and bucket placeholders with the actual values of your S3 bucket.

Configuring duply

Not everything on a server needs to go into a backup. Everything that can be installed from packages, temporary files and pseudo-files like those under /proc should be excluded.

Edit /root/.duply/serverbackup/exclude:

- **/*[Cc]ache*
- **/*[Hh]istory*
- **/*[Ss]ocket*
- **/*[Bb]ackup
- **/*.[Bb]ak
- **/*[Dd]ump
- **/*.[Ll]ock
- **/*.log
- **/*.[Tt]mp
- **/*.[Tt]emp
- **/*.swp
- **/*~
- **/.cache
- **/.dbus
- **/.fonts
+ /etc
+ /root
+ /home
- /var/tmp
+ /var/
- **

Note that despite the name you can include files here too.
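Duplicity evaluates the list top to bottom and the first matching rule wins, so specific includes have to come before the catch-all exclude. A minimal list that would back up only /etc (a made-up example, not part of the profile above):

```
+ /etc
- **
```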

Running duply the first time

Time to try out our setup so far. First we will do a full backup.

duply serverbackup full

Then we have a look at the list of the files in the backup.

duply serverbackup list

Important: Back up your duply profile to a safe place. It contains your settings and the GPG key:

cd ~/.duply/
zip -r ~/serverbackup-profile.zip serverbackup/

Repeat this any time you change your duply configuration.

Scheduling duply

Create the file /root/bin/ with the following content:

#!/bin/bash
set -o nounset
set -e
PROFILE=serverbackup
DUPLY=/usr/bin/duply
LOG_DIR=/var/log/duply
LOG=${LOG_DIR}/${PROFILE}.log
# prefix every log line with a timestamp
timestamp () {
    while read line; do
        echo "$(/bin/date '+%Y-%m-%d %H:%M:%S') $line"
    done
}
if [[ ! -e ${LOG_DIR} ]]; then
    mkdir ${LOG_DIR}
fi
# the name of the symlink decides the backup mode
if [[ $0 == *incr* ]]; then
    MODE=incr
else
    MODE=full
fi
echo "Running $0 in mode ${MODE}" | timestamp >> ${LOG}
$DUPLY ${PROFILE} ${MODE} 2>&1 | timestamp >> ${LOG}
if [ "${PIPESTATUS[0]}" -eq "0" ]; then
    # delete outdated backups only after a successful run
    $DUPLY ${PROFILE} purge --force 2>&1 | timestamp >> ${LOG}
fi
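The timestamp helper simply prefixes every line it reads with the current date and time; it can be checked on its own, independent of duply:

```shell
#!/bin/bash
# prefix each input line with a timestamp
timestamp () {
    while read line; do
        echo "$(/bin/date '+%Y-%m-%d %H:%M:%S') $line"
    done
}

echo "backup started" | timestamp   # e.g. "2015-09-26 11:57:00 backup started"
```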

Then run the following commands:

chmod ug+x /root/bin/
ln -s /root/bin/ /etc/cron.daily/duply-incremental
ln -s /root/bin/ /etc/cron.weekly/duply-full

The first command makes the shell script executable; otherwise run-parts, which is started by cron unless anacron is installed, will not run the script. The next two lines create symbolic links that install the cron jobs:

* daily: incremental backup
* weekly: full backup

Test if cron scripts are working:

run-parts -v /etc/cron.daily
run-parts -v /etc/cron.weekly

The names of the scripts should appear now. If they don't, check the file permissions and make sure that the links in the cron directories do not end in .sh.
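run-parts only considers file names made up of letters, digits, hyphens and underscores, which is why a .sh suffix silently disables a script. This can be verified without touching cron (the directory and script names below are made up):

```shell
#!/bin/bash
# scratch directory with two identical scripts, one with a dot in its name
dir=$(mktemp -d)
printf '#!/bin/sh\necho ok\n' > "$dir/duply-full"
printf '#!/bin/sh\necho ok\n' > "$dir/duply-full.sh"
chmod +x "$dir"/*

# --test lists what would be run: only the script without the dot
found=$(run-parts --test "$dir")
echo "$found"
rm -rf "$dir"
```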


Fixing: "Import of duplicity.backends.giobackend Failed: No module named gio"

If you receive this warning message in your backup logs, then you are probably missing the package python-gobject-2.

apt-get install python-gobject-2

Fixing: "Import of duplicity.backends.paramiko Failed: No module named paramiko"

If you receive this warning message in your backup logs, then you are probably missing the package python-paramiko.

apt-get install python-paramiko

Fixing: "BackendException: No connection to backend" when using S3

In Frankfurt, “eu-central-1” (and probably in other newer regions later), only version 4 signatures are supported. Add the following line to your configuration:

export S3_USE_SIGV4="True"

See also: Article on the "no connection to backend" exception on the Raim blog

duplicity.txt · Last modified: 2015/09/26 11:57 by sebastian