Ask HN: Best Linux server backup system?

Linux Backup Solutions

I've been looking for the best Linux server backup system, and also reading lots of HN comments.

Instead of listing the pros and cons of every backup system, I'll just list some deal-breakers that would disqualify each one.

I would also like you, the HN community, to add more deal-breakers for these or other backup systems if you know of any. At the same time, if you have data to disprove any of the deal-breakers listed here (benchmarks, or information that something was true for older releases but is fixed in newer ones), please share it so that I can edit this list accordingly.

Amanda (comments by sammcj)

  • It has a lot of management overhead and that's a problem if you don't have time for a full time backup administrator.
  • It mainly consists of using tar for backups, which is pretty inflexible by modern standards.
  • The enterprise web interface is OK but it's had so many bugs it's not funny.
  • Backups are very slow.
  • Restores are slow and painful to manage.
  • I haven't found it to be great when trying to integrate with puppet / automation frameworks.

Bacula (from the Why section on Burp):

  • Too complex to configure
  • Stores catalog separate from backups, need to backup catalog
  • Doesn't deduplicate
  • Relies on clock accuracy
  • Can't resume an interrupted backup
  • Confusing retention policy configuration

Snebu:

  • Doesn't do encryption
  • File level, not block level deduplication

Obnam:

  • Really slow for large backups (from a benchmark between obnam and attic)
  • To improve performance, raise these two settings:

    lru-size=1024
    upload-queue-size=512

as per: http://listmaster.pepperfish.net/pipermail/obnam-support-obnam.org/2014-June/003086.html
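For reference, those options live in Obnam's INI-style configuration file. A minimal sketch, assuming the usual `[config]` section (the repository path is a made-up placeholder):

```shell
# Sketch: write an Obnam config applying the tuning options from the post above.
# The repository path is a placeholder, not taken from the thread.
cat > obnam.conf <<'EOF'
[config]
repository = /srv/obnam-repo
lru-size = 1024
upload-queue-size = 512
EOF
```

Obnam picks up ~/.obnam.conf automatically, or an explicit file can be passed with --config.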

Burp:

  • Client side encryption turns off delta differencing

Bup:

  • Can't purge old backups
  • Doesn't encrypt backups (well, there is encbup)

Tarsnap:

  • Slow restore performance on large backups? (Sorry Colin aka cperciva)
  • This was a really strong candidate until I read some comments on HN about the slow performance to restore large backups.
  • If this has changed in a recent version or someone has benchmarks to prove or disprove it, it would be really valuable.

Duplicity:

  • Slow restore performance on large backups?
  • This was also a really strong candidate until I read some comments on HN about the slow performance to restore large backups.
  • If this has changed in a recent version or someone has benchmarks to prove or disprove it, it would be really valuable.

BackupPC:

  • It doesn't do encrypted backups

backup2l:

  • No support for encryption

Arq:

  • Just included here because I knew someone would mention it in the comments. It's Mac OS X only. This list is for Linux server backup systems.

Other contenders (of which I don't have references or information):

Also Tarsnap scores really high on encryption and deduplication but it has 3 important cons:

  • Not having control of the server where your backups are stored
  • Bandwidth costs make your costs unpredictable
  • The so-called Colin-Percival-gets-hit-by-a-bus scenario

Attic

Attic has some really good comments on HN and good blog posts, and doesn't have any particular deal-breaker (for now; if you have one, please share it with us), so for now it is the most promising.

Roll your own

Some HN users have posted the simple scripts they use. The scripts usually use a combination of:

  • rsync
  • LUKS
  • rdiff-backup

mikhailian's script

FROM=/etc
TO=/var/backups
LINKTO=--link-dest=$TO/`/usr/bin/basename $FROM`.1
OPTS="-a --delete --delete-excluded"
NUMBER_OF_BACKUPS=8

find $TO -maxdepth 1 -type d -name "`basename $FROM`.[0-9]"| sort -rn| while read dir
do
        this=`expr match "$dir" '.*\([0-9]\)'`; 
        let next=($this+1)%$NUMBER_OF_BACKUPS;
        basedirname=${dir%.[0-9]}
        if [ $next -eq 0 ] ; then
                 rm -rf $dir
        else
                 mv $dir $basedirname.$next
        fi
done
rsync $OPTS $LINKTO $FROM/ $TO/`/usr/bin/basename $FROM.0`

zx2c4's script

zx2c4@thinkpad ~ $ cat Projects/remote-backup.sh 
    #!/bin/sh
    
    cd "$(readlink -f "$(dirname "$0")")"
    
    if [ $UID -ne 0 ]; then
            echo "You must be root."
            exit 1
    fi
    
    umount() {
            if ! /bin/umount "$1"; then
                    sleep 5
                    if ! /bin/umount "$1"; then
                            sleep 10
                            /bin/umount "$1"
                    fi
            fi
    }
    
    unwind() {
            echo "[-] ERROR: unwinding and quitting."
            sleep 3
            trace sync
            trace umount /mnt/mybackupserver-backup
            trace cryptsetup luksClose mybackupserver-backup || { sleep 5; trace cryptsetup luksClose mybackupserver-backup; }
            trace iscsiadm -m node -U all
            trace kill %1
            exit 1
    }
    
    trace() {
            echo "[+] $@"
            "$@"
    }
    
    RSYNC_OPTS="-i -rlptgoXDHxv --delete-excluded --delete --progress $RSYNC_OPTS"
    
    trap unwind INT TERM
    trace modprobe libiscsi
    trace modprobe scsi_transport_iscsi
    trace modprobe iscsi_tcp
    iscsid -f &
    sleep 1
    trace iscsiadm -m discovery -t st -p mybackupserver.somehost.somewere -P 1 -l
    sleep 5
    trace cryptsetup --key-file /etc/dmcrypt/backup-mybackupserver-key luksOpen /dev/disk/by-uuid/10a126a2-c991-49fc-89bf-8d621a73dd36 mybackupserver-backup || unwind
    trace fsck -a /dev/mapper/mybackupserver-backup || unwind
    trace mount -v /dev/mapper/mybackupserver-backup /mnt/mybackupserver-backup || unwind
    trace rsync $RSYNC_OPTS --exclude=/usr/portage/distfiles --exclude=/home/zx2c4/.cache --exclude=/var/tmp / /mnt/mybackupserver-backup/root || unwind
    trace rsync $RSYNC_OPTS /mnt/storage/Archives/ /mnt/mybackupserver-backup/archives || unwind
    trace sync
    trace umount /mnt/mybackupserver-backup
    trace cryptsetup luksClose mybackupserver-backup
    trace iscsiadm -m node -U all
    trace kill %1

pwenzel suggests

  rm -rf backup.3
  mv backup.2 backup.3
  mv backup.1 backup.2
  cp -al backup.0 backup.1
  rsync -a --delete source_directory/  backup.0/

and https://gist.github.com/ei-grad/7610406

Meta-backup solutions (which use several backup solutions)

Backupninja

Deltaic


mrcrilly Mar 16, 2015

Here is a Markdown version for readability: https://gist.github.com/mrcrilly/1fe0c721964e0b0b5884

It would be swell if you could merge this into your Gist for an easier reading experience.

janvlug Mar 16, 2015

jacek-berlin Mar 16, 2015

There is also BackupPC, with a WebUI frontend and an rsync/ssh-based backend.
Most important features:

  • parallel backups
  • hardlinking to save space
  • retention/backup rotation
  • restore to any host
  • incremental backups
  • clientless architecture (only ssh key required)

http://backuppc.sourceforge.net/

sammcj Mar 16, 2015

Backup Ninja - https://labs.riseup.net/code/projects/backupninja

Essentially a 'meta-backup' program that makes managing many different backend engines easy and repeatable.

  • Databases: .mysql, .pgsql, .ldap
  • Source Control: .svn, .trac
  • Remote backup: .rsync, .rdiff, .dup, .wget
  • Other: .sys, .makecd, .sh, .maildir, .tar

It also has a nice(ish) curses based GUI called NinjaHelper

Oh and Amanda backup isn't in your list but to be honest - I wouldn't bother with it - unless you have the 'amanda enterprise' web frontend it's a right PITA and it's very old fashioned.

Has anyone tried BareOS (an OSS fork of Bacula)? - http://www.bareos.org/en/home.html / http://www.bareos.org/en/bareos-webui.html

mikerev Mar 16, 2015

@sammcj Why would you dismiss Amanda? It's actually pretty awesome and well seasoned, the enterprise version only gets you appliances for noobs who can't operate outside of a point and click paradigm and crap themselves when they see a terminal. Config management + Amanda (especially ZRM for LVM snapshot based backups of MySQL clusters) == profit.

sammcj Mar 16, 2015

@mikerev - I don't think that having a UI of sorts for a backup system makes it 'for noobs' - backups can quickly drain time from your team if they're not quick and easy to manage.

Sometimes a UI/GUI/WebUI is the most effective tool for a job (but usually not) - I think backup is an area that benefits from a good UI (especially if its backend is also easily configurable using automation tools or standard structures such as YAML).

I've been using Amanda for the last three years with approximately 350 servers, and I really haven't been impressed.

  • It has a lot of management overhead and that's a problem if you don't have time for a full time backup administrator.
  • It mainly consists of using tar for backups, which is pretty inflexible by modern standards.
  • The enterprise web interface is OK but it's had so many bugs it's not funny.
  • Backups are very slow.
  • Restores are slow and painful to manage.
  • I haven't found it to be great when trying to integrate with puppet / automation frameworks.

flyfloh Mar 16, 2015

Personally, I use rsnapshot: http://www.rsnapshot.org/
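For anyone trying rsnapshot, a minimal rsnapshot.conf sketch (the paths and retention counts are made up; rsnapshot requires literal tabs between fields, which is why printf is used here rather than a plain heredoc):

```shell
# Minimal rsnapshot.conf sketch. Fields MUST be separated by real tab
# characters, so each line is emitted with printf '\t' rather than spaces.
printf 'snapshot_root\t/backup/snapshots/\n' >  rsnapshot.conf
printf 'retain\tdaily\t7\n'                  >> rsnapshot.conf
printf 'retain\tweekly\t4\n'                 >> rsnapshot.conf
printf 'backup\t/etc/\tlocalhost/\n'         >> rsnapshot.conf
```

Running `rsnapshot daily` from cron then keeps daily.0 through daily.6 as hard-linked snapshots under the snapshot root.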

derekp7 Mar 16, 2015

For Snebu, would it be sufficient to use something like a LUKS encrypted disk volume as the backup storage device? I'd really rather leave encryption to the experts, instead of rolling my own (even if I use a well tested library). Also proper encryption (with unique salt for each file) would break deduplication.

Of course this isn't ideal for remote backups, but for that I think the best plan would be to back up to a local device (encrypted at the FS layer), then once I get the replication code in place, an encrypting replication module can be used to send offsite (Amazon, rsync.net, google cloud, etc) or to tape.

Edit: For encryption to a remote device -- run Snebu locally, and set it up to have its vault on a network file system that does local encryption. You'd want to keep the sqlite catalog DB local for performance reasons, then write out a compressed copy of that to your backend storage when the backup is finished.
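The LUKS-volume approach described above can be sketched with stock cryptsetup commands. This is only a sketch, not a tested recipe: the device name, mapping name, and mount point are placeholders, and it needs root and a dedicated device.

```shell
# One-time setup: format the backup device as LUKS (DESTROYS its contents),
# then create a filesystem inside the mapping.
cryptsetup luksFormat /dev/sdX
cryptsetup luksOpen /dev/sdX backupvault
mkfs.ext4 /dev/mapper/backupvault
cryptsetup luksClose backupvault

# Each backup run: open, mount, back up, unmount, close.
cryptsetup luksOpen /dev/sdX backupvault
mount /dev/mapper/backupvault /mnt/backup
# ... point snebu (or rsync, etc.) at /mnt/backup here ...
umount /mnt/backup
cryptsetup luksClose backupvault
```

Since encryption happens below the filesystem, deduplication in the backup tool is unaffected, which is exactly the trade-off derekp7 describes.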

gaurish Mar 16, 2015

sknebel Mar 16, 2015

HN discussion: https://news.ycombinator.com/item?id=9210505 (for people stumbling over the gist later/rediscovering it)

Firefishy Mar 16, 2015

Also: https://github.com/zbackup/zbackup (Dedup, optional encryption. Active Development)

feld Mar 16, 2015

Seconding rsnapshot -- https://github.com/rsnapshot/rsnapshot

And for putting a copy of that in the cloud I use tarsnap.
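That rsnapshot-plus-tarsnap combination might look like this (the archive name and local path are made up; -c, -f, -x and --list-archives are tarsnap's standard create/name/extract/list options):

```shell
# Push the local rsnapshot tree into a dated tarsnap archive.
tarsnap -c -f "snapshots-$(date +%Y-%m-%d)" /backup/snapshots

# Later: list archives, and extract one to restore.
tarsnap --list-archives
tarsnap -x -f snapshots-2016-01-05
```

Tarsnap deduplicates between archives, so daily uploads of a mostly-unchanged snapshot tree only transfer the new blocks.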

biohazd Mar 16, 2015

Personally, I use rsnapshot: http://www.rsnapshot.org/

mikhailian Mar 16, 2015

I recently attended a presentation of restic by its author. It is amazingly fast and has deduplication and encryption built in.

However, I prefer this little script above all else. It stores up to 9 versions, but you can push it to store 99 with a bit of tweaking ;-)

FROM=/etc
TO=/var/backups
LINKTO=--link-dest=$TO/`/usr/bin/basename $FROM`.1
OPTS="-a --delete --delete-excluded"
NUMBER_OF_BACKUPS=8

find $TO -maxdepth 1 -type d -name "`basename $FROM`.[0-9]"| sort -rn| while read dir
do
        this=`expr match "$dir" '.*\([0-9]\)'`; 
        let next=($this+1)%$NUMBER_OF_BACKUPS;
        basedirname=${dir%.[0-9]}
        if [ $next -eq 0 ] ; then
                 rm -rf $dir
        else
                 mv $dir $basedirname.$next
        fi
done
rsync $OPTS $LINKTO $FROM/ $TO/`/usr/bin/basename $FROM.0`

toddsiegel Mar 16, 2015

From https://github.com/restic/restic: "WARNING: At the moment, consider restic as alpha quality software, it is not yet finished. Do not use it for real data!"

drkarl Mar 16, 2015

@mrcrilly I added markup formatting, and also updated with some new content.

gyoza Mar 16, 2015

Surprised this isn't on the list..

http://dar.linux.free.fr/

Pretty good: it does baselines, diffs, and incrementals. Pretty decent software, about 10 years old.

antitux Mar 16, 2015

Are LVM snapshots possible?

shulegaa Mar 16, 2015

As of early 2015, Mondo Rescue v3.2.x seems to have managed to cope with the uncontrolled morass of 'systemd' dependencies (and systemd's inscrutable binary config files and so on).  I haven't tried it (yet).  Before systemd, Mondo Rescue was a remarkably powerful and easy-to-use (full-system-image) backup (and disaster recovery from bootable device/CD/DVD) tool.  It should be worth a try ;-)

http://www.mondorescue.org/

benjamir Mar 16, 2015

  • Bacula has file based deduplication
  • BackupPC up to v3 doesn't encrypt the pool of deduplicated files (FDE can save you from off-line access anyway), but you can easily configure an archive host where you put encrypted tarballs (you can hook in with scripts at any[?] stage of the backup), e.g. use an on-site BackupPC setup which puts the pool on an encrypted partition/container/folder/etc. and use off-site storage as archive hosts.

Are you aware that your definition of your use case "the best Linux backup" opens the flood gates for bike shedding comments?

My advice: start with a (superficially) easy solution and try it out; read or at least skim and tag the mailing list of that software often.

seidler2547 Mar 16, 2015

I use duplicity for backing up my server. Restoring has been unproblematic for me.

I like duplicity because

  • it has asymmetric encryption, meaning that I don't need to leave the decryption keys on the server; it just encrypts using the public key
  • it has a good number of backends, including Google Drive (one of the cheapest storage options < 200GB) and Amazon S3
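A sketch of that asymmetric setup (the GnuPG key ID, host, and paths are placeholders; --encrypt-key is duplicity's option for encrypting to a public key, so the private key never has to be on the server):

```shell
# Back up /etc, encrypted to a public key only; the private key stays off the server.
duplicity --encrypt-key ABCD1234 /etc sftp://user@backuphost//srv/backup/etc

# Restore elsewhere, on a machine that holds the private key.
duplicity restore sftp://user@backuphost//srv/backup/etc /tmp/etc-restored
```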

eAndrius Mar 17, 2015

Had the same need; ended up adapting encrb for my personal use case: https://github.com/eAndrius/encrb

derekp7 Mar 18, 2015

@drkarl, is lack of encryption the only deal breaker for Snebu? Or is it the primary deal breaker? (You may want to add "file level, not block level deduplication", as this impacts backing up VM images, although an add-on that specifically addresses VMs is in the works.) If encryption is the main issue, I've put together a plan to address this without compromising some of the other features (such as minimal client-side requirements) -- I should be able to code it up this weekend.

gam-phon Mar 18, 2015

rsnapshot is in our production servers.

drkarl Mar 22, 2015

Vincent14 Mar 23, 2015

I use BackInTime (in the Debian repo): it's graphical software that uses hard links for incremental backups.

amarendra Jul 18, 2015

Backupninja didn't support Attic the last time I checked. Also, I'm not sure whether Attic does block-based de-duplication or just file-level; the former would make it a killer - that's one good strong point Tarsnap has that is missing in many backup clients.

redacom Sep 23, 2015

I think we will never find the "best" backup tool, as everyone has different needs. For example, I do not want to compress backups because they are very big and compression takes a lot of time and resources, while for others compression is a must.

My contributions are Fwbackups, an easy tool to back up both locally and remotely (compressing if needed), and

Bera Backup, another open source tool to back up files/folders but also configurations (crontabs, users, system config...) to easily replicate a system.

Amanda (covered above) is probably the best tool: very powerful, but also more complex than the others...

sammcj Jan 5, 2016

Has anyone found anything else recently to add to the list?

We're still pretty keen to replace Amanda with something else; we've had so many problems with it again recently. It'd be great if there was a simple web interface that provided functionality similar to ninjahelper's, but also gave a breakdown of backup timelines and offered restore options.

sadid Jan 10, 2016

I have some experience with backup tools and finally came down to these:
Zpaq, Attic, Dar and Duplicity (Deja-Dup). (Obnam, ZBackup and bup were worth considering, but I dismissed them after a while.) The way bup handles deduplication is very interesting, but I didn't test it since it had no encryption at the time.

As far as I remember:
Best for simplicity and ease of use: Deja-Dup
Most feature-rich: Attic
Fast and compression/deduplication efficient: Zpaq

Currently I'm just using Zpaq, Deja-Dup (Duplicity) and one archive with Attic. Attic is slow in comparison to Dar and Zpaq; here are my benchmarks (I finally found them):
attic on HugeRepo with 120+GB takes ~5.5Hr and 100GB final deduplicated backup
attic on MediumRepo with near 26GB takes ~1Hr and 13GB final deduplicated backup
attic on TinyRepo with near 4GB takes 19min and 1.94GB final deduplicated backup
zpaq on MediumRepo with 26GB takes 1300Sec and 12.2GB
zpaq on TinyRepo with near 4GB takes 170Sec and 1.7GB
(I'm not sure about the parameters of each command)

romiras Jul 2, 2016

I use ZBackup regularly. It's my favorite backup tool ever since I found it.

A typical scenario is one of:

    zip -r0 - some_dir | zbackup backup /path/to/zbackup-repo/backups/filename.zip
    tar -c some_dir | zbackup backup /path/to/zbackup-repo/backups/filename.tar
    zcat somefile.tar.gz | zbackup backup /path/to/zbackup-repo/backups/filename.tar
    cat some_raw_file | zbackup backup /path/to/zbackup-repo/backups/filename.tar
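Restores reverse the pipe: zbackup restore writes the original stream to stdout, so (with the same placeholder paths as above):

```shell
# Reassemble the deduplicated stream and unpack it.
zbackup restore /path/to/zbackup-repo/backups/filename.tar | tar -x
```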

lestercheung Nov 10, 2016

My plan that works for small to mid size groups:

  • Use ZFS with stock Ubuntu (preferred) or BTRFS for filesystem snapshots.
  • Automate filesystem snapshot creation and removal (zfsnap)
  • burp for off-machine backup!
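The snapshot part of that plan, sketched with plain zfs commands (the pool/dataset name and naming scheme are placeholders; zfsnap automates the same create-and-expire cycle by encoding a TTL in the snapshot name):

```shell
# Create a dated snapshot of the dataset.
zfs snapshot tank/home@daily-$(date +%Y-%m-%d)

# List its snapshots, and destroy one that has expired.
zfs list -t snapshot -r tank/home
zfs destroy tank/home@daily-2016-10-01
```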

ThomasWaldmann Dec 24, 2016

Replace attic (development stopped 1.5y ago) with borgbackup (== fork of attic + lots of fixes and enhancements).

xenithorb Mar 2, 2017

Thanks @ThomasWaldmann, I made it to the end of the comments just to look for updates. As it's now 2017, a lot of the solutions mentioned here are very out of date and lack of development is widespread (almost half are no longer being maintained, it seems). Some, like Attic, haven't had any new commits in years, unfortunately.

Any more updates are appreciated. (Comments about stable software not needing commits and such notwithstanding, the focus is to trust in something that will continue to work.)

DJsupermix Apr 5, 2017

Well, it's all about the security issues we're facing in the modern world. We tested Bacula's solutions, like those at https://www.baculasystems.com/enterprise-backup-solution-with-bacula-systems/easy-and-scalable-windows-backup though we were neither satisfied nor dissatisfied, as luckily there have been no failures so far. I would be interested in feedback if someone has used their products; I want to be sure that everything would be OK on day X.

markfox1 Jul 14, 2017

Thanks for the census @drkarl. For our Windows workstations, we settled on Duplicati, which uses a block-based deduplication algorithm to allow incremental backups to local, remote, or cloud object storage indefinitely. So the first backup is the big one, but all backups are incremental from there. It is open source and runs on the major unices. We are experimenting with it under Linux, and it does feel a bit weird running a C# program under Linux, but until someone writes an open source program with similar abilities to hashbackup, it seems to be the only game in town.
