I just started getting into self-hosting using Docker Compose, and I'm wondering about possible backup solutions. So far I only have to back up my Docker config, but I want to back up host files as well. What software and hardware are you using for backups?
For app data, Borg as the backup/restore software. Backup data is then stored on Hetzner as an offsite backup - super easy and cheap to set up. Also add healthchecks.io to get notified if a backup fails.
Edit: Back up docker-compose files and other scripts (without API keys!!!) with git to GitHub.
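A minimal sketch of that kind of nightly job, in case it helps (the storage box URL, the passphrase handling, and the healthchecks.io UUID are all placeholders):

```bash
#!/usr/bin/env bash
# Nightly Borg backup to a Hetzner storage box, with a healthchecks.io ping.
set -euo pipefail

export BORG_REPO=ssh://u123456@u123456.your-storagebox.de:23/./backups
export BORG_PASSPHRASE='use-a-proper-secret-store-instead'

borg create --stats --compression zstd \
  ::'{hostname}-{now:%Y-%m-%d}' /docker/apps

# Thin out old archives
borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6

# healthchecks.io alerts you if this ping stops arriving
curl -fsS -m 10 --retry 3 https://hc-ping.com/your-uuid-here
```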
I’ve been using Borg to back my stuff up. It gets backed up to rsync.net, which has good support for Borg:
https://www.rsync.net/products/borg.html
If you’re good enough at computers, you can even set up a special borg account with them that’s cheaper and has no tech support.
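For anyone curious, getting started looks roughly like this (USER and HOST are the placeholders from your signup email; rsync.net's docs suggest --remote-path=borg1 to get a current borg on their end):

```bash
# Create the repo once, then back up on a schedule
borg init --encryption=repokey --remote-path=borg1 \
  ssh://USER@HOST.rsync.net/./borg-repo

borg create --remote-path=borg1 --stats \
  ssh://USER@HOST.rsync.net/./borg-repo::'{hostname}-{now:%Y-%m-%d}' \
  /docker /home
```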
I’m in the same boat right now: Borg and BorgBase.
That looks cool, and they’ve got some other nifty looking things like https://www.pikapods.com/. Any idea how stable the company is? I partially like rsync.net because it’s pretty unlikely to just disappear someday.
Seconding this. On my unRAID host, I run a docker container called “Vorta” that uses Borg as its backend to back up to my Synology NAS over NFS. Then on my Syno, I run two backup jobs using Hyper Backup: one goes to my cousin’s NAS connected via a site-to-site OpenVPN tunnel on our edge devices (Ubiquiti UniFi Security Gateway Pro <-> UDM Pro), the other goes to Backblaze B2 Cloud Storage.
OP, let me know if you need any assistance setting something like this up. Gotta share the knowledge over here on Lemmy that we’re still used to searching evil Reddit for.
Love Borg and the associated docker containers and the like. Really is set and forget!
A lot of services have some built-in way to create backup files. I have cron jobs doing that daily, then uploading the results to cloud storage with rclone.
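For illustration, that pattern can be as simple as this (the service name and rclone remote are made up):

```bash
# /etc/cron.d/backups -- dump at 02:15, upload at 02:45
# (note: % must be escaped as \% inside crontab entries)
15 2 * * * root pg_dump -U app appdb | gzip > /var/backups/appdb-$(date +\%F).sql.gz
45 2 * * * root rclone copy /var/backups/ remote:server-backups/
```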
I host everything in Proxmox VMs, so I just take daily snapshots to my NAS.
I use rsync with an offsite backup.
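If you aren’t already, rsync’s --link-dest is worth a look: you get dated snapshots where unchanged files are hard links, so they cost almost no extra space. A sketch, with host and paths as examples:

```bash
#!/usr/bin/env bash
# Dated offsite snapshots via rsync hard links
today=$(date +%F)
rsync -az --delete \
  --link-dest=/backups/latest \
  /docker/apps/ backup@offsite.example.com:/backups/$today/
# Re-point "latest" at the snapshot we just made
ssh backup@offsite.example.com "ln -sfn /backups/$today /backups/latest"
```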
Backblaze B2. Any software that is S3-compatible can use B2 as the target, and it’s reasonably priced for the service. I back up all the PCs and services to a Synology NAS and then back that up to B2 (everything except my Plex media - that would be pricey, and it’s easy enough to re-rip from disc if needed).
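For example, rclone’s S3 backend can point straight at B2’s S3-compatible endpoint (the keys and endpoint region come from your B2 console; names here are placeholders):

```bash
rclone config create b2s3 s3 \
  provider=Other \
  access_key_id=YOUR_KEY_ID \
  secret_access_key=YOUR_APPLICATION_KEY \
  endpoint=s3.us-west-004.backblazeb2.com

rclone sync /volume1/backups b2s3:my-backup-bucket
```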
Veeam is pretty simple and powerful. The community version is free if you are only using it for a small environment (it licenses by CPU core count).
I haven’t used it with Docker, but it says Docker is supported.
I use Veeam to back up shares on my NAS to rotated external drives. I also back up a Linux server with it.
Duplicati. Works like a charm. Supports practically every backend (S3, Backblaze, OneDrive, Google, Storj, Sia, even Tahoe!)
I doubt you’re using NixOS, so this config might seem useless, but at its core it is a simple systemd timer, a service, and bash scripting.
To convert this to another OS, use cron to call the script at the time you want. Copy the part between script = '' and the closing '', then change out variables like the location of docker-compose, since it’s stored somewhere different on NixOS. (There’s a cron sketch after the config below.)
Let me explain the script. We start out by defining the backupDate variable; this will be the name of the zip file. As of now that variable would be 2023-07-12. We then go to each folder with a docker-compose.yml file and take it down. You could also replace down with stop if you don’t plan on updating each night like I do. Once everything is down, the script zips up /docker/apps, pulls fresh images, brings the stacks back up, and uploads the zip with rclone. I use rclone to connect to Dropbox, but rclone supports many providers, so check it out and see if it has the one you need. Lastly I use rclone to delete anything older than 7 days in the backup folder. If you end up going my route and get stuck, let me know and I can help out. Good luck.
```nix
systemd = {
  timers.docker-backup = {
    wantedBy = [ "timers.target" ];
    partOf = [ "docker-backup.service" ];
    timerConfig.OnCalendar = "*-*-* 3:30:00";
  };
  services.docker-backup = {
    serviceConfig.Type = "oneshot";
    serviceConfig.User = "root";
    script = ''
      backupDate=$(date +'%F')

      cd /docker/apps/rss
      ${pkgs.docker-compose}/bin/docker-compose down
      cd /docker/apps/paaster
      ${pkgs.docker-compose}/bin/docker-compose down
      cd /docker/no-backup-apps/nextcloud
      ${pkgs.docker-compose}/bin/docker-compose down
      cd /docker/apps/nginx-proxy-manager
      ${pkgs.docker-compose}/bin/docker-compose down

      cd /docker/backups/
      ${pkgs.zip}/bin/zip -r server-backup-$backupDate.zip /docker/apps

      cd /docker/apps/nginx-proxy-manager
      ${pkgs.docker-compose}/bin/docker-compose pull
      ${pkgs.docker-compose}/bin/docker-compose up -d
      cd /docker/apps/paaster
      ${pkgs.docker-compose}/bin/docker-compose pull
      ${pkgs.docker-compose}/bin/docker-compose up -d
      cd /docker/apps/rss
      ${pkgs.docker-compose}/bin/docker-compose pull
      ${pkgs.docker-compose}/bin/docker-compose up -d
      cd /docker/no-backup-apps/nextcloud
      ${pkgs.docker-compose}/bin/docker-compose pull
      ${pkgs.docker-compose}/bin/docker-compose up -d

      cd /docker/backups/
      ${pkgs.rclone}/bin/rclone copy server-backup-$backupDate.zip Dropbox:Server-Backup/
      rm server-backup-$backupDate.zip
      ${pkgs.rclone}/bin/rclone delete --min-age 7d Dropbox:Server-Backup/
    '';
  };
};
```
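For the non-NixOS version mentioned above, the script part comes out to roughly this (paths and the Dropbox remote name are examples, adjust to your layout):

```bash
#!/usr/bin/env bash
# Rough cron equivalent of the NixOS service above.
set -euo pipefail
backupDate=$(date +'%F')

# Take every stack down before zipping
for app in /docker/apps/*/; do
  (cd "$app" && docker compose down)
done

cd /docker/backups/
zip -r "server-backup-$backupDate.zip" /docker/apps

# Pull fresh images and bring everything back up
for app in /docker/apps/*/; do
  (cd "$app" && docker compose pull && docker compose up -d)
done

rclone copy "server-backup-$backupDate.zip" Dropbox:Server-Backup/
rm "server-backup-$backupDate.zip"
rclone delete --min-age 7d Dropbox:Server-Backup/
```

plus a crontab entry like `30 3 * * * /usr/local/bin/docker-backup.sh` to match the 3:30 timer.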
RAID1 + data duplication
Photos, videos, music, documents, etc… are available on multiple devices using Syncthing.
Never tried Syncthing. I will look into it.
RAID is not a backup. I’m not sure about Syncthing; does that count as a backup? Have you tried restoring from it?
Sounds like pedantry to me.
If a program screws up and crashes while writing data to your drive, it can take out more than just the data it was dealing with. RAID will simply write the damage to both your drives at the same time, making any data recovery impossible.
It’s not pedantry, it’s just that RAID and instant data duplication or synchronization aren’t meant to protect you from many of the situations in which you would need a backup. If a drive fails, you can restore the information from wherever you duplicated the data to. If, however, your data is corrupted somehow, the corruption is just duplicated over and you have no way to restore the data to a state before the corruption happened. If you accidentally delete files you didn’t want to delete, the deletion is replicated over and, again, no way to restore them. RAID wasn’t built to solve the problems a backup tries to solve.
Well I guess my personal definition of backup is wrong.
I run a second Unraid server with a couple of backup-related applications, as well as Duplicati. I have my main server network-mounted and run scheduled jobs that copy data from the main pool both to the backup pool and to Backblaze. It's nice having the on-site backup as well as the cloud-based one.
I occasionally burn to 100 GB Blu-rays as well for a physical backup.
So far I have had a good experience with Kopia. But it is definitely less battle-tested than the other alternatives, and I do not use it for anything too critical yet.
Local backup to my Synology NAS every night, which is then replicated to another NAS at my folks’ house through a secure VPN tunnel. Pretty simple and easy to deploy.
Sounds good. What do you use for replication?
Most likely Hyper Backup & Hyper Backup Vault, two applications built into Synology’s DSM software that run on their NAS devices.
Just plain old rsync. The NAS at the far end is an old QNAP I had lying around.
ZFS send to a pair of mirrored HDDs on the same machine every hour, and a daily restic backup to S3 storage. Every six months I test and verify the cloud backup.
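The moving parts, roughly, in case it helps someone (pool/dataset names, the bucket, and the snapshot labels are all placeholders):

```bash
# Hourly: snapshot and incrementally send to the local mirror pool
now=$(date +%F-%H)
zfs snapshot tank/data@$now
# @last-sent stands in for whatever snapshot was sent previously
zfs send -i tank/data@last-sent tank/data@$now | zfs recv -F backup/data

# Daily: restic to S3-compatible storage
export RESTIC_REPOSITORY=s3:s3.amazonaws.com/my-backup-bucket
export RESTIC_PASSWORD_FILE=/root/.restic-pass
restic backup /tank/data

# Every six months: actually read data back to verify it
restic check --read-data-subset=5%
```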
VM instances on Proxmox VE with native integration into Proxmox Backup Server (PBS). For non-VM hosts, the little PBS client agent.
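For reference, the host-side tool is proxmox-backup-client; a run looks something like this (user, realm, host, and datastore are placeholders):

```bash
# Back up the root filesystem as a pxar archive to a PBS datastore
proxmox-backup-client backup root.pxar:/ \
  --repository backup@pbs@pbs.example.com:datastore1
```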