I just got my home server up and running and was wondering what you guys recommend for backups. I figure it will probably be worth having backups on cloud servers that are external. Are there any good services y'all use for that?

  • @GustavoM@lemmy.world · 5 points · 2 years ago

    As dumb/simple/boring as this may be…? An external hard drive.

    …what? It doesn’t require you to be online 24/7, works at any™ PC, and the speed is really great – even on a potato.

    Unless you work at NASA or IBM or similar – then feel free to call me dum.
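
    For illustration, the whole approach can be a single scheduled command. A minimal sketch, assuming the drive is mounted at /mnt/backup (paths hypothetical):

    #!/usr/bin/env bash
    # Mirror the home directory to an external drive (paths hypothetical).
    set -euo pipefail

    SRC="$HOME/"
    DST="/mnt/backup/home-mirror/"

    # --archive keeps permissions and timestamps; --delete mirrors deletions,
    # so keep snapshots elsewhere if you also need protection from accidental rm.
    rsync --archive --delete --human-readable --info=progress2 "$SRC" "$DST"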

    • @raiun@lemmy.world · 2 points · 2 years ago

      While I agree with you, hard drives do have a shelf life. How many years is up for debate, but it does exist. If you don't have multiple drives of different ages, you may be in a world of hurt one day.

      • @randombullet@lemmy.world · 1 point · 2 years ago

        I have a hot-storage NAS that backs up to a warm-storage NAS.

        I back up every week and scrub every month.

        I have 2 x ZFS RAIDZ1 pools that contain 3 x 20TB disks each.

        With ECC RAM, scrubbing, and independent pools, it'll take a house fire to kill my local storage.

        I also have a continuous backup to Backblaze and a yearly encrypted backup that I ship to a friend across the world.
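
        For reference, a rough sketch of that weekly snapshot-and-replicate plus monthly scrub routine (pool and host names hypothetical):

        #!/usr/bin/env bash
        # Weekly: snapshot the hot pool and replicate it to the warm NAS.
        # Pool and host names are hypothetical.
        set -euo pipefail

        SNAP="tank@weekly-$(date +%Y%m%d)"
        zfs snapshot -r "$SNAP"

        # Full send shown; steady state would use an incremental send (-i <prev>).
        zfs send -R "$SNAP" | ssh warm-nas zfs receive -F backup/tank

        # Monthly, from cron on each box:
        # 0 3 1 * * /sbin/zpool scrub tank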

      • @Chadus_Maximus@lemm.ee · 1 point · edited · 2 years ago

        Why? If you check the drive once a month, and it fails once per 10 years on average, the average time until both the backup drive and the main drive fail simultaneously is about 2340 years. Of course they are much more likely to fail when they're old, but the odds are still very small.
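
        As a back-of-envelope check of the order of magnitude (a simple independence model, not necessarily the calculation above):

        % Each drive fails at rate \lambda = 1/10 per year; checks happen monthly.
        % Chance the backup is already dead when the main drive dies:
        p \approx \lambda \cdot \tfrac{1}{12}\,\mathrm{yr} = \tfrac{1}{120}
        % Expected time until both die inside the same one-month window:
        T \approx \frac{1}{\lambda\,p} = 10 \times 120 = 1200\ \mathrm{yr}

        Either way the answer lands in the centuries-to-millennia range, which is the point.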

    • @Arrayrepairman@lemmy.world · 3 points · 2 years ago

      That is great for hardware failures, but what about disasters? I would hate to lose my house to a fire and, at the same time, all the data on my server (including irreplaceable things like family photos) because my primary and backup were both destroyed.

      • @GustavoM@lemmy.world · 1 point · 2 years ago

        Eh…you've got a point there. Then again, there are always pendrives and other extremely small devices you can copy your most important/crucial files onto and carry along with your house/car keys or something like that.

  • @spez_@lemmy.world · 6 points · edited · 2 years ago

    I use Restic + Resticprofile to back up everything and store it on my local HDD.

    Then, I use Rclone to sync the local repository to Backblaze B2.

    Here’s my general setup:

    /.config/restic/
    ├── logs
    │   ├── statuses
    │   │   ├── 20230202T020202-restic-status.json
    │   │   └── 20230101T010101-restic-status.json
    │   ├── 20230202T020202-restic-check.log
    │   └── 20230101T010101-restic-backup.log
    ├── config
    │   ├── profiles.yaml
    │   ├── excludes.txt
    │   ├── rclone.conf
    │   └── password.txt
    └── bin
        ├── restic_0.15.2_linux_arm64
        ├── rclone_1.63.1_linux_arm64
        └── resticprofile_0.22.0_linux_arm64

    And the profiles.yaml itself:

    version: "1"
    
    # Schedules (https://www.freedesktop.org/software/systemd/man/systemd.time.html#Calendar%20Events)
    {{ $SCHEDULE_RESTIC_BACKUP := "*-*-* 22:00:00" }}       # Daily at 10PM
    {{ $SCHEDULE_RESTIC_CHECK := "Sat *-*-* 04:00:00" }}    # Weekly at 4AM on Saturday
    {{ $SCHEDULE_SYNC_BACKUP := "Sun *-*-* 21:30:00" }}     # Weekly at 9:30PM on Sunday
    {{ $SCHEDULE_POSTGRES_BACKUP := "Fri *-*-* 20:00:00" }} # Weekly at 8PM on Friday
    
    # Directories
    {{ $LOCATION_RESTIC_BINARY := "/home/deck/Desktop/.config/restic/bin/restic_0.15.2_linux_arm64" }}
    {{ $LOCATION_RESTIC_REPO := "/home/deck/Desktop/restic-repo" }}
    {{ $LOCATION_RESTIC_LOG := "/home/deck/Desktop/.config/restic/logs" }}
    {{ $LOCATION_RESTIC_STATUS := "/home/deck/Desktop/.config/restic/logs/statuses" }}
    {{ $LOCATION_RESTIC_BLOCKED_FILE := "/home/deck/Desktop/.config/restic/BLOCKED" }}
    {{ $LOCATION_RCLONE_BINARY := "/home/deck/Desktop/.config/restic/bin/rclone_1.63.1_linux_arm64" }}
    {{ $LOCATION_RCLONE_REPO := "bucket:restic-backup-12345" }}
    {{ $LOCATION_RCLONE_CONFIG := "/home/deck/Desktop/.config/restic/config/rclone.conf" }}
    {{ $LOCATION_RESTICPROFILE_LOCK := "/tmp/resticprofile-default.lock" }}
    {{ $LOCATION_POSTGRES_DUMP := "/home/deck/Desktop/dumps" }}
    {{ $LOCATION_PRIMARY_BACKUP_SOURCE := "/home/deck/Desktop/" }}
    
    # Configs
    {{ $CONFIG_CURRENT_TIME := .Now.Format "20060102T150405" }}
    {{ $CONFIG_RESTIC_PASSWORD := "/home/deck/Desktop/.config/restic/config/password.txt" }}
    {{ $CONFIG_RESTIC_EXCLUDE := "/home/deck/Desktop/.config/restic/config/excludes.txt" }}
    
    global:
      default-command: snapshots                      # Run 'snapshots' when no command is specified
      initialize: false                               # Do not initialize a repository if none exists
      priority: low                                   # Use priority class on Windows and "nice" on Unixes
      min-memory: 100                                 # Minimum required RAM for Resticprofile to start
      restic-lock-retry-after: 5m                     # Retry acquiring the restic lock every 5 minutes
      restic-stale-lock-age: 10h                      # Unlock stale lock if age exceeds 10 hours
      restic-binary: '{{ $LOCATION_RESTIC_BINARY }}'  # Location of the Restic binary
    
    default:
      lock: '{{ $LOCATION_RESTICPROFILE_LOCK }}'      # Local lockfile to prevent concurrent profile runs
      force-inactive-lock: true                       # Detect and remove stale locks
      initialize: true                                # Initialize repository if it doesn't exist
      repository: '{{ $LOCATION_RESTIC_REPO }}'       # Path to Restic repository
      password-file: '{{ $CONFIG_RESTIC_PASSWORD }}'  # File containing repository password
      status-file: '{{ $LOCATION_RESTIC_STATUS }}/{{ $CONFIG_CURRENT_TIME }}-restic-status.json'  # Output status file
      compression: 'max'                              # Maximum compression level
      run-after-fail:                                 # Block syncing if there was a failure. TODO: Add an email
        - 'echo "The command ${PROFILE_COMMAND} has failed in ${PROFILE_NAME}. Please check the logs." > {{ $LOCATION_RESTIC_BLOCKED_FILE }}'
    
      backup:
        run-before:                                   # Bring down Docker before backup
          - 'systemctl stop docker.socket'
          - 'systemctl stop docker'
        run-finally:
          - 'grep --invert-match -E "^unchanged|\(0 B added, 0 B stored\)|\(0 B added\)" {{ tempFile "backup.log" }} > {{ $LOCATION_RESTIC_LOG }}/{{ $CONFIG_CURRENT_TIME }}-restic-backup.log'  # Copy the log file, stripping out unchanged files
          - 'systemctl start docker'                  # Bring Docker back online after backup
        one-file-system: false                        # Don't restrict the backup to a single filesystem
        no-error-on-warning: true                     # Don't consider warnings as backup failures
        source:                                       # Directories to back up
          - '{{ $LOCATION_PRIMARY_BACKUP_SOURCE }}'
        exclude-file: '{{ $CONFIG_RESTIC_EXCLUDE }}'  # File containing exclude patterns
        exclude-caches: true                          # Exclude cache files
        schedule: '{{ $SCHEDULE_RESTIC_BACKUP }}'     # Backup schedule
        schedule-permission: system                   # Schedule permission
        schedule-lock-wait: 10m                       # Wait time for the lock during schedule
        schedule-log: '{{ tempFile "backup.log" }}'   # Log file to /tmp. This contains all information, including unchanged files which we do not care about
        verbose: 2                                    # Log details about processed files
    
      check:
        schedule: '{{ $SCHEDULE_RESTIC_CHECK }}'      # Verification schedule
        schedule-permission: system                   # Schedule permission
        schedule-lock-wait: 10m                       # Wait time for the lock during schedule
        schedule-log: '{{ $LOCATION_RESTIC_LOG }}/{{ $CONFIG_CURRENT_TIME }}-restic-check.log'  # Log file
        read-data: true                               # Verify data during check
    
      prune:
        dry-run: true                                 # Only prune if safe to do so, change manually
        repack-uncompressed: true                     # Repack all uncompressed data
    
      forget:
        dry-run: true                                 # Only forget if safe to do so, change manually
    
      rewrite:
        dry-run: true                                 # Only rewrite if safe to do so, change manually
        forget: true                                  # Remove original snapshots after creating new ones
        exclude-file: '{{ $CONFIG_RESTIC_EXCLUDE }}'  # File containing exclude patterns
    
      mount:
        allow-other: true                             # Allow other users to access the mount point
    
      rebuild-index:
        read-all-packs: true                          # Read all pack files to generate new index from scratch
    
    # The following shell profiles are simply to run other shell scripts at a scheduled time
    # We do not actually run the primary Restic commands listed, as we exit the process early
    
    shell-postgres:                                   # Profile to run shell scripts only. We exit the current process before Restic can run.
      backup:
        schedule: '{{ $SCHEDULE_POSTGRES_BACKUP }}'   # Postgres backup schedule
        schedule-permission: system                   # Schedule permission
        schedule-lock-mode: ignore                    # Ignore locks, if any
        schedule-log: '{{ $LOCATION_RESTIC_LOG }}/{{ $CONFIG_CURRENT_TIME }}-postgres-backup.log'  # Log file
        dry-run: true                                 # Don't write data
        run-before:                                   # Dump postgres databases
          - 'chmod 777 /var/run/docker.sock'
          - 'docker exec -t immich-postgres pg_dumpall -c -U postgres | gzip > "{{ $LOCATION_POSTGRES_DUMP }}/immich-dump-{{ $CONFIG_CURRENT_TIME }}.sql.gz" && echo "Dumped Immich database: {{ $LOCATION_POSTGRES_DUMP }}/immich-dump-{{ $CONFIG_CURRENT_TIME }}.sql.gz"'
          - 'docker exec -t joplin-postgres pg_dumpall -c -U joplin | gzip > "{{ $LOCATION_POSTGRES_DUMP }}/joplin-dump-{{ $CONFIG_CURRENT_TIME }}.sql.gz" && echo "Dumped Joplin database: {{ $LOCATION_POSTGRES_DUMP }}/joplin-dump-{{ $CONFIG_CURRENT_TIME }}.sql.gz"'
          - 'kill $$'
    
    shell-sync:
      backup:
        schedule: '{{ $SCHEDULE_SYNC_BACKUP }}'       # Sync backup schedule
        schedule-permission: system                   # Schedule permission
        schedule-lock-mode: ignore                    # Ignore locks, if any
        schedule-log: '{{ $LOCATION_RESTIC_LOG }}/{{ $CONFIG_CURRENT_TIME }}-rsync-backup.log'  # Log file
        dry-run: true                                 # Don't write data
        run-before:                                   # Sync the Restic repo, after checking if the repository is in good health
          - 'if [ -f "{{ $LOCATION_RESTIC_BLOCKED_FILE }}" ]; then echo "There has been a problem with the Restic repository, please check the logs. If everything is okay, delete the BLOCKED file." && kill $$; fi'
          - '{{ $LOCATION_RCLONE_BINARY }} -v sync {{ $LOCATION_RESTIC_REPO }} {{ $LOCATION_RCLONE_REPO }} --config={{ $LOCATION_RCLONE_CONFIG }} --b2-hard-delete'
          - '{{ $LOCATION_RCLONE_BINARY }} cleanup {{ $LOCATION_RCLONE_REPO }} --config={{ $LOCATION_RCLONE_CONFIG }}'  # Clean up unfinished uploads on the remote
          - 'kill $$'
    

    Resticprofile doesn't let me run arbitrary shell commands on a schedule, and because I wanted everything in a single configuration, I created two extra profiles that call the backup command. The shell commands run before Restic does, and the last of them kills the process before Restic actually gets to run, which effectively does what I needed.

  • @kennyboy55@feddit.nl · 12 points · 2 years ago

    I have an Unraid server which hosts a Docker image of Duplicacy. The web interface is paid, though. It backs up to Backblaze B2; I have roughly 175GB backed up, for which I pay $0.87 a month.
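
    For anyone who wants to skip the paid web UI, the free Duplicacy CLI can do the same B2 backup. A minimal sketch (bucket, snapshot ID, and credential variable names are hypothetical; check the Duplicacy docs for the exact environment variables):

    # Run inside the directory to be backed up; all names are hypothetical.
    export DUPLICACY_B2_ID="b2-key-id"
    export DUPLICACY_B2_KEY="b2-application-key"

    duplicacy init -e my-docs b2://my-backup-bucket      # -e encrypts the storage
    duplicacy backup -stats                              # upload a snapshot
    duplicacy prune -keep 0:365 -keep 30:90 -keep 7:30   # thin out old snapshots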

    • @lal309@lemmy.world · 1 point · 2 years ago

      Do you have other clients backing up to your Unraid? I'm looking for a complete solution for backing up end-user workstations (Windows, Mac, and Linux) to my Unraid server, then backing up my Unraid server to something like Wasabi, Amazon, Backblaze, etc. Preferably a single solution.

      • @kennyboy55@feddit.nl · 2 points · 2 years ago

        Yes, I have another server automatically rsyncing important config files to an NFS share, and my PC has a Samba share that I manually back up files to.
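
        The scheduled part can be as small as one cron entry; a sketch, assuming the NFS share is mounted at /mnt/nfs-backup (paths hypothetical):

        # Nightly at 02:15: mirror the config directory to the NFS mount
        15 2 * * * rsync --archive --delete /etc/myapp/ /mnt/nfs-backup/myapp-config/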

    • @Rakn@discuss.tchncs.de · 1 point · 2 years ago

      Paid for the web interface as well. I really like that it's super simple and just does its job. That's the one I'd recommend, too.

    • TheHolm · 1 point · 2 years ago

      Their prices are ridiculous once you add the cost of outbound traffic.

  • @wibo@lemmy.world · 6 points · 2 years ago

    I use Restic to back up my Raspberry Pis to my Synology NAS, and back up my NAS to Backblaze.
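
    A minimal sketch of the first leg (Restic from a Pi to the NAS over SFTP; user, host, and repo path hypothetical):

    # One-time: create the repository on the NAS (prompts for a password)
    restic -r sftp:backup@synology-nas:/volume1/restic-repo init

    # Nightly: back up the Pi, then thin out old snapshots
    restic -r sftp:backup@synology-nas:/volume1/restic-repo backup /home/pi
    restic -r sftp:backup@synology-nas:/volume1/restic-repo forget \
        --keep-daily 7 --keep-weekly 4 --prune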

    • @Rakn@discuss.tchncs.de · 1 point · 2 years ago

      Somehow "took me a while to wrap my head around it" doesn't make me feel comfortable. Apart from git-annex themselves saying that they aren't a backup system, just a building block you could maybe build one from, a backup system should IMHO be dead simple and easy to understand.

        • @Rakn@discuss.tchncs.de · 0 points · 2 years ago

          Features that are important to me are things like an easy overview of all backup jobs (ideally via a web UI), snapshots going back daily for a week and monthly after that, backup to providers like Backblaze or AWS, and the ability to browse those backups and individual snapshots.

          I'd assume you could build all of this with git-annex in some way. But I really want something that works out of the box. E.g., install the backup software, point it at some things to back up and a B2 bucket, and go (roughly the sketch at the end of this comment).

          What I'm curious about is that the git-annex site explicitly says it isn't a backup system, yet you describe it as one.
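
          For what it's worth, that "point it at a B2 bucket and go" workflow is roughly what restic offers out of the box; a sketch (bucket and credentials hypothetical):

          # Credentials and names are hypothetical
          export B2_ACCOUNT_ID="key-id"
          export B2_ACCOUNT_KEY="application-key"
          export RESTIC_REPOSITORY="b2:my-bucket:restic"
          export RESTIC_PASSWORD="repo-password"

          restic init                 # once
          restic backup ~/documents   # from cron or a systemd timer
          restic snapshots            # browse what's there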

            • @Rakn@discuss.tchncs.de · 2 points · 2 years ago

              I don’t care about stuff working OOTB - half the fun is messing around with things IMO.

              I generally agree. Backups are just something I don't want to tinker with. It's important to me that they work OOTB, are easy to grasp, and give me a good overview.

              The web interface is important to me because it gives me that overview from any device I'm using, without needing to type anything into a terminal. OOTB matters because I want to be able to set all of this up again easily, even without access to my Ansible setup or previous configuration.

              To each their own. I'm not saying your way of doing this is wrong; it's just not for me. This is just my reasoning/preference. It's also the reason something like Borg wasn't my chosen solution, even though it's generally considered great.

  • Giddy · 1 point · 2 years ago

    I use a nightly Borg backup to a separate box, and that box uses rclone to back the Borg repo up offsite. Before running the Borg backup, I export all databases and Docker volumes so they get picked up.
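
    A condensed sketch of that nightly job (repo location, secrets handling, and the rclone remote name are hypothetical):

    #!/usr/bin/env bash
    # Nightly: dump app state, append to the Borg repo, push the repo offsite.
    set -euo pipefail
    export BORG_PASSPHRASE="$(cat /root/.borg-pass)"   # hypothetical secret file

    # 1. Export what Borg can't safely read live
    docker exec postgres pg_dumpall -U postgres > /backup/staging/pg.sql

    # 2. Append a dated archive to the repo on the separate box
    borg create --stats ssh://backupbox/srv/borg::'{hostname}-{now}' \
        /etc /home /backup/staging

    # 3. Mirror the whole repo offsite ("offsite" is a hypothetical rclone remote)
    ssh backupbox rclone sync /srv/borg offsite:borg-repo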

  • Morethanevil · 2 points · 2 years ago

    Once a day I rsync my data to another drive, so I can restore a file if I accidentally delete it. Important stuff additionally goes, encrypted via rclone, to a Hetzner storage box.
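
    The encrypted leg can be an rclone crypt remote layered over SFTP; a sketch (remote names and host are hypothetical, created beforehand with "rclone config"):

    # rclone.conf, created interactively via "rclone config" (values hypothetical):
    #   [storagebox]        type=sftp   host=uXXXXX.your-storagebox.de  user=uXXXXX
    #   [storagebox-crypt]  type=crypt  remote=storagebox:encrypted  password=...

    rsync --archive --delete /data/ /mnt/backupdrive/data/   # daily local copy
    rclone sync /data/important storagebox-crypt:important   # encrypted offsite copy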

  • @cctl01@feddit.nl · 3 points · 2 years ago

    Duplicati to Backblaze B2 for the important stuff. As far as the media library goes, no backup, just a local RAID setup…

  • shadowbert · 2 points · 2 years ago

    Duplicati, to a friend's home server in another town.

    • @GlitzyArmrest@lemmy.world · 10 points · 2 years ago

      I hate to ask the scary question, but have you actually tried restoring from your backups? I used Duplicati and discovered that none of my backups were usable; I ended up switching to Duplicacy.
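
      Whatever the tool, a periodic restore drill is cheap. A generic sketch with the Duplicacy CLI (snapshot ID, bucket, and revision number hypothetical; credentials assumed to be set as environment variables):

      # Restore a recent revision into a scratch directory and spot-check it
      mkdir -p /tmp/restore-test && cd /tmp/restore-test
      duplicacy init my-docs b2://my-backup-bucket   # attach to the existing storage
      duplicacy list                                 # find a revision number
      duplicacy restore -r 42                        # 42 is hypothetical
      diff -r /tmp/restore-test/docs ~/docs | head   # compare with the live data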

  • @Revan343@lemmy.ca · 5 points · 2 years ago

    rsync.net, and learn to use Borg; they're stupid cheap if you're technically proficient enough to handle the Borg setup yourself. They charge by the gigabyte, but it's 1.5¢/GB at the most expensive, and cheaper in bulk.
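
    Since rsync.net is plain SSH, the Borg side is the standard remote-repository incantation; a sketch (user and host are hypothetical placeholders for your account):

    # One-time: create an encrypted repo on the rsync.net account
    borg init --encryption=repokey ssh://user@usw-s001.rsync.net/./borg

    # Recurring: append an archive, then prune old ones
    borg create --stats --compression zstd \
        ssh://user@usw-s001.rsync.net/./borg::'{hostname}-{now}' /home /etc
    borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6 \
        ssh://user@usw-s001.rsync.net/./borg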

  • @beerclue@lemmy.world · 3 points · 2 years ago

    I used to have everything backed up to a 2TB USB drive. Which I accidentally dropped down the stairs. I lost thousands of family photos and documents. That changed my backup perspective.

    I now have a Synology NAS with 12TB in a RAID5 array (for a bit of disk redundancy). All my home devices, Proxmox servers, etc. back up here. The NAS also holds a few TB of media. Attached to it I have a USB hard drive (also 12TB), and the NAS gets fully backed up to the USB drive nightly.

    I also have a remote Raspberry Pi with a smaller USB drive (4TB) attached to it at my brother's house (in another country), where I back up most of the contents of my home NAS. I don't back up the media, just the important stuff. I might have to upgrade to a larger drive…

    • amigan · 15 points · 2 years ago

      I used to have everything backed up to a 2TB USB drive. Which I accidentally dropped down the stairs. I lost thousands of family photos and documents. That changed my backup perspective.

      If it’s the only copy, it’s not a backup. It’s the master.

  • MusketeerX · 1 point · 2 years ago

    I have been with IDrive since 2009. At the time, they were the only ones that allowed backups of network-attached storage on their cheaper personal plans. Everyone else saw that as an "enterprise" feature requiring a business plan, which was bullsh*t, because lots of home NAS devices were being sold.

    Anyway, I haven't done a recent comparison of services, but I remain happy with IDrive.

    These days I no longer back up via a computer with a mapped drive, but directly from my NAS, which runs the IDrive software.

    I had a catastrophic dual-drive failure a few years ago: one drive failed, and another failed during the RAID rebuild! I was able to restore about 1TB of data and didn't lose anything important.

    They also offer backup and restore by shipping a drive to you if you want to avoid the huge initial backup or a total restore, but I haven’t used that feature.

    They do also have a mobile app, but last time I tried it, it wasn’t great.

  • @jrest18n@lemm.ee · 4 points · 2 years ago

    Veeam Backup & Replication at home and at work. At home, one copy goes to a NAS and another currently goes to Backblaze B2.