I’m working on a project to back up my family photos from TrueNAS to Blu-ray discs. I have other, more traditional backups based on restic and zfs send/receive, but I don’t like the fact that I could delete every copy using only the mouse and keyboard from my main PC. I want something that can’t be ransomwared and that I can’t screw up once created.

The dataset is currently about 2TB, and we’re adding about 200GB per year. It’s a lot of discs, but manageably so. I’ve purchased good-quality 50GB blank discs and a burner, as well as a nice box and some silica gel packs to keep them cool, dark, dry, and generally protected. I’ll be making one big initial backup, and then I’ll run incremental backups ~monthly to capture new photos and edits to existing ones, at which time I’ll also spot-check a disc or two for read errors using DVDisaster. I’m hoping to get 10 years out of this arrangement, though longer is of course better.

I’ve got most of the pieces worked out, but the last big question I need to answer is which software I will actually use to create the archive files. I’ve narrowed it down to two options: dar and bog-standard GNU tar. Both can create multipart, incremental backups, which is the core capability I need.

Dar Advantages (that I care about):

  • This is exactly what it’s designed to do.
  • It can detect and tolerate data corruption. (I’ll be adding ECC data to the disks using DVDisaster, but defense in depth is nice.)
  • More robust file change detection; it appears to be hash-based?
  • It allows me to create a database I can use to locate and restore individual files without searching through many disks.
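
For what it’s worth, my current reading of that workflow (flags taken from the dar docs, so treat the details as unverified; paths are made up):

    # create a sliced archive; ~45GB slices leave headroom for ECC data on a 50GB disc
    dar -c /mnt/staging/photos-full -R /mnt/photos -s 45G

    # register the archive in a dar_manager database for later lookups
    dar_manager -C photos.dmd
    dar_manager -B photos.dmd -A /mnt/staging/photos-full

    # later: which archive (and therefore which disc) holds a given file?
    dar_manager -B photos.dmd -f 2019/vacation/IMG_1234.jpg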

Dar disadvantages:

  • It appears to be a pretty obscure, generally inactive project. The documentation looks straight out of the early 2000s and it doesn’t have https. I worry it will go offline, or I’ll run into some weird bug that ruins the show.
  • Doesn’t detect renames. Will back up a whole new copy. (Problematic if I get to reorganizing)
  • I can’t find a maintained GUI project for it, and my wife ain’t about to learn a CLI. Would be nice if I’m not the only person in the world who could get photos off of these disks.

Tar Advantages (that I care about):

  • Battle-tested, reliable, and not going anywhere.
  • It’s already installed on every single Linux and Mac machine, and it’s trivial to put on a Windows PC.
  • Correctly detects renames, does not create new copies.
  • There are maintained GUIs available; non-nerds may be able to access the archives.

Tar disadvantages:

  • I don’t see an easy way to locate individual files, beyond grepping through snar metadata files (that aren’t really meant for that).
  • The file change detection logic makes me nervous: it appears to be based on modification time and inode numbers. The photos are in a ZFS dataset on TrueNAS, mounted on my local machine via SMB. I don’t even know what an inode number is; how can I be sure they won’t change somehow? Am I stuck with this exact NAS setup until I’m ready to make a whole new base backup? This many Blu-rays aren’t cheap and burning them will take a while, and I don’t want to do it unnecessarily.
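
For reference, the tar workflow I’m weighing is the standard --listed-incremental one, as far as I can tell (paths are made up):

    # level-0 (full) backup; tar records directory device/inode numbers and timestamps in the .snar file
    tar --create --file=photos-full.tar --listed-incremental=photos.snar /mnt/photos

    # monthly incremental: work on a copy of the .snar so the level-0 snapshot stays intact
    cp photos.snar photos-2025-06.snar
    tar --create --file=photos-2025-06.tar --listed-incremental=photos-2025-06.snar /mnt/photos

It’s exactly that device/inode bookkeeping in the .snar file that I’m not sure survives an SMB mount.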

I’m genuinely conflicted, but I’m leaning towards dar. Does anyone else have any experience with this sort of thing? Is there another option I’m missing? Any input is greatly appreciated!

  • suicidaleggroll@lemm.ee · 2 days ago

    I don’t like the fact that I could delete every copy using only the mouse and keyboard from my main PC. I want something that can’t be ransomwared and that I can’t screw up once created.

    Lots of ways to get around that without having to go the route of burning a hundred Blu-rays with complicated (and risky) archive splitting and merging. Just a handful of external HDDs that you “zfs send” to and cycle on some regular schedule would handle that. So buy 3 drives, back up your data to all 3 of them, then unplug 2 and put them somewhere safe (desk at work, friend or family member’s house, etc.). Continue backing up to the one you keep local for the next ~month and then rotate the drives. So at any given time you have an on-site copy that’s up-to-date, and two off-site copies that are no more than 1 and 2 months old respectively. Immune to ransomware, accidental deletion, fire, flood, etc. and super easy to maintain and restore from.
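
    The mechanics of the rotation are just ordinary snapshot send/receive, something like this (pool and snapshot names are placeholders):

        # one-time seed of each external drive
        zfs snapshot -r tank/photos@2025-06
        zfs send -R tank/photos@2025-06 | zfs receive -F extdrive1/photos

        # monthly top-up to whichever drive is plugged in at the moment
        zfs snapshot -r tank/photos@2025-07
        zfs send -R -i tank/photos@2025-06 tank/photos@2025-07 | zfs receive -F extdrive1/photos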

    • czardestructo@lemmy.world · 2 days ago

      I do this, except the offline copies are Raspberry Pis: they grab an update, then turn their network card off and go black for about a month. Randomly they turn the network card back on, pull a fresh copy, and go black again. Safe from ransomware, and automatic.
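
      Nothing fancy behind it, just a cron job shaped roughly like this (interface name and paths are whatever your setup uses; rsync shown only as an example):

          # bring the network up, pull the latest copy from the NAS, go dark again
          ip link set eth0 up
          rsync -a nas:/mnt/tank/photos/ /srv/photos/
          ip link set eth0 down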

      • Vorpal@programming.dev · 1 day ago

        Unless they are in different cities they wouldn’t be safe from a fire, lightning strike, earthquake/flood/tsunami/typhoon/hurricane/etc. (remove whichever ones are not relevant to where you live).

    • Decipher0771@lemmy.ca · 2 days ago

      To add to this… I’ve added a layer of protection against accidental deletion and dumb fingering by making each year of my photo archive a separate ZFS dataset. Then each year I set the finished dataset to read-only and create a new one.

      Manual, but effective enough. I also have automatic snapshots against dumb fingering, but this helps against ones I don’t notice before the snapshots expire.
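
      In ZFS terms it’s nothing more than this (dataset names are just examples):

          zfs create tank/photos/2025            # new dataset for the current year
          zfs set readonly=on tank/photos/2024   # freeze last year’s once it’s complete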

    • traches@sh.itjust.works (OP) · 2 days ago

      Yeah, you’re probably right. I already bought all the stuff, though. This project is halfway vibes-based; something about spinning rust just feels fragile, you know?

      I’m definitely moving away from the complex archive split & merge solution. fpart can make lists of files that add up to a given size, and fd can find files modified since a given date. A little bit of plumbing and I’ve got incremental backups that show up as plain files & folders on a disc.
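
      Roughly the plumbing I have in mind (flags from memory, so double-check the man pages before trusting this):

          # everything modified since the last burn date
          fd --type f --changed-after 2025-05-01 . /mnt/photos > changed.txt

          # split that list into chunks that fit a 50GB disc, with headroom left for ECC data
          fpart -i changed.txt -s 45000000000 -o chunk
          # -> chunk.* file lists, each adding up to at most ~45GB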

  • atzanteol@sh.itjust.works · 3 days ago

    This is the sort of thing bacula was made for - physical backups spread out over multiple removable media (tapes mostly, but it can work with optical drives).

    https://www.bacula.org/free-tape-backup-software/

    It tracks where it puts your files, so it does have its own db that also needs backing up. But if you want to restore without needing to search manually through dozens of disks, this is what you need.

    • traches@sh.itjust.works (OP) · 2 days ago

      Hey cool, I hadn’t heard of bacula! Looks like a really robust project. I did look into tape storage, but I can’t find a tape drive for a reasonable price that doesn’t have a high jank factor (internal, 5.25" drives with weird enterprise connectors and such).

      I’m digging through their docs and I can’t find anything about optical media, except for a page in the manual for an old version saying not to use it. Am I missing something? It seems heavily geared towards tapes.

  • Decipher0771@lemmy.ca · 3 days ago

    I did (am doing) something very similar. I definitely have issues with my indexing, but I’m just ordering it manually by year/date for now.

    I’m doing a little extra for parity though. I’m using 50-100GB discs for the data, and a 25GB disc as a full parity disc (via dvdisaster) for each data disc I burn. Hopefully that reduces the risk of the parity data also being unreadable, and gives MORE parity data without eating into my actual data discs. It’s hard enough to break up the archives into 100GB chunks as is.
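
    For anyone curious, the separate-parity-file route looks roughly like this in dvdisaster terms (RS01 writes a standalone .ecc file; flags from memory, so verify against the manual):

        # build an error-correction file from a finished data disc image
        dvdisaster -i photos-disc1.iso -e photos-disc1.ecc -mRS01 -c

    The resulting .ecc files are what go onto the separate 25GB parity disc.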

    Need to look into bacula as suggested by another poster.

    • traches@sh.itjust.works (OP) · 3 days ago

      Can borg back up to write-once optical media spread over multiple disks? I’m looking through their docs and I can’t find anything like that. I see an append-only mode but that seems more focused on preventing hacked clients from corrupting data on a server.

      • surph_ninja@lemmy.world · 3 days ago

        I’m not sure it would intelligently handle that on its own. There’d need to be some manual work on your end.