I would like to make manual backups of an SD card as a disk image so that it can be easily recreated when needed. I’d also like to keep a few versions, so that if there is a problem I didn’t notice at the time, I can roll back.

How can I do this incrementally, or with de-duplication, so that I don’t have to keep full copies of the complete SD card? It’s very big but most of the content won’t be changing much.

It’s for MiyooCFW, which lives on a FAT32-formatted microSD card.

Thanks for your help! Also let me know if I’m going about the problem the wrong way.

  • Krafty Kactus@sopuli.xyz · 5 months ago

    If it’s a filesystem, it can be backed up using BorgBackup. There are a few different clients but I personally use Vorta on Linux.
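    For the CLI route, a minimal sketch (the repository path and mount point here are assumptions; adjust them for your setup):

    ```shell
    # One-time: create a deduplicating Borg repository (path is an example).
    borg init --encryption=repokey /backups/miyoo-repo

    # Each run: archive the mounted card's contents, named by date.
    # Assumes the card is mounted at /media/sdcard.
    borg create --stats /backups/miyoo-repo::sdcard-$(date +%F) /media/sdcard

    # Keep only the five most recent archives.
    borg prune --keep-last 5 /backups/miyoo-repo
    ```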

    • dan@upvote.au · 5 months ago

      +1 Borgbackup is great, and its deduplication works very well. Vorta works well, and there’s also GNOME Pika which has a very simple UI. For servers, I use Borgmatic.

    • RedSquadCampFollower@lemmy.world (OP) · 5 months ago

      @tkk13909@sopuli.xyz @dan@upvote.au

Borg backup has insane deduping. The first time I used it I thought it was broken because of how much smaller the backup was than the original. I used it with the Vorta GUI.

I am still not sure how to combine making a disk image with backing it up via Borg, though, either on the command line or through one of the GUIs.
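      One way to combine the two (a hedged sketch; the device name and repository path are placeholders): Borg splits large files into content-defined chunks, so successive images of a mostly-unchanged card deduplicate against earlier archives even though each .img is one huge file.

      ```shell
      # Image the whole card (check the device name with lsblk first!).
      sudo dd if=/dev/sdX of=sdcard.img bs=4M status=progress

      # The unchanged bulk of the image dedupes against previous archives.
      borg create --stats /backups/miyoo-repo::image-$(date +%F) sdcard.img
      ```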

  • Shdwdrgn@mander.xyz · 5 months ago

    I’m not sure about anything that does rolling backups of full disks, but I have used rdiff-backup for years for rolling backups of individual files. The backup format is similar to (and based on) rsync, so it’s fairly easy to script. For complete servers I just keep a copy of the install image on hand; in a catastrophic drive failure I can do a fresh installation to a new drive (creating the partitions, grub setup, etc.), then restore the latest backup. An alternative might be to use dd and create a full drive-image file to use as your starting point in a full recovery.
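    A minimal rdiff-backup sketch of that workflow (paths are examples):

    ```shell
    # Mirror the mounted card into a backup directory; rdiff-backup
    # stores reverse increments so older versions stay restorable.
    rdiff-backup /media/sdcard /backups/sdcard

    # Restore the state from three days ago into ./restored.
    rdiff-backup -r 3D /backups/sdcard restored/
    ```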

    One thing to keep in mind though is that the backups should NOT contain any system folders like dev or proc that get generated at boot. If possible, when making a starting image with dd, you want the drive to be separate and not part of the running OS, because some folders like dev and var contain a basic set of files needed for the boot process which may be different from the final versions you see after the OS is up and running. That’s why I find it easier to just plan around a clean install to new drives when needed.

    • RedSquadCampFollower@lemmy.world (OP) · 5 months ago

      Thanks, I will look at rdiff-backup. I am not sure if rsync is able to “see inside” *.img files to discern the individual files. If it can, that would be helpful, because I could just re-write the same image file over and over again and keep backups using rsync or any of the various rsync-derived tools.

      The filesystem will be cold at time of back up because I will need to shut it down, remove the card from the console and put it into my computer’s reader so no worries about that.
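      For what it’s worth, rsync’s delta algorithm works on raw file contents (rolling checksums over blocks), so it doesn’t need to understand the filesystem inside an .img file, and rdiff-backup inherits the same behavior. A sketch under those assumptions (paths and device name are placeholders):

      ```shell
      # Re-image the card to the same path each time.
      sudo dd if=/dev/sdX of=/backups/current/sdcard.img bs=4M

      # rdiff-backup stores only the blocks of the image that changed,
      # keeping earlier versions as reverse increments.
      rdiff-backup /backups/current /backups/history
      ```

      One caveat: plain rsync between two local paths defaults to --whole-file, so its delta transfer only kicks in over a network or with --no-whole-file.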

  • n0x0n@feddit.org · 5 months ago

    You’re looking for a block-level incremental backup solution. This can be achieved either with filesystem snapshots (ZFS, Btrfs) or with dedicated programs. I know rdiff-backup, restic and duplicity use block-level diffs; not sure about rsnapshot.
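    For example, restic (repository path is an example) chunks file contents, so repeated images of the same card mostly deduplicate:

    ```shell
    # One-time repository setup (prompts for a password).
    restic init --repo /backups/restic-repo

    # Back up the image; unchanged chunks are stored only once.
    restic -r /backups/restic-repo backup sdcard.img

    # List the stored versions.
    restic -r /backups/restic-repo snapshots
    ```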

  • ouch@lemmy.world · 5 months ago

    I recommend just backing up the files.

    But if you really have to back up the disk image: dd a copy of it, mount the copy as a loopback device, fill the loopback filesystem with zeros from /dev/zero, sync, delete the zero file(s), unmount, then cp --sparse=always and store the result.

    The reason for using the loopback image step is to prevent wearing out the SD card with writing the free space full of zeros every time you make a backup.

    There may be an existing tool for this, but I don’t remember it.
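    The steps above can be sketched as follows (the device name and mount point are placeholders; note the zero-fill happens inside the image file, never on the card itself):

    ```shell
    # 1. Read the card into an image file.
    sudo dd if=/dev/sdX of=sdcard.img bs=4M status=progress

    # 2. Mount the image via loopback and fill its free space with zeros.
    sudo mount -o loop sdcard.img /mnt/img
    sudo dd if=/dev/zero of=/mnt/img/zerofill bs=4M || true  # exits when full
    sync
    sudo rm /mnt/img/zerofill
    sudo umount /mnt/img

    # 3. Store a sparse copy: the zeroed free space takes no disk space.
    cp --sparse=always sdcard.img "sdcard-$(date +%F).img"
    ```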

  • NeoNachtwaechter@lemmy.world · 5 months ago

    I haven’t tried such a thing, but I remember ZFS has an option for block deduplication.

    So you would set up ZFS with block deduplication (and probably without compression; experiment with that), and then make your backup images with the dd tool and the correct block size.

    You would then always make full copies and keep them as normal files, but they only take up the disk space of the differences.
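    A sketch of that setup (pool and device names are placeholders; note that ZFS dedup is famously RAM-hungry, so check you have memory to spare):

    ```shell
    # Create a pool and a deduplicating, uncompressed dataset.
    sudo zpool create backuppool /dev/sdY
    sudo zfs create -o dedup=on -o compression=off backuppool/images

    # Match dd's block size to the dataset recordsize (128K by default)
    # so identical regions of the card line up and deduplicate.
    sudo dd if=/dev/sdX of=/backuppool/images/sdcard-$(date +%F).img bs=128k
    ```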

    “let me know if I am going about the problem in a wrong way.”

    I would not say “wrong way”. It’s fun to think about such things and try them out.

    On the other hand, I think a FAT32 volume can only be 32 GB (at least, that’s what Windows’ formatter allows). I would not mind having many of them lying around on my home NAS that has 12 TB on RAID :-)