Basically title. I’m in the process of setting up a proper backup for my configured containers on Unraid, and I’m wondering how often I should run my backup script. Right now, I have a cron job set to run on Monday and Friday nights; is this too frequent? What’s your schedule, and do you strictly back up your appdata (container configs), or is there other data you include in your backups?
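
For reference, the schedule in question is just a single crontab line along these lines (the script path and exact time here are placeholders, not my actual setup):

    # run the backup script at 23:00 every Monday and Friday (path and time are examples)
    0 23 * * 1,5 /boot/config/scripts/backup-appdata.sh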

  • Darkassassin07@lemmy.ca

    I run Borg nightly, backing up the majority of the data on my boot disk, including docker volumes and configs, plus a few extra folders.

    Each individual archive is around 550GB, but thanks to de-duplication and compression it’s only ~800MB of new data each day, and the backup takes around 3 minutes to complete.

    Borg’s de-duplication is honestly incredible. I keep 7 daily backups, 3 weekly, 11 monthly, then one for each year beyond that. The 21 historical backups I have right now would be 10.98TB of data raw. After de-duplication and compression they only take up 407.98GB on disk.

    With that kind of space savings, I see no reason not to keep such frequent backups. Hell, the whole archive takes up less space than one copy of the original data.
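
    For anyone curious, that retention scheme maps pretty much one-to-one onto Borg’s prune options. A stripped-down sketch of a nightly job like this (repo path and source folders are placeholders, not my real ones):

        #!/bin/sh
        # nightly Borg run: create a de-duplicated, compressed archive, then thin out old ones
        export BORG_REPO=/mnt/backup/borg   # repo location is a placeholder

        # archive named after the current date
        borg create --stats --compression zstd \
            ::nightly-$(date +%F) \
            /etc /home /var/lib/docker/volumes /opt/docker-configs

        # keep 7 daily, 3 weekly, 11 monthly, and (effectively) one per year beyond that
        borg prune --keep-daily 7 --keep-weekly 3 --keep-monthly 11 --keep-yearly 99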

    • FryAndBender@lemmy.world

      +1 for borg


                                  Original size      Compressed size    Deduplicated size
      This archive:                   602.47 GB            569.64 GB             15.68 MB
      All archives:                    16.33 TB             15.45 TB            607.71 GB

                                  Unique chunks         Total chunks
      Chunk index:                      2703719             18695670
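
      That summary is what Borg prints itself, e.g. from borg create --stats or borg info (repo and archive names below are placeholders):

          # show size / compression / de-duplication statistics for one archive
          borg info /mnt/backup/borg::nightly-2024-01-15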

  • madame_gaymes@programming.dev

    I’m always backing up with Syncthing in real time, but every week I also do an off-site tarball backup that lives outside the Syncthing setup.
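
    The weekly tarball doesn’t need to be fancy; roughly the command the weekly cron job runs, with all paths as placeholders:

        # weekly: pack the app data into a dated tarball on an off-site mount (paths are examples)
        tar -czf /mnt/offsite/appdata-$(date +%F).tar.gz /srv/appdata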

  • ikidd@lemmy.world

    Proxmox servers are mirrored zpools, not that RAID is a backup. Replication between Proxmox servers every 15 minutes for HA guests, hourly for less critical guests. Full backups with PBS at 5AM and 7PM, 2 sets apiece, with one set going off-site and rotated weekly. Differential replication every day to zfs.rent. I keep 30 dailies, 12 weeklies, 24 monthlies and infinite annuals.
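
    On the PBS side, a retention policy like that corresponds roughly to proxmox-backup-client’s prune options; just a sketch, with the repository, backup group and yearly count as placeholders:

        # prune one guest's backup group to ~30 daily / 12 weekly / 24 monthly / long-tail yearly
        proxmox-backup-client prune vm/100 \
            --repository backup@pbs@pbs.example.lan:datastore1 \
            --keep-daily 30 --keep-weekly 12 --keep-monthly 24 --keep-yearly 99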

    Periodic test restores of all backups at various granularities at least monthly or whenever I’m bored or fuck something up.

    Yes, former sysadmin.

  • AnExerciseInFalling@programming.dev

    I use Duplicati for my backups, and have backup retention set up like this:

    Save one backup each day for the past week, then save one each week for the past month, then save one each month for the past year.

    That way I have granular backups for anything recent, and the further back in the past you go, the less frequent the backups are, to save space.
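
    In Duplicati that kind of thinning can be written as a custom retention-policy string; a sketch using the command-line client, with the backend URL and source path as placeholders:

        # keep dailies for a week, weeklies for a month, monthlies for a year
        duplicati-cli backup file:///mnt/backups/duplicati /srv/appdata \
            --retention-policy="1W:1D,1M:1W,1Y:1M"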

  • desentizised@lemm.ee

    rsync from ZFS to an off-site Unraid every 24 hours, 5 times a week. On the sixth day it does a checksum-based rsync, which obviously means more stress, so that one only runs once a week. The seventh day is reserved for ZFS scrubbing, every two weeks.
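
    The nightly run and the weekly checksum run differ by basically one flag; a sketch with placeholder paths and hostname:

        # nightly: incremental sync based on size and mtime (source/destination are placeholders)
        rsync -a --delete /tank/data/ backupbox:/mnt/user/backup/data/

        # weekly: force a full checksum comparison of every file (much heavier on disks and CPU)
        rsync -a --delete --checksum /tank/data/ backupbox:/mnt/user/backup/data/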

  • battlesheep@lemmy.world

    Backup of all of my Proxmox LXCs/VMs to a Proxmox Backup Server every night, plus a sync of those backups to another PBS in another town. A second Proxmox backup every noon to my NAS. (I know, the 3-2-1 rule isn’t quite met…)

  • JASN_DE@lemmy.world

    Nextcloud data daily, same for the docker configs. Less important/rarely changing data once per week. Automatic sync to NAS and online storage. Irregular and manual sync to an external disk.

    7 daily backups, 4 weekly backups, “infinite” monthly backups retained (until I clean them up by hand).

  • slax@sh.itjust.works

    I have

    • Unraid backs up its USB (flash) drive
    • Unraid appdata gets backed up weekly by a Community Applications plugin (CA Appdata Backup), and I use rclone to push that to an old Box account (100GB for life…). I did have it encrypted, but it seems I need to fix that…
    • Parity drive on my Unraid (8TB)
    • I am trying to understand how to use rclone to back up my photos to Proton Drive, so that’s next.

    Music and media aren’t too important yet, but I would love some insight.
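
    For the Proton Drive part, rclone has a protondrive backend; a sketch, assuming a remote named proton: has already been set up with rclone config (remote name and paths are placeholders):

        # one-way copy of the photo share to Proton Drive (remote and paths are placeholders)
        rclone copy /mnt/user/photos proton:photos-backup --progress

    To get the encryption back, the usual rclone approach is a crypt remote layered on top of the Box or Proton remote.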

  • zero_gravitas@aussie.zone

    “Right now, I have a cron job set to run on Monday and Friday nights, is this too frequent?”

    Only you can answer this. How many days of data are you prepared to lose? What are the downsides of running your backup scripts more frequently?

  • Lem453@lemmy.ca

    Local zfs snap every 5 mins.

    Borg backs everything up hourly to 3 different locations.

    I’ve blown away docker folders of config files a few times by accident. So far I’ve only had to dip into the zfs snaps to bring them back.
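
    The 5-minute local snaps are just a crontab entry (dataset name is a placeholder; tools like sanoid or zfs-auto-snapshot do the same and handle pruning too):

        # snapshot the appdata dataset every 5 minutes, named by timestamp
        # (% must be escaped inside a crontab; dataset name is a placeholder)
        */5 * * * * zfs snapshot tank/appdata@auto-$(date +\%Y\%m\%d-\%H\%M)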

    • Avid Amoeba@lemmy.ca

      Try ZFS send if you have ZFS on the other side. It’s insane. No file-level I/O, just a snapshot and the time for the network transfer of the delta.
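
      An incremental send between two snapshots looks roughly like this (pool/dataset names, snapshot names and the receiving host are placeholders):

          # send only the delta between yesterday's and today's snapshot to a remote pool
          zfs send -i tank/appdata@2024-01-14 tank/appdata@2024-01-15 | \
              ssh backuphost zfs receive -u backup/appdata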

  • mosjek@lemmy.world

    I classify the data according to its importance (gold, silver, bronze, ephemeral). How often the zfs snapshots are taken (every 15 minutes to every few hours) and how long they are retained on the server (days to years) depends on that classification. Once a day I then send the more important data that I cannot restore, or could only restore with great effort (gold and silver), to another server. For bronze, the zfs snapshots plus a few days of retention on the server are enough for me, as it is usually data that I can restore (build artifacts or similar) or that is simply not that important. Ephemeral is for unimportant data such as caches or pipelines.
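
    As cron entries, that tiering might look roughly like this sketch (dataset names, times and the replication script are placeholders):

        # gold: snapshot every 15 minutes; bronze: hourly (dataset names are placeholders)
        */15 * * * * zfs snapshot tank/gold@auto-$(date +\%s)
        0 * * * *    zfs snapshot tank/bronze@auto-$(date +\%s)

        # gold + silver: once a day, send the important datasets to the second server
        30 3 * * *   /usr/local/bin/replicate-gold-silver.sh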