this post was submitted on 26 Feb 2026
141 points (98.0% liked)

Selfhosted


I have a 56 TB local Unraid NAS that is parity protected against single drive failure, and while I think a single drive failing and being parity recovered covers data loss 95% of the time, I'm always concerned about two drives failing or a site-/system-wide disaster that takes out the whole NAS.

For other larger local hosters who are smarter and more prepared, what do you do? Do you sync it off site? How do you deal with cost and bandwidth needs if so? What other backup strategies do you use?

(Sorry if this standard scenario has been discussed - searching didn't turn up anything.)

[–] unit327@lemmy.zip 4 points 1 day ago* (last edited 1 day ago) (2 children)

I use the AWS S3 Glacier Deep Archive storage class, at $0.001 per GB per month. But your upload bandwidth really matters here: I only back up a subset of the most important things this way, otherwise it would take months just to upload a single backup. Using rclone sync instead of re-uploading everything each time helps, but you still have to get that first upload done somehow...

I have a complicated system where:

  • borgmatic backups happen daily, locally
  • those backups are stored on a btrfs subvolume
  • a Python script makes a read-only snapshot of that subvolume once a week
  • the snapshot is synced to S3 using rclone with --checksum --no-update-modtime
  • once the upload is complete, the btrfs snapshot is deleted
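The snapshot-and-sync steps above can be sketched as a short script. This is only a sketch: the subvolume path, snapshot path, remote name (s3deep), and bucket name are all placeholders, not the poster's actual setup, and it skips itself if btrfs/rclone or the placeholder path aren't present.

```shell
#!/bin/sh
# Weekly snapshot-and-sync sketch; all paths and remote names are placeholders.
set -eu

SUBVOL=/srv/backups                  # btrfs subvolume holding the borg repos
SNAP=/srv/.snapshots/backups-weekly  # where the read-only snapshot will live

if command -v btrfs >/dev/null && command -v rclone >/dev/null && [ -d "$SUBVOL" ]; then
    # Take a read-only snapshot so the upload sees a frozen view of the data
    btrfs subvolume snapshot -r "$SUBVOL" "$SNAP"

    # --checksum: compare by hash so unchanged files are not re-uploaded
    # --no-update-modtime: don't rewrite remote modtimes (saves API calls)
    rclone sync "$SNAP" s3deep:my-backup-bucket \
        --checksum --no-update-modtime \
        --s3-storage-class DEEP_ARCHIVE

    # Drop the snapshot once the upload has finished
    btrfs subvolume delete "$SNAP"
else
    echo "btrfs/rclone or $SUBVOL not available; nothing to do"
fi
```

Snapshotting first matters because borgmatic may start a new backup mid-upload; the read-only snapshot guarantees rclone syncs a consistent state.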

I've also set up encryption in rclone so that all the data is encrypted and unreadable by AWS.
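For reference, rclone does this with a "crypt" remote layered over the S3 one in rclone.conf. A minimal sketch (remote and bucket names are placeholders; the obscured password values are generated with `rclone config` or `rclone obscure`, never stored in plain text):

```ini
[s3base]
type = s3
provider = AWS
env_auth = true
region = us-east-1

[s3deep]
type = crypt
remote = s3base:my-backup-bucket
filename_encryption = standard
password = OBSCURED_PASSWORD_HERE
password2 = OBSCURED_SALT_HERE
```

With this layout, syncing to `s3deep:` encrypts file contents and names client-side, so AWS only ever sees ciphertext.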

[–] CucumberFetish@lemmy.dbzer0.com 1 points 13 hours ago (1 children)

It is cheap as long as you never need to restore. Downloading data out of S3 costs a lot: for the 56 TB OP asked about, data transfer alone would run about $4.7k.

https://aws.amazon.com/s3/pricing/ — see the "Data transfer" section
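The ~$4.7k figure roughly checks out against AWS's tiered internet-egress rates as published at the time of writing (first 10 TB/month at $0.09/GB, next 40 TB at $0.085/GB, then $0.07/GB — verify against the pricing page, and note Deep Archive charges a separate per-GB retrieval fee on top of transfer):

```shell
# Rough egress estimate for pulling 56 TB (decimal TB) out of S3 in one month
awk 'BEGIN {
  gb = 56 * 1000                        # 56000 GB
  t1 = (gb > 10240 ? 10240 : gb); gb -= t1   # first 10 TiB tier
  t2 = (gb > 40960 ? 40960 : gb); gb -= t2   # next 40 TiB tier
  cost = t1 * 0.09 + t2 * 0.085 + gb * 0.07
  printf "transfer out: ~$%.0f\n", cost      # prints: transfer out: ~$4739
}'
```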

[–] unit327@lemmy.zip 1 points 7 hours ago

I'm aware, but I myself have < 3 TB, and if I ever actually need it I'll be more than happy to pay. It's my "backup of last resort"; I keep other backups on site, and infrequently on a portable HDD offsite.

[–] quick_snail@feddit.nl 1 points 1 day ago (1 children)

Don't do this. It's a god damn nightmare to delete

[–] unit327@lemmy.zip 1 points 7 hours ago (1 children)

How so? I can easily just delete the whole S3 bucket.

[–] quick_snail@feddit.nl 1 points 2 hours ago

Maybe I'm thinking of Glacier. It took me months to delete that.