this post was submitted on 22 Apr 2025

Selfhosted


I have 3 servers:

  • my house
  • my sister's house
  • my parents' house

My server runs a lot of services (Nextcloud and Immich being the ones that use the most space); the other 2 servers only run Home Assistant, Frigate and some shared folders. On my server I use Backrest to back up locally and to Wasabi, the other 2...well...are not backed up πŸ™ˆ ...yet!

I was thinking of buying a couple of 14/20TB drives and installing them in my parents' and sister's servers so that each server can back up its data to the other 2. The backups will be done locally on all the servers with Backrest. How do I copy the backups across servers? Should I use Syncthing, or is it better to use one repository per location in each Backrest instance? Or...other ideas?

Thanks!

[–] BCsven@lemmy.ca 19 points 1 year ago (1 children)

Configure the drives and pre-load the initial backup onto them before bringing them to your family members, to save yourself some bandwidth

[–] peregus@lemmy.world 5 points 1 year ago (1 children)

Definitely a good suggestion!

[–] BCsven@lemmy.ca 4 points 1 year ago

As for the other question in the post: if you are using btrfs or zfs, I believe both have a send function that operates at the block level and will only transfer changed blocks rather than full files
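
As a rough sketch of what that looks like with btrfs (the snapshot paths, dates and the `backup-host` name here are made up, not from the thread):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of incremental btrfs send/receive between two sites.
# Paths, snapshot names and the remote host are placeholders.

# Take a read-only snapshot of the data subvolume (send requires read-only).
snapshot() {
    btrfs subvolume snapshot -r /data "/snapshots/data-$(date +%F)"
}

# First transfer: send the full snapshot to the remote machine.
initial_send() {
    btrfs send /snapshots/data-2025-01-01 \
        | ssh backup-host "btrfs receive /snapshots"
}

# Later transfers with -p: send only the blocks that changed since the
# parent snapshot, which is what keeps the bandwidth usage small.
incremental_send() {
    btrfs send -p /snapshots/data-2025-01-01 /snapshots/data-2025-01-02 \
        | ssh backup-host "btrfs receive /snapshots"
}
```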

[–] HelloRoot@lemy.lol 6 points 1 year ago* (last edited 1 year ago) (1 children)

I'd do it like this:

  • borg backup

  • (optional) borgmatic for easier use, but a diy shell script might suffice

  • (optional) https://github.com/Ravinou/borgwarehouse for easier gui based "serverside" setup on each location

  • (if you have no way to reach the servers from the internet yet) set up dyndns for each location so you can reach them by domain

  • might need to setup portforwarding rules in the router of each location
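
A minimal borg workflow along those lines might look like this (the repo URL, dyndns hostname, paths and retention numbers are all placeholders):

```shell
#!/usr/bin/env bash
# Hypothetical borg sketch: one remote repo per location, reached over SSH
# via the dyndns name. Hostnames, paths and secrets are placeholders.

export BORG_REPO="ssh://borg@parents.example.dyndns.org/./backups/main-server"
export BORG_PASSPHRASE="use-a-real-secret-store-here"

# One-time: create the repository on the remote side.
init_repo() {
    borg init --encryption=repokey-blake2 "$BORG_REPO"
}

# Regular run: create a deduplicated, compressed archive.
create_backup() {
    borg create --compression zstd --stats \
        "::main-{now:%Y-%m-%d}" /srv/nextcloud /srv/immich
}

# Keep a bounded history so the remote disks don't fill up.
prune_backups() {
    borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6
}
```

borgmatic wraps exactly this kind of init/create/prune cycle in a single YAML config, which is why it's listed as the optional convenience layer.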

[–] surph_ninja@lemmy.world 1 points 1 year ago

Didn’t know about borg warehouse. Thanks for the heads up!

[–] just_another_person@lemmy.world 6 points 1 year ago (1 children)

Tailscale+Headscale or Zerotier, use whatever backup software you want for the local backups. Simple script to rsync backups to other sites and remove copies past a certain age.

Pretty simple, and no need to expose machines in any less than safe ways.
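
A sketch of that kind of script (the hostnames would be the Tailscale/ZeroTier names; the paths and the 30-day cutoff are arbitrary choices, not from the comment):

```shell
#!/usr/bin/env bash
# Hypothetical sketch: push local backups to the other sites over the VPN,
# then drop copies older than a cutoff. Hosts and paths are placeholders.
set -eu

push_backups() {
    local remote="$1"   # e.g. a Tailscale hostname like sister-server
    rsync -a --delete /var/backups/ "root@${remote}:/var/backups/main-server/"
}

# Remove backup directories older than N days from a local path.
prune_old() {
    local dir="$1" days="$2"
    find "$dir" -mindepth 1 -maxdepth 1 -mtime +"$days" -exec rm -rf {} +
}
```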

[–] peregus@lemmy.world 3 points 1 year ago (1 children)

Why Tailscale AND Headscale? Aren't they the same thing?

[–] just_another_person@lemmy.world 4 points 1 year ago* (last edited 1 year ago) (1 children)

Tailscale is both a client and a server. If you use only Tailscale, you have to pay for the service once you connect more than a certain number of devices (and by all means, support the company and do that, skipping Headscale, if that works for you).

Headscale is an open source implementation of the Tailscale coordination service, so it's free to use with all the usual published Tailscale clients. You set up Headscale somewhere, register your Tailscale clients against it, and use it as usual. It just skips the need to pay for Tailscale's hosted service, and gives you greater control over how traffic is routed. Completely optional.
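
Roughly, the moving parts look like this (the domain and key values are placeholders, and it's worth checking the Headscale docs for your version since its CLI flags have changed between releases):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of pointing Tailscale clients at a self-hosted
# Headscale server. Domain and key values are placeholders.

# On the Headscale host: create a user and a pre-auth key for enrolling
# the three servers without interactive login.
server_side() {
    headscale users create homelab
    headscale preauthkeys create --user homelab --reusable --expiration 24h
}

# On each client: register against your own server instead of tailscale.com.
client_side() {
    tailscale up \
        --login-server https://headscale.example.com \
        --authkey "PASTE-PREAUTH-KEY-HERE"
}
```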

[–] NotKyloRen@lemmy.zip 1 points 1 year ago* (last edited 1 year ago)

Yeah but it's like 100 devices, I think. And I believe 3 users (meaning under one account; sharing a device with someone who makes their own account doesn't count as a "user"). You're right, but they're pretty generous.

I don't think it takes many resources to provide the service to consumers; it's not like you're using any of their bandwidth (minus the tiny amount used for coordination between clients). Oh, or if you use their DERP servers (encrypted, but still).

In general, people should know there are self hosted, truly private options, though. So thanks for mentioning Headscale.

I have an offsite NAS where I run the Restic REST server as a docker container. I connect to it over Nebula but you could also use a traditional VPN, Tailscale, Headscale, Pangolin or whatever.

Works like a charm.
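
For reference, a minimal version of that setup might look like this (ports, paths and the repo name are placeholders; depending on the rest-server version you may also need to set up `.htpasswd` authentication):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the offsite rest-server + restic combo described
# above. Ports, paths and hostnames are placeholders.

# On the offsite NAS: run the official rest-server image.
start_rest_server() {
    docker run -d --name rest-server \
        -p 8000:8000 \
        -v /srv/restic:/data \
        restic/rest-server
}

# On the machine being backed up, over the Nebula/VPN address:
backup_over_vpn() {
    export RESTIC_REPOSITORY="rest:http://nas.nebula.internal:8000/main-server"
    export RESTIC_PASSWORD_FILE=/root/.restic-password
    restic init          # one-time repository setup
    restic backup /srv   # regular runs
}
```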

[–] Cyber@feddit.uk 3 points 1 year ago

Whatever you end up with

  • test a restore
  • consider how to deal with the source data being corrupted / ransomed / etc, ie multiple versions
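
Since Backrest uses restic under the hood, a periodic restore test could be as simple as this sketch (repository, password file and paths are placeholders):

```shell
#!/usr/bin/env bash
# Hypothetical restore-test sketch for a restic repository.
# Repository address and paths are placeholders.

restore_test() {
    export RESTIC_REPOSITORY="rest:http://nas.example:8000/main-server"
    export RESTIC_PASSWORD_FILE=/root/.restic-password

    # Pull the latest snapshot into a scratch directory...
    restic restore latest --target /tmp/restore-test

    # ...and spot-check it against the live data.
    diff -r /srv/nextcloud /tmp/restore-test/srv/nextcloud

    # restic can also verify repository integrity without a full restore,
    # reading a random sample of the pack files:
    restic check --read-data-subset=5%
}
```

Keeping multiple snapshot versions (restic does this by default) is also what covers the ransomware case: an encrypted source only poisons the newest snapshot, not the history.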
[–] czardestructo@lemmy.world 3 points 1 year ago

I went a little crazy and set up my own WireGuard VPN network: all the remote hosts connect to the VPN, and the primary server connects to each of them and pushes backups. Because I use btrfs with lots of snapshots, I use btrbk. It's annoying to set up, but now my hourly snapshots get pushed everywhere with minimal bandwidth, and it has worked flawlessly for years.

[–] rumba@lemmy.zip 2 points 1 year ago

I use Syncthing with untrusted devices, so the data ends up in multiple locations but is stored encrypted there and isn't readable remotely. If you don't care about the data being readable at those locations you don't have to do that, but it's a nice feature.

[–] piefood@feddit.online 2 points 1 year ago

I use rsync for a similar system. One of the nice things is that you can set a bandwidth limit so that it doesn't saturate your family's internet connections.

[–] possiblylinux127@lemmy.zip 2 points 1 year ago (1 children)

You don't want to push backups, as that means an adversary who compromises the source machine can delete them.

Make each backup machine do a pull instead.

[–] surewhynotlem@lemmy.world 2 points 1 year ago (1 children)
[–] possiblylinux127@lemmy.zip 1 points 1 year ago (2 children)

Configure each backup machine to read from the data you are backing up

[–] surewhynotlem@lemmy.world 2 points 1 year ago (1 children)

You can do it that way, but I don't see how that's more secure. Fewer server bits to maintain I guess?

[–] EarMaster@lemmy.world 4 points 1 year ago

I think what he means is that if your backup is triggered from your main server and your main server is compromised, the backups can also be attacked immediately. If the backup is requested from the backup machine, you at least have the time between the attack and the next backup run to stop the attack from reaching your backup machines.

[–] peregus@lemmy.world 1 points 1 year ago (1 children)

I'm sorry, but I still don't understand what you mean. Could you please elaborate a bit? Thanks!

[–] possiblylinux127@lemmy.zip 1 points 1 year ago (1 children)

You configure the backup systems to connect to the device being backed up. The idea is that if something bad happens on the main machine, it won't impact the backups
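
One way to sketch that pull model (account names and paths are invented for illustration): the backup box logs into the main server with a restricted, read-only SSH account and pulls, so nothing on the main server holds credentials that could touch the backups.

```shell
#!/usr/bin/env bash
# Hypothetical pull-model sketch: run ON the backup machine, which reaches
# out to the main server with a read-only account. Names are placeholders.

pull_backup() {
    # backup-reader is an unprivileged account on the main server with
    # read-only access to the data; only the backup box holds its SSH key.
    rsync -a \
        backup-reader@main-server:/srv/data/ \
        "/backups/main-server/$(date +%F)/"
}
```

Run it from cron or a systemd timer on each backup machine.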

[–] peregus@lemmy.world 2 points 1 year ago

Got it, good idea, thanks!

[–] surph_ninja@lemmy.world 1 points 1 year ago

I like borg with rsync.

[–] Shimitar@downonthestreet.eu 1 points 1 year ago (1 children)

Just run a Backrest backup on each server three times, one for each remote backup repository. Easy enough.

[–] peregus@lemmy.world 1 points 1 year ago (1 children)

How would you create the remote repository? With rest-server?

The same way you back it up, using ssh remotely for example
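
With restic, that would be an SFTP-backed repository, roughly like this sketch (host, user and path are placeholders):

```shell
#!/usr/bin/env bash
# Hypothetical sketch: a restic repository over plain SSH/SFTP, so no
# rest-server is needed on the remote side. Host and path are placeholders.

init_remote_repo() {
    restic -r sftp:backup@sister-server:/srv/restic/main-server init
}

backup_to_remote() {
    restic -r sftp:backup@sister-server:/srv/restic/main-server backup /srv
}
```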