A 4TB WD Blue blew up and now I'm finally contemplating a more methodical backup strategy. Roast me.
The drive had nothing essential on it (that I can think of yet :D), and I have yet to try salvaging data via ddrescue/testdisk/etc.; I'll attempt that when the new drive arrives. But it was a motivator to finally build a robust backup system for most of my data, rather than just the essential stuff. I was supposed to do that by the end of the year ... for the last several years. Here's what I've come up with so far:
Wish list for the setup:
- Have versioned backups for some of the data (e.g. projects/documents)
- Have copies of all personal/work/system-config data per the 3-2-1 "rule" (3 copies, 2 media, 1 remote location)
- Be able to restore a machine easily enough if the root drive fails (not necessarily instantly), ideally without keeping full images of the root partition around
- No port forwarding, VPN, reverse proxy, etc. for remote communication between machines.
- And in general keep things as simple as possible.
The plan (using Borg and Syncthing):
I have a laptop (but say I have N laptops), a (mostly) headless local server, and a remote Raspberry Pi with a large HDD attached.
- Set up periodic Borg backups with staggered versioning for the folders that warrant it (first sketch after this list).
- Create a /backups folder on all devices. In it, have per-machine subfolders where each machine places its data for backup (I have a feeling path compatibility will matter). The idea is to keep the Borg repos here, as well as symlinks to folders I want replicated to the two backup locations (but for which I don't need versioned backups).
- Configure Syncthing on all three machines so the local server and the remote RPi keep read-only copies of all /backups/ folders. Syncthing can be set to rescan at shorter intervals and has been pretty stable with large numbers of files, AFAIK.
- Regarding system configs (and a failing root partition): I'm thinking of keeping backups of /etc and ~/.config (as well as some other app folders and files from /home, like .bashrc). I'll also periodically dump the list of installed packages (second sketch below). In theory I should be able to do a fresh install, reinstall the same packages, transplant /etc and the /home folder, and ... be happy? I'm pretty sure I'm missing something here. I'll also back up the systemd journal (to potentially trace failures). I don't have any databases or services that keep data outside /home ... I think.
- Optimally, I'd set up some kind of monitoring and recovery testing (thanks, ChatGPT, for reminding me of the latter). Specific advice on simple tools/approaches would be nice. Otherwise I'll have to conjure up some mini script to run when I have SSH access to the machines, or have a diagnostics folder where each machine writes its own report and have that synced with Syncthing to assess on the laptop (third sketch below). I really have to not overengineer this, because I want to be done with the whole thing sooner rather than later.
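
A minimal sketch of the periodic Borg job from the first bullet, written as a Python wrapper that could run from cron or a systemd timer. The repo path, source folders, and retention counts are placeholders, not real settings:

```python
#!/usr/bin/env python3
"""Hypothetical Borg wrapper: create an archive, then prune with
staggered retention. Assumes the repo at REPO was already initialized
(e.g. borg init --encryption=repokey). All paths are placeholders."""
import subprocess
from datetime import datetime

REPO = "/backups/laptop1/borg-projects"                 # placeholder repo path
SOURCES = ["/home/me/projects", "/home/me/documents"]   # folders that warrant versioning

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def main():
    archive = f"{REPO}::projects-{datetime.now():%Y-%m-%d_%H%M}"
    run(["borg", "create", "--stats", "--compression", "zstd", archive, *SOURCES])
    # Staggered versioning: keep 7 daily, 4 weekly, and 6 monthly archives.
    run(["borg", "prune", "--keep-daily", "7", "--keep-weekly", "4",
         "--keep-monthly", "6", REPO])

if __name__ == "__main__":
    main()
```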
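
For the system-config bullet, a pre-backup hook along these lines could snapshot the package list and copy the config locations into the machine's /backups subfolder. The dpkg call assumes a Debian-based distro (swap in pacman/dnf as needed), and all paths are placeholders:

```python
#!/usr/bin/env python3
"""Hypothetical pre-backup hook: dump the installed-package list and
copy key config locations into /backups/<hostname>/. Run as root so
everything under /etc is readable. Paths are placeholders."""
import shutil
import socket
import subprocess
from pathlib import Path

DEST = Path("/backups") / socket.gethostname()
CONFIG_PATHS = [Path("/etc"), Path.home() / ".config", Path.home() / ".bashrc"]

def main():
    DEST.mkdir(parents=True, exist_ok=True)
    # Restorable later with `dpkg --set-selections` + `apt-get dselect-upgrade`.
    pkgs = subprocess.run(["dpkg", "--get-selections"],
                          capture_output=True, text=True, check=True).stdout
    (DEST / "package-selections.txt").write_text(pkgs)
    for src in CONFIG_PATHS:
        target = DEST / "configs" / src.name
        if src.is_dir():
            shutil.copytree(src, target, symlinks=True, dirs_exist_ok=True)
        elif src.is_file():
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, target)

if __name__ == "__main__":
    main()
```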
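
And for the diagnostics-folder idea in the last bullet, each machine could drop a small JSON status file into its Syncthing-synced /backups subfolder after every run; checking health on the laptop is then just reading those files. The fields and paths here are made up for illustration:

```python
#!/usr/bin/env python3
"""Hypothetical health report: write a JSON status file into a
Syncthing-synced /backups/<hostname>/diagnostics folder after each
backup run. Field names and paths are illustrative, not a real format."""
import json
import shutil
import socket
import subprocess
from datetime import datetime, timezone
from pathlib import Path

REPO = "/backups/laptop1/borg-projects"   # same placeholder repo as above
OUT = Path("/backups") / socket.gethostname() / "diagnostics" / "status.json"

def main():
    # Name of the newest archive, to spot a machine whose backups stopped.
    last = subprocess.run(["borg", "list", "--last", "1", "--short", REPO],
                          capture_output=True, text=True, check=True).stdout.strip()
    usage = shutil.disk_usage("/backups")
    OUT.parent.mkdir(parents=True, exist_ok=True)
    OUT.write_text(json.dumps({
        "host": socket.gethostname(),
        "checked_at": datetime.now(timezone.utc).isoformat(),
        "last_archive": last,
        "backups_free_bytes": usage.free,
    }, indent=2))

if __name__ == "__main__":
    main()
```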
What I'm still not sure about:
- Should I keep a copy of my essential data at a cloud provider on top of the three copies? Redundancy is always nice, but is it a significantly needed measure in your experience?
- Should I fear encryption and being locked out of my data? Also, how hard and how necessary is it to rotate encryption keys at some point? I guess it depends a lot on usage etc., but I'm looking for examples from your experience.
And in general - roast my planned setup before I've invested significant effort in implementing it.