Programmer Humor
Welcome to Programmer Humor!
This is a place where you can post jokes, memes, humor, etc. related to programming!
For sharing awful code, there's also Programming Horror.
Rules
- Keep content in English
- No advertisements
- Posts must be related to programming or programmer topics
The next day, the novice's disk crashed. Three days later, the novice was still reinstalling software.
I laugh in NixOS
No, it is not. Just use a checksum. Like a normal person...
Cool.
Still doesn't mean you can boot from it
ChatGPT told me to run
sudo sha256sum /dev/sda1 > /dev/sda1
So. Is this wrong? I thought it backs up the data
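For anyone wondering why that command is a disaster: the shell sets up the output redirection before `sha256sum` ever reads the target, so the target is clobbered first. A demo with a throwaway file (made-up path) instead of a real partition:

```shell
# Throwaway stand-in for a partition.
echo "precious data" > /tmp/disk_demo

# The shell TRUNCATES the redirection target before sha256sum runs,
# so sha256sum ends up hashing an empty file and writing that hash
# back into it. (On a real block device the redirect can't truncate,
# but it overwrites the start of the partition instead -- including
# the filesystem superblock.)
sha256sum /tmp/disk_demo > /tmp/disk_demo

# Original content is gone; only the hash of empty input remains.
cat /tmp/disk_demo
```

The safe version writes the hash somewhere else entirely, e.g. `sha256sum /dev/sda1 > /root/sda1.sha256`, and that still only records a hash. It backs up nothing.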
Or that it's complete.
These kinds of relentless posts finally got me to write a script that verifies all my backed up files using md5 checksums.
Verifying the files are there in your backup is only, like, 10% of verifying that it's a real, usable backup.
The important question is: can you successfully restore those files from the backup? Can you successfully put them back where they're supposed to be after losing your primary copy?
I specifically stated that I verify the file content via md5 hash. And I keep the original directory structure, so yes, if I need to restore them, I can.
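That kind of manifest-based verification can be sketched in a couple of lines. The directories here are made up for the demo; the idea is that because the backup mirrors the source tree, the relative paths in the manifest line up on both sides:

```shell
# Hypothetical layout: originals under /tmp/data, mirror under /tmp/backup.
mkdir -p /tmp/data/docs /tmp/backup/docs
echo "report" > /tmp/data/docs/report.txt
cp /tmp/data/docs/report.txt /tmp/backup/docs/

# Hash every file in the source tree into a manifest of relative paths...
(cd /tmp/data && find . -type f -exec md5sum {} + > /tmp/manifest.md5)

# ...then re-check that manifest against the backup tree.
# md5sum -c prints "<path>: OK" per file and exits nonzero on mismatch.
(cd /tmp/backup && md5sum -c /tmp/manifest.md5)
```

Worth noting this proves content integrity, not restorability: permissions, ownership, and the restore procedure itself are outside what the hashes cover.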
Blake3 (BLAKE3?) is where it's at

Backup fire drill tomorrow!
I just run a script that runs a bunch of rsync commands. So I guess every run kinda confirms the backup is functional. I have no use for versioned backups, nor could I afford the hard drive space necessary (thanks Sam)
What if my backup is just files and there’s nothing to restore?
Like say I take my existing drives, full of totally working media, and duplicate them, use the originals as a backup and the new drives as the active.
Does that count as a backup? No restoration involved.
In the spirit of this thread: no.
Recovering with the backup should put you back to an operational state equivalent to when the backup was taken.
I.e. if you've restored some files, but something is still not working then the backup failed its purpose.
E.g. the timestamps on the files might be important: do they need to be stamped with the time of the backup or the time of the restore?
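A concrete illustration of the timestamp point, with throwaway files and GNU coreutils assumed: a naive copy stamps files with the time of the restore, while a metadata-preserving copy keeps the original mtime.

```shell
# Throwaway file with a deliberately old timestamp.
touch -d "2020-01-01 00:00" /tmp/orig.txt

# Plain cp: the "restored" file is stamped with the time of the restore.
cp /tmp/orig.txt /tmp/plain.txt

# cp -p (like rsync -a or tar -p) preserves the original mtime.
cp -p /tmp/orig.txt /tmp/preserved.txt
```

Whether that difference matters depends on what consumes the files — incremental build tools and sync jobs keyed on mtime will behave very differently after the two restores.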
Sure, if my active drives died after this swap, and I had to restore from the old, now backup, drive, I’d be back at the operational state I was at the time of the backup.
That tracks.
It still doesn’t run anything tho. It’s just a drive. It doesn’t house an os or anything, just files that aren’t restricted in any way.
IMHO there is no point backing up an OS drive, just rebuild it*.
Data is the important thing to back up because you usually can't regenerate it.
* the corollary here is that you've backed up the configuration required to rebuild the OS.
I wouldn’t. I keep all of my data separate from my OS drive entirely, so I can reformat or install a new OS whenever I feel like it. A nasty old habit from running bootleg Windows 7 well beyond its age, when reformatting every 6 months was good hygiene, before I found Linux... but it gave me great data management insight.
Do you know how to transfer the files back if your OS has completely failed?
Sure, nearly everything is on a separate drive from the OS. I don’t put much on the OS drive on any of my computers unless it needs to run there and that’s easy to reinstall. Easy to fix things that way.
One thing I emphasize in every training I do is that you do not have backups until you know exactly how long it will take to restore.
That way you can tell your boss it’ll take three times as long and be hailed as a miracle worker, as Scotty intended.
Do people actually do that? Because that would be funny
Also it must come from the Backup region of France. Otherwise it's just sparkling archive.
is it pronounced beaucoup?
Yes, it's French for RAID 0
Ooh sparkling archive actually sounds really fancy, I’ll start using that
it's a fair argument but it's also bullshit if you're following the process and practices that you used when you tested your backup
lots of my job is backups and verification of the backups
Bold of you to assume people/companies test backups more than once.
Case in point: I once got instructed to "enable EBS snapshots" for customer deployments to meet a new backup requirement. Disaster recovery was a completely different feature we only kind of got to a couple years later and afaik, remains manual to this day.
that's fair and I agree but it's not a true maxim
it's a good principle, but I hear it a lot, so it's a thing I get annoyed about when it's directed at me even though I have the receipts and a proven record that it's not a fact
An untested disaster recovery plan is wishful thinking
Has anyone actually had failed backup restoration before? It’s been a meme forever, but in my ~15 years of IT, I’ve never seen a backup not restore properly.
I've had an IT Career for about as long as you. I've had 2 memorable restore failures and got real lucky both times.
The first was a ransomware incident, and the onsite backup was not hit, but it was corrupt. Thankfully, the client had been using a 3-2-1 strategy, and the off-site one was fine.
The second was a situation where a failed update rendered a client's RDS unbootable. This time, they didn't have an on-site backup, and the off-site one was corrupt. I happened to get immensely lucky in that there was no real data on that RDS, so I was able to spin up a fresh one, install their LOB app, and all was good.
We now test that all backups are stable every 6 months.
Absolutely. Used to work at a small MSP. Got ultra unlucky in that we got chosen as the test case target for a zero-day that leveraged our remote support tools, so our own systems and all of our client systems that were online got hit with ransomware in a very short time frame.
Some clients had local backups to Synology boxes, and those worked OK, thankfully. However, all the rest had backups based on Hyper-V. The other local copy was on a second Windows server that also got hit, so the local copies didn't help. They did also have a remote copy, which wasn't encrypted.
So all good, right? Just pull the remote backup copy and apply that... Yeah, every time we had ever used the service before, it had either been single servers that physically died and took their disks along in the death, or just file-level restores.
Those all worked fine. Still sounds like not a problem, right? Nope. We found that a couple of the larger servers had backups that didn't actually contain everything, in spite of being VM images. No idea how their software was even able to do that.
And the worst part was that their data transfer rate was insanely slow: about 10 Mbps. Not per server or per client. Nope, that was the max export rate across everything. It would have taken literally months to restore everything at that rate.
I hate to say it, but yes, we did in fact pay the ransom and then had to fight for several days getting things decrypted. Then came months of reinstalling fresh copies and/or putting in new servers, while also changing our entire stack at the same time. Shockingly, we handled it well enough that we lost no clients, largely because we were able to prove we couldn't have known ahead of time.
If you read through all that I'll even say the vendors name. It was StorageCraft. I now have a deep hate for them.
Also, one more: with the old Apple HFS+-based Time Machine backups, it would sometimes report a backup as valid and self-checked even if it had corruption. It would do this as long as the self-check confirmed it could fix the corruption during a restore. However, if you tried directly browsing through the Time Machine backups, there would be files that couldn't be read — unless, again, you did a full system restore with it.
Nearly lost my wife's end-of-semester work before finding out it worked that way.
I can't confirm it, but it seems to be fully fixed with APFS, and it might be one of the reasons they spent the effort to make that transition.
Yep. At one place I worked, we did a big off-site disaster recovery exercise every year.
Most of the time it went fine, but there were multiple years where a restore didn't work due to an issue with one or more tapes. Either the data and/or indexes couldn't be read, or the tape physically failed during the restore.
Backups aren't backups unless they're tested.
> in my ~15 years of IT, I’ve never seen a backup not restore properly.
I remember Outlook backups failing like nothing else during the restore process 25 years ago.
Which was fucked, because it would take 2 weeks to rebuild only to find out it didn't work.
Fun video. Many backup options failed iirc.
Banger of a video! Thanks!
I’ve made mistakes before, and had that panic realization set in. I can only imagine the feeling this guy got once he realized what he just did. Nightmare fuel.