this post was submitted on 24 Feb 2026
1184 points (99.5% liked)

Programmer Humor

29989 readers
2189 users here now

Welcome to Programmer Humor!

This is a place where you can post jokes, memes, humor, etc. related to programming!

For sharing awful code theres also Programming Horror.

Rules

founded 2 years ago
MODERATORS
 
you are viewing a single comment's thread
view the rest of the comments
[–] dogdeanafternoon@lemmy.ca 6 points 14 hours ago (5 children)

Has anyone actually had a backup restoration fail? It’s been a meme forever, but in my ~15 years of IT, I’ve never seen a backup not restore properly.

[–] Thrawn@lemmy.dbzer0.com 6 points 10 hours ago

Absolutely. Used to work at a small MSP. Got ultra unlucky in that we got chosen as the test case target for a zero-day that leveraged our remote support tools, so our own systems and all of our client systems that were online got hit with ransomware in a very short time frame.

Some clients had local backups to Synology boxes, and those worked OK, thankfully. However, all the rest had Hyper-V-based backups. The other local copy was on a second Windows server that also got hit, so the local copies didn't help. They did also have a remote copy, which wasn't encrypted.

So all good, right? Just pull the remote backup copy and apply that... Yeah, every time we had used the service before, it had either been for single servers that physically died and took their disks with them, or just file-level restores.

Those all worked fine. Still sounds like not a problem, right? Nope. We found that a couple of the larger servers had backups that didn't actually contain everything, despite being VM images. No idea how their software was even able to do that.

And the worst part was that their data transfer rate was insanely slow: about 10 Mbps. Not per server or per client; nope, that was the max export rate across everything. It would have taken literally months to restore everything at that rate.
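A quick back-of-the-envelope calculation shows why "months" is plausible at that rate. The 10 Mbps figure is from the story above; the 10 TB total is a hypothetical illustration, not a number from the actual incident:

```python
# Rough restore-time estimate at a fixed export rate.
# 10 Mbps is the rate from the story; 10 TB total data is an assumption.
rate_bits_per_sec = 10_000_000                      # 10 Mbps
rate_bytes_per_day = rate_bits_per_sec / 8 * 86_400  # ~108 GB/day

total_bytes = 10 * 10**12                            # assume 10 TB across all clients
days = total_bytes / rate_bytes_per_day

print(f"~{rate_bytes_per_day / 10**9:.0f} GB/day -> ~{days:.0f} days")
# -> ~108 GB/day -> ~93 days
```

Even a modest 10 TB across all clients works out to roughly three months of continuous transfer.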

I hate to say it, but yes, we did in fact pay the ransom, and then had to fight for several days getting things decrypted. Then came months of reinstalling fresh copies and/or putting in new servers, while also changing our entire stack at the same time. Shockingly, we handled it well enough that we lost no clients, largely because we were able to prove we couldn't have known ahead of time.

If you read through all that, I'll even say the vendor's name: it was StorageCraft. I now have a deep hate for them.

Also, one more: with the old HFS+-based Time Machine backups on Apple systems, a backup would sometimes report as valid and self-checked even if it had corruption. It would do this as long as the self-check confirmed it could repair the corruption during a restore. However, if you tried directly browsing through the Time Machine backups, there would be files that couldn't be read unless, again, you did a full system restore with it.

Nearly lost my wife's end-of-semester work before finding out it worked that way.

I can't confirm it, but it seems to be fully fixed with APFS, and it might be one of the reasons they spent the effort to make that transition.

[–] Godort@lemmy.ca 10 points 12 hours ago* (last edited 12 hours ago)

I've had an IT career for about as long as you, and I've had two memorable restore failures. I got real lucky both times.

The first was a ransomware incident, and the onsite backup was not hit, but it was corrupt. Thankfully, the client had been using a 3-2-1 strategy, and the off-site one was fine.

The second was a situation where a failed update rendered a client's RDS server unbootable. This time they didn't have an on-site backup, and the off-site one was corrupt. I happened to get immensely lucky in that there was no real data on that RDS, so I was able to spin up a fresh one and install their LOB app, and all was good.

We now test that all backups are stable every 6 months.

[–] BillibusMaximus@sh.itjust.works 10 points 13 hours ago

Yep. At one place I worked, we did a big off-site disaster recovery exercise every year.

Most of the time it went fine, but there were multiple years where a restore didn't work due to an issue with one or more tapes. Either the data and/or indexes couldn't be read, or the tape physically failed during the restore.

Backups aren't backups unless they're tested.
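That principle can be automated. Here's a minimal sketch of a restore test: back up a directory, restore it to a separate scratch location (never over the live data), and compare the trees. The paths and the choice of tar as the backup tool are illustrative, not any particular vendor's method:

```shell
#!/bin/sh
# Hypothetical restore test sketch. Real backup software differs,
# but the shape is the same: restore to scratch space, then verify.
set -eu

SRC=$(mktemp -d)        # stand-in for real data
RESTORE=$(mktemp -d)    # scratch restore target
ARCHIVE="$(mktemp -u).tar.gz"

echo "important data" > "$SRC/file.txt"

# "Back up"
tar -czf "$ARCHIVE" -C "$SRC" .

# Restore to a separate location
tar -xzf "$ARCHIVE" -C "$RESTORE"

# Verify: the restored tree must match the source byte-for-byte
if diff -r "$SRC" "$RESTORE" >/dev/null; then
    echo "restore test: OK"
else
    echo "restore test: FAILED" >&2
    exit 1
fi
```

Scheduling something like this (and alerting on a non-zero exit) is what turns "we have backups" into "we have restores."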

[–] how_we_burned@lemmy.zip 5 points 13 hours ago

in my ~15 years of IT, I’ve never seen a backup not restore properly.

I remember Outlook backups failing like nothing else during the restore process 25 years ago.

Which was fucked, because it would take two weeks to rebuild only to find out it didn't work.

[–] raldone01@lemmy.world 4 points 14 hours ago (1 children)
[–] dogdeanafternoon@lemmy.ca 1 points 12 hours ago

Banger of a video! Thanks!

I’ve made mistakes before, and had that panic realization set in. I can only imagine the feeling this guy got once he realized what he just did. Nightmare fuel.