this post was submitted on 21 Jul 2024
191 points (76.5% liked)

This is an unpopular opinion, and I get why – people crave a scapegoat. CrowdStrike undeniably pushed a faulty update that demanded a low-level, hands-on fix (booting into recovery). However, this incident lays bare the fragility of corporate IT, particularly for companies entrusted with vast amounts of sensitive personal information.

Robust disaster recovery plans, including automated processes to remotely reboot and remediate thousands of machines, aren't revolutionary. They're basic hygiene, especially when considering the potential consequences of a breach. Yet, this incident highlights a systemic failure across many organizations. While CrowdStrike erred, the real culprit is a culture of shortcuts and misplaced priorities within corporate IT.

Too often, companies throw millions at vendor contracts, lured by flashy promises while neglecting the due diligence needed to ensure those solutions truly fit their needs. This is exacerbated by a corporate culture in which CEOs, vice presidents, and managers are more easily swayed by vendor kickbacks, gifts, and lavish trips than by investing in innovative ideas with measurable outcomes.

This misguided approach not only results in bloated IT budgets but also leaves companies vulnerable to precisely the kind of disruptions caused by the CrowdStrike incident. When decision-makers prioritize personal gain over the long-term health and security of their IT infrastructure, it's ultimately the customers and their data that suffer.

[–] breakingcups@lemmy.world 172 points 3 months ago (76 children)

Please enlighten me as to how you'd remotely service a few thousand BitLocker-locked machines that won't boot far enough to get an internet connection, with non-tech-savvy users behind them. Pray tell what common "basic hygiene" practices would've helped, especially with CrowdStrike reportedly ignoring and bypassing the rollout policies set by their customers.

Not saying the rest of your post is wrong, but this stood out as easily glossed over.

[–] LrdThndr@lemmy.world 21 points 3 months ago* (last edited 3 months ago) (33 children)

A decade ago I worked for a regional chain of gyms with locations in 4 states.

I was in TN. When a system would go down in SC or NC, we originally had three options:

  1. Have them put it in a box and ship it to me (the most common).
  2. I go there and fix it (rare).
  3. I walk them through fixing it over the phone (fuck my life).

I got sick of this. So I researched options and found an open-source solution called FOG. I ran a server in our office and had little OptiPlex 160s running a software client that I shipped to each club. Then each machine at each club was configured to PXE boot from the FOG client.

The server contained images of every machine we commonly used. I could tell FOG which locations used which models, and it would keep the images cached on the client machines.

If everything was okay, it would chain the boot to the OS on the machine. But I could flag a machine for reimage, and at the next boot the machine would check in with the local FOG client via PXE and get a complete reimage from the premade images on the FOG server.

The corporate office was physically connected to one of the clubs, so I trialed the software at our adjacent club, and when it worked great, I rolled it out company wide. It was a massive success.

So yes, I could completely reimage a computer from hundreds of miles away by clicking a few checkboxes on my computer. Since it ran over PXE, the condition of the OS didn’t matter at all; it never loaded the OS when it was flagged for reimage. It would even join the computer to the domain and set up that location’s printers and everything. All I had to tell the low-tech gymbro sales guy on the phone to do was reboot it.
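
For anyone who'd rather script that "flag for reimage" step than click through the web UI, newer FOG releases (1.5+) expose a REST API. Here's a rough Python sketch of the idea; the server URL and hostnames are made up, and the endpoint paths, token headers, and task-type ID are my recollection of the API docs, so verify them against your own FOG install:

```python
import requests

# Assumed values -- replace with your own environment's
FOG_BASE = "http://fog.example.internal/fog"
HEADERS = {
    "fog-api-token": "<API token from FOG Configuration>",
    "fog-user-token": "<API token from your FOG user settings>",
    "Content-Type": "application/json",
}

# Hosts to flag for a full reimage at their next PXE boot
TO_REIMAGE = ["club-sc-frontdesk-01", "club-sc-frontdesk-02"]


def host_id(name: str) -> int:
    """Look up a registered host's numeric ID by hostname."""
    resp = requests.get(f"{FOG_BASE}/host", headers=HEADERS, timeout=30)
    resp.raise_for_status()
    for host in resp.json()["hosts"]:
        if host["name"].lower() == name.lower():
            return host["id"]
    raise LookupError(f"{name} is not registered in FOG")


def queue_reimage(hid: int) -> None:
    """Queue a deploy task; FOG picks it up the next time the host PXE boots."""
    # taskTypeID 1 is "deploy" on a stock install (assumption -- check yours)
    resp = requests.post(f"{FOG_BASE}/host/{hid}/task",
                         json={"taskTypeID": 1}, headers=HEADERS, timeout=30)
    resp.raise_for_status()


for name in TO_REIMAGE:
    queue_reimage(host_id(name))
    print(f"{name}: reimage queued -- now someone on site just has to reboot it")
```

Either way, the on-site part stays the same: power-cycle the machine and let PXE do the rest.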

This was free software. It saved us thousands in shipping fees alone and brought our time to fix down from days to minutes.

There ARE options out there.

[–] cyberpunk007@lemmy.ca 5 points 3 months ago (5 children)

This is a good solution for these types of scenarios, but it doesn't fit all of them. Where I work, 85% of staff work from home and we largely use SaaS. I'm struggling to think of a good method here other than walking them through reinstalling Windows on all their machines.

[–] LrdThndr@lemmy.world 2 points 3 months ago (1 children)

That’s still 15% less work though. If I had to manually fix 1000 computers, clicking a few buttons to automatically fix 150 of them sounds like a sweet-ass deal to me even if it’s not universal.

You could also always commandeer a conference room or three and throw a switch on the table. “Bring in your laptop and go to conference room 3. Plug in using any available cable on the table and reboot your computer. Should be ready in an hour or so. There’s donuts and coffee in conference room 4.” Could knock out another few dozen.

Won’t help for people across the country, but if they’re nearish, it’s not too bad.

[–] cyberpunk007@lemmy.ca 2 points 3 months ago

Not a lot of them are nearish. It would be pretty bad if this happened here.
