Tell me again how you haven't worked as a sysadmin.
This wasn't some switchable feature. The only way I've seen (per some comments on Hacker News/Y Combinator) to stop this software from auto-updating as it chooses is by blocking the update servers at the firewall or through DNS black holing.
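To spell out what that looks like in practice: you point DNS (or each box's hosts file) for the agent's update hostnames at a dead address, or drop the traffic at the perimeter. A rough sketch of the hosts-file approach, and to be clear, these hostnames are placeholders I made up, not CrowdStrike's actual update endpoints:

    # Sketch only: hostnames below are hypothetical stand-ins, not
    # CrowdStrike's real update endpoints. The idea is to generate
    # hosts-file / DNS-sinkhole entries that black-hole them.
    UPDATE_HOSTS = [
        "updates.falcon-example.invalid",   # hypothetical
        "content.falcon-example.invalid",   # hypothetical
    ]

    SINKHOLE_IP = "0.0.0.0"

    def sinkhole_entries(hosts, sink=SINKHOLE_IP):
        """Return hosts-file style lines pointing each update host at the sinkhole."""
        return [f"{sink} {host}" for host in hosts]

    if __name__ == "__main__":
        for line in sinkhole_entries(UPDATE_HOSTS):
            print(line)

Same idea at the firewall, just expressed as drop rules for the resolved IPs instead of a DNS answer.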
And yes, they chose to use this software. Look. CrowdStrike bought a fucking Super Bowl ad, a bunch of executives drank the Kool-Aid, and a lot of tech departments were told that they'd be rolling this software out. That's just how corporate IT works sometimes.
How exactly is Microsoft responsible for this? It's a kernel level driver that intercepts system calls, and the software updated itself.
This software was crashing Linux distros last month too, but that didn't make headlines because it affected fewer machines.
Sometimes there are options that are reasonable for individual users that don't scale well to enterprise environments.
Also, this effectively gives attackers a secondary attack surface, in addition to the normal remote access technologies that require the machine to be up and running to work.
There's a shit ton more reasons than that, but in short: I highly doubt anyone suggesting a company just up and leave the MS ecosystem has spent any considerable amount of time in a sysadmin position.
This had nothing to do with MS, other than their OS being impacted. Not their software that broke, not an update pushed out by their update system. This is an entirely third party piece of software that installs at the kernel level, deeper than MS could reasonably police, even if it somehow was their responsibility.
This same piece of software was crashing certain Linux distros last month, but it didn't make headlines due to the limited scope.
From what I understand, CrowdStrike doesn't have built-in functionality for that.
One admin was saying that they had to figure out which IPs were the update server vs the rest of the functionality servers, block the update server at the company firewall, and then set up special rules to let the traffic through to batches of their machines.
So... yeah. Lot of work, especially if you're somewhere where the sysadmin and firewall duties are split across teams. Or if you're somewhere that is understaffed and overworked. Spend time putting out fires, or jerry-rigging a custom way to do staggered updates on a piece of software that runs largely as a black box?
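For anyone curious what that jerry-rigged staggered rollout actually amounts to: block the update server for everyone, carve the fleet into rings, and only open a firewall path for one ring at a time. A rough sketch of the batching side, where the update-server IP, the endpoint addresses, and the printed "rules" are all made up for illustration (pseudo-ACL lines, not any particular firewall's syntax):

    # Sketch of ring-based rollout via firewall exceptions. Everything
    # below (IPs, ring size, rule format) is hypothetical.
    from itertools import islice

    UPDATE_SERVER = "203.0.113.10"   # placeholder update-server IP, blocked by default
    ENDPOINTS = [f"10.20.{i // 256}.{i % 256}" for i in range(1, 501)]   # fake fleet
    RING_SIZE = 100

    def rings(hosts, size):
        """Yield the fleet in fixed-size batches (rollout rings)."""
        it = iter(hosts)
        while batch := list(islice(it, size)):
            yield batch

    def allow_rules(ring, server=UPDATE_SERVER):
        """Pseudo-ACL lines letting only this ring reach the update server."""
        return [f"permit tcp host {src} host {server} eq 443" for src in ring]

    if __name__ == "__main__":
        for n, ring in enumerate(rings(ENDPOINTS, RING_SIZE), start=1):
            print(f"# --- ring {n}: {len(ring)} hosts ---")
            for rule in allow_rules(ring[:3]):   # only a few per ring, to keep output short
                print(rule)

You let ring 1 update, wait to see if anything catches fire, then move the exception to ring 2, and so on. Which is exactly the kind of release engineering the vendor should be doing for you, not something every customer should have to bolt on at the firewall.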
Edit: re-read your comment. My bad, I think you meant it was a failure of that on CrowdStrike's end. Yeah, absolutely.
No, regulatory auditors have boxes that need checking, regardless of the reality of the technical infrastructure.
This didn't go through Windows Update. It went through the CrowdStrike software directly.
Yep, and it's harder to fix Windows VMs in Azure that are affected, because you can't boot them into safe mode the same way you can with a physical machine.
The company is not Windows-based; they offer clients and agents for Linux and Mac as well (and there are some scattered reports they fucked up some of their Linux customers like this last month).
The Windows version of their software is what is broken.
It also assumes that reimaging is always an option.
Yes, every company should have networked storage enforced specifically for issues like this, so no user data would be lost, but there's often a gap between "should" and "has been able to find the time and get the required business-side buy-in to make it happen".
Also, users constantly find new ways to do non-standard, non-supported things with business critical data.