this post was submitted on 19 Jul 2024
830 points (98.5% liked)

…according to a Twitter post by the Chief Information Security Officer of Grand Canyon Education.

So, does anyone else find it odd that the file that caused CrowdStrike to freak out, C-00000291-00000000-00000032.sys, was 42KB of blank/null values, while the replacement file, C-00000291-00000000-00000033.sys, was 35KB and looked like a normal, if not obfuscated, sys/.conf file?

Also, apparently CrowdStrike had at least 5 hours to work on the problem between the time it was discovered and the time it was fixed.

[–] tiramichu@lemm.ee 212 points 4 months ago (46 children)

If I send you on stage at the Olympic Games opening ceremony with a sealed envelope

And I say "This contains your script, just open it and read it"

And then when you open it, the script is blank

You're gonna freak out

[–] Imgonnatrythis@sh.itjust.works 10 points 4 months ago (20 children)

Maybe. But I'd like to think I'd just say something clever like, "says here that this year the pommel horse will be replaced by yours truly!"

[–] Takios@discuss.tchncs.de 17 points 4 months ago (17 children)

The problem is that software cannot deal with unexpected situations the way a human brain can. Computers do exactly what a programmer tells them to do, nothing more, nothing less. So if a situation arises that the programmer hasn't written code for, then there will be a crash.

[–] deadbeef79000@lemmy.nz 2 points 4 months ago (3 children)

Poorly written code can't.

In this case:

  1. Load config data
  2. If data is valid:
    1. Use config data
  3. If data is invalid:
    1. Crash entire OS

Is just poor code.
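To make that concrete, here's a minimal, purely hypothetical sketch in C. This is not CrowdStrike's actual code: the file name is borrowed from the post and `load_config` is an invented helper. It shows the difference between rejecting a bad config and blindly trusting it:

```c
#include <stdio.h>
#include <stdlib.h>

#define CFG_MAX 4096

/* Returns the number of bytes read, or -1 if the file is missing,
 * empty, or nothing but null bytes (like the bad channel file). */
static long load_config(const char *path, unsigned char *buf, size_t cap) {
    FILE *f = fopen(path, "rb");
    if (!f)
        return -1;
    size_t n = fread(buf, 1, cap, f);
    fclose(f);
    for (size_t i = 0; i < n; i++)
        if (buf[i] != 0)
            return (long)n;   /* at least one non-zero byte: treat as plausible */
    return -1;
}

int main(void) {
    unsigned char cfg[CFG_MAX];
    /* File name borrowed from the post, purely as an example. */
    if (load_config("C-00000291-00000000-00000032.sys", cfg, sizeof cfg) < 0) {
        /* This is the "crash entire OS" branch from the list above: in a
         * kernel driver it becomes a bugcheck (BSOD). A friendlier driver
         * keeps running on its previous definitions and reports the bad
         * update instead. */
        fprintf(stderr, "invalid config -- refusing to apply the update\n");
        return EXIT_FAILURE;
    }
    puts("config looks sane, applying update");
    return EXIT_SUCCESS;
}
```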

[–] 5C5C5C@programming.dev 14 points 4 months ago (3 children)

When talking about the driver level, you can't always just proceed to the next thing when an error happens.

Imagine if you went in for open heart surgery but the doctor forgot to put in the new valve while he was in there. He can't just stitch you up and tell you to get on with it, you'll be bleeding away inside.

In this specific case we're talking about security for business devices and critical infrastructure. If a security driver is compromised, in a lot of cases it may legitimately be better for the computer to not run at all, because a security compromise could mean it's open season for hackers on your sensitive device. We've seen hospitals held ransom, we've seen customer data swiped from major businesses. A day of downtime is arguably better than those outcomes.

The real answer here is that CrowdStrike needs a more reliable CI/CD pipeline. A failure of this magnitude is inexcusable and represents a major systemic failure in their development process. But the OS crashing as a result of that systemic failure may actually be the most desirable outcome compared to any of the alternatives.

[–] Morphit@feddit.uk 5 points 4 months ago

This error isn't intentionally crashing because of a security risk, though that could happen. It's a null pointer exception, so there were no static or runtime checks in place that could have prevented it or handled it more gracefully. The bug had presumably been in the driver for a long time, and then a faulty config file came along and triggered the crashes. Better static analysis and testing of the kernel driver is one aspect; how these live config updates are deployed and monitored is another.
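For illustration only (the real driver is closed source, so this is an assumption about the failure mode, not a reconstruction of it), here's roughly how an all-zero definition file can turn into a null pointer dereference in C, and how a one-line guard avoids it:

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* A pretend header at the start of a definition ("channel") file. */
typedef struct {
    uint32_t rule_table_offset;
} Header;

/* If the file is all zeroes, the offset is 0 and this returns NULL. */
static const uint8_t *find_rule_table(const uint8_t *file, const Header *h) {
    return h->rule_table_offset ? file + h->rule_table_offset : NULL;
}

int main(void) {
    uint8_t blob[64] = {0};        /* "42KB of null values", in miniature */
    Header h;
    memcpy(&h, blob, sizeof h);    /* read the header out of the raw bytes */

    const uint8_t *rules = find_rule_table(blob, &h);

    /* Without this guard, 'rules' is NULL for the all-zero file and the
     * printf below dereferences it: a segfault in user space, a
     * bugcheck/BSOD in a kernel driver. */
    if (!rules) {
        fprintf(stderr, "definition file is empty or corrupt, skipping it\n");
        return 1;
    }
    printf("first rule byte: %u\n", rules[0]);
    return 0;
}
```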

[–] deadbeef79000@lemmy.nz 2 points 4 months ago

But the OS crashing as a result of that systemic failure may actually be the most desirable outcome compared to any of the alternatives.

In which case this should've been documented behaviour and probably configurable.

[–] CeeBee_Eh@lemmy.world 1 points 4 months ago* (last edited 4 months ago) (1 children)

That's a bad analogy. CrowdStrike's driver encountering an error isn't the same as losing disk IO or having memory corruption. If CrowdStrike's driver ~~didn't load at all~~ wasn't installed, the system could still boot.

It should absolutely be expected that if the CrowdStrike driver itself encounters an error, there should be a process that allows the system to gracefully recover. The issue is that CrowdStrike likely assumed their code couldn't crash, because they only ever tested with good configs, and thus never considered a graceful failure path for their driver.

[–] 5C5C5C@programming.dev 1 points 4 months ago

I don't doubt that in this case it's both silly and unacceptable that their driver was having this catastrophic failure, and it was probably caused by systemic failure at the company, likely driven by hubris and/or cost-cutting measures.

More generally, though, I wouldn't take it as a given that the system should be allowed to continue if the anti-virus doesn't load properly.

For an enterprise business system, it's entirely plausible that if a crucial anti-virus driver can't load properly then the system itself may be compromised by malware, or at the very least the system may be unacceptably vulnerable to malware if it's allowed to finish booting. At that point the risk of harm that may come from allowing the system to continue booting could outweigh the cost of demanding manual intervention.

In this specific case, given the scale and fallout of the failure, it probably would've been preferable to let the system continue booting to a point where it could receive a new update, but all I'm saying is that I'm not surprised, more generally, that an OS just goes ahead and treats an anti-virus driver failure as BSOD-worthy.

[–] ChairmanMeow@programming.dev 11 points 4 months ago (1 children)

If AV suddenly stops working, it could mean the AV is compromised. A BSOD is a desirable outcome in that case. Booting a compromised system anyway is bad code.

[–] CeeBee_Eh@lemmy.world 1 points 4 months ago (1 children)

You know there's a whole other scenario where the system can simply boot the last known good config.

[–] ChairmanMeow@programming.dev 1 points 4 months ago (1 children)

And what guarantees that the "last known good config" is available, hasn't been compromised, and that there's no malicious actor trying to force the system to use a config that has a vulnerability?

[–] CeeBee_Eh@lemmy.world 1 points 4 months ago* (last edited 4 months ago) (1 children)

The following:

  • An internal backup of previous configs
  • Encrypted copies
  • Massive warnings in the system that the currently loaded config has failed its integrity check

There are loads of other checks that could be employed. This is literally no different from securing the OS itself.

This is essentially a solved problem, but even then it's impossible to make any system 100% secure. As the person you replied to said: "this is poor code"

Edit: just to add, failure of the system to boot should NEVER be the desired outcome, especially when the party implementing that behaviour is a 3rd party service. The people who set up these servers expect them to be running in order for things to work. Nothing is gained from a non-booting critical system, and there is literally EVERYTHING to lose. If it's critical then it must be operational.
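A rough sketch of that idea in C, as a hypothetical illustration rather than anything CrowdStrike actually ships, with a simple FNV-1a checksum standing in for what would really be a signature or MAC:

```c
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint8_t  data[256];
    size_t   len;
    uint32_t checksum;   /* stored alongside the config when it was written */
} Config;

/* FNV-1a, standing in for a real signature/MAC. */
static uint32_t checksum(const uint8_t *p, size_t n) {
    uint32_t h = 2166136261u;
    for (size_t i = 0; i < n; i++) { h ^= p[i]; h *= 16777619u; }
    return h;
}

static int config_ok(const Config *c) {
    return c->len > 0 && checksum(c->data, c->len) == c->checksum;
}

/* Prefer the fresh update; if it fails the check, warn loudly and keep
 * running on the previous good copy instead of refusing to come up. */
static const Config *select_config(const Config *fresh, const Config *last_good) {
    if (config_ok(fresh))
        return fresh;
    fprintf(stderr, "WARNING: new config failed integrity check, using last known good\n");
    return config_ok(last_good) ? last_good : NULL;
}

int main(void) {
    Config good = { .data = "block-known-bad-things", .len = 22 };
    good.checksum = checksum(good.data, good.len);

    Config bad = { .len = 42 };   /* all-zero payload, bogus checksum */

    const Config *active = select_config(&bad, &good);
    if (!active) {
        fprintf(stderr, "no usable config at all -- alert and fail safe\n");
        return 1;
    }
    printf("running with a %zu-byte config\n", active->len);
    return 0;
}
```

In this sketch, falling back only drops protection back by one update; refusing to come up at all is reserved for the case where no copy passes the check.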

[–] ChairmanMeow@programming.dev 1 points 4 months ago (1 children)

The 3rd party service is AV. You do not want to boot a potentially compromised or insecure system that is unable to start its AV properly, and have it potentially access other critical systems. That's a recipe for a perhaps more local but also more painful disaster. It makes sense that a critical enterprise system does not boot if something is off. No AV means the system is a security risk and should not boot and connect to other critical/sensitive systems, period.

These sorts of errors should be alleviated through backup systems and prevented by not auto-updating these sorts of systems.

Sure, for a personal PC I would not necessarily want a BSOD, I'd prefer if it just booted and alerted the user. But for enterprise servers? Best not.

[–] CeeBee_Eh@lemmy.world 1 points 3 months ago (1 children)

Sure, for a personal PC I would not necessarily want a BSOD, I’d prefer if it just booted and alerted the user. But for enterprise servers? Best not.

You have that backwards. I work as a dev and system admin for a medium-sized company. You absolutely do not want any server to ever not boot. You absolutely want to know immediately that there's an issue that needs to be addressed ASAP, but a loss of service generally means loss of revenue and, even worse, a loss of reputation. If your server is briefly at a lower protection level, that's not an issue unless you're actively being targeted and attacked. But if that's the case, then getting notified of an issue gets people to deal with it immediately.

[–] ChairmanMeow@programming.dev 2 points 3 months ago

A single server not booting should not usually lead to a loss of service as you should always run some sort of redundancy.

I'm a dev for a medium-sized PSP that, due to our customers, does occasionally get targeted by malicious actors, including state actors. We build our services to be highly available, e.g. a server not booting would automatically trigger a failover to another one, and if that fails, several alerts go off so that the sysadmins can investigate.

Temporary loss of service does lead to reputational damage, but if contained most of our customers tend to be understanding. However, if a malicious actor could gain entry to our systems the damage could be incredibly severe (depending on what they manage to access of course), so much so that we prefer the service to stop rather than continue in a potentially compromised state. What's worse: service disrupted for an hour or tons of personal data leaked?

Of course, your threat model might be different and a compromised server might not lead to severe damage. But Crowdstrike/Microsoft/whatever may not know that, and thus opt for the most "secure" option, which is to stop the boot process.

[–] Takios@discuss.tchncs.de 10 points 4 months ago (2 children)

I agree that the code is probably poor but I doubt it was a conscious decision to crash the OS.

The code is probably just:

  1. Load config data
  2. Do something with data

And step 2 fails unexpectedly because the data is garbage and was never checked for validity.

[–] Morphit@feddit.uk 3 points 4 months ago

You can still catch the error at runtime and do something appropriate. That might be to say this update might have been tampered with and refuse to boot, but more likely it'd be to just send an error report back to the developers that an unexpected condition is being hit, and continue without loading that one faulty definition file.
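Sketching that "report and skip the one bad file" approach in C (hypothetical: the structures and the second file name are invented for illustration and say nothing about how the Falcon sensor really works):

```c
#include <stdio.h>

typedef struct {
    const char *name;
    const unsigned char *bytes;
    size_t len;
} DefFile;

/* Stand-in parser: rejects empty or all-zero definition files. */
static int parse_definitions(const DefFile *f) {
    for (size_t i = 0; i < f->len; i++)
        if (f->bytes[i] != 0)
            return 0;          /* something non-zero in there: "parses" */
    return -1;
}

static void report_error(const char *name) {
    /* In real life: telemetry back to the vendor, local event log, etc. */
    fprintf(stderr, "definition file %s failed to parse, skipping it\n", name);
}

int main(void) {
    static const unsigned char zeros[16];                  /* the bad update */
    static const unsigned char ok[4] = { 1, 2, 3, 4 };
    DefFile files[] = {
        { "C-00000291-00000000-00000032.sys", zeros, sizeof zeros },
        { "some-other-channel-file.sys", ok, sizeof ok },
    };

    int loaded = 0;
    for (size_t i = 0; i < sizeof files / sizeof files[0]; i++) {
        if (parse_definitions(&files[i]) == 0)
            loaded++;
        else
            report_error(files[i].name);
    }
    printf("loaded %d of %zu definition files\n", loaded, sizeof files / sizeof files[0]);
    return 0;
}
```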

[–] CeeBee_Eh@lemmy.world 1 points 4 months ago (1 children)

If there's an error, use the last known good config. So many systems do this.

[–] ToyDork@preserve.games 2 points 4 months ago

Unfortunately, an OS that covers such cases is a lost monetization opportunity; fuck the system, use a Linux distro, you get the idea. Microsoft makes money off tech support for people too unversed in computers to fix it themselves.
