this post was submitted on 14 Mar 2026
642 points (98.6% liked)

Technology


KB5077181 was released about a month ago as part of the February Patch Tuesday rollout. When the update first arrived, users reported a wide range of problems, including boot loops, login errors, and installation issues.

Microsoft has now acknowledged another problem linked to the same update. Some affected users see the message “C:\ is not accessible – Access denied” when trying to open the system drive.

top 50 comments
[–] bitjunkie@lemmy.world 10 points 1 day ago (5 children)

Who could have possibly predicted that an operating system with vibe code in the kernel would be complete ass

[–] davetortoise@reddthat.com 1 points 1 day ago

Seems like quite an important drive to have access to. They should probably try to fix that. imo

[–] AeonFelis@lemmy.world 18 points 1 day ago

You don't need C:\. All your data should be in the 365 cloud anyway. Storing files locally in C:\ leads to antipatterns like not paying Microsoft for 365 access (a.k.a "Software Piracy")

[–] Auth@lemmy.world 12 points 1 day ago (1 children)

A lot of people didn't read the issue. This was a problem with the Samsung Connect app.

[–] golden_king@lemmy.dbzer0.com 1 points 1 day ago (1 children)

and people are just blaming microsoft for it

[–] isVeryLoud@lemmy.ca 1 points 17 hours ago

It's funnier that way

[–] JensSpahnpasta@feddit.org 206 points 3 days ago (11 children)

There must be something really seriously wrong at Microsoft. I can understand that Windows patches are complex and that they might break some of those crazy things people are running on their machines. But how is a bug that is killing access to the C:\ drive able to get through testing? WTF are they doing?

[–] Lost_My_Mind@lemmy.world 169 points 3 days ago (1 children)

It's going to come out that there's AI in the code. And the code testing was done by AI, who gave the buggy code the green light.

[–] Semi_Hemi_Demigod@lemmy.world 80 points 3 days ago (6 children)

Or worse: AI is doing the QA as well

[–] ThatGuy46475@lemmy.world 72 points 3 days ago (7 children)

They don’t need testing because they tell the ai to not make any errors

[–] wabafee@lemmy.world 5 points 1 day ago (2 children)

Clearly the fix is boot in Linux

[–] FatVegan@leminal.space 4 points 1 day ago

Microsoft is pretty bad at a lot of things, but you have to hand it to them... they are great at making Linux commercials.

[–] FauxLiving@lemmy.world 91 points 2 days ago (10 children)

I like how, once AI is invented, there is never a problem that isn't AI related.

Microsoft made broken shit before AI, it isn't like they suddenly lost that capability once AI was invented.

[–] WanderingThoughts@europe.pub 41 points 2 days ago (1 children)

It's more like the old adage but extended: "To err is human, to really foul things up you need a computer, but to make an unbelievable mess you need an AI."

[–] thethunderwolf@lemmy.dbzer0.com 11 points 2 days ago

Solution: install linux

Just like I have been calling macOS "NonfreeBSD" I will now be calling Windows 11 "Slop_OS"

[–] DickFiasco@sh.itjust.works 88 points 2 days ago (4 children)

Huh, my computer doesn't seem to be affected.

I'm using Arch, btw.

[–] ExLisper@lemmy.curiana.net 49 points 2 days ago (5 children)

I think I'm affected because I can't access the C: Drive.

I'm using Debian, btw.

[–] nocteb@feddit.org 12 points 2 days ago

merged continuously

[–] lechekaflan@lemmy.world 24 points 2 days ago (1 children)

Install Linux. Problem Solved.

[–] Pirate@feddit.org 16 points 2 days ago* (last edited 2 days ago) (1 children)

It’s hilarious that the issues people think Linux has, like the disk deleting itself, are exactly what happens on Windows lol.

[–] marighost@piefed.social 68 points 2 days ago (1 children)

Microsoft believes the issue may be related to the Samsung Share application, although the exact cause has not yet been confirmed.

30percentofcodewrittenbyai.jpeg

[–] rodneylives@lemmy.world 24 points 2 days ago* (last edited 2 days ago) (2 children)

There was a story going around back in September about a person whose wife used OneDrive on her phone. It had taken it upon itself to copy 25+ GB of data from the phone into OneDrive, despite her only having the free account tier, and then synced it all to their Windows 11 PC. There it completely filled up the small SSD boot drive, leaving so little free disk space that Windows could no longer boot. Here it is.

[–] mkhopper@lemmy.world 12 points 2 days ago (7 children)

Ugh... I'm so tired of "microslop" and "AI slop".

I'm not defending Microsoft in any way, but they were releasing buggy updates long before the rise of AI.

[–] Buddahriffic@lemmy.world 18 points 2 days ago (2 children)

You know what's going on inside the large companies hoping to cash in on the AI thing? Workers are being pushed to use AI, and goals are set targeting x% of all code being AI-generated.

And AI agents are deceptively bad at what they do. They are like the djinn: they will grant the letter of your request but not the spirit. E.g., they love to use helper functions, but they won't necessarily reuse an existing helper, instead writing a new copy each time one is needed.
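As a toy illustration (hypothetical names, not from any real codebase), the duplicate-helper antipattern looks like this: two functionally identical helpers that a deliberate author would have collapsed into one.

```python
# Hypothetical example of the duplicate-helper antipattern: an agent asked
# to "validate the email field" and later "validate the username field"
# may emit a fresh helper each time instead of reusing the first one.

def validate_email_not_empty(value: str) -> str:
    """Helper generated for the first request."""
    if not value.strip():
        raise ValueError("email must not be empty")
    return value.strip()

def validate_username_not_empty(value: str) -> str:
    """Functionally identical helper generated later for a second request."""
    if not value.strip():
        raise ValueError("username must not be empty")
    return value.strip()

# What a deliberate author would write instead: one reusable helper.
def validate_not_empty(value: str, field: str) -> str:
    if not value.strip():
        raise ValueError(f"{field} must not be empty")
    return value.strip()
```

Each copy works on its own, which is exactly why the duplication survives review: nothing is broken, the codebase just quietly accumulates near-identical functions.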

Here's a test that will show that, with all the fancy advancements they've made, they are still just advanced text predictors: pick a task and have an AI start that task and then develop it over several prompts, test and debug it (debug via LLM still). Now ask the LLM to analyse the code it just generated. It will have a lot of notes.

An entity using intelligence would use the same approach to write the code as it does to analyze it. Not so for an LLM, which is just predicting tokens with a giant context window. There is no thought pattern behind it, even when it predicts a "thinking process" before it acts. It just fits your prompt to the best match among all the public git repos it was trained on: commit notes and diffs, bug reports and discussions, Stack Exchange exchanges, and the like, which I'd argue skews towards amateur and beginner programming rather than expert-level work. Plus it now includes other AI-generated code.

So yeah, MS did introduce bugs in the past, even some pretty big ones (that was my original reason for holding back on updates, at least until the enshittification really kicked in). But now they are pushing what is pretty much a subtle-bug generator on the whole company, so it's going to get worse. Admitting it has fundamental problems would pop the AI bubble, though, so instead they keep trying to fix it with bandaids in the hope that it runs out of problems before people decide to stop feeding it money (which still isn't enough, but at least there is revenue).

[–] ExperiencedWinter@lemmy.world 5 points 1 day ago (1 children)

Now ask the LLM to analyse the code it just generated. It will have a lot of notes.

Not only will it have a lot of notes, every time you ask it to analyze the code it will find new ones. Real engineers are telling me this is a good code-review tool, but it can't even find the same issues reliably. I don't understand how adding a bunch of non-deterministic tooling is supposed to make my code better.

[–] Buddahriffic@lemmy.world 1 points 1 day ago (2 children)

Though on that note, I don't think having an LLM review your code is useless; if it's code you care about, read the response and think about whether you agree. Sometimes it has useful pointers, sometimes it is full of shit.

[–] ExperiencedWinter@lemmy.world 2 points 1 day ago (1 children)

So when do I stop asking the LLM to take another look? If it finds a new issue on the second or third or fourth check am I supposed to just sit here and keep asking it to "pretty please take another look and don't miss anything this time"?

I'm not saying it's a useless tool, it's just not a replacement for a human code review at all.

[–] Buddahriffic@lemmy.world 1 points 1 day ago

Stop when you feel like it, just like with any other verification method. In software development you don't really prove that there are no problems; it's more "try to think of every problem you can and do your best to make sure it doesn't have any of those" plus "just run it a lot and fix whatever comes up".

An LLM is just another approach to finding potential problems. And it will eventually say everything looks good, though not because everything is good but because that response appears in its training data and eventually becomes the best-correlated set of tokens (assuming it doesn't get stuck flipping between two or more sides of a debated issue).

[–] JcbAzPx@lemmy.world 1 points 1 day ago (1 children)

That sounds worse than useless. It would be better to fail utterly than make up shit that you have to waste time parsing through.

[–] Buddahriffic@lemmy.world 1 points 1 day ago

It helps in the sense that once you've looked at code enough times, you stop really seeing it. So many times I've debugged issues where I had looked right at an error that was obvious in hindsight but just couldn't see it. And that's in cases where I knew there was an issue somewhere in the code.

Or for optimization advice, if you have a good idea of how efficiency works, it's usually not difficult to filter the ideas it gives you into "worthwhile", "worth investigating", "probably won't help anything", and "will make things worse".

It's like a brainstorming buddy. And just like with your own ideas, you need to evaluate them or at least remember to test to see if it actually does work better than what was there before.

[–] SoleInvictus@lemmy.blahaj.zone 11 points 2 days ago (1 children)

You're spot on regarding how AI operates.

AI is stupid story time!

I recently helped a friend with a self-hosted VPN problem. He had been using a free trial of Gemini Pro to try to fix it himself but gave up after THREE HOURS. It never tried to help him diagnose the issue, but instead kept coming up with elaborate fixes with names that suggested they were known issues, like The MTU Traffic Jam, The Packet Collision Quandary, and, my favorite, The Alpine Ridge Controller Trap. Then it would run him through an equally elaborate "fix". When that didn't work, it would use the failure conditions to propose a new, very serious sounding pile of bullshit and the process would repeat.

I fixed it in about fifteen minutes, most of that time spent undoing all the unnecessary static routing, port forwarding, and driver rollbacks it had him do. The solution? He had a typo in the port number in his peer config.
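The comment doesn't say which VPN it was, but assuming something WireGuard-like, the whole failure mode is one wrong digit in the peer's endpoint. A hypothetical sketch (placeholder key, made-up hostname):

```
# Hypothetical WireGuard-style peer section illustrating the kind of
# typo described above; suppose the server actually listens on 51820.
[Peer]
PublicKey = <server-public-key>
AllowedIPs = 0.0.0.0/0
Endpoint = vpn.example.com:51280   # typo: should be 51820
```

A bad endpoint port just means the handshake silently never completes, which is why it masquerades as every exotic "known issue" an LLM can name.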

I can't deny that LLMs are full of useful knowledge. I read through its output, and all of its suggestions absolutely would have quickly and efficiently fixed their accompanying issues, even the Thunderbolt/PCIe bridging one, if the real problem had been any of them. They're just garbage at applying that information.

[–] Buddahriffic@lemmy.world 1 points 1 day ago

Yeah, they don't do analysis but can fool people because they can regurgitate someone else's analysis from their training data.

It could just be matching a pattern like "I have a network problem with <symptom>. Your issue is <known issue> and you need to <fix>," where the problem and solution are related to each other but the problem isn't related to the symptoms, because the correlation with "network problem" ends up being stronger than the correlation with the description of the symptoms.

And that specific problem could likely be solved just by adding a description of that process to the training data. But there will be endless different versions of it that won't be fixed by that bandaid.

[–] PalmTreeIsBestTree@lemmy.world 11 points 2 days ago

They’ve earned that name at this point

[–] ChickenLadyLovesLife@lemmy.world 13 points 2 days ago (1 children)

What would happen if you trained an AI entirely and solely on Microslop's knowledge base?

[–] maplesaga@lemmy.world 9 points 2 days ago (1 children)

It would be stuck on thousands of missing articles and unable to go back due to a bunch of redirects, like a sketchy page from the 90s.

[–] fne8w2ah@lemmy.world 10 points 2 days ago

This should be yet another opportunity for Windows refugees to come to the Kingdom of Torvalds.
