this post was submitted on 10 Mar 2026
582 points (99.3% liked)

Technology


Amazon’s ecommerce business has summoned a large group of engineers to a meeting on Tuesday for a “deep dive” into a spate of outages, including incidents tied to the use of AI coding tools.

The online retail giant said there had been a “trend of incidents” in recent months, characterized by a “high blast radius” and “Gen-AI assisted changes” among other factors, according to a briefing note for the meeting seen by the FT.

Under “contributing factors” the note included “novel GenAI usage for which best practices and safeguards are not yet fully established.”

top 50 comments
[–] laranis@lemmy.zip 5 points 52 minutes ago

How in the glorious fuck was this not a thing from the start? In a system this big and this critical, all code should be reviewed by cognizant individuals. Anyone who thought an LLM would be perfect and not need code review has their head so far up their ass they can see out their own pee hole.

[–] merc@sh.itjust.works 23 points 2 hours ago (2 children)

What is AI good at? Creating thousands of lines of code that look plausibly correct in seconds.

What are humans bad at? Reviewing changes containing thousands of lines of plausibly correct code.

This is a great way to force senior devs to take the blame for things. But, if they actually want to avoid outages rather than just assign blame for them, they'll need submitters to make small, focused changes that the submitter understands and can explain clearly. Wouldn't it be simpler just to say "No AI"?

[–] Earthman_Jim@lemmy.zip 4 points 1 hour ago* (last edited 1 hour ago)

AI's greatest feature in the eyes of the Epstein class is the ability to shift responsibility. People will do all kinds of fucked up shit if they can shift the blame to someone else, and AI is the perfect bag holder.

Just ask the school full of little girls in Iran that was likely picked as a target by an AI working from out-of-date intel saying the site was a barracks. Why bother confirming the target with current intel from the ground when no one's going to take the blame anyway?

Or, I suppose, add extra work by walking an AI tool through making small incremental changes.

[–] WraithGear@lemmy.world 5 points 1 hour ago* (last edited 1 hour ago) (1 children)

Or, hear me out: they can build it themselves so they don't have to chase hallucinations. As a matter of fact, let's cut the AI out of the project and leave it to summarizing emails.

[–] laranis@lemmy.zip 2 points 50 minutes ago

This, 1000x. You think that senior dev got to that level hoping one day all they'd have to do is evaluate randomly generated code? No! They want to create, build, design, integrate, share. Cut out the useless middle step and get back to the work these professionals have dedicated their careers to.

[–] Bytemeister@lemmy.world 5 points 1 hour ago (2 children)

AI is an assistant, not a replacement. It amazes me that Amazon, Microsoft, Google, and all these "tech leader" companies are going to make the same tech fuckup multiple times.

[–] laranis@lemmy.zip 1 points 49 minutes ago

I wonder what the turnover rate for executives is. I bet it's about 8 years.

[–] Earthman_Jim@lemmy.zip 2 points 1 hour ago* (last edited 1 hour ago)

If only the lessons were painful for them and not just us/the workers.

[–] nightlily@leminal.space 1 points 49 minutes ago

If my job ends up being reviewing AI code spammed at me by vibe coding juniors all day, I’m joining a nunnery.

[–] pedroapero@lemmy.ml 45 points 4 hours ago* (last edited 4 hours ago) (1 children)

Yes, so now when there's a success, it gets attributed to AI. When there's an outage, that's the fault of humans not reviewing correctly. These senior engineers will get fucked in all scenarios.

[–] IratePirate@feddit.org 24 points 4 hours ago* (last edited 4 hours ago) (1 children)

Precisely. From Cory Doctorow's latest, very insightful essay on AI, where he talks about the promise of AI replacing 9 out of 10 radiologists:

"if the AI misses a tumor, this will be the human radiologist's fault, because they are the 'human in the loop.' It's their signature on the diagnosis."

This is a reverse centaur, and it's a specific kind of reverse-centaur: it's what Dan Davies calls an "accountability sink." The radiologist's job isn't really to oversee the AI's work, it's to take the blame for the AI's mistakes.

[–] kimara@sopuli.xyz -2 points 2 hours ago* (last edited 39 minutes ago) (3 children)

I don't think it's fair to compare LLM code generation to machine vision in this way. These are very different "AI"s. Not necessarily disagreeing with Doctorow, but this is an important distinction.

[–] BlameTheAntifa@lemmy.world 4 points 1 hour ago (1 children)

How the machines work doesn't matter. The situation is using a machine to replace human expertise while making sure a human still takes responsibility for mistakes that aren't theirs. It's not the owning class who is at risk from their machines' mistakes; it's the owning class's wage slaves who are at risk.

[–] kimara@sopuli.xyz 1 points 34 minutes ago (1 children)

My understanding is that tumor-detecting machine vision is generally considered useful in addition to the radiologist's expertise. It basically outputs "yes", "maybe", or "no", which respects that expertise far more than generating approximately-right code that the coder now has to validate.
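
A toy sketch of the shape I mean (purely illustrative, with made-up thresholds, not any real radiology API): the tool's entire output is a bounded second opinion for the radiologist to weigh, not an artifact they have to debug.

```python
# Purely illustrative, not a real radiology system; the thresholds
# are made up. The assistive tool's whole output is a bounded flag.
def second_opinion(tumor_probability: float) -> str:
    """Map a model's confidence score to a three-way advisory flag."""
    if tumor_probability >= 0.85:
        return "yes"     # flag for an immediate second read
    if tumor_probability >= 0.40:
        return "maybe"   # worth another look
    return "no"          # nothing flagged

# The radiologist's own read stays primary; the flag just prompts a re-check.
print(second_opinion(0.62))  # -> "maybe"
```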

This is why I wouldn't equate these tools. LLM code generation is marketed to do much more than machine vision for tumor detection.

[–] AnarchistArtificer@slrpnk.net 1 points 3 minutes ago

Cory Doctorow actually goes more in depth on the radiologist example in a post from last year:

'If my Kaiser hospital bought some AI radiology tools and told its radiologists: "Hey folks, here's the deal. Today, you're processing about 100 x-rays per day. From now on, we're going to get an instantaneous second opinion from the AI, and if the AI thinks you've missed a tumor, we want you to go back and have another look, even if that means you're only processing 98 x-rays per day. That's fine, we just care about finding all those tumors."

If that's what they said, I'd be delighted. But no one is investing hundreds of billions in AI companies because they think AI will make radiology more expensive, not even if that also makes radiology more accurate. The market's bet on AI is that an AI salesman will visit the CEO of Kaiser and make this pitch: "Look, you fire 9/10s of your radiologists, saving $20m/year, you give us $10m/year, and you net $10m/year, and the remaining radiologists' job will be to oversee the diagnoses the AI makes at superhuman speed, and somehow remain vigilant as they do so, despite the fact that the AI is usually right, except when it's catastrophically wrong.

"And if the AI misses a tumor, this will be the human radiologist's fault, because they are the 'human in the loop.' It's their signature on the diagnosis."

This is a reverse centaur, and it's a specific kind of reverse-centaur: it's what Dan Davies calls an "accountability sink." The radiologist's job isn't really to oversee the AI's work, it's to take the blame for the AI's mistakes.'

In short, we definitely could (and indeed should) be using tools like tumor-detecting machine vision as something that helps humans build a better world for humans. But we've seen, time and time again, across countless fields, that it never works out that way.

That's because this isn't a problem with the technology of AI, but with the fucked up sociotechnical and economic systems that govern how this tech is used, who gets to use it, who it gets used on, whose consent is required for those uses, and, most significant of all, who gets to profit.

Not us, that's for sure!

[–] Frenchgeek@lemmy.ml 2 points 1 hour ago

The kind of AI doesn't matter in this situation. Hell, it could be a magic talking rock™ and it would change nothing: management is using a person to avoid blaming their shiny, expensive new toy.

[–] Earthman_Jim@lemmy.zip 1 points 1 hour ago

"this is an important distinction"

it really isn't

[–] DarrinBrunner@lemmy.world 6 points 4 hours ago

Couldn't they, I don't know, just go back to people writing the code, and stop using AI to do something it clearly can't handle? Just an idea.

I guess they've invested (thrown) so much money at this thing that they're determined to make it work. Also, I know they've gone into insanely deep debt, and if it doesn't work they're going to lose an eye-watering amount of money, and perhaps the bubble bursting will be the catalyst that brings down the entire world economy.

Oh, so yeah, they do have great incentive to make this work, but I don't see it happening. As usual, they fuck up and the rest of us pay the bill. None of the billionaires will suffer any more than a loss of face over this. Even if they've broken laws, all they ever get is a small fine and a slap on the back: "Better luck next time, ol' boy!"

[–] Simulation6@sopuli.xyz 31 points 9 hours ago (1 children)

I always treated a code review like a dissertation defense. Why did you choose to implement the requirement this way? Answers like "I found a post on Stack Overflow" or "the AI told me to" would only move the question back one step: why did you choose to accept that answer?
I was a very unpopular reviewer.

[–] PlutoniumAcid@lemmy.world 8 points 6 hours ago

Likely, but you did not let poor code pass. That is valuable.

[–] GreenKnight23@lemmy.world 15 points 8 hours ago

as a sr, I would just keep rejecting them and make AI find "reasons" why.

[–] pirate2377@lemmy.zip 14 points 11 hours ago

Keep taking Ls, Amazon!

[–] BrianTheeBiscuiteer@lemmy.world 130 points 16 hours ago (4 children)

Junior and mid-level engineers will now require more senior engineers to sign off any AI-assisted changes, Treadwell added.

So instead of getting a human to write it and AI peer-reviewing it, you want the most expensive-per-hour developers to look at stuff a human didn't write and the other engineers can't explain? Yeah, this is where the efficiency gains disappear.

I read stuff from one of my Jrs all the time and most of it is made with AI. I don't understand most of it and neither does the dev. He keeps saying how much he's learned from AI, but pair programming with him is the pits. I try to say stuff like, "Oops! Looks like we forgot the packages." And then, 10 secs of silence later, "So you can go to line 24 and type..."
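
To make "we forgot the packages" concrete, here's a made-up minimal example (hypothetical names, not anything from our actual codebase): the generated snippet calls a library it never imports, so it reads plausibly in review and dies on the first run.

```python
# Hypothetical illustration, not real project code. What the assistant
# produced -- pandas is used but never imported, so the snippet looks
# plausible in review and fails immediately at runtime:
#
#     def load_report(path):
#         return pd.read_csv(path)   # NameError: name 'pd' is not defined
#
# The "forgotten package" fix is a single import at the top of the file:
import pandas as pd

def load_report(path):
    # Works now that the dependency is actually imported.
    return pd.read_csv(path)
```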

[–] Aceticon@lemmy.dbzer0.com 17 points 6 hours ago* (last edited 6 hours ago) (1 children)

Just to add to this:

  • When a senior dev reviews code from a more junior dev and gives feedback the more junior person (generally) learns from it.
  • When a senior dev reviews code from an AI, the AI does not learn from it.

So beyond the first-order effect you pointed out - using more time from more experienced, and hence more expensive, people - there is a second-order effect from the loss of improvement in how code gets made, one that is both persistent and cumulative over time: every review of and bit of feedback on a junior dev's code permanently reduces the future need for it, whilst every review of an AI's code has no impact at all on the future need for it.

Given enough time, the total time spent on reviews and feedback for code from junior devs is limited, because they eventually learn enough not to make such mistakes; the total time spent on reviews and feedback for code from an AI is unlimited, because it will never improve.

[–] BrianTheeBiscuiteer@lemmy.world 6 points 4 hours ago (1 children)

Seniors reviewing code is fine, but only when, as someone else mentioned, the code's author is learning from the review. The AI doesn't learn at all, and the Jr dev probably learns very little because they didn't understand the original code. Reviewing AI code often turns into me rewriting most of it.

[–] Aceticon@lemmy.dbzer0.com 2 points 3 hours ago* (last edited 3 hours ago)

Exactly.

The best way to learn is to do the work yourself, with all the mistakes that come from not knowing certain things, having wrong expectations, or forgetting to account for certain situations, and then get feedback on those mistakes, especially from people who know enough to understand the reasons behind them.

Another good way to learn is by looking through good-quality work from somebody else, though it's much less effective.

I suspect that getting feedback on the work of "somebody" else (the AI), work which isn't even especially good, yields very little learning.

So, linking back to my previous post: even though the AI process wastes a lot of a more senior person's time, not only does the AI (which did most of the implementation) not learn at all, but the junior dev who's supposed to oversee and correct the AI learns very little, and thus improves very little. Meanwhile, in the process without AI, the same expenditure of senior dev time teaches the junior dev a lot more, and since that's the person doing most of the work, it yields a lot more improvement the next time around, reducing the future expenditure of senior dev time.

I read stuff from one of my Jr’s all the time and most of it is made with AI. I don’t understand most of it and neither does the Dev. He keeps saying how much he’s learned from AI but peer programming with him is the pits. I try to say stuff like, “Oops! Looks like we forgot the packages.” And then 10 secs of silence later, “So you can go to line 24 and type…”

So what kind of code is that? Code Lyoko? Are they using more advanced code than their training would suggest?

[–] RandallFlagg@lemmy.world 19 points 11 hours ago

Lol, I would be your Jr, except instead of 10 seconds of silence it would be 10 seconds of me frantically clacking on the keyboard, typing "add a block to this for these packages with proper syntax, I forgot to include it" to Claude. Then I'd of course be all discombobulated and shit, so I wouldn't even bother to open the code; I'd just ctrl-c about 100 lines somewhere around the general area of where I think the new code block should go, then ctrl-v the whole thing into the chat box, because why not, the company is paying out the dick for these tokens, so might as well use them.

And two weeks later half our website crashes, which results in you having to go to a meeting where management tells you to keep a closer eye on me. Which is basically what you'd already been doing before AI, but now you get to babysit me and Claude!
