this post was submitted on 14 May 2026
450 points (95.9% liked)

Technology

[–] Dumhuvud@programming.dev 9 points 3 hours ago

Software Engineers

Oftentimes I wonder what civil or mechanical engineers think about webdevs-turned-prompt-writers calling themselves "engineers".

[–] Amnesigenic@lemmy.ml 11 points 4 hours ago

Loudly announcing your increasing incompetence to the world seems like a weird career move, maybe consider lying about that?

[–] FosterMolasses@leminal.space 6 points 3 hours ago

Oh no... who could have... possibly... foreseen this...

[–] ferrule@sh.itjust.works 5 points 4 hours ago (1 children)

We use it at work, and I have now disabled all the typeahead stuff. Far too many times it guessed incorrectly at what I was doing, and it made using my TAB key (which inserts the proper two spaces) impossible.

The only place I still use it is for reading and identifying compiler errors. Even then it is only about 50% correct, since most of the time it falls into the "Oh, you're right, X isn't the solution. Have you tried X?" loop. I have had a few bad interns, and even they were smart enough not to forget what they said in their previous sentence.

[–] Smoogs@lemmy.world 3 points 2 hours ago

This is why I've never tossed any of my developer bookmarks.

I've been training new hires to look things up on Stack Overflow and in reference docs to fix code that went wrong after AI mucked it up. They aren't even being taught that in school.

What a sad timeline we are in.

[–] GutterRat42@lemmy.world 2 points 3 hours ago

I want to become a software entomologist, you know, so I can study all their bugs.

[–] shoo@lemmy.world 15 points 5 hours ago (5 children)

Things I've realized while working with AI (Claude code):

  • It's fantastic for very small macros and medium-length scripts. Think dev-ops stuff, pre-commit hooks, transforming data (the sketch after this list is about the size I mean). Keep it small enough to manually review and something you can run without destroying anything important. This can massively boost your codebase QoL. [Double bonus for not wasting tokens solving the same problem over and over]
  • It's decent-to-good at debugging but not consistent with fixes. It can find some UTF encoding edge case that might have taken you an hour-plus, then suggest the dumbest band-aid fix you've ever seen. It's also very good at spinning up unit-test suites for basic edge cases.
  • Due to obvious training bias, it's pretty good with common libraries and cloud platform infrastructure. It could probably help with writing a complex cron call, debugging regex, or fixing an IaC config. On the flip side, it won't bother to use the latest package version or know your niche/new library.
  • It does better with greenfield work because exploring your codebase introduces a ton of bias. It might try to fit in an ugly hack when a refactor that simplifies everything would be way easier.
  • It's absolutely garbage with UI; it just throws together the most disorganized HTML, which isn't reactive or reusable. OK enough for ugly internal stuff, but God help anyone relying on it for more than that.
  • This is setting up to be the biggest rug pull in history. People who buy into it heavily just to save a couple bucks on engineer payroll are going to be fucked when the vendors start ratcheting up the token price.
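
To put a number on "small enough to manually review": something like this hypothetical pre-commit hook is roughly the upper bound I'd hand over without reading every line. The git plumbing is real; the forbidden markers and file handling are just for illustration:

```python
#!/usr/bin/env python3
"""Hypothetical pre-commit hook: refuse commits that still contain debug leftovers."""
import subprocess
import sys

# Markers we never want to land on main (illustrative list, adjust to taste).
FORBIDDEN = ("breakpoint()", "console.log(", "TODO: remove")

def staged_files() -> list[str]:
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    )
    return [line for line in out.stdout.splitlines() if line]

def main() -> int:
    bad = []
    for path in staged_files():
        try:
            text = open(path, encoding="utf-8", errors="ignore").read()
        except OSError:
            continue  # deleted or unreadable files: skip
        for marker in FORBIDDEN:
            if marker in text:
                bad.append(f"{path}: contains {marker!r}")
    if bad:
        print("Commit blocked:\n  " + "\n  ".join(bad))
        return 1
    return 0

if __name__ == "__main__":
    sys.exit(main())
```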

All in all it can be useful when used with care but will never be a magic bullet.

[–] Archr@lemmy.world 1 points 1 hour ago

This is basically what I discovered as well. I have found that AI writes code that is complex and "works" (at least most of the time), but it is heavily over-engineered and often contains design choices that make expanding functionality effectively impossible without a full refactor.

When I tried having the AI fix a test failure, it would either fix the code, fix the test, or change both the test and the code, breaking everything else in the chain.

I no longer use vibe coding because it is just faster/better for me to write the code.

But for tiny scripts it is very good.

[–] Blackmist@feddit.uk 3 points 4 hours ago

Yeah, fully agree with all that.

I've got some godawful spaghetti code I don't fully understand, and it's pretty good at deciphering that and the bizarre labyrinth of code paths leading around it. But it's absolutely no guarantee of working code, and in any project larger than a simple CRUD app you are still going to need programmers who know about things like memory and databases.

It often needs pointing at the solution you want because, as you pointed out, it's fond of dumb band-aids. Like yesterday, when it was trying to hook into mouse-wheel events and create separate threads, when all it needed was an event on the dataset I was using to load a sub-dataset.
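
Stripped of the framework specifics, the shape of that simpler fix is just an ordinary callback on the dataset instead of new threads. Rough sketch only; every class and method name below is made up:

```python
# Hypothetical sketch of "an event on the dataset" versus spawning threads.
# All names are invented to illustrate the shape of the fix, not a real API.
from typing import Callable

class Dataset:
    def __init__(self) -> None:
        self._on_row_selected: list[Callable[[int], None]] = []

    def on_row_selected(self, handler: Callable[[int], None]) -> None:
        """Register a callback instead of watching scroll/mouse-wheel events."""
        self._on_row_selected.append(handler)

    def select_row(self, index: int) -> None:
        for handler in self._on_row_selected:
            handler(index)

def load_sub_dataset(row_index: int) -> None:
    # In the real app this would fetch the child records for that row.
    print(f"loading sub-dataset for row {row_index}")

ds = Dataset()
ds.on_row_selected(load_sub_dataset)  # the whole fix: one subscription
ds.select_row(3)                      # prints: loading sub-dataset for row 3
```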

[–] Davin@lemmy.world 1 points 3 hours ago

Claude can do some medium-complexity sites from scratch relatively quickly. The problem is I've seen so many of these at work, not just from non-engineers but from peers too, that they're easy to spot. AI sites/apps are going to be the new GeoCities.

But when you want to move beyond the basic thing that impresses the C-suite for some reason, it hits a pretty big wall in speed of output and needs a lot more hand-holding.

I fear that the C-suite doesn't really care about quality, just speed and saving money. So while I'm a much better developer than Claude (which is, IMO, the best at the moment), I don't think that makes my job secure. I have to use the AI, and it's getting silly/scary religious here about it. We have to talk about how we used AI and how it's making things better. And to make things worse, I don't see a company that's not drinking the Flavor Aid.

It can be useful, and used right, you can do a lot of things faster. But the expectations from the top don't align with the reality of the product, and we developers are being blamed for the gap.

[–] subtex@lemmy.world 2 points 4 hours ago

This is pretty spot on from my experience as well. Also, the gap in quality between the Opus models and, say, GPT is vast.

100% agree on UI code. Really awful output there regardless of model.

[–] Ledivin@lemmy.world 0 points 3 hours ago* (last edited 3 hours ago) (1 children)

Man, I disagree with all of this. The frontier models are actually good, and basically everyone in my F500 company has been using them. The codebases I work on are super-legacy Java, where it does great despite us having like 75 different patterns for each task, and a massive front-end web repo where it thrives because we've been extremely strict about typing and patterns leading up to this. It even does pretty well across repo boundaries, despite having significantly less context in those situations.

I genuinely will never understand the people saying they suck. Are they worth the price? I have no idea; I've never used them for a personal project. But they are at least as good as a dev with 3-5 years of experience at this point. Our career is boned.

[–] shoo@lemmy.world 1 points 2 hours ago

I don't doubt it's possible to get better consistency but the juice is really not worth the squeeze for me. You end up churning through huge expensive models, orchestrating sub agents, writing out boilerplate hand-holding instructions ("please don't break this, stop trying to commit to main, please lint ffs...").

I don't use it for Java but that would make sense with rigid enterprise patterns and VeryVerboseNamesThatAreEasierForAModelThanAHumanFactoryClazz {...

I don't think our career is boned, more that all the juniors trying to get in are boned. Everyone who knows what's going on transitions to a more hands-off architect role.

But like I said, our tokens are heavily subsidized right now. When they pull the rug, code monkey jobs will start to get listed again (with lower salaries of course).

[–] grrgyle@slrpnk.net 6 points 6 hours ago

I've worked on a cloded codebase. It's not... uh, good.

[–] Kaligalis@lemmy.world 18 points 11 hours ago (1 children)

Nah, AI isn't that good. When you don't properly review every single line twice, you get the most absurd bullshit you've ever seen.
I use Claude Code Opus daily btw.

[–] Nalivai@lemmy.world 11 points 9 hours ago (2 children)

That's the funnest part. You lose your ability to code, you do it by using a thing that isn't even that good, and you don't get anything out of it. Isn't that great?

[–] Blackmist@feddit.uk 1 points 4 hours ago

You speak for yourself, I'm flying through this killer sudoku book...

[–] HaraldvonBlauzahn@feddit.org 2 points 5 hours ago

You forgot that you'll work for less salary because "work has become much simpler, every intern can do it now!/s"

[–] Sam_Bass@lemmy.world 0 points 4 hours ago (1 children)

that happens across all technological industries. when cars first became available, hands were needed to build them. nowadays most of it is done by robots. clerical workers were replaced by computers. and now "artificial intelligence" machines are trying to replace artists, writers, editors, and managers. unfortunately, the people that do those jobs are not just going to keel over and disappear. at what technological point do we stop and say that's enough? there has even been talk of replacing ceos with ai. are shareholders next? nobody will be able to buy those products anymore either

[–] Ledivin@lemmy.world 2 points 3 hours ago (1 children)

unfortunately, the people that do those jobs are not just going to keel over and disappear

Unfortunately?

[–] Sam_Bass@lemmy.world 1 points 3 hours ago

for the ai creators, yes

[–] NocturnalMorning@lemmy.world 12 points 13 hours ago (1 children)

I weep for the environment and our future water and electricity availability.

[–] SchwertImStein@lemmy.dbzer0.com 0 points 1 hour ago (1 children)
[–] NocturnalMorning@lemmy.world 1 points 1 hour ago

Thank you, captain autocorrect.

[–] IEatDaFeesh@lemmy.world -2 points 6 hours ago (2 children)

I doubt anyone can actually calculate a line of best fit using ordinary least squares linear regression by hand with no mistakes, but no one's crying about that. LLMs are just the next generation of calculators and programs.
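
For what it's worth, the "by hand" part is just the closed-form simple-regression formulas: slope = cov(x, y) / var(x) and intercept = mean(y) − slope · mean(x). A throwaway sketch (purely illustrative, nothing here comes from the article):

```python
# Closed-form simple OLS: slope = cov(x, y) / var(x), intercept = mean(y) - slope * mean(x).
def ols_fit(xs: list[float], ys: list[float]) -> tuple[float, float]:
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov_xy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    slope = cov_xy / var_x
    return slope, mean_y - slope * mean_x

# y = 2x + 1 exactly, so the fit should recover slope 2.0 and intercept 1.0.
print(ols_fit([1.0, 2.0, 3.0, 4.0], [3.0, 5.0, 7.0, 9.0]))  # (2.0, 1.0)
```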

[–] Blackmist@feddit.uk 1 points 4 hours ago

The hype actually feels like some of the vintage marketing for BASIC.

"So simple, your boss can do it!"

It's probably been like this every time we go "up" a level of abstraction. We're still needed because complicated shit will always be complicated, and people who make decisions will always need an underling to blame.

[–] ameen272@thelemmy.club 1 points 5 hours ago (1 children)

For the first sentence: Yes, that's why computers are popular. For the second sentence: They're more like the next generation of algorithms, not whole calculators.

I'll proceed to eat your feesh now.

[–] IEatDaFeesh@lemmy.world 1 points 5 minutes ago

That second point is just a distinction without a difference. Being pedantic doesn’t add anything to the conversation.

[–] BenevolentOne@infosec.pub 9 points 17 hours ago

Being able to tell a middle manager that, if these tools are really so great, he can just open the PR himself is pretty awesome though.

[–] normalentrance@lemmy.zip 31 points 22 hours ago* (last edited 18 hours ago) (1 children)

It feels like relying on GPS while driving around. If you know the roads well and just want some help with live traffic or somewhere you haven't been before, it's a decent tool.

If you rely on it because you don't want to think and just want to press the easy button, you're going to have a bad time sooner or later.

Back to software, I think there are a lot of people introducing concepts they don't understand or can't maintain (either because it's poor-quality slop or because it's just too advanced for their current level of understanding). You can only do a few turns like that before you're stuck burning tokens in a loop without moving forward in a meaningful way.

I try to avoid taking the easy route myself unless I've burnt too much time stuck on some small detail. Ultimately I feel it is super important to understand what you are delivering. Whether you write it yourself, copy a Stack Overflow post, or use an LLM, once you commit and push to prod you've got to deal with that crap.

[–] themaninblack@lemmy.world 1 points 10 hours ago

Agree completely, but I wanted to add: you can also get into an incomprehensible mess without vibing. Just follow the serverless Flask tutorial, start writing raw SQL, and away you go!
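
For anyone who hasn't seen that particular mess, it looks roughly like this; the route, table, and column names are hypothetical, and the hand-built SQL string is the point:

```python
# Hypothetical "tutorial + raw SQL" Flask route of the kind that rots on its own:
# SQL assembled by string formatting, no parameters, no data layer.
import sqlite3
from flask import Flask, request

app = Flask(__name__)

@app.route("/orders")
def orders():
    status = request.args.get("status", "open")
    conn = sqlite3.connect("shop.db")
    # Easy to write today, unmaintainable (and injectable) six months from now.
    rows = conn.execute(
        f"SELECT id, total FROM orders WHERE status = '{status}'"
    ).fetchall()
    conn.close()
    return {"orders": [{"id": r[0], "total": r[1]} for r in rows]}
```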

I asked Claude today about why a coworker was getting errors and it almost exploded.

[–] aesthelete@lemmy.world 24 points 23 hours ago

Hot take: they had no ability to code in the first place.

[–] dejected_warp_core@lemmy.world 38 points 1 day ago (3 children)

(X) Doubt

As a Sr. Engineer, I completely get that my situation may be wildly different from what's cited in the article.

Right now, I'm using AI "in the loop" rather than "as the loop". That's a big difference. And I'm getting my ass kicked routinely on review for dumb-ass things that I'm letting slide from AI generated output. And rightly so. Plus, models routinely lead me down sub-optimal blind alleys while dreaming up really stupid ways to fix problems. The level of (re)prompting I have to provide to get decent-quality results converges on supervising a post-grad who has encyclopedic knowledge of software engineering as it exists online, but zero real-world experience. It's both impressive and dangerous as a replacement for software engineering.

In the mode I describe above, I'm not losing the ability to do anything. I can see how one could surrender some coding chops or familiarity with a whole language or stack, in favor of automation. But all you have to do is not do that.

I will say that as a rapid-prototyping technology, it's nothing short of miraculous. I've watched junior engineers knock together medium-weight applications, complete with browser UI/UX and decent workflow, in less than a week. This is great for showing value or putting something semi-functional in front of management and/or customers. But pivoting those prototypes into something maintainable is an utter nightmare. Depending on how beholden to AI and forever prompt-looping with "skills" and MCPs you want to be, I suppose it's possible to just keep mashing the AI button. But at some point, you're going to need to get in there to fix security problems or bugs that elude this workflow. What then?

[–] Nalivai@lemmy.world 4 points 9 hours ago

And I’m getting my ass kicked routinely on review for dumb-ass things that I’m letting slide from AI generated output.

Now imagine if you aren't that experienced and the reviewers aren't that thorough, or, and this is the most depressing part, the review process doesn't exist. Then you get people, even senior engineers, who push that sub-optimal, barely working code, but because their project isn't that complicated, it somehow works, so they continue with it, and after some iterations they end up with code that nobody wrote, nobody knows how to maintain, and nobody reads. But because a lot of modern frameworks are made so that a monkey sitting on a keyboard can make something barely work, a lot of these projects haven't collapsed in on themselves yet.
And that's how you get a generation of programmers who have lost the ability to program.
