this post was submitted on 08 May 2026
471 points (98.8% liked)

Technology

[–] Tollana1234567@lemmy.today 4 points 23 hours ago (1 children)

and it still can't make anything good out of that slop.

[–] maegul@lemmy.ml 6 points 22 hours ago (1 children)

Interestingly, I don’t think I share this sentiment.

I’m no fan and personally don’t use AI (I barely touched it in the early ChatGPT days). But people do use it to do things that successfully fulfil their original purpose.

I’ve seen it. Maybe I’ve seen the successes and not the failures in some cases. And I’ve certainly seen badly failed attempts to use it, but in those cases I’m happy to ascribe the failure substantially to a misapplication of the tool (which to be fair certainly invites gross misapplication).

My point, though, is that I don’t think an absolutist “AI is never useful” position is persuasive any more, nor is it strictly accurate.

Which, in my view, makes addressing the “rest of the situation” all the more fundamental. Indeed, I think everything other than its efficacy was always the important part.

Part of the problem is that ethical arguments are difficult for people and many just switch off when it comes to the common good. Which is all of course part of the problem too.

But I think that’s the gravity of the situation right now: our collective instincts may be misaligned for the moment, our personal habits vulnerable from our prior corruptions, and our societal architectures already mutated, perhaps beyond repair, and therefore ill-equipped for this.

Doomy, yes, but you’ve got to fight the fight you’re in, not the one you wish you’d won.

Another way I could put this counter is that so much of what’s bad about AI was bad before AI, and that society badly mishandled technology from 2005–2020. Whether AI “works” or not doesn’t matter. So long as it can fit into the same shape and meet the same urges that tech did in 2005–2020, it will be adopted. But if the consequences of its adoption are graver than what came before, then the whole stack of that history needs to be addressed.

[–] timwa@lemmy.snowgoons.ro 3 points 18 hours ago

One of the problems the anti-AI crowd has with protesting this use case is that they don't seem to appreciate that the enshittification happened long before AI.

Actual software engineers make up maybe 5% of the profession; the other 95% have been turning out slop that they don't even understand themselves for years. In that environment, an AI that does the same but at least doesn't complain when asked to do rework doesn't seem so bad.