ModestCrab@lemmy.wtf 49 points 18 hours ago

And they can’t “fix it” without breaking it.

9point6@lemmy.world 19 points 18 hours ago

Well, one of the main problems people are having with AI is that it doesn't get things correct every time.

I mean, if they adjust it away from the correct assessment that modern conservatives are actively malicious morons, it's probably going to be so bent out of shape that it'll be incapable of telling anything remotely truthful.

InvertedParallax@lemm.ee 5 points 14 hours ago

> it’s probably going to be so bent out of shape that it’ll be incapable of telling anything remotely truthful.

Mission accomplished for them.

Lichtblitz@discuss.tchncs.de 9 points 18 hours ago

It's easy to train a model to do exactly what you want, with the seeming "personality" you want. It's just incredibly expensive: you have to vet and filter everything you use to train it, and that's a lot of person-hours, days, years. The only reason models act the way they do is the data that went into training them. If you instead try to retrofit the model after the fact, the restrictions will always be imperfect and more or less easy to break out of.
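
To make that concrete, here's a toy Python sketch of filtering a corpus *before* training rather than bolting restrictions on afterwards. The banned-term check and the tiny corpus are made-up stand-ins for the real, expensive, human-driven vetting described above:

```python
# Toy illustration only: pre-training data filtering.
# The banned-term check is a crude stand-in for real human vetting.

def is_acceptable(doc: str, banned_terms: set[str]) -> bool:
    text = doc.lower()
    return not any(term in text for term in banned_terms)

def filter_corpus(corpus: list[str], banned_terms: set[str]) -> list[str]:
    # What never enters the training set can't surface in the model,
    # so there's no post-hoc guardrail left to jailbreak.
    return [doc for doc in corpus if is_acceptable(doc, banned_terms)]

corpus = ["a perfectly fine article", "an article full of spam-term"]
clean = filter_corpus(corpus, banned_terms={"spam-term"})
print(f"kept {len(clean)} of {len(corpus)} documents")  # kept 1 of 2
```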

catloaf@lemm.ee 7 points 18 hours ago

You can also take a model trained on all kinds of data and tell it "generate ten billion articles of fascist knob-gobbling" and then train your own model on that data.

It'll be complete AI slop, of course, but it's not like you cared about truth or accuracy in the first place.
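
A minimal sketch of that synthetic-data loop, using stand-in stub classes rather than any real model API:

```python
# Hypothetical sketch of "generate slop with one model, train another on it".
# TeacherModel and StudentModel are made-up stubs, not a real library.

import random

class TeacherModel:
    def generate(self, prompt: str) -> str:
        # Stand-in for an LLM call that mass-produces slanted text.
        return f"{prompt}, take #{random.randint(0, 9999)}"

class StudentModel:
    def __init__(self) -> None:
        self.seen: list[str] = []

    def fit_step(self, doc: str) -> None:
        # Stand-in for a gradient update on one document.
        self.seen.append(doc)

teacher = TeacherModel()
synthetic = [teacher.generate("slanted article") for _ in range(10)]

student = StudentModel()
for doc in synthetic:
    student.fit_step(doc)  # the student only ever sees the teacher's output
```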

Lichtblitz@discuss.tchncs.de 0 points 17 hours ago

That's a real-world issue: AIs training on each other's output and devolving because of it (often called "model collapse"). There will come a point when vendors helping themselves to user content and training their AIs on it leaves them worse off.
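
A toy simulation of why that devolution happens: each generation trained only on the previous generation's output is effectively resampling it, and anything not sampled is lost for good. The numbers here are made up:

```python
# Toy illustration of "model collapse": each generation trains only on
# samples of the previous generation's output. All numbers are made up.

import random

def next_generation(corpus: list[str], sample_size: int) -> list[str]:
    # Training on your own output ~ resampling your own distribution;
    # any document that doesn't get sampled is gone for good.
    return random.choices(corpus, k=sample_size)

corpus = [f"doc-{i}" for i in range(1000)]  # generation 0: diverse human data
for gen in range(1, 6):
    corpus = next_generation(corpus, sample_size=1000)
    print(f"gen {gen}: {len(set(corpus))} distinct documents remain")
```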

theoneIno@lemmy.ml 2 points 18 hours ago

it'll break its internal logic for sure