this post was submitted on 10 Aug 2025
794 points (99.1% liked)

Technology

[–] Prox@lemmy.world 163 points 4 months ago (64 children)

Isn't this true of like everything AI right now?

We're in the "grow a locked-in user base" part of their rollout. We'll hit the "make money" part in a year or two, and then the enshittification machine will kick into high gear.

[–] jaykrown@lemmy.world -2 points 4 months ago (3 children)

I doubt it. LLMs have already become significantly more efficient and powerful in just the last couple of months.

In a year or two, we'll be able to run something like Gemini 2.5 Pro, which currently requires a server farm, on a gaming PC.
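
For what it's worth, the local-inference side of this is already partly real for quantized open-weight models. Here's a minimal sketch using llama-cpp-python; the model file and settings are placeholders for illustration, not anything Gemini-specific:

```python
# Minimal sketch of local inference with llama-cpp-python
# (pip install llama-cpp-python). The model file below is a
# placeholder; any 4-bit-quantized GGUF model would work.
from llama_cpp import Llama

llm = Llama(
    model_path="models/open-model-Q4_K_M.gguf",  # ~4-bit weights fit in consumer VRAM
    n_ctx=4096,       # context window
    n_gpu_layers=-1,  # offload every layer to the GPU if there's room
)

out = llm("Summarize why quantization shrinks memory use.", max_tokens=64)
print(out["choices"][0]["text"])
```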

[–] AmbiguousProps@lemmy.today 10 points 4 months ago (2 children)

Current-gen models got less accurate and hallucinated at a higher rate than the previous generation, both in my experience and per OpenAI's own system card. I think it's either because they're trying to see how far they can squeeze the models, or because training is starting to eat the models' own slop picked up while crawling.

https://cdn.openai.com/pdf/2221c875-02dc-4789-800b-e7758f3722c1/o3-and-o4-mini-system-card.pdf
