this post was submitted on 17 Feb 2026
I've been told that LLMs are hitting hard diminishing returns: you need exponentially increasing amounts of compute for linear gains in model performance. The cost of marginal improvements is bumping into practical limits.
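That diminishing-returns claim can be sketched with a toy power law (loss falling as a power of compute, the usual shape of scaling-law fits). The constants here are made up for illustration, not fitted values from any real model:

```python
# Toy scaling law: loss ≈ a * C**(-alpha) + floor.
# a, alpha, and floor are illustrative numbers, not real fits.
a, alpha, floor = 10.0, 0.05, 1.7

def loss(compute):
    return a * compute ** -alpha + floor

# Each 10x jump in compute buys a smaller absolute improvement.
for c in [1e21, 1e22, 1e23, 1e24]:
    print(f"{c:.0e} FLOPs -> loss {loss(c):.3f}")
```

Run it and the gap between successive lines keeps shrinking even though each step costs ten times as much: exponential spend, sublinear payoff.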
They can't turn into general AI; that's not how LLMs work. So the labs are throwing money after nothing.
Architectural improvements could help, but the big players can't even move past "basic" problems like high-temperature sampling. Corporate development is far more conservative than you'd think.
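For anyone unfamiliar with why temperature matters: sampling divides the model's logits by a temperature before the softmax, so a higher temperature flattens the distribution and gives low-probability (often garbage) tokens far more weight. A minimal sketch with made-up logits:

```python
import math

def softmax(logits, temperature):
    """Temperature-scaled softmax over a list of logits."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [5.0, 2.0, 0.0]  # one strong candidate, two weak ones
print(softmax(logits, 0.7))  # peaked: the top token dominates
print(softmax(logits, 1.5))  # flatter: tail tokens gain probability mass
```

At T=0.7 the top token takes nearly all the mass; at T=1.5 the weakest token becomes dozens of times more likely, which is exactly the kind of behavior that's awkward to control in production.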
And it's not getting better. Compare the "all star" ego tech-bro teams at Meta, OpenAI, and the like with the researchers who quit.
The Chinese labs are testing some more interesting optimizations (and actually publishing papers on them), but they're still fairly conservative all things considered. They seem content with LLMs as modest coding assistants and document processors; you don't hear anything about AGI in their presentations.
It's like saying that adding more rungs to a ladder will make it fly.
Fundamentally, LLMs are not AI.