this post was submitted on 08 Jun 2025
833 points (95.4% liked)

Technology


LOOK MAA I AM ON FRONT PAGE

[–] mfed1122@discuss.tchncs.de 13 points 3 weeks ago* (last edited 3 weeks ago) (11 children)

This sort of thing has been published a lot for a while now, but why is it assumed that this isn't what human reasoning consists of? Isn't all our reasoning ultimately a form of pattern memorization? I sure feel like it is. So to me, all these studies showing that they're "just" memorizing patterns don't prove anything on their own, unless they're coupled with research on the human brain proving that we do something different.
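As a toy illustration of what pure pattern memorization looks like (an invented sketch, not the models from the article), here's a bigram predictor that can only regurgitate continuations it has already seen:

```python
from collections import defaultdict, Counter

class BigramMemorizer:
    """Next-token 'predictor' that is nothing but memorized patterns."""

    def __init__(self):
        self.counts = defaultdict(Counter)

    def train(self, tokens):
        # Memorize which token followed which
        for prev, nxt in zip(tokens, tokens[1:]):
            self.counts[prev][nxt] += 1

    def predict(self, token):
        # Regurgitate the most frequently memorized continuation
        if token not in self.counts:
            return None  # nothing memorized, and no fallback "reasoning"
        return self.counts[token].most_common(1)[0][0]

model = BigramMemorizer()
model.train("the cat sat on the mat".split())
print(model.predict("cat"))  # "sat" -- seen before, so it's recalled
print(model.predict("dog"))  # None -- never seen, so no answer at all
```

Whether human (or LLM) reasoning is just a vastly scaled-up, fuzzier version of this lookup is exactly the open question.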

[–] LesserAbe@lemmy.world 11 points 3 weeks ago (2 children)

Agreed. We don't seem to have a very cohesive idea of what human consciousness is or how it works.

[–] Endmaker@ani.social 7 points 3 weeks ago (1 children)

You've hit the nail on the head.

Personally, I wish there were more progress in our understanding of human intelligence.

[–] LonstedBrowryBased@lemm.ee 12 points 3 weeks ago (14 children)

Yah, of course they do; they're computers.

[–] communist@lemmy.frozeninferno.xyz 11 points 3 weeks ago* (last edited 3 weeks ago) (16 children)

I think it's important to note (I'm not an LLM, I know that phrase triggers you to assume I am) that they haven't proven this is an inherent architectural issue, which I think would be the next step for the assertion.

Do we know that they don't reason and are incapable of it, or do we just know that for X problems they jump to memorized solutions? Is it possible to create an arrangement of weights that can genuinely reason, even if the current models don't? That's the big question that needs answering. It's still possible that we just haven't properly incentivized reasoning over memorization during training.

If someone can objectively answer "no" to that, the bubble collapses.

[–] ZILtoid1991@lemmy.world 11 points 3 weeks ago (3 children)

Thank you, Captain Obvious! Only those who think LLMs are like "little people in the computer" didn't already know this.

[–] Blaster_M@lemmy.world 10 points 3 weeks ago* (last edited 3 weeks ago) (1 children)

Would like a link to the original research paper, instead of a link to a screenshot of a screenshot.

[–] melsaskca@lemmy.ca 9 points 3 weeks ago (1 children)

It's all "one instruction at a time," regardless of high processor speeds and words like "intelligent" being bandied about. Discussions of "reason" should fall into the same bucket as "sentience."

[–] surph_ninja@lemmy.world 8 points 3 weeks ago (38 children)

You assume humans do the opposite? We literally institutionalize humans who don't follow set patterns.

[–] Harbinger01173430@lemmy.world 8 points 3 weeks ago

XD so, like a regular school/university student that just wants to get passing grades?

[–] BlaueHeiligenBlume@feddit.org 8 points 3 weeks ago (1 children)

Of course; that's obvious to anyone with basic knowledge of neural networks, no?
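For what it's worth, the "basic knowledge" being invoked is that a network's forward pass is nothing but arithmetic over learned weights. A minimal sketch (all weights and sizes invented for illustration):

```python
import math

def forward(x, w1, b1, w2, b2):
    """One hidden layer: weighted sums plus a squashing function.
    No symbols, no rules -- just arithmetic over whatever patterns
    the training process baked into the weights."""
    hidden = [math.tanh(sum(wi * xi for wi, xi in zip(row, x)) + b)
              for row, b in zip(w1, b1)]
    return [sum(wi * hi for wi, hi in zip(row, hidden)) + b
            for row, b in zip(w2, b2)]

# Tiny made-up weights: one input, one hidden unit, one output
y = forward([1.0], w1=[[0.0]], b1=[0.0], w2=[[1.0]], b2=[0.5])
print(y)  # [0.5]: tanh(0) = 0, then 0 * 1 + 0.5
```

Everything an LLM does at inference is a (much larger) stack of exactly this kind of computation.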
