[–] heavydust@sh.itjust.works 85 points 1 day ago (3 children)

Asking the machine to think for you makes you stupid. Incredible.

And no, you can’t compare that to a calculator or any other program. A calculator will not do the whole reasoning for you.

[–] SpaceNoodle@lemmy.world 52 points 1 day ago (1 children)

Ironically, an LLM won't do any actual reasoning, either.

[–] ThePyroPython@lemmy.world 11 points 1 day ago (1 children)

Nope, it's just a black box's best guess as to what the reasoning should look like.

Sort of like in an exam, where you give your best guess for an answer, jot down some "working out" that you think looks sort-of correct, and scrape together enough marks to pass.

Now imagine you're not just trying to pass one question in one test in one subject, but one question out of millions of possible questions in hundreds of thousands of possible subjects, AND you experience time 5 million times slower than the examiner, AND you had 3 years (in examiner time) to practice your guesswork.

That's it. That's all this AI bullshit is doing. And people are racing to build the best monkey typewriter that requires the fewest bananas to work.

[–] SpaceNoodle@lemmy.world 9 points 1 day ago

Not even that. It's just a weighted model of what a sentence should look like, with no concept of factual correctness.
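
To make that concrete, here's a toy sketch of what "a weighted model of what a sentence should look like" means in practice (the weights below are entirely made up for illustration, not taken from any real model):

```python
import random

# Made-up next-word weights, standing in for what an LLM learns from
# text statistics. Note there is no "is this true?" field anywhere:
# the model only knows what tends to follow what.
next_word_weights = {
    "the sky is": {"blue": 0.70, "falling": 0.25, "green": 0.05},
}

def continue_sentence(prefix: str) -> str:
    """Pick a next word in proportion to its weight; truth is never consulted."""
    options = next_word_weights[prefix]
    word = random.choices(list(options), weights=list(options.values()))[0]
    return f"{prefix} {word}"

print(continue_sentence("the sky is"))  # usually "blue", sometimes "falling"
```

Scale that table up to billions of weights conditioned on whole contexts instead of three-word prefixes and you have the gist: plausibility, not correctness.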

[–] JustAnotherKay@lemmy.world 10 points 1 day ago

To agree with you in different words, I would argue that you can compare it to a calculator. Without the reasoning, a calculator is basically useless. I can tell you that 1.1 * (22 * 12 * 3) = 871.2, but from that information alone it's impossible to know what the number means or why it's important. An LLM works the same way: I give it an equation (a "prompt") and it does some math to give me a response, which is useless without context. It doesn't actually answer the words in the prompt; it does (at best) guesswork based on the "value" of the text.
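
For what it's worth, the arithmetic above checks out, and a two-line sketch makes the point nicely: the program happily produces the number, but nothing in it can say what 871.2 means:

```python
# The calculator's whole job: compute the number. Its meaning
# (dollars? hours? widgets?) has to come from the human.
result = 1.1 * (22 * 12 * 3)
print(round(result, 1))  # 871.2
```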