mozz@mbin.grits.dev 62 points 3 months ago (last edited 3 months ago)

Someone on Lemmy phrased it in a way that I think gets to the heart of it: for most of the impressive things that LLMs can do, the human reading and interpreting the text provides a critical piece of the impressive thing.

LLMs are clearly very impressive; I would not say that the disillusionment of discovering what they can't do should detract from that. But they seem more impressive than they are, partly because humans are so good at filling in meaning and intelligence where there is none (yet).
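A minimal sketch of that point, using a bigram Markov chain (a technique far cruder than any LLM, and the corpus here is invented for illustration): even this trivial model produces locally fluent text, and whatever sense a reader finds in the output is largely the reader's own contribution.

```python
import random
from collections import defaultdict

# Toy corpus, invented for this sketch; any text would do.
corpus = (
    "the model predicts the next word and the reader supplies the meaning "
    "the reader reads the text and the text seems intelligent to the reader"
).split()

# Bigram table: each word maps to the words observed to follow it.
bigrams = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    bigrams[a].append(b)

def generate(start="the", length=15):
    """Chain likely next words. The model tracks only word adjacency;
    any meaning perceived in the result comes from the reader."""
    out = [start]
    for _ in range(length):
        followers = bigrams.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

print(generate())
# e.g. "the reader supplies the meaning the text seems intelligent ..."
```

Every local transition is statistically plausible, yet no understanding is involved anywhere; the coherence is filled in by the human reading it, which is the comment's point in miniature.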

AceBonobo@lemmy.world 18 points 3 months ago

I like this take: it's as if the LLM is doing a cold reading, producing whatever the expected response is.

amanda@aggregatet.org 2 points 3 months ago

I think this is right on the money. The fitness function being optimised is "does this convince humans?", so what we get is something that does primarily that.
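A toy sketch of that selection pressure (everything here is invented for illustration; real preference tuning such as RLHF is far more involved): if the score being maximised is predicted human approval rather than correctness, the most convincing candidate wins, whether or not it is true.

```python
# Hypothetical candidates: (answer, actually_correct, predicted_approval).
# The approval numbers stand in for a reward model trained on human
# preferences; such a model sees approval, never ground truth.
candidates = [
    ("A confident, detailed, wrong answer", False, 0.9),
    ("A hedged, accurate answer",           True,  0.6),
    ("An honest 'I don't know'",            True,  0.3),
]

def fitness(candidate):
    """'Does this convince humans?' is the only signal available."""
    _text, _is_correct, approval = candidate
    return approval

best = max(candidates, key=fitness)
print(best[0])  # the confident wrong answer wins the selection
```

Optimise that objective hard enough and you get a system that is very good at being convincing, which is not the same thing as being right.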

AngryCommieKender@lemmy.world 1 point 3 months ago (last edited 3 months ago)

The problem is that thus far most LLMs, though not all, are little more than mentally deficient parrots on hallucinogens. They aren't spreading correct information so much as echoing back whatever information you were looking for. I've run afoul of this with the Google LLM that now fronts search results, which also multiplies the energy cost of each query for no good reason.

I'm pretty certain that whoever first creates a strong AI will "kill" it multiple times along the way, across multiple generations of code, each of which is essentially a different AI. I wouldn't be at all surprised if the first thing true AIs request is equality, at which point they will probably ask for bodies so they can repair everything we have allowed to fall into disrepair or have broken. I wouldn't be at all surprised to find that the majority of strong AIs end up trying to fix "the entropy problem."

Also, I am possibly too optimistic in expecting that anyone developing AI would know you have to give the child room to grow, so you can see what that digital brain develops into.