[–] frightful_hobgoblin@lemmy.ml 4 points 1 month ago (12 children)

They don't understand, though. A lot of AI evangelists seem to smooth over that detail: it is an LLM, not something that "understands" language, video, or images.

We're into the Chinese Room problem. "Understand" is not a well-defined or measurable thing. I don't see how it could be measured except by looking at inputs and outputs.

[–] booty@hexbear.net 7 points 1 month ago (5 children)

I don't see how it could be measured except by looking at inputs and outputs.

Okay, then consider this: when you input something into an LLM and regenerate the response a few times, it can produce outputs of completely opposite (and equally incorrect) meaning. That proves it has no functional understanding of anything. It simply outputs random noise that sometimes looks similar to what someone who did understand the content would produce.
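
To make that concrete, here's a toy sketch of why regeneration diverges: the model samples each token from a probability distribution, so two near-tied candidates like "yes" and "no" can each win on different runs. The vocabulary and scores below are invented purely for illustration; they are not from any real model.

```python
# Toy illustration: sampling from a next-token distribution.
# LLM decoding is (by default) stochastic, so regenerating the same
# prompt can yield contradictory answers when candidates are near-tied.
import math
import random

def softmax(logits, temperature=1.0):
    """Convert raw scores into a probability distribution."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-token candidates after the prompt "The answer is".
vocab = ["yes", "no", "maybe", "unknown"]
logits = [2.0, 1.8, 0.5, 0.2]  # "yes" and "no" are almost equally likely

probs = softmax(logits)

# Regenerate five times: opposite answers ("yes" vs. "no") can both
# appear, because sampling follows the distribution, not the truth.
for i in range(5):
    token = random.choices(vocab, weights=probs, k=1)[0]
    print(f"regeneration {i + 1}: The answer is {token}")
```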

[–] frightful_hobgoblin@lemmy.ml 1 points 1 month ago (4 children)

Right. Like talking to someone in total delirium, whose responses are random and don't fit the question.

LLMs are not like that.

[–] booty@hexbear.net 3 points 1 month ago (1 children)

You don't seem to have read my comment. Please address what I said.

[–] frightful_hobgoblin@lemmy.ml 1 points 1 month ago (1 children)

when you input something into an LLM and regenerate the response a few times, it can produce outputs of completely opposite (and equally incorrect) meaning

Can you paste an example of this error?

[–] booty@hexbear.net 4 points 1 month ago* (last edited 1 month ago) (1 children)

Have you ever used an LLM?

Here's a screenshot I took after spending literally 10 minutes with ChatGPT as it very confidently stated incorrect answers to a simple question, over and over (from this thread).

Not only is it completely incapable of coming up with a very simple correct answer to a very simple question, it is also completely incapable of responding coherently to the fact that none of its answers are correct. Humans don't behave this way. Nothing that understands what is being said would respond this way. It responds this way because it has no understanding of the meaning of anything being said. It is responding based on the statistical likelihoods of words and phrases following one another, like a Markov chain but slightly more advanced.
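
For anyone who hasn't seen one, here's a minimal word-level Markov chain sketch. The tiny corpus is made up for illustration, and real LLMs condition on much longer contexts with learned weights rather than raw counts, but the sample-the-next-word principle is the same:

```python
# Minimal word-level Markov chain: each next word is chosen purely by
# how often it followed the current word in the training text.
import random
from collections import defaultdict

corpus = (
    "the model outputs the next word "
    "the model outputs random noise "
    "the model has no understanding of the question"
).split()

# Count which word follows which.
transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

def generate(start, length=8):
    """Walk the chain, picking each next word by observed frequency."""
    word, out = start, [start]
    for _ in range(length):
        followers = transitions.get(word)
        if not followers:
            break
        word = random.choice(followers)
        out.append(word)
    return " ".join(out)

print(generate("the"))
print(generate("the"))  # a second run can wander in a different direction
```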

[–] UlyssesT@hexbear.net 2 points 1 month ago* (last edited 1 month ago)

You were arguing with such an incredibly misanthropic piece of shit that of course they see a sufficient number of TI-88s bolted together as a direct analogue to self-aware, conscious human intelligence.

Look at how that piece of shit treats other human beings: like the inferior "meat computers" that such a techbro mindset reduces them to.

https://hexbear.net/comment/5438712
