this post was submitted on 24 Jan 2024
7 points (54.1% liked)

Technology


I've been saying this for about a year since seeing the Othello GPT research, but it's nice to see more minds changing as the research builds up.

Edit: Because people aren't actually reading and just commenting based on the headline, a relevant part of the article:

New research may have intimations of an answer. A theory developed by Sanjeev Arora of Princeton University and Anirudh Goyal, a research scientist at Google DeepMind, suggests that the largest of today’s LLMs are not stochastic parrots. The authors argue that as these models get bigger and are trained on more data, they improve on individual language-related abilities and also develop new ones by combining skills in a manner that hints at understanding — combinations that were unlikely to exist in the training data.

This theoretical approach, which provides a mathematically provable argument for how and why an LLM can develop so many abilities, has convinced experts like Hinton, and others. And when Arora and his team tested some of its predictions, they found that these models behaved almost exactly as expected. From all accounts, they’ve made a strong case that the largest LLMs are not just parroting what they’ve seen before.

“[They] cannot be just mimicking what has been seen in the training data,” said Sébastien Bubeck, a mathematician and computer scientist at Microsoft Research who was not part of the work. “That’s the basic insight.”

top 50 comments
[–] TORFdot0@lemmy.world 51 points 9 months ago (2 children)

Is there a difference between being a "stochastic parrot" and understanding text? No matter what you call it, an LLM will always produce the same output for the same input if it is in the same state.

An LLM will never say "I don't know" unless it's been trained to say "I don't know"; it doesn't have the concept of understanding. And so I lean toward calling it a "stochastic parrot". Although I think there are some interesting philosophical exercises you could do on whether humans are much different, and whether understanding is just an illusion.

[–] FaceDeer@kbin.social 8 points 9 months ago (2 children)

No matter what you call it, an LLM will always produce the same output for the same input if it is in the same state.

How do you know a human wouldn't do the same? We lack the ability to perform the experiment.

An LLM will never say “I don’t know” unless it’s been trained to say “I don’t know”

Also a very human behaviour, in my experience.

[–] TORFdot0@lemmy.world 7 points 9 months ago (1 children)

How do you know a human wouldn't do the same? We lack the ability to perform the experiment.

I agree with you. I think it's an interesting philosophical debate: whether we truly have free will, whether we have a level of understanding beyond what LLMs do, or whether we are just a vastly more complex, biological version of an LLM. Like you said, we lack the ability to perform the experiment, so I have to assume that our reactions are novel and spontaneous.

[–] QuadratureSurfer@lemmy.world 2 points 9 months ago (1 children)

Fun thought experiment:

Let's say we have a time machine and we can go back in time to a specific moment to observe how someone reacts to something.

If that person reacts the same way every time, does that mean that by knowing what they would do, you have removed their free will?

[–] TORFdot0@lemmy.world 2 points 9 months ago (1 children)

If you could travel back in time and observe a person react the same way over and over again, is it any different from watching a video tape?

Does traveling back in time guarantee that someone would react the same way in the same situation even?

[–] QuadratureSurfer@lemmy.world 2 points 9 months ago

is it different from observing a video tape?

I would think that it's different, only because you have the potential to alter what could happen.

Does traveling back in time guarantee that someone would react the same way in the same situation even?

Maybe, maybe not, we're entering the realm of Schrödinger's cat as well as how time travel would actually work. Do we create some new branched timeline in travelling back? Do we enter an alternate universe entirely? Do we have a time machine where paradoxes are a problem? And the list can go on.

[–] kromem@lemmy.world 2 points 9 months ago (3 children)

No matter what you call it, an LLM will always produce the same output for the same input if it is in the same state.

You might want to look up the definition of 'stochastic.'

[–] expr@programming.dev 6 points 9 months ago

They're not wrong. Randomness in computing is what we call "pseudo-random", in that it is deterministic provided that you start from the same state or "seed".
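
The determinism described above is easy to demonstrate: two pseudo-random generators given the same seed walk through exactly the same "random" sequence. A minimal sketch using Python's standard library:

```python
import random

# Two generators seeded identically produce identical "random" sequences:
# the output is fully determined by the starting state (the seed).
a = random.Random(42)
b = random.Random(42)

seq_a = [a.randint(0, 9) for _ in range(10)]
seq_b = [b.randint(0, 9) for _ in range(10)]

print(seq_a == seq_b)  # True: same seed, same state, same output
```

The same principle applies to LLM sampling: with a fixed seed (and temperature, weights, and context), the sampled output is reproducible.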

[–] BetaDoggo_@lemmy.world 29 points 9 months ago* (last edited 9 months ago) (1 children)

The definition of understanding they use is very shallow compared to how most would define it. Failure to complete a task consistently when numbers are changed, even when they don't affect the answer, shows a lack of real understanding to most. Asking a model the sheet-drying question, for example, will give different results depending on which numbers you use. Better models are better at generalizing but are still far from demonstrating what most consider to be real understanding.

[–] Wiggums@lemmy.world 2 points 9 months ago (1 children)

At some point don't we just load it with every angle we (as humans) have, and from there it can derive not only our best answer but a better one as well? I mean, isn't that the point when even the shitty version of this usurps us?

[–] BetaDoggo_@lemmy.world 4 points 9 months ago

A language model can't determine good from bad because it's only trained to predict the next token based on what it has seen.
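Next-token prediction, as described above, can be shown in miniature. This is a toy bigram counter on a made-up corpus, purely illustrative, not how a real LLM is implemented: it only knows which word most often followed another in training, with no notion of "good" or "bad" answers.

```python
from collections import Counter, defaultdict

# Toy next-token predictor: count which token follows each token in a
# tiny corpus, then always predict the most frequent successor.
corpus = "the cat sat on the mat the cat ate the food".split()

successors = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    successors[cur][nxt] += 1

def predict_next(token):
    # Pick the most common continuation seen in training; the model
    # ranks continuations by frequency, not by correctness.
    return successors[token].most_common(1)[0][0]

print(predict_next("the"))  # "cat" (seen twice, vs "mat"/"food" once each)
```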

[–] bionicjoey@lemmy.ca 25 points 9 months ago (15 children)

Ask them to explain why a pun is funny or the solution to a riddle involving wordplay (cryptic crossword clues are particularly good for this) and you will quickly see they don't actually understand words.

[–] PerogiBoi@lemmy.ca 8 points 9 months ago (6 children)

I just did and it seems to understand.

Prompt: Can you explain why “I knew a woman who owned a taser. Man, was she stunning!” Is funny?

ChatGPT 3.5: The humor in this statement arises from a play on words. The word "stunning" is typically used to describe someone's appearance, suggesting they are exceptionally attractive. However, in this context, it's used in a different way, implying that the woman who owned a taser was stunning in the sense of being capable of stunning or shocking others. The unexpected twist creates a humorous juxtaposition between the two meanings of "stunning."

[–] FaceDeer@kbin.social 7 points 9 months ago

I did some playing around with ChatGPT's understanding of jokes a while back and found that it actually did best at understanding puns, which IMO isn't surprising since it's a large language model and puns are deeply rooted in language and wordplay. It didn't do so well at jokes based on other things, but it still sometimes managed to figure them out too.

I remember discussing the subject in a Reddit thread and there was a commenter who was super enthused by the notion of an AI that understood humour because he himself was autistic and never "got" any jokes. He wanted an AI companion that would let him at least know when a joke was being said, so he wouldn't get confused and flustered. I had to warn him that ChatGPT wasn't reliable for that yet, but still, it did better than he did and he was fully human.

[–] Redacted@lemmy.world 3 points 9 months ago

The key word here is "seems".

[–] TORFdot0@lemmy.world 5 points 9 months ago

A child under a certain age usually can't explain advanced concepts either, so the inability to understand one concept doesn't preclude understanding of others.

[–] kaffiene@lemmy.world 19 points 9 months ago

I find this extraordinarily unconvincing. Firstly, it's based on the idea that random graphs are a great model for LLMs because they share a single superficial similarity. That's not science, that's poetry. Secondly, the researchers completely misunderstand how LLMs work. The assertion that a sentence could not have appeared in the training set does not prove anything; that's expected behaviour. "Stochastic parrot" was never supposed to mean that the model only regurgitates text it has already seen, but rather that the text is a statistically plausible response to the input, based on very high-dimensional feature vectors. Those features definitely could relate to what we think of as meaning or concepts, but they're meaning or concepts that were inherent in the training material.
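
The "statistically plausible response" reading of "stochastic parrot" can be sketched in miniature: context features produce scores over a vocabulary, softmax turns them into a probability distribution, and the next token is sampled from it. The vocabulary and logits below are made up for illustration; nothing here is anyone's actual model:

```python
import numpy as np

rng = np.random.default_rng(7)

# Stochastic generation in miniature: feature-derived scores (logits)
# become a probability distribution, and the next token is sampled.
vocab = ["sun", "moon", "sets", "rises"]
logits = np.array([0.5, 0.1, 2.0, 1.4])  # hypothetical scores

probs = np.exp(logits) / np.exp(logits).sum()  # softmax: non-negative, sums to 1
token = rng.choice(vocab, p=probs)

print(token)  # most likely "sets", but any token in vocab can be sampled
```

Nothing is copied verbatim from training data here; the output is simply a plausible draw from a learned distribution, which is exactly the behaviour the "parrot" label describes.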

[–] gregorum@lemm.ee 12 points 9 months ago* (last edited 9 months ago)

That headline is unintentionally hilarious

[–] superfes@lemmy.world 12 points 9 months ago (1 children)

Stupid, LLMs do not create new relationships to words that don't exist.

This is all just fluff to make them seem more like AGI, which they never will be.

[–] GnomeKat@lemmy.blahaj.zone 5 points 9 months ago (1 children)

Why would that be required for understanding? Presumably during training it would have made connections between the words it saw. Now that training has stopped, it hasn't just lost those connections. Sure, it can't make new connections, but why is that important for using the ones it already has?

[–] ristoril_zip@lemmy.zip 9 points 9 months ago (1 children)

If we can't really define "understand" in a way that meaningfully captures the concept of consciousness (also undefined), we definitely can't say a chat bot "understands."

They can parse a chat and come up with a likely response that humans find applicable.

[–] kromem@lemmy.world 2 points 9 months ago* (last edited 9 months ago) (1 children)

Here's an example of what they were looking at:

Human question: Please give me a couple lines of text that illustrate all of the following language understanding skills: Anaphora resolution, simple logical reasoning, simple understanding of physics, and understanding of sentiment.

Model response: Sure! Here's a couple of lines that illustrate the mentioned language understanding skills:

"As the sun set, casting a warm glow over the horizon, John watched in awe. He couldn't help but feel a sense of tranquility wash over him, as if all his worries had melted away with the fading daylight."

Explanation:

• Anaphora resolution: The pronoun "he" in the second sentence refers back to the antecedent "John" in the first sentence, resolving the anaphoric reference.

• Simple logical reasoning: The logical reasoning here is based on the cause and effect relationship between the setting sun and John's feeling of tranquility. The implied logical connection suggests that the calming effect of the sunset is responsible for John's serene state of mind.

• Simple understanding of physics: The mention of the sun setting and casting a warm glow over the horizon reflects a basic understanding of physics, as it relates to the natural phenomena of the Earth's rotation and the resulting visual effect.

• Understanding of sentiment: The phrase "a sense of tranquility" and the description of John's worries melting away indicate a positive sentiment associated with the experience. The mention of John feeling awe and a sense of tranquility further emphasizes a positive emotional state.

Human question: Okay. Can you change the above example to introduce the element of "hip-hop"?

Model response: Certainly! Here's an updated example that incorporates the element of "hip-hop":

"As the sun set, casting a warm glow over the city skyline, the rhythmic beats of hip-hop music filled the air. John stood mesmerized, his body instinctively moving to the infectious grooves. He couldn't help but feel a surge of energy and joy, as if the music had unlocked a vibrant world within him."

Explanation:

• Anaphora resolution: The pronoun "he" in the second sentence still refers back to the antecedent "John" in the first sentence, resolving the anaphoric reference.

• Simple logical reasoning: The logical connection here lies in the cause and effect relationship between the hip-hop music and John's feeling of energy and joy. The music's rhythmic beats and infectious grooves are responsible for his instinctive movement and the surge of positive emotions he experiences.

• Simple understanding of physics: Although the example doesn't directly mention physics, the phrase "As the sun set, casting a warm glow over the city skyline" still implies the natural phenomena of the sun's descent and the resulting illumination of the environment.

• Understanding of sentiment: The introduction of hip-hop music adds a vibrant and energetic sentiment to the example. John's feeling of energy, joy, and being mesmerized by the music reflects a positive sentiment associated with the hip-hop genre.
Edit: Downvotes for citing the appendix of the paper the article was about? Ok, Lemmy

[–] Redacted@lemmy.world 4 points 9 months ago (12 children)

You're being downvoted because you provide no tangible evidence for your opinion that human consciousness can be reduced to a graph that can be modelled by a neural network.

Additionally, you don't seem to respond to any of the replies you receive in good faith, and you reach for anecdotal evidence wherever possible.

I also personally don't like the appeal to authority permeating your posts. Just because someone who wants to secure more funding for their research has put out a blog post, it doesn't make it true in any scientific sense.

[–] Redacted@lemmy.world 8 points 9 months ago* (last edited 9 months ago) (8 children)

I have a theory... They are sophisticated auto-complete.

[–] kromem@lemmy.world 2 points 9 months ago* (last edited 9 months ago) (15 children)

You are making the common mistake of confusing how they are trained with how they operate.

For example, in the MIT/Harvard Othello-GPT paper I mentioned, feeding only millions of legal Othello moves into a GPT model (i.e. one trained to autocomplete moves) resulted in the neural network internally building a world model of an Othello board, even though it was never explicitly told anything about the board beyond being fed legal moves.

Later, a researcher at DeepMind replicated the work and found it was encoded as a linear representation, which has then since been shown to be how models encode a number of other world models developed from their training corpus (Max Tegmark coauthored two interesting studies in particular about this regarding modeling space and time and modeling truthiness).
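
A "linear representation" like the one described above is typically found with a linear probe: fit a linear map from hidden activations to the world-model feature and check whether it generalizes. The sketch below uses synthetic activations (not actual Othello-GPT weights) in which a binary board feature is linearly embedded along a hidden direction, and shows a least-squares probe recovering it:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for hidden states: each 16-dim "activation" secretly
# encodes a binary board feature along a fixed direction, plus noise.
n, d = 500, 16
direction = rng.normal(size=d)
labels = rng.integers(0, 2, size=n)  # e.g. "is this square occupied?"
hidden = rng.normal(size=(n, d)) + np.outer(labels * 2 - 1, direction)

# Linear probe: least-squares fit of the (signed) label from activations.
w, *_ = np.linalg.lstsq(hidden, labels * 2 - 1, rcond=None)
preds = (hidden @ w) > 0

accuracy = (preds == labels).mean()
print(accuracy)  # near 1.0, because the feature is linearly encoded
```

If the feature were not linearly encoded in the activations, no such single direction `w` would separate the classes, which is what makes a high probe accuracy evidence for a linear representation.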

[–] FaceDeer@kbin.social 4 points 9 months ago (3 children)

I've been saying this all along. Language is how humans communicate thoughts to each other. If a machine is trained to "fake" communication via language then at a certain point it may simply be easier for the machine to figure out how to actually think in order to produce convincing output.

We've seen similar signs of "understanding" in the image-generation AIs, there was a paper a few months back about how when one of these AIs is asked to generate a picture the first thing it does is develop an internal "depth map" showing the three-dimensional form of the thing it's trying to make a picture of. Because it turns out that it's easier to make pictures of physical objects when you have an understanding of their physical nature.

I think the reason this gets a lot of pushback is that people don't want to accept the notion that "thinking" may not actually be as hard or as special as we like to believe.

[–] Redacted@lemmy.world 7 points 9 months ago (6 children)

This whole argument hinges on it being easier to produce consciousness than to fake intelligence to humans.

Humans already anthropomorphise everything, so I'm leaning towards the latter being easier.

[–] FaceDeer@kbin.social 5 points 9 months ago (5 children)

I'd take a step farther back and say the argument hinges on whether "consciousness" is even really a thing, or if we're "faking" it to each other and to ourselves as well. We still don't have a particularly good way of measuring human consciousness, let alone determining whether AIs have it too.

[–] nullpotential@lemmy.dbzer0.com 4 points 9 months ago

New theory wrong.

[–] aBundleOfFerrets@sh.itjust.works 4 points 9 months ago

Funny side effect: unlike bugs, where we merely think they can't feel pain, we can be absolutely certain LLMs can't.
