this post was submitted on 31 Aug 2023
540 points (98.0% liked)

I'm rather curious to see how the EU's privacy laws are going to handle this.

(Original article is from Fortune, but Yahoo Finance doesn't have a paywall)

[–] SatanicNotMessianic@lemmy.ml 5 points 1 year ago (1 children)

No, I disagree. Human knowledge is semantic in nature. “A cat walks across a room” is very close, in semantic space, to “The dog walked through the bedroom” even though they don’t share any individual words. Cat maps to dog, across maps to through, bedroom maps to room, and walks maps to walked. We can draw a semantic network showing how “volcano” maps onto “migraine” using associations derived from human subject survey results.

LLMs absolutely have a model of “cats.” “Cat” is a region in an N-dimensional semantic vector space that can be measured against every other concept for proximity, a metric-space measure of relatedness. This idea has been leveraged since the days of latent semantic analysis and all of the work that grew out of that research.
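
To make that proximity measure concrete, here is a minimal sketch using off-the-shelf sentence embeddings. It assumes the sentence-transformers package and the all-MiniLM-L6-v2 model, but any embedding model illustrates the same point:

```python
# A minimal sketch of semantic proximity via sentence embeddings.
# Assumes the sentence-transformers package and the all-MiniLM-L6-v2 model;
# any embedding model would illustrate the same point.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

sentences = [
    "A cat walks across a room",
    "The dog walked through the bedroom",
    "The volcano erupted overnight",
]
embeddings = model.encode(sentences)  # one vector per sentence in an N-dimensional space

# Cosine similarity as the proximity measure: the two pet sentences should
# score higher with each other than either does with the volcano sentence,
# despite sharing no content words.
print(util.cos_sim(embeddings[0], embeddings[1]))
print(util.cos_sim(embeddings[0], embeddings[2]))
```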

For context, I’m thinking in terms of cognitive linguistics as described by researchers like Fauconnier and Lakoff, who explore how conceptual bundling and metaphor define and constrain human thought. Those ideas imply that concepts can be realized in a metric space in which the distance between ideas reflects how different they are, and that those distances can in turn be inferred from contextual usage observed over many occurrences.

The biggest difference between a large model (as primitive as they are, but we’re talking about model-building as a concept here) and human modeling is that human knowledge is embodied. At the end of the day we exist in a physical, social, and informational universe that a model trained on the artifacts can only reproduce as a secondary phenomenon.

But that’s a world apart from saying that the cross-linking and mutual dependencies in a metric concept-space are not remotely analogous between humans and large models.

[–] Veraticus@lib.lgbt -3 points 1 year ago (1 children)

But that’s a world apart from saying that the cross-linking and mutual dependencies in a metric concept-space are not remotely analogous between humans and large models.

It's not a world apart; it is the difference itself. And no, they are not remotely analogous.

When we talk about a "cat," we talk about something we know and experience; something we have a mental model for. And when we speak of cats, we synthesize our actual lived memories and experiences into responses.

When an LLM talks about a "cat," it has no referent; there is no internal model of a cat. "Cat" is simply a word with weights relative to other words. It does not think of a "cat" when it says "cat" because it does not know what a "cat" is and, indeed, cannot think at all. Think of it as a very complicated pachinko machine, as another comment pointed out. The ball you drop is the question, and the pegs it hits on the way down are words. There is no thought or concept behind the words; it is simply chance that creates the output.
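
As a rough sketch of what I mean by weights plus chance (the vocabulary and scores here are invented for illustration; a real LLM does this over tens of thousands of tokens, conditioned on the whole preceding context):

```python
# A toy sketch of "weights plus chance": score each candidate word, turn the
# scores into probabilities, then sample. The vocabulary and scores are
# invented for illustration only.
import numpy as np

rng = np.random.default_rng(0)

vocab = ["cat", "dog", "sat", "mat", "fire"]
logits = np.array([2.1, 1.9, 0.3, 0.2, -1.0])   # made-up scores for each word

probs = np.exp(logits) / np.exp(logits).sum()   # softmax: scores -> probabilities
next_word = rng.choice(vocab, p=probs)          # chance enters at the sampling step

print(dict(zip(vocab, probs.round(3))), "->", next_word)
```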

Unless you truly believe humans are dead machines on the inside and that our responses to prompts are based merely on the likelihood of words being connected, you must also believe that humans and LLMs are completely different on a very fundamental level.

[–] SatanicNotMessianic@lemmy.ml 2 points 1 year ago (1 children)

Could you outline what you think a human cognitive model of “cat” looks like without referring to anything non-cat?

[–] Veraticus@lib.lgbt -2 points 1 year ago (1 children)

Yes; it is a cat. I can think of what that is. Can an LLM?

[–] SatanicNotMessianic@lemmy.ml 2 points 1 year ago (1 children)

Describe it. Imagine I’ve never encountered a cat, because I’m from Mars.

[–] Veraticus@lib.lgbt 0 points 1 year ago* (last edited 1 year ago) (1 children)

You can't! It's like describing fire to someone who's never experienced fire.

This is the root of experience and memory, and why humans are different from LLMs, which, again, can never understand or experience a cat or fire. But the difference is more fundamental than that. To an LLM, there is no difference between fire and cat. They are simply words with frequencies attached that lead to other words. The only difference between them is the positions they occupy in a mathematical model, where sometimes it will output one instead of the other, and nothing more.

Unless you're arguing that my inability to fully express a mental construct to you means I don't experience it myself. Which I think you would agree is absurd?

[–] SatanicNotMessianic@lemmy.ml 2 points 1 year ago (1 children)

I have absolutely no idea what your model is for how humans understand, relate, and communicate concepts.

[–] Veraticus@lib.lgbt -1 points 1 year ago (1 children)

How is that germane to this question? Do you agree humans can experience mental phenomena? Like, do you think I have any mental models at all?

If so, then that is the difference between me and an LLM.

[–] SatanicNotMessianic@lemmy.ml 2 points 1 year ago (1 children)

I think you have a mental model and that it is analogous to the model created in an LLM in that it is representable by a semantic graph/n-dimensional matrix relating concepts that are realized via terms.

You have never in your life encountered a dodo. You know what a dodo is (using the present tense because I’m talking about a concept). It is a bird, so it relates evolutionarily and ecologically to “bird.” It’s flightless, so it relates to “ostrich” and “emu.” It is extinct, so it relates to all of the species-extinction ideas you have. Humans perhaps contributed to the extinction, so it links to human-caused ecological change, which in turn links to human-caused climate change. Human-introduced invasive species are causing ecological change in Australia, and invasive species may have been a major factor in driving the dodo to extinction. People ate them, so maybe in your head it has a relation to wild turkeys. And so on. That’s how minds work. That’s how the human cognitive model of the world works. That’s how LLMs work.
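
As a toy sketch, that association web can be written down as a semantic graph; the links below are just the ones from this paragraph, not a real knowledge base:

```python
# A toy semantic graph built only from the associations in this comment;
# it stands in for the much denser web a mind (or a model) would hold.
semantic_graph = {
    "dodo": ["bird", "flightless", "extinct", "wild turkey"],
    "flightless": ["ostrich", "emu"],
    "extinct": ["species extinction", "human-caused ecological change"],
    "human-caused ecological change": ["human-caused climate change", "invasive species"],
}

# Walking the graph surfaces concepts related to "dodo" at increasing remove.
frontier, seen = ["dodo"], set()
while frontier:
    concept = frontier.pop()
    if concept not in seen:
        seen.add(concept)
        frontier.extend(semantic_graph.get(concept, []))

print(seen)
```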

Visualize an n-dimensional space in which these semantic topics are embedded. The interpretation of the dimensions doesn’t matter; we’re just worried about the distances between concepts. Dodo is closer to turkey than it is to snake. Dodo is closer to snake than it is to rock. Dodo is closer to rock than it is to the feeling of melancholy I get when listening to Tori Amos. We can grasp this intuitively, and we can mathematize it by formally placing the various concepts in a metric space.
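
A minimal sketch of that mathematization, under the same assumed embedding setup as my earlier example (the exact distances, and even the ordering, depend on the model, so treat it as illustrative):

```python
# A minimal sketch of concepts placed in a metric space, using cosine distance.
# Assumes the same sentence-transformers setup as the earlier sketch; the
# exact numbers (and even the ordering) depend on the embedding model.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")
words = ["dodo", "turkey", "snake", "rock"]
vecs = dict(zip(words, model.encode(words)))

def cosine_distance(a, b):
    # 0 means pointing the same way (closely related); larger means less related
    return 1 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

for other in ["turkey", "snake", "rock"]:
    print(f"dodo -> {other}: {cosine_distance(vecs['dodo'], vecs[other]):.3f}")
# Intuition says the distances should increase down this list.
```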

There’s a lot more to unpack, from neural correlates of consciousness to cognitive linguistics and embodied learning using metaphorical reasoning, but that’s kind of the gist of it boiled down to an overly long post.

[–] Veraticus@lib.lgbt -1 points 1 year ago (1 children)

That’s how LLMs work.

This is not how LLMs work. LLMs do not have complex thought webs correlating concepts like birds, flightlessness, extinction, food, and so on. That is how humans work.

An LLM assembles a mathematical model of what word should follow any other word by analyzing terabytes of data. If in its training corpus the nearest word to "dodo" is "attractive," the LLM will almost always tell you that dodos are attractive. This is not because those concepts are actually related to the LLM, or because the LLM is attracted to dodos, or because LLMs have any thoughts at all. It is simply the output of a bunch of math based on word proximity.
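
As a toy illustration of that word-proximity picture, here is a bigram counter over an invented two-sentence corpus. It is vastly simpler than an actual LLM, but it makes the "weights from word proximity" idea concrete:

```python
# A toy bigram model: count which word follows which in a corpus and treat the
# counts as weights. The two-sentence corpus is invented for illustration and
# this is far simpler than an actual LLM, but the "weights from word proximity"
# idea is the same.
from collections import Counter, defaultdict

corpus = "the dodo was an attractive bird . the dodo was a flightless bird .".split()

successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

# The most frequent word to follow "dodo" in this tiny corpus:
print(successors["dodo"].most_common(1))  # [('was', 2)]
```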

Humans have cognition and mental models. LLMs have frequency and word weights. While you have correctly identified that both of these things can be portrayed as n-dimensional matrices, you can also use those tools to describe electrical currents or the movement of stars. But those things contain no more thought, and have no more mental phenomena occurring in them, than LLMs do.

[–] SatanicNotMessianic@lemmy.ml 2 points 1 year ago (1 children)

That is exactly how LLMs work. LLMs embed semantic concepts in metric spaces. That is what we’re talking about.

[–] Veraticus@lib.lgbt 0 points 1 year ago

No, they embed word weights in metric spaces. Human thought is more like semantic concepts in a metric space (though I don't think even that is settled; human thought is not very well understood). Even if the spaces are similar, what's in them is definitely not.