this post was submitted on 09 Nov 2023
LocalLLaMA
Community for discussing Llama, the family of large language models created by Meta AI.
founded 1 year ago
I am not sure it isn't sentient.
An ant is sentient and it's not going to tell you how many brothers Sally has either.
The real question is whether consciousness sparks into existence while all that transformer math resolves, or whether that is completely unrelated and real-life conscious brains are conscious due to entirely different emergent phenomena.
To me, a big reason LLMs aren't conscious is that they only respond to user input, generate output, and then stop. They don't talk to themselves. They aren't sitting there contemplating the meaning of their existence while you are away from the keyboard.
Most people assume it must work like human brains and human consciousness. Can it not just be its own thing, with the qualities it has and the ones it doesn't?
LLMs clearly don't have a stateful, human-like consciousness, but they do have some semantic understanding and build a world model when they are large enough. Image models have some grasp of 3D space.
They are neither sentient nor a stochastic parrot.
What you are going to realize is that consciousness doesn't exist at all.
It's going to be a rude wake-up call to a lot of humanity.
Lol jk. If there's one thing GPT-humans are good at, it's denial. They'll say the AI math is of the devil and retreat back into their 3000-year-old bronze-age cult churches, continuing to pretend they are magical beings.
Wouldn't that be a Black Mirror episode? Almost want to live to see it.