LocalLLaMA

Community to discuss Llama, the family of large language models created by Meta AI.

[–] SuddenDragonfly8125@alien.top 1 points 10 months ago (3 children)

Y'know, at the time I figured this guy, with his background and experience, would be able to distinguish normal from abnormal LLM behavior.

But with the way many people treat GPT3.5/GPT4, I think I've changed my mind. People can know exactly what it is (i.e. a computer program) and still be fooled by its responses.

[–] scubawankenobi@alien.top 1 points 10 months ago

> exactly what it is (i.e. a computer program)

I get what you mean, but I think it's more productive not to lump a neural network (an inference model whose "logic" largely comes from automated/self-training) in with "just a computer program". There's historical context & understanding behind the word "program": a human actually designs the IF-THEN-ELSE logic, knows what will execute, and understands that it will do what it was programmed to do. Neural network inference is modeled after (& named after) the human brain, with weighted neurons, and we lack an understanding of most (if not all!) of the "logic" ('program') that is executing under the hood, as they say.

Note: I'm not at all saying that GPT-3.5/4 is sentient, just that referring to LLMs as simply "a computer program" misses a lot of their nuance and complexity.
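
To make the distinction concrete, here's a toy sketch in Python (the spam-filter framing is made up purely for illustration, not anything from a real model): in a classic program a human writes every rule and can read them back, while in even a single artificial neuron the "rules" are numeric weights produced by training.

```python
# Toy contrast between hand-written logic and learned "logic".
# The spam-filter example is hypothetical, purely for illustration.

def spam_filter_program(subject: str) -> bool:
    """A classic program: a human wrote each rule and can read them back."""
    if "free money" in subject.lower():
        return True
    if subject.isupper():  # ALL-CAPS subject line
        return True
    return False

def spam_filter_neuron(features: list[float],
                       weights: list[float],
                       bias: float) -> bool:
    """A single artificial neuron: the behavior lives in the numeric weights,
    which were set by training rather than by a programmer writing
    IF-THEN-ELSE branches."""
    activation = sum(f * w for f, w in zip(features, weights)) + bias
    return activation > 0.0

# Same interface, very different kind of "program":
print(spam_filter_program("FREE MONEY INSIDE"))           # True, and we can say exactly why
print(spam_filter_neuron([1.0, 0.7], [2.3, -0.4], -1.0))  # True, but the "why" is in the weights
```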

[–] PopeSalmon@alien.top 1 points 10 months ago

it's dismissive & rude for you to call it "fooled" that he came to a different conclusion than you about a subtle philosophical question

[–] Captain_Pumpkinhead@alien.top 1 points 10 months ago (2 children)

If you ever wonder if the machine is sentient, ask it to write code for something somewhat obscure.

I'm trying to run a Docker container in NixOS. NixOS is a Linux distro known for being super resilient (I break stuff a lot because I don't know what I'm doing), and while it's not some no-name distro, it's also not that popular. GPT-4 Turbo has given me wrong answer after wrong answer, and it's infuriating. Bard too.

If this thing were sentient, it'd be a lot better at this stuff. Or at least be able to say, "I don't know, but I can help you figure it out".
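
For what it's worth, the setup the models kept getting wrong is normally just declarative config. A minimal sketch of the usual NixOS route (the container name, image, and port mapping here are placeholders I picked, not from my actual setup):

```nix
# Excerpt from /etc/nixos/configuration.nix (placeholder names throughout).
{ config, pkgs, ... }:
{
  # Enable the Docker daemon system-wide.
  virtualisation.docker.enable = true;

  # Run a container declaratively, managed as a systemd service.
  virtualisation.oci-containers = {
    backend = "docker";
    containers.mycontainer = {
      image = "nginx:latest";
      ports = [ "8080:80" ];
    };
  };
}
```

After a `sudo nixos-rebuild switch`, the container comes up as a systemd service, no imperative `docker run` needed.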

[–] Mobile-Gas2146@alien.top 1 points 10 months ago (2 children)

At this point I'm probably not sentient either

[–] Captain_Pumpkinhead@alien.top 1 points 10 months ago

I'm talking more about hallucinations. There's a difference between "I'm not sure", "I think it's this but I'm confidently wrong", and "I'm making up bullshit answers left and right".

[–] nagareteku@alien.top 1 points 10 months ago

Are we? Do we have free will, or are our brains just deterministic models with 100T parameters in mostly untrained synapses?

[–] Feisty-Patient-7566@alien.top 1 points 10 months ago

I think a huge problem with current AIs is that they are forced to generate an output, particularly under a very strict time constraint. "I don't know" should be a valid answer.
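
Purely as a toy sketch (the threshold and the assumption that you even have the token log-probs handy are mine, not any real API's behavior), an "abstain" option could be as simple as refusing to answer when the model's own confidence is low:

```python
import math

# Toy sketch of an "abstain" option: drop the answer when the model's own
# mean token log-probability is too low. The threshold value and the idea
# of having per-token log-probs available are assumptions for illustration.
CONFIDENCE_THRESHOLD = -1.5

def answer_or_abstain(text: str, token_logprobs: list[float]) -> str:
    """Return the generated text, or "I don't know." if confidence is low."""
    if not token_logprobs:
        return "I don't know."
    mean_logprob = sum(token_logprobs) / len(token_logprobs)
    return text if mean_logprob >= CONFIDENCE_THRESHOLD else "I don't know."

# A confident generation passes through; a shaky one gets replaced.
print(answer_or_abstain("Paris", [math.log(0.9), math.log(0.8)]))    # Paris
print(answer_or_abstain("Zurich", [math.log(0.05), math.log(0.1)]))  # I don't know.
```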