LocalLLaMA

Community to discuss Llama, the family of large language models created by Meta AI.

[–] SuddenDragonfly8125@alien.top 1 points 2 years ago (7 children)

Y'know, at the time I figured this guy, with his background and experience, would be able to distinguish normal from abnormal LLM behavior.

But given the way many people treat GPT-3.5/GPT-4, I think I've changed my mind. People can know exactly what it is (i.e., a computer program) and still be fooled by its responses.

[–] Captain_Pumpkinhead@alien.top 1 points 2 years ago (4 children)

If you ever wonder if the machine is sentient, ask it to write code for something somewhat obscure.

I'm trying to run a Docker container on NixOS. NixOS is a Linux distro known for being super resilient (I break stuff a lot because I don't know what I'm doing), and while it's not some no-name distro, it's also not that popular. GPT-4 Turbo has given me wrong answer after wrong answer, and it's infuriating. Bard too.
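(For anyone stuck on the same thing: below is a minimal sketch of the declarative route NixOS uses for containers, via the `virtualisation.docker` and `virtualisation.oci-containers` module options. The container name `myapp`, the image, the ports, and the paths are placeholder assumptions for illustration, not anything from the thread.)

```nix
# configuration.nix fragment -- a minimal sketch, not a tested setup.
{ config, pkgs, ... }:

{
  # Enable the Docker daemon system-wide.
  virtualisation.docker.enable = true;

  virtualisation.oci-containers = {
    # The module defaults to podman on recent releases; select Docker explicitly.
    backend = "docker";

    # NixOS turns each entry here into a systemd service that runs the container.
    containers.myapp = {            # "myapp" is a placeholder name
      image = "nginx:latest";       # placeholder image
      ports = [ "8080:80" ];        # host:container
      volumes = [ "/var/lib/myapp:/usr/share/nginx/html:ro" ];  # placeholder paths
    };
  };
}
```

After editing, `sudo nixos-rebuild switch` applies the config and starts the container as `docker-myapp.service` (or `podman-myapp.service` if podman is the backend).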

If this thing were sentient, it'd be a lot better at this stuff. Or at least be able to say, "I don't know, but I can help you figure it out."

[–] Mobile-Gas2146@alien.top 1 points 2 years ago (2 children)

At this point I'm probably not sentient either

[–] Captain_Pumpkinhead@alien.top 1 points 2 years ago

I'm talking more about hallucinations. There's a difference between "I'm not sure," "I think it's this, but I'm confidently wrong," and "I'm making up bullshit answers left and right."
