this post was submitted on 20 Jan 2024
79 points (92.5% liked)

No Stupid Questions


I hear people saying things like "chatgpt is basically just a fancy predictive text". I'm certainly not in the "it's sentient!" camp, but it seems pretty obvious that a lot more is going on than just predicting the most likely next word.

Even if it's predicting word by word within a bunch of constraints & structures inferred from the question / prompt, that's pretty interesting. Tbh, I'm more impressed by ChatGPT's ability to appear to "understand" my prompts than I am by the quality of the output. Even though its writing is generally a mix of bland, obvious and inaccurate, it mostly does provide a plausible response to whatever I've asked / said.

Anyone feel like providing an ELI5 explanation of how it works? Or any good links to articles / videos?

all 41 comments
[–] bionicjoey@lemmy.ca 72 points 10 months ago* (last edited 10 months ago) (2 children)

It is literally the exact same kind of algorithm that predicts the next word you will type on your phone based on what's already been typed. The differences are that it has a much larger training dataset (which means more accurate predictions), that it conditions each prediction on the entire body of text it has been given so far (including the hidden prompt and previous messages), and that it doesn't always predict whole words, but instead clusters of characters called tokens.
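To make that concrete, here's a rough sketch of that token-by-token loop using the small open GPT-2 model from the Hugging Face transformers library as a stand-in (the model choice, the prompt, and greedy decoding are just for illustration, not how ChatGPT itself is served):

```python
# Minimal greedy next-token loop with GPT-2 (illustrative only).
# pip install transformers torch
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tok("The capital of France is", return_tensors="pt").input_ids
with torch.no_grad():
    for _ in range(10):
        logits = model(ids).logits         # a score for every token in the vocabulary
        next_id = logits[0, -1].argmax()   # greedy: take the single most likely next token
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tok.decode(ids[0]))  # the prompt plus ten predicted tokens
```

Real chat systems sample from that distribution instead of always taking the top token, and they condition on the whole conversation so far, but the loop has the same shape.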

If you want a more general overview of how machine learning works in general, this is a good video series to watch: https://www.youtube.com/watch?v=aircAruvnKk

If you want to see some evidence that it doesn't truly understand what it says, try having it generate and explain some jokes or riddles that rely on wordplay. It will completely shatter the illusion.

[–] QuaternionsRock@lemmy.world 10 points 10 months ago* (last edited 10 months ago) (1 children)

Me:

You are a talented comedian. Generate some jokes or riddles that rely on wordplay.

ChatGPT 4:

Sure, here are a few wordplay-based jokes and riddles:

Why don’t scientists trust atoms? Because they make up everything!

I told my wife she should embrace her mistakes. She gave me a hug.

What do you call fake spaghetti? An “impasta.”

I’m reading a book on anti-gravity. It’s impossible to put down!

Why was the math book sad? Because it had too many problems.

Enjoy these light-hearted plays on words!

Edit: I prodded it a little, and I actually quite like the fourth one below.

Me:

These aren’t very clever. Can you generate some that a more mature audience would appreciate?

ChatGPT:

Of course, here are some more sophisticated wordplay-based jokes suitable for a mature audience:

Parallel lines have so much in common. It’s a shame they’ll never meet.

A Freudian slip is when you say one thing but mean your mother. I mean another.

I'm reading a book on the history of glue. Can't put it down, but I'm sticking to it.

Time flies like an arrow; fruit flies like a banana. A classic example of linguistic ambiguity.

The rotation of Earth really makes my day. Quite literally.

These jokes involve a bit more linguistic and conceptual play, appealing to an audience that enjoys a deeper level of wordplay.

[–] bionicjoey@lemmy.ca 18 points 10 months ago (1 children)

All of those jokes are plagiarized. It doesn't actually understand the jokes, it's just repeating ones that it's seen before. Ask it to explain why some of these are funny.

[–] shasta@lemm.ee 7 points 9 months ago

Seriously. I've literally heard all of those jokes before. It may have even stolen them all from a single website.

[–] MeatsOfRage@lemmy.world 3 points 10 months ago (1 children)

Great video! Thanks for posting that

[–] guyrocket@kbin.social 5 points 10 months ago (2 children)

I agree, that was good.

My major takeaway is that neural networks, and AI in general, are mostly pattern recognition with a little bias and weighting thrown in to improve accuracy.

And that is why I question all the supposedly amazing things people seem to think it will do and many of the applications of AI.

[–] bionicjoey@lemmy.ca 2 points 10 months ago (1 children)

That's exactly right. It is a statistical model that is based on some training dataset. The quality of the predictions is only as good as the completeness and bias of the training set.

[–] dustyData@lemmy.world 5 points 10 months ago (1 children)

And it is one of the major issues with giving AI, and the corporations who make them, free rein to "think" and inform decision making. Feed it a racist dataset, and the AI will be racist. Feed it misinformation, and the AI will only reproduce misinformation.

[–] snooggums@kbin.social 1 points 10 months ago

The proof that AI is just garbage in, garbage out is that it always does this, while at least some people are able to respond by being anti-racist and anti-misinformation, even if most people fall for it.

Feed it a racist dataset, and the AI will be racist. Feed it misinformation, and the AI will only reproduce misinformation.

[–] Acamon@lemmy.world 2 points 10 months ago (1 children)

That's my take as well, I would just like to know more about the weighting/bias.

[–] bionicjoey@lemmy.ca 6 points 10 months ago

Weighting and bias are based on the training dataset. And the training dataset of ChatGPT is mostly internet content, literature, social media discussions, articles, etc.

So the inherent biases are going to be limited in the same way. For example, ChatGPT is not good at generating or interpreting code written in Malbolge, despite the fact that this language is meant to be relatively easy to understand for a machine yet difficult for a human to understand. Because it isn't processing like a machine, it is processing text like a person.

It also is bad at understanding wordplay like puns, since wordplay requires a simultaneous understanding of the meaning of a word as well as the linguistics that underlie that word. It is decent at generating puns which already exist and are out in the world, but it can't creatively generate new ones or interpret novel puns or other wordplay, since that would require a deeper understanding of the language.

Looking at the things it is bad at can give a great insight into its limitations, and in turn into how it works.

[–] huginn@feddit.it 32 points 10 months ago* (last edited 10 months ago) (3 children)

I think there's a second, unstated issue at play here: you're experiencing a very deep cognitive bias. An exploit in the human brain.

The human brain is a fantastically complex piece of meat but one of its many issues is the anthropomorphic bias: the tendency to ascribe human traits, especially agency and cognition, to things or animals that do not have those traits.

We tend to believe if it walks like a duck and talks like a duck it must be a duck. ChatGPT is a very complex and highly specialized algorithm that outputs data just like another online human... But 100% of it is just a model processing your input and returning it back out. It talks like a human but is more akin to notepad than it is to us.

To be clear: that bias exists in everyone. We all do this. Anytime I talk about my dog scheming to get my attention I'm hitting that bias. Anytime my robot vacuum interrupts me doing the dishes I talk at it and tell it to go away. I interact with the world around me as though most things are human.

[–] Azzu@lemm.ee 13 points 10 months ago (2 children)

To be fair/to elaborate on that point, your dog is much much closer to human than chatgpt is, we share like 84% of DNA. Most of the same basic emotions like hunger, fear, desire, etc are present as well as the ability to learn and communicate.

Your dog may not be "scheming", because it lacks the ability to plan very far in the future, but it definitely has the intention of getting your attention and tries to figure out in the moment how to do it. Same as a human kid might do.

It is incredibly valuable to act like a dog is human, because dogs do actually share a lot of characteristics. Not all of course, it's still wrong to fully assume a dog is human, but as a quick heuristic it's still valuable a lot (84%? :D) of the time.

[–] huginn@feddit.it 3 points 10 months ago

Sure: I get that they're not exactly the same. The ChatGPT issue is orders of magnitude more removed from humanity than a dog, but it's a daily example of anthropomorphic bias that is relatable and easy to understand. Just was using it as an example.

[–] nutsack@lemmy.world 1 points 10 months ago

when the chat bot starts using my DNA I'm killing it

[–] Randomgal@lemmy.ca 6 points 10 months ago (1 children)

Yep, this is a very good explanation. Seeing ChatGPT "talk" is immediately associated with sentience, because for your entire life, and millions of years of evolution, speech was in 99.9% of cases a sign of sentience. So your brain doesn't even consider it a question, until you consciously stop to think about it.

An interesting way to anthropomorphize GPT that's still technically correct is to think of it as having essentially perfect memory. So it doesn't know how to talk, but it has seen so many conversations (literal trillions) that it can recognize the patterns that make up speech and simply "remember" what the most likely combination of words is, given the context, with zero actual "understanding" of language. (Human trainers then fine-tune these guesses to give you the ChatGPT experience.)

[–] huginn@feddit.it 5 points 10 months ago* (last edited 10 months ago) (1 children)

ChatGPT also fudges "memory" by feeding in all previous prompts (up to a token limit) with whatever you've said latest, which improves the pattern matching.

The best way to do this is to ask it to ask you questions.
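As a rough illustration of what that re-feeding looks like, here's a toy sketch (the function, names, and word-count cutoff are all made up; real systems count model tokens, not words, and the details differ per product):

```python
# Toy sketch: rebuild the prompt from the visible chat history each turn,
# dropping the oldest turns once a budget is exceeded.
MAX_WORDS = 3000  # stand-in for the model's real token limit

def build_prompt(history, new_message):
    turns = history + [("user", new_message)]
    kept, used = [], 0
    for role, text in reversed(turns):       # keep the most recent turns that fit
        words = len(text.split())
        if used + words > MAX_WORDS:
            break
        kept.append((role, text))
        used += words
    kept.reverse()
    return "\n".join(f"{role}: {text}" for role, text in kept) + "\nassistant:"

history = [("user", "Hi!"), ("assistant", "Hello! How can I help?")]
print(build_prompt(history, "Explain how you remember this conversation."))
```

So the model never actually "remembers" anything between turns; the client just pastes the visible history back in front of your latest message every time.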

[–] Acamon@lemmy.world 2 points 10 months ago (1 children)

Just to make clear, because it seems to come up a lot in some responses - I absolutely don't think (and never have) that ChatGPT is intelligent, 'understands' what I'm saying to it or what it's saying to me (let alone is accurate!). Older chat bots were very prone to getting into weird loops, or sudden context/topic switches. ChatGPT doesn't do this very often, and I was wondering what the mechanism was for keeping its answers plausibly connected to the topic under discussion and avoiding grammatical cul-de-sacs.

I know it's just a model; I want to understand the difference between its predictions and the predictions on my Android keyboard. Is it simply considering the entire previous text as it makes its predictions vs just the last few words? Why doesn't it occasionally respond with a hundred-thousand-word response? Many of the texts it's trained on are longer than its usual responses. There seem to be some limits and guidance, given either through its training data or its response training, that guide it beyond "based on the texts I have seen, what is the most likely word", and I was curious if there was a summary of how the blend of corpus-based prediction, response feedback, etc. has been used.

[–] Aedis@lemmy.world 4 points 10 months ago

Software engineer here, but not an LLM expert. I want to address one of the questions you had there.

Why doesn't it occasionally respond with a hundred-thousand-word response? Many of the texts it's trained on are longer than its usual responses.

An LLM like ChatGPT does some rudimentary level of pattern matching when it analyzes training data. So this is why it won't generate a giant blurb of text unless you ask it to.

Let's say, for example, one of its training inputs is a transcription of a conversation. That will be tagged "conversation" by a person. Then it will see that tag when analyzing hundreds of input texts that are conversations. Finally, the training algorithm writes down that "conversations" have responses of 1-2 sentences with x% likelihood, because that's what the transcripts did. Now if another of the training sets is "best-selling novels", it'll store that "best-selling novels" have responses that are very long.

ChatGPT will probably insert a couple of tokens before your question to help it figure out what it's supposed to respond with: "respond to the user as if you are in a casual conversation".

This will make the model more likely to output small answers rather than giving you a giant wall of text. However, it is still possible for the model to respond with a giant wall of text if you ask something that would contradict the original instructions (hence why jailbreaking models is possible).
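The exact hidden text isn't public, so the wording below is purely a guess at the general shape, but conceptually the request the model sees looks something like this (roles as used by chat-style APIs; the system line is invented for illustration):

```python
# Illustrative only: the hidden "system" instruction shown here is a guess,
# not OpenAI's actual prompt.
messages = [
    {"role": "system",
     "content": "You are a helpful assistant. Answer conversationally and keep replies concise."},
    {"role": "user",
     "content": "Why is the sky blue?"},
]

# Chat models are fine-tuned on text laid out roughly like this, so the
# system line steers the length and tone of whatever gets predicted next.
prompt = "\n".join(f"{m['role']}: {m['content']}" for m in messages) + "\nassistant:"
print(prompt)
```

Swap that system line for something like "answer in exhaustive detail" and the same model will happily produce the wall of text instead.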

[–] SorteKanin@feddit.dk 29 points 10 months ago (3 children)

it seems pretty obvious that a lot more is going on than just predicting the most likely next word.

But that is all that's going on. It has just been trained on so much text that the predictions "learn" the grammatical structure of language. Once you can form coherent sentences, you're not that far from ChatGPT.

The remarkable thing is that prediction of the next word seems to be "sufficient" for ChatGPT's level of "intelligence". But it is not thinking or conscious, it is just data and statistics on steroids.

[–] datavoid@lemmy.ml 12 points 10 months ago (1 children)

Try to use it to solve a difficult problem and it will become extremely obvious that it has no idea what it is talking about.

Yup. I used it to try to figure out why our Java code was getting permission denied on jar files despite being owned by the user running the code and 777 permissions while upgrading from rhel7 to 8

It gave me some good places to check, but the answer was that rhel8 uses fapolicyd instead of selinux (which I found myself on some tangentially related stack exchange post)

[–] Dran_Arcana@lemmy.world 8 points 10 months ago (1 children)

The magic sauce is context length within reasonable compute constraints. Phone predictive text has a context length of like 2-3 words; ChatGPT (and other LLMs) have figured out how to do predictions on thousands or tens of thousands of words of context at a time.
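To see how little 2-3 words of context buys you, here's a toy phone-keyboard-style predictor that only ever looks at the previous two words (the tiny corpus and the whole setup are made up for illustration):

```python
# Toy "phone keyboard" predictor: count which word follows each 2-word context.
from collections import Counter, defaultdict

corpus = ("the cat sat on the mat . the cat ate the fish . "
          "the dog sat on the rug .").split()

counts = defaultdict(Counter)
for a, b, c in zip(corpus, corpus[1:], corpus[2:]):
    counts[(a, b)][c] += 1               # context of exactly two words

def predict(a, b):
    nxt = counts.get((a, b))
    return nxt.most_common(1)[0][0] if nxt else "?"

print(predict("cat", "sat"))   # -> "on"
print(predict("sat", "on"))    # -> "the"
# With only two words of context it can't tell whether "on the" should lead
# to "mat" or "rug"; it just picks whichever count it saw as most common.
print(predict("on", "the"))
```

An LLM, by contrast, conditions every prediction on thousands of previous tokens, which is what lets it stay on topic across a whole conversation.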

[–] doublejay1999@lemmy.world 4 points 10 months ago (1 children)

It’s that why is compute heavy ?

[–] Dran_Arcana@lemmy.world 7 points 10 months ago

Correct, and the massive databases of long-length context associations are why you need tens to hundreds of gigabytes worth of RAM/VRAM. Disk would be too slow.

[–] LesserAbe@lemmy.world 5 points 10 months ago (1 children)

I think this explanation would be more satisfying if we had a better understanding of how the human brain produces intelligence.

[–] SorteKanin@feddit.dk 3 points 10 months ago (1 children)

I agree. We don't actually know that the brain isn't just doing the same thing as ChatGPT. It probably isn't, but we don't really know.

[–] Dran_Arcana@lemmy.world 0 points 10 months ago

Considering that we can train digital statistical models to read thoughts via brain scans I think it's more likely than not that we are more similar

[–] vzq@lemmy.blahaj.zone 15 points 10 months ago

Most of my job is predicting the next word I’m going to type.

I get a mail. I read it. Then I write the first word of my reply, the most likely word after the last word of the original mail. Then the next one. Then the next one.

Or in a meeting. Someone says something. Then I say the first word of my reply. Then the next one.

Predicting the next word well in a wide number of cases is what most of us do daily all the time. It’s a very difficult and versatile and complex skill.

[–] Munkisquisher@lemmy.nz 11 points 10 months ago (2 children)
[–] LesserAbe@lemmy.world 3 points 10 months ago

This is very interesting, thanks

[–] Acamon@lemmy.world 3 points 10 months ago

That's really helpful! Exactly the sort of thing I was looking for. I wasn't aware of the role of temperature and embedding; they explain a lot.
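Since temperature came up: it's basically a knob on how the model picks from its next-token probabilities. Here's a rough sketch of the usual math (the scores and token list are made up; NumPy just for the arithmetic):

```python
# Temperature scaling: divide the model's raw scores (logits) by T before softmax.
# Low T -> sharper distribution (almost always the top token); high T -> more random.
import numpy as np

logits = np.array([4.0, 3.5, 1.0, 0.2])   # made-up scores for 4 candidate tokens
tokens = ["cat", "dog", "banana", "the"]

def sample(temperature):
    p = np.exp(logits / temperature)
    p /= p.sum()                          # softmax turns scores into probabilities
    return np.random.choice(tokens, p=p)

print([sample(0.2) for _ in range(5)])    # nearly always "cat"
print([sample(1.5) for _ in range(5)])    # much more varied
```

Embeddings are a separate thing: they're the numeric vectors words get mapped to so that similar words end up near each other.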

[–] DarkGamer@kbin.social 10 points 10 months ago* (last edited 10 months ago) (2 children)

I found this example useful: this dude builds a simple LLM that writes infinite Shakespeare and walks you through each step.

Eli5: large language models like ChatGPT are really good at guessing what combinations of words are most likely to be a good response to questions. They're so good at it that many people think they are intelligent, even though they're not.

[–] surewhynotlem@lemmy.world 5 points 10 months ago

To be fair, I think most people do that too. Not a lot of thought out there.

[–] PM_ME_VINTAGE_30S@lemmy.sdf.org 1 points 9 months ago

Thank you for sharing this. I just finished watching it.

[–] Candelestine@lemmy.world 4 points 10 months ago

Sounds like a job for Kyle Hill:

https://youtu.be/-4Oso9-9KTQ

As I understand it, the big innovation that allowed things to advance so fast is called "attention".

Basically it's not just learning which word is next, it's also learning which of the previous words is the most important context for predicting the next word. This is what allows it to learn the grammatical structure of language which is important for a couple reasons.

For one it can use that info to better identify if data in the training set is related to the subject at hand. E.g. it can better pick out keywords so it knows that 2 data sets with the word "the" might not be related at all, but 2 data sets with the word "silicon" are probably highly related.

It might even be able to "understand meaning" by finding the relationships of words to each other. If it comes across the word "minor" used as the musical key and not the child, it can figure out not only that "key" is important to the subject at hand, but also that information about children is actively harmful to predicting the next word.

It then uses this information to stay on topic, avoid mixing nonsense sentence structures together, and increasingly-- predict what an expert in the field might say.

There are more tricks to it than this, including ones that I don't know about or understand, but I've heard that the recent advancement is largely due to "attention".
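For the curious, here's a bare-bones sketch of the core "scaled dot-product attention" calculation in NumPy; real transformers add learned projection matrices, multiple heads, positional information, and masking, so treat this as the skeleton only (all the numbers are random toys):

```python
# Scaled dot-product attention: every position scores every other position
# and takes a weighted average of their values.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

seq_len, d = 5, 8                      # 5 tokens, 8-dimensional vectors
rng = np.random.default_rng(0)
Q = rng.normal(size=(seq_len, d))      # queries: "what am I looking for?"
K = rng.normal(size=(seq_len, d))      # keys:    "what do I contain?"
V = rng.normal(size=(seq_len, d))      # values:  "what do I pass along?"

scores = Q @ K.T / np.sqrt(d)          # how relevant is each word to each other word
weights = softmax(scores)              # each row sums to 1: the "attention" paid out
output = weights @ V                   # context-aware representation of each word

print(weights.round(2))
```

Those attention weights are what let the model decide that "key" matters and "child" doesn't when predicting the next word.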

[–] SnokenKeekaGuard@lemmy.dbzer0.com 0 points 10 months ago