this post was submitted on 18 Nov 2023

LocalLLaMA


Community to discuss about Llama, the family of large language models created by Meta AI.


It's no secret that many language models and fine-tunes are trained on datasets that were themselves generated with GPT models. The problem arises when "GPT-isms" end up in those datasets. I'm not only referring to the typical expressions like "However, it's important to..." and "I understand your desire to...", but also to the structure of the outputs in the model's responses. ChatGPT (GPT models in general) tends to follow a very predictable structure when in its "soulless assistant" mode, which makes it very easy to say "this is very GPT-like".
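One crude way to flag such contamination in a dataset is a phrase-level scan for stock ChatGPT expressions. A minimal sketch in Python; the phrase list and the `find_gpt_isms` helper are my own hypothetical examples, not from any existing dataset-cleaning tool, and a real filter would need a much longer list:

```python
import re

# Hypothetical, non-exhaustive list of stock ChatGPT phrases ("GPT-isms")
# often seen in synthetic training data; extend as needed.
GPT_ISMS = [
    r"as an ai language model",
    r"however, it'?s important to",
    r"i understand your desire to",
    r"it'?s worth noting that",
    r"in conclusion,",
]

def find_gpt_isms(text: str) -> list[str]:
    """Return the GPT-ism patterns that occur in `text` (case-insensitive)."""
    lowered = text.lower()
    return [p for p in GPT_ISMS if re.search(p, lowered)]

sample = "However, it's important to remember that as an AI language model, I cannot..."
print(find_gpt_isms(sample))
```

Phrase matching only catches the surface expressions, of course; the structural predictability the post describes (numbered lists, balanced caveats, a closing summary) is harder to detect and would need something like a classifier.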

What do you think about this? Oh, and by the way, forgive my English.

[–] Robot1me@alien.top 1 points 11 months ago

What do you think about this?

I think an interesting experiment is to edit an AI output message so it starts with "As an AI language model" and then let the model continue the rest. If it completely drops character and just sounds like ChatGPT, that's quite telling.
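With a local model this experiment amounts to pre-seeding the assistant turn and asking for a raw completion. A minimal sketch of the prompt construction, assuming a ChatML-style template (the `build_prefilled_prompt` helper is hypothetical; the resulting string would be sent to e.g. a llama.cpp completion endpoint as a raw prompt, with the chat template applied manually as shown):

```python
def build_prefilled_prompt(system: str, user: str, assistant_prefix: str) -> str:
    """Build a ChatML-style prompt whose assistant turn is pre-seeded,
    so the model must continue from `assistant_prefix` instead of
    starting its reply from scratch."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n{assistant_prefix}"
    )

# The experiment from the comment: force the reply to open with the
# telltale phrase and see whether the persona survives the continuation.
prompt = build_prefilled_prompt(
    system="You are a grumpy pirate. Stay in character.",
    user="What do you think of the weather?",
    assistant_prefix="As an AI language model",
)
print(prompt)
```

Note the prompt deliberately omits the closing `<|im_end|>` on the assistant turn, which is what makes the model continue the seeded text rather than start a new message.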