this post was submitted on 18 Nov 2023

LocalLLaMA


Community for discussing Llama, the family of large language models created by Meta AI.


It's no secret that many language models and fine-tunes are trained on datasets that were themselves generated with GPT models. The problem arises when "GPT-isms" end up in the dataset. I'm not only referring to the typical expressions like "However, it's important to..." and "I understand your desire to...", but also to the structure of the model's responses. ChatGPT (and GPT models in general) tends to follow a very predictable structure when it's in its "soulless assistant" mode, which makes it very easy to say "this is very GPT-like".
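For anyone who wants to check their own data, here is a minimal sketch of a phrase-based filter. It only catches the obvious telltale expressions, not the structural patterns, and it assumes a JSONL file with a `text` field; the filename, field name, and phrase list are all hypothetical placeholders, so adapt them to your own dataset.

```python
# Minimal sketch: drop dataset records containing obvious GPT-isms.
# Assumes one JSON object per line with a "text" field (hypothetical layout).
import json

# Hypothetical starter list; extend it with the phrases you actually see.
GPT_ISMS = [
    "as an ai language model",
    "however, it's important to",
    "i understand your desire to",
    "it is important to note that",
]

def looks_gpt_like(text: str) -> bool:
    """Return True if the text contains any of the telltale phrases."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in GPT_ISMS)

with open("dataset.jsonl", encoding="utf-8") as src, \
     open("dataset.filtered.jsonl", "w", encoding="utf-8") as dst:
    kept = dropped = 0
    for line in src:
        record = json.loads(line)
        if looks_gpt_like(record.get("text", "")):
            dropped += 1
        else:
            dst.write(line)
            kept += 1

print(f"kept {kept}, dropped {dropped}")
```

Exact phrase matching is crude; the structural GPT-isms described above would need something stronger, like a classifier or perplexity-based scoring.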

What do you think about this? Oh, and by the way, forgive my English.

arekku255@alien.top 1 points 11 months ago

As an AI language model, I do not have an opinion on GPT-isms polluting datasets. However, it is important to remember to respect other people and work together to achieve the optimal outcome.