this post was submitted on 09 Nov 2023

LocalLLaMA

Community to discuss Llama, the family of large language models created by Meta AI.

PS. This is text from Bing AI.

stddealer@alien.top · 1 year ago

Putting "safety" mechanism in foundational models is dumb imo. They are not just text generators, they are statistical models about human languages, and it shouldn't have made up arbitrary biases about what language should look like.

api@alien.top · 1 year ago

It's not hard to fine-tune base models toward any bias you want. "Zero bias" isn't possible; there's always some bias in the training data.
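
For context, here is a minimal sketch of what "fine-tuning a base model" on a corpus of your choosing typically looks like with Hugging Face transformers. It is not anything from the thread: the base model name ("gpt2"), the training file ("train.txt"), and the hyperparameters are placeholders, and the point is simply that the model shifts toward whatever distribution (and bias) the fine-tuning corpus carries.

```python
# Minimal sketch (placeholders throughout): supervised fine-tuning of a small
# causal LM on a plain-text file using Hugging Face transformers + datasets.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "gpt2"  # placeholder base model; swap in any causal LM you like
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# The "bias" is just whatever train.txt contains: the model is pushed toward
# the style and content of that corpus.
dataset = load_dataset("text", data_files={"train": "train.txt"})["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])
tokenized = tokenized.filter(lambda ex: len(ex["input_ids"]) > 0)  # drop blanks

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="out",
        per_device_train_batch_size=2,
        num_train_epochs=1,
    ),
    train_dataset=tokenized,
    # mlm=False -> standard causal-LM objective; labels copied from input_ids
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
trainer.save_model("out")
```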

stddealer@alien.top · 1 year ago

Sure, but that's not a reason to purposely add more biases into them.