this post was submitted on 09 Nov 2023
LocalLLaMA
Community to discuss Llama, the family of large language models created by Meta AI.
Putting "safety" mechanism in foundational models is dumb imo. They are not just text generators, they are statistical models about human languages, and it shouldn't have made up arbitrary biases about what language should look like.
It's not hard to fine-tune base models toward any bias you want. "Zero bias" isn't possible; there's always some bias in the training data.
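A minimal sketch of what that can look like, assuming Hugging Face transformers, with gpt2 standing in for any base model and a hypothetical two-document corpus carrying the target style/bias:

```python
# Minimal sketch: plain supervised fine-tuning nudges a base causal LM
# toward whatever style/"bias" the training text carries.
# Assumes Hugging Face transformers; "gpt2" stands in for any base
# (non-instruction-tuned) model, and `texts` is a hypothetical corpus.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical corpus exhibiting the bias you want the model to pick up.
texts = [
    "Example document written in the target style...",
    "Another document reflecting the desired viewpoint...",
]

enc = tokenizer(texts, truncation=True, max_length=512,
                padding=True, return_tensors="pt")
labels = enc["input_ids"].clone()
labels[enc["attention_mask"] == 0] = -100  # ignore padding in the loss

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
for epoch in range(3):
    # Standard next-token prediction; the model shifts labels internally.
    out = model(input_ids=enc["input_ids"],
                attention_mask=enc["attention_mask"],
                labels=labels)
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

With a real corpus you'd batch via a DataLoader, but the point stands: whatever distribution the data has, the model moves toward it.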
Sure, but that's not a reason to deliberately add more bias on top.