Sure, but that's not a reason to purposefully add more biases into it.
stddealer
Putting "safety" mechanism in foundational models is dumb imo. They are not just text generators, they are statistical models about human languages, and it shouldn't have made up arbitrary biases about what language should look like.
Every model reacts differently to the same prompt. Smaller models can get confused by complicated prompts designed for GPT-4, so it often helps to adapt the prompt to the model, as in the sketch below.
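A minimal, purely illustrative sketch of that idea: keep a simpler prompt template around for smaller models instead of reusing a long GPT-4-style instruction verbatim. The model names, templates, and parameter-count cutoff here are assumptions for illustration, not taken from any real API.

```python
# Hypothetical sketch: choose a prompt template based on model size.
# Templates and the size cutoff are illustrative guesses, not real defaults.

COMPLEX_TEMPLATE = (
    "You are an expert assistant. Think step by step, consider edge cases, "
    "state your assumptions, and answer in structured JSON.\n\nQuestion: {question}"
)
SIMPLE_TEMPLATE = "Answer briefly: {question}"

# Rough cutoff (billions of parameters) below which we fall back to the
# simpler template; the exact threshold is model-dependent.
SMALL_MODEL_CUTOFF_B = 13

def build_prompt(question: str, model_params_b: float) -> str:
    """Return a prompt adapted to the model's approximate size."""
    if model_params_b < SMALL_MODEL_CUTOFF_B:
        template = SIMPLE_TEMPLATE
    else:
        template = COMPLEX_TEMPLATE
    return template.format(question=question)

if __name__ == "__main__":
    q = "What causes tides?"
    print(build_prompt(q, model_params_b=7))    # small model -> simple prompt
    print(build_prompt(q, model_params_b=70))   # large model -> detailed prompt
```

The point isn't the exact cutoff, just that prompts written and tuned against one model don't transfer cleanly to another.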