[–] stddealer@alien.top 1 points 11 months ago

Every model will react differently to the same prompts. Smaller models might get confused by complicated prompts designed for GPT-4.

[–] stddealer@alien.top 1 points 1 year ago

Sure, but that's not a reason to purposefully add more biases into it.

[–] stddealer@alien.top 1 points 1 year ago (2 children)

Putting "safety" mechanisms in foundational models is dumb imo. They are not just text generators, they are statistical models of human language, and they shouldn't have arbitrary made-up biases about what language should look like.