Security wasn't a concern? Are we talking about the model itself? Security isn't part of the model at all, and it can't be. Anything you try to add to a model is just a suggestion, and security cannot be a suggestion. Not to mention it would create a bunch of "as a secure AI language model, I can't let you do this" responses.
A significant problem is that a layperson cannot understand what an LLM even is without a lot of reading and thought, and these articles are aimed at people who have done neither; or worse, the articles are just posturing and propaganda.
They are 100% biased, and they can't help but be, since they absorb and emulate human writing. An AI that can't write a biased take also can't write from a Black person's perspective or a woman's, because bias is part of their experience. How ridiculous would it be if you asked an AI about slavery in America and it had no idea what you were talking about, or thought it applied to all races equally?
I disagree. Even a basic substitution list (i.e., swapping the N-word for "Black" or f*g for "gay") would have helped.
Making these companies work harder to bring their product online isn't a bad thing here.
It sounds simple, but data conditioning like that is how you get "Scunthorpe" blacklisted, and the effects on the model, even if the filtering is perfectly executed, are unpredictable. It could lead to a kind of "race blindness," where the model has no idea these words are bad and as a result is incapable of accommodating humans when the topic comes up. Suppose in five years there's a therapist AI (not ideal, but mental health is horribly understaffed and most people can't afford a PhD therapist) that gets a client who is upset because they were called a f**got at school; it would have none of the cultural context required to help.
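To make the Scunthorpe point concrete, here's a minimal sketch of how naive substring filtering misfires. `naive_filter` is a hypothetical stand-in for the kind of data conditioning proposed above, and a mild word stands in for an actual slur; the failure mode is identical either way:

```python
# Minimal sketch of the Scunthorpe problem: a naive substring filter
# mangles innocent words. "ass" is a mild stand-in for a real slur.
def naive_filter(text: str) -> str:
    return text.replace("ass", "***")

print(naive_filter("The class assembled in Scunthorpe."))
# -> "The cl*** ***embled in Scunthorpe."  (false positives everywhere)
```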
Techniques like "constitutional AI" and RLHF, applied after the foundation model is trained, really are the best approach here: they let the model absorb an unfiltered view of a very biased culture, then shape its attitudes afterwards.
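For the curious, the core preference step in RLHF is roughly a pairwise reward-model loss (Bradley-Terry style). The sketch below is a toy illustration under that assumption; `TinyRewardModel` and all of its details are hypothetical stand-ins, not anyone's production setup:

```python
import torch
import torch.nn as nn

# Toy reward model: mean-pooled token embeddings mapped to a scalar score.
class TinyRewardModel(nn.Module):
    def __init__(self, vocab_size: int = 1000, dim: int = 32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.head = nn.Linear(dim, 1)

    def forward(self, token_ids):
        return self.head(self.embed(token_ids).mean(dim=1)).squeeze(-1)

model = TinyRewardModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Toy batch: human raters preferred `chosen` responses over `rejected` ones.
chosen = torch.randint(0, 1000, (4, 16))
rejected = torch.randint(0, 1000, (4, 16))

# Loss = -log sigmoid(r_chosen - r_rejected): push preferred responses
# to score higher, then use the trained scorer to steer the base model.
loss = -torch.nn.functional.logsigmoid(model(chosen) - model(rejected)).mean()
loss.backward()
optimizer.step()
```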
I agree with you, but I'll just say that with a basic regex (hell, even without regex) you can easily find bad words without the problem you mentioned above.
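For instance, a word-boundary regex catches whole-word matches without mangling innocent substrings (again with a mild stand-in word):

```python
import re

# Word-boundary matching avoids the substring false positives above.
# "ass" is again a mild stand-in for a real slur.
pattern = re.compile(r"\bass\b", re.IGNORECASE)

print(pattern.sub("***", "The class assembled in Scunthorpe."))  # untouched
print(pattern.sub("***", "Don't be an ass."))                    # "Don't be an ***."
```

Of course, this only catches exact whole-word spellings, which is where deliberate evasion comes in.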
Word filters tend to suck in online games and the like because they have to contend with players actively trying to evade the filter, but I think that could still be improved with a little effort.
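One common mitigation, sketched below under the assumption of simple letter-for-symbol swaps, is to normalize text before matching; the `LEET` table here is illustrative, not exhaustive, and real game filters layer many such passes:

```python
import re

# One anti-evasion pass: map common digit/symbol substitutions back to
# letters before matching. "ass" is again a mild stand-in for a real slur.
LEET = str.maketrans({"4": "a", "@": "a", "3": "e", "1": "i", "!": "i",
                      "0": "o", "$": "s", "5": "s"})

def normalize(text: str) -> str:
    return text.lower().translate(LEET)

pattern = re.compile(r"\bass\b")
print(bool(pattern.search(normalize("what an 4$$"))))  # True: leetspeak evasion caught
```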