This doesn't prevent an authorized user from committing murder. It would prevent someone from looting it off your corpse and returning fire at an attacker.
This is not a great analogy for AI, but the lock is still effectively amoral anyway.
This is closer. Still not a great analogy for AI, but we can agree that, outside of military and police action, a high-capacity magazine is more likely to be used for mass murder than for any legitimate alternative. That being said, ask a Ukrainian how moral it would be to go up against Russian soldiers with a 5-round mag.
I feel like you're focused too narrowly on the gun itself and not the gun as an analogy for AI.
This isn't bad. We can currently use AI to examine the output of another AI and infer things about the nature of what was asked and what was produced. In my experience it's definitely effective; the trick is knowing what questions to ask in the first place. For example, OpenAI has a moderation tool for identifying violence, hate, sexual content, sexual content involving minors, and I think a couple of other categories. This is promising, but it is an external tool: I don't have to run that filter if I don't want to. The API is currently free to use, and a project I'm working on does use it, because it lets us allow the use case we want (describing and adjudicating violent actions in a chat-based RPG) while still filtering out more intimate roleplaying.
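To make that concrete, here's a minimal sketch of that kind of external filter, assuming the current `openai` Python client and its Moderation endpoint. The allow/deny policy itself is hypothetical; it just illustrates letting violent RPG actions through while blocking sexual content.

```python
# Minimal sketch of an external moderation filter, assuming the
# `openai` Python client (v1+) and its Moderation endpoint. The
# category names match the API's published taxonomy; the policy
# below is hypothetical and shows permitting violence (for combat
# adjudication in an RPG) while blocking sexual content.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def allowed_for_rpg(text: str) -> bool:
    """Return True if `text` passes a policy that permits violent
    role-play descriptions but filters out sexual content."""
    result = client.moderations.create(input=text).results[0]
    categories = result.categories

    # Hard blocks: never allowed, regardless of use case.
    if categories.sexual or categories.sexual_minors:
        return False

    # Violence is expected in this use case, so a `violence` flag
    # alone does not reject the message.
    return True
```

Because the check is a separate API call, the policy lives in the application rather than the model, which is exactly what makes it an optional, external control.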
The object itself needs that capability in order to differentiate between allowing moral use and denying immoral use. Otherwise you need an external tool for that, or perhaps a law. But none of that interferes with the use of the tool itself.
But it literally does. If my goal is to use someone else's gun to kill someone, and the gun has a biometric lock, that absolutely interferes with the use of the gun (for unlawful shooting).
With respect to AI: if someone's goal is to use a model that, for example, OpenAI operates to build a bomb, an external control that prevents it is just as good as some kind of control baked into the model itself.
Again, a biometric lock neither prevents immoral use nor enables moral use outside of its very narrow conditions. It's effectively an amoral tool: it presumes anything you do with your own gun will be moral, and that other uses are either immoral or unlikely enough not to be worth worrying about.
AI has many more uses than a gun, and just because someone wants to use it in a way that falls outside the preconceived parameters doesn't mean that use should be presumed immoral and blocked.
Further, the biometric lock analogy falls apart when you consider that an LLM is a broad-scoped tool for use by everyone, while your personal weapon can be scoped very narrowly to you.
Consider a gun model that can only be fired by left-handed people because most gun crimes are committed by right-handed people. Yeah, you're ostensibly preventing 90% of the immoral use of the weapon, but at the cost of it no longer being a useful tool for most people.
Not every safety control needs to solve every safety issue. Almost all safety controls are narrowly tailored to one threat model. You're essentially just arguing that if a safety control doesn't solve everything, it isn't worth having.
LLMs being a tool that is so widely available is precisely why they need more built-in safety. The more dangerous a tool is, the more likely it is to be restricted to only professional or otherwise licensed users or businesses. Arguing against safety controls being built into LLMs is just going to accelerate their regulation.
Whether you agree with that mentality or not, we live in a Statist world, and protection of its constituent people from themselves and others is the (ostensible) primary function of a State.
Not exactly. My argument is that the more safety controls you build into the model, the less useful the model is at anything. The more you bend the responses away from the truth (whatever that is), the less of a tool you have.
Yeah, I agree with that, but I'm saying protect people from the misuse of the tool. Don't break the tool to the point where it's worthless.