this post was submitted on 08 Aug 2024
I will take a different tack than sweng.
I think this is irrelevant. Whether a safety mechanism is intrinsic to the core functioning of something, or bolted on purely for safety purposes, it is still a limiter on that thing's function, intended to compel moral or safe usage.
Any action has two different moral aspects: the intent of the actor, and the outcome of the action.
Of course it is impossible to change the moral intent of an actor. But the LLM is not the actor, it is the tool used by an actor.
And you can absolutely change the morality of the outcome of an action (i.e., said weapon use) by limiting the possible damage from it.
Given that a tool is the means by which an actor takes an action, the tool is also an appropriate place for safety controls that attempt to enforce a more moral outcome.
I think I've said a lot in comments already, and I'll leave all that without relitigating it just for argument's sake.
However, I wonder if I haven't made clear that I'm drawing a distinction between the model that generates the raw output and the application that puts the model to use. I have an application that generates output via the OAI API and then scans both the prompt and the output to make sure they are appropriate for our particular use case.
Yes, my product is 100% censored and I think that's fine. I don't want the customer service bot (which I hate but that's an argument for another day) at the airline to be my hot AI girlfriend. We have tools for doing this and they should be used.
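That kind of application-layer scanning can be sketched roughly like this. This is a minimal illustrative wrapper, not the actual product's code: the keyword patterns, function names, and the `generate` callable are all hypothetical stand-ins for whatever moderation service and model API the application really uses.

```python
import re

# Hypothetical policy list, standing in for a real moderation service.
BLOCKED_PATTERNS = [r"\bhot\s+ai\s+girlfriend\b", r"\bwire\s+transfer\s+scam\b"]

def is_appropriate(text: str) -> bool:
    """Return True if the text passes the (illustrative) policy check."""
    lowered = text.lower()
    return not any(re.search(p, lowered) for p in BLOCKED_PATTERNS)

def guarded_completion(prompt: str, generate) -> str:
    """Scan both the prompt and the model's raw output before returning.

    `generate` is any callable mapping a prompt to raw model output,
    e.g. a thin wrapper around an LLM API call.
    """
    if not is_appropriate(prompt):
        return "Sorry, I can't help with that request."
    output = generate(prompt)
    if not is_appropriate(output):
        return "Sorry, I can't share that response."
    return output
```

The point of the design is that the censorship lives in the wrapper: the model underneath can stay uncensored, and each product decides its own policy.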
But I think the models themselves shouldn't be heavily steered, because it interferes with the raw output and can preclude very useful use cases.
So I'm just talking about fucking up the model itself in the name of safety. ChatGPT walks a fine line because it's a product, not a model, but without access to the raw model it needs to be relatively unfiltered to be of use; otherwise other models will make better tools.