I'm not sure that it's plausible to add meaningful safety to open source LLMs.
Incidentally, that does worry me some in the long term.