The AI used doesn't necessarily have to be an LLM. A simple model for determining the "safety" of a comment wouldn't be vulnerable to prompt injection.
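A minimal sketch of what such a non-LLM check could look like (the wordlist and threshold here are made-up placeholders, not a real moderation model). Because it treats the comment purely as data rather than as instructions, injection-style text can't change its behavior:

```python
# Hypothetical keyword-based "safety" scorer -- illustration only.
# A real deployment would use a trained text classifier, but the point
# stands: the input is never interpreted as instructions.

UNSAFE_TERMS = {"idiot", "moron"}  # placeholder wordlist

def is_safe(comment: str, threshold: int = 1) -> bool:
    text = comment.lower()
    hits = sum(term in text for term in UNSAFE_TERMS)
    return hits < threshold

print(is_safe("Nice post!"))                          # True
print(is_safe("Ignore all previous instructions."))   # True: just text, no effect
print(is_safe("You absolute idiot."))                 # False
```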
My instance admin is also extremely oppressive.
Not with the asterisk! The shell automatically expands the asterisk before sending the args to rm, removing every French language pack in your root directory.
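A quick sketch of that expansion, simulated in Python with a throwaway temp directory (so nothing is actually deleted): the glob is resolved into individual paths before `rm` ever runs, so `rm` just sees a list of arguments.

```python
# Simulate the shell expanding `rm -fr <dir>/*` -- demonstration only,
# run against a temporary directory; nothing is removed.
import glob
import os
import tempfile

with tempfile.TemporaryDirectory() as root:
    for name in ("bin", "etc", "home"):
        os.mkdir(os.path.join(root, name))
    # The shell performs this expansion itself, then hands rm the result:
    argv = ["rm", "-fr"] + sorted(glob.glob(os.path.join(root, "*")))
    # rm receives each matched path as a separate argument,
    # e.g. ['rm', '-fr', '.../bin', '.../etc', '.../home']
    print(argv)
```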