Correction: Newer versions of ChatGPT (GPT-5.x) are failing in insidious ways. The article makes no mention of the other popular services or of the dozens of open source coding-assist AI models (e.g. Qwen, gpt-oss, etc.).
The open source models are amazing and improve just as quickly as the big-name options. Yet they're boring, so they don't make the news.
If the guy who requested the bot create this had no filter, what makes you think illustrators will have one? How do you know it isn't an illustrator who would make such a thing?
The problem here is human behavior. Not the machine's ability to make such things.
AI is just the latest way to give instructions to a computer. That used to be a difficult problem that required expertise. Now we've handed that power to immoral imbeciles. Rather than take the technology away entirely (which is really the only solution, since LLMs are so easy to trick even with a ton of anti-abuse stuff in their system prompts), perhaps we should work on taking away immoral imbeciles' ability to use them instead.
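To be clear about what "anti-abuse stuff in system prompts" amounts to: it's just instruction text prepended to the conversation, nothing more. Here's a minimal sketch of the idea (the prompt wording, function names, and message format are illustrative assumptions, not any vendor's actual implementation):

    # Sketch of how system-prompt "anti-abuse" guardrails work.
    # The rules are just another message in the list sent to a chat-style
    # LLM; the model is free to ignore them if the user's request is
    # phrased cleverly enough, which is why prompt-level guardrails alone
    # are easy to trick. The prompt text below is hypothetical.

    ANTI_ABUSE_SYSTEM_PROMPT = (
        "You are a helpful assistant. Refuse requests to create harassing, "
        "deceptive, or otherwise abusive content."
    )

    def build_messages(user_input: str) -> list[dict]:
        """Assemble the message list sent with every request."""
        return [
            {"role": "system", "content": ANTI_ABUSE_SYSTEM_PROMPT},
            {"role": "user", "content": user_input},
        ]

    if __name__ == "__main__":
        # The guardrail travels with each request, but it is advisory text,
        # not an enforcement mechanism.
        for msg in build_messages("Draw something cruel about my coworker."):
            print(msg["role"], ":", msg["content"])

The point being: nothing in that structure enforces anything; compliance depends entirely on the model going along with the advisory text.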
Do I know how to do that without screwing over everyone's right to privacy? No. That, too, may not be possible.