dhaitz

joined 11 months ago
[–] dhaitz@alien.top 1 points 11 months ago

Another is the potential for misuse of knowledge, such as creating napalm"

IMHO these examples of "I tricked ChatGPT into telling me how to build a bomb!!" are fun, but you can find this information online anyway. This is mainly a PR problem if screenshots of company XY's new chatbot spewing problematic content are circulating on social media.

The point is rather that any information the LLM has ever seen (during training or in its prompt) can be leaked to the user, no matter how thorough your finetuning or prompt engineering is.