I typed all this up for someone who posted a... very strangely written question about something they noticed with AI, but it appears to have been deleted/removed... and, well, I want to know if I managed to rephrase their question in a less... difficult-to-understand format. And then the answer to said question, because I find it interesting as well.
What I typed in response:
After parsing the insanity that is your writing style and... English as a second language? Allow me to confirm and summarize, because I find this question fascinating.
You've come across an LLM trend where said LLM is given instructions to describe/pretend to be a human named Delilah. LLMs have gone viral at times for being instructed to shape their output to sound like famous people with what appears to be reasonable accuracy. But what goes into that ability is previously written human text associated with that person (or rather, their full name/titles/etc.), as well as purposeful restrictions given to the LLM directly (like: don't output the N word).
Another lesser/largely unquantifiable factor in the output's "tone" is the result of quirks in the black-box algorithm that associates the "words" (not truly words, I know, but essentially) in ways you wouldn't expect.
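To make the "persona instruction plus restrictions" idea above concrete, here's a minimal sketch of how that is commonly passed to an LLM via a chat-style message list (system prompt sets the persona and hard rules, then the user's request follows). The function name, persona, and rules are all illustrative assumptions, not anything from the original thread or a specific vendor's API.

```python
# Sketch only: builds a chat-style message list in the common
# role/content convention. All names and rules here are made up
# for illustration.

def build_persona_prompt(persona: str, restrictions: list[str],
                         user_text: str) -> list[dict]:
    """Assemble messages: a system prompt carries the persona
    instruction and hard restrictions; the user's request follows."""
    rules = "\n".join(f"- {r}" for r in restrictions)
    system = (
        f"You are {persona}. Respond in {persona}'s voice and style.\n"
        f"Hard restrictions:\n{rules}"
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_text},
    ]

messages = build_persona_prompt(
    "Delilah",
    ["Never use slurs.", "Stay in character."],
    "Tell me about your day.",
)
```

The point is just that the persona name in the system prompt is what pulls in the model's training-data associations for that name, which is why an unexpected name-overlap can drag in an unexpected tone.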
(Here's where my slight confusion mostly is) Each of these "factors" associated with the tone of the output... you've given names to? Or maybe my entirely self-researched knowledge has missed an agreed-upon naming system for these "characters"? I'm not quite sure.
And now your question and its qualifiers: Is there a pop-culture/historic person or character named Delilah who is associated with furry stuff? Because you have been looking at some of the interesting mistaken/inaccurate tones adopted by an LLM, and you've noticed that when you ask the LLM to output as if it were Delilah, the results are furry-related. Typically this sort of issue is mostly due to overlapping/similar names in the model's training data (as well as much stranger links with no explanation for how they formed). And your research on "Delilah" hasn't turned up anything that would explain the LLM's furry-related output.
.... is that more or less what you are saying?
Sorry, I'm also not a native speaker. I don't know what PC 5-0 means (political correctness police??). But if we want to know what happened, we need to know the circumstances. It makes a big difference which exact LLM model was used. We need to know the exact prompt and text that went in. Then we can start discussing why something happened. I'd say there's a good chance the LLM has been made to output stories like that, as is the case with LLM models that have been made for ERP. That's why I said that.
Oh, and PC 5-0:
PC - politically correct (a very... broad term)
5-0 is a colloquial term meaning police.
Idk how much English-language content you consume online, but straight up saying "PC police"... is just something I'd rather not keep using.
Alright. Thx for the explanation. Yeah, I don't have a filter. I just say whatever I think. Don't really care if it's offensive, just if things are true or not. Which is hard to tell in this case, since we don't have enough information at hand. And LLMs are complex. Could be a fluke. Or whatever.
NP.
I watch my own words, but I really try not to stifle someone else's. You do you, boo boo.