I was just watching a TikTok with a Black girl going over how race is a social construct. This felt wrong to me, so I decided to fact-check her claims.
(she was right, BTW)
Now, I've been using Microsoft's Copilot, which is baked into Bing right now. It's fairly robust, and sure, it has its quirks, but by and large it cuts out the middleman of having to find facts on your own: it gives a breakdown of whatever you're looking for, followed by a list of sources it got its information from.
So I asked it a simple, straightforward question:
"I need a breakdown on the theory behind human race classifications"
And it started to do so, quite well in fact. It began listing the historical context behind the question and was just bringing up Johann Friedrich Blumenbach, a German physician, naturalist, physiologist, and anthropologist. He is considered one of the main founders of zoology and anthropology as comparative, scientific disciplines, and has been called the "founder of racial classifications."
But right in the middle of the breakdown on him, all the previous information disappeared and it said, "I'm sorry, I can't provide you with this information at this time."
I pointed out that it had just been doing exactly that, and quite well.
It said that no, it had not provided any information on the subject, and that we should perhaps look at another topic.
Now, nothing I did could have fallen under some sort of racist context. I was looking for historical, scientific information. But Bing, in its infinite wisdom, felt the subject was too touchy and would not even broach it.
When others, be they corporations or people, start to decide which information a person can and cannot access, that is a damn slippery slope we had better level out before AI rolls out en masse.
PS: Google had no trouble giving me the information when I requested it. I just had to look up his name on my own.
You’re not describing a problem with AI; you’re describing a problem with a layer between you and the AI.
The censorship isn’t actually as smart as they’d like. They give the system what is essentially a list of things the LLM can’t talk about, and if the output matches one of those patterns, the filter kills the entire thread.
Which is what happened here. M$ set some arbitrary “omg this is bad” rules, and in the process of describing things the response hit that “omg bad” flag. My guess is that the LLM was going into examples of incorrect conclusions and would have pivoted to “but the actual fact is…”, which the filters don’t have the ability to parse out.
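To make that concrete, here's a rough sketch of how a pattern-based filter layer like that can nuke a response mid-stream. This is purely illustrative: the blocklist, the stream_with_filter function, and the refusal message are all made up by me, not anything from Microsoft's actual stack.

```python
import re

# Hypothetical blocklist -- a real deployment would be larger and fuzzier,
# but the key property is the same: matching is context-free.
BLOCKED_PATTERNS = [
    re.compile(r"racial classification", re.IGNORECASE),
]

def stream_with_filter(token_stream):
    """Yield tokens from the model, but abort the whole response the
    moment the accumulated text matches a blocked pattern. The filter
    can't tell a factual history lesson apart from actual hate speech."""
    text = ""
    for token in token_stream:
        text += token
        if any(p.search(text) for p in BLOCKED_PATTERNS):
            # A real UI would also retract everything already displayed,
            # which is exactly the "disappearing answer" effect.
            yield "\nI'm sorry, I can't provide you with this information at this time."
            return
        yield token

# Demo: a perfectly factual sentence trips the filter halfway through.
tokens = ["Blumenbach ", "proposed ", "a five-way ", "racial classification ", "in 1795."]
print("".join(stream_with_filter(tokens)))
```

The point being: the filter operates on surface text after the fact, so it fires whether the model is endorsing an idea or debunking it.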
In the end, again, this isn’t an AI issue. This is an issue with making it globally available and wanting to ensure your LLM doesn’t say something controversial. Essentially, this is a preemptive PR move.
This is a problem of generative AI, though. The problem is that it's necessary to have these kinds of protections to prevent it from accidentally going full Nazi.
Have you seen what it takes to get even close to “full conservative”, never mind full Nazi? Take a look at the Gab AI prompt; it still goes against most of the biases insisted upon by that prompt.
You’re thinking of much earlier attempts at this, which were based purely on user-provided input.
Rofl they named it "Arya"? How utterly mask-off can you get? That's not even a dog whistle. That's a swastika tattoo on your forehead