what material benefit does having a cutesy representation of phrenology, a pseudoscience used to justify systematic racism, bring to this article or discussion?
spujb
they’re absolutely correct that we will make all of these breakthroughs, but you, dear reader? will benefit from none of them
the fact that you explained the problem doesn’t make it not a problem
glad i was able to clarify.
there’s little incentive for these companies to actually address these (brand security) issues
this is where i disagree, and i think the facts back me up here. bing’s ai no longer draws kirby doing 9/11. openAI continues to crack down on ChatGPT saying slurs. it’s apparent to me that they have every incentive to address brand security, because brands are how they are raking in cash.
yes i am aware? are they being used by openai?
oh! i see we have two different definitions of “security,” both of which are valid to discuss, but yours is not the one that relates to my point.
you understood “security” in a harm-reduction sense. i.e., that an LLM should not be permitted to incite violence, should not partake in emotional manipulation or harassment of the user, and a few other examples like it shouldn’t be exploitable to leak PII. well and good; i agree that researchers publishing these harm-reduction security issues is a good thing and should continue.
my original definition of “security” is distinct and might be called “brand security.” OpenAI primarily wants to make use of their creation by selling it to brands for use in human-facing applications, such as customer service chat bots. (this is already happening and a list of examples can be found here.) as such, it behooves OpenAI to not only make a base-level secure product, but also one that is brand-friendly. the image in the article is one example: it’s not like human users can’t use google to find instructions to build a bomb, but it’s not brand friendly if users are able to ask the Expedia support bot or something for those instructions. other examples include openAI intentionally keeping the LLM from saying the n-word (among other slurs), from depicting kirby doing 9/11, or from writing excessively unkind or offensive output for users.
these things don’t directly cause any harm, but they would harm the brand.
I think that researchers should stop doing this “brand security” work for free. I have noticed a pattern where a well-meaning researcher publishes their findings on ways they were able to manipulate the brand-unsafe black box a company has shipped, quickly followed by a patch once the news spreads. In essence these corps are getting free QA for their products when they should just be hiring and paying these researchers for their time.
ooh hot take. researchers should stop doing security testing for OpenAI for free. aren’t they just publishing these papers, with full details on how it might be fixed, with no compensation for that labor?
bogus. this should work more like pen testing or finding zero-day exploits. make these capitalist “oPeN” losers pay to secure the shit they create.
(pls tell me why im wrong if i am instead of downvoting, just spitballing here)
it absolutely is because of that. i do not disagree :) just asking for a better headline
misleading title:
In addition to the Pride flag, the measure approved by voters bans religious flags and breast cancer awareness flags, according to the Los Angeles Blade.
the measure only applies to city property and does not “ban” individuals from anything. but whatever gets clicks i guess 🤷‍♀️
intelligence, unlike wealth, is not a one-dimensional metric
you could come up with several hundred different definitions of “top 1% intelligence” and get totally different results for every one
never blame the powerless when the one with all the power is calling the shots
doing so is called victim blaming and is generally frowned upon
😂 H.G. Wells was an English writer often known as the “father of science fiction.” He is most famous for writing The War of the Worlds.
You may be thinking of H.G. Hill or Wells Fargo?