News
Welcome to the News community!
Rules:
1. Be civil
Attack the argument, not the person. No racism, sexism, or bigotry. Argue in good faith; accusing another user of being a bot or paid actor counts as bad-faith argumentation. Trolling is uncivil and is grounds for removal and/or a community ban. Do not respond to rule-breaking content; report it and move on.
2. All posts should contain a source (URL) that is as reliable and unbiased as possible, and must contain only one link.
Obviously biased sources will be removed at the mods' discretion. Supporting links can be added in comments or posted separately, but not in the post body. Sources may be checked for reliability using Wikipedia, MBFC, AdFontes, GroundNews, etc.
3. No bots, spam or self-promotion.
Only approved bots, which follow the guidelines for bots set by the instance, are allowed.
4. Post titles should be the same as the source article's. Clickbait titles may be removed.
Posts whose titles don't match the source may be removed. If the site changes its headline, we may ask you to update the post title. Clickbait titles use hyperbolic language and do not accurately describe the article's content. When necessary, post titles may be edited and clearly marked with [brackets], but they may never be used to editorialize or comment on the content.
5. Only recent news is allowed.
Posts must be news from the most recent 30 days.
6. All posts must be news articles.
No opinion pieces, listicles, editorials, videos, blogs, press releases, or celebrity gossip. All posts will be judged on a case-by-case basis. Mods may use discretion to pre-approve videos or press releases from highly credible sources that provide unique, newsworthy content not available or possible in another format.
7. No duplicate posts.
If an article has already been posted, it will be removed. Different articles reporting on the same subject are permitted. If the matching earlier post is very old, see rule 5.
8. Misinformation is prohibited.
Misinformation and propaganda are strictly prohibited. Any comment or post containing or linking to misinformation will be removed. If you feel your post has been removed in error, provide credible sources to support it.
9. No link shorteners or news aggregators.
All posts must link to the original article source. You may include archival links in the post description. News aggregators such as Yahoo, Google, Hacker News, etc. should be avoided in favor of the original source link. Stories from newswire services such as AP, Reuters, or AFP are frequently republished and may be shared from other credible sources.
10. Don't copy the entire article into your post body.
For copyright reasons, you are not allowed to copy an entire article into your post body. This is an instance-wide rule that is strictly enforced in this community.
It sounds like the person who entered a six-word prompt wasn't clear enough to indicate whether they meant 'actual historical pope' or 'possible pope that could exist in the future' and expected the former. The results met the criteria of the vague prompt.
That's not how an ANN should react if it was simply trained on images of past popes. The diversity had to be part of the training. This is a simple technical statement.
So if someone wrote a prompt to make an image of a black woman as a pope, would you expect the model to only return historical popes?
If the model is supposed to be able to produce both historically accurate images and mere possibilities, why would the expectation for a vague prompt be historical rather than possible?
If the model is supposed to default to historical accuracy, how would it handle a request for a red dragon? Just the painting named Red Dragon, or dragons from mythology and popular media?
Yes, there could be something that promotes diversity, or it could just be that the default behavior doesn't have context for which content 'should' be historically accurate and which is just a randomized combination of position/race/gender.
Of course it will draw a black female pope if you request one, but if you do not, it will not. As a gross approximation, an ANN is an interpolator of known data points (with some noise), and if you ask simply for a pope, it will interpolate between the images of popes it learned. Since all of them are white men, it is highly unlikely for the ANN to produce a black woman (the noise would have to be very high). If you ask for a black female pope, it will start to interpolate between images of popes and images of black women. You have to tune the model so that when you ask just for a pope, something else pushes the model to consider otherwise irrelevant images.
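As a toy illustration of that interpolation argument (my own sketch in Python; real diffusion models work in a high-dimensional latent space, not a 1-D axis):

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend each training image of a pope is one point on a 1-D
# "appearance" axis, tightly clustered because all historical
# popes look broadly similar.
pope_points = rng.normal(loc=0.0, scale=0.2, size=10_000)

# "Interpolation with noise": sample near known data points.
samples = rng.choice(pope_points, size=100_000) + rng.normal(0.0, 0.05, size=100_000)

# Treat anything past 2.0 as a region the training data never
# covered (e.g. a black female pope).
print(f"fraction landing in the uncovered region: {(samples > 2.0).mean():.6f}")
```

The sampler essentially never leaves the cluster on its own; something external has to push it out there, which is the tuning described above.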
I'd expect a lot of models to struggle with making the pope female, making the pope black, or making a black female a pope unless they build in some kind of technique to make replacements. The thing is, a neural net reproduces what you put into it, and I assume the bias is largely towards old white men, since those images are far more readily found.
Even targeted prompts, like a zebra with rainbow-colored stripes, gave very limited results six months ago: getting even 50% of the stripes to be something other than black and white was rare. I had to generate multiple times with a lot of negative terms just to get close. Currently, Copilot's first generation matches my idea behind the prompt.
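For anyone curious what "negative terms" means in practice, here is a minimal sketch using the Hugging Face diffusers library (the model ID and the exact terms are just examples, not what Copilot uses internally):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a publicly available Stable Diffusion checkpoint.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# The negative prompt steers generation away from the listed concepts.
image = pipe(
    prompt="a zebra with rainbow colored stripes",
    negative_prompt="black and white stripes, monochrome, grayscale",
    num_inference_steps=30,
).images[0]
image.save("rainbow_zebra.png")
```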
Clearly the step made was a big one, and I imagine tuning was done to ensure the models are capable of returning more diverse results rather than just what is in the data set. It just produces more unexpected results and fewer historically accurate images for these kinds of prompts. And some of them might be quite painful. Still, being always underrepresented in data sets is also quite painful. It's hard to get to a perfect product quickly, but there should be a feature somewhere on their backlog to prevent some substitutions by default (a rough sketch below). Black female popes when requesting a generated pope? To me that is a horizon-broadening feature. Black female Nazis when requesting Nazis? Let that not be a default result.
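A rough sketch of what such a default gate could look like (entirely hypothetical; the term list and function names are mine, not any vendor's code):

```python
# Prompts naming historically specific groups skip the diversity
# broadening; everything else gets it by default.
HISTORICALLY_SPECIFIC = {"nazi", "wehrmacht", "third reich"}

def should_broaden(prompt: str) -> bool:
    """Return True when it is safe to diversify the output."""
    lowered = prompt.lower()
    return not any(term in lowered for term in HISTORICALLY_SPECIFIC)

for p in ("a pope", "a group of nazis marching"):
    print(f"{p!r} -> {'broaden' if should_broaden(p) else 'leave as-is'}")
```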
That's not really true; they learn based on layers of data, so it might have learned that a pope is a person in a silly outfit, and then in the layer below that, that a person can be old or young, of a range of ethnicities or genders... That's why you can ask for a gopnik pope or a sexy pope.
You would expect it to make stereotypical old male popes, but people wrote similar articles complaining that asking for a doctor gave male doctors and asking for a nurse gave female nurses. So instead of telling people to ask for what they actually want, they added nonsense to the prompt, and now people still don't ask for what they want and complain that it goes the other way.
That's not what happened. Behind the scenes, the system was invisibly modifying the prompts to add requests for diversity.
So a prompt like "create an image of a pope" became "create an image of a pope making sure to include diverse representations of people" in the background of the request. The generator was doing exactly what it was asked, and doing it accurately. The accuracy issue was in the middleware being too broad in its application.
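In code, that kind of middleware can be as simple as this sketch (the suffix and function name are illustrative; Google has not published its actual implementation):

```python
DIVERSITY_SUFFIX = " making sure to include diverse representations of people"

def rewrite_prompt(user_prompt: str) -> str:
    # The reported failure mode: the suffix is appended to every image
    # prompt, with no check for historically specific subjects.
    return user_prompt + DIVERSITY_SUFFIX

print(rewrite_prompt("create an image of a pope"))
# create an image of a pope making sure to include diverse representations of people
```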
I just explained a bit of the background on why this was needed here.
It's kind of an interesting double standard that exists in our society. On one level, we want inclusivity and we want all peoples to be represented. Make a movie with an all-white cast and it will get criticized for that, while an all-Latino or all-Asian cast would be fine. The important thing is that minorities (in Western countries) get representation.
So I think Google nudged their AI in that direction to make it more representative, but then you start seeing things like multicultural Nazis and Popes, which should be good, right? Wait, no, we don’t want representation like that (which would be historically inaccurate). Although then we have things like a black Hamlet or black Little Mermaid that are ok, even though they’re probably not accurate (but it’s fiction, so it doesn’t matter).
It probably seems contradictory, and it's hard to program into an algorithm when multiculturalism is appropriate and when it's not. I think they should just take the guardrails off and let it do whatever, because the more they censor these AI models, the more boring their responses get.
If you want historical accuracy you shouldn't be using generative AI in the first place.
Yeah, I think defaulting to multicultural results is good, since it counters the cultural biases in media. Obviously this can lead to seemingly out-of-context situations like this one, which raises the question of how strong the guardrails should be. Minority Nazis are not great, but why would there be any issue with a woman or a minority pope being returned for a generic prompt that doesn't include historical accuracy as a requirement?
There's been at least one female pope. So it's not technically wrong.
That's never been definitively proven.
It's a religious thing. Belief is everything.