News
Welcome to the News community!
Rules:
1. Be civil
Attack the argument, not the person. No racism/sexism/bigotry. Good faith argumentation only. This includes accusing another user of being a bot or paid actor. Trolling is uncivil and is grounds for removal and/or a community ban. Do not respond to rule-breaking content; report it and move on.
2. All posts should contain a source (url) that is as reliable and unbiased as possible and must only contain one link.
Obvious right- or left-wing sources will be removed at the mods' discretion. We have an actively updated blocklist, which you can see here: https://lemmy.world/post/2246130 If you feel like any website is missing, contact the mods. Supporting links can be added in comments or posted separately, but not to the post body.
3. No bots, spam or self-promotion.
Only approved bots, which follow the guidelines for bots set by the instance, are allowed.
4. Post titles should be the same as the article used as source.
Posts whose titles don't match the source won't be removed, but the autoMod will notify you, and if your title misrepresents the original article, the post will be deleted. If the site changed its headline, the bot might still contact you; just ignore it, we won't delete your post.
5. Only recent news is allowed.
Posts must be news from the most recent 30 days.
6. All posts must be news articles.
No opinion pieces, listicles, editorials, or celebrity gossip are allowed. All posts will be judged on a case-by-case basis.
7. No duplicate posts.
If a source you used was already posted by someone else, the autoMod will leave a message. Please remove your post if the autoMod is correct. If the post that matches your post is very old, we refer you to rule 5.
8. Misinformation is prohibited.
Misinformation / propaganda is strictly prohibited. Any comment or post containing or linking to misinformation will be removed. If you feel that your post has been removed in error, credible sources must be provided.
9. No link shorteners.
The autoMod will contact you if a link shortener is detected; please delete your post if it is right.
10. Don't copy an entire article into your post body
For copyright reasons, you are not allowed to copy an entire article into your post body. This is an instance-wide rule that is strictly enforced in this community.
Do a Google Image search for "child" or "teenager" or other such innocent terms, you'll find plenty of such.
I think you're underestimating just how well AI is able to learn basic concepts from images. A lot of people imagine these AIs as being some sort of collage machine that pastes together little chunks of existing images, but that's not what's going on under the hood of modern generative art AIs. They learn the underlying concepts and characteristics of what things are, and are able to remix them conceptually.
And conceptually, if I had never seen my cousin in the nude, I'd never know what young people look like naked.
No, that's not a concept; that's a fact. AI has seen inappropriate things, and it doesn't fully know the difference.
You can't blame the AI itself, but you can and should blame any and all users that have knowingly fed it bad data.
I don't believe you're fully arguing in good faith here.
I'm assuming you've seen a naked adult, and if you had never seen a naked young person, I don't believe for one second you would be unable to infer what a naked young person might look like. You might not know for certain, but your best guess would likely be very accurate.
Generative AI can absolutely make those same inferences, so it does not need inappropriate training material for it to generate it.
The AI knows what a young person looks like.
It knows what a clothed adult looks like.
It knows what an unclothed adult looks like.
An AI trained on 100% legal material could make that inappropriate inference without even trying.
Now, have all the popular AI models actually been trained on 100% legal material? I ~~have no way of knowing that answer, but you're incorrect to assume that just because it can output inappropriate images, that absolutely 100% proves that data was also included in its training input.~~ Edit: nevermind, it definitely has been trained on inappropriate material, but that doesn't disprove that it doesn't need to be.
Well, how do you train an AI model on any set of information without the risk of it confusing good information with bad info...?