this post was submitted on 04 Apr 2025
211 points (96.5% liked)

Technology


Nucleo's investigation identified accounts with thousands of followers posting illegal content that Meta's security systems failed to detect; after being contacted, the company acknowledged the problem and removed the accounts

top 50 comments
[–] General_Effort@lemmy.world 13 points 16 hours ago (2 children)

When I saw this, 2 questions came to mind: How come this isn't immediately reported? Why would anyone upload illegal material to a platform that tracks as thoroughly as Meta's do?

The answer is:

All of those accounts followed the same visual pattern: blonde characters with voluptuous bodies and ample breasts, blue eyes, and childlike faces.

The 1 question that came to mind upon reading this is: What?

[–] WeirdGoesPro@lemmy.dbzer0.com 12 points 14 hours ago (1 children)

I’m a little confused as to how it can still be AI CSAM if the bodies are voluptuous and the breasts are ample. Childlike faces have been the bread and butter of face filters for years.

Which parts specifically have to be childlike for it to be AI CSAM? This is why we need some laws ASAP.

[–] sik0fewl@lemmy.ca 6 points 13 hours ago

Things that you want to understand but sure as fuck ain't gonna Google.

[–] mic_check_one_two@lemmy.dbzer0.com 3 points 14 hours ago (1 children)

My guess is that the algorithm is really good at predicting who will be likely to follow that kind of content, rather than report it. Basically, it flies under the radar purely because the only people who see it are the ones who have a vested interest in it flying under the radar.

[–] General_Effort@lemmy.world 3 points 3 hours ago

Look again. The explanation is that these images simply don't look like any kind of CSAM. The whole story looks like some sort of scam to me.

[–] Bakkoda@sh.itjust.works 17 points 18 hours ago

Please, please, please abandon these platforms. Just stop using them. There's a cycle to these things and once they are past the due date all that's left is rotten. It really is as simple as stop using their platform.

[–] yesman@lemmy.world 51 points 1 day ago (1 children)

after contact, the company acknowledged the problem and removed the accounts

Meta is outsourcing content moderation to journalists.

[–] ICastFist@programming.dev 4 points 15 hours ago

Meta profits from these accounts; it also profits off scam and fraud posts, because they pay for ad space. They have literally no incentive to moderate beyond the bare minimum their automatic tools provide.

[–] Cryophilia@lemmy.world 16 points 20 hours ago (3 children)

If a child is not being harmed, I truly do not give a shit.

[–] AutomaticButt@lemm.ee 14 points 15 hours ago (1 children)

The most compelling argument against AI-generated child porn I have heard is that it normalizes it and makes it harder to tell whether material is real or AI. This allows actual children to get hurt when real content goes unreported or is skimmed over because someone thought it was AI.

[–] yyprum@lemmy.dbzer0.com 8 points 15 hours ago (1 children)

As a counterpart, the fact that it is so easy and simple to get those AI images, compared to the risk and extra effort of doing it for real, could make the actual child abuse become less common and less profitable for mafias and assholes in general. It's a really complex topic that no simple straight answer would solve.

Normalising it would be horrible and should be avoided, but there will always be some amount of people looking for that content. I'd rather have them using AI to create it than going searching for real content. Prosecuting the AI content is not only very inefficient, it might also be harmful: the only content left would be the real thing, which makes those who produce it much harder to catch.

[–] joshchandra@midwest.social 3 points 13 hours ago (1 children)

I'd rather have them using AI to create it than going searching for real content.

A rebuttal to this that I've read is that the easy access may encourage people to dig into it and eventually want "the real thing"... but regardless, with it being FOSS, there's no easy way to stop it anyway... It's just a Pandora's box that we can never close.

[–] yyprum@lemmy.dbzer0.com 4 points 13 hours ago

And I could rebut that: if someone is interested enough to seek it out with AI, they were likely to try to find it anyway without AI. Maybe it would take longer, maybe it would be harder to find... but they'd be the intended audience that is now redirected elsewhere.

To quote myself:

It's a really complex topic that no simple straight answer would solve.

We could rebut again and again, and get nowhere, because either option is hard to discuss: it is simply impossible to produce proper data to prove anything. And worse, defending the use of AI for it can lead to being told you are enabling it in the first place. That's not even counting how many people still believe that AI needs real sample images to produce those (whether the algorithm was trained on CP or not is irrelevant on this particular point, as it is not needed for the images to be created).

[–] RowRowRowYourBot@sh.itjust.works 2 points 19 hours ago (1 children)

What if it features real kid’s faces?

[–] boreengreen@lemm.ee 13 points 18 hours ago

Then it is harming someone.

[–] lepinkainen@lemmy.world 9 points 20 hours ago

Meta doesn’t care about AI generated content. There are thousands of fake accounts with varying quality of AI generated content and reporting them does exactly shit.

[–] XEROAARON@sh.itjust.works 25 points 1 day ago (2 children)

Parents should get their kids to never touch anything “Meta” made or bought.

But then again, those same parents are currently telling the world what their neighbours are doing, what they're eating, and how cute “insert name here” looked in their new school uniform. 🤦‍♂️

[–] 3dmvr@lemm.ee 1 points 12 hours ago

Too bad VR's got a hold, and VRChat's so much worse than the internet chatrooms we grew up with.

[–] imnotafish@midwest.social 11 points 1 day ago* (last edited 1 day ago) (1 children)

They are also providing Meta with free age progression training material when they upload those pictures of their kids each year on the first day of school

[–] XEROAARON@sh.itjust.works 9 points 1 day ago

Yeah without doubt we’re entering a weird and scary time with all this non-consensual AI training and data models.

Especially with the amount of data Meta has across all its platforms.

[–] arararagi@ani.social 11 points 23 hours ago (1 children)

I never saw a child with "voluptuous bodies and ample breasts" though.

[–] source_of_truth@lemmy.world 3 points 14 hours ago

Shh! We're trying to ragebait here! Be outraged!

[–] Infynis@midwest.social 14 points 1 day ago

... Meta's security systems were unable to identify...

I think you mean incentivized to ignore

[–] DFX4509B_2@lemmy.org 1 points 14 hours ago

Stuff like this is a good ad for Pixelfed.

[–] technocrit@lemmy.dbzer0.com 7 points 1 day ago

IG is a total fascist shithole. I closed my "political" acct because all of the sponsored content was fascist trash: zionism, flat earthism, qanon, racist stuff, anti-vax, etc.

Switched to Pixelfed and RSS... and Lemmy ofc.

[–] FauxLiving@lemmy.world 6 points 1 day ago (2 children)

Child Sexual Abuse Material is abhorrent because children were literally abused to create it.

AI generated content, though disgusting, is not even remotely on the same level.

The moral panic around AI that leads to implying that these things are the same thing is absurd.

Go after the people filming themselves literally gang raping toddlers, not the people typing forbidden words into an image generator.

Don't dilute the horror of the production CSAM by equating it to fake pictures.

[–] suicidaleggroll@lemm.ee 8 points 1 day ago* (last edited 1 day ago) (3 children)

Yes, at a cursory glance that's true. AI generated images don't involve the abuse of children, that's great. The problem is the follow-on effects. What's to stop actual child abusers from just photoshopping a 6th finger onto their images and then claiming that it's AI generated?

AI image generation is getting absurdly good now, nearly indistinguishable from actual pictures. By the end of the year I suspect they will be truly indistinguishable. When that happens, how do you tell which images are AI generated and which are real? How do you know who is peddling real CP and who isn't if AI-generated CP is legal?

[–] mic_check_one_two@lemmy.dbzer0.com 1 points 12 hours ago* (last edited 12 hours ago)

What's to stop actual child abusers from just photoshopping a 6th finger onto their images and then claiming that it's AI generated?

Aside from the other arguments people have presented, this wrecks one of the largest reasons that people produce CSAM. Pedophiles are insular data hoarders by necessity, because actually creating and procuring it is such a big risk. Every time they go online to find new content, they’re at risk of stumbling into a honeypot. And producing it requires IRL work, and a LOT of risk of being caught/turned in by the victim. They tend to form tight-knit rings, and one of the only reliable ways to get into a ring as an outsider is to provide your own CSAM to the others. CSAM is traded in these rings like baseball cards, where you need fresh content in order to receive fresh content.

The data hoarding side of things is where all of the “cops bust pedophile with 100TB of CSAM” headlines come from. In reality, it was probably more like 1TB of videos (which is a lot, but not unheard of) backed up multiple times in multiple places, because losing it would be catastrophic for the CSAM producer; they can’t simply go grab a new Blu-ray of it. And the cops counted the full size of each backup disk, not just the space that was used.

Intentionally marking your content as AI-generated would ruin the trading value, because nobody will see it as valuable/worth trading for if it’s fake. At best, you won’t get anything for it. At worst, you’d be labeled a cop trying to pass off AI content to gather evidence.

[–] FauxLiving@lemmy.world 16 points 23 hours ago

What's the follow on effect from making generated images illegal?

Do you want your freedom to be at stake where the question before the Jury is "How old is this image of a person (that doesn't exist?)". "Is this fake person TOO child-like?"

When that happens, how do you tell which images are AI generated and which are real? How do you know who is peddling real CP and who isn't if AI-generated CP is legal?

You won't be able to tell, we can assume that this is a given.

So the real question is:

Who are you trying to arrest and put in jail and how are you going to write that difference into law so that innocent people are not harmed by the justice system?

To me, the evil people are the ones harming actual children. Trying to blur the line between them and people who generate images is a morally confused position.

There's a clear distinction between the two groups and that distinction is that one group is harming people.

[–] ExLisper@lemmy.curiana.net 10 points 23 hours ago

If pedophiles won't be able to tell what's real and what's AI generated why risk jail to create the real ones?

[–] Grimy@lemmy.world -1 points 21 hours ago* (last edited 21 hours ago) (1 children)

Although that's true, such material can easily be used to groom children which is where I think the real danger lies.

I really wish they had excluded children in the datasets.

You can't really put a stop to it anymore but I don't think it should be something that's normalized and accepted just because there isn't a direct victim anymore. We are also talking about distribution here and not something being done in private at home.

[–] Cryophilia@lemmy.world -2 points 20 hours ago (2 children)

such material can easily be used to groom children

This literally makes no sense.

[–] Grimy@lemmy.world 5 points 19 hours ago (1 children)

Kids will do things if they see other children doing them in pictures and videos. It's easier to normalize sexual behavior with CP than without.

[–] Cryophilia@lemmy.world 1 points 17 hours ago (1 children)

This sounds like you're searching really hard for a reason to justify banning it. Pretty tenuous "what if" there.

Like, a dildo could hypothetically be used to sexualize a child. Should we ban dildos?

It's so vague it could apply to anything.

[–] Grimy@lemmy.world -4 points 17 hours ago* (last edited 17 hours ago) (1 children)

Banning the tech, banning generated cp on the internet or banning it at home?

I'm a big advocate of AI and don't personally want any kind of banning or censorship of the tools.

I don't think it should be published on any kind of image sharing sites. I don't hold people publishing it in high regard and I'm not against some kind of consequence. I generally view prison as unproductive though.

At home, I'm not sure. People, imo, can do what they want behind closed doors. I don't want any kind of surveillance, but I don't know how I would react if it got brought up at a trial as a kind of evidence, when the allegations involve that theme (child molestation).

I also don't think we need much of a reason to ban it on the web.

[–] Cryophilia@lemmy.world 6 points 17 hours ago (1 children)

It would probably make me distrust the prosecution, like if they're bringing this up they must not have much to go on. Like every time a black man is shot by police they bring up that he smoked weed.

I guess my main complaint is that it's insane to view it as equivalent to real CP, and it's harmful to waste any resources prosecuting it.

[–] Grimy@lemmy.world 1 points 12 hours ago

That's fair. We can also expect proper moderation from social media sites. I'm okay with a light touch, but it shouldn't be floating around, if you get what I mean.

[–] sunzu2@thebrainbin.org 7 points 1 day ago

Amazing how much of the internet has essentially turned into a threat actor paradise.

[–] jordanlund@lemmy.world 5 points 1 day ago

Some day, kids using social media will be child abuse...

[–] stucljr@lemmy.world 2 points 1 day ago

That's not surprising but it's messed up