this post was submitted on 22 May 2024
296 points (96.8% liked)

News

Welcome to the News community!

Rules:

1. Be civil


Attack the argument, not the person. No racism/sexism/bigotry. Good-faith argumentation only. Accusing another user of being a bot or paid actor counts as uncivil. Trolling is uncivil and is grounds for removal and/or a community ban. Do not respond to rule-breaking content; report it and move on.


2. All posts should contain a source (URL) that is as reliable and unbiased as possible, and must contain only one link.


Overtly right- or left-wing sources will be removed at the mods' discretion. We have an actively updated blocklist, which you can see here: https://lemmy.world/post/2246130. If you feel any website is missing, contact the mods. Supporting links can be added in comments or posted separately, but not in the post body.


3. No bots, spam or self-promotion.


Only approved bots, which follow the guidelines for bots set by the instance, are allowed.


4. Post titles should match the title of the source article.


Posts whose titles don't match the source won't be removed outright; AutoMod will notify you, and if your title misrepresents the original article, the post will be deleted. If the site changed its headline, the bot might still contact you; just ignore it, and we won't delete your post.


5. Only recent news is allowed.


Posts must be news from the most recent 30 days.


6. All posts must be news articles.


No opinion pieces, listicles, editorials, or celebrity gossip are allowed. All posts will be judged on a case-by-case basis.


7. No duplicate posts.


If a source you used was already posted by someone else, AutoMod will leave a message. Please remove your post if AutoMod is correct. If the matching post is very old, see rule 5.


8. Misinformation is prohibited.


Misinformation and propaganda are strictly prohibited. Any comment or post containing or linking to misinformation will be removed. If you feel your post was removed in error, you must provide credible sources.


9. No link shorteners.


AutoMod will contact you if a link shortener is detected; please delete your post if it is right.


10. Don't copy the entire article into your post body


For copyright reasons, you are not allowed to copy an entire article into your post body. This is an instance-wide rule that is strictly enforced in this community.

[–] Empricorn@feddit.nl 86 points 5 months ago (10 children)

This is tough. If it were just a sicko who generated the images for himself locally... that is the definition of a victimless crime, no? And it might actually dissuade him from seeking out real CSAM...

BUT, iirc he was actually distributing the material, and even contacted minors, so... yeah he definitely needed to be arrested.

But, I'm still torn on the first scenario...

[–] kromem@lemmy.world 66 points 5 months ago (3 children)

> But, I'm still torn on the first scenario...

To me it comes down to a single question:

"Does exposure and availability to CSAM for pedophiles correlate with increased or decreased likelihood of harming a child?"

If there's a reduction effect by providing an outlet for arousal that isn't actually harming anyone - that sounds like a pretty big win.

If there's a force multiplier effect where exposure and availability means it's even more of an obsession and focus such that there's increased likelihood to harm children, then society should make the AI generated version illegal too.

[–] TheDoozer@lemmy.world 52 points 5 months ago (1 children)

Hoooooly hell, good luck getting that study going. No ethical concerns there!

[–] ricecake@sh.itjust.works 13 points 5 months ago

How they've done it in the past is by tracking the criminal history of people caught with CSAM, arrested for abuse, or some combination thereof, or by tracking the outcomes of people seeking therapy for pedophilia.

It's not perfect due to sample biases, and the results are quite inconsistent, even amongst similar populations.

[–] HonoraryMancunian@lemmy.world 19 points 5 months ago

I'm willing to bet it'll differ from person to person, to complicate matters further

I think the general consensus is that availability of CSAM is bad, because it desensitizes and makes harming actual children more likely. But I must admit that I only remember reading about that and don't have a scientific source.

[–] Dave@lemmy.nz 12 points 5 months ago (2 children)

What is the AI trained on?

[–] FaceDeer@fedia.io 53 points 5 months ago (28 children)

Image-generating AI is capable of generating images that are not like anything that was in its training set.

[–] not_that_guy05@lemmy.world 54 points 5 months ago (4 children)

Fuck that guy first of all.

What makes me wonder, though: what about all that cartoon porn showing cartoon kids? What about hentai showing younger kids? What's the difference, if all of it is fake and being distributed online as well?

Not defending him.

[–] jeffw@lemmy.world 30 points 5 months ago (1 children)

I think there’s certainly an argument here. What if the hentai was more lifelike? What if the AI stuff was less realistic? Where’s the line?

At least in the US, courts have been pretty shitty at defining things like “obscenity”. This AI stuff might force them to delineate more clearly.

[–] tatterdemalion@programming.dev 14 points 5 months ago

What if someone draws their own CSAM and they're terrible at drawing but it's still recognizable as CSAM?

[–] ricecake@sh.itjust.works 17 points 5 months ago (3 children)

Ethically is one question, but the law is written to cover, fairly narrowly, only photograph-style visual depictions that a reasonable person would find virtually indistinguishable from an actual child engaged in explicit conduct, and that also lack any other artistic or cultural significance.
Or in short: if it looks like an actual image of actual children being actually explicit, then it's illegal.

[–] 0x0001@sh.itjust.works 37 points 5 months ago (1 children)

One thing to consider: if this turned out to be accepted, it would make it much harder to prosecute actual CSAM, since people could claim "AI generated" for real images.

[–] theherk@lemmy.world 22 points 5 months ago (20 children)

I get this position, truly, but I struggle to reconcile it with the feeling that artwork of something and photos of it aren’t equal. In a binary way they are, but with more precision they’re pretty far apart. But I’m not arguing against it, I’m just not super clear how I feel about it yet.

[–] eating3645@lemmy.world 24 points 5 months ago (4 children)

I find it interesting that the relabeling of CP as CSAM weakens their argument here. "CP generated by AI is still CP" makes sense, but if there's no abusee, it's just CSM. Makes me wonder if they would have skipped the rebrand had they known about the proliferation of AI pornography.

[–] Stovetop@lemmy.world 31 points 5 months ago (9 children)

The problem is that it abets the distribution of real CSAM more easily. If a government declares "these types of images are okay if they're fake", you've given plausible deniability to real CSAM distributors, who can now claim the material is AI generated, placing the burden on the legal system to prove the contrary. The end result will be a lot of real material flying under the radar because of weak evidence, and continued abuse of children.

Better to just blanket ban the entire concept and save us all the trouble, in my opinion. Back before it was so easy to generate photorealistic images, it was easier to overlook victimless CP because illustrations are easy to tell apart from reality, but times have changed, and so should the laws.

[–] kromem@lemmy.world 12 points 5 months ago (8 children)

Not necessarily. There's been a lot of advances in watermarking AI outputs.

As well, there's the opposite argument.

Right now, pedophile rings have very high price points to access CSAM or require users to upload original CSAM content, adding a significant motivator to actually harm children.

Just as rule 34 artists were very upset that AI could create what they were getting commissions to create, AI-generated CSAM would be a significant dilution of the market.

Is the average user really going to risk prison, pay a huge amount of money or harm a child with an even greater prison risk when effectively identical material is available for free?

Pretty much overnight the CSAM dark markets would lose the vast majority of their market value and the only remaining offerings would be ones that could demonstrate they weren't artificial to justify the higher price point, which would undermine the notion of plausible deniability.

Legalization of AI generated CSAM would decimate the existing CSAM markets.

That said, the real question that needs to be answered from a social-responsibility perspective is what net effect CSAM access has on pedophiles' proclivity to offend. If access reduces offending, then it's an open-and-shut case that it should be legalized. If it increases offending, then we should probably keep it very much illegal, even if that continues to enable dark markets for the real thing.

[–] prettydarknwild@lemmy.world 20 points 5 months ago* (last edited 5 months ago) (2 children)

oh man, I love the future: we haven't solved world hunger or reduced carbon emissions to zero, and we are on the brink of a world war, but now we have AIs that can generate CSAM and fake footage on the fly 💀

[–] Dasus@lemmy.world 24 points 5 months ago (3 children)

Technically we've solved world hunger. We've just not fixed it, as the greedy fucks who hoard most of the resources of this world don't see immediate capital gains from just helping people.

Pretty much the only real problem is billionaires being in control.

[–] Obonga@feddit.de 18 points 5 months ago

You can't generate abuse material without abuse. Generative AI does not need any indecent training material to be able to produce indecent material.

But it makes a nice story to shock and scare many people, so I guess the goal is reached.

[–] IHeartBadCode@kbin.social 17 points 5 months ago* (last edited 5 months ago) (12 children)

Quick things to note.

One: yes, some models were trained on CSAM. In AI you'll have checkpoints in a model; as a model learns new things, you get a new checkpoint. SD1.5 was the base model used in this case. SD1.5 itself was not trained on any CSAM, but people have given SD1.5 additional training to create new checkpoints with CSAM baked in. Likely, that is what this person was using.

Two: yes, you can get something out of a model that was never in the model to begin with. It's complicated, but one way to think about it: a program draws raw pixels to the screen, and your GPU applies some math to smooth that out. That math adds information the program never explicitly pushed to your screen.
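
A hedged toy sketch of that point (plain NumPy, values invented for illustration): interpolation, the simplest kind of smoothing, produces pixel values that appear nowhere in the source data.

```python
import numpy as np

# A tiny 2x2 "image" whose only pixel values are 0 and 100.
src = np.array([[0.0, 100.0],
                [100.0, 0.0]])

# Bilinear interpolation at the centre of the four pixels is just
# their average: 50.0, a value that exists nowhere in the input.
# The smoothing step has "invented" new information.
centre = src.mean()
print(centre)  # 50.0
```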

Models have tensors which, long story short, are a way to express the average way pixels should land to arrive at some object. This is why you see six-fingered people in AI art: no six-fingered person was fed into the model; what you are seeing is the averaging of weights pushing pixels between two different relationships for the word "hand". That averaging adds new information in the form of an additional finger.

I won't deep-dive into the maths of it, but there are ways to coax new ways of averaging weights to arrive at new outcomes. Training tells the model that the relationship between A and C is B'. If we wanted D' as the outcome, we could retrain the model on C and E, OR we could use things called LoRAs to shift the low-order ranking of B' towards D'. This doesn't require retraining the model; we're just providing guidance on new ways to average things the model has already seen. Retraining on C and E to get D' is the route old models and checkpoints had to take, and it requires a lot of images. Taking the outcome B' and putting a thumb on the scale to push it to D' is the easier route: it just requires a generalized teaching of how to skew the weights.
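
For the LoRA part, here's a hedged toy sketch (plain NumPy, made-up sizes, not the actual Stable Diffusion internals): the base weights stay frozen and a small low-rank product gets added on top, which is the "thumb on the scale".

```python
import numpy as np

rng = np.random.default_rng(0)

d = 512     # toy layer width; real SD layers are much larger
rank = 8    # the "low order" part: rank << d

# Frozen weight matrix from the base model -- never modified.
W = rng.normal(size=(d, d))

# The LoRA adapter: two small matrices whose product is a
# low-rank nudge. Only 2*d*rank numbers get trained, not d*d.
# (Random stand-ins here for values training would learn.)
A = rng.normal(size=(d, rank)) * 0.01
B = rng.normal(size=(rank, d)) * 0.01

x = rng.normal(size=(d,))

# Forward pass: base behaviour plus the learned skew on top.
y = W @ x + A @ (B @ x)
```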

I know this is massively summarizing things, and yeah, I get it: it's a bit hard to conceptualize how we can go from something like MSAA to generating CSAM, and I'm skipping over a lot of steps here. But at the end of the day, those tensors are just numbers that tell the program how to push pixels around given a word. You can do maths on those numbers to get results the numbers weren't originally arranged to produce. AI models are not databases; they aren't recalling pixel-for-pixel images they've seen before, they're averaging out averages of averages.
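
A hedged back-of-envelope on the "not a database" point (both figures are rough public numbers, order-of-magnitude assumptions rather than exact values):

```python
# An SD1.5 checkpoint is roughly 4 GB; the base model was trained
# on billions of LAION images (order of magnitude).
checkpoint_bytes = 4e9
training_images = 2e9

print(checkpoint_bytes / training_images)  # ~2 bytes per image

# Two bytes can't store a picture, so the weights can't be a
# lookup table of the training set; they encode statistics.
```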

I think this case will be a slam dunk, because this person's model was highly likely an SD1.5 checkpoint that was trained on very bad things. But now that you can change how the averaging itself works, rather than the source tensors in the model, you can teach a model new ways to average weights and obtain results it didn't originally have, without any kind of source material to train on. It's like the difference between spatial anti-aliasing and MSAA.
