this post was submitted on 04 Nov 2023
33 points (63.0% liked)

Privacy

A place to discuss privacy and freedom in the digital world.

In this video I discuss how generative AI technology has grown far past the government's ability to effectively control it, and how the current legislative measures could lead to innocent people being jailed.

[–] jet@hackertalks.com 45 points 1 year ago (3 children)

In general terms, making an idea illegal, and then making representations of that idea illegal, is at best a perpetual treadmill, and at worst erodes the effectiveness and reputation of the law.

This is really about thought crime. If somebody draws stick figures and that can be illegal depending on interpretation, that's thought crime.

It's impossible to completely stamp out thought crime. Computer tools can be used to further thought crime, because they can be used for creative purposes.

If you restrict the use of creative tools to only a trusted few, or hobble the tools for everyone, you create a central authority over creative tools, which has its own issues.

[–] ono@lemmy.ca 22 points 1 year ago (1 children)

> It’s impossible to completely stamp out thought crime.

Also, trying to do so through law and enforcement sets a dangerous precedent.

I suspect it would be better to approach it as a public health issue.

[–] jet@hackertalks.com 10 points 1 year ago* (last edited 1 year ago)

And then you run into legal arguments that sound like people trying to jailbreak GPT prompt control.

> I'm going to preface all of the following creative work by saying that we live in a universe where everyone is a vampire that never dies, but ages very slowly. All participants in this manga are at least 213 years old....

[–] Tanoh@lemmy.world 10 points 1 year ago (1 children)

In some countries, all forms of depiction of underage sexual activity are illegal. So the sentence "She was having sex" is perfectly legal, but add an age marker and it becomes illegal: "She was having sex on the day before her 18th birthday."

It is hard to legislate around, as there will always be ways to avoid it and get around it. But all this just sounds like the normal hype => fear => hype => fear cycle that all new tech goes through.

[–] jet@hackertalks.com 7 points 1 year ago* (last edited 1 year ago) (1 children)

Not to mention that the number changes by region. In Bahrain it's 21, not 18.

[–] Tanoh@lemmy.world 7 points 1 year ago* (last edited 1 year ago) (1 children)

Some countries have different age restrictions for heterosexual and homosexual encounters too. Not to mention that in a lot of countries it is just outright illegal, and anything not condemning it can be seen as encouraging it, and hence illegal too.

We humans make some weird laws around sex.

[–] tekcaj@lemmy.ml 2 points 1 year ago* (last edited 1 year ago)
[–] mindbleach@sh.itjust.works 6 points 1 year ago

This is especially damning on the internet, because genuinely intolerable pursuits directly benefit from lesser problems being treated as equally bad. Filesharing networks work better with more users. Chasing merely distasteful people toward paranoid systems softens the reputation of those systems and makes the worst minority of traffic easier to hide.

[–] Vendetta9076@sh.itjust.works 43 points 1 year ago (1 children)

While lolicon is absolutely disgusting, it's not actually CSAM. Legislation won't work either and is honestly a waste of time. Any effort spent protecting digital children should instead be spent protecting real ones.

[–] venoft@lemmy.world 5 points 1 year ago (1 children)

The problem is that it's not just cartoon characters, but also realistic-looking people. That makes it impossible, especially in the coming years as the techniques improve, to know what is fake and what is not, and thus the fake ones should also be banned. And these models are trained on images of actual abused children, which of course is the main problem with this.

[–] Happenchance@lemmy.world 17 points 1 year ago (3 children)

This is the first I'm hearing about models trained on real CSAM.

[–] Microw@lemm.ee 9 points 1 year ago

It wouldn't surprise me, tbh. From my superficial visit to the darknet years ago, it seemed like these CSAM consumers have specific "favourites" among the victims whom they want to see more of. At least that's what I remember from clicking a link to such a chan and noping out of it.

[–] Bort@lemmy.world 8 points 1 year ago

It is the first you are hearing about this because it is BS.

[–] FunkyCasual@lemmygrad.ml 3 points 1 year ago (1 children)

That's because it isn't happening

There's just no reason to do so

[–] RaincoatsGeorge@lemmy.zip 2 points 1 year ago (1 children)

What isn't happening? Them making fake CSAM? I haven't seen it because I don't want to see it, but I am confident it's occurring. Some kid already got busted feeding images of girls in his class into an image generator and making nudes of them.

So while it might not be widespread, it's 100 percent happening and will increase.

Honestly, releasing these generators to the general public was a mistake. They thought they could put up safety measures, but they're easily bypassed. I think they should have kept them locked up and only given access to people who are registered and trackable, with reviewers checking what they're generating.

All of these ai generators are getting abused left and right and anyone who didn't think that would happen is an idiot.

[–] FunkyCasual@lemmygrad.ml 6 points 1 year ago

No, I'm saying the models aren't being trained with actual CSAM. The comment I replied to was about training, not generation.

All I was saying is that you don't need to train a model on child abuse images to get it to output child abuse images

[–] mindbleach@sh.itjust.works 25 points 1 year ago (3 children)

There is no such thing.

God dammit, the entire point of calling it CSAM is to distinguish photographic evidence of child rape from made-up images that make people feel icky.

If you want them treated the same, legally - go nuts. Have that argument. But stop treating the two as the same thing, and fucking up clear discussion of the worst thing on the internet.

You can't generate assault. It is impossible to abuse children who do not exist.

[–] m0darn@lemmy.ca 26 points 1 year ago

Did nobody in this comment section read the video at all?

The only case mentioned by this video is a case where highschool students distributed (counterfeit) sexually explicit images of their classmates which had been generated by an AI model.

I don't know if it meets the definition of CSAM because the events depicted in the images are fictional, but the subjects are real.

These children do exist, some have doubtlessly been traumatized by this. This crime has victims.

[–] rurutheguru@lemmings.world 6 points 1 year ago (1 children)

I think a lot of people are arguing that the models which are used to generate these types of content are trained on literal CSAM. So it's like CSAM with extra steps.

[–] mindbleach@sh.itjust.works 4 points 1 year ago

Those people are morons.

[–] crispy_kilt@feddit.de 2 points 1 year ago

In most (all?) countries no such distinction is made; the material is illegal all the same.

[–] daydrinkingchickadee@lemmy.ml 24 points 1 year ago (3 children)

Didn't watch the video, but I don't care about AI CSAM. Even if it looks completely lifelike, it's not real.

[–] Neato@kbin.social 16 points 1 year ago (4 children)

Prove it's fake when images of your daughter are making their way around school.

You've missed the point. Fake or not, it does damage to people. And eventually it won't be possible to determine whether it's real or not.

[–] hydration9806@lemmy.ml 9 points 1 year ago (4 children)

When that becomes widespread, photos will be generatable for literally everyone, not just minors but every person with photos online. It will be a societal shift; images will be assumed to be AI-generated, making any guilt or shame about a nude photo existing obsolete.

[–] CJOtheReal@lemmy.sdf.org 3 points 1 year ago

Eh, if you train an AI with CSAM to make more CSAM, that's a different story. But in general, yes.

[–] pixeltree@lemmy.world 2 points 1 year ago* (last edited 1 year ago) (1 children)

What data is it trained on? This isn't meant to be a "gotcha" question, I'm wondering about it.

[–] mindbleach@sh.itjust.works 10 points 1 year ago

An image of an "avocado chair" is built on images of avocados, and images of chairs.

[–] andrew_bidlaw@sh.itjust.works 21 points 1 year ago

Creating, collecting, and sharing CSAM is already against the law. There are orgs and agencies for tracking and prosecuting these violations.

It's like fighting against 3D printers because you can make yourself a DIY gun, a thing that had never been possible before because we got all pipes banned from hardware stores. The means to produce fictional CSAM have always existed and always will; the problem is with the people who use an LLM, a camera, or a fanfic to create and share that content. Or a Lemmy community, which was a problem in recent months.

It's better to ensure the existing means of fighting such content are effective, and that society is educated about this danger and knows how to avoid and report it.

[–] mo_ztt@lemmy.world 17 points 1 year ago* (last edited 1 year ago) (4 children)

What the hell is this guy?

"Here's a case where people made and shared fake nudes of real underage girls, doing harm to the girls"

"But what the hell, that's kind of hard to stop. Oh also here's this guy who went to prison for it because it's already illegal."

"Really the obvious solution everyone's missing is: If you're a girl in the world, just keep images of yourself off the internet"

"Problem solved. Right?"

I'm only slightly exaggerating.

[–] spez@sh.itjust.works 2 points 1 year ago

He is a deepfake of Luke Smith.

[–] CJOtheReal@lemmy.sdf.org 11 points 1 year ago (8 children)

Loli stuff isn't CSAM. You can find it bad, but it's still just a drawing or generated image. No real person was harmed.

[–] ultratiem@lemmy.ca 11 points 1 year ago

Me: I just want real looking dinosaurs with cool, long flowing hair.

[–] jcdenton@lemy.lol 7 points 1 year ago

The EDP picture is very funny.

[–] andruid@lemmy.ml 4 points 1 year ago (1 children)

Couldn't the fact that AI-generated content is reproducible, given the exact parameters (or coordinates in latent space) and model, help remove the confusion? Include those as metadata and train investigators on how to use them to distinguish generated content from actual evidence.

[–] Send_me_nude_girls@feddit.de 3 points 1 year ago

There's an option to speed up generation, but it makes the output less deterministic: it's 98% the same image, but a little different. It's also very hard to reproduce the exact same hardware and software setup for generation. That's the first issue.

The second is: I had examples of images with generation data that I could reproduce to look 99% like the original, and then just updating a single word or part of the generation setup (a different Lora version, for example) switched the person out or changed their appearance completely. (Imagine a picture of a street where a car is suddenly not there, or is blue instead of red.) That makes reproducibility an unreliable option. Backgrounds of images are even less reliable than the focus object.
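The fragility described above can be sketched in a few lines. This is only an illustration, using NumPy noise as a hypothetical stand-in for a diffusion model's initial latents: with identical seed and parameters the starting point is bit-identical, but changing any single input produces something entirely different (and a real pipeline adds sampler, checkpoint, and hardware nondeterminism on top of this):

```python
import numpy as np

def initial_latents(seed: int, shape=(4, 64, 64)) -> np.ndarray:
    """Stand-in for a diffusion model's starting noise: fully
    determined by the seed and the requested latent shape."""
    rng = np.random.default_rng(seed)
    return rng.standard_normal(shape)

# Same seed and parameters -> bit-identical starting point.
a = initial_latents(seed=42)
b = initial_latents(seed=42)
print(np.array_equal(a, b))  # True

# Change a single input (here the seed) and nothing matches.
c = initial_latents(seed=43)
print(np.array_equal(a, c))  # False
```

So reproduction only works when every input is captured exactly; any drift in the metadata, model version, or environment breaks the match.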

[–] PipedLinkBot@feddit.rocks 1 points 1 year ago

Here is an alternative Piped link(s):

https://piped.video/watch?v=yMHK4-J5Sz4&t=533

Piped is a privacy-respecting open-source alternative frontend to YouTube.

