this post was submitted on 03 Jan 2026
135 points (99.3% liked)

Technology

cross-posted from: https://lemmy.cafe/post/29389301

A few days ago X introduced a new AI Image Editing button that lets any user modify images posted by others, even without the original uploader’s consent. Image owners are not notified when edits are made, and the feature is enabled by default with no opt-out option (at least not so far).

[–] TehPers@beehaw.org 20 points 3 days ago (1 children)

Grok, put this "small adult" into a bikini and have her bend over.

Creating nudes without consent, especially CSAM (even with consent), can be extremely illegal. Doing it in a photo editor makes you responsible and keeps the result on your device. ChatGPT will attempt to filter it, and its filters lean aggressive, but that's also between you and OpenAI. Grok will post it publicly.

[–] locuester@lemmy.zip 0 points 2 days ago (1 children)

Using Grok this way is the same as using a photo editor. It’s the human asking Grok to do it that is the problem, not Grok or any other tool. Meta’s products have this exact same feature.

Grok will not post it publicly unless you click the button to do so. Again, it’s the person doing it, not Grok.

[–] TehPers@beehaw.org 2 points 1 day ago (1 children)

Ok, yes, you're right. "Grok, generate me some CSAM" is the same as opening up a photo editor and drawing a new, real-looking body onto someone's child and putting it in a new position. Same exact thing. No different at all. Twitter bears no responsibility for running a service that can do this.

[–] locuester@lemmy.zip 1 points 1 day ago (1 children)

You’ve totally changed the original post’s topic and turned it into something obviously unacceptable. There’s a line to cross with content in general, AI or not, and any public AI model should absolutely have safety rails / content moderation on its output.

[–] TehPers@beehaw.org 2 points 1 day ago* (last edited 1 day ago) (1 children)

Surely you have an example where it's appropriate for a service to generate nonconsensual deepfakes of people then? Because last I checked, that's what the post's topic is.

And yes, children are people. And yes, it's been used that way.

Edit: as for guardrails, yes, any service should have them. We all know what Grok's are though, coming from Elon "anti-censorship" Musk. I mentioned ChatGPT also generating images; it has very strict guardrails. It still makes mistakes though, and that's still unacceptable. Also, any amount of local fine-tuning of these models can accidentally delete their guardrails, so yeah.

[–] locuester@lemmy.zip 1 points 1 day ago

This post isn’t exclusively about deepfakes. It’s about editing someone else’s images. Making suggestive deepfakes is mentioned in the article as an example, but it’s not mentioned in the title or summary here.

That said, my points stand. Don’t post shit online if you don’t want it to be edited.

With the ease of photo manipulation today, society has no choice but to adapt to simple, nonconsensual edits. It can’t be stopped. There are hundreds of apps and programs that do this now, even Adobe’s famous suite.

I know you hate Elon; I can hear it in your tone. But this isn’t an Elon thing. Look around.