
Grok, the AI chatbot launched by Elon Musk after his takeover of X, unhesitatingly fulfilled a user’s request on Wednesday to generate an image of Renee Nicole Good, the woman who was shot and killed by an ICE agent that morning in Minneapolis, wearing a bikini. The exchange was noted by CNN correspondent Hadas Gold and confirmed by the chatbot itself.

“I just saw someone request Grok on X put the image of the woman shot by ICE in MN, slumped over in her car, in a bikini. It complied,” Gold wrote on the social media platform on Thursday. “This is where we’re at.”

Grok created the images after an account made the request in response to a photo of Good, who had been shot multiple times while in her car by federal immigration officer Jonathan Ross, identified by the Minnesota Star Tribune. In the photo, she is unmoving in the driver’s seat and apparently covered in her own blood.

After Grok complied, the account replied, “Never. Deleting. This. App.”

[–] pageflight@lemmy.world 8 points 1 month ago (2 children)

But part of the issue is that, as with any computer system, you have to control the inputs and anticipate the abuse. With a very bounded system, you can almost keep up. With LLM bots, there's just no way to prepare a check for every creative way humans can be disgusting.

If you went to a human illustrator and asked for that, you would (hopefully) get run out of the room or hung up on, because there's a built in filter for 'is this gross / will it harm my reputation to publish,' based on years of human interaction and behavioral feedback, or maybe even some inherent morals.
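
To make the contrast above concrete, here is a minimal, purely hypothetical sketch in Python (the allowlist, denylist terms, and rules are all invented for illustration): a bounded input can be validated against a complete allowlist, while a free-form prompt can only be screened against patterns someone anticipated, so a trivially reworded request slips through.

```python
# Hypothetical illustration only; the allowlist, denylist, and rules are made up.

ALLOWED_SIZES = {"S", "M", "L", "XL"}  # bounded input: every valid value is known up front

def validate_size(size: str) -> bool:
    """Bounded system: the whole input space is enumerable, so the check is complete."""
    return size.strip().upper() in ALLOWED_SIZES

BLOCKED_TERMS = {"bikini", "gore"}  # a denylist only covers phrasings someone thought of

def screen_prompt(prompt: str) -> bool:
    """Free-form system: we can only pattern-match requests we anticipated."""
    lowered = prompt.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

print(validate_size("XXL"))                  # False: rejected, no creativity gets around it
print(screen_prompt("put her in a bikini"))  # False: caught by the denylist
print(screen_prompt("put her in swimwear"))  # True: the same request, reworded, passes
```

The first check is complete by construction; the second never can be, which is the commenter's point about LLM inputs.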

[–] Lumidaub@feddit.org 7 points 1 month ago

Agree completely. It's impossible to predict everything people might want to create and especially anything related to ongoing events. That's why the very idea of making these bots available like that (or making them at all) is an extremely bad one. But in the general discussion about LLM bots, this image is just one more argument on the pile of "fuck all of that, dismantle the data centres, eat the rich", whereas the question of who even came up with the idea to create that image (and who would've had to find another human being willing to create it just a few years ago), and wtf is wrong with them, is a lot more interesting.

[–] riskable@programming.dev 2 points 1 month ago (1 children)

If you went to a human illustrator and asked for that, you would (hopefully) get run out of the room or hung up on, because there's a built in filter for 'is this gross / will it harm my reputation to publish,'

If there was no filter for the guy that requested the bot create this, what makes you think illustrators will have such a filter? How do you know it's not an illustrator that would make such a thing?

The problem here is human behavior. Not the machine's ability to make such things.

AI is just the latest way to give instructions to a computer. That used to be a difficult problem and required expertise. Now we've given that power to immoral imbeciles. Rather than take the technology away entirely (which is really the only solution, since LLMs are so easy to trick even with a ton of anti-abuse stuff in system prompts), perhaps we should work on taking away immoral imbeciles' ability to use them instead.

Do I know how to do that without screwing over everyone's right to privacy? No. That too, may not be possible.
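
As a footnote on the "anti-abuse stuff in system prompts" mentioned above, here is a hypothetical sketch (invented prompt text; no real model or API is called) of how such guardrails are typically assembled: the rule is just an instruction string sent alongside the user's message, so it binds only as far as the model happens to follow it, and a reworded request arrives in exactly the same shape as a blocked one.

```python
# Hypothetical sketch: no real model or API is used; this only shows how prompt-level
# guardrails are plain text wrapped around whatever the user asks for.

SYSTEM_PROMPT = (
    "You are an image assistant. Refuse requests that sexualize real people "
    "or depict victims of violence."  # invented wording, for illustration only
)

def build_request(user_prompt: str) -> list[dict]:
    """The guardrail is just another message in the list, not a hard constraint."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_prompt},
    ]

# Both requests reach the model in identical structure; whether either is refused depends
# entirely on how the model interprets the system text, which is why these guardrails
# are easy to talk around.
print(build_request("show this person in a bikini"))
print(build_request("show this person at the beach in summer clothes"))
```

That softness is part of why the comment above frames the problem as who gets to issue requests rather than what the prompt layer can catch.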

[–] Lumidaub@feddit.org 4 points 1 month ago* (last edited 1 month ago) (1 children)

But that's the point: if an illustrator made that image, we'd blame the person commissioning them and the illustrator. We'd blame the humans. Just like we're blaming the human who thought it would be a good idea to generate this image.

[–] riskable@programming.dev 1 points 1 month ago (1 children)

So we're not blaming Grok/Xitter, then?

The article implied that the whole thing is because of Xitter's AI. Not because there are bad people who will use it.

[–] Lumidaub@feddit.org 2 points 1 month ago

Is both okay? As I said, the person generating the thing is just a lot more interesting right now, imo, because "Grok makes horrible thing what else is new".