this post was submitted on 20 Dec 2023
77 points (89.7% liked)

Technology

top 8 comments
[–] Supermariofan67@programming.dev 17 points 11 months ago* (last edited 11 months ago) (1 children)

Given that Facebook and Meta's other platforms are among the largest distributors of that material, if they scraped Facebook for data this is not exactly a surprise, unfortunately...

[–] Yuki@kutsuya.dev 2 points 11 months ago

That's sad :c

[–] LainOfTheWired@lemy.lol 4 points 11 months ago (2 children)

How are you supposed to train the damn thing to detect something without using that thing, though?

[–] SapphireVelvet84839@lemmynsfw.com 6 points 11 months ago

> How are you supposed to train the damn thing to detect something without using that thing, though?

There are various organizations which have clearances to handle child abuse images, where the data is handled like the plutonium it is and everybody is vetted. I'm sure they've already experimented with developing a bot to detect such images.

They even make their traditional hash databases available to server admins who want to check their hosted images against them.

The issue, as the report states, is that nobody will willingly link said bot/database to the training data, because they either don't want a copyright fight or don't want to acknowledge the problem.
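The hash-database workflow described above can be sketched roughly like this. Real systems use perceptual hashes (e.g. PhotoDNA) that survive resizing and re-encoding; the cryptographic hash below is only a simplified stand-in, and all names and byte strings are hypothetical:

```python
import hashlib

def file_hash(data: bytes) -> str:
    # Stand-in for a perceptual hash: real deployments use robust image
    # fingerprints, not raw SHA-256 of the file bytes.
    return hashlib.sha256(data).hexdigest()

def flag_matches(images: dict[str, bytes], known_hashes: set[str]) -> list[str]:
    # Return the names of images whose hash appears in the known-bad list,
    # the same check a server admin would run against the hash database.
    return [name for name, data in images.items()
            if file_hash(data) in known_hashes]

# Hypothetical usage: one flagged file, one benign file.
known = {file_hash(b"known-bad-bytes")}
images = {"a.png": b"known-bad-bytes", "b.png": b"benign"}
print(flag_matches(images, known))  # ['a.png']
```

The point of distributing hashes rather than images is exactly the "plutonium" handling mentioned above: admins can check for matches without anyone shipping the material itself.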

[–] GigglyBobble@kbin.social 4 points 11 months ago

It's about generators which certainly should not be trained with such material.

[–] autotldr@lemmings.world 3 points 11 months ago

This is the best summary I could come up with:


More than 1,000 known instances of child sexual abuse material (CSAM) were found in a large open dataset—known as LAION-5B—that was used to train popular text-to-image generators such as Stable Diffusion, Stanford Internet Observatory (SIO) researcher David Thiel revealed on Wednesday.

His goal was to find out what role CSAM may play in the training process of AI models powering the image generators spouting this illicit content.

"Our new investigation reveals that these models are trained directly on CSAM present in a public dataset of billions of images, known as LAION-5B," Thiel's report said.

But because users were dissatisfied with these later, more filtered versions, Stable Diffusion 1.5 remains "the most popular model for generating explicit imagery," Thiel's report said.

While a YCombinator thread linking to a blog—titled "Why we chose not to release Stable Diffusion 1.5 as quickly"—from Stability AI's former chief information officer, Daniel Jeffries, may have provided some clarity on this, it has since been deleted.

Thiel's report warned that both figures are "inherently a significant undercount" due to researchers' limited ability to detect and flag all the CSAM in the datasets.


The original article contains 837 words, the summary contains 182 words. Saved 78%. I'm a bot and I'm open source!

[–] BetaDoggo_@lemmy.world 1 points 11 months ago (1 children)

Between 0.00002% and 0.00006%
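Those percentages are consistent with counts on the order of roughly 1,000 confirmed and a few thousand suspected entries out of LAION-5B's ~5 billion image-text pairs. The exact figures below (1,008 and 3,226) are assumptions for illustration, not quoted from this thread:

```python
DATASET_SIZE = 5_000_000_000  # LAION-5B: ~5 billion image-text pairs

confirmed = 1_008  # assumed count of confirmed entries
suspected = 3_226  # assumed count of suspected entries

low = confirmed / DATASET_SIZE * 100   # as a percentage
high = suspected / DATASET_SIZE * 100

print(f"{low:.5f}% to {high:.5f}%")  # 0.00002% to 0.00006%
```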

[–] snooggums@kbin.social 8 points 11 months ago

Anything > 0 is too many.