this post was submitted on 25 Jul 2023

Fediverse


A community to talk about the Fediverse and all its related services using ActivityPub (Mastodon, Lemmy, KBin, etc.).


Not the best news in this report. We need to find ways to do more.

top 39 comments
[–] shrugal@lemm.ee 41 points 1 year ago* (last edited 1 year ago) (1 children)

Why do they just mention absolute numbers, instead of comparing them to similar platforms? All they said was that there is CSAM on the Fediverse, but that's also true for centralized services and the internet as a whole. The important question is whether there is more or less CSAM on the Fediverse, no?

This makes it look very unscientific to me. The Fediverse might have a CSAM problem, but you wouldn't know it from this study.

[–] WhoRoger@lemmy.world 10 points 1 year ago* (last edited 1 year ago) (1 children)

Fediverse also makes it potentially easier to scan for this stuff. You can just connect a new server to the network and if the material is on federated servers, then you can find it (probably). While if it's some private forum or even the dark web, I assume it's a lot more difficult.

The other thing is, most regular servers defederate from suspicious stuff already. Like pretty much nobody federates with that one shota instance, and they only serve drawn stuff (AFAIK). So I don't know if you can even say servers like that are part of the Fediverse in the first place.
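To make the "just connect a server and scan" point above concrete, here is a rough sketch of what such a scanner could look like, assuming a standard Mastodon-compatible public timeline endpoint. The instance URL and the hash set are hypothetical placeholders; real scanners match against perceptual-hash databases such as PhotoDNA rather than plain SHA-256.

```python
# Sketch only: pull an instance's public federated timeline and hash attachments.
# INSTANCE and KNOWN_BAD_SHA256 are hypothetical placeholders; real scanners use
# perceptual-hash databases (e.g. PhotoDNA), not plain SHA-256.
import hashlib
import requests

INSTANCE = "https://example-instance.social"
KNOWN_BAD_SHA256 = set()  # would be loaded from a hash list

def scan_public_timeline(limit=40):
    # Mastodon's public timeline endpoint; remote=true restricts it to
    # statuses federated in from other servers.
    resp = requests.get(
        f"{INSTANCE}/api/v1/timelines/public",
        params={"remote": "true", "limit": limit},
        timeout=30,
    )
    resp.raise_for_status()
    for status in resp.json():
        for media in status.get("media_attachments", []):
            if media.get("type") != "image":
                continue
            image = requests.get(media["url"], timeout=30)
            digest = hashlib.sha256(image.content).hexdigest()
            if digest in KNOWN_BAD_SHA256:
                print("match:", status["url"])

if __name__ == "__main__":
    scan_public_timeline()
```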

[–] shrugal@lemm.ee 3 points 1 year ago

That's what I thought as well. If the authors of this "study" were able to simply scan for it on the Fediverse, then what's stopping law enforcement units from doing the same? They can literally get a message every time someone posts something on a suspicious instance.

[–] Rooki@lemmy.world 25 points 1 year ago* (last edited 1 year ago) (2 children)

Another "SCREAM" for "BAN FEDIVERSE ITS DANGEROUS!!!!!!!". And then of course its tagged "CSAM" lol. You want a (small) website removed just accuse them of csam. And boom hoster and admins raided.

[–] Scew@lemmy.world 24 points 1 year ago (1 children)

Better ban school too. I hear that's how kids find drug dealers.

[–] DmMacniel@feddit.de 6 points 1 year ago* (last edited 1 year ago)

And don't forget to ban Sugar. Because Hitler loved Sugar!

[–] FlyingSquid@lemmy.world 8 points 1 year ago (1 children)

Not at all. Sensible suggestions.

[–] Rottcodd@kbin.social 16 points 1 year ago

This isn't science - it's propaganda.

[–] AnonTwo@kbin.social 8 points 1 year ago (1 children)

Is this the same report that was brought up where it was found out Twitter has the exact same issue?

Or that Reddit has had this issue in spades.

Frankly, listing the Fediverse's ability to cut off problem servers as a drawback rather than an advantage is, in my opinion, wrong.
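For a sense of how simple that cutoff actually is, here is a minimal sketch of defederating from a domain through Mastodon's admin domain-blocks endpoint. The instance URL and token are hypothetical placeholders, and the exact fields accepted vary by server version.

```python
# Sketch only: suspend federation with a problem domain via Mastodon's admin API.
# INSTANCE and ADMIN_TOKEN are hypothetical placeholders; the token needs an
# admin scope (admin:write:domain_blocks on current Mastodon versions).
import requests

INSTANCE = "https://example-instance.social"
ADMIN_TOKEN = "REPLACE_ME"

resp = requests.post(
    f"{INSTANCE}/api/v1/admin/domain_blocks",
    headers={"Authorization": f"Bearer {ADMIN_TOKEN}"},
    data={
        "domain": "problem-instance.example",
        "severity": "suspend",  # fully cut off federation with that domain
        "public_comment": "defederated for illegal content",
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```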

[–] blazera@kbin.social 5 points 1 year ago (2 children)

Basically we don't know what they found, because they just looked up hashtags and then didn't look at the results, for ethics reasons. They don't even say which hashtags they looked through.

[–] Aesthesiaphilia@kbin.social 4 points 1 year ago (2 children)

We do know they only found, what, 112 actual images of CP? That's a very small number. I'd say that paints us in a pretty good light, relatively.

[–] dustyData@lemmy.world 5 points 1 year ago

112 images out of 325,000 images scanned over two days is about 0.03%, so we are doing pretty well. With more moderation tools we could keep driving that rate down even further.
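The quoted rate is easy to verify:

```python
# Quick check of the rate quoted above.
matches, scanned = 112, 325_000
print(f"{matches / scanned:.4%}")  # 0.0345%, i.e. roughly 0.03%
```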

[–] blazera@kbin.social 1 points 1 year ago (1 children)

It says 112 instances of known CSAM. But that's based on their methodology, right, and their methodology is not actually looking at the content; it's looking at hashtags and whether Google SafeSearch thinks it's explicit. Which I'm pretty sure doesn't differentiate what the subject of the explicitness is. It's just going to try to detect breasts or genitals, I imagine.

Though they do give a few damning examples of things like actual CP trading, but they also note that those have since been removed.
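On the SafeSearch point above: an automated check of that kind only returns likelihood scores for broad categories such as "adult" or "racy" and says nothing about the age of the person depicted. A minimal sketch using Google's Cloud Vision client library, assuming credentials are already configured:

```python
# Sketch: classifying an image with Google Cloud Vision's SafeSearch detection.
# The response only scores categories like "adult"/"racy"; it does not estimate
# the age of anyone depicted.
from google.cloud import vision

def safesearch_flags(path: str) -> dict:
    client = vision.ImageAnnotatorClient()
    with open(path, "rb") as f:
        image = vision.Image(content=f.read())
    annotation = client.safe_search_detection(image=image).safe_search_annotation
    return {
        "adult": vision.Likelihood(annotation.adult).name,
        "racy": vision.Likelihood(annotation.racy).name,
        "violence": vision.Likelihood(annotation.violence).name,
    }

if __name__ == "__main__":
    print(safesearch_flags("example.jpg"))  # e.g. {'adult': 'VERY_UNLIKELY', ...}
```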

[–] Rivalarrival@lemmy.today 3 points 1 year ago

How many of those 112 instances are honeypots controlled by the FBI or another law enforcement agency?

[–] bandario@lemmy.dbzer0.com -4 points 1 year ago (3 children)

There are communities on NSFW Lemmy where people intentionally present as children engaged in sexual abuse scenarios. They're adults and this is their kink that everyone is supposed to tolerate and pretend is ok.

See defederation drama over the last couple of days. What I'm saying is, the hashtags mean nothing.

[–] LexiconDexicon@lemmy.world 8 points 1 year ago

They’re adults

then what's the problem?

[–] blazera@kbin.social 4 points 1 year ago

No, that admin lied about the community

[–] hightrix@kbin.social 4 points 1 year ago (1 children)

There are communities on NSFW Lemmy where people intentionally present as children engaged in sexual abuse scenarios.

If you are referring to the community that was cited as the reason for defederation, this is completely false. The community in question is adorableporn, extremely similar to the subreddit of the same name. No one, in any manner, in either community, presents as a child. While yes, the women that post there tend to be on the shorter and thinner side, calling short, thin adults 'children' is not being honest.

To be clear, this community is about petite women. This community is NOT about women with a kink to present as a child.

[–] bandario@lemmy.dbzer0.com -1 points 1 year ago (1 children)

And what of all the other bait communities? Come on. It's not ok.

[–] hightrix@kbin.social 3 points 1 year ago

What other bait communities? We can't just accept "think of the children" as an excuse. That doesn't work.

Yes, no one wants actual CSAM to show up in their feed, we can all completely agree on that. But just because some middle-aged woman can't tell the difference between a 20-year-old and a 15-year-old doesn't make images of the 20-year-old CSAM.

[–] indigojasper@kbin.social 3 points 1 year ago

yeahhh the free internet is a bit like a wild west. maybe ain't for children. maybe gotta keep em behind more secure digital walls while they're growing before letting em loose. but still gotta teach em media literacy and safe internet practices and such.

but lbr this isn't gonna stop the more persistent kids

[–] moistclump@lemmy.world 2 points 1 year ago

Abstract:

The Fediverse, a decentralized social network with interconnected spaces that are each independently managed with unique rules and cultural norms, has seen a surge in popularity. Decentralization has many potential advantages for users seeking greater choice and control over their data and social preferences, but it also poses significant challenges for online trust and safety.

In this report, Stanford Internet Observatory researchers examine issues with combating child sexual exploitation on decentralized social media with new findings and recommendations to address the prevalence of child safety issues on the Fediverse.

[–] MyFairJulia@lemmy.world 1 points 1 year ago (3 children)

Why would someone downvote this post? We have a problem and it's in our best interest to fix that.

[–] Aesthesiaphilia@kbin.social 1 points 1 year ago

Because it's another "WON'T SOMEONE THINK OF THE CHILDREN" hysteria bait post.

They found 112 images of CP in the whole Fediverse. That's a very small number. We're doing pretty well.

[–] chaogomu@kbin.social 1 points 1 year ago (1 children)

The report (if you can still find a working link) said that the vast majority of material that they found was drawn and animated, and hosted on one Mastodon instance out of Japan, where that shit is still legal.

Every time that little bit of truth comes up, someone reposts the broken link to the study, screaming about how it's the entire Fediverse riddled with child porn.

[–] MyFairJulia@lemmy.world 0 points 1 year ago (1 children)

So basically we had a bad apple that was probably already defederated by everyone else.

[–] ZILtoid1991@kbin.social 0 points 1 year ago (1 children)

It's Pawoo, Pixiv's (formerly) own instance, which is infamous for this kind of content, and those are still "just drawings" (unless some artists are using illegal real-life references).

[–] dustyData@lemmy.world 0 points 1 year ago (1 children)

They're using Generative AI to create photo realistic renditions now, and causing everyone who finds out about it to have a moral crisis.

[–] ZILtoid1991@kbin.social 0 points 1 year ago (1 children)

Well, that's a very different and way more concerning thing...

[–] Derproid@lemm.ee 1 points 1 year ago (1 children)

... I mean ... idk ... If the argument is that the drawn version doesn't harm kids and gives pedos an outlet, is an AI-generated version any different?

[–] brain_pan@infosec.pub 1 points 1 year ago (1 children)

imo, the dicey part of the matter is "what amount of the AI's dataset is made up of actual images of children"

[–] Derproid@lemm.ee 1 points 1 year ago

Shit that is a good point.

[–] shrugal@lemm.ee 0 points 1 year ago* (last edited 1 year ago) (1 children)

The study doesn't compare their findings to any other platform, so we can't really tell if those numbers are good or bad. They just state the absolute numbers without really going into too much detail about their search process. So no, you can't draw the conclusion that the Fediverse has a CSAM problem, at least not from this study.

Of course that makes you wonder why they bothered to publish such a lackluster and alarmist study.

[–] bumblebrainbee@lemmy.ml -1 points 1 year ago* (last edited 1 year ago)

Pretty sure any quantity of CSAM that isn't zero is bad...
