this post was submitted on 03 Oct 2025
25 points (100.0% liked)

PieFed Meta

I was thinking about moderation in PieFed after reading @rimu@piefed.social mention that he doesn't want NSFW content because it creates more moderation work. But if done right, moderation shouldn't fall heavily on admins at all.

One of Reddit's biggest flaws is the imbalance between users and moderators: it leads to endless reliance on automods and AI filters, and to the usual complaints about power-mods. Most federated platforms just copy that model instead of adopting proven alternatives like Discourse's trust-level system.

On Discourse, moderation power is distributed across active, trusted users. You don't see the same "users vs. mods" tension, and the model scales much better without requiring admins to constantly police content. That sort of system feels like a much healthier direction for PieFed.

Implementing this could involve establishing trust levels based on user engagement within each community, with users earning trust by spending time reading discussions. Trust could be community-specific, letting users build standing in each community separately, or instance-wide, granting broader recognition based on overall activity across the instance. If not executed carefully, though, this could lead to overmoderation similar to Stack Overflow, where genuine contributions get stifled, or encourage Reddit-style karma farming, where users game the system with bots that repost popular content. A rough sketch of the idea is below.
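To make the idea concrete, here is a minimal sketch in Python of how per-community trust levels might be computed. The threshold values and the names (`CommunityActivity`, `TRUST_LEVELS`) are hypothetical, loosely inspired by Discourse's defaults, not anything PieFed has implemented:

```python
from dataclasses import dataclass

# Hypothetical thresholds, loosely inspired by Discourse's defaults;
# PieFed would need to tune these empirically.
# Each entry: (level, min days visited, min posts read, min minutes reading)
TRUST_LEVELS = [
    (1, 5, 30, 60),
    (2, 15, 100, 300),
    (3, 50, 500, 1200),
]

@dataclass
class CommunityActivity:
    """Per-user engagement stats tracked within a single community."""
    days_visited: int = 0
    posts_read: int = 0
    minutes_reading: int = 0

def trust_level(activity: CommunityActivity) -> int:
    """Return the highest trust level whose thresholds are all met."""
    level = 0
    for lvl, days, posts, minutes in TRUST_LEVELS:
        if (activity.days_visited >= days
                and activity.posts_read >= posts
                and activity.minutes_reading >= minutes):
            level = lvl
    return level

# Example: a moderately active user in one community.
user = CommunityActivity(days_visited=20, posts_read=150, minutes_reading=400)
print(trust_level(user))  # -> 2
```

An instance-wide variant would just aggregate the same counters across communities. Basing thresholds on reading rather than posting at least raises the cost of karma farming, though sophisticated bots could still fake reading time.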

Worth checking out this related discussion:
Rethinking Moderation: A Call for Trust Level Systems in the Fediverse.

wiki_me@lemmy.ml 1 points 4 days ago

With all the reports of countries manipulating online content, that's like leaving your house in a crime-ridden neighbourhood with the door wide open. As far as I know, there is no way to prevent bots when dealing with a highly sophisticated actor like China (which Meta reported has manipulated online content).

One thing that could help is keeping stats on the reports a user has made: if, say, 90% of their reports are upheld and they have made more than, say, 50 reports, they could become a mod. You could also have a chart scoring the best reporters. The numbers can be tweaked, and some statistical analysis could find the optimal values.
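For illustration, a minimal sketch of that scoring logic in Python. The function names and the exact ranking rule are assumptions; only the 50-report and 90% thresholds come from the comment above:

```python
# Promotion rule: a user becomes mod-eligible once their reports are both
# numerous and accurate. The 50-report / 90% values come straight from the
# comment above; they are the "numbers that can be tweaked".
MIN_REPORTS = 50
MIN_ACCURACY = 0.90

def report_accuracy(upheld: int, total: int) -> float:
    """Fraction of a user's reports that moderators upheld."""
    return upheld / total if total else 0.0

def eligible_for_mod(upheld: int, total: int) -> bool:
    return total >= MIN_REPORTS and report_accuracy(upheld, total) >= MIN_ACCURACY

def top_reporters(stats: dict[str, tuple[int, int]], n: int = 10) -> list[str]:
    """Leaderboard: stats maps username -> (upheld, total);
    rank by accuracy, breaking ties by report volume."""
    return sorted(
        stats,
        key=lambda u: (report_accuracy(*stats[u]), stats[u][1]),
        reverse=True,
    )[:n]

stats = {"alice": (48, 50), "bob": (30, 60)}
print(eligible_for_mod(*stats["alice"]))  # True: 50 reports, 96% upheld
print(top_reporters(stats, n=1))          # ['alice']
```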