Never. Even a positive thing from Reddit should never be considered, because it's fucking Reddit!
The only time I saw one of these on Reddit was when some asshole sent me one after a heated thread.
I got them fairly regularly before I caught my ban, and I never even argue. I say my piece and gtfo; I don't respond to people who respond to my comments… it serves no real purpose.
In my experience, this feature was abused by malicious parties.
Pretty often. I remember when I first came out exploring my gender identity and got active on the Trans subs, I got hit by at least a couple. It felt really shitty, and judging from the complaints I saw, it wasn't an uncommon problem.
I was critical of the rhetoric of a volunteer sniper in the Ukrainian war. His bozos flagged me as "having a death wish". It's one of the rare moments I've felt really bad/angry/scared on the internet.
It was so shitty, abusing a system just because your friend/hero said some Nazi shit? What the fuck?
So people can send it to others to harass them? It doesn't work on Reddit, so why implement it here? Talking about suicide could actually increase the likelihood of it happening, so beyond the fact that it will be used to harass people, it might make things worse.
The best help you can give someone in distress is hearing them, whilst you redirect them to a place that can help with empathy and compassion.
Any form of automated message comes across as the exact opposite of empathy and compassion.
In addition, speaking as the admin of a trans and queer community, I don't have any special tools or abilities to help people. Sending the report to me doesn't let me help them, because they're almost certainly not in my country, and I don't have any special access that enables me to contact them or reach out to them. The tool I do have is the instance itself that we host, that allows people to connect with their community and their peers, that allows them to struggle, and that shuts down anyone who would try and add to the hurt of someone on the edge.
Which is to say, I don't think a reddit-style feature has a place here. It will let people think they're helping without actually doing so, as well as providing a new vector for abuse (though that would be less of an issue than on reddit). In theory, an automated list of resources that could be called on might be useful, but again, if someone is struggling they need to feel heard, and automated replies can come across as dispassionate and uncaring.
ime as a subreddit mod, that feature was nearly exclusively used for harassment, usually transphobic harassment. In the one or two cases where there was a report for someone with suicidal or self-harm ideation, there was still zilch I could have done; I would just approve the post so the user could get support and speak to others (the subreddit was a support group for a sensitive subject, so it wouldn't be out of place for a post to say that the stress of certain things was making them suicidal).
I'm inclined to believe not a single actually suicidal person received one of these messages.
You can't automate concern for fellow humans.
The one on reddit is used almost exclusively for harassment. Don't be more like reddit.
The existing reporting framework already works for this. Report those so that they can be removed ASAP.
Mods/admins should not be expected to be mental health professionals, and internet volunteers shouldn't have to shoulder that burden.
No way. If anything, that kind of thing just suppresses people from expressing themselves honestly in ways that might help them.
Real human connection and compassion might make a difference. A cookie-cutter template message is (genuinely) a "we don't want you to talk about this here" response.
We aren't beholden to advertisers; we don't need this.
From a software perspective, I don't know that this is something the platform itself needs to support.
But it seems like an opportunity for a separate bot, triggered either by keyword or sentiment analysis and/or by other users reporting to the bot; a rough sketch of what that might look like is below.
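As a very rough illustration only (this is not a real Lemmy API; the pattern list, reply text, and `maybe_flag` helper are all hypothetical), here is what the keyword-matching half of such a bot might look like. A sentiment-analysis variant would swap the regex pass for a classifier, and, per the concerns raised above, anything flagged should go to a human before any message is ever sent.

```python
import re

# Hypothetical patterns; a real bot would need a carefully curated,
# community-reviewed phrase list, not three regexes.
CRISIS_PATTERNS = [
    r"\bkill myself\b",
    r"\bend it all\b",
    r"\bsuicid\w*\b",
]

# Placeholder reply; a real deployment would link the community's
# own vetted resource list.
RESOURCE_REPLY = (
    "If you're struggling, real people want to hear you. "
    "See the community's pinned list of crisis resources."
)

def maybe_flag(comment_text: str) -> str | None:
    """Return a suggested reply if the comment matches a crisis pattern,
    otherwise None. Keyword matching is crude: it misses real distress
    and misfires on quotes, jokes, and discussion *about* the feature,
    which is why a human should review every hit before anything is sent.
    """
    lowered = comment_text.lower()
    for pattern in CRISIS_PATTERNS:
        if re.search(pattern, lowered):
            return RESOURCE_REPLY
    return None

# False positives are easy to produce: this matches even though the
# comment is about the feature itself, not a person in distress.
assert maybe_flag("reddit's suicide report was abused") is not None
```

Even this tiny sketch shows why the thread's objections matter: the false-positive case at the bottom is exactly the kind of comment that got people harassed on Reddit, so the human-review step is doing all the real work.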