News
Welcome to the News community!
Rules:
1. Be civil
Attack the argument, not the person. No racism/sexism/bigotry. Good-faith argumentation only; accusing another user of being a bot or paid actor is not good faith. Trolling is uncivil and is grounds for removal and/or a community ban. Do not respond to rule-breaking content; report it and move on.
2. All posts should contain a source (url) that is as reliable and unbiased as possible and must contain only one link.
Obvious right- or left-wing sources will be removed at the mods' discretion. We have an actively updated blocklist, which you can see here: https://lemmy.world/post/2246130. If you feel a website is missing, contact the mods. Supporting links can be added in comments or posted separately, but not in the post body.
3. No bots, spam or self-promotion.
Only approved bots, which follow the guidelines for bots set by the instance, are allowed.
4. Post titles should match the title of the source article.
Posts whose titles don't match the source won't be removed automatically, but the autoMod will notify you; if your title misrepresents the original article, the post will be deleted. If the site changed its headline after you posted, the bot may still contact you. Just ignore it; we won't delete your post.
5. Only recent news is allowed.
Posts must be news from the most recent 30 days.
6. All posts must be news articles.
No opinion pieces, listicles, editorials, or celebrity gossip. All posts will be judged on a case-by-case basis.
7. No duplicate posts.
If a source you used was already posted by someone else, the autoMod will leave a message. Please remove your post if the autoMod is correct. If the matching post is very old, see rule 5.
8. Misinformation is prohibited.
Misinformation and propaganda are strictly prohibited. Any comment or post containing or linking to misinformation will be removed. If you feel your post was removed in error, provide credible sources.
9. No link shorteners.
The autoMod will contact you if a link shortener is detected; please delete your post if it is correct.
10. Don't copy the entire article into your post body.
For copyright reasons, you are not allowed to copy an entire article into your post body. This is an instance-wide rule that is strictly enforced in this community.
Are there any social scientists on lemmy? Have we actually studied the effects of labeling misinformation as opposed to removing it? Does labeling misinformation actually stop it from spreading and being believed, or does it reinforce conspiratorial thinking? This is not a rhetorical question - I genuinely don't know.
I'm not a social scientist, but it's a mixed bag. Here are the top results from Google Scholar:
There is growing concern over the spread of misinformation online. One widely adopted intervention by platforms for addressing falsehoods is applying “warning labels” to posts deemed inaccurate by fact-checkers. Despite a rich literature on correcting misinformation after exposure, much less work has examined the effectiveness of warning labels presented concurrent with exposure. Promisingly, existing research suggests that warning labels effectively reduce belief and spread of misinformation. The size of these beneficial effects depends on how the labels are implemented and the characteristics of the content being labeled. Despite some individual differences, recent evidence indicates that warning labels are generally effective across party lines and other demographic characteristics.
Social media platforms face rampant misinformation spread through multimedia posts shared in highly-personalized contexts [10, 11]. Foundational qualitative research is necessary to ensure platforms’ misinformation interventions are aligned with users’ needs and understanding of information in their own contexts, across platforms. In two studies, we combined in-depth interviews (n=15) with diary and co-design methods (n=23) to investigate how a mix of Americans exposed to misinformation during COVID-19 understand their information environments, including encounters with interventions such as Facebook fact-checking labels. Analysis reveals a deep division in user attitudes about platform labeling interventions, perceived by 7/15 interview participants as biased and punitive. As a result, we argue for the need to better research the unintended consequences of labeling interventions on factual beliefs and attitudes.
These findings also complicate discussion around "the backfire effect", the idea that when a claim aligns with someone’s ideological beliefs, telling them that it’s wrong will actually make them believe it even more strongly [35]. Though this phenomenon is thought to be rare, our findings suggest that emotionally-charged, defensive backfire reactions may be common in practice for American social media users encountering corrections on social media posts about news topics. While our sample size was too small to definitively measure whether the labels actually strengthened beliefs in inaccurate claims, at the very least, reactions described above showed doubt and distrust toward the credibility of labels--often with reason, as in the case of "false positive" automated application of labels in inappropriate contexts.
In the case of state-controlled media outlets on YouTube, Facebook, and Twitter this has taken the form of labeling their connection to a state. We show that these labels have the ability to mitigate the effects of viewing election misinformation from the Russian media channel RT. However, this is only the case when the platform prominently places the label so as not to be missed by users.
Using appropriate statistical tools, we find that, overall, label placement did not change the propensity of users to share and engage with labeled content, but the falsity of content did. However, we show that the presence of textual overlap in labels did reduce user interactions, while stronger rebuttals reduced the toxicity in comments. We also find that users were more likely to discuss their positions on the underlying tweets in replies when the labels contained rebuttals. When false content was labeled, results show that liberals engaged more than conservatives. Labels also increased the engagement of more passive Twitter users. This case study has direct implications for the design of effective soft moderation and related policies.
One thing we know for certain is that handing the government the ability to mandate control over our information flow is one of the primary tools by which fascism took hold in mid-20th century Europe.