this post was submitted on 13 Oct 2024
Politics

Foreign influence campaigns, or information operations, have been widespread in the run-up to the 2024 U.S. presidential election. Influence campaigns are large-scale efforts to shift public opinion, push false narratives or change behaviors among a target population. Russia, China, Iran, Israel and other nations have run these campaigns by exploiting social bots, influencers, media companies and generative AI.

[...]

[Influence campaigns often involve] what researchers call coordinated inauthentic behavior. [They] identify clusters of social media accounts that post in a synchronized fashion, amplify the same groups of users, share identical sets of links, images or hashtags, or perform suspiciously similar sequences of actions.
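The article doesn't describe the researchers' actual detection pipeline, but the "identical sets of links" signal above can be illustrated with a minimal sketch: compare each pair of accounts by the Jaccard similarity of the URLs they shared, and flag pairs whose overlap is implausibly high. The account names, data shape, and threshold here are hypothetical.

```python
from itertools import combinations

def jaccard(a, b):
    """Jaccard similarity of two sets: |A ∩ B| / |A ∪ B|."""
    return len(a & b) / len(a | b) if a | b else 0.0

def suspicious_pairs(shared_links, threshold=0.8):
    """Flag account pairs whose sets of shared links overlap heavily.

    shared_links: dict mapping account handle -> set of URLs it shared.
    Returns pairs whose Jaccard similarity meets the threshold.
    """
    return [
        (u, v)
        for u, v in combinations(sorted(shared_links), 2)
        if jaccard(shared_links[u], shared_links[v]) >= threshold
    ]

# Hypothetical data: two accounts sharing an identical link set.
accounts = {
    "acct_a": {"example.com/1", "example.com/2", "example.com/3"},
    "acct_b": {"example.com/1", "example.com/2", "example.com/3"},
    "acct_c": {"news.org/x"},
}
print(suspicious_pairs(accounts))  # [('acct_a', 'acct_b')]
```

Real systems combine many such signals (posting times, retweet targets, hashtag sequences) rather than relying on link overlap alone, since ordinary users sharing one viral story would otherwise trigger false positives.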

[...]

[Researchers] have uncovered many examples of coordinated inauthentic behavior. For example, we found accounts that flood the network with tens or hundreds of thousands of posts in a single day. The same campaign can post a message with one account and then have other accounts that its organizers also control “like” and “unlike” it hundreds of times in a short time span. Once the campaign achieves its objective, all these messages can be deleted to evade detection. Using these tricks, foreign governments and their agents can manipulate the social media algorithms that determine what is trending and what is engaging, and thereby what users see in their feeds.
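The “like”-then-“unlike” trick described above leaves a distinctive trace: the same account repeatedly reversing its own reaction to the same post. A minimal sketch of how one might count such reversals from an event log (the event format and account names are hypothetical, not from the article):

```python
from collections import Counter

def toggle_score(events):
    """Count like -> unlike reversals per (account, post) pair.

    events: list of (account, post_id, action) tuples in time order,
    where action is "like" or "unlike". Many reversals on the same
    post by the same account suggest engagement-gaming rather than
    genuine interaction.
    """
    reversals = Counter()
    last = {}
    for account, post, action in events:
        key = (account, post)
        if last.get(key) == "like" and action == "unlike":
            reversals[key] += 1
        last[key] = action
    return reversals

# Hypothetical log: one bot cycles its like twice, one human likes once.
events = [
    ("bot1", "p9", "like"), ("bot1", "p9", "unlike"),
    ("bot1", "p9", "like"), ("bot1", "p9", "unlike"),
    ("user", "p9", "like"),
]
print(toggle_score(events))  # Counter({('bot1', 'p9'): 2})
```

A platform would also weigh how quickly the reversals happen, since a genuine user might occasionally change their mind, but not hundreds of times in minutes.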

[...]

One increasingly common technique is creating and managing armies of fake accounts with generative artificial intelligence. [Researchers] estimate that at least 10,000 such accounts were active daily on X, and that was before X CEO Elon Musk dramatically cut the platform’s trust and safety teams. We also identified a network of 1,140 bots that used ChatGPT to generate humanlike content to promote fake news websites and cryptocurrency scams.

In addition to posting machine-generated content, harmful comments and stolen images, these bots engaged with each other and with humans through replies and retweets.

[...]

These insights suggest that social media platforms should engage in more – not less – content moderation to identify and hinder manipulation campaigns and thereby increase their users’ resilience to the campaigns.

The platforms can do this by making it more difficult for malicious agents to create fake accounts and to post automatically. They can also challenge accounts that post at very high rates to prove that they are human. They can add friction in combination with educational efforts, such as nudging users to reshare accurate information. And they can educate users about their vulnerability to deceptive AI-generated content.
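The article doesn't specify how a platform would pick which accounts to challenge; one simple version of the "post at very high rates" criterion is a daily-volume threshold, sketched below. The limit, account names, and data shape are illustrative assumptions, not anything the article prescribes.

```python
def accounts_to_challenge(post_counts, daily_limit=1000):
    """Pick accounts whose daily post volume exceeds a plausibility limit.

    post_counts: dict mapping account handle -> posts in the last 24 hours.
    Accounts above the limit would be asked to pass a human check
    (a CAPTCHA or similar) before posting again -- the "friction"
    the article recommends.
    """
    return sorted(a for a, n in post_counts.items() if n > daily_limit)

# Hypothetical daily counts; acct_x matches the article's
# "tens of thousands of posts in a single day" pattern.
counts = {"acct_x": 45_000, "acct_y": 12, "acct_z": 1_500}
print(accounts_to_challenge(counts))  # ['acct_x', 'acct_z']
```

In practice the threshold would be tuned per platform, and a fixed cutoff is only a first filter: bot operators can post just under any published limit, which is why platforms layer it with the coordination signals discussed earlier.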

[...]

These types of content moderation would protect, rather than censor, free speech in the modern public squares. The right of free speech is not a right of exposure, and since people’s attention is limited, influence operations can be, in effect, a form of censorship by making authentic voices and opinions less visible.

[–] papertowels@lemmy.one 4 points 1 month ago* (last edited 1 month ago)

Sorry, I don't quite understand your point. Can you clarify?