This post was submitted on 31 May 2024
35 points (92.7% liked)

Cybersecurity

5683 readers

c/cybersecurity is a community centered on the cybersecurity and information security profession. You can come here to discuss news, post something interesting, or just chat with others.

THE RULES

Instance Rules

Community Rules

If you ask someone to hack your "friends'" socials, you're just going to get banned, so don't do that.

Learn about hacking

Hack the Box

TryHackMe

picoCTF (Pico Capture The Flag)

Other security-related communities:

!databreaches@lemmy.zip
!netsec@lemmy.world
!cybersecurity@lemmy.capebreton.social
!securitynews@infosec.pub
!netsec@links.hackliberty.org
!cybersecurity@infosec.pub
!pulse_of_truth@infosec.pub

Notable mention to !cybersecuritymemes@lemmy.world

founded 1 year ago
top 2 comments
Telorand@reddthat.com 5 points 5 months ago

The networks also used AI to enhance their own productivity, applying it to tasks such as debugging code or doing research into public social media activity, it said.

So it's not all bad news. Obviously, the people who decided to use AI in this way have no idea what its limitations are.

autotldr@lemmings.world 2 points 5 months ago

This is the best summary I could come up with:


OpenAI has revealed that operations linked to Russia, China, Iran and Israel have been using its artificial intelligence tools to create and spread disinformation, as the technology becomes a powerful weapon in information warfare during an election-heavy year.

The networks also used AI to enhance their own productivity, applying it to tasks such as debugging code or doing research into public social media activity, it said.

Pressure is mounting on fast-growing AI companies such as OpenAI, as rapid advances in their technology make it cheaper and easier than ever for disinformation perpetrators to create realistic deepfakes, manipulate media, and then spread that content in an automated fashion.

Microsoft-backed OpenAI said it was committed to uncovering such disinformation campaigns and was building its own AI-powered tools to make detection and analysis “more effective.” It added that its safety systems already made it difficult for the perpetrators to operate, with its models refusing in multiple instances to generate the text or images asked for.

The disrupted campaigns included a Russian operation, Doppelganger, first discovered in 2022, which typically attempts to undermine support for Ukraine, and a Chinese network known as Spamouflage, which pushes Beijing’s interests abroad.

It also said it had thwarted a pro-Israel disinformation-for-hire effort, allegedly run by a Tel Aviv-based political campaign management business called STOIC, which used OpenAI’s models to generate articles and comments on X and across Meta’s Instagram and Facebook.


The original article contains 606 words; the summary contains 232 words. Saved 62%. I'm a bot and I'm open source!