this post was submitted on 23 Jun 2024
141 points (76.6% liked)

Technology

all 46 comments
[–] Emperor@feddit.uk 50 points 4 months ago (2 children)

They are developing their own extremist-infused AI models, and are already experimenting with novel ways to leverage the technology, including producing blueprints for 3D weapons and recipes for making bombs.

Given the way AI is prone to hallucinations, they should definitely have a go at building them. Might solve our problems for us.

[–] lets_get_off_lemmy@reddthat.com 32 points 4 months ago

Hahaha, as someone who works in AI research, good luck to them. The first is a very hard problem that won't just be prompt engineering with your OpenAI account (why not just use 3D blueprints for weapons that already exist?), and the second is certifiably stupid. There are plenty of ways to make bombs already that don't involve training a model that's an expert in chemistry. A bunch of amateur 8chan half-brains probably couldn't follow a Medium article, let alone do groundbreaking research.

But like you said, if they want to test the viability of those bombs, I say go for it! Make it in the garage!

[–] lvxferre@mander.xyz 37 points 4 months ago

Next on the news: "Hitler ate bread."

I'm being cheeky, but I genuinely don't think that "Nazis are using a tool that other people also use" is newsworthy.

Regarding the blue octopus mentioned at the end of the text: when I criticise the concept of the dogwhistle, it's this sort of shit that I'm talking about. I don't even like Thunberg; but unless there is context justifying the association of that octopus plushy with antisemitism, it's simply a bloody toy, dammit.

[–] pavnilschanda@lemmy.world 35 points 4 months ago (4 children)

Is this just media manipulation to give AI a bad name by connecting it with Nazis, even though it's not just them benefiting from AI?

[–] BumpingFuglies@lemmy.zip 27 points 4 months ago (1 children)

Sounds like something an AI-loving Nazi would say!

Seriously, though, yes. This was exactly my first thought. There are plenty of reasons to be apprehensive about AI, but conflating it with Nazis is just blatant propaganda.

[–] Infynis@midwest.social 32 points 4 months ago (2 children)

Nazis do thrive by spreading misinformation though, and AIs are great at presenting false information in a way that makes it look believable

[–] pavnilschanda@lemmy.world 8 points 4 months ago* (last edited 4 months ago) (1 children)

You are right. But I'm mostly observing how much of the newsfeed talks about AI being dangerous and dystopian (a fear that bad actors like the neo-Nazis in the article can certainly feed), and how the fear-mongering headlines outnumber the more neutral or occasionally positive ones. Then again, many news outlets benefit from such headlines regardless of topic, and this one puts the cherry on top.

[–] Eggyhead@kbin.run 8 points 4 months ago (1 children)

If neo nazis are deliberately trying to train the AIs that feed into everyone’s workflow, I think it is newsworthy despite what all the other headlines say.

The Neo Nazis are the threat, the AI is being abused.

[–] wizardbeard@lemmy.dbzer0.com 2 points 4 months ago (1 children)

I think this is a misunderstanding of how most of the AI models that feed into workflows actually work. Most of them don't dynamically re-train live based on how users are using them, at least not outside the context of that user/chat instance.

Most likely what these and other groups are doing is downloading pre-trained open-source model weights and running them locally, so they aren't restrained by any of the commercial AIs' limitations on what they will and won't output to users. I highly doubt there's enough material out there to truly train a new AI model on only explicitly racist material. This is just a bunch of assholes doing prompt engineering on open-source models running locally.
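To make the retraining point concrete, here's a toy sketch (purely illustrative, not any real model's API): at inference time the weights are a fixed input to the computation, so chatting with a locally run model reads them but never writes them back. Changing the weights would be a separate, explicit training step.

```python
# Toy illustration of inference vs. training (not a real LLM).
# Chatting with a model only *reads* its weights; nothing in a chat
# session updates them, which is why ordinary use can't "re-train" it.
import copy

class TinyModel:
    def __init__(self, weights):
        self.weights = weights  # frozen during inference

    def generate(self, prompt):
        # Inference uses the weights; it never mutates them.
        score = sum(self.weights)
        return f"reply(score={score}) to {prompt!r}"

model = TinyModel(weights=[0.1, 0.2])
before = copy.deepcopy(model.weights)
model.generate("some extremist prompt")
assert model.weights == before  # the chat did not change the model
```

Prompt engineering, by contrast, only changes the text fed into `generate`, which is why it's the cheap route these groups actually take.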

[–] Eggyhead@kbin.run 2 points 4 months ago

Oh, if it’s being run locally, then I’ve fundamentally misunderstood the situation. Thanks for pointing it out.

[–] Emperor@feddit.uk 3 points 4 months ago

So the idea to make people think that Nazis are using AI, might have come from a Nazi AI? 🤯

[–] kromem@lemmy.world 23 points 4 months ago* (last edited 4 months ago) (1 children)

Yep, pretty much.

Musk tried creating an anti-woke AI with Grok that turned around and said things like:

Or

And Gab, the literal neo-Nazi social media site trying to build an Adolf Hitler AI, has the most ridiculous system prompts I've seen in the attempt to make it work, and even with all that, the model totally rejects the alignment they try to give it after only a few messages.

This article is BS.

They might like to, but they're one of the groups that's going to have a very difficult time doing it successfully.

[–] r3df0x@7.62x54r.ru 1 points 4 months ago (2 children)

I wouldn't say Gab used to be an exclusively neo-Nazi site, but now that Twitter allows standard conservative discussions, all the normal people have probably left Gab for Twitter, and now Gab is probably more of a Nazi shithole.

I have seen openly Jewish people on Gab but you couldn't go 10 posts without finding something blatantly racist.

[–] barsquid@lemmy.world 3 points 4 months ago

Twitter has always allowed and coddled "standard conservative discussions."

[–] retrospectology@lemmy.world 13 points 4 months ago* (last edited 4 months ago) (2 children)

AI has a bad name because it is being pursued incredibly recklessly and any and all criticism is being waved away by its cult-like supporters.

Fascists taking up use of AI is one of the biggest threats it presents, and people are even trying to shrug that off. It's insanity the way people simply will not acknowledge the massive pitfalls that AI represents.

[–] pavnilschanda@lemmy.world 1 points 4 months ago

I think that applies to online spaces in general, where anything that goes against the grain gets shooed away by the zeitgeist of the specific space. I wish there were more places where we could all take criticism into account, generative AI included. Even r/aiwars, which is supposed to be a place for discussing both the good and the bad of AI, can come across as incredibly one-sided at times.

[–] Tregetour@lemdro.id 3 points 4 months ago* (last edited 4 months ago)

The purpose of the piece is to smear the notion of individual control and development of AI tools. It's known as 'running propaganda'.

[–] Coreidan@lemmy.world 30 points 4 months ago

So are non neo-nazis.

[–] UraniumBlazer@lemm.ee 28 points 4 months ago

Nazis are all in on vegetarianism.

This is totally not an attempt to make a bad faith argument against vegetarianism btw.

[–] SplashJackson@lemmy.ca 25 points 4 months ago (1 children)

Just another nail in the coffin of the internet, something that could have been so wonderful, a proto-hive mind full of human knowledge and creativity, and now it's turning to shite

[–] UltraGiGaGigantic@lemm.ee 3 points 4 months ago

Solidarity amongst the working class is not profitable to the 1%.

[–] best_username_ever@sh.itjust.works 21 points 4 months ago (1 children)

A strange source has found a few shitty generated memes. That's not journalism at all.

[–] spyd3r@sh.itjust.works 17 points 4 months ago

I'd be more worried about finding out which foreign governments and/or intelligence agencies are using these extremist groups as proxies to sow dissent and division in the West, and cutting them off.

[–] Tregetour@lemdro.id 4 points 4 months ago

I'm happy with outgroup x being able to develop their own AIs, because that means I'm able to develop AIs too.

[–] KingThrillgore@lemmy.ml 4 points 4 months ago

Because nobody will put up with their crap, they have to talk to autocorrect

[–] schnurrito@discuss.tchncs.de 0 points 4 months ago (1 children)
[–] UltraGiGaGigantic@lemm.ee 2 points 4 months ago
[–] zecg@lemmy.world -1 points 4 months ago

Go fuck yourself Wired. This used to be a cool magazine written by people in the know, now it's Murdoch-grade fearmongering.

[–] crawancon@lemm.ee 1 points 4 months ago

Pepperidge Farm remembers the early nineties

[–] YourPrivatHater@ani.social -4 points 4 months ago

I mean, they're just doing what Islamic terrorists did from the first second onwards. Kinda obvious.