this post was submitted on 03 Jun 2025
387 points (98.0% liked)

Technology


The moderators of a pro-artificial intelligence Reddit community announced that they have been quietly banning “a bunch of schizoposters” who believe “they've made some sort of incredible discovery or created a god or become a god,” highlighting a new type of chatbot-fueled delusion that started getting attention in early May.

top 50 comments
[–] meme_historian@lemmy.dbzer0.com 132 points 2 days ago* (last edited 2 days ago) (9 children)

At this rate we'll soon have a decentralized para-religious terrorist organization full of brainlets that got scared shitless after discovering Roko's Basilisk and are now doing the cyber lord's bidding in order to not get punished once AGI arrives

edit: changed to non-mobile link

[–] ggtdbz@lemmy.dbzer0.com 39 points 2 days ago (2 children)

Boy have I got news for you.

Look up the Zizians.

(Ok they’re only a tangential offshoot of people who maybe really like the Basilisk thought experiment and mostly don’t believe it. But hey. It’s underway!)

[–] digredior@lemmynsfw.com 16 points 2 days ago (1 children)
[–] meme_historian@lemmy.dbzer0.com 17 points 2 days ago (3 children)

What the fuck did I just read??

shit is so fucking insane, hard to believe. not gonna help us trans folk with the whole "this is just a mental illness" look either, especially with the "correlation is causation" attitude running so rampant these days.

honestly I get popping off once in a while but the reasoning is poor in this case. they could have done something more actually effective with all those resources and shit, why this?? I mean, I totally get cultism - I spent 16 years in a cult. but ugh what a waste.

[–] Brunbrun6766@lemmy.world 22 points 2 days ago

Zizians my guy, they already exist

[–] pticrix@lemmy.ca 17 points 2 days ago (1 children)

"rationalists". They dare use that name unironically.

[–] kat_angstrom@lemmy.world 8 points 1 day ago (2 children)

Imagine a boot SO mighty that if it exists it might crush you, so you need to lick it ahead of time so that someday if it does exist, it might not crush you.

[–] shadowfax13@lemmy.ml 6 points 1 day ago

that sub seems to be fully brigaded by bots from the marketing teams of closed-ai and Perplexity

[–] answersplease77@lemmy.world 30 points 1 day ago* (last edited 1 day ago) (1 children)

"Artifical" Intellegence has already taken over "Social" Media and the internet.

What I mean by the quotes: We replaced our social interactions with each other with Social Media, which has nothing social about it, then replaced the humans in social media with artificial slop generated by computers guessing what you want to read, watch, or hear.

Most of Facebook, Insta, YouTube, Reddit, Twitter... etc. is AI profiles, AI channels, and AI sloptrash content that funnels Google ad revenue back to some Russian or Indian dude who doesn't even speak English.

[–] CalipherJones@lemmy.world 12 points 1 day ago (1 children)

Lemmy seems to be the only place with actual people for the most part. I worry so heavily for the idiots of the world that can't discern robots from people. They're really going to fall for what a programmed machine has to say.

[–] webghost0101@sopuli.xyz 11 points 1 day ago (3 children)

I’ve wondered about this.

I can't believe that somehow everything except Lemmy got infected. There must be some AI comments here, but I haven't noticed them…

Of course, it's better not to dwell on this too much or paranoia quickly sets in.

[–] SparroHawc@lemm.ee 13 points 1 day ago (1 children)

Oh, I'm sure there are bots on Lemmy too. The general userbase, however - of people who are sick of Reddit's BS - are also going to have very little tolerance for bot BS, so the instances are incentivized to try to keep bot activity down lest they be de-federated.

[–] thiseggowaffles@lemmy.zip 18 points 1 day ago

Plus they don't see ad revenue, so there's no profit incentive to keep bots around acting as if they're real traffic. If anything, Lemmy instances are disincentivized from allowing bot traffic because it means more traffic than necessary, which costs them bandwidth.

[–] CalipherJones@lemmy.world 11 points 1 day ago (1 children)

It's not paranoia. The internet is legitimately dying.

[–] dzsimbo@lemm.ee 6 points 1 day ago (1 children)

Or maybe even take a step back. Dead internet theory is real and we're living it, but just because the main subreddits and FB are AI schlock doesn't mean Lemmy is the only 'real' place.

I feel the hardest part is keeping my senses about me when I argue with hivemind mentality. You'll probably get the feeling from my writing that I too have embraced the hive speak and mind. And this is where the bots get the everyperson. Bots speak in hivemind and meme format. I think this whole kerfuffle will do wonders for real online discussion, as low-effort discussions will be dismissed as white noise.

[–] veni_vedi_veni@lemmy.world 9 points 1 day ago* (last edited 1 day ago) (4 children)

Ngl, that last paragraph felt like some pseudo-profound word vomit that only an AI would produce.

you sus af

[–] Stern@lemmy.world 84 points 2 days ago (3 children)

tfw it's no longer just the AI hallucinating

[–] Skunk@jlai.lu 55 points 2 days ago (4 children)

Yeah, there was an article shared on Lemmy a few months ago about couples and families destroyed by AI.

Like, the husband thinks he's discovered some new, almost religious-level truth about how the world works. Then he becomes an annoying guru and ruins his social life.

Kind of like QAnon people, but with ChatGPT…

[–] pennomi@lemmy.world 41 points 2 days ago (1 children)

Turns out it doesn’t really matter what the medium is, people will abuse it if they don’t have a stable mental foundation. I’m not shocked at all that a person who would believe a flat earth shitpost would also believe AI hallucinations.

[–] Bouzou@lemmy.world 4 points 1 day ago (1 children)

I dunno, I think there's credence to considering it as a worry.

Like with an addictive substance: yeah, some people are going to be dangerously susceptible to it, but that doesn't mean there shouldn't be any protections in place...

Now what the protections would be, I've got no clue. But I think a blanket, "They'd fall into psychosis anyway" is a little reductive.

[–] pennomi@lemmy.world 7 points 1 day ago (1 children)

I don’t think I suggested it wasn’t worrisome, just that it’s expected.

If you think about it, AI is tuned using RLHF, or Reinforcement Learning from Human Feedback. That means the only thing AI is optimizing for is "convincingness". It doesn't optimize for intelligence; anything that seems like intelligence is literally just a side effect as it forever marches onward towards becoming convincing to humans.
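For anyone curious what that tuning step looks like, here's a minimal toy sketch of the pairwise preference loss used in reward modelling (made-up dimensions and random data standing in for real human rankings, not any actual lab's pipeline). The point is that the reward model only ever learns "which answer did a human prefer", i.e. convincingness, never factual accuracy:

```python
# Toy sketch of the RLHF reward-modelling step (pairwise preference loss).
# Hypothetical minimal example in PyTorch; real systems score full LLM outputs.
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    def __init__(self, dim=16):
        super().__init__()
        self.score = nn.Linear(dim, 1)  # stand-in for a full transformer scorer

    def forward(self, response_embedding):
        return self.score(response_embedding)

model = RewardModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Fake embeddings for a "chosen" (human-preferred) and "rejected" answer.
chosen, rejected = torch.randn(8, 16), torch.randn(8, 16)

# Bradley-Terry style loss: push the preferred answer's score above the other.
# Nothing here checks whether the preferred answer was actually true.
loss = -torch.nn.functional.logsigmoid(model(chosen) - model(rejected)).mean()
opt.zero_grad()
loss.backward()
opt.step()
```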

“Hey, I’ve seen this one before!” You might say. Indeed, this is exactly what happened to social media. They optimized for “engagement”, not truth, and now it’s eroding the minds of lots of people everywhere. AI will do the same thing if run by corporations in search of profits.

Left unchecked, it’s entirely possible that AI will become the most addictive, seductive technology in history.

[–] Vanth@reddthat.com 23 points 2 days ago (2 children)

This feels a bit like PTA-driven panic about kids eating Tide Pods when like one person did it. Or razor blades in Halloween candy. Or kids making toilet hooch with their juice boxes. Or the choking game sweeping playgrounds.

But also, man on internet with no sense of mental health ... sounds almost feasible.

[–] Pogogunner@sopuli.xyz 19 points 2 days ago (1 children)

I directly work with one of these people - they admit to spending all of their free time talking to the LLM chatbots.

On our work forums, I see it's not uncommon at all. If it makes you feel any better, AI loving is highly correlated with people you shouldn't ever listen to in the first place.

[–] CalipherJones@lemmy.world 7 points 1 day ago

What an absolutely pathetic life that is holy shit

[–] chaosCruiser@futurology.today 13 points 2 days ago (2 children)

The Internet is a pretty big place. There's no such thing as an idea that is too stupid. There are always at least a few people who will turn that idea into a central tenet of their life. It could be too stupid for 99.999% of the population, but that still leaves tens of thousands of people who are totally into it.

[–] raltoid@lemmy.world 14 points 2 days ago

And that's not even getting started on "AI girlfriends", which are isolating vulnerable people to a terrifying degree. And since they're garbage at context, you get things like that case last year where one appeared to be encouraging a suicidal teen.

[–] Opinionhaver@feddit.uk 12 points 2 days ago

It never was. Hallucinations are in no way unique to LLMs.

[–] cyrano@lemmy.dbzer0.com 63 points 2 days ago (1 children)

The year is 2026: the cyberchristo religion is taking off…

[–] thenose@lemmy.world 42 points 2 days ago (1 children)

The WH40K lore sounds more realistic than ever

[–] veni_vedi_veni@lemmy.world 5 points 1 day ago

I firmly believe when we have actual sexbots, that will be the sharp decline and decadence that will birth Slaanesh.

[–] nagaram@startrek.website 45 points 2 days ago (1 children)

I think Terry A. Davis would have found God in ChatGPT and could have figured out the API calls on TempleOS

[–] palordrolap@fedia.io 26 points 2 days ago (1 children)

Hard to say. I feel like it's about as likely he would have found LLMs to be an overcomplicated false prophet or false god.

This was a man whose operating system turned a PC into something not unlike an advanced Commodore 64, after all. He liked the simplicity and lack of layers the older computers provided. LLMs are literally layers upon layers of obfuscation and pseudo-neural wiring. That's not simple or beautiful.

It might all boil down to whether the inherent randomness of an LLM could be (made to be) sufficiently influenced by a higher power or not. He often treated random number outcomes as the influence of God, and it's hard to say how seriously he took that on any given day.

[–] Carmakazi@lemmy.world 11 points 2 days ago (3 children)

I'd imagine it's a fool's errand to try and find threads of logic and consistency in the profoundly schizophrenic.

[–] Kolanaki@pawb.social 4 points 1 day ago* (last edited 1 day ago)

I wonder how many AI chatbot accounts the pro-AI places have banned.

[–] whaleross@lemmy.world 20 points 1 day ago* (last edited 1 day ago) (4 children)

I've been trying to configure ChatGPT to tell me if I'm wrong in a question or statement, but damn, it never does unless I keep probing for support or links. I've been having the feeling that it has gotten worse with later models. Glad but also sad to see I was right.

Anybody know other LLMs that are more "trustworthy"* and capable of searching online for more information?

Edit: *trustworthy in quotes because of course people will jump on this. I know the limitations of LLMs, I don't need you to tell me how much you hate everything AI. And I know LLMs aren't AI.

[–] Etterra@discuss.online 23 points 1 day ago (1 children)

There are no trustworthy LLMs. They don't know or understand what they're saying - they're literally just predicting words that sound like they match what they were taught. They're only barely smarter than a parrot, and they have no idea how to research anything or tell facts from made-up bullshit. You're wasting your time by trying to force them to do something they're literally incapable of doing.

You're better off researching things the hard way: check primary sources and then check the credibility of those sources.

[–] brandon@lemmy.ml 11 points 1 day ago

Considering that parrots can have actual thoughts, I'd say LLMs are even less smart than that.

[–] webghost0101@sopuli.xyz 6 points 1 day ago

Claude definitely has its impressive moments where it calls out something inaccurate.

It's also way less sycophantic, more mature, and better for light coding.

My only issue is that the servers are sometimes slow, and so is the iOS app, which frequently throws an error after 2 minutes of waiting.

[–] R3D4CT3D@midwest.social 26 points 2 days ago (1 children)

Can I call off work bc of "AI delusions"?

[–] JasonDJ@lemmy.zip 20 points 2 days ago

AI delusions are half the reason I go to work in the first place.

[–] henfredemars@infosec.pub 12 points 2 days ago

AI is not healthy. Our mental health is nowhere near good enough to handle even this level of machine intelligence.
