this post was submitted on 09 Jul 2025
558 points (91.8% liked)

Science Memes


Welcome to c/science_memes @ Mander.xyz!

A place for majestic STEMLORD peacocking, as well as memes about the realities of working in a lab.



Rules

  1. Don't throw mud. Behave like an intellectual and remember the human.
  2. Keep it rooted (on topic).
  3. No spam.
  4. Infographics welcome, get schooled.

This is a science community. We use the Dawkins definition of meme.



top 50 comments
[–] finitebanjo@lemmy.world 52 points 6 days ago* (last edited 6 days ago) (8 children)

Yeah no shit, AI doesn't think. Context doesn't exist for it. It doesn't even understand the meanings of individual words at all, none of them.

Each word or phrase is a numerical token in an order that approximates sample data. Everything is a statistic to AI; it does nothing but sort meaningless, interchangeable tokens.
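A toy sketch of that claim, assuming a tiny made-up corpus. Real LLMs use learned networks over long contexts rather than raw bigram counts, but the "tokens are just numbers ordered by statistics" point is the same:

```python
# Toy sketch (not any real model): text becomes arbitrary integer
# tokens, and "prediction" is just sampling from counts of which
# token followed which in the sample data.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ran".split()

# 1. Map each word to an arbitrary numeric token ID.
vocab = {w: i for i, w in enumerate(dict.fromkeys(corpus))}
tokens = [vocab[w] for w in corpus]

# 2. Count which token follows which (a bigram statistic).
follows = defaultdict(Counter)
for a, b in zip(tokens, tokens[1:]):
    follows[a][b] += 1

# 3. "Generate" by always picking the most frequent successor.
inv = {i: w for w, i in vocab.items()}
tok = vocab["the"]
for _ in range(4):
    tok = follows[tok].most_common(1)[0][0]
    print(inv[tok], end=" ")  # prints: cat sat on the
```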

People cannot "converse" with AI and should immediately stop trying.

[–] sad_detective_man@leminal.space 44 points 6 days ago (1 children)

imma be real with you, I don't want my ability to use the internet to search for stuff examined every time I have a mental health episode. like fuck ai and all, but maybe focus on the social isolation factors and not the fact that it gave search results when he asked for them

I think the difference is that ChatGPT is very personified. It's as if you were talking to a person, as opposed to searching for something on Google. That's why a headline like this feels off.

[–] WrenFeathers@lemmy.world 20 points 6 days ago* (last edited 6 days ago)

When you go to machines for advice, it’s safe to assume they are going to give it exactly the way they have been programmed to.

If you go to a machine for life decisions, it’s safe to assume you are not smart enough to know better and, by the merit of this example, probably should not be allowed to use them.

[–] Zerush@lemmy.ml 27 points 6 days ago (5 children)

It's bad if you also see contextual ads alongside the answer.

[–] FireIced@lemmy.super.ynh.fr 15 points 6 days ago

It took me some time to understand the problem.

That’s not their job, though.

[–] burgerpocalyse@lemmy.world 21 points 6 days ago (2 children)

AI life coaches be like 'we'll jump off that bridge when we get to it'

[–] LovableSidekick@lemmy.world 3 points 6 days ago* (last edited 6 days ago)

I would expect that an AI designed to be a life coach would be trained on a lot of human interaction about moods and feelings, so its responses would simulate picking up emotional cues. That's assuming the designers were competent.
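The crudest form of that cue-pickup is pattern matching before generation. Everything below (the cue list, the labels, the example message) is invented for illustration; a trained model would classify far more subtly:

```python
# Illustrative only: how a "life coach" bot might flag emotional
# cues in a message before generating a reply. The cue list is
# invented, not from any real product.
MOOD_CUES = {
    "lost my job": "distress",
    "can't sleep": "distress",
    "so excited": "positive",
}

def detect_cues(message: str) -> list[str]:
    """Return mood labels whose cue phrases appear in the message."""
    text = message.lower()
    return [label for cue, label in MOOD_CUES.items() if cue in text]

cues = detect_cues("I just lost my job. What bridges are taller than 25m?")
print(cues)  # ['distress'] -- a competent design would route this
             # toward a supportive reply, not a list of bridges.
```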

[–] TimewornTraveler@lemmy.dbzer0.com 9 points 6 days ago (1 children)

what does this have to do with mania and psychosis?

[–] phoenixz@lemmy.ca 4 points 6 days ago (1 children)

There are various other reports of ChatGPT pushing susceptible people into psychosis where they think they're god, etc.

It's correct; they're just different articles.

[–] TimewornTraveler@lemmy.dbzer0.com 1 points 4 days ago* (last edited 4 days ago)

ohhhh, are you saying the image is multiple separate articles from separate publications that have been collaged together? that makes a lot more sense. i thought it was saying the bridge thing was symptomatic of psychosis.

yeahh, people in psychosis are probably getting reinforced by LLMs, but tbqh that seems like one of the least harmful uses of LLMs! (except not really, see below)

first off, they are going to be in psychosis regardless of what AI tells them, and they are going to find evidence to support their delusions no matter where they look; that's literally part of the definition. so it seems the best outcome here is having a space where they can talk to someone without being doubted. for someone in psychosis, often the most distressing thing is that suddenly you are being lied to by literally everyone you meet, since no one will admit that the thing you know is true is actually true. why are they denying it? what kind of cover-up is this?! it can be really healing for someone in psychosis to be believed.

unfortunately it's also definitely dangerous for LLMs to do this, since you can't just reinforce the delusions; you gotta steer towards something safe without being invalidating. i hope insurance companies figure out that LLMs are currently incapable of doing this and thus must not be allowed to practice billable therapy for anyone capable of entering psychosis (aka anyone) until they resolve that issue.

[–] rumba@lemmy.zip 12 points 6 days ago (1 children)
  1. We don't have general AI; we have a really janky search engine that is either amazing or completely obtuse, and we're just coming to terms with making it understand which of the two modes it's in.

  2. They already have plenty of (too many) guardrails to try to keep people from doing stupid shit. Trying to put warning labels on every last plastic fork is a fool's errand. It needs a message on login that you're not talking to a real person, that it's capable of making mistakes, and that if you're looking for self-harm or suicide advice, you should call a number. Well, maybe for ANY advice, call a number. (A sketch of the kind of intercept meant here is below.)
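Purely as a sketch of that login-message-plus-intercept idea: the keyword list and wording are assumptions (988 is the US crisis line), and real guardrails are far more involved than keyword matching:

```python
# Hypothetical sketch of a login disclaimer plus a crisis
# intercept; the keyword list and messages are invented here.
DISCLAIMER = ("You are not talking to a person. This system makes "
              "mistakes. For advice that matters, ask a human.")
CRISIS_TERMS = {"suicide", "kill myself", "self-harm"}
HOTLINE = "If you are in crisis, call or text 988 (US) or a local hotline."

def guard(user_message: str) -> str | None:
    """Return an intercept message, or None to let the model answer."""
    text = user_message.lower()
    if any(term in text for term in CRISIS_TERMS):
        return HOTLINE
    return None

print(DISCLAIMER)  # shown once at login
# The notorious failure case: no flagged term appears, so a
# naive keyword filter lets the query through to the model.
print(guard("I lost my job. What bridges are taller than 25 meters?"))  # None
```

Which is exactly why per-message keyword guardrails keep failing: the dangerous queries rarely contain the dangerous words.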

[–] some_guy@lemmy.sdf.org 13 points 6 days ago (1 children)

It made up one of the bridges, I'm sure.

[–] Vanilla_PuddinFudge@infosec.pub 5 points 6 days ago* (last edited 6 days ago) (2 children)

fall to my death in absolute mania, screaming and squirming as the concrete gets closer

pull a trigger

As someone who is also planning for 'retirement' in a few decades, I've always thought guns seemed like the better plan.

[–] daizelkrns@sh.itjust.works 4 points 5 days ago (2 children)

Yeah, it would probably be pills of some kind for me. Honestly, the only thing stopping me is the fear that I'd somehow fuck it up and end up trapped in my own body.

Would be happily retired otherwise

[–] InputZero@lemmy.world 6 points 5 days ago

Résumé, by Dorothy Parker:

Razors pain you;
Rivers are damp;
Acids stain you;
And drugs cause cramp.
Guns aren’t lawful;
Nooses give;
Gas smells awful;
You might as well live.

There are not many ways to kill oneself that don't often end in a botched attempt. Pills are a painful and horrible way to go.

[–] Shelbyeileen@lemmy.world 3 points 5 days ago (2 children)

I'm a postmortem scientist, and one of the scariest things I learned in college was that only 85% of gun suicide attempts are successful. The other 15% survive, and nearly all have brain damage. I only know of two painless ways to commit suicide that don't destroy the body's appearance, so there can still be a funeral visitation.

[–] bathing_in_bismuth@sh.itjust.works 3 points 6 days ago* (last edited 6 days ago)

Dunno, the idea of five seconds for whatever is out there to reach you, through the demons whispering in your ear, while you contemplate when to pull the trigger of the 12-gauge aimed at your face, seems like the most logical bad decision.

[–] OldChicoAle@lemmy.world 4 points 5 days ago

Do we honestly think OpenAI or the tech bros care? They just want money. Whatever works. They're evil, like every other industry.

[–] 20cello@lemmy.world 5 points 6 days ago

Futurama vibes

[–] samus12345@sh.itjust.works 6 points 6 days ago* (last edited 6 days ago)

If only Murray Leinster could have seen how prophetic his story "A Logic Named Joe" became. Not only did it correctly predict household computers and the internet in 1946, but also people using the computers to find out how to do things and being given the most efficient method regardless of any kind of morality.

[–] MystikIncarnate@lemmy.ca 4 points 6 days ago

AI is the embodiment of "oh no, anyways"

[–] RaivoKulli@sopuli.xyz 4 points 6 days ago

"Hammer hit the nail you decided to strike"

Wow

[–] jjjalljs@ttrpg.network 2 points 6 days ago

AI is a mistake, and we would be better off if the leadership of OpenAI were sealed in an underground tomb. Actually, that's probably true of most big orgs' leadership.
