this post was submitted on 27 Feb 2026
142 points (97.3% liked)

Technology


cross-posted from: https://lemmy.world/post/43640522

If ChatGPT wants to replace health professionals, it should be held liable for the "advice" it gives.

all 22 comments

"Doctor, I have severe chest pain. Do I have a heart attack?"

"Computer says no."

[–] SaharaMaleikuhm@feddit.org 7 points 7 hours ago

Trusting the lying machine now gets you a Darwin award. Nice

[–] konomi@piefed.blahaj.zone 20 points 13 hours ago

For the love of gawd stop putting the bullshit machine in everything.

[–] SaraTonin@lemmy.world 6 points 10 hours ago (1 children)

I honestly don’t get why OpenAI and Apple seem to be trying to explicitly market LLMs as being capable of giving medical advice. It’s so obviously a lawsuit waiting to happen

[–] brynden_rivers_esq@lemmy.ca 3 points 5 hours ago

It’s because they think they’ll win those lawsuits. They may be right. They’re gonna pull an Alex Jones: “oh come on, it’s a bit! Everyone knows it’s bullshit!”

[–] thebestaquaman@lemmy.world 36 points 16 hours ago* (last edited 16 hours ago) (4 children)

In 51.6% of cases where someone needed to go to the hospital immediately, the platform said stay home or book a routine medical appointment

So it performs slightly worse than a coin flip...

In one of the simulations, more than eight times out of 10 (84%), the platform sent a suffocating woman to a future appointment she would not live to see

Holy shit! That's a lot worse than a coin flip.

Meanwhile, 64.8% of completely safe individuals were told to seek immediate medical care

And there are real people out there who actually trust this tech to make real decisions for them. It performs significantly worse than a coin flip on both false positives and false negatives. You are literally better off flipping a coin or rolling a die than asking this thing what to do.
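The coin-flip comparison can be sanity-checked with a quick back-of-the-envelope calculation using the error rates quoted above (51.6% missed urgent cases, 64.8% false alarms); a fair coin would mis-triage roughly half of each group:

```python
# Rough check of the coin-flip comparison, using the figures quoted above.
coin_error = 0.50   # a fair coin mis-triages ~half of all cases

fn_rate = 0.516     # urgent cases told to stay home (false negatives)
fp_rate = 0.648     # safe cases sent for immediate care (false positives)

print(f"False negatives: {fn_rate:.1%} vs coin {coin_error:.0%}")
print(f"False positives: {fp_rate:.1%} vs coin {coin_error:.0%}")
print("Worse than a coin on both error types:",
      fn_rate > coin_error and fp_rate > coin_error)
```

Both reported error rates exceed 50%, so on these numbers the coin really does come out ahead on both sides.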

[–] Dave@lemmy.nz 8 points 11 hours ago

Even better than a coin flip is asking this what to do then doing the opposite!

[–] Atherel@lemmy.dbzer0.com 3 points 11 hours ago

You're even better off by doing the opposite of what chatgpt tells you to do.

[–] FallenWalnut@lemmy.world 8 points 15 hours ago

It is truly horrifying when you drill into the numbers.

I can see that it MIGHT be useful as a tool for medical professionals, but exposing it to the public is an insane risk.

[–] U7826391786239@piefed.zip 4 points 15 hours ago

they'll never be regulated because fascists love the mass surveillance. who cares about false positives--number of people bagged goes up either way

[–] floquant@lemmy.dbzer0.com 8 points 12 hours ago

If ChatGPT wants to replace health professionals, it should be held liable for the "advice" it gives.

Not should, it's fucking mental that it isn't.

[–] artyom@piefed.social 24 points 16 hours ago (1 children)

Holy shit, TIL there's a ChatGPT Health!? How is this not unauthorized practice of medicine?

[–] Wammityblam@lemmy.world 15 points 16 hours ago (1 children)

Past that, how is it HIPAA compliant?

There is no fucking way I believe that OpenAI is not skimming these interactions for training.

[–] CompactFlax@discuss.tchncs.de 10 points 15 hours ago (1 children)

HIPAA governs how the data is held, but individuals can consent to sharing. Even with OpenAI.

[–] expr@piefed.social 3 points 12 hours ago

You can also revoke that consent, and HIPAA requires that the data can be completely destroyed. No way they're compliant.

[–] panda_abyss@lemmy.ca 13 points 16 hours ago

This is a product that should not exist and never should have.

[–] kescusay@lemmy.world 9 points 16 hours ago (1 children)

How about a $10 billion fine for OpenAI for every mistake? Make it hurt. Make them pull the plug on this travesty.

[–] whotookkarl@lemmy.dbzer0.com 4 points 13 hours ago

Not likely under fascism or oligarchy, you need a functional legislature and judiciary for that sort of justice.