this post was submitted on 16 Aug 2025
892 points (99.0% liked)

People Twitter

[–] ByteJunk@lemmy.world -3 points 1 month ago (9 children)

And why are you assuming that a model designed to be used by physicians would not include the very same expert analysis that goes into UpToDate or DynaMed? This is absolutely trivial to do; the only thing stopping it is copyright.

AI can not only look up reliable sources, it will probably do it much better and faster than you or me or anybody else.

I'm a 4th year medical student and I have literally never used an LLM

It was clear enough from your post, but thanks for confirming. Perhaps you should give it a try so you can understand its limitations and strengths first-hand, no? Grab one of the several generic LLMs available and ask something like:

Can you provide me with a small summary of the most up to date guidelines for the management of fibrodysplasia ossificans progressiva? Please be sure to include references, and only consider sources that are credible, reputable and peer reviewed whenever possible.

Let me know how it did. And note that it's probably a general-purpose model trained on very generic data, not at all optimized for this usage, but it's impossible to dismiss the capabilities here...

[–] gens@programming.dev 4 points 1 month ago (5 children)

It's called RAG (retrieval-augmented generation), and it's the only "right" way to get any accurate information out of an LLM. And even that is not perfect. Not by a long shot.

You can use the retrieval part without an LLM; at its core it's keyword search (see the sketch below). You still have to know what you're asking, so you have to study. Study without an imprecise LLM that can feed you plausible-sounding false information.
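Roughly, the retrieval step looks like this. A minimal sketch in Python, with a made-up toy corpus and plain keyword-overlap scoring (real setups use embedding search and reranking, but the principle is the same: the model only ever sees what the retriever pastes into the prompt):

```python
# Minimal sketch of RAG's retrieval step. All names and snippets here are
# made up for illustration; real systems use embedding search + reranking.
from collections import Counter

# Toy stand-in for a vetted corpus like UpToDate (hypothetical snippets).
CORPUS = [
    "FOP guidelines: avoid intramuscular injections, biopsies, and elective surgery.",
    "Flare-ups of fibrodysplasia ossificans progressiva are often managed with corticosteroids.",
    "Soft-tissue trauma can trigger heterotopic ossification in FOP patients.",
]

def score(query: str, doc: str) -> int:
    """Crude keyword-overlap score: how many query words appear in the doc."""
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    return sum((q & d).values())

def retrieve(query: str, k: int = 2) -> list[str]:
    """Pick the k best-matching passages -- no LLM involved in this step."""
    return sorted(CORPUS, key=lambda doc: score(query, doc), reverse=True)[:k]

def build_prompt(query: str) -> str:
    """The 'augmented' part: retrieved text gets pasted into the model's prompt."""
    context = "\n".join(retrieve(query))
    return f"Answer using ONLY the sources below.\n\n{context}\n\nQuestion: {query}"

print(build_prompt("management of fibrodysplasia ossificans progressiva"))
```

The point being: the accuracy comes from the corpus and the retriever, not from the model's weights, which is why you still have to know what to ask.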

There are other issues with current LLMs that make them problematic. Sure, you'll catch on to those problems if you use them, but you still have to know more about the topic than they do.

They are a fun toy and OK for low-stakes knowledge (e.g. cooking recipes). But as a tool in serious work they're a rubber duck at best.

PS: What the guy a couple of comments above said about sources is probably about web search. Even when an LLM reads the sources, it can easily misinterpret them. Like how Apple pulled their AI summaries because they were often just wrong.

[–] ByteJunk@lemmy.world -3 points 1 month ago (4 children)

Let's not move the goalposts. The OP is about med students successfully using GPT to pass their exams. As another comment put it, it's not about Karen using GPT to diagnose pops; it's about trained professionals using an AI tool to assist them.

And yet, all we get is a bunch of people spewing vague FUD and spitballing opinions as if they're proven facts, or as if AI has stopped evolving and the current limitations are never going to be surpassed.

[–] gens@programming.dev 4 points 1 month ago (1 children)

The current limitations of LLMs are built into how they fundamentally work. We would need something completely new to get past them. That is a fact.

Honestly the thought of med students using them to pass exams scares me.

Sure, use them to replace the CEOs of some unimportant companies like Facebook. But they are not for jobs where other people's lives are at stake. They inherently hallucinate (like many CEOs). It's built into how they work.

[–] ByteJunk@lemmy.world 0 points 1 month ago

I don't think the bar will be where you're setting it.

Suppose a new cancer drug or something comes out that significantly improves patients' life expectancy and quality of life. In rare cases, however, it can cause serious liver complications that may be fatal. Should this drug be used, or not?

It's not trivial, but there's a chance that it would in fact be used.

My point about AI hallucinations is that they're the same kind of trade-off. If at some point it's proven that AI use leads to better patient outcomes, but can have side effects, should it be discarded outright?
