So, let's just use an example of a pill that treats headaches so I can understand, because I'm kinda stupid.
It works super well, and most patients taking it in double-blind trials find it relieves headache pain considerably. Why is it a bad thing, to the point of rejecting it as a treatment, that the patient feels the pill is working very well and has concluded on their own that it's probably not a placebo?
I can understand a patient being misled by coincidence, but surely a measurable, verifiable, and repeatable benefit to the patient compared to pills without medicinal ingredients would warrant a different conclusion, wouldn't it?
In your coma scenario, I'm sure there is a statistical analysis that can be performed to show with a degree of certainty that a specific medication has a higher likelihood of being effective than a placebo in a controlled experiment.
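By that I mean something like a simple two-arm comparison. Here's a rough sketch with completely made-up numbers, just to show the kind of analysis I'm picturing (Python, using scipy):

```python
# Toy two-arm comparison: drug vs. placebo, invented headache-relief scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical relief scores (higher = more relief); the means are made up.
drug = rng.normal(loc=6.0, scale=2.0, size=100)     # drug arm
placebo = rng.normal(loc=4.0, scale=2.0, size=100)  # placebo arm

t, p = stats.ttest_ind(drug, placebo)
print(f"t = {t:.2f}, p = {p:.3g}")  # small p: a gap this big is unlikely to be chance alone
```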
I commented on this same story a while ago, when it first broke that it was likely to be rejected, and I don't think anyone explained it in that thread.
The problem is that it's not a double-blind trial, because the participants can tell whether they are on the drug. The placebo effect is also a problem because there is no real control group.
But why is that such a problem that it's worth rejecting what is otherwise widely considered an effective treatment?
I am fundamentally not understanding the inherent risk to patients resulting from the structure of the study that is apparently so harmful that it must not continue.
Why is being able to tell that your medication is working a negative thing in a study? And such a negative thing that it apparently negates all other positive aspects of the medication.
The problem is that you can't tell if it's truly working due to the placebo effect.
Yeah, I understand that. But if there's a measurable difference between the efficacy of the two pills, one that even the patient is obviously aware of, why does that warrant extreme caution compared with another pill that doesn't have this effect?
Like why is it better to have a study in which the patient literally can't tell the difference between treatments? Why is it not detrimental for a federal agency to unilaterally dismiss this?
I understand that people online aren't obligated to engage with me thoughtfully, but I was hoping for an actual explanation that is longer than 50 words from someone who is more knowledgeable than me regarding the validity of scientific experiments as they relate to pharmaceuticals.
The idea of modern medicine is to sell chemical compounds that actually have an effect. It’s a philosophical and ethical thing. All products have a unique psychological effect that gets intertwined with their biochemical effect. If you can’t study them individually, it’s impossible to tell if the biochemical effect even exists at all. If your medicine relies heavily, or even entirely, on the psychological side, it’s no different than homeopathy. The idea of modern medicine is to be better than the old stuff that preceded it.
I prefer to think of this as an equation: P_m + B_m = P_p + B_p
P_m = psychological effect of the medicine
B_m = biochemical effect of the medicine
P_p = psychological effect of the placebo (surprisingly big)
B_p = biochemical effect of the placebo (0)
If these sides are equal, the medicine is just as effective as a placebo. If the medicine side is bigger, you'll want to know how much of the difference comes from the P term and how much from the B term. In order to figure that out, you need to know some of the values. Normally you can just assume that P_m = P_p, but if you can't assume that, you're left with two unknowns in one equation. In this case you really can't assume they are equal, which means your data won't let you figure out how much of the total effect is psychological and how much is biochemical. It could be 50/50, 10/90, who knows. That sort of uncertainty is a serious problem, because of the philosophical and ethical side of developing medicine.
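To make the two-unknowns point concrete, here's a rough sketch with invented numbers (not real data):

```python
# Each arm's observed effect is the sum of a psychological (P) and a
# biochemical (B) component; only the sums can be measured.
observed_medicine = 8.0  # P_m + B_m
observed_placebo = 3.0   # P_p + B_p, and with B_p assumed to be 0, P_p ~= 3

# Blinded trial: assume P_m == P_p, so B_m is pinned down.
B_m_blinded = observed_medicine - observed_placebo
print(B_m_blinded)  # 5.0

# Unblinded trial: P_m may differ from P_p, so many splits fit the same data.
for P_m in (3.0, 6.0, 8.0):
    B_m = observed_medicine - P_m
    print(f"P_m = {P_m}, B_m = {B_m}")  # 5.0, 2.0, or even 0.0 are all consistent
```

Every one of those splits produces exactly the same measurement, which is the uncertainty I'm talking about.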
I'm not sure if the biochemical effects of a placebo are 0.
Shamelessly stolen from Wikipedia because I couldn't find the original source
Statistical tests are very picky. They have been designed by mathematicians in an ideal mathematical vacuum, void of all reality. The method works in those ideal conditions, but when you take it and apply it in messy reality, where everything is flawed, you may run into trouble. In simple cases it's easy to abide by the assumptions of the statistical test, but as your experiment gets more and more complicated, there are more and more potholes to dodge. In the best case, your messy data is just barely clean enough that you can be reasonably sure the statistical test still works well enough, and you can sort of trust the result up to a point.
However, when you know for a fact that some of the underlying assumptions of the statistical test are clearly being violated, all bets are off. Sure, you get a result, but who in their right mind would ever trust that result?
If the test says that the medicine works, there's a clear financial incentive to believe it and start selling those pills. If it says the medicine is no better than placebo, there's a similar incentive to reject the test result and demand more experiments. Most of that debate goes out the window if you can be reasonably sure that the data is good enough and the result of your statistical test is reliable.
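As a rough illustration of why a broken blind matters, here's a toy simulation with invented numbers in which the pill has zero biochemical effect, but unblinded patients who know they got the real pill respond more strongly anyway:

```python
# Toy simulation: zero biochemical effect, but a bigger psychological boost
# in the arm that knows it got the real drug. A naive test still says "significant".
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 200

biochemical_effect = 0.0               # the pill does nothing chemically
psych_drug, psych_placebo = 2.0, 0.5   # patients who can guess their arm respond differently

drug_arm = rng.normal(0.0, 2.0, n) + biochemical_effect + psych_drug
placebo_arm = rng.normal(0.0, 2.0, n) + psych_placebo

t, p = stats.ttest_ind(drug_arm, placebo_arm)
print(f"p = {p:.2g}")  # usually far below 0.05, even though the biochemical effect is zero
```

The test isn't wrong about the arms being different; it just can't tell you whether the difference came from the chemistry or from knowing which pill you got.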