There are multiple reasons why DBRCG studies are conducted, and as is the case with drugs, they’re usually not done with the safety of individuals in mind. That’s mostly lip service. The final goal of drug studies is to manufacture profit, and one of the ways this goal is achieved is through scientificating the results with DBRCG “methodology.”
The best, for instance, that drug studies relying on inferential statistical sampling can do is help individuals gamble better about their drug consumption choices.
You may not have meant it this way, but this statement - that using statistics helps you make better ‘gambling’ choices - completely supports the entire point of doing double blind studies. You try to subtly deride it by using the word ‘gambling’, but you’re saying that using these statistics rather than not using them does, in fact, give you a better chance of making the right choice. Which is what the statistics are for. So…yes, they help people ‘gamble’ about their drug choices better…and that’s really the entire point of them in the first place.
I agree that statistics are useful, but their utility is highly overrated.
As Hume rightly stressed, induction does not warrant such faith.
Induction is the only thing that could possibly warrant thinking that something works.
Only? Is this indicative of your faith in induction? Induction itself is contrary to your “only” implication. Recall the first two rules of the probability calculus: Minimal Rule 1: no probability is less than zero; Minimal Rule 2: if A is a tautology, Pr(A) = 1. Your use of “only” equals “1” or “100%”, a violation of these rules and a sign of ignorance or forgetfulness about why the problem of induction is a problem.
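For reference, the two rules invoked here are the opening axioms of the standard (Kolmogorov-style) probability calculus, which can be stated formally as:

```latex
\Pr(A) \ge 0 \quad \text{for every event } A \qquad \text{(Minimal Rule 1)}
```
```latex
\Pr(A) = 1 \quad \text{if } A \text{ is a tautology} \qquad \text{(Minimal Rule 2)}
```

On this reading, treating an inductive conclusion as the “only” possibility amounts to assigning it probability 1, a value the calculus reserves for tautologies.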
And whether or not “something works” depends on exactly what that something is and what precisely is meant by “works.”
You can’t prove that a drug works by sitting in your armchair thinking about it.
What does induction have to do with proof?
That’s what statistics and studies are for. So, yes, induction does warrant it (not ‘faith’; that’s just your derogatory term for putting down inductive beliefs). If a double-blind study shows that drug x is significantly more effective than placebo in 90% of people, and 5% of people experience nausea with the drug, then induction most definitely warrants thinking, “Hey, I’ve got a 90% chance of this drug helping me and a 5% chance of experiencing nausea.” I don’t see what the problem is with that.
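For what it’s worth, the “shows” in a claim like “significantly more effective than placebo” usually cashes out as a significance test comparing the two arms of the trial. A minimal sketch, using entirely hypothetical numbers (none of these figures come from the discussion above):

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-proportion z-statistic: does the drug arm differ from placebo?"""
    p_a, p_b = success_a / n_a, success_b / n_b
    # Pooled proportion under the null hypothesis of no difference
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical arms: 90/100 improved on the drug vs 40/100 on placebo.
z = two_proportion_z(90, 100, 40, 100)
print(f"z = {z:.2f}")  # |z| > 1.96 is conventionally 'significant' at the 5% level
```

The point is only that “significant” is a statement about the observed difference relative to sampling noise, not a proof that the drug works for any given individual.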
Statistical studies serve a variety of purposes, not just the ones you’re limiting them to.
What do you mean by “shows”?
“Inductive beliefs”? According to Bayesianism, they are more than beliefs. They are the “axioms” of the probability calculus.
Placebos are “good enough” for the time being, but they are not always a reliable methodological tool. We should not rely too heavily on our assumptions about them and rest on our laurels by continuing to assume that they provide reliable standards of comparison. As Guy P. Harrison puts it in his book Think: Why You Should Question Everything (setting aside for now the fact that he constantly violates this maxim in his book, especially when it comes to his scientific materialist, atheistic, ultra-sceptical assumptions):
It’s always a good idea to keep in mind the placebo effect when hearing about some alternative medicine that a salesperson, friend or family member is raving about. This strange phenomenon is very real and undoubtedly is responsible for much of medical quackery’s success. Some people some of the time can get a positive health benefit from taking a fake medicine pill…instead of real medicine. This is well documented but still is not fully understood. The problem you need to keep in mind, however, is that it’s not consistent, and even if there is some positive gain, it might not be enough to get one through an illness safely. So the placebo effect is not something anyone should rely on (p. 102, Prometheus: 2013; my emphases).
Your 90%/5% breakdown is a good example of how DBRCG studies are misunderstood. The results do not mean you have “a 90% chance of the drug helping you.” They mean that, within this population, 90% of the cohort was observed to derive a benefit. This raises several problems. First, how were the benefits measured, and how satisfied are we that our instruments are reliable measures (see the OP, parenthesis, end of paragraph 1)? Secondly, is this sample representative of the greater population? This, by the problem of induction, can never be elevated beyond the realm of speculation. Thirdly, did we use critical thinking skills in applying this confidence interval? Is it justifiable? Fourthly, have we accounted for all the possible benefits and all the possible adversities? Fifthly, are we warranted in defining nausea as a concern? Did we trial the drug long enough to test whether this was a permanent effect or one that would subside? I could go on, and might, depending on your response.
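The gap between “90% of this sample benefited” and “you have a 90% chance of benefiting” can be made concrete: the sample figure is only a point estimate, with uncertainty that depends on the sample size. A minimal sketch, using a hypothetical trial of 200 participants (the 180/200 figures are illustrative, not from any actual study):

```python
import math

def normal_approx_ci(successes, n, z=1.96):
    """Approximate 95% confidence interval for a sample proportion,
    using the normal approximation (z = 1.96 for ~95% coverage)."""
    p_hat = successes / n
    se = math.sqrt(p_hat * (1 - p_hat) / n)
    return (p_hat - z * se, p_hat + z * se)

# Hypothetical: 180 of 200 participants observed to benefit (90%).
low, high = normal_approx_ci(180, 200)
print(f"Sample proportion: 0.90, approx. 95% CI: [{low:.3f}, {high:.3f}]")
```

Even granting the interval, it describes repeated sampling from the same population under the same measurement conditions; translating it into a personal probability for a new individual requires exactly the inductive assumptions at issue here.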