Control?: The Double Blind, Random Control Group Method

The guy is saying here, as far as I can tell, that having a bigger sample size should create more error.

As far as I can tell, this literally means that he thinks that a study to test the efficacy of some drug would be more valuable if it were done on fewer people. E.g., he’d trust a study done on 4 people more than a study done on 1000, because of some nonsensical ‘complexity argument’. And he probably thinks a study done on 2 people is better than one done on 4.

I don’t think it’s complicated to understand why bigger sample size is better. I have a hard time believing this post was thought about much before it was posted.
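For what it’s worth, the disputed claim about sample size and error can be checked with a short simulation (a sketch using only Python’s standard library; the 60% “true response rate” and the sample sizes are invented for illustration): draw repeated samples of each size from the same hypothetical population and measure how far the observed proportion lands from the true one.

```python
import random
import statistics

random.seed(42)

TRUE_RATE = 0.6  # hypothetical population response rate (invented figure)

def sample_error(n, trials=500):
    """Average absolute gap between the observed proportion and TRUE_RATE,
    over many repeated samples of size n."""
    gaps = []
    for _ in range(trials):
        responders = sum(random.random() < TRUE_RATE for _ in range(n))
        gaps.append(abs(responders / n - TRUE_RATE))
    return statistics.mean(gaps)

for n in (4, 100, 1000):
    print(f"n = {n:4d}: mean |error| = {sample_error(n):.4f}")
```

The measured error shrinks roughly in proportion to 1/√n as the sample grows, which is the usual statistical argument for larger samples.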

No. A bigger sample size implies the probability of more hidden variables, not more error. It means we should retain scepticism about our “results,” including not framing them in terms of error versus correctness.

No. It means I don’t trust inferential statistical sampling procedures in general. They rely too much on Queteletian l’homme moyen (“average man”) assumptions and fallacies. Groups can’t take drugs. Only individuals can, and drug effects vary wildly from individual to individual. You can infer from DBRCG studies how much of the population can be expected to statistically react per study category, but you cannot infer who will react and in what ways. If you could rely on drug study data to address individual differences, there’d be no need to include side effect warnings, contraindications, etc. The best, for instance, that drug studies relying on inferential statistical sampling can do is help individuals gamble about drug consumption choices better (which, with drugs, is particularly wearisome because of all the problems characters like Ben Goldacre, Peter R. Breggin, etc. have delineated exist with drug studies).

This is part of the problem. People assume it’s not complicated when it’s in actuality extremely complex. Instead of thinking critically, they believe. With all due respect, your derision is unwarranted. I’ve thought about this for years. I’m not saying sample size isn’t a relevant factor. I’m saying we have too much faith in inferential statistical sampling. As Hume rightly stressed, induction does not warrant such faith.

No. To do that I’d have to believe deduction is perfect and I don’t.

Actually, what it “gives” us is better dart throwing technique.

Actually, it gives us confidence that our guessing is less random, but it is only confidence: we can neither inductively nor deductively know for sure, and knowing is what we want. We still have to assume the coin is fair, which we can prove neither deductively nor experimentally.
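The coin point can be made concrete: experiments deliver only interval estimates of the bias, never proof of fairness. A minimal sketch (normal-approximation confidence interval; the flip counts are invented for illustration):

```python
import math

def fairness_interval(heads, flips, z=1.96):
    """Approximate 95% confidence interval for a coin's heads probability.
    Even a narrow interval expresses confidence, not certainty."""
    p = heads / flips
    half_width = z * math.sqrt(p * (1 - p) / flips)
    return (p - half_width, p + half_width)

low, high = fairness_interval(heads=5070, flips=10000)
print(f"estimated P(heads) lies in [{low:.3f}, {high:.3f}]")
```

Here 0.5 sits inside the interval, so the data are consistent with a fair coin, but equally consistent with a slight bias; no finite number of flips closes that gap.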

“Makes chemicals” implies choice, and therefore, contradicts “randomly.”

Again, more choice making.

There’s no “random” with a gun to your head.

That’s…sorta…exactly the point of large sample sizes and double blind studies in the first place. That’s precisely why they’re done.

You may not have meant it this way, but this statement - that using statistics helps you make better ‘gambling’ choices - completely supports the entire point of doing double blind studies. You try to subtly deride it by using the word ‘gambling’, but you’re saying that using these statistics rather than not using them does, in fact, give you a better chance of making the right choice. Which is what the statistics are for. So…yes, they help people ‘gamble’ about their drug choices better…and that’s really the entire point of them in the first place.

Induction is the only thing that could possibly warrant thinking that something works. You can’t prove that a drug works by sitting in your arm chair thinking about it. That’s what statistics and studies are for. So, yes, induction does warrant, not ‘faith’ because that’s just your derogatory term to put down inductive beliefs, but…if a double blind study shows that drug x is significantly more effective than placebo on 90% of people, and 5% of people experience nausea with the drug, then yes, induction most definitely warrants thinking, “Hey, I’ve got a 90% chance of this drug helping me and a 5% chance of experiencing nausea”. I don’t see what the problem is with that.

Essentially, it now just looks like Londis is saying ‘Statistical studies are bad because they don’t give us certainty; just because this drug worked for 90% of people doesn’t guarantee that it will work for me.’

He’s not pleased that there’s no guarantee. That’s his beef with statistics.

Statistics aren’t for guarantees. They help you increase your odds. Expected value. They make you a better gambler. That’s what they’re for, that’s what they do, you even seem to agree that that’s what they do, but that’s just not good enough for you.
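The “better gambler” point is just expected value. A hypothetical sketch (all probabilities and utilities are invented for illustration): pick the option whose probability-weighted payoff is higher, which improves average outcomes without guaranteeing any single one.

```python
def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one choice."""
    return sum(p * u for p, u in outcomes)

# Invented figures: 90% chance of benefit (+10), 5% chance of nausea (-3),
# 5% chance of no effect (0), versus doing nothing (utility 0 for sure).
take_drug = expected_utility([(0.90, 10), (0.05, -3), (0.05, 0)])
do_nothing = expected_utility([(1.00, 0)])

print(f"take drug:  {take_drug:+.2f}")
print(f"do nothing: {do_nothing:+.2f}")
```

Under these invented numbers the drug has the higher expected utility, which is exactly the sense in which statistics make one a “better gambler”: better on average, guaranteed in no particular case.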

Tough luck.

That’s a very imaginative interpretation of my comments, but it is quite inaccurate. My view is not an ethical judgement, so “bad” is an irrelevant locution to attribute to my perspective. Nor is my criticism that inferential statistics don’t give us certainty. Nothing gives us certainty. My concern is rather that statistical inferences tend to falsely comfort us with the illusion of control. “90%” is an abstraction derived from assumptions that cannot be tested outside the methods that produce them (i.e., the problem of induction), as one finds in the unjustifiable claim that the rules of the probability calculus are “axiomatic.”

I’m extremely pleased that there’s no guarantee, so that’s not my “beef” with statistics. Uncertainty keeps us from settling on assumptions. Remember, I’m concerned with the illusion of certainty, which the “control” of the DBRCGM implies. In other words, I’m concerned with the standard misinterpretations of the so-called efficacy implications of the DBRCGM, especially where they conflate probability with certainty. What I find unpleasant is the false hope people derive from statistical inferences when they don’t understand, or forget, that demarcations like “90%,” “statistical significance,” “confidence intervals,” etc. are “determined” by fiat and rely too heavily on generally unrelated and absurd notions like Quetelet’s l’homme moyen. When this is coupled with the way drug manufacturers, for instance, manipulate statistics, dangers like adverse reactions and ignorance of contraindications are compounded.

To your credit, you have this part of my comments accurate, except for the “that’s just not good enough for you.” It’s not, and it shouldn’t be good enough for anyone. What’s more important: trusting my experience, which is a complex enough task on its own, or trusting the highly complex and super-abstract inductive procedures of DBRCG studies? Both are valuable, but the former should always trump the latter. Otherwise, individuality is sacrificed to “scientific authority” and critical thinking to prejudicial faith.

This is not a new problem. As Hume put it in Section XIV, “Of the Idea of Necessary Connexion” in his A Treatise of Human Nature:

Well, that’s great then.

Actually, it gives us confidence that our guessing is less random, but it is only confidence: we can neither inductively nor deductively know for sure, and knowing is what we want. We still have to assume the coin is fair, which we can prove neither deductively nor experimentally.
I suppose if you have perfect faith in your deduction, you might think this. If you mean that we can never know for sure that the increased confidence - which varies widely from individual to individual - is the right amount of increased confidence, sure, I agree. And I do think the medical industry, for example, overestimates its confidence. This hasn’t affected my confidence in it, however.

“Makes chemicals” implies choice, and therefore, contradicts “randomly.”

Again, more choice making.

Did you not understand? With a gun pointed at your head, and they will kill you if you do not choose, you could simply call out the seventh from the right, or you could look at the research and then choose, informed to some degree by the research. Me, for example, I would not drink any of the ones that killed a bunch of mice and some humans. I would not be happy to drink one of the ones that in DB testing did not kill any of the rodents or humans, but I would damn well use the information.

Your OP seems to imply that there would be no reason to use DB studies to make choices. The kind of skepticism that would argue that is one that I would guess most skeptics would give up once push came to shove. They would consider such research to improve the chances of making an informed choice. And that is all any careful empiricist would suggest one is doing. It is true, a lot of empiricists are not careful, especially when they stand to make money on interpretations, etc. But your OP makes it sound like we should give up such research. I mean, it costs money. If it does not improve over chance, we could buy some food to feed Africans instead (though it is only via empirical research that we know people die without food, also) and just choose what chemicals to ingest randomly. So much money would be saved or better spent elsewhere.

I doubt there is a single philosopher who thinks such studies give certainty. Any working doctor would also know this is not the case, given what happens when patients take drugs. Psychologists and sociologists would also be incredibly skeptical about certainty.

Now, many of these people likely overestimate how much confidence you can have, but the illusion of certainty, to the extent it exists, is, yes, an illusion.

I would also like to repeat that since you do not think deduction is perfect, and learning even from highly controlled experience is clearly not perfect either, you seem to be presenting your own conclusions IN VERY CERTAIN TERMS.

What do you base your certainty on?

There are multiple reasons why DBRCG studies are conducted, and as is the case with drugs, they’re usually not done with the safety of individuals in mind. That’s mostly lip service. The final goal of drug studies is to manufacture profit, and one of the ways this goal is achieved is through scientificating the results with DBRCG “methodology.”

I agree that statistics are useful, but their utility is highly overrated.

Only? Is this indicative of your faith in induction? Induction itself is contrary to your “only” implication. Recall the first two rules of the probability calculus: Minimal Rule 1: No probability is less than zero; Minimal Rule 2: If A is a tautology, Pr(A)=1. Your use of “only” equals “1” or “100%,” a violation of these rules and an ignorance or forgetfulness about why the problem of induction is a problem.
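The two minimal rules named here can be illustrated mechanically for a finite sample space (a sketch only; the whole sample space stands in for a tautologous event, and the example distributions are invented):

```python
import math

def satisfies_minimal_rules(dist):
    """dist maps each outcome of a finite sample space to a probability.
    Rule 1: no probability is less than zero.
    Rule 2: the sure event (the whole space) gets probability 1."""
    rule1 = all(p >= 0 for p in dist.values())
    rule2 = math.isclose(sum(dist.values()), 1.0)
    return rule1 and rule2

print(satisfies_minimal_rules({"heads": 0.5, "tails": 0.5}))  # a fair coin
print(satisfies_minimal_rules({"works": 1.0}))  # "only" read as certainty
```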

And whether or not “something works” depends on exactly what that something is and what precisely is meant by “works.”

What does induction have to do with proof?

Statistical studies serve a variety of purposes, not just the ones you’re limiting them to.

What do you mean by “shows”?

“Inductive beliefs”? According to Bayesianism, they are more than beliefs. They are the “axioms” of the probability calculus.

Placebos are “good enough” for the time being, but they are not always a reliable methodological tool. We should not rely too heavily on our assumptions about them and rest on our laurels by continuing to assume that they provide reliable standards of comparison. As Guy P. Harrison puts it in his book Think: Why You Should Question Everything (setting aside for now the fact that he constantly violates this maxim in his book, especially when it comes to his scientific materialist, atheistic, ultra-sceptical assumptions):

Your 90%/5% breakdown is a good example of how DBRCG studies are misunderstood. The results do not mean you have “a 90% chance of the drug helping you.” They mean that within this population, 90% of the participants were observed to derive a benefit. This raises several problems. First, how were the benefits measured, and how satisfied are we that our instruments are reliable measures (see the OP, the parenthetical at the end of paragraph 1)? Secondly, is this sample representative of the greater population? This, by the problem of induction, can never be elevated beyond the realm of speculation. Thirdly, did we use critical thinking skills in applying this confidence interval? Is it justifiable? Fourthly, have we accounted for all the possible benefits and all the possible adversities? Fifthly, are we warranted in defining nausea as a concern? Did we trial the drug long enough to test whether this was a permanent effect or one that would subside? I could go on, and might, depending on your response.
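The second of these concerns can be made concrete: the observed “90%” is itself a sample statistic, and re-running the same hypothetical trial yields a spread of observed rates rather than a fixed fact (a sketch; the 90% population rate and the trial size of 200 are invented for illustration):

```python
import random
import statistics

random.seed(0)

TRUE_RESPONSE = 0.9  # hypothetical population response rate (invented)
N = 200              # hypothetical trial size (invented)

# Simulate re-running the identical trial 1000 times and record
# the response rate each repetition would have reported.
observed = [
    sum(random.random() < TRUE_RESPONSE for _ in range(N)) / N
    for _ in range(1000)
]
print(f"observed response rates: min = {min(observed):.2f}, "
      f"max = {max(observed):.2f}, sd = {statistics.pstdev(observed):.3f}")
```

And even where the estimate is stable, it remains a statement about the cohort as a whole, not a prediction for any particular individual.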

So your notion is that it is all just a conspiracy?

This is not necessary. But it is pragmatic.

I apologize. I could have expanded on that, but I wondered if it was not already too long. I think we should continue to try to improve such research and not fool ourselves, and especially not let the researchers fool us, into believing that our methods are the best they’re ever going to get.

Repeat? Where did you initially state this? Anyway, part of the reason I framed my OP the way I did was for rhetorical effect and to stimulate discourse and a free exchange of ideas (which certainly seems to have worked), and, as I’ve tried to clarify with Flannel Jesus, nothing is certain (and I’m not even certain of that :wink: ). Still, I’m extremely sceptical about claiming that we have anything like “highly controlled experience” but rather, as William James put it, a world of “one great blooming, buzzing confusion.”

No. Is your notion that all conspiracies are fake?

You obviously have no idea who you are talking at. I know conspiracies really, really well. And that means that I also know when something is simply being presumptuously mistaken for one.

You seem to have a misunderstanding of why statistics are done the way they are done. I agree very much that they are used in deceptive ways, but you are not talking about one of those ways.

Please elaborate. Some of my favorite conspiracies from history are the assassination of Julius Caesar, the Gunpowder Plot, the execution of Charles I, Thomas Cromwell’s execution, the American Revolution, the French Revolution, the raid on Harpers Ferry, the assassination of Abraham Lincoln, the Bay of Pigs Invasion, the Gulf of Tonkin Incident, Watergate, 9/11. Conspiracies are quite commonplace. An entire arm of most existing legal systems devotes a tremendous amount of resources to their investigation and prosecution. Even someone as ultra-sceptical as Guy P. Harrison acknowledges as much:

Which conspiracy would that be?

Could you be more specific?

I’ve cited only the well documented abuses of pharmaceutical companies (and Flannel Jesus opened that door). There’s nothing secretive or conspiratorial about the widespread misconduct in that industry (at least not any more; and there never was “one big conspiracy,” as you put it, but several well documented small conspiracies to mislead the public, with many more forthcoming), including their manipulation and doctoring of research and their misuse of statistics. My point is that the flaws of the DBRCGM and the public’s unwarranted confidence in its processes make such “mischief by committee” abuses easier to execute and more likely to occur.

I agree with all of that, but when I tried to discuss an actual legitimate DBRCGM, somehow the communication just seems to break down. I am not interested in discussing the conspiracies on this thread, because you challenged the legitimacy of the real method itself even without any maliciousness. I am not seeing that the method itself is flawed. Statistics and the media merely allow for deception, and thus, of course, they go for it as fast as they can. But they always begin with something that, when done properly, is legitimate. Deceptions don’t work unless there is a lot of truth within them.


Source: Randomized Controlled Trials (RCTs): A Flawed Gold Standard

Consider this also: should those with a profit motive even be conducting studies on their own products? If the DBRCGM were as methodologically sound as most assume, this should not be an issue. See, for instance, Scope and Impact of Financial Conflicts of Interest in Biomedical Research: A Systematic Review, and How Well Do Meta-Analyses Disclose Conflicts of Interests in Underlying Research Studies.

See also:
The Importance of Beta, the Type II Error and Sample Size in the Design and Interpretation of the Randomized Control Trial — Survey of 71 Negative Trials
Reporting of sample size calculation in randomised controlled trials: review
Sample Size Calculations for Randomized Controlled Trials
Statins have no side effects? What our study really found, its fixable flaws, and why trials transparency matters (again).
Why we need observational studies to evaluate the effectiveness of health care.
Limitations of the Randomized Controlled Trial in Evaluating Population-Based Health Interventions
External validity of randomised controlled trials: “To whom do the results of this trial apply?”
Eligibility Criteria of Randomized Controlled Trials Published in High-Impact General Medical Journals: A Systematic Sampling Review
Special Article: A Comparison of Observational Studies and Randomized, Controlled Trials
Comparison of Evidence of Treatment Effects in Randomized and Nonrandomized Studies
Observational Research, Randomised Trials, and Two Views of Medical Science
When are randomised trials unnecessary? Picking signal from noise
Journal of Medical Case Reports: Case report on trial: Do you, Doctor, swear to tell the truth, the whole truth and nothing but the truth?
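Several of the references above turn on power (beta, the Type II error) and sample size. The standard calculation they discuss, for comparing two proportions, can be sketched as follows (normal approximation, equal group sizes; the 50%-versus-60% effect size is an invented example):

```python
import math
from statistics import NormalDist

def two_proportion_sample_size(p1, p2, alpha=0.05, power=0.80):
    """Approximate per-group sample size needed to detect a difference
    between proportions p1 and p2 at two-sided significance alpha with
    the given power (normal approximation)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)

# Hypothetical: detect an improvement from a 50% to a 60% response rate.
print(two_proportion_sample_size(0.50, 0.60))
```

With these invented inputs the requirement comes to several hundred participants per group; trials run far below such thresholds are the underpowered “negative trials” the first reference above surveys.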

Both of these quotes do NOT say that double blind studies are inherently incorrect in any way, at all. They do not say the things that you suggest in your writings; they do not say anything confirming your ill-conceived notion that a larger sample size makes the study ‘more complex’ and thus less effective.

These both say that being double-blind is not enough. That the current way of performing studies is incomplete. Not that it is wrong. Not that big sample sizes make the study worse by adding ‘complexity’. Not that ‘randomization contradicts control’. None of your points are here supported by these quotes, and you will have a hard time finding quotes that do because it’s nonsense. Your ‘complexity argument’ is a misunderstanding. Your ‘random assignment contradicts control’ argument is a misunderstanding.

You have an apparent beef with double blind studies. They also have an apparent beef with it. But that’s where the similarity between your ideas and theirs ends. They do not support your arguments.

That’s an amusing link: it suggests LARGER sample sizes. It suggests that the studies that it’s criticising failed because they were using too small sample sizes.

I don’t think you’re in your element here.

Again, you’re attributing points to me I’ve never made. The locution “incorrect” does not apply to my critique. These quotes support my point that DBRCG studies do not account for the extreme complexities of individuals, or for the compounded complexities of groups comprised of such individuals; that these complexities increase exponentially with sample size; and that the Queteletian/Bayesian fallacies the statistical inferences are allegedly designed to say something meaningful about persist even though the parameters are set by fiat. Instead of addressing these points on their merits, which you haven’t even begun to do, you just keep repeating your belief in the validity of your assumptions and those of the DBRCG method.

If you’d been actually reading what I’ve said instead of imputing to me things you’ve imagined I’ve implied, you’d know that part of what I’m saying is exactly this. See my posts to Moreno.

This is exactly what I’m saying and I’ve given specific reasons for it which you’ve almost completely ignored. You’ve just restated a few of my comments out of context and repeated ad nauseam, without substance, that this violates your faith in the method. For example:

Again, another locution attributed to me that I never used.

“Worse” is another misattributed locution. I’m saying it creates the illusion of more control than there likely is. “Worse” or “better” are normative judgements for individuals to make through their individual experience with drugs.

When you’ve exhausted your supply of synonymous ways to restate that you disagree with me, please provide a detailed, specific and substantive critique of the content of my views. Other participants in this thread have been able to do so. It would be nice if you would too.

Again, my “beef” is not with double blind studies. They have their usefulness, which you’d know if you’d paid more careful attention to my comments. My beef is with the strength most people seem to impute to their significance, especially when they’re cited in media like advertisements or the popular press, linked to profit motive, or disconnected from underlying unexamined assumptions about scientific “progress.”

I apologize if I gave the impression that I was citing documents that only support my critique. My intention was to provide a survey of the problems. And there are plenty of references that support my critique. Have you read all of them?

With all due respect, I was wondering the same thing about you not being in your element.

I’ve compiled some basic philosophy of science literature, not only to help you understand “my element” but also to help you expand yours. I’d suggest you read through these before you make any more unfounded allegations about my perspective (again, this is a survey, not citations meant only to support my analysis).

Feyerabend’s “Science and Myth” excerpt from his Against Method

Schick’s introduction to Induction and Confirmation, Hume’s passage about “the problem of induction” in his Enquiry, and Hempel’s, “The Role of Induction…”

Popper’s “The Role of Induction” (pdf pp. 9-13) from Conjectures & Refutations

Duhem’s “Physical Theory and Experiment”

Kuhn’s “Logic of Discovery or Psychology of Research?”

Lakatos’ “Falsification and the Methodology of Research Programmes”

Laudan’s “A Problem Solving Approach to Scientific Progress”

Lipton’s “Inference to the Best Explanation”

Kuhn’s views on observation from his Structures

Hesse’s “Is There an Independent Observation Language?”

Hempel’s “Laws and Their Role in Scientific Explanation”

Considering the huge backlog of things I already have to read, I don’t really have time to take suggestions on science literature from someone who lacks understanding to the degree that you do.

Choice quotes that show your level, curated by me:

And then to top it off you end with some nonsense about quantum physics!

I will need to be really bored before I’m interested in working my way through your reading list.