Control?: The Double Blind, Random Control Group Method

“What the vulgar call chance is nothing but a secret and concealed cause.”
-David Hume, A Treatise of Human Nature, Section XII, “Of the Probability of Causes”

It seems to me that the principal weakness of the double blind, random control group method (DBRCGM) is the random control group aspect of the method. The weakness, as I see it, has two parts.

First, the phrase “random assignment” seems contradictory. If “control” is the end, how can randomization be the means? Randomness implies no (or little) control. The same issue arises with the notion of assignment. If assigning is happening, then the alleged randomness is compromised, if not destroyed, again defeating the purpose of control. And it doesn’t help to say, “we use a non-biased assignment instrument,” for that does not get us out of the impossibility of assuring reliable instrumentation for measurement without infinite regress (similar to the part of the Copenhagen Interpretation of quantum mechanics that Niels Bohr called complementarity).

Second, and to my mind more devastating to the validity of the DBRCGM than the above, is that it rests on the untestable and wildly conjectural assumption that groups are comparable. Individuals are extremely unique and only superficially comparable; no two people are exactly alike, and the deeper we compare them, the more contrasts we find. And groups are composed of these extremely unique individuals. So, as a function of this, group complexity rises exponentially with the size (the sample n) of the group. But the assumption of the DBRCGM is that the larger n is, the more assurance we can derive from our comparison test. Yet as n grows larger, by my “complexity argument,” not only does our hoped-for control over hidden and confounding variables decrease, but in all likelihood we increase, again in all likelihood exponentially, the number of hidden and confounding variables present in what we are trying to control and measure–a type of “herding cats” phenomenon (and involving something similar to another aspect, yet again, of the Copenhagen Interpretation, the hidden variable dilemma; and also involving something similar to Heisenberg’s uncertainty/indeterminism principle).

Wow. I never realized that Hume had said anything intelligent, much less actually true.

Have you ever actually done a double-blind experiment? Or do I need to go through the process? You seem to be extrapolating the concept out of context.

My education and training (master’s degree level in a social science), experience in the (social services) field, and years of studying the philosophy of science and analyzing one DBRCGM study after another more than qualify me to understand the process. But thanks for the offer. Yes, I’m extrapolating, but that doesn’t diminish my critical capacities or otherwise delegitimize my critique. Furthermore, conducting a DBRCG study is not the only way to understand the process. And you’ll have to be more precise about which context you’re referring to for me to expand further and satisfactorily.

Well, good. In that case:

Actually, “random” merely means “without discernible pattern” or “unpredictable.”

If one removes all forces of discernible persuasion or bias, the result will be random. The “control” is merely the control of the forces of persuasion to keep them balanced or absent. The result is not what is being controlled, but rather any interference with it.

Again, it is an issue of forbidding any biases or persuasions regarding the assigning process. The process is controlled in the sense that it is protected against bias. The end result will be a random (unpredictable) distribution of assignments.
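To make that concrete, here is a minimal sketch of an unbiased assignment procedure (Python; the participant list and seed handling are hypothetical): the shuffle consults nothing about any individual, so the process is controlled against bias even though its output is unpredictable.

```python
import random

def assign_groups(participants, seed=None):
    """Randomly split participants into treatment and control.

    The "control" is over the procedure, not the outcome: the
    shuffle encodes no information about any participant, so no
    experimenter preference can leak into the assignments.
    """
    rng = random.Random(seed)
    shuffled = list(participants)
    rng.shuffle(shuffled)  # unpredictable order, no discernible pattern
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]  # (treatment, control)

treatment, control = assign_groups([f"subject_{i}" for i in range(100)])
```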

This one is an issue of the large group containing a sample of every general type that is also available in the control group. The idea is to make it so complex that the individual affects blur into a gray obscurity, a randomization of individuality or any special effect. That is why a large group is necessary. Complexity plays in favor of blurring out any effect other than the one you are testing for.
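A rough simulation of that “blurring” claim (all numbers invented: individual noise is large, the shared treatment effect small): as n grows, the idiosyncratic variation averages out and the common effect emerges.

```python
import random
import statistics

random.seed(0)

def group_mean(n, effect=0.5):
    # Each individual responds idiosyncratically (noise sd = 5),
    # but all share the same small true treatment effect.
    return statistics.mean(random.gauss(effect, 5) for _ in range(n))

for n in (4, 40, 400, 4000):
    estimate = group_mean(n, effect=0.5) - group_mean(n, effect=0.0)
    print(f"n={n:>4}: estimated effect = {estimate:+.2f} (true = +0.50)")
```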

This doesn’t address the parallel problem I brought up from quantum mechanics, the infinite regress of the measurement problem, and it sidesteps my point that an agent is still required to decide that a randomness procedure is warranted. Hence, control is an illusion.

Any? How would you know? This is why the problem of induction is still a problem. You can’t control for or forbid biases or persuasions in experiments designed on the premises of the naturalistic fallacy and uniformity of nature assumptions. Otherwise, what you really get blinded to is the fact that groups are not comparable.

To the contrary: you have no way of knowing whether you’re dealing with hidden variables or other confounds. Complexity implies loss of control and necessitates using absurd notions like “statistical significance” and misleading tools like “confidence intervals” to scientificate the data. You can’t control for the by-fiat nature of such notions or tools.

Landis, it seems to me you are critiquing inductive research for not being deductively perfect. But it’s not deductive. It is statistical. The results can always be outweighed by more data. It gives us more solid choices than throwing a dart at reality. It does much better than random guesses. Dow makes 40 chemicals, randomly. First they DBRCG test them on rats, then on humans. Some kill (it seems), some maim (it seems), others do nothing (it seems), and some seem to be beneficial, and so on.

Would you, if someone pointed a gun at you,

A) choose a chemical at random from the 40 and drink it, or

B) use the information from the research to help decide which one?

I can see little reason not to use the information.

Unlike deduction, this research cannot lead to proof, but it can give a statistical advantage.

The guy is saying here, as far as I can tell, that having a bigger sample size should create more error.

As far as I can tell, this literally means that he thinks that a study to test the efficacy of some drug would be more valuable if it were done on fewer people. E.g., he’d trust a study done on 4 people more than a study done on 1000, because of some nonsensical ‘complexity argument’. And he probably thinks a study done on 2 people is better than one done on 4.

I don’t think it’s complicated to understand why bigger sample size is better. I have a hard time believing this post was thought about much before it was posted.

No. A bigger sample size implies the probability of more hidden variables, not more error. It means we should retain scepticism about our “results,” including not framing them in terms of error versus correctness.

No. It means I don’t trust inferential statistical sampling procedures in general. They rely too much on Queteletian l’homme moyen (“average man”) assumptions and fallacies. Groups can’t take drugs. Only individuals can, and drug effects vary wildly from individual to individual. You can infer from DBRCG studies how much of the population can be expected to statistically react per study category, but you cannot infer who will react and in what ways. If you could rely on drug study data to address individual differences, there’d be no need to include side effect warnings, contraindications, etc. The best, for instance, that drug studies that rely on inferential statistical sampling can do is help individuals gamble about drug consumption choices better (which, with drugs, is particularly wearisome because of all the problems characters like Ben Goldacre, Peter R. Breggin, etc. have delineated exist with drug studies).

This is part of the problem. People assume it’s not complicated when it’s actually extremely complex. Instead of thinking critically, they believe. With all due respect, your derision is unwarranted. I’ve thought about this for years. I’m not saying sample size isn’t a relevant factor. I’m saying we have too much faith in inferential statistical sampling. As Hume rightly stressed, induction does not warrant such faith.

No. To do that I’d have to believe deduction is perfect, and I don’t.

Actually, what it “gives” us is better dart throwing technique.

Actually, it gives us confidence that our guessing is less random, but it is only confidence: we can neither inductively nor deductively know for sure, and knowing is what we want. We still have to assume the coin is fair, which we can neither prove deductively nor experimentally.
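As an aside, the coin point can be made precise: no finite run of flips proves fairness, but it can quantify how surprising the data would be if the coin were fair. A sketch using SciPy’s exact binomial test (assumes SciPy 1.7+; the flip counts are invented):

```python
from scipy.stats import binomtest

heads, flips = 61, 100
result = binomtest(heads, flips, p=0.5)  # H0: the coin is fair
print(f"p-value = {result.pvalue:.3f}")
# A small p-value means "this would be rare IF the coin were fair";
# it never proves fairness or unfairness, only quantifies surprise.
```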

“Makes chemicals” implies choice, and therefore, contradicts “randomly.”

Again, more choice making.

There’s no “random” with a gun to your head.

That’s…sorta…exactly the point of large sample sizes and double blind studies in the first place. That’s precisely why they’re done.

You may not have meant it this way, but this statement - that using statistics helps you make better ‘gambling’ choices - completely supports the entire point of doing double blind studies. You try to subtly deride it by using the word ‘gambling’, but you’re saying that using these statistics rather than not using them does, in fact, give you a better chance of making the right choice. Which is what the statistics are for. So…yes, they help people ‘gamble’ about their drug choices better…and that’s really the entire point of them in the first place.

Induction is the only thing that could possibly warrant thinking that something works. You can’t prove that a drug works by sitting in your arm chair thinking about it. That’s what statistics and studies are for. So, yes, induction does warrant, not ‘faith’ because that’s just your derogatory term to put down inductive beliefs, but…if a double blind study shows that drug x is significantly more effective than placebo on 90% of people, and 5% of people experience nausea with the drug, then yes, induction most definitely warrants thinking, “Hey, I’ve got a 90% chance of this drug helping me and a 5% chance of experiencing nausea”. I don’t see what the problem is with that.
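For what it’s worth, that “90%” would itself come with sampling uncertainty attached. A sketch of the standard Wilson score interval for an observed 90% response rate (study sizes invented), showing how much looser the same figure is in a small study:

```python
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

for n in (10, 100, 1000):
    lo, hi = wilson_interval(round(0.9 * n), n)
    print(f"n={n:>4}: observed 90%, 95% CI = ({lo:.2f}, {hi:.2f})")
```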

Essentially, it now just looks like Landis is saying ‘Statistical studies are bad because they don’t give us certainty; just because this drug worked for 90% of people doesn’t guarantee that it will work for me.’

He’s not pleased that there’s no guarantee. That’s his beef with statistics.

Statistics aren’t for guarantees. They help you increase your odds. Expected value. They make you a better gambler. That’s what they’re for, that’s what they do, you even seem to agree that that’s what they do, but that’s just not good enough for you.

Tough luck.
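The “better gambler” point reduces to an expected-value calculation. A sketch using the earlier hypothetical 40 chemicals (all payoffs, counts, and the 80% reliability figure invented for illustration):

```python
# Hypothetical: of 40 chemicals, suppose studies flag 10 as lethal,
# 20 as inert, and 10 as beneficial (numbers invented).
payoff = {"lethal": -100, "inert": 0, "beneficial": 10}

# Uninformed: uniform chance over all 40 chemicals.
ev_random = (10 * payoff["lethal"] + 20 * payoff["inert"]
             + 10 * payoff["beneficial"]) / 40

# Informed: choose among the 10 flagged beneficial; even if the
# studies are only 80% reliable (misses land on an inert one),
# the expected value improves sharply.
ev_informed = 0.8 * payoff["beneficial"] + 0.2 * payoff["inert"]

print(f"random choice EV:   {ev_random:+.1f}")    # -22.5
print(f"informed choice EV: {ev_informed:+.1f}")  # +8.0
```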

That’s a very imaginative interpretation of my comments, but it is quite inaccurate. My view is not an ethical judgement, so “bad” is an irrelevant locution to attribute to my perspective. Nor is my criticism that inferential statistics don’t give us certainty. Nothing gives us certainty. My concern is rather that statistical inferences tend to falsely comfort us with the illusion of control. “90%” is an abstraction derived from assumptions untestable outside its own methods (i.e., the problem of induction)–as one finds in the unjustifiable claim that the rules of the probability calculus are “axiomatic.”

I’m extremely pleased that there’s no guarantee, so that’s not my “beef” with statistics. Uncertainty keeps us from settling on assumptions. Remember, I’m concerned with the illusion of certainty, which the “control” of the DBRCGM implies. In other words, I’m concerned with the standard misinterpretations of the so-called efficacy implications of the DBRCGM, especially where they conflate probability with certainty. What I find unpleasant is the false hope people derive from statistical inferences when they don’t understand or forget that demarcations like “90%,” “statistical significance,” “confidence intervals,” etc. are “determined” by fiat and rely overly on generally unrelated and absurd notions like Quetelet’s l’homme moyen. When this is coupled with the way drug manufacturers, for instance, manipulate statistics, dangers like adverse reactions and contraindication ignorance are compounded.

To your credit, you have this part of my comments accurate, except for the “that’s just not good enough for you.” It’s not, and shouldn’t be, good enough for anyone. What’s more important: trusting my experience, which is a complex enough task on its own, or trusting the highly complex and super-abstract inductive procedures of DBRCG studies? Both are valuable, but the former should always trump the latter. Otherwise, individuality is sacrificed to “scientific authority” and critical thinking to prejudicial faith.

This is not a new problem. As Hume put it in Section XIV, “Of the Idea of Necessary Connexion” in his A Treatise of Human Nature:

Well, that’s great then.

Actually, it gives us confidence that our guessing is less random, but it is only confidence: we can neither inductively nor deductively know for sure, and knowing is what we want. We still have to assume the coin is fair, which we can neither prove deductively nor experimentally.
I suppose if you have perfect faith in your deduction, you might think this. If you mean that we can never know for sure that the increased confidence - which varies widely from individual to individual - is the right amount of increased confidence, sure, I agree. And I do think the medical industry, for example, overestimates its confidence. This hasn’t affected my confidence in it, however.

“Makes chemicals” implies choice, and therefore, contradicts “randomly.”

Again, more choice making.

Did you not understand? With a gun pointed at your head, and they will kill you if you do not choose, you could simply call out the 7th from the right, OR you could look at the research and then choose informed to some degree by the research. Me, for example, I would not drink any of the ones that killed a bunch of mice and some humans. I would not be happy to drink one of the ones that in DB testing did not kill any of the rodents or humans, but I would damn well use the information.

Your OP seems to imply that there would be no reason to use DB studies to make choices. The kind of skepticism that would argue that is one that I would guess most skeptics would give up once push came to shove. They would consider such research to improve the chances of making an informed choice. And that is all any careful empiricist would suggest one is doing. It is true, a lot of empiricists are not careful, especially when they stand to make money on interpretations, etc. But your OP makes it sound like we should give up such research. I mean, it COSTS money. If it does not improve over chance, we could buy some food to feed Africans - though, it is only via empirical research that we know people die without food also - and just choose what chemicals to ingest randomly. So much money would be saved or better spent elsewhere.

I doubt there is a single philosopher who thinks such studies give certainty. Any working doctor would also know this is not the case, given what happens when patients take drugs. Psychologists and sociologists would also be incredibly skeptical about certainty.

Now, many of these people likely overestimate how much confidence you can have, but yes, the illusion of certainty is an illusion, and it does exist.

I would also like to repeat that since you do not think deduction is perfect, and learning even from highly controlled experience is clearly not perfect, you seem to be presenting your own conclusions IN VERY CERTAIN TERMS.

What do you base your certainty on?

There are multiple reasons why DBRCG studies are conducted, and as is the case with drugs, they’re usually not done with the safety of individuals in mind. That’s mostly lip service. The final goal of drug studies is to manufacture profit, and one of the ways this goal is achieved is through scientificating the results with DBRCG “methodology.”

I agree that statistics are useful, but their utility is highly overrated.

Only? Is this indicative of your faith in induction? Induction itself is contrary to your “only” implication. Recall the first two rules of the probability calculus: Minimal Rule 1: no probability is less than zero; Minimal Rule 2: if A is a tautology, Pr(A) = 1. Your use of “only” equals “1” or “100%,” a violation of these rules and an ignorance or forgetfulness about why the problem of induction is a problem.
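For reference, those two rules in standard notation, with the usual additivity rule alongside (labels follow introductory presentations of the probability calculus):

```latex
\begin{align*}
&\text{Minimal Rule 1:} && \Pr(A) \ge 0\\
&\text{Minimal Rule 2:} && \Pr(A) = 1 \text{ if } A \text{ is a tautology}\\
&\text{Additivity:} && \Pr(A \lor B) = \Pr(A) + \Pr(B)
  \text{ if } A \text{ and } B \text{ are mutually exclusive}
\end{align*}
```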

And whether or not “something works” depends on exactly what that something is and what precisely is meant by “works.”

What does induction have to do with proof?

Statistical studies serve a variety of purposes, not just the ones you’re limiting them to.

What do you mean by “shows”?

“Inductive beliefs”? According to Bayesianism, they are more than beliefs. They are the “axioms” of the probability calculus.

Placebos are “good enough” for the time being, but they are not always a reliable methodological tool. We should not rely too heavily on our assumptions about them and rest on our laurels by continuing to assume that they provide reliable standards of comparison. As Guy P. Harrison puts it in his book Think: Why You Should Question Everything (setting aside for now the fact that he constantly violates this maxim in his book, especially when it comes to his scientific materialist, atheistic, ultra-sceptical assumptions):

Your 90%/5% breakdown is a good example of how DBRCG studies are misunderstood. The results do not mean you have “a 90% chance of the drug helping you.” They mean that with this population, 90% of the cohort was observed to derive a benefit. This raises several problems. First, how were the benefits measured, and how satisfied are we that our instruments are reliable measures (see the OP, parenthesis, end of paragraph 1)? Second, is this sample representative of the greater population? This, by the problem of induction, can never be elevated beyond the realm of speculation. Third, did we use critical thinking skills to apply this confidence interval? Is it justifiable? Fourth, have we accounted for all the possible benefits and all the possible adversities? Fifth, are we warranted in defining nausea as a concern? Did we trial the drug long enough to test whether or not this was a permanent effect, or one that would subside? I could go on, and might, depending on your response.

So your notion is that it is all just a conspiracy?

This is not necessary. But it is pragmatic.

I apologize. I could have expanded on that, but I wondered if it was not already too long. I think we should continue to try to improve such research and not fool ourselves, and especially not let the researchers fool us into believing that our methods are the best they’re ever going to get.

Repeat? Where did you initially state this? Anyway, part of the reason I framed my OP the way I did was for rhetorical effect and to stimulate discourse and a free exchange of ideas (which certainly seems to have worked), and, as I’ve tried to clarify with Flannel Jesus, nothing is certain (and I’m not even certain of that :wink: ). Still, I’m extremely sceptical about claiming that we have anything like “highly controlled experience”; rather, as William James put it, we have a world of “one great blooming, buzzing confusion.”

No. Is your notion that all conspiracies are fake?

You obviously have no idea who you are talking at. I know conspiracies really, really well. And that means that I also know when something is simply being presumptuously mistaken for one.

You seem to have a misunderstanding of why statistics are done the way they are done. I agree very much that they are used in deceptive ways, but you are not talking about one of those ways.

Please elaborate. Some of my favorite conspiracies from history are the assassination of Julius Caesar, the Gunpowder Plot, the execution of Charles I, Thomas Cromwell’s execution, the American Revolution, the French Revolution, the raid on Harpers Ferry, the assassination of Abraham Lincoln, the Bay of Pigs Invasion, the Gulf of Tonkin incident, Watergate, 9/11. Conspiracies are quite commonplace. An entire arm of most existing legal systems devotes a tremendous amount of resources to their investigation and prosecution. Even someone as ultra-sceptical as Guy P. Harrison acknowledges as much:

Which conspiracy would that be?

Could you be more specific?

I’ve cited only the well-documented abuses of pharmaceutical companies (and Flannel Jesus opened that door). There’s nothing secretive or conspiratorial about the widespread misconduct in that industry, or their manipulation and doctoring of research, including their misuse of statistics (at least not any more; and there never was “one big conspiracy,” as you put it, but there were several well-documented small conspiracies to mislead the public, with many more forthcoming). My point is that the flaws of the DBRCGM and the public’s unwarranted confidence in its processes make such “mischief by committee” abuses easier to execute and more likely to occur.

I agree with all of that, but when I tried to discuss an actual legitimate DBRCGM, somehow the communication just seems to break down. I am not interested in discussing the conspiracies in this thread, because you challenged the legitimacy of the real method itself even without any maliciousness. I am not seeing that the method itself is flawed. Statistics and the media merely allow for deception, and thus, of course, they go for it as fast as they can. But they always begin with something that, when done properly, is legitimate. Deceptions don’t work unless there is a lot of truth within them.