This doesn’t address the parallel problem I raised from quantum mechanics, the infinite regress of the measurement problem, and it sidesteps my point that an agent is still required to decide that a randomization procedure is warranted. Hence, control is an illusion.
The same issue arises with the notion of assignment. If assigning is happening, then the alleged randomness is compromised, if not destroyed, again defeating the purpose of control. And it does not help to say, “we use a non-biased assignment instrument,” for that does not get us out of the impossibility of assuring reliable instrumentation for measurement without infinite regress (similar to the part of the Copenhagen Interpretation of quantum mechanics that Niels Bohr called complementarity).
Again, it is an issue of forbidding any biases or persuasions from entering the assignment process. The process is controlled in the sense that it is protected against bias. The end result is a random (unpredictable) distribution of assignments.
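To make the mechanism being appealed to concrete, here is a minimal sketch (an assumed Python illustration, not anyone’s actual trial software) of an assignment procedure that consults nothing about the individual being assigned; whatever the assigner believes or prefers has no channel into the coin flip.

```python
# A minimal sketch (hypothetical) of bias-protected assignment: the
# mechanism looks at nothing about the participant, so no preference of
# the assigner can leak into the split.
import random

def randomize(participants, seed=None):
    """Assign each participant to 'treatment' or 'control' by coin flip."""
    rng = random.Random(seed)
    assignment = {}
    for person in participants:
        # The flip ignores every attribute of the person being assigned.
        assignment[person] = "treatment" if rng.random() < 0.5 else "control"
    return assignment

if __name__ == "__main__":
    groups = randomize([f"subject_{i}" for i in range(10)])
    for person, arm in groups.items():
        print(person, "->", arm)
```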
Any? How would you know? This is why the problem of induction is still a problem. You cannot control for or forbid biases or persuasions in experiments designed on the premises of the naturalistic fallacy and the uniformity-of-nature assumption. Instead, what you really become blinded to is the fact that the groups are not comparable.
Second, and to my mind more devastating to the validity of the DBRCGM than the above, is that it rests on the untestable and wildly conjectural assumption that groups are comparable. Individuals are radically unique and only superficially comparable; no two people are exactly alike, and the deeper we compare them the more contrasts we find. Groups are composed of these radically unique individuals, so group complexity rises exponentially with the size (the sample n) of the group. The assumption of the DBRCGM, however, is that the larger n is, the more assurance we can derive from our comparison test. Yet by my “complexity argument,” as n grows, not only does our hoped-for control over hidden and confounding variables decrease, but in all likelihood we increase, probably exponentially, the number of hidden and confounding variables present in what we are trying to control and measure: a kind of “herding cats” phenomenon (and one involving something similar to another aspect of the Copenhagen Interpretation, the hidden-variable dilemma, as well as something similar to Heisenberg’s uncertainty/indeterminacy principle).
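One way to put rough numbers on this “complexity argument” (my own gloss, with assumed figures, not data from any trial): even counting only the distinct person-to-person contrasts and the possible subgroups in which a hidden variable could lurk, the space one would have to control explodes as n grows.

```python
# A rough illustration (assumed numbers) of how fast the space of possible
# hidden interactions grows: pairwise contrasts grow quadratically with n,
# and the number of possible subgroups that could carry a confound grows
# as 2**n.
from math import comb

for n in (10, 50, 100, 500):
    pairwise = comb(n, 2)   # distinct person-to-person contrasts
    subgroups = 2 ** n      # possible subsets that could hide a confound
    print(f"n={n:4d}  pairwise contrasts={pairwise:7d}  "
          f"number of possible subsets has {len(str(subgroups))} digits")
```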
This one is a matter of the large group containing a sample of every general type that is also present in the control group. The idea is to make the group so complex that individual effects blur into a gray obscurity, a randomization of individuality and of any special effect. That is why a large group is necessary. Complexity plays in favor of blurring out any effect other than the one you are testing for.
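A toy simulation (in Python, with assumed numbers rather than data from any real study) of the blurring being claimed: each simulated person carries a large idiosyncratic effect, the treatment adds a small constant shift, and random assignment lets the idiosyncrasies wash out of the group means as n grows.

```python
# A toy simulation (assumed numbers): large individual differences, a small
# constant treatment effect, and coin-flip assignment. As n grows, the
# observed group difference approaches the treatment effect.
import random

def simulate(n, treatment_effect=1.0, seed=0):
    rng = random.Random(seed)
    treated, control = [], []
    for _ in range(n):
        idiosyncrasy = rng.gauss(0, 10)  # big individual differences
        if rng.random() < 0.5:
            treated.append(idiosyncrasy + treatment_effect)
        else:
            control.append(idiosyncrasy)
    return sum(treated) / len(treated) - sum(control) / len(control)

for n in (20, 200, 2000, 20000):
    print(f"n={n:6d}  observed group difference = {simulate(n):+.2f}")
```

With the assumed effect of 1.0, the observed difference settles near +1 as n increases, which is the sense in which complexity is said to favor the tested effect rather than drown it.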
To the contrary: you have no way of knowing whether you are dealing with hidden variables or other confounds. Complexity implies loss of control and necessitates absurd notions like “statistical significance” and misleading tools like “confidence intervals” to scientificate the data. You cannot control for the by-fiat nature of such notions or tools.