(Irrelevant?) Correlations...

That is an issue of the number of correlated changes. Two totally independent things can be exactly identical if neither ever changes. If there is one correlated change over a long period of time, devoid of any other change, there is a slightly higher probability of a causal relationship. If there are two, it is higher still. A greater number of correlated changes over a greater length of time increases the probability of a causal relationship.

In each of the provided examples, there is a gradual swaying up and down of both sets of data. Because the subject matter is seemingly unrelated, there is an indication that the data-gathering method might be responsible for such correlated swaying.

The overall trend in each sample is less likely to be a sampling-error anomaly, and thus a distant “third variable” or “hidden variable” is far more likely to be the correlating cause. Because each case involves a complex system of human behaviors and interactions, there could be several out of thousands of possible correlating causes.

In two of the samples, there are strong simultaneous changes (assuming that data hasn’t been left out merely to give such an appearance). But there were not many such simultaneous changes. So a hidden variable is a definite possibility, but without more opportunity for changes, the correlations could still be merely coincidental and seem interesting only because of the short range, scope, and chosen data points being presented. Perhaps one of the data samples is changing very much faster than the other. By choosing to use only a few of the data points of the faster-changing sample, one can make the diagram of both samples appear directly correlated. But then, that would constitute the causal factor of the sample correlation - “contrivance”.
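As a rough sketch of that cherry-picking move (using made-up series, not the OP’s actual data), here is how sampling only the “agreeable” points of a fast-changing series can manufacture an apparent correlation with a slow-changing one:

```python
import numpy as np

rng = np.random.default_rng(0)

t = np.arange(1000)
slow = np.sin(t / 200.0)                                   # slowly drifting series
fast = np.sin(t / 5.0) + 0.1 * rng.standard_normal(1000)   # fast oscillation

# Over the full data, the two series are essentially uncorrelated.
print(np.corrcoef(slow, fast)[0, 1])

# Cherry-pick only the points where the fast series happens to agree
# with the slow one, and a "correlation" appears.
picks = np.sort(np.argsort(np.abs(fast - slow))[:20])
print(np.corrcoef(slow[picks], fast[picks])[0, 1])         # now close to 1
```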

This reminds me of an infinite regress of relationship contrivance, based on extemporizing possible hidden variables at whatever rates of change. But even if contrived, what relevance can such coincidental co-relation have? According to some who believe in such ‘hiddenness’, coincidence has a definite use and value, but they are arguing from the other end of epistemology.

Finally, can such ‘hiddenness’ of variables become the effect, rather than the cause, of such contrivance, giving it validity? This argument, tenuous as it is on its face, deserves some attention.

Come off it, James. However incompletely it captures the spirit of Occam’s Razor, FJ’s translation of the idea into statistical language clearly evinces an understanding of what the idea is “actually about”. I agree there’s more to Occam’s Razor, but so does FJ: he offers only the “beginnings of understanding”, not the whole picture.

As I understand it, the point FJ makes is extended by seeing that Occam’s Razor is the suggestion that ‘A and B’ is less likely than ‘C’, for arbitrary A, B, and C. The case where A = C is an instance of that principle.

Carleas, Seriously? … Again??
He didn’t offer “the beginnings of understanding”. He offered complicated further misunderstanding.

The classic example of the use of Occam’s Razor was in the decision between the geocentric model and the heliocentric model of the universe. It is about which of two competing but otherwise equally realistic ontological models to choose. The rule of thumb was to choose the simpler model until more evidence is acquired.

Occam’s Razor, as I said earlier, doesn’t really have anything to do with this case. He was talking about a possible causal relation between the use of margarine and divorces. He started off by stating that the probability of any causal relation was extremely low. That is an axiomatic presumption based on ignorance of any relationship. And logically it constitutes an “argument from ignorance” fallacy.

“Because I have no evidence, I know that the probability of truth is low.” Typically, he doesn’t look for evidence because he already “knows” the probability of truth is low. And then because there is no evidence, it “should be taken as the simpler truth model and accepted as true, without further investigation while we crunch the numbers.” It is a ridiculous proposition.

But since he wanted to use probability numbers, in his biased manner, I suggested that if you are going to begin an honest inquiry, you have to give an equal possibility of truth to both sides of a contest. You, of everyone around here, should certainly know that. That would mean that instead of saying that the possibility of correlation begins at “0.00001”, it should begin at “0.5” both for and against. From that point, you can begin to add evidence and change the odds.

And if you want to throw Occam into it, then the simpler truth model, making the fewest assumptions, is certainly the one that doesn’t propose that either side of the contest has the upper hand. The simplest model is that they are equally probable, not requiring any presumption that there is any reason for them to not be equal.

But;
1) Occam’s Razor doesn’t really have anything to do with the proposition. OR is about selecting ontological models.
2) When conjoining propositions, you multiply the probabilities: for independent A and B, “A and B” yields “P(A) * P(B)” (see the sketch after this list).
3) You can’t begin with any probability assessment until you GATHER DATA FIRST, else you are merely expressing a prejudiced perspective and nothing more.
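As a quick check of point 2 - a minimal sketch with made-up probabilities, assuming A and B are independent:

```python
import random

random.seed(1)

# For independent propositions, P(A and B) = P(A) * P(B).
p_a, p_b = 0.5, 0.5                       # made-up example probabilities
print("product:", p_a * p_b)              # 0.25 - below either conjunct

# Monte Carlo check with independent "coin flips".
trials = 100_000
hits = sum(random.random() < p_a and random.random() < p_b
           for _ in range(trials))
print("simulated:", hits / trials)        # ~0.25
```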

He was proposing an extreme case of a biased, prejudiced, unscientific assessment. And here you are defending that just because he put some numbers on the screen??? Seriously??

Judge at the beginning of a trial: “I am not aware of any association between the defendant and the victim concerning this case, so I am going to disallow coincidental evidence presented by the prosecution as it might prejudice the jury.”

Precisely.

Very recently I tried to explain the same thing to one of our learned members in another thread, but he seems to be much influenced by his a priori opinion of the issue. That is where it goes wrong.

It is not necessary that one’s a priori knowledge should be consistent with all the others too. If it is so, it can be taken as an initial truth. But, if it is not, it has no value in the eyes of logic. If the first person wants it to be included as a truth, he has to convince the judge that his opinion is right by putting forward evidence.

Merely saying that I have some a priori knowledge of the case is not sufficient. One must be able to prove it to a person who is not familiar with either party to the case: third-party verification.

with love,
sanjay

James has nailed it. :text-+1:

James, FJ’s point, as I understood it, was that it produces an absurdity to assume that every proposition is 50-50 from the get-go. To show this, he showed how if the likelihood of the proposition ‘A’ is .5, and the likelihood of the proposition ‘B’ is .5, then the likelihood of the proposition ‘C’, where C = ‘A&B’, cannot be .5.

He then (tangentially, as I read it) mentioned that this same principle is related to Occam’s Razor, and I agree. Once we see that the likelihood of ‘A&B’ will always be less than or equal to the likelihood of either ‘A’ or ‘B’, we can see why the likelihood of ‘A&B’ is less than the likelihood of ‘C’. And the latter statement, i.e. P(A&B) <= P(C), is acknowledged as a justification for Occam’s Razor on the very Wikipedia page you gratuitously copy/paste at me like I don’t know what Occam’s Razor is:
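For what it’s worth, the inequality itself is easy to verify by brute force over a toy sample space (the outcomes and weights below are invented purely for illustration): whatever the dependence between A and B, P(A&B) never exceeds either conjunct.

```python
import itertools

outcomes = list(itertools.product([0, 1], repeat=2))   # (a, b) truth values
weights = [0.4, 0.1, 0.2, 0.3]                         # arbitrary distribution

p_a = sum(w for (a, b), w in zip(outcomes, weights) if a)
p_b = sum(w for (a, b), w in zip(outcomes, weights) if b)
p_ab = sum(w for (a, b), w in zip(outcomes, weights) if a and b)

print(p_a, p_b, p_ab)          # 0.5 0.4 0.3
assert p_ab <= min(p_a, p_b)   # holds for any choice of weights
```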

All this is not to say that selecting the baseline probability in a Bayesian analysis is easy or completely determined, but the suggestion that our response to the uncertainty should be to treat all propositions as equally likely is clearly not the answer. Not knowing the exact probability is not the same as not being able to estimate reasonable values. Your argument is similar to, and as incorrect as, arguments that Relativistic physics isn’t more correct than Newtonian physics because we know that Relativistic models are incomplete. Knowing that we don’t know everything does not entail that we know nothing, and knowing that we can’t peg a probability precisely doesn’t mean we can’t narrow it down to a better guess than 50-50.
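And to be concrete about what updating on evidence means here (all numbers invented for illustration, not FJ’s actual figures): a low prior is not a refusal to look, because the evidence moves it.

```python
def update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H | E) via Bayes' theorem."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1.0 - prior)
    return p_e_given_h * prior / p_e

prior = 1e-5   # a deliberately low starting point, as in FJ's example
posterior = update(prior, p_e_given_h=0.9, p_e_given_not_h=0.01)
print(posterior)   # ~9e-4: the evidence raised it ~90x, yet it stays far from 0.5
```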

This is FJ’s example:

He sees the graph of the correlation, he does some reasonable calculations and he comes to this conclusion:

But his first assumption, that the probability of a causal relationship was only .00001, almost completely determined the result he would get. He didn’t calculate anything useful, although it looked very methodical and scientific.

The only way that you will figure out whether there is a causal relationship, is to deeply investigate the processes involved with an open mind. An open mind would consider any statement as having a 50% chance of being true, at least initially.

Occam’s razor is used to choose between equally valid theories. Based on the data, both theories are equally probable. Occam suggests choosing based on economy.

Carleas, there are many things that were wrong with FJ’s argument, which is why I was willing to leave it with a simple statement of concern, but it is hard to believe that out of all that was wrong with it, you have chosen to advocate prejudice and juris-imprudence. :confused:

A black man appears in court. The judge knows that 60% of the time a black man shows up in court, he is convicted, guilty. So the judge places the “burden of proof” higher on the black man. And if the man doesn’t provide enough extra evidence in his favor to make up for the statistics, then his case will add to the statistics concerning black men being convicted.

So after a few years, the statistics concerning the guilt of black men in court becomes, not the 60% as before, but now 70%, further increasing the required burden of proof. And after a few more years, it becomes 80%, then 90%, eventually 100%. So eventually there really is no need for a court at all because statistically 100% of the time, if a black man is accused in court, it’s been proven that he is guilty, so why waste “the tax payers dollars” on a trial? 8-[

And you actually fell for that? :icon-rolleyes:

In that part of his misreport and proposed dysnomy, he conflates 3 propositions, something that isn’t even related to the OP at all. He states that we already know that A has a certain probability, we already know that B has a certain probability, and that we can conjoin the two already-knowns to form a new proposition and treat it as if it is an unknown with the same probability as the first two. The OP has nothing at all to do with conjoining propositions unless you are questioning the data sets offered in the OP.

The OP also has nothing to do with comparing equally plausible ontological models (Occam’s Razor).

And then even further, look at his statements;

And then he proposes that his belief in a direct causal relationship is reflected by giving the probability of it being accurate;

So what he is saying is that any time he believes that there is a direct correlation, we can already presume that the probability of him being right is 1/100000.

The OP is saying, “look how interesting it is that there are these correlations concerning things that we wouldn’t normally think had any relationship. Perhaps they do”.

If you are going to assign a probability of there being a relationship, why bother? The OP intentionally presents data sets that are supposed to be superficially low probability cases. So by putting numbers on it, you merely support the fact that the OP was right, “at first glance, these correlations wouldn’t be expected”.

FJ’s argument then states that BECAUSE a mindless, superficial examination (none at all) yields the thought that there is no relationship, we should accept that mindless conclusion as strong, very strong, evidence that there is no relationship at all. He is proposing that we give mindlessness a much higher authority in decision making (literally 100,000 to 1 higher). He is proposing that if an ignorant person at first believes or doesn’t believe something, it should require extreme proof to alter his ignorance; empowering mindlessness, anti-philosophy (aka religiosity = “don’t think, just believe/disbelieve”).

That was a number I gave as an example number. Your straw man here is absurd. It’s as if I said ‘I want to talk about my idea about prime numbers; let’s take 7 for example’ and you said ‘Ha! Look at this fool! He thinks 7 is the only prime number!’

I was making a point about probabilities and updating them based on evidence (the evidence of statistical correlation in this case); my example probability was 1/100,000 and you respond with this absurd and completely transparent strawman.

I already proved that that’s mathematically impossible. Here, something simpler, let’s take these 3 statements:

I’m wearing a hat, and I’m not wearing a shirt, and I’m not wearing shoes.
I’m not wearing a hat, and I’m wearing a shirt, and I’m not wearing shoes.
I’m not wearing a hat, and I’m not wearing a shirt, and I’m wearing shoes.

They can’t all have a probability of 50%. They’re all mutually exclusive; a set of mutually exclusive probabilities has to sum to 1 or less. If they’re all 50%, they sum to 1.5. There’s a 150% probability that one of those statements is true?
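You can check the mutual exclusivity mechanically by enumerating all eight hat/shirt/shoes combinations (a minimal sketch; the statements are the three above):

```python
from itertools import product

statements = [
    lambda h, s, sh: h and not s and not sh,
    lambda h, s, sh: not h and s and not sh,
    lambda h, s, sh: not h and not s and sh,
]

# No outfit satisfies more than one statement, so the three events are
# mutually exclusive and their probabilities must sum to at most 1.
for outfit in product([False, True], repeat=3):
    assert sum(f(*outfit) for f in statements) <= 1

print(0.5 + 0.5 + 0.5)   # 1.5 > 1, so they can't all be 50%
```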

You were talking about presuming a certainty level of 0.99999 before you even looked for evidence, and then stated that any evidence wouldn’t be sufficient to change your mind.

I have found in the past that you are right. No evidence changes your mind.

You’ve not corrected the strawman yet.

Once again, you misunderstand the OP suggestion.

The OP is saying;
“Here is evidence of a possible relationship”.
True or False?

It is NOT saying;
“If we add this proposal to that proposal to another proposal, we might have a relationship.”

…and there was no “strawman”.

What I’m saying is ‘not all statements have a probability of 50%; that is mathematically impossible.’

And I explained very clearly why it was a strawman. I used a number as an example to show how a particular probabilistic calculation works; your post assumes I used that number as a universal constant for all situations.

YOUR “examples” are the strawmen.

YOU are providing cases that do not fit the OP suggestion. The OP is asking a “true-false” question. A true-false question has a 50/50 chance until data is examined.

“Is X true?” - 50/50 chance until you examine what X is.

No, not all true/false questions start out with 50/50 probability.

example:
I state:
A dwarf just broke into your house and stole your keys: 50/50?

You respond, true to form, ‘Yes, 50/50’.

So we go downstairs to your kitchen where you keep your keyhook that holds your keys, and we find your keys missing.

So…now it’s more than 50/50? It’s more than 50% likely that a dwarf stole your keys? Out of ALL the possibilities to explain how your keys are missing, just because of the fact that I mentioned that as a hypothesis, it is now more-than-a-coin-flip likely?
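Run the dwarf hypothesis through Bayes’ theorem with some invented-for-illustration numbers and you see why the missing keys barely move it:

```python
# H: "a dwarf broke in and stole the keys".  E: "the keys are missing".
# All figures below are made up purely to illustrate the shape of the update.
prior = 1e-7                # dwarf burglary is antecedently very rare
p_e_given_h = 1.0           # if H were true, the keys would surely be gone
p_e_given_not_h = 0.01      # but keys go missing for mundane reasons too

p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
posterior = p_e_given_h * prior / p_e
print(posterior)            # ~1e-5: raised ~100x, nowhere near a coin flip
```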

I’m not talking about the probability of some set of statements.
I’m talking about a way of approaching a statement and determining whether it is true or false without a bias. That requires dropping preconceived ideas.

For you to proclaim that it isn’t 50/50, you have to presume that you know something about the truth of that statement first, don’t you?

Do you know who said it?
Do you understand what was meant by “dwarf”?
Was your house unguarded for a long time?
Were your keys in your house to be stolen?

You instinctively assess those things pretty quickly and thus often mislead yourself into biased misjudgment. People can easily take advantage of you because of your willingness to presume so readily.

“That is silly. No one would do that.”
— exactly what a con artist looks for.

Until you think about the real situation, you cannot assess any probability to favor anything.

You are talking about a set of statements. ‘Consider any statement as having a 50% chance’ is you talking about a set of statements. Namely, the set of all statements.