Basic set theory disproves omni-states

Then how can we count THAT !!!

(That there are MORE reals than wholes)

Actually, what you’re saying goes further than that, because you can use a symbol for a new dimension, say a period.

So what you’re really saying is that there are more reals than symbols!!!

Which is a way of saying they don’t exist…

Maybe you should meditate on that a bit?

Ecmandu I think you are right.

I believe that the ultimate math of the universe must allow sets that contain themselves.

Such sets are logically consistent, and are studied under the topic of non-wellfounded set theories. Earlier I posted the relevant Wiki page. en.wikipedia.org/wiki/Non-well- … set_theory

There’s also an excellent detailed article on SEP. plato.stanford.edu/entries/nonwe … et-theory/

It turns out that non-wellfounded sets – that is, sets that contain themselves as elements – are being used to model all kinds of circular and self-referential relationships in mathematics and other fields such as semantics.

This looks like a field of study just getting started. I recommend the SEP article to anyone interested in self-referential phenomena.

We can “account for them”. We just can’t “count them”. Counting implies using the whole number set, which is limited relative to the real number set. To count them would mean that when done, we could say that “there are X number of them”.

I’m curious to understand your point of view.

Cantor invented set theory in 1874. Russell discovered Russell’s paradox around 1903; and Zermelo first proposed the axiom schema of separation in 1908.

Of course it is clear with a century of hindsight that unrestricted set formation doesn’t work. Yet in realtime it took from 1874 to 1908, a period of 34 years, for the correct principle of set formation to be written down. In fact it took about 30 years for the problem to even be recognized; and another four or five to be solved.

Are you saying that in the fullness of hindsight the axiom schema of separation is so obvious that the pioneers were “idiots” not to see it, but you are not literally calling them idiots, you are just noting that what’s obvious today was difficult back then?

Or are you saying that Bertrand Russell was an idiot? I’m sure Whitehead felt that way from time to time!

And Cantor, he was criticized by some of the top mathematicians in the world. He was called an idiot and worse. Are you saying that Cantor was an idiot for not writing a footnote to his 1874 paper pointing out that we must employ the axiom schema of specification because unrestricted set formation is bad?

Help me to understand your point of view. Is everyone an idiot in the fullness of time? Or are you saying the mathematical logicians of the late nineteenth and early twentieth centuries were literally idiots and (pardon me, but this seems implicit in the way you write) that you’re here to tell us so?

Pretty much. It has always impressed me how truly great geniuses can be such idiots at the same time. Quite a number of very brilliant people with whom I have conversed will be seriously moronic about a subject only slightly different from what they are accustomed to thinking about. And I am not speaking of mere disagreements, but rather short debates wherein they discover themselves trying to defend a simple issue foolishly. They are often as amazed as I: “How can I be as smart as I am and be this dumb at the same time?” #-o .

I have always been impressed when I discover how seriously genius someone has been in the past. Of course the greatest geniuses throughout the history of Man are never heard or written about, but of those recorded, when examined carefully, they often showed very impressive intuitive genius. Yet with each there seems to always be something missing, often something that would inspire one to say, “How could he be such an idiot?” I have said as much of myself.

The fact is that there are a few rudimentary concepts at the base of philosophical thought that divert or prevent a great many “red herring” mental ventures. Some of those thoughts were not taught early enough in the recent “Enlightenment Era” to prevent a great deal of nonsensical, yet brilliant revelation. And what is sad is that most are still not taught today. Thus still today, many people, even the very educated and brilliant, will espouse some of the most nonsensical things you will hear. I have heard too many.

I have yet to find any philosopher who hasn’t expressed error in his thoughts. Does that make him an “idiot”? Well, not really. Geniuses are not always geniuses and even a true idiot isn’t always an idiot. But then there are some basic things that just make one feel embarrassed for the enlightened genius who didn’t realize it at the time. There are many examples in every field of study.

The one that you tripped across in this thread is the idea that one must have coherent definitions concerning any concepts involved in constructive thought. Just as any sentence must obey a minimal set of rules of grammar in order to be coherent, any concept, or in this case any set-definition or category-definition, must obey rules for coherency. Isn’t that pretty obvious? Does it really take a genius to tell you that?

And the only thing that I said about an “idiot” was:

It really should have been obvious to the Enlightenment Era crowd that “unrestricted formation of sets”, unrestricted declaration or definition of categories such that self-contradiction is allowed, would be unacceptable. It should not have taken someone like Russell to formulate a theory professing that irrationally defined categories are irrational. But even after he did, they went on for years formulating new category formation theories to attempt to get around a “paradox” issue. All they have ever had to do is make certain that their category definitions were coherent. As a whole, they were being idiotic for not seeing that one simple concept. And there are a couple of other basic concepts that would have saved a hell of a lot of intellectual masturbation at that time. Granted, and as I have said on many occasions, the world was just waking up from a long intellectual sleep, so one must forgive their stumbling.

A paradox is merely a mind game - always. There is never any such actual existence as a paradox. The appearance of a paradox is what tells one that his theory, or more often his wording, is wrong. Paradoxes are games for children, not serious philosophical issues. Zeno’s and Russell’s paradoxes are things to exercise high school students on proper concept definition and coherency, not to teach as world renowned philosophical conundrums.

That is my point of view concerning paradoxes.

Sometimes it is merely an issue of hindsight but quite often it is an issue of the genius being so wrapped up in the complexity of one possible situation, that he never addresses a simple thought that would have saved years of perplexity. And then due to the way society functions, no one gets to hear the many who actually corrected the issue centuries earlier. Specific voices are promoted for political agendas at the expense of truth.

And yes, in general, I do believe that I am living on The Planet of the Apes, albeit cleverly disguised. Everything that has been discovered in the past 500 years could have been derived and realized thousands of years ago. Perhaps a true genius of that time did just that, but he wasn’t the son of a king, or perhaps just had no one to explain it to but the apes.

Well this is an interesting reply …

You mean, you know how many wholes there are ?

I know that there are fewer whole numbers than real numbers.

There are infA whole numbers.
There are at least infA^2 real numbers.

So what makes you think you can’t just count off the wholes and make an infinite list for each one??

You squared it, just add another dimension …

Then you have the “reals”. But you didn’t “count them”. You merely “accounted for them”.

Those would be the “hyperreals”.

Aren’t you just confusing the historical naming of Russell’s demonstration with an actual paradox? Russell’s argument is simple. Consider this imaginary dialog between Frege and Russell:

F: You can form a set from any predicate.

R: But if you choose the predicate x ∉ x you get a contradiction.

F: You’re right. You’ve proved that we can’t form sets from arbitrary predicates.

That’s all this is. If some historian had called it Russell’s argument, would that be more satisfactory to you?
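The dialog can even be checked mechanically for finite families of ordinary sets. A small sketch (the toy “universe” below is my own illustration): for any family of well-founded sets, the Russell set formed from it can never itself belong to the family, because membership would force “r ∈ r” and “r ∉ r” at once.

```python
def russell_set(universe):
    """Form {x in universe : x not in x} from a finite family of sets."""
    return frozenset(x for x in universe if x not in x)

# A tiny universe of hereditarily finite sets.
empty = frozenset()                  # {}
one = frozenset({empty})             # { {} }
two = frozenset({empty, one})        # { {}, {{}} }
universe = {empty, one, two}

r = russell_set(universe)
# If r belonged to the universe, "r in r" would be true exactly when
# it is false -- so r is always a new set, outside the family.
assert r not in universe
```

No matter what finite family you start with, the same assertion holds; that is Russell’s argument in miniature.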

Do you oppose proof by contradiction?

What is your actual argument here? After all, there is nothing contradictory about the predicate x ∉ x. Doesn’t it apply to pretty much any set we can think of? The set of apples isn’t an apple, right?

What is your point, other than you don’t like the fact that someone called Russell’s simple little argument a “paradox” when in fact it’s no such thing?

What is your definition of a paradox; and what is it about Russell’s argument that makes it so?

What is infA? Is that notation you’ve made up yourself? Please define it.

And what is infA^2? Are you aware that the cardinality of the Cartesian product of a set with itself is the same as that of the original set? That’s exactly why the rationals are countable.
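That standard fact can be seen concretely with Cantor’s pairing function, which walks the infinite grid N × N one diagonal at a time and gives every pair its own natural number (sketched here in Python for illustration):

```python
def cantor_pair(i, j):
    """Standard Cantor pairing: enumerate N x N diagonal by diagonal.
    All pairs with i + j = d come before those with i + j = d + 1."""
    return (i + j) * (i + j + 1) // 2 + j

# Every pair receives a distinct index, so N x N is no "bigger" than N.
seen = {}
for i in range(100):
    for j in range(100):
        n = cantor_pair(i, j)
        assert n not in seen
        seen[n] = (i, j)
```

Read each pair (p, q) as the fraction p/q and you have an enumeration scheme for the rationals.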

Do you have any idea what you’re talking about? As far as I can see you have very little understanding of mathematics but you think you are some kind of expert. You don’t understand Russell’s simple little argument against unrestricted comprehension. You pretend to understand about cardinalities but it’s clear you don’t. You seem confused about proof by contradiction. You call people “idiots” and say they have “mental issues” without justification. You seem to have strongly felt objections to widely accepted modern math and physics. You make up your own logically inconsistent notation. You seem devoid of a sense of humor or perspective. What’s your deal?

Yes. It is that simple and that obvious. The exact same thing could be said for a thousand other issues throughout all fields of study. Where is the principle in physics that says:
“E cannot equal mc^2 if mc^2 cannot equal E”?

…or in chemistry: “A molecule cannot be declared an acid unless it is not a base”?

The concern for non-contradiction has been known for thousands of years. It was hardly a new discovery during Russell’s day. He merely pointed out the obvious, but did it in an obscure way because he was dealing with confused people. It got named and famed as “Russell’s Paradox”, even though in reality it was no different than Aristotle’s law of non-contradiction, “A is not not-A”.

When something is very clearly simple and obvious yet people go on about it as if it was a brilliant discovery, we generally refer to them as being “idiots” for thinking so highly of the obvious.

"Did you know that 2+2=4? :astonished: :astonished: :astonished: That should be called JSS’s Revelation!!! [-o< :bow-yellow: "

It’s dumb to give such high praise to such obvious ideas and is an indication that the people of the time were not awake.
… just as those granting such praise today.
End of story.

“infA” is a term that I commonly use when discussing infinity (as Ecmandu is aware).

One of the other obvious principles in developing understanding, one that helps avoid a great deal of argumentation and confusion, is the need for and use of somewhat arbitrarily chosen standards of measurement (that is what Cantor was alluding to with his “aleph-” series, against resistance). In the case of infinities, since all are not alike, a standard infinity is called for (yet the apes haven’t bothered to establish one). So for my own work, I established one that I call “infA”, as opposed to “infB” or “infC”.
infA ≡ [1+1+1+ … ]
And thus 1 infinitesimalA ≡ 1 / infA

Once such a standard is declared, general mathematics can be usefully applied:
infA + infA = 2*infA,
2*infA / 4 = 0.5*infA,
infA * infinitesimalA = infA / infA = 1
… and so on.

And if using mere whole numbers, it can be validly said that the entire infinite universe is:
(4/3)*Pi*infA^3 in size

As long as the standards are maintained throughout the construct of understanding, the conclusions will be rational.
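Taken purely as formal symbol-pushing, those rules can be imitated by tracking a rational coefficient and a power of infA (a sketch of the rules exactly as stated above; the class name is mine, and this is not standard cardinal arithmetic):

```python
from fractions import Fraction

class InfA:
    """A formal quantity coeff * infA**power. A resulting power of
    zero collapses to an ordinary (finite) number."""
    def __init__(self, coeff=1, power=1):
        self.coeff, self.power = Fraction(coeff), power

    def __add__(self, other):            # infA + infA = 2*infA
        assert self.power == other.power
        return InfA(self.coeff + other.coeff, self.power)

    def __truediv__(self, other):        # 2*infA / 4 = 0.5*infA
        if isinstance(other, InfA):      # infA / infA = 1
            c, p = self.coeff / other.coeff, self.power - other.power
        else:
            c, p = self.coeff / Fraction(other), self.power
        return c if p == 0 else InfA(c, p)

infA = InfA()
assert (infA + infA).coeff == 2              # infA + infA = 2*infA
assert ((infA + infA) / 4).coeff == Fraction(1, 2)
assert infA / infA == 1                      # so infA * infinitesimalA = 1
```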

And you deduce that merely because I mention the term “infA”?
Interesting, because that is what I was thinking about you.

We haven’t even mentioned cardinalities; that is a term defined by Cantor to mean a specific thing. I avoid using it when I can because people like yourself get confused about it. What you call “infA^2” is irrelevant to me as long as you understand what is meant by “infA^2”.

Really? Show us even one.

You do have a mirror, right?

If you want to argue and display your expertise of mathematics (to further your obvious egocentric bone-to-pick), how about bring it to the Science and Math forum.
Care to try to defend that “1 = 0.999…”?
.
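For whoever takes that challenge up: the standard defense is that the partial sums 0.9, 0.99, 0.999, … fall short of 1 by exactly 1/10^n, a gap that sinks below every positive number, so the value of the endless expansion is exactly 1. An exact-arithmetic check:

```python
from fractions import Fraction

# Partial sums 0.9, 0.99, 0.999, ... computed with exact rationals.
for n in range(1, 12):
    s = sum(Fraction(9, 10**k) for k in range(1, n + 1))
    assert 1 - s == Fraction(1, 10**n)   # the gap is exactly 1/10^n
# No positive number stays below every 1/10^n, so the limit is exactly 1.
```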

James used my name here, so I should respond :

James: 1.1 is better than 1a; it actually shows, as wtf says, that you are not a good number theory logician!!!

That being said, I understand James’ language, but James still thinks that there are orders of infinity, and in this matter, the jury is still definitely out!

James, I hate to be the one to break it to you, but your confidence in this regard is ill-founded.

Which had to do with what??

The entire world of mathematicians (most especially Cantor) as well as I agree that there are “orders of infinity”. So you might want to pass that to the “jury”. The only argument is how to best represent those orders. Cantor proposed an “aleph-0 and aleph-1” system of “cardinality”. I don’t disagree with that. But then I have my own notation that I can very well justify to any logic oriented critic as well as to mathematicians who aren’t merely dogmatic. Many mathematicians are no different than religious fundamentalists, not having any idea why what they believe might or might not be true.

Oh geezzz … now I am SO mortified and embarrassed.
:icon-rolleyes:

James, you do realize that even if there are multiple dimensions, it has the same cardinality??

(The same number of listed numbers)

Quite frankly, you are not smart enough to realize that it’s not a shut case that reals can’t be ordered.

“Frankly”, you are not in a position to tell me how smart I am or am not. I can guess what you probably think a dimension is, but I doubt that you fully comprehend what a cardinality is. But either way, it’s irrelevant.

You can’t “count” the real number set merely because the whole number set isn’t big enough. You can “account for” the real number set, but there is nothing new about that.

No James, people have not been able to count the reals because they are using terminating sequence expansions to make the list.

I can disprove all the proofs that we can’t count them…

Give me one if you don’t believe me !!

I just gave you one. You reply with word salad.
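For reference, the proof in question is Cantor’s diagonal argument: given any claimed list of all reals (as infinite decimal expansions), change the n-th digit of the n-th entry and the result differs from every entry on the list. A finite-prefix sketch (the sample rows are my own toy data):

```python
def diagonal_miss(rows):
    """Return a digit string differing from the k-th row in its k-th digit."""
    return "".join("5" if row[k] != "5" else "4" for k, row in enumerate(rows))

# Four truncated decimal expansions standing in for an infinite list.
rows = ["141592653", "718281828", "414213562", "302585092"]
d = diagonal_miss(rows)
for k, row in enumerate(rows):
    assert d[k] != row[k]   # d disagrees with every listed entry
```

However the list is chosen, the diagonal number is missing from it, so no list of wholes-indexed entries can exhaust the reals.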

Bullshit!!!

You made the assertion that there are more reals than wholes… But you never gave a proof.

Bullshit!!!

I thought it pretty obvious to anyone that there are more reals than wholes.

Between every whole number pair, there are more reals. Since the wholes count as reals, the additional reals between the wholes add up to more reals than wholes. That’s pretty obvious, isn’t it?

W = the whole numbers
Wr = the real numbers that are between the whole numbers
R = the entire set of reals

R = W + Wr
Thus, since Wr > 0,
R > W