Is 1 = 0.999... ? Really?

Yeah, unfortunately I have to work within my limits. I have never talked to God or spirits or demons, been to heaven or hell, touched infinity, had out of body experiences, been reincarnated, been one with the universal consciousness, made a perpetual motion machine, circumvented the laws of thermodynamics …

I’m making lemonade with the lemons that I have gathered over the years.

Well… sounds to me like you’re off to a good start.

When these things do occur, try not to go insane like I did!

I’ll tell you something very important to keep with you…

The question of whether there is a grand creator is actually yours to decide. There isn’t a wrong answer.
That’s how big this cosmos actually is.

viewtopic.php?p=2668152#p2668152

Sorry James, I felt obligated to add this to my prior comment. Perhaps we can get back to the question phyllo avoided: whether math, as a representation of reality, is in fact not reality itself, as we ponder ideas of presumed convergence.

I think a grand creator is outside of my head. What I decide about it, is inside my head and my decision may be wrong. But I understand that’s how knowledge works and I accept it.

Everything that I know and believe may be wrong.

You have no idea what you are talking about. There have never been any “results” or proofs that 1 = 0.999… And there never will be. And your fantasy about math having to be thrown out is just nonsense.

Yep.

Do you have the balls to make the same statement?

I can tell you this. There are lots of beings older, wiser, and more powerful than you.

I can also tell you this… a grand creator doesn’t care whether you believe in it, it can make a perfectly consistent atheistic universe for you, co-shared perfectly consistently with creationists …

Personally… I’m an all of the above guy

What is an “atheistic universe”? Everyone there is an atheist?

It’s actually much more elegant than that!!

You can co-exist in the same exact world system and the system is internally and externally consistent for atheists and creationists… this type of precision is mind blowing!!

Bah, humbug.

Atheists think there is no creator and theists think there is a creator. The atheists are wrong if a creator made the universe. Nothing mind blowing.

I’ll add to this (sorry James)

The options are:

1.) one grand creator
2.) more than one grand creator (cocreation) but not everyone
3.) everyone created this together
4.) nobody created this

You can pick any option or any multiple options you want. I pick all 4. There is no wrong answer. I can’t stress that enough

Not that simple. It’s not an excluded middle. You don’t think a creator could create a non-created cosmos and a created cosmos at the same time?

I know it sounds absurd at first… one person who believes in a creator will see all the signs of one eventually… those who don’t will always be able to prove the other ones wrong. It’s really hard to explain unless you’ve seen it… everything constructed by perspective in something so mindblowingly elegant… it’s your choice!! You cannot make a wrong choice!

Making a wrong choice is not the end of the world.

And on that, I’ll remain silent.

Perhaps we can return to whether math is reality or a representation of reality, and what this implies in terms of convergence with infinite regress

Edit: my context was the decision about creation stuff. That blanket statement, taken outside of context, is not something appropriate to discuss.

Unlike some, I try to not think with my balls, thk u.

This is a point of great interest. It illustrates the profound difference between the mathematical real numbers on the one hand, and floating point (or any other) representation of real numbers on a physical computer.

The classic example that shows the distinction is the harmonic series

$$\sum_{n=2}^\infty \frac{1}{n} = \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \frac{1}{5} + \dots$$

[Some people prefer to start counting at \(n = 1\); it makes no difference.]

This series converges on a computer but diverges in the real numbers. What you say does apply to computer math, but it does not apply to the real numbers. Let’s look more closely.

On any physical implementation of floating point arithmetic, there is a natural number \(N \in \mathbb N\) such that \(\frac{1}{n}\) is indistinguishable from zero whenever \(n > N\).

Therefore no matter what the implementation, at some point the entire tail of the series is effectively a string of zeros and adds nothing to the sum. The finitely many nonzero terms add up to the finite sum of the series.
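
A quick illustrative sketch of that claim in Python (my own, not part of the original argument; it assumes IEEE-754 double precision and uses a stand-in partial sum of about 30, which is roughly where the harmonic sum sits after some \(10^{13}\) terms): once \(\frac{1}{n}\) drops below half a unit in the last place of the running sum, adding it leaves the sum completely unchanged.

[code]
# Illustrative sketch (assumptions: IEEE-754 doubles; "partial" is a stand-in
# value of ~30, roughly the harmonic partial sum after ~10^13 terms).
# Once 1/n is smaller than half a unit in the last place (ulp) of the running
# sum, adding it no longer changes the sum at all.

import math

partial = 30.0
for n in (10, 10**6, 10**15, 10**16):
    term = 1.0 / n
    print(f"n = {n:>17}: 1/n = {term:.3e}, sum unchanged: {partial + term == partial}")

print("half an ulp of the sum:", math.ulp(partial) / 2)
[/code]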

On the other hand, the harmonic series diverges in the real numbers.

$$\underbrace{\frac{1}{2}}_{=\,\frac{1}{2}} + \underbrace{\frac{1}{3} + \frac{1}{4}}_{>\,\frac{1}{4} + \frac{1}{4}\,=\,\frac{1}{2}} + \underbrace{\frac{1}{5} + \frac{1}{6} + \frac{1}{7} + \frac{1}{8}}_{>\,\frac{1}{8} + \frac{1}{8} + \frac{1}{8} + \frac{1}{8}\,=\,\frac{1}{2}} + \underbrace{\frac{1}{9} + \dots + \frac{1}{16}}_{>\,8 \times \frac{1}{16}\,=\,\frac{1}{2}} + \underbrace{\frac{1}{17} + \dots + \frac{1}{32}}_{>\,16 \times \frac{1}{32}\,=\,\frac{1}{2}} + \dots$$

We can always grab the next \(2^n\) terms of the series to have a finitely long block whose sum is greater than \(\frac{1}{2}\). No matter how large a number you give me, I can go out far enough in the series and find a partial sum that’s greater than your number. If you challenge me with one million for example, I’ll just grab the next two million \(2^n\)-sized blocks, each of which sums to more than \(\frac{1}{2}\). The corresponding partial sum of the series will therefore exceed one million.
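
To see those blocks numerically, here is a small sketch (mine, just for illustration) using exact rational arithmetic via Python’s fractions module, so floating-point rounding plays no role: every block of \(2^k\) terms really does come out above \(\frac{1}{2}\).

[code]
# Sketch: check the 2^k-sized blocks with exact rational arithmetic
# (fractions.Fraction), so floating point cannot muddy the result.

from fractions import Fraction

for k in range(1, 8):
    lo, hi = 2**k + 1, 2**(k + 1)
    block = sum(Fraction(1, n) for n in range(lo, hi + 1))
    print(f"1/{lo} + ... + 1/{hi} = {float(block):.4f}  (> 1/2: {block > Fraction(1, 2)})")
[/code]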

So the harmonic series diverges. It does not have any finite sum.

Your idea that the terms get so close to zero that they don’t matter ONLY APPLIES to the computer version! In the real numbers, all those tiny little crumbs at the end keep adding up, and the sum goes higher and higher without any finite limit.

It’s kind of weird to imagine. Every tail diverges. No matter how far you go out, where the numbers are really really tiny, the sum of the terms after that STILL fails to converge.

This example in a nutshell is the difference between the real numbers and computer implementations of floating point arithmetic. It also serves as the standard example of a series that fails to converge despite its terms going to zero.

Conclusion: Computers are inadequate to capture the essential nature of the real numbers. Intuitions about computer arithmetic do not necessarily translate to the reals.

All the more reason to approach these problems from several angles.

Do you get consistent results? Why or why not?

No, I’m saying that when you focus down to the level of each digit that gets derived by the algorithm, it’s incredibly easy to understand how that digit is derived. Given that a rational number is just a finite series of such digits (or a repeating pattern of such digits), understanding the derivation of each digit amounts to an understanding of the derivation of the entire series.

What is 25 divided by 4?

Well, I know that you can multiply 4 by 6 to get 24, and the remainder is 1. So I understand how I derive the first digit: 6.

Then when we divide the remainder 1 by 4, I know that we’ll get a quarter: 0.25. So I understand how we get the next two digits: 6.25.

^ There’s nothing complicated here.
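
If it helps, here is the same derivation as a few lines of Python (just an illustration; the function name and number of steps are my own choices): at each step the remainder is multiplied by 10, and the next digit is how many whole times the divisor fits.

[code]
# Illustration of the digit-by-digit derivation described above.

def long_division_digits(numerator, denominator, steps=6):
    """Return the integer part followed by one decimal digit per step."""
    quotient, remainder = divmod(numerator, denominator)   # 25 // 4 = 6, remainder 1
    digits = [quotient]
    for _ in range(steps):
        remainder *= 10                                    # bring down a zero
        digit, remainder = divmod(remainder, denominator)  # next decimal digit
        digits.append(digit)
    return digits

print(long_division_digits(25, 4))   # [6, 2, 5, 0, 0, 0, 0]  ->  6.25
[/code]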

Division never gives us irrational numbers. Irrational numbers are, by definition, not representable as a ratio. So I’m not sure what process you’re talking about that sometimes gives us rational numbers and sometimes gives us irrational numbers. But I jumped into this discussion in response to your comment: “nobody on earth actually knows why some rational numbers do or don’t terminate…” ← If you’re talking about only rational numbers, then I assume you’re talking about the difference between, for example, 25/4 = 6.25 vs. 10/3 = 3.333…, both of which we can understand why they do or don’t terminate. But as for why some quantities can only be represented as irrational numbers in our base 10 system, I agree that we (or I) don’t understand.
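
One way to make that “why they do or don’t terminate” concrete (a sketch of the standard remainder argument, not anything from the thread): during long division the decimal terminates exactly when a remainder of zero shows up, and it repeats forever as soon as a remainder we’ve already seen comes back.

[code]
# Sketch: track remainders during long division.  Remainder 0 means the decimal
# terminates; a repeated remainder means the digits cycle from then on.

def decimal_behaviour(numerator, denominator):
    remainder = numerator % denominator
    seen = set()
    while remainder and remainder not in seen:
        seen.add(remainder)
        remainder = (remainder * 10) % denominator
    return "terminates" if remainder == 0 else "repeats"

print("25/4:", decimal_behaviour(25, 4))   # terminates -> 6.25
print("10/3:", decimal_behaviour(10, 3))   # repeats    -> 3.333...
[/code]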

Here’s a theory though: viewtopic.php?t=162173

What is the argument again?

I don’t think this is as easy to explain.

The argument is that 1 = 0.9… because 1 = 3 × (1/3) and 0.9… = 3 × (0.3…), with 1/3 = 0.3…

It’s a great argument until you reverse engineer the steps …

10 - 3 = 7
1.0 - 0.3 = 0.7
1.00 - 0.33 = 0.77

It’s a complete reverse-engineering of the same process: it not only gives you the repeating fractions, it also reverse-engineers the same logic used to justify that 0.9… = 1.

A calculator will let you get 0.7… step by step, which is the method we use to even abstract this problem, but calculators are automatically programmed (by assholes) to answer 0.6… if you ask them 1 - 1/3.
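
For comparison, here is a small sketch (mine, not the poster’s method) of what the step-by-step subtraction gives under exact rational arithmetic; it heads toward the 0.6… answer that calculators report.

[code]
# Sketch: 1 - 0.3, 1 - 0.33, 1 - 0.333, ... computed exactly with rationals,
# for comparison with the step-by-step figures above.

from fractions import Fraction

approx = Fraction(0)
for k in range(1, 7):
    approx += Fraction(3, 10**k)            # 0.3, 0.33, 0.333, ...
    difference = Fraction(1) - approx
    print(f"1 - {float(approx):.{k}f} = {float(difference)}")
[/code]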

To anticipate the obvious question:

The reason the reverse engineering uses 10 instead of 9 (obviously 9 - 3 = 6) is that the 1 adds a zero as the expansion occurs in the initial procedure. It’s not a proper reverse-engineering if you start with 0.9…