Is 1 = 0.999... ? Really?

So you’ve exhausted rational proof, and you still encounter opposition.

What then do we conclude? You’ve correctly identified that the cause is mathematical ignorance, likely more within the context of flawed education standards than individual fault. The expected consequence of this cause is irrational opposition - as is logically consistent with the fact that your opposition is against exhaustive rational proof.

This is why I’ve identified that the only valid way to approach this thread is psychological.
This is much to the frustration of the irrational opposition, but the alternative is tedious repetition. You keep repeating all the rational ways to prove your position from a point of knowledge and experience, and they keep repeating all the irrational ways that they think prove their position from a point of ignorance.

It’s an interesting question to ask - why do people dig their heels in, even when they’re wrestling with a topic that they know they’re weak at, and dealing with people who genuinely do know what they’re talking about?
Obviously the low-hanging fruit for them is to get on their high horse and protest that this is a rational debate and it should only deal with the topic directly - even though I’ve just rationally shown that this is not the case, and that dealing only with the topic directly is the whole problem.

From what I can tell so far, there’s not much more going on than the good old Dunning-Kruger effect, along with plenty of cognitive biases and logical fallacies - notably “confirmation bias” and “moving the goalposts”. There’s a lot of forgetting rational arguments that already countered irrational positions, denying that they ever existed, or insisting that they’re irrelevant - anything to avoid the cognitive dissonance of honest introspection.
It’s a psychologically difficult process to admit you’re wrong, or not in the position you thought you were in and wanted to see yourself as being in. People want to hear and remember things that support their position and make them feel validated, special and competent - especially if they don’t feel that way overall in their normal life. It becomes a socially detrimental force when people begin to construct their own identity and sense of purpose around topics in which they lack sufficient expertise, emotionally investing in them and feeling personally attacked when people challenge their cause. This is especially so when others are around in the same fragile state of mind, looking for someone with a sense of confidence who is defying a way of thinking that they’re weak at - this vicariously soothes their insecurities and bolsters yet another movement against rationality, leaving experts such as yourself confused about how you’re supposed to deal with what’s going on. It’s truly toxic.

So of course they’ll attack your mathematical competence as a weakness rather than admit their mathematical incompetence is their weakness.
We see it in politics all the time - e.g. “if someone sides with something you don’t agree with, they’ve been brainwashed by them.” It’s taken as “the” (necessary) conclusion, when it’s just “a” possible (sufficient) conclusion that might hold or might not: abductive reasoning. In some cases it’s actually going to be because one person understands something that the other doesn’t - yet it’s so much easier and lazier to simply trust your own prejudices and assume the other person is the ignorant one. Confirmation bias so often clouds all the evidence against this, and over-emphasises all the memorable moments of victory when it’s at least seemed like you were right.

It would appear as though all this “extra baggage” that you’re talking about is simply, or at least largely, a result of the above. Mathematically challenged people want to believe that it’s maths that is at fault rather than them.

To this end they see as far as they want to see and no further.
It’s easy to note that “0.9 < 1” and “0.99 < 1”, and to assume that since this pattern continues, it must continue indefinitely under all possible circumstances. One simply neglects the key property of infinity that changes this finite pattern, and voila - it can seem as though you were right all along.

Yet the obvious truth that you pointed out is essentially an undeniable tautology - math and only math defines all the notations involved in the topic, and in like manner math alone determines how they equal each other.
That’s as far as rationality “need” go, even though all the proofs you understand and can expertly articulate can extend this rationality far further - there’s only one answer to objections to this: they’re necessarily irrational. And the reason why this irrationality exists is as I’ve just explained.
The only issue left to resolve in this thread is how to resolve irrationality. Demonstrably the answer to this is not simply “rationality” - as the irrational response to rationality is the whole problem to begin with.

Your above argument could maybe be notated as:

(\sum_{i=1}^n\frac1n=1), or just (\frac1n\times{n}=1)
(\therefore\lim_{n\to\infty}(\frac1n\times{n})=1),
which contradicts:
(\lim_{n\to\infty}(\frac1n)=0)
(0\times{n}=0)
(\therefore\lim_{n\to\infty}(\frac1n\times{n})=0)

The issue is you’re concluding that “1=0”, instead of concluding that the only way in which you can arrive there is invalid.
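
To spell out where the invalid step is, here is a sketch using only the standard limit laws (nothing specific to this thread):

```latex
% Sketch: the first limit is of the constant sequence 1, so it equals 1.
\[
  \lim_{n\to\infty}\left(\frac{1}{n}\times n\right) \;=\; \lim_{n\to\infty} 1 \;=\; 1.
\]
% The second derivation splits the limit over the product, but the product rule
% lim(a_n b_n) = (lim a_n)(lim b_n) only applies when both limits exist and are finite;
% here it would give the indeterminate form 0 x infinity, so that step is invalid.
```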

What even is the infinite sum of 1 over infinity? Anything times infinity is infinite, anything times the limit of zero is zero, anything divided by itself is 1 - that’s why infinity is undefined and not a number. You can get anything you want from abusing it - that’s not the same as “anything you want does actually equal anything you want”. Math is consistent; it’s built from consistency to remain consistent, which means that when you get to inconsistencies from e.g. someone using infinity as a number, there was a problem with them using infinity as a number. In the same way, there’s a problem in the reasoning you used to get 1 = 0.
You even get the mathematical constant “(e)” from the similar limit (\lim_{n\to\infty}(1+\frac1n)^n), when it might look as though the limit goes to ((1+0)^\infty), which might look like 1 (actually an “indeterminate form”).
All it means when you get inconsistent nonsense like “1=0” (which circularly violates the fundamentals of math that allowed you to arrive at the inconsistency in the first place) is that you need to do more work.

You don’t just stop at a seeming contradiction, and stick with that as your singular definite conclusion.
You have to find valid ways to get a single answer only. This is how the limit I just mentioned can be known to equal exactly “(e)” and nothing else, without contradiction/inconsistency.
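
For what it’s worth, a quick numerical illustration of this (a Python sketch of my own, not a proof and not part of the original argument) shows that limit creeping up on (e) even though its base tends to 1:

```python
import math

# Numerical illustration (not a proof) that (1 + 1/n)**n approaches e
# even though the base 1 + 1/n tends to 1: the form 1**infinity is indeterminate.
for n in (10, 1_000, 1_000_000):
    print(n, (1 + 1 / n) ** n)

print("e =", math.e)
```
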
This is why it helps to be a mathematician: you have the experience and familiarity with all sorts of methods that enable you to arrive at a correct answer, whereas non-mathematicians might be tempted to just settle for the simplest answer they first arrive at, and conclude that “there’s therefore something inconsistent in math” to justify never having needed to gain expertise in math in the first place.

Hi wtf;

It was so much fun reading a well-informed post that I hesitate to comment further.

Personally, I have been absolutely committed to a rigorous study of the foundations of mathematics. You might get the gist by looking at my post at:

viewtopic.php?f=4&t=183931&p=2422704&hilit=Mathematica+Principia#p2422704

You can skip the personal reflections by scrolling down to “The Foundations”.

For example, are you personally committed to the Real Analysis definition of 1 as an equivalence class of Cauchy sequences? (If so you still have an ontological error.) Perhaps you might consider {Φ} as the number 1, where Φ is taken to be the null set? This is the von Neumann encoding taken from ZFC. On the other hand Gödel thinks that Plato’s Ideal for 1 is correct – so much for Cauchy sequences.

Since Real Analysis depends on Cauchy sequences, which map the Counting numbers to the Rational numbers, the Counting numbers and Rational numbers are defined prior to the Real numbers. Do you have any ontological problems with this? I.e., the Rational numbers ½, 1, 2, 3, … are not the same as the Real numbers ½, 1, 2, 3, ….

Really, I immensely enjoyed your post.

Ed

Thank you Ed. I am so glad you read it in the right spirit, I didn’t mean for it to come out as critical as it did. IMO it’s misplaced energy on your part to strenuously claim that functions aren’t numbers, since it’s not really a very important point in the first place; and it’s at least arguable and not as clear cut as you believe. IMO of course.

You ask if I’m ontologically committed to the formal set-theoretic definitions, and of course I’m not. I do take your point that functions and numbers are distinct entities in the Platonic world, even if we can define numbers as functions and probably functions as numbers in various formal symbolic ways. If that’s what you’re saying, we’re in agreement.

So even if I fall back on saying that the real number 1 literally is the sequence .9, .99, etc. or an equivalence class containing it, I would NOT ever be confused by thinking that really “is” the number 1. The very fact that the natural number 1, the integer 1, the rational number 1, and the real number 1 are distinct set-theoretic entities is evidence that NONE of them could be the real 1, a point first made by Benacerraf.

The number 1 is some kind of thing out there in Platonic land along with Captain Ahab and the Baby Jesus. That’s the problem with Platonism. If there’s a non-physical realm of existence, exactly

  • What is it?

  • Where is it?

  • What else might live there? How do we know what lives there and what doesn’t?

Point being that Platonism is easily refuted. So is anti-Platonism. Ontology is hard.

Did I manage to catch the essence of your viewpoint?

tl;dr: If I acknowledge that formalisms don’t imply ontology, then a function and a number definitely are two different things. I concede the antecedent and the logical conclusion; but I don’t understand your insistence on the point, when (IMO) it is not an especially relevant point in this thread. I think that’s what I was going on about.

The input of a function can be a number.

The output of a function can be a number.

Functions can have properties that can be expressed using numbers e.g. the limit of a function is a number.

However, functions themselves aren’t numbers. A function is no more than a set of input-output pairs where every input is paired with exactly one output. A set of pairs of numbers is not a number (even though you can use a set of pairs of numbers to represent a number.)
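
A minimal sketch of that “set of input-output pairs” picture, using a made-up squaring example of my own purely for illustration:

```python
# A tiny illustration of "a function is a set of input-output pairs":
# the squaring function restricted to the inputs {1, 2, 3}, written out as pairs.
square = {1: 1, 2: 4, 3: 9}      # every input paired with exactly one output

print(square[2])                 # 4 -- each output is a number...
print(isinstance(square, int))   # False -- ...but the function itself is not a number
```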

So Ed is right when he says that functions aren’t numbers but he’s also wrong because he says that (0.\dot9) is a function (which it is not.)

Having studied math, the answer is obviously yes! It’s all a matter of training and mathematical inclination. If you haven’t learned basic arithmetic, you think 2 + 2 and 4 look different. After you make it through grade school, you come to recognize without a moment’s hesitation that 2 + 2 and 4 refer to the same thing. Likewise one comes to recognize .999… and 1 as two distinct expressions for the same thing, namely the Platonic or intuitive concept of the number 1.

So all you are saying here is that “My mathematical training includes 2 + 2 = 4 but not .999… = 1.” That’s all your remark amounts to.

I’ll concede that I’m unfamiliar with the studies about what the man on the street thinks about the Peano axioms. Most people don’t think about this at all, and if asked, would think you’re weird for asking them.

So let’s leave this with you and me. I myself have a perfectly clear intuition in my mind of the endless sequence 0, 1, 2, 3, 4, … of natural numbers; and even of the “completed” set of them which we call (\mathbb N ). You do not. We could still be friends. Not everyone hears the music, as I say. Some like Picasso and some like Norman Rockwell. It’s all good.

I could ask you, though, to explore the nature of your own ultrafinitism. Do you believe there’s a largest number that has no successor? If the process 0, 1, 2, 3, … ends, where does it end?

But no, THAT IS ONE OF THE CORE CONFUSIONS of many people, pardon my shouting. I’m glad you mentioned it though.

Let’s just consider the limit of a sequence, which DOES give people a lot of conceptual trouble.

Consider the sequence of rational numbers 1/2, 1/4, 1/8, 1/16, etc. We say it has a limit of 0. This causes people who have not studied Real Analysis, which to be fair is a course only math majors take, to think that the sequence “reaches” 0 in some mysterious way.

But NO! The whole point of the formalism of limits is that we DON’T TALK ABOUT REACHING. We talk instead of getting “arbitrarily close.” You give me a small positive real number, no matter how tiny, and I’ll show you that the sequence gets closer to 0 than that. And we DEFINE that condition as being the limit of the sequence.
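
In symbols, the standard epsilon-N definition that this paraphrases (just the usual textbook form, nothing new):

```latex
% The standard textbook definition being paraphrased: a sequence (a_n) has limit L
% when, for every tolerance epsilon > 0, the terms are eventually within epsilon of L.
% Nothing in it mentions "reaching" L.
\[
  \lim_{n\to\infty} a_n = L
  \quad\Longleftrightarrow\quad
  \forall \varepsilon > 0 \;\, \exists N \in \mathbb{N} \;\,
  \forall n \ge N : \;\; |a_n - L| < \varepsilon .
\]
```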

The entire point of the limit formalism is that we never have to think or talk about “reaching an endpoint at infinity,” which is a hopelessly muddled mess. Instead we FINESSE the whole problem by using the “arbitrary closeness” idea. That is the brilliance of the modern approach to infinitesimals. We banish them! We don’t have to talk about them.

(I mention in passing that the hyperreals of nonstandard analysis don’t help you, because .999… = 1 is a theorem of nonstandard analysis as well).

So there is no mysterious endpoint, there is no reaching. These are mind-confusing illusions left over from your imprecise intuitions of infinity. And these intuitions are clarified and made logically rigorous in math. That’s a fact.

I apologize. I was not trying to be condescending. I am actually under the sincere impression that most adults, if pressed, will agree that there is no end to the sequence 0, 1, 2, 3, 4, 5, 6, … because you can always add one. I am under the impression that most people do feel that way, if you asked them.

I believe this must be especially true in the computer age, when many people from programmers to spreadsheet users have internalized the concept of “always add 1” or “keep adding 1.” We live in the age of algorithms and “given n, output n + 1” is a perfectly intuitive concept to many people.

Everyone who ever started to learn how to program came to understand (often with great difficulty) the concept of looping, or endless repetition. If you can do something once you can do it forever. That is one of the main grokitudes of programming!

If you genuinely don’t agree, and genuinely reject the concept of adding one, then I’m interested to learn more about what that means. And even if it’s nothing more than a convenient fiction (which is exactly what it is!) - suppose it’s all bullshit, but Newton used it to calculate the motions of the solar system. Wouldn’t you at least grant that the mathematical formalisms are useful and therefore worthy of study?

@Magnus You’re getting a little ahead of me but I will try to catch up to all your replies to me. Just working on the other ones first! You have brought up a lot of items that I need to take time to reply to.

No worries, it’s always better for a person to take their time (:

Evidently @Ed and you both care about the topic of whether functions and numbers are essentially different. I think it’s kind of a distraction but just for sake of discussion I’ll play.

First, a function goes from any set to any other set. We always denote a function as (f : X \to Y ), meaning that (f) is a function that inputs an element of a set (X) and outputs an element of a set (Y).

Since everything in math is a set (in the standard set-theoretic formalism), a function can input anything and output anything.

For example a function can input and output functions. A familiar example is the derivative operator in one real variable. We have a function (D) that inputs a function of one real variable and outputs another. For example (D\,x^2 = 2x), (D \sin x = \cos x), etc.

Or a function could input a set and output a number; for example the function that counts the number of elements of a finite set, and outputs -1 if the set’s not finite. That’s a perfectly valid function from the proper class of all sets to the integers.

So functions can be completely arbitrary in terms of what they input and output. I don’t see how this sheds light.
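
If it helps, here’s a small Python sketch of the “function that inputs and outputs functions” idea, using a numerical stand-in for the derivative operator (D) mentioned above; the step size h is my own illustrative choice, not anything from the thread:

```python
from typing import Callable

# A function that inputs a function and outputs a function, in the spirit of the
# derivative operator D above. This is a numerical approximation (central difference),
# not symbolic differentiation.
def D(f: Callable[[float], float], h: float = 1e-6) -> Callable[[float], float]:
    """Return a function approximating the derivative of f."""
    return lambda x: (f(x + h) - f(x - h)) / (2 * h)

d_square = D(lambda x: x ** 2)   # approximately the function x -> 2x
print(round(d_square(3.0), 4))   # ~6.0
```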

Well numbers have properties too. Everything has properties. So that doesn’t distinguish functions from numbers.

Conceptually maybe not, but formally functions are often numbers. For example in mathematical logic, we use Gödel numbering to represent a function or a formula by a specific number.

Another example would be to use the fact that there are as many continuous functions from the reals to the reals as there are reals. So in principle there’s a mapping that inputs a continuous function and outputs a real number that can be used as a proxy for it. Instead of saying cosine we can just say #45.3. Every function has an associated number. So again, the distinction between numbers and functions is less clear to me than it is to you and @Ed.

Well … I question the relevance or point of the observation, since it’s not clear to me that it’s true, and it’s definitely clear to me that it’s a red herring in the .999… discussion. I don’t get the bit about numbers and functions. Set theory doesn’t distinguish between numbers and functions, they’re both different types of sets. So I honestly don’t know what point is being made here.

Oh but of course it is. Every decimal expression is a function (d : \mathbb N_+ \to D) where (D) is the set of decimal digits (D = {0,1,2,3,4,5,6,7,8,9 } ). That’s what a decimal expression is. You give me the number 47, I give you back the 47-th decimal digit. (Just referring to the digits to the right of the decimal point, we can patch up the idea to account for the leftward digits if needed). You give me the number 545535 and I return that digit. That is exactly what a decimal expression is, a function from the set of positive natural numbers to the set of digits.

I’m using (\mathbb N_+) which is the set 1, 2, 3, … so that the first place to the right of the decimal point is 1 and not 0 for convenience. I hope that’s clear.

You see this, right? (\pi ) is a function, (\sqrt 2 ) is a function. [After we deal with the pesky leftward digits].

What function represents (\pi - 3 )?

f(1) = 1
f(2) = 4
f(3) = 1
f(4) = 5
f(5) = 9
etc.
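
A hedged sketch of that digit function in Python, using a hard-coded prefix of the digits of (\pi - 3) purely for illustration (a real implementation would need arbitrary-precision arithmetic):

```python
# "A decimal expansion is a function from the positive integers to digits",
# illustrated for pi - 3. The digit string is a hard-coded prefix, an assumption
# made only so the example is self-contained.
PI_FRACTIONAL_DIGITS = "14159265358979323846"

def f(n: int) -> int:
    """Return the n-th digit after the decimal point of pi - 3 (n starts at 1)."""
    if not 1 <= n <= len(PI_FRACTIONAL_DIGITS):
        raise ValueError("n is outside the range of stored digits")
    return int(PI_FRACTIONAL_DIGITS[n - 1])

print(f(1), f(2), f(3), f(4), f(5))  # 1 4 1 5 9
```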

Hi Magnus,

What type of entity do you believe .999… to be?

Thanks Ed

The quibble over function or number reminds me of grammar.

Function is to number as verb is to noun. The definition of the word is the same, but is the definition being used differently - as a doing or as a being? Are humans “beings” or “becomings”? Well they’re still humans.

Then there’s the distinction between the definite and indefinite article, or better yet - the type/token distinction. “A human” is one specific concrete specimen, whilst “human” is abstract humanity in general. Again, the definition of human: what we’re dealing with and what specifically is meant, is the same.

The same goes for function and number - the meaning, and what we’re dealing with is the same.
What is it that is the same in this case? Quantity.

“A quantity” is what I’ve been referring to as a concrete representation. “Quantity” is abstract.
You can represent “a quantity” as a function or number, arriving at it algorithmically (the journey) or the final result of doing so (the destination). “The quantity” represented is the same. “Quantity” means the same thing either way.

So yes, this objection of whether something is function or number is superficial at best, and meaningless at worst.

@wtf, what did I say about endless repetition when trying to deal rationally with irrationality?
The irrational are like flies trying to fly through a window, trying the same thing over and over in slightly different ways, and as soon as a new angle is attempted they forget their mistake in trying the old angle, and soon repeat their attempt.
The window is never escaped, sometimes even if you literally open the window for them.

I wonder if the mental block comes from how “1” is being thought of as being arrived at from one side only? In the case of “building” the representation (0.\dot9) from (0.9) through (0.99) etc. it’s approached from below only.
If it was representationally approached from above, would it also be “1 plus some infinitesimal” as well as “1 minus some infinitesimal” simultaneously?

The algorithmic functional “doing” to get there is superficial. How the number “looks” is superficial.

A couple of people have mentioned Cauchy sequences. (0.\dot9) is Cauchy, and there is no quantity other than “1” that it approaches.
Representationally, you can get arbitrarily close to “1”, but as above, whilst “the representations” can differ, “the quantity” is identical.
You can represent the quantity (0.\dot9) as different to “1” with a Dedekind cut, but again this is only representation. The “quantity” is equal.

Hi wtf,

You might be onto something here. I am thinking about Von Neumann and Godel numbering, but I need to give it some more thought.

However, with this specific example aren’t you getting into some trouble representing an uncountable object with a countable object? In this specific case I think you need sequences with limits.

Ed

Hi wtf,

I think I have screwed up my comments about countable/uncountable. Though I am still not certain about your representation for Pi - 3.

Ed

Perhaps some confusion can be cleared by signaling to the matter, or its corresponding idea relating to languages in general, mathematics being a quantifier, leading to this proposition:

‘A formula beginning with a quantifier is called a quantified formula. A formal quantifier requires a variable, which is said to be bound by it, and a subformula specifying a property of that variable.

Formal quantifiers have been generalized beginning with the work of Mostowski and Lindström.’

As far as generalization is concerned, it appears to pair with a tendency to integrate sets that functionally demand such, to qualify within reasonable set specifications.

So the function may be set differently,
but it tends to integrate within some mixed set consisting of both: of specified and more general characteristics.
At least this is what appears to be implied here.

I may be way off with this generalization, but it seems credible to me.

Hello.

I take (0.999\dotso) to represent the same thing as every other decimal number which is a sum of the following form:

(\cdots + d_2 \times 10^2 + d_1 \times 10^1 + d_0 \times 10^0 + d_{-1} \times 10^{-1} + d_{-2} \times 10^{-2} + \cdots)

Every (d_n) represents a decimal digit which is an integer from (0) to (9).
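
Purely as an illustration of that sum form, here is an exact-arithmetic sketch (Python’s Fraction is my own choice of tool, not something from the thread) of the finite partial sums when every digit after the point is 9:

```python
from fractions import Fraction

# Exact partial sums of 9*10**(-1) + 9*10**(-2) + ... + 9*10**(-n).
def partial_sum(n: int) -> Fraction:
    return sum(Fraction(9, 10 ** k) for k in range(1, n + 1))

for n in (1, 2, 5, 10):
    s = partial_sum(n)
    print(n, s, 1 - s)   # the gap to 1 is exactly 10**(-n) at each finite stage
```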

“Function” and “number” do not normally mean the same thing. You can make them mean the same thing, of course, but then, you must not equivocate.

How much money do you have? I have (f(x) = x^2) money. What does that mean? Normally, it means nothing. But of course, if you have a need to, you can make it mean something.

You can use horses to represent numbers. You can say “This kind of horse represents this kind of number”. For example, you can say that pegasuses represent number (1,000), centaurs number (100) and ponies number (10). This allows you to do arithmetic with horses. You can say “A pony multiplied by a centaur equals a pegasus” without being wrong.

You can do the opposite too. You can say “This kind of number represents this kind of horse”. You can say number (1,000) represents pegasuses, and because pegasuses can fly, you can conclude, without making a mistake, that (1,000) can fly too.

It’s all fun and games until you equivocate.

For example:

  1. All numbers are shapeless.
  2. All horses are numbers.
  3. Therefore, all horses are shapeless.

Horses qua numbers are indeed shapeless, but what is argued here is that horses qua animals are shapeless, which is not true.

In the same way, (0.999\dotso) qua limit is indeed (1) but what is being argued is that (0.999\dotso) qua sum is (1), and that is not true.

You’re not? What do you think a decimal expression like .1415926… means? It’s a map from the positive integers to the decimal digits. 1 goes to 1. 2 goes to 4. 3 goes to 1.

The expression is then mapped to a convergent infinite series by summing 1/10 + 4/100 + …

I cannot imagine you not knowing this. Please explain where you’re coming from. You have me totally confused.

Then I’m confused. If the question is, “Is .999… = 1 true in standard math?” the answer is yes without a shred of doubt. I could point you to a hundred books on calculus and real analysis. If we’re talking about standard math, how could anyone hold a different opinion?

But yes! In which case .999… = 1 in standard math and there is no question or dispute, other than to clarify for people what the notation means and why it’s true within standard math. It’s a theorem in ZF set theory. It’s a convergent geometric series in freshman calculus. It’s even true in nonstandard analysis, which some people aren’t aware of. There’s just no question about the matter.

So you really have me puzzled, Magnus. If you agree we’re talking about standard math, what is the basis of your disagreement?

Ok. So you admit that you are NOT talking about standard math, but rather about your private nonstandard use of mathematical symbols. In which case you can define .999… = 47 and I would have no objection. If that’s one of the rules in your game, I am fine with it; just as I learned to accept that the knight can hop over other pieces in standard chess.

You have already said that you are talking about standard math AND that you are talking about your own private nonstandard math. It’s not hard to misunderstand you!

Now let me talk delicately about infA. When I came to this forum several years ago, James was already a prolific poster, an ILP Legend to beat all ILP legends. I am reluctant to criticize him since he is not here to defend himself. He has far more mindshare on this forum than I do. I respect his prolific output, if not always its content.

That said, the concept of infA is confused and wrong in the extreme. The idea seems to be some sort of mishmash of the ordinal numbers, in which we do “continue counting” after all the natural numbers are exhausted, and nonstandard analysis, in which there are true infinite and infinitesimal numbers. The infA concept borrows misunderstood elements from each of these ideas and simply makes a mess.

One really valuable thing I got from this thread a few years ago is that James caused me to go deeply into nonstandard analysis, to the point where I understand its technical aspects. For that I appreciate James. But the infA concept is just bullpucky, I don’t know what else to say.

It means exactly what I’ve described to @Ed3. It’s a particular map from the positive integers to the set of decimal digits; a constant map, in fact, in which f(n) = ‘9’ for all inputs n. We then interpret this symbol as a real number as in the theory of geometric series, in which it’s proved rigorously that .999… = 1.

Again I agree that if you choose to make up a new system in which .999… has some other meaning, you are perfectly within your rights. After all there do happen to be many variants of chess, played on infinite boards, or with a new piece called the Archbishop, etc. If someone enjoys playing alternate versions of standard games it’s ok by me.

Yes ok. Then we are done!

I fail to follow that. Did you learn geometric series at one point? The definition of a limit? I can’t tell where you’re coming from.

Actually the sum is defined as the limit. That’s what the sum of a series is. It’s a clever finessing of the idea of “the point at the end” or whatever. You are adding your own faulty intuition. If you would consult a book on real analysis you would find that the sum of a geometric series is defined as the limit of the sequence of partial sums; and that the limit of a sequence is defined as a number that the sequence gets arbitrarily close to. I explained all this to @Phyllo the other day. That’s the textbook definition. You’re just wrong in your impression, either because you had a bad calculus class (as most students do) or none at all. It’s not till Real Analysis, a class taken primarily by math majors, that one sees the formal definition and comes to understand that the sum IS defined as the limit of the sequence of partial sums. That cleverly avoids the confusion you’ve confused yourself with.

Ah, the evil cabal of mathematicians. I will fully agree with you that most math TEACHERS form an evil cabal. One doesn’t get all this stuff sorted out properly till one sees the formal definitions; at which point, one learns that the limit of a sequence is defined by arbitrary closeness. The belief you have is a bad intuition that mathematical training is designed to clarify. It’s sad that we don’t show this to people unless they’re math majors, and you can rest assured that when I am in charge of the public school math curriculum the teaching of the real numbers will be a lot better.

Till then, I apologize on behalf of the math community that you weren’t taught better. But limits are very rigorously defined and your idea is just wrong.

Again I hope I’m not coming on too strong, I’m criticizing your ideas and not you. I know you are sincere. Except for the part where you say you’re talking about standard math AND that you’re not. That point confused me.

As I say, due to JSS’s extreme prolificness (if I may coin a word) on this site, plus the fact that he’s not here to defend his ideas, I prefer not to argue with him due to basic fairness.

So if you could frame your points from the beginning and not ascribe them to JSS, then I won’t be in a position to try to understand the ideas of JSS, which I found faulty four years ago. Just explain your ideas to me in your own terms. Else I’m arguing with someone who can’t argue back.

Again, as I’ve said many times, if you would consult a book on calculus or real analysis, you would know that .999… represents the geometric series 9/10 + 9/100 + … whose sum, as defined in real analysis, is 1. I know you have an infinite series on one hand and a number on the other, but they are indeed equal mathematical objects, and this can be rigorously proved from first principles.
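
For reference, the standard geometric-series computation being appealed to here (the usual formula for (|r| < 1), nothing beyond what the textbooks say):

```latex
% The geometric-series formula, sum_{k>=1} a r^{k-1} = a/(1-r) for |r| < 1,
% applied with a = 9/10 and r = 1/10:
\[
  0.999\ldots \;=\; \sum_{k=1}^{\infty} \frac{9}{10^{k}}
  \;=\; \frac{9/10}{1 - 1/10}
  \;=\; 1 .
\]
```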

I do desire to understand the nature of your disagreement. But as you keep falling back on claiming that the standard definition of a limit is a lie, without providing any more supporting details, I remain puzzled. The definition of a limit is what it is, as is the way the knight moves. Rules in a formal game. They can’t be right or wrong, they’re just formal rules that have turned out to be interesting and (in the case of math) useful in understanding the world.

That can not be rationally disputed. One need only consult hundreds if not thousands of math texts that describe the standard definition of limits.

I could see your saying that mathematicians have gotten it wrong. But I can’t see your saying that they didn’t say what they did say!

It is not necessary for a sequence or sum to “attain” its limit. That’s the whole point of limits. Attainment is NOT part of the definition. Arbitrary closeness is.

Likewise, SOME properties are preserved when we pass to the limit, and others aren’t. For example in the sequence 1/2, 1/3, 1/4, 1/5, 1/6, … each term is strictly greater than 0, but the limit is 0. It is a fact taught to math majors that when we pass to the limit, (<) becomes (\leq) and (>) becomes (\geq).

The terms of 1/2, 1/3, 1/4, … get arbitrarily close and STAY arbitrarily close to 0. That is the definition of the limit. You have faulty ideas because you haven’t grokked the formal definition of a limit. That’s all I can see of your objection.

That’s just something you made up. It’s not mathematically true. Limits are based on arbitrary closeness, not attainment. It’s perfectly clear that 1/2, 1/3, … never “attains” the value 0. The limit is 0 because the terms get arbitrarily close to 0. That is the definition. And even though each element is strictly greater than 0, the limit is 0. That’s how limits work.
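
Spelled out against the arbitrary-closeness definition (a sketch of the standard argument, not anything new to the thread):

```latex
% Given any epsilon > 0, choose a whole number N > 1/epsilon. Then for every n >= N:
\[
  0 \;<\; \frac{1}{n} \;\le\; \frac{1}{N} \;<\; \varepsilon,
\]
% so the terms are (and stay) within epsilon of 0, i.e. the limit is 0,
% even though no term of 1/2, 1/3, 1/4, ... ever equals 0.
```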