phyllo wrote:Series which converge add ever smaller numbers to the total until ultimately they add nothing more. That's the basic concept. It's nothing mysterious or inaccessible. One can see it by performing many calculations on a computer.

This is a point of great interest. It illustrates the profound difference between the mathematical real numbers on the one hand, and floating-point (or any other) representation of the real numbers on a physical computer on the other.

The classic example that shows the distinction is the harmonic series

$$\sum_{n=2}^\infty \frac{1}{n} = \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \frac{1}{5} + \dots$$

[Some people prefer to start counting at \(1\); it makes no difference.]

This series converges on a computer but diverges in the real numbers. What you say does apply to computer math, but it does not apply to the real numbers. Let's look more closely.

On any physical implementation of floating point arithmetic, there is a natural number \(N \in \mathbb N\) such that \(\frac{1}{n}\) is indistinguishable from zero whenever \(n > N\).

Therefore, no matter what the implementation, at some point the entire tail of the series is effectively a string of zeros and adds nothing to the sum. The finitely many terms that do register add up to a finite total, so on the computer the series converges.
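Here is a minimal sketch in Python showing that absorption directly. It assumes IEEE 754 double precision with round-to-nearest, and uses `math.ulp` (Python 3.9+); the particular value of the running sum is just an illustrative stand-in.

```python
import math

# Sketch (assumes IEEE 754 doubles, round-to-nearest):
# once 1/n is smaller than half an ulp of the running sum s,
# s + 1/n rounds back to s, and the partial sums stop changing forever.
s = 10.0                 # stand-in for a partial sum the series has reached
ulp = math.ulp(s)        # gap between s and the next representable float
n = int(2.0 / ulp) + 1   # chosen so that 1/n < ulp / 2
assert s + 1.0 / n == s  # the term is absorbed: it adds nothing at all
```

Every term after \(\frac{1}{n}\) is smaller still, so from that point on the computed partial sums never move again.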

On the other hand, the harmonic series diverges in the real numbers.

$$\underbrace{\frac{1}{2}}_{~= ~\frac{1}{2}} + \underbrace{\frac{1}{3} + \frac{1}{4}}_{> ~\frac{1}{4} ~+ ~\frac{1}{4} ~= ~\frac{1}{2}} + \underbrace{\frac{1}{5} + \frac{1}{6} + \frac{1}{7} + \frac{1}{8}}_{> ~\frac{1}{8} ~+ ~\frac{1}{8} ~+ ~\frac{1}{8} ~+ ~\frac{1}{8} ~= ~\frac{1}{2}} + \underbrace{\frac{1}{9} \dots \frac{1}{16}}_{> ~8 ~\times ~\frac{1}{16} ~= ~\frac{1}{2}} + \underbrace{\frac{1}{17} \dots \frac{1}{32}}_{> ~16 ~\times ~\frac{1}{32} ~= ~\frac{1}{2}} +\dots$$

We can always grab the next \(2^n\) terms of the series to form a finite block whose sum is greater than \(\frac{1}{2}\). No matter how large a number you give me, I can go out far enough in the series and find a partial sum that's greater than your number. If you challenge me with one million, for example, I'll just grab the next two million such blocks, each of which sums to more than \(\frac{1}{2}\). The corresponding partial sum of the series will therefore exceed one million.
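The grouping argument can be checked exactly, with no floating point involved at all, using rational arithmetic. A sketch with Python's standard `fractions` module (the range of \(k\) is just a sample):

```python
from fractions import Fraction

# Exact rational arithmetic: verify that each block of terms
#   1/(2^k + 1) + ... + 1/2^(k+1)
# really does sum to more than 1/2, for the first several blocks.
for k in range(1, 11):
    block = sum(Fraction(1, n) for n in range(2**k + 1, 2**(k + 1) + 1))
    assert block > Fraction(1, 2)
```

Since each block contributes more than \(\frac{1}{2}\), taking \(m\) blocks pushes the partial sum past \(\frac{m}{2}\), which is the whole argument.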

So the harmonic series diverges. It does not have any finite sum.

Your idea that the terms get so close to zero that they don't matter ONLY APPLIES to the computer version! In the real numbers, all those tiny little crumbs at the end keep adding up, and the sum goes higher and higher without any finite limit.

It's kind of weird to imagine.

Every tail diverges. No matter how far you go out, where the numbers are really, really tiny, the sum of the terms after that point STILL fails to converge.

This example in a nutshell is the difference between the real numbers and computer implementations of floating point arithmetic. It also serves as the standard example of a series that fails to converge despite its terms going to zero.

Conclusion: Computers are inadequate to capture the essential nature of the real numbers. Intuitions about computer arithmetic do not necessarily translate to the reals.