foobuzz

by Valentin, July 25 2020, in science

The sane hardship of understanding that 0.999... = 1

One of the most important realizations I have had about mathematics is that there is no global definition of "infinity". You can't just drop the word "infinite" into any context, as if one day mathematicians had held a meeting to decide the meaning of "infinity" and made it an element of reasoning you can use with any mathematical object.

In set theory, a set is "infinite" if, for every natural number, the set has a subset whose cardinality is that of this natural number. In real analysis, a function has a limit L as x approaches "infinity" if, for any tiny ɛ, there is a point after which the function won't ever be farther from L than the distance ɛ. The term "infinity" obviously comes from an intuition about the philosophical notion of infinity, but it is then defined formally using very traditional and non-mystical notions and notations, and there isn't any global notion of "infinity". You can't take some random mathematical thing and ask "What happens when this goes to infinity?".
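For concreteness, here is one standard way to write that second definition out formally (the usual ɛ–M formulation):

\[
\lim_{x \to \infty} f(x) = L
\quad \Longleftrightarrow \quad
\forall \varepsilon > 0,\ \exists M,\ \forall x > M:\ |f(x) - L| < \varepsilon
\]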

Now, when someone comes and says "hey, do you know that 0.999 with an infinite amount of 9s after the decimal point is equal to 1?", what most people do is try to picture the number 0.999 with an actual infinite number of nines after the decimal point and then reason about this monstrosity. This is an interesting intellectual exercise, but when you're doing that, you're doing philosophy, not mathematics. If you want to do mathematics, what you should be asking back is "What is the definition of a number with an infinitely repeating decimal?"

There are multiple definitions for it, but the one that makes the most sense to me is an infinite series. We can simply state that 0.999... can be written as 0.9 + 0.09 + 0.009 + 0.0009 + ... Formally, it would be written as follows:
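\[
0.999\ldots \;=\; \sum_{n=1}^{\infty} \frac{9}{10^n}
\]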

This form is very helpful, because the notion of an infinite series is very well defined: it is simply the limit of the sequence of partial sums. In other words, 0.999... is the limit of the sequence 0.9, 0.99, 0.999, 0.9999, etc. Let's call this sequence 0.(9)n for later reference. I like this definition because it reminds me of very important aspects of sequences and their limits that I learned when I was in school (illustrated numerically after the list):

  1. There is no such thing as an "infinite term" of a sequence (and some teachers won't like it if you ever dare say something like that).
  2. The limit of a sequence may very well be distinct from any of its terms.
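To see this numerically, here is a minimal Python sketch (the name truncation is mine, not the author's): each term of 0.(9)n falls short of 1 by exactly 10^-n, so no term ever equals 1, yet the gap shrinks below any fixed ɛ.

    # Partial sums of 0.9 + 0.09 + 0.009 + ...  (the sequence 0.(9)n)
    # Using Fraction to avoid floating-point rounding.
    from fractions import Fraction

    def truncation(n):
        """n-th finite truncation: 0.99...9 with n nines, as an exact fraction."""
        return sum(Fraction(9, 10**k) for k in range(1, n + 1))

    for n in (1, 2, 5, 10):
        t = truncation(n)
        # The gap to 1 is exactly 10**-n: always positive, but shrinking.
        print(f"n={n}: term={float(t)}, gap to 1 = {float(1 - t)}")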

I think those two aspects are at the origin of the difficulty there is in accepting that 0.999... = 1. The decimal representation with its "infinite digits" is misleading because it leads us to ponder the notion of an "infinite term" (of finite truncations), even though that very notion is explicitly discouraged as soon as we actually consider the sequence of truncations itself. Then our minds are lost wandering in the vast pastures of infinity, where it's hard to understand how a limit may not belong to the sequence itself. I think that what people really mean when they find it problematic that 0.999... = 1 is that the sequence 0.(9)n never reaches 1 no matter how many times you look at the next term.

In other words, consider the set of all elements of the sequence: {0.9, 0.99, 0.999, 0.9999, ...}. This set is obviously infinite, and 1, the limit of the sequence, does not belong to it. When we ponder the notion of "infinity" in the context of 0.999..., what we might envision is the journey through this "infinite" set. With this in mind, it's hard to see how 1 can have anything to do with it, since it lies outside the set.

But this raises the question: what is a "limit"? In layman's terms, the limit of a sequence is a real number that the sequence keeps getting closer to (without the requirement that it actually reaches it at any point). More formally, it means that for any tiny real number ɛ, there is a rank after which the sequence won't ever be farther from the limit than the distance ɛ. (This is the same definition as the one from real analysis presented in the introduction.) Even without a formal proof, it seems pretty obvious that the sequence 0.9, 0.99, 0.999,... has 1 as its limit, and therefore, if you actually understand the proper meaning of the notation, that 0.999... = 1.
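For reference, the formal version is the standard one (with a_n denoting the n-th term of the sequence):

\[
\lim_{n \to \infty} a_n = L
\quad \Longleftrightarrow \quad
\forall \varepsilon > 0,\ \exists N \in \mathbb{N},\ \forall n \geq N:\ |a_n - L| < \varepsilon
\]

For our sequence the check is one line: \(1 - 0.(9)_n = 10^{-n}\), so given any ɛ it suffices to pick an N with \(10^{-N} < \varepsilon\).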

Now, since the limit of a bounded increasing sequence also happens to be its supremum (least upper bound), this leads us to another definition, which is the one actually used by Wikipedia:

The real number represented by an infinite decimal is the least upper bound of its finite truncations.

What this means is that if you take the sequence 0.(9)n, then the number written "0.999..." or said to have "an infinite amount of 9s after the decimal point" is the smallest number that is greater than all terms of the sequence. For example, 3 is greater than all the terms of the sequence, so that's a candidate. Is it the smallest such number? Well, certainly not: 2 is greater than all terms of the sequence and it is less than 3. Can we find smaller? Sure. 1 is greater than all terms of the sequence. Is there something even smaller? Hmm... doesn't seem so. I think 1 is the smallest. This, of course, is only intuition so far, but the proof is easy enough to follow.
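Here is one way that proof can go (a sketch in my own words, not necessarily the one the author has in mind). Every truncation satisfies

\[
0.(9)_n = 1 - 10^{-n} < 1,
\]

so 1 is an upper bound of the set of truncations. Now suppose some \(b < 1\) were also an upper bound. Choose \(n\) large enough that \(10^{-n} < 1 - b\); then \(0.(9)_n = 1 - 10^{-n} > b\), so \(b\) is not an upper bound after all. Hence no number smaller than 1 works, and 1 is the least upper bound: 0.999... = 1.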