tl;dr: rounding errors mean you can never be sure that a very small positive number won't end up being treated as zero by the computer.
Computers (well, the ones we’ll be using) store floats in standard form [https://www.bbc.com/bitesize/guides/zxsv97h/revision/1], except that they use binary (base-2) numbers instead of decimal.
For example, when we say:
float x = 7681.8932f;
We have the number x in standard form (in the usual human base 10):
float x = 7.6818932 * 10^3; // ^ means "to the power of" in this context.
In that example, the mantissa is 7.6818932 and the exponent is 3. In standard form, the decimal point always goes after a single non-zero digit: e.g. 7.3 * 10^1 (== 73) or 9.7383 * 10^-3 (== 0.0097383).
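If you want to see standard form for yourself, the "E" (scientific) number format will print it for you. A tiny sketch, assuming you paste it into Start() of some Unity script (the exact digits printed may differ slightly on your machine):
float x = 7681.8932f;
Debug.Log(x.ToString("E")); // scientific notation, something like "7.681893E+003": mantissa 7.681893, exponent 3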
If you have a floating-point number, the computer only has 32 or maybe 64 bits of data to store it (some bits for the mantissa, i.e. the value, and some for the exponent, i.e. the scale).
Let's assume the computer is using 32 bits (4 bytes) for the mantissa, and another 32 bits (4 bytes) for the exponent. (A real 32-bit float actually packs a sign bit, an 8-bit exponent and a 23-bit mantissa into a single 32 bits, the IEEE 754 layout, but the simplified version makes the idea easier to follow.)
The computer might store this information by taking the mantissa without the decimal point, 76818932, which in binary (base 2) is ‘100100101000010100111110100’. It then pads 0s to the left until it’s 31 bits long (the 32nd bit says whether the number is positive or negative).
The exponent, 3, would be written as ‘11’ in binary, or “000 0000 0000 0000 0000 0000 0000 0011” with 31 bits [again, the far-left bit is 0 if the exponent is positive, or 1 if it is negative].
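You don't have to take those binary strings on faith; the standard Convert class will spit them out for you. A quick sketch (needs "using System;" and "using UnityEngine;", pasted into any Unity script):
Debug.Log(Convert.ToString(76818932, 2));                  // prints 100100101000010100111110100
Debug.Log(Convert.ToString(76818932, 2).PadLeft(31, '0')); // the same digits padded to 31 bits
Debug.Log(Convert.ToString(3, 2).PadLeft(31, '0'));        // the exponent: "11" padded to 31 bits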
In other words, when we write float x = 7681.8932f;, the computer sees something like this:
x := <0000 0100 1001 0100 0010 1001 1111 0100, 0000 0000 0000 0000 0000 0000 0000 0011>
Now, the number zero would be something like this:
0 := <0000 0000 0000 0000 0000 0000 0000 0000, 0000 0000 0000 0000 0000 0000 0000 0001>
(yes, 0 = 0 * 2^1; shocking, I know.)
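If you're curious what the real bits look like (remember, an actual float uses the IEEE 754 layout rather than the simplified two-chunk layout above, but the idea is the same), here's a sketch you can paste into Start():
// needs "using System;" and "using UnityEngine;"
float x = 7681.8932f;
byte[] bytes = BitConverter.GetBytes(x);
if (BitConverter.IsLittleEndian) Array.Reverse(bytes); // print the most significant byte first
string bits = string.Join(" ", Array.ConvertAll(bytes, b => Convert.ToString(b, 2).PadLeft(8, '0')));
Debug.Log(bits); // 32 bits in total: 1 sign bit, 8 exponent bits, 23 mantissa bits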
But what about numbers that start off non-zero and get smaller and smaller? Suppose we have:
float z;

void Start()
{
    z = 1f;
}

void Update()
{
    z *= 0.9f; // z gets smaller and smaller, and (mathematically) converges to zero
               // try this on a calculator if you don't believe me
    if (z > 0) Debug.Log("Z is still positive");
    else Debug.Log("Z is now zero.");
}
Note that, mathematically, z will ALWAYS be positive (because if you multiply two positive numbers, the result is also positive). But from a computer’s point of view, z might become 0. Computers are terrible mathematicians.
What do you think this code will do? The answer is: WE DON’T KNOW. It might print “Z is still positive” forever, or it might eventually reach exactly zero. The problem is that when z gets super duper small, there is only one bit left to say “z is still positive”, and if you multiply that by 0.9f, what do you get?
Well, if you have JUST ONE bit of data, then the rounding errors matter a lot: 1 * 0.9f == 1, or 1 * 0.9f == 0, depending on whether the computer rounds to the nearest answer or rounds down.
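You can run the experiment yourself. Here's a quick sketch for Start(); what it prints depends on how your hardware rounds, which is exactly the point:
// needs "using UnityEngine;"
float z = 1f;
for (int i = 0; i < 2000; i++)
{
    z *= 0.9f; // mathematically this stays positive forever
}
Debug.Log("after 2000 multiplications, z = " + z + " and (z > 0) is " + (z > 0f));
// On typical hardware with round-to-nearest, z gets stuck at the tiniest positive
// float and never quite reaches zero; platforms that flush tiny ("denormal") values
// to zero will give you exactly 0 instead.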
To finish my point: epsilon, i.e. Mathf.Epsilon, is probably stored as something like this (inside the Mathf class):
public static readonly float Epsilon = 0.000000000000000001f;
// I don't know how many zeros, but you get my point
To a computer, this looks like:
Mathf.Epsilon := <0000 0000 0000 0000 0000 0000 0000 0001, 1111 1111 1111 1111 1111 1111 1111 1111>
(the mantissa is just 1, and the exponent is “the most negative number you can represent with those 32 bits”; the details are boring, just believe me).
Now, Mathf.Epsilon > 0 is true, from the computer’s point of view at least. But if you do something like:
float z = Mathf.Epsilon / 2;
then you’re asking for trouble. z could be Mathf.Epsilon, or 0, or even -Mathf.Epsilon.
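You can see what your particular machine decides with one last sketch (again for Start(); the printed values depend on your platform):
// needs "using UnityEngine;"
float e = Mathf.Epsilon;
float half = e / 2f;
Debug.Log("Mathf.Epsilon     = " + e);
Debug.Log("Mathf.Epsilon / 2 = " + half);
Debug.Log("(half > 0) is " + (half > 0f));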