When working with floating-point numbers, exact, bit-for-bit equality is almost never what you want. The result of most floating-point operations, such as addition, multiplication and trigonometric functions, cannot be represented exactly due to the limited precision of floating-point numbers. Furthermore, in most practical situations we are only interested in a result that is "close enough", not one that is correct in every available digit.
Say you want to compare two floating-point numbers $a$ and $b$ and consider the error $|a - b|$. It is natural to compare this error to some bound which is relative to the size of the numbers,

$$|a - b| \le \varepsilon_{\mathrm{rel}} \cdot \max(|a|, |b|).$$
Using $\max(|a|, |b|)$ ensures that this relation is symmetric in $a$ and $b$. This is a nice property to have, as it would be unfortunate if $a$ could be close to $b$ while $b$ was not close to $a$. We could also use $\min(|a|, |b|)$, which would result in a stronger requirement, or the mean $(|a| + |b|)/2$, which would lead to a behaviour between the $\max$ and $\min$ expressions.
In the inequality above, the quantity $\varepsilon_{\mathrm{rel}}$ controls how close the numbers must be to be considered approximately equal. Using $\varepsilon_{\mathrm{rel}} = 10^{-n}$ means that roughly the $n$ most significant decimal digits are correct. For example, with $\varepsilon_{\mathrm{rel}} = 10^{-3}$ the inequality is true for $a = 1.0$, $b = 1.001$ but not for $a = 1.0$, $b = 1.01$. Also, it is true for $a = 100.0$, $b = 100.1$ but not for $a = 100.0$, $b = 101.0$.
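As a minimal sketch, the relative test can be written as a small Python function (the name `rel_close` and its default tolerance are illustrative choices, not taken from any library):

```python
def rel_close(a: float, b: float, rel_tol: float = 1e-9) -> bool:
    """Relative comparison: |a - b| <= rel_tol * max(|a|, |b|).

    Using min(abs(a), abs(b)) instead of max(...) would give the
    stronger variant discussed above.
    """
    return abs(a - b) <= rel_tol * max(abs(a), abs(b))
```

This reproduces the examples above:

```python
>>> rel_close(1.0, 1.001, rel_tol=1e-3)    # first ~3 digits agree
True
>>> rel_close(1.0, 1.01, rel_tol=1e-3)
False
>>> rel_close(100.0, 100.1, rel_tol=1e-3)  # the bound scales with magnitude
True
```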
This way of checking closeness breaks down, however, when comparing numbers to zero or numbers close to zero. For instance, is $10^{-10}$ close to zero? Since $|10^{-10} - 0| = 10^{-10} = \max(|10^{-10}|, |0|)$, we see that it would require a relative tolerance of at least $\varepsilon_{\mathrm{rel}} = 1$ to be viewed as approximately equal according to the test above. In such cases it makes sense to look at the absolute error instead,

$$|a - b| \le \varepsilon_{\mathrm{abs}}.$$
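The breakdown is easy to check directly (the tolerance values below are arbitrary, for illustration only):

```python
>>> a, rel_tol = 1e-10, 1e-9
>>> abs(a - 0.0) <= rel_tol * max(abs(a), abs(0.0))  # relative test fails
False
>>> abs(a - 0.0) <= 1e-8  # absolute test with eps_abs = 1e-8 passes
True
```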
Combining these two inequalities we get that $a$ and $b$ are approximately equal, $a \approx b$, when

$$|a - b| \le \max(\varepsilon_{\mathrm{rel}} \cdot \max(|a|, |b|), \varepsilon_{\mathrm{abs}}).$$
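A minimal sketch of the combined test in Python (the function name is an illustrative choice; the defaults mirror those of `math.isclose` mentioned below):

```python
def approx_equal(a: float, b: float,
                 rel_tol: float = 1e-9, abs_tol: float = 0.0) -> bool:
    """True when |a - b| <= max(rel_tol * max(|a|, |b|), abs_tol)."""
    return abs(a - b) <= max(rel_tol * max(abs(a), abs(b)), abs_tol)
```

Unlike this sketch, the real implementation also takes care of infinities and NaNs.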
This is the function suggested for approximate equality in a Python Enhancement Proposal (PEP 485) from 2015.
It is implemented as `isclose` in the `math` module (see the CPython implementation).
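For example:

```python
>>> import math
>>> math.isclose(1.0, 1.0 + 1e-10)  # default rel_tol=1e-9
True
>>> math.isclose(1e-10, 0.0)  # default abs_tol=0.0
False
>>> math.isclose(1e-10, 0.0, abs_tol=1e-9)
True
```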
Some rules of thumb for choosing $\varepsilon_{\mathrm{rel}}$ and $\varepsilon_{\mathrm{abs}}$:
- Use $\varepsilon_{\mathrm{rel}} = 10^{-n}$ when you want (roughly) $n$ correct decimal digits.
- Let $\varepsilon_{\mathrm{abs}}$ determine when a number is considered (close to) zero, $|a| \le \varepsilon_{\mathrm{abs}} \iff a \approx 0$. Use $\varepsilon_{\mathrm{abs}} = 0$ if you don't need to consider numbers close to zero. Both rules are combined in the example after this list.
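Putting both rules together (the tolerance choices below are examples):

```python
>>> import math
>>> # Aim for roughly 7 correct digits; treat magnitudes below 1e-12 as zero.
>>> math.isclose(sum([0.1] * 10), 1.0, rel_tol=1e-7, abs_tol=1e-12)
True
>>> math.isclose(sum([0.1] * 10) - 1.0, 0.0, rel_tol=1e-7, abs_tol=1e-12)
True
```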
Some extra resources to check out:
- Comparing Floating Point Numbers by Bruce Dawson.
- The Art of Computer Programming, Volume 2, Section 4.2.2, by Donald E. Knuth.
- Theory behind floating point comparisons from the Boost C++ library.
- What Every Computer Scientist Should Know About Floating-Point Arithmetic by David Goldberg.