by Fedor Iskhakov, ANU
Description: Binary and hexadecimal numbers. Floating point numbers. Numerical stability and potential issues. Numerical noise.
What is the result of the comparison?
a = 0.1
b = 0.1
c = 0.1
a+b+c == 0.3
False
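The sum is `False` because 0.1 is stored as the nearest binary fraction, and three copies of that approximation do not add up to the (also approximate) 0.3. The usual fix is to compare with a tolerance; a minimal sketch using the standard library's `math.isclose`:

```python
import math

a, b, c = 0.1, 0.1, 0.1

print(a + b + c == 0.3)              # False: exact binary comparison
print(math.isclose(a + b + c, 0.3))  # True: comparison within a relative tolerance
```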
interest = 0.04
compounding = 365
investment = 1000
t = 10  # years
daily = 1 + interest/compounding
amount = investment*(daily**(compounding*t))  # avoid shadowing the built-in sum
format(amount, '.25f')
'1491.7920028601754438568605110'
# using floats
interest1 = 0.04
compounding = 365*24
t = 100  # years
investment1 = 10e9  # ten billion
daily1 = 1 + interest1/compounding
sum1 = investment1*(daily1**(compounding*t))
print('Amount computed using naive computation: %0.20e'%sum1)
# the same using precise decimal representation
from decimal import Decimal, getcontext
getcontext().prec = 100  # set precision of decimal calculations
interest2 = Decimal(interest1)  # exact decimal image of the binary float
daily2 = 1 + interest2/compounding
investment2 = Decimal(investment1)
sum2 = investment2*(daily2**(compounding*t))  # using exact decimals
print('Amount computed using exact computation: %0.20e'%sum2)
diff = sum2 - Decimal.from_float(sum1)
print('The difference is: %0.10f'%diff)
Amount computed using naive computation: 5.45976514222177001953e+11
Amount computed using exact computation: 5.45976514236965270996e+11
The difference is: 14.7882694559
Numerical stability of the code is an important property!
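A hedged sketch of a more stable way to compute the same compound amount: `math.log1p` evaluates $\log(1+x)$ without ever forming $1+x$, so the tiny ratio `interest/compounding` is not rounded away before the huge exponent amplifies the error (variable names here mirror the example above):

```python
import math

interest = 0.04
compounding = 365*24
t = 100         # years
investment = 10e9  # ten billion

# Naive: rounding error in (1 + interest/compounding) is amplified
# by the exponent compounding*t
naive = investment * (1 + interest/compounding) ** (compounding*t)

# Stable: log1p avoids forming 1 + (small number) in floating point
stable = investment * math.exp(compounding * t * math.log1p(interest/compounding))

print('naive : %0.20e' % naive)
print('stable: %0.20e' % stable)
```

The stable version agrees with the high-precision Decimal result to well under a dollar, while the naive one is off by about $14.79.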
$ r $ — real number
$ b $ — base (radix)
$ d_0,d_1,d_2,...,d_k $ — digits (from lowest to highest)
$$ r = d_k \cdot b^k + d_{k-1} \cdot b^{k-1} + \dots + d_2 \cdot b^2 + d_1 \cdot b + d_0 $$

For example, for decimals $ b=10 $ (digits $ 0,1,\dots,9 $) we have

$$ 7,631 = 7 \cdot 1000 + 6 \cdot 100 + 3 \cdot 10 + 1 $$

$$ 19,048 = 1 \cdot 10000 + 9 \cdot 1000 + 0 \cdot 100 + 4 \cdot 10 + 8 $$

Now let $ b=2 $, so we only have digits 0 and 1

$$ 101011_{binary} = 1 \cdot 2^5 + 0 \cdot 2^4 + 1 \cdot 2^3 + 0 \cdot 2^2 + 1 \cdot 2 + 1 = 43_{10} $$

$$ 25_{10} = 16 + 8 + 1 = 2^4 + 2^3 + 2^0 = 11001_{binary} $$

Other common bases are 8 (octal) and 16 (hexadecimal, with digits $ 0,1,2,\dots,9,a,b,c,d,e,f $)
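These conversions can be checked with Python's built-in base tools (a small sketch; the numbers match the examples above):

```python
# parse a binary string back into a decimal integer
print(int('101011', 2))   # 43
# render a decimal integer in binary
print(format(25, 'b'))    # 11001
# bases 8 and 16 have built-ins too
print(oct(64), hex(255))  # 0o100 0xff
```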
$ 0_{binary} $ $ \rightarrow $ $ 1_{binary} $ $ \rightarrow $ $ 10_{binary} $ $ \rightarrow $ $ 11_{binary} $ $ \rightarrow $ ??
Is it possible to count to 1000 using 10 fingers?
In base-$ b $ it takes $ n $ digits to count up to $ b^n - 1 $
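A quick sanity check of this formula (and of the finger question), assuming each of the 10 fingers stores one binary digit:

```python
# n binary digits (fingers, each up or down) count from 0 up to 2**n - 1
n = 10
print(2**n - 1)   # 1023 — so 10 fingers are enough to count past 1000
```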
In base-$ b $ using $ k $ fractional digits

$$ 1.r = 1 + d_{-1} \cdot b^{-1} + d_{-2} \cdot b^{-2} + \dots + d_{-k} \cdot b^{-k} $$

$$ 1.5627 = \frac{15,627}{10,000} = 1 + 5 \cdot 10^{-1} + 6 \cdot 10^{-2} + 2 \cdot 10^{-3} + 7 \cdot 10^{-4} $$

Yet, for some numbers there is no finite decimal representation

$$ \frac{4}{3} = 1 + 3 \cdot 10^{-1} + 3 \cdot 10^{-2} + 3 \cdot 10^{-3} + \dots = 1.333\dots $$

$$ \frac{4}{3} = 1 + \frac{1}{3} = 1 + \frac{10}{3} \cdot 10^{-1} = 1 + 3 \cdot 10^{-1} + \frac{1}{3} \cdot 10^{-1} $$

$$ = 1.3 + \frac{10}{3} \cdot 10^{-2} = 1.3 + 3 \cdot 10^{-2} + \frac{1}{3} \cdot 10^{-2} $$

$$ = 1.33 + \frac{10}{3} \cdot 10^{-3} = 1.33 + 3 \cdot 10^{-3} + \frac{1}{3} \cdot 10^{-3} = \dots $$

The same happens in binary: $ \frac{1}{10} = 0.0\overline{0011}_{binary} $ has an infinitely repeating expansion. Therefore $ 0.1 $ can not be represented in binary exactly!
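A short illustration of this fact in Python: constructing a `Decimal` directly from the float `0.1` exposes the exact binary value that is actually stored.

```python
from decimal import Decimal

# Decimal(float) converts the stored binary value exactly, with no rounding,
# so it reveals what 0.1 really is inside a double
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625
```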
$ 0 \le r_0 < b $ — mantissa (coefficient)
$ b $ — base (radix)
$ e $ — exponent
Squeezing infinitely many real numbers into a finite number of bits requires an approximate representation
$ p $ — number of digits in the representation for $ r_0 $
$ e $ — exponent between $ e_{min} $ and $ e_{max} $, taking up $ p_e $ bits to encode
$$ r \approx \pm d_0.d_1 d_2 \dots d_p \cdot b^e $$

The float takes up a total of $ 1 + p + p_e $ digits, plus one bit for the sign
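For Python's floats (IEEE 754 double precision) these parameters can be inspected through the standard library's `sys.float_info`:

```python
import sys

# p: number of significand digits (bits, since b = 2)
print(sys.float_info.mant_dig)                         # 53
# exponent range [e_min, e_max]
print(sys.float_info.min_exp, sys.float_info.max_exp)  # -1021 1024
# machine epsilon: gap between 1.0 and the next representable float
print(sys.float_info.epsilon)                          # 2.220446049250313e-16
```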
Numbers are represented in binary form $ \Rightarrow $ floats can not be compared for equality
Loss of precision when subtracting close real numbers represented by floats $ \Rightarrow $ innocent formulas may in fact be numerically unstable
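A hedged sketch of such an instability: computing $\sqrt{x+1}-\sqrt{x}$ for large $x$ subtracts two nearly equal floats, while the algebraically identical form $1/(\sqrt{x+1}+\sqrt{x})$ avoids the cancellation (the example value of $x$ is chosen here for illustration):

```python
import math

x = 1e12
# Naive: the two square roots agree in many leading digits,
# so their difference retains only a few significant digits
naive = math.sqrt(x + 1) - math.sqrt(x)
# Stable: the mathematically equivalent form has no cancellation
stable = 1 / (math.sqrt(x + 1) + math.sqrt(x))

print(naive, stable)  # both near 5e-7, but the naive one has lost precision
```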
Overflow — the number is too large to be represented as a float
Underflow — the number is too small and is indistinguishable from zero
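Both cases can be demonstrated directly (a minimal sketch using Python floats):

```python
import sys

# Overflow: the result is too large to be represented as a float
print(sys.float_info.max)      # largest finite double, about 1.8e308
print(sys.float_info.max * 2)  # inf

# Underflow: the result is too small and becomes indistinguishable from zero
print(sys.float_info.min)      # smallest normal double, about 2.2e-308
print(1e-308 / 1e100)          # 0.0
```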
Look at these cases in the Lab