#!/usr/bin/env python
# coding: utf-8

# # Measurement Error Propagation

# Here we are interested in estimating the error of our result, provided we know the errors of the input variables. In experiments, measurements are always known only up to finite precision. It is important to quantify the resulting uncertainty before we can reliably interpret the results of the measurement.

# Suppose that we want to estimate the error of a function $f(x,y)$, where both $x$ and $y$ are known with finite precision. We either have a large number of measurements of $x$ and $y$, or we might know the probability distribution of the measurements and their averages.
#
# For now, let's assume that we have measured $x$ and $y$ and have a large number $n$ of measurements with values $[x_0,x_1,...,x_{n-1}]$ and $[y_0,y_1,...,y_{n-1}]$.

# Using statistics, we can define
#
# $$<x> = \frac{1}{n}\sum_{i=0}^{n-1} x_i$$
# $$<y> = \frac{1}{n}\sum_{i=0}^{n-1} y_i$$
# $$\sigma_x^2 = \frac{1}{n-1}\sum_{i=0}^{n-1} (x_i-<x>)^2$$
# $$\sigma_y^2 = \frac{1}{n-1}\sum_{i=0}^{n-1} (y_i-<y>)^2$$

# In many experimental measurements we can assume that the errors on the measurements of $x$ and $y$ are uncorrelated. On the other hand, there are cases of non-negligible correlation between the measurements of different variables. In the latter case, the covariance is also nonzero:
#
# $$cov(x,y)=\frac{1}{n-1}\sum_{i=0}^{n-1} (x_i-<x>)(y_i-<y>)$$
#
# $cov(x,y)$ quantifies the degree of correlation between the variables $x$ and $y$.
#
# Using the inputs $<x>$, $<y>$, $\sigma_x^2$, $\sigma_y^2$, and $cov(x,y)$, we want to estimate the error of the function $f$, which we denote by $\sigma_f$.

# If we have $n$ measurements of $x$ and $y$, we could calculate it directly from the values $f_i=f(x_i,y_i)$:
#
# $$\sigma^2_f=\frac{1}{n-1}\sum_{i=0}^{n-1}(f_i-<f>)^2$$
#
# Below we derive a general expression for $\sigma_f$, knowing the function $f(x,y)$, the input variables, and their errors.
#
# We will work under the assumption that the errors are small and can be treated in a linear approximation. If only the variable $x$ has a finite (and small) error, we can estimate
#
# $$f(x_i)-f(<x>)\approx (x_i-<x>) \frac{\partial f(<x>)}{\partial x}$$
#
# which is correct up to linear order. It then follows that
#
# $$\sigma^2_f = \frac{1}{n-1}\sum_{i=0}^{n-1} (f(x_i)-f(<x>))^2 = \frac{1}{n-1}\sum_{i=0}^{n-1} (x_i-<x>)^2 \left(\frac{\partial f(<x>)}{\partial x}\right)^2 = \left(\frac{\partial f(<x>)}{\partial x}\right)^2 \sigma^2_x $$
#
# For shorter notation, we will denote the derivative
#
# $$\frac{\partial f(<x>)}{\partial x}\equiv \frac{\partial f}{\partial x}. $$
#
# Note that we always evaluate the derivative at the average value of $x$.
#
# For a single variable with uncertainty, we thus simply have
#
# $$\sigma_f = \left|\frac{\partial f}{\partial x}\right|\sigma_x$$
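# Before adding the second variable, here is a minimal numerical check of the single-variable formula (a sketch; the function $f(x)=x^2$ and the numbers below are only illustrative):

# In[ ]:

import numpy as np

rng = np.random.default_rng(0)
xm, sx = 2.0, 0.01                  # average of x and its (small) error
xs = rng.normal(xm, sx, 100_000)    # simulated measurements of x

fs = xs**2                          # f(x) = x^2 evaluated on each measurement
print(fs.std(ddof=1))               # sample estimate of sigma_f
print(abs(2*xm)*sx)                 # linear propagation: |df/dx| * sigma_x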
# If the error of $y$ and the covariance are also finite, we need to add more terms. We start by estimating
#
# $$f(x_i,y_i)-f(<x>,<y>)\approx (x_i-<x>) \frac{\partial f}{\partial x} + (y_i-<y>)\frac{\partial f}{\partial y} + O(\Delta_x^2,\Delta_y^2,\Delta_x\Delta_y).$$
#
# Up to linear order in all variables, we have
#
# $$
# \sigma_f^2=\frac{1}{n-1}\sum_{i=0}^{n-1} \left[(x_i-<x>) \frac{\partial f}{\partial x} + (y_i-<y>)\frac{\partial f}{\partial y} \right]^2
# $$
#
# which gives
#
# $$
# \sigma_f^2=\sigma^2_x \left(\frac{\partial f}{\partial x}\right)^2 + \sigma^2_y \left(\frac{\partial f}{\partial y}\right)^2+2\,cov(x,y)\,\frac{\partial f}{\partial x}\frac{\partial f}{\partial y}
# $$
#
# We can also write this equation in matrix form
#
# $$
# \sigma_f^2 =
# \begin{bmatrix}\frac{\partial f}{\partial x},&\frac{\partial f}{\partial y}
# \end{bmatrix}
# \begin{bmatrix}
# \sigma^2_x & cov(x,y)\\
# cov(x,y) & \sigma^2_y
# \end{bmatrix}
# \begin{bmatrix}\frac{\partial f}{\partial x}\\
# \frac{\partial f}{\partial y}
# \end{bmatrix}
# $$
#
# Now it is easy to generalize this equation to any number of variables
#
# $$
# \sigma_f^2 =
# \begin{bmatrix}\frac{\partial f}{\partial x_1},&\frac{\partial f}{\partial x_2},&\frac{\partial f}{\partial x_3},...
# \end{bmatrix}
# \begin{bmatrix}
# \sigma^2_{x_1} & cov(x_1,x_2) & cov(x_1,x_3) & ...\\
# cov(x_2,x_1) & \sigma^2_{x_2} & cov(x_2,x_3) & ...\\
# cov(x_3,x_1) & cov(x_3,x_2) & \sigma^2_{x_3} & ...\\
# ...
# \end{bmatrix}
# \begin{bmatrix}\frac{\partial f}{\partial x_1}\\
# \frac{\partial f}{\partial x_2}\\
# \frac{\partial f}{\partial x_3}\\
# ...
# \end{bmatrix}
# $$

# The following demonstrates covariance with a set of measurements of the variables $x$ and $y$: we display 400 random measurements (black dots) which have the same means (white squares) but different covariance structures. The marginal distributions of the $x$ and $y$ variables are shown as ‘bell curves’ on the top and right axes of each plot.

# Often we do not have many measurements, but we know the probability distribution of the measurement, $P(x)$, and we need to compute the variance from the distribution. The formulas are a straightforward generalization of the discrete formulas
#
# $$\sigma_x^2 = \int dx\, P(x) (x-<x>)^2$$
#
# where the probability is normalized,
#
# $$ \int dx\, P(x)=1, $$
#
# and the average is given by
#
# $$ <x> = \int dx\, P(x)\, x$$
#
# We can check that with the discrete probability distribution defined by
#
# $$P(x) = \frac{1}{n}\sum_{i=0}^{n-1} \delta(x-x_i)$$
#
# we recover the discrete formulas, except that $n-1$ is replaced by $n$ in the estimation of the variance. For continuum distributions it is assumed that $n$ is very large, hence the two formulas are equivalent.

# As an example we consider the uniform distribution on $[-a,a]$. The probability is then
#
# $$P(x) = \frac{1}{2a}\, \theta(-a<x<a)$$
#
# with mean $<x>=0$ and
#
# $$\sigma^2_x = \frac{1}{2a}\int_{-a}^a x^2 dx=\frac{a^2}{3} $$
#
# and hence $\sigma_x = \frac{a}{\sqrt{3}}$.
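# A quick numerical check of this result (a sketch; $a=1$ is chosen only for illustration):

# In[ ]:

from scipy.integrate import quad
import numpy as np

a = 1.0
P = lambda x: 1/(2*a)                         # uniform P(x) on [-a, a]
print(quad(P, -a, a)[0])                      # normalization, should be 1
var = quad(lambda x: P(x)*x**2, -a, a)[0]     # variance, should be a^2/3
print(np.sqrt(var), a/np.sqrt(3))             # both should give sigma_x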
# As the second example we consider the triangular distribution on $[-a,a]$. The probability is
#
# $$P(x) = \frac{1}{a}\left(1-\left|\frac{x}{a}\right|\right) \theta(-a<x<a)$$
#
# with mean $<x>=0$ and
#
# $$\sigma^2_x = \frac{1}{a}\int_{-a}^a \left(1-\left|\frac{x}{a}\right|\right)x^2 dx=\frac{a^2}{6}, $$
#
# hence $\sigma_x = \frac{a}{\sqrt{6}}$.

# In[46]:

import numpy as np
import matplotlib.pyplot as plt

# plot both distributions for a=1
N = 200
x = np.linspace(-2, 2, N)
yuniform = np.hstack((np.zeros(int(N/4)), 0.5*np.ones(int(N/2)), np.zeros(int(N/4))))
ytriang = [0 if abs(t) > 1 else 1. - abs(t) for t in x]
plt.plot(x, yuniform, label='P(uniform)')
plt.plot(x, ytriang, label='P(triangular)')
plt.legend(loc='best');

# As the simplest example, assume that we want to calculate a person's body mass index (BMI), which is the ratio
#
# $$\textrm{BMI} = \frac{\textrm{body mass (kg)}}{[\textrm{body height (m)}]^2}.$$
#
# It is often used as an (imperfect) indicator of obesity or malnutrition.
#
# Suppose that the scale tells you that your weight is 84 kg, but it is precise only to the nearest kilogram. The tape measure says you are between 181 and 182 cm tall, most likely 181.5 cm.
#
# We can safely assume that the measurements of height and weight are uncorrelated, i.e., the error in measuring the weight is not affected by the way we measure the height: $cov(m,h)=0$.
#
# We can say that the probability for the mass $m$ is uniformly distributed between 83.5 kg and 84.5 kg. The probability for the height most likely has a triangular distribution between 1.81 m and 1.82 m.
#
# The uniform distribution between $83.5$ and $84.5$ kg has standard deviation $\sigma_m = 0.5/\sqrt{3}=0.2887$ kg.
# The triangular distribution between 1.81 and 1.82 m has standard deviation $\sigma_h=0.005/\sqrt{6}=0.00204$ m.
#
# $$ BMI = \frac{m}{h^2} $$
# $$ \frac{\partial BMI}{\partial m} = \frac{1}{h^2} $$
# $$ \frac{\partial BMI}{\partial h} = -\frac{2m}{h^3}$$
# $$\sigma_{BMI} = \sqrt{\frac{1}{h^4}\sigma^2_m + \frac{4 m^2}{h^6}\sigma^2_h}$$

# Below we compute the BMI and its error using the formulas just derived:

# In[44]:

from numpy import sqrt

mi = 84;    sm = 0.5/sqrt(3)    # average mass and its standard deviation
hi = 1.815; sh = 0.005/sqrt(6)  # average height and its standard deviation
# BMI and its error
[mi/hi**2, sqrt(sm**2/hi**4 + sh**2*4*mi**2/hi**6)]

# Python provides the package `uncertainties`, which can automatically differentiate any function and propagate the error.
#
# Here we demonstrate it on the example of the BMI:

# In[45]:

from uncertainties import ufloat
from uncertainties.umath import *  # functions like sin(), etc.

m = ufloat(84, 0.5/sqrt(3))
l = ufloat(1.815, 0.005/sqrt(6))
m/l**2

# Clearly our derivation agrees with the package, and we are on the right track.

# ### Stochastic solution
#
# If we do not have the analytical form of the function, it is sometimes hard to differentiate it. The stochastic (Monte Carlo) solution is more robust and straightforward. The idea is to throw many points with the correct distribution and analyze the resulting distribution of points. Essentially, we go from continuous functions to a discrete representation by a random sample of Monte Carlo points.

# In[61]:

import numpy as np
from numpy.random import *

# N is the number of runs in our stochastic simulation
N = 100_000

def BMI():
    return uniform(83.5, 84.5) / triangular(1.81, 1.815, 1.82)**2

sim = np.zeros(N)
for i in range(N):
    sim[i] = BMI()

print("{} ± {}".format(sim.mean(), sim.std()))
plt.hist(sim, alpha=0.5, bins=100);
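# The BMI example assumed uncorrelated inputs. When the inputs are correlated, the general matrix formula derived above applies. A minimal sketch below (with purely illustrative numbers and the example function $f(x,y)=xy$) compares the matrix formula against the package helper `correlated_values`, which builds correlated `ufloat`s from nominal values and a covariance matrix:

# In[ ]:

import numpy as np
import uncertainties as u

nominal = [2.0, 3.0]                  # <x>, <y>  (illustrative values)
C = np.array([[0.04, 0.01],           # [[sigma_x^2, cov(x,y)],
              [0.01, 0.09]])          #  [cov(x,y), sigma_y^2]]

xc, yc = u.correlated_values(nominal, C)
f = xc*yc                             # the package propagates the correlation automatically

grad = np.array([yc.nominal_value, xc.nominal_value])   # (df/dx, df/dy) at the averages
print(f)
print(np.sqrt(grad @ C @ grad))       # matrix formula grad^T C grad, should match f.std_dev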
# ### More information on using the *uncertainties* package

# In[69]:

import uncertainties as u
from uncertainties import ufloat
from uncertainties.umath import *  # functions like sin(), etc.

print(u.umath.__all__)  # many, but not all, functions are defined in umath

# Suppose we need to use a special function `erf`, which is not available in the `umath` part. Can we still compute the uncertainty with the package?
#
# The package provides a function (a decorator) that wraps any numerically given function and produces a function that can work with uncertainties through `ufloat`. The wrapper is `u.wrap`.
#
# The limitation is that such a function needs to return a float given float inputs (it should not return a list). There is no need to specify any derivatives; they appear to be computed internally by numerical differentiation.

# In[77]:

from scipy.special import erf

u.wrap(erf)(ufloat(1, 0.5))

# How do we know that the package got the correct answer? We can check it with a simple finite-difference estimate of the derivative:

# In[86]:

x0 = 1.0
dx = 1e-4
df = (erf(x0+dx) - erf(x0-dx))/(2*dx)
df*0.5   # sigma_f = |df/dx| * sigma_x

# Indeed, the package works correctly for this simple case.
#
# The wrapper also works for functions that are available only numerically, for example the root of $a\,\mathrm{erf}(x)-1=0$ found by `fsolve`; here we propagate the uncertainty of the parameter $a$ into the solution:

# In[87]:

from scipy.optimize import fsolve

def f(x, a):
    return a*erf(x) - 1

def g(x0, a):
    sol, = fsolve(f, x0, args=(a,))
    return sol

u.wrap(g)(1.0, ufloat(2, 0.1))

# ## Projectile motion with uncertainty

# Next we want to understand how hard it is to hit a target with a projectile, taking into account a realistic air resistance and its uncertainty due to the weather and the density of the atmosphere.
#
# A spherical projectile of mass $m$ launched with some initial velocity moves under the influence of two forces: gravity, $\vec{F}_g=-mg \vec{e}_z$, and air resistance (drag), $\vec{F}_D=- \frac{1}{2} c \rho A v \vec{v}$, acting in the direction opposite to the projectile's velocity and proportional to the square of that velocity. Here $c$ is the drag coefficient, $\rho$ the air density, and $A$ the projectile's cross-sectional area.
#
# The relevant equations of motion are therefore:
#
# $$m\ddot{\vec{r}} = \vec{F} = -mg \vec{e}_z - \frac{1}{2} c \rho A v \vec{v}$$
#
# The horizontal axis is chosen as $x$ and the vertical as $z$, hence in components we have
#
# $$ m \begin{bmatrix}\ddot{x}\\ \ddot{z}\end{bmatrix} =
# -mg\begin{bmatrix}0\\ 1\end{bmatrix}
# -\frac{1}{2} c \rho A \sqrt{\dot{x}^2+\dot{z}^2} \begin{bmatrix}\dot{x}\\ \dot{z}\end{bmatrix}$$
#
# Next we choose
#
# $$ k=\frac{c \rho A}{2 m}$$
#
# and a set of new variables
#
# $$\begin{bmatrix}u_1\\ u_2\\ u_3\\ u_4 \end{bmatrix}= \begin{bmatrix}x\\ z\\ \dot{x}\\ \dot{z} \end{bmatrix}$$
#
# with which we can decompose these equations into the following four first-order ODEs:
#
# $$\begin{bmatrix}\dot{u}_1=\dot{x}\\ \dot{u}_2=\dot{z}\\ \dot{u}_3=\ddot{x}\\ \dot{u}_4=\ddot{z} \end{bmatrix}= \begin{bmatrix} \dot{x}\\ \dot{z}\\ -k\, \dot{x} \sqrt{\dot{x}^2+\dot{z}^2}\\ -g-k\, \dot{z} \sqrt{\dot{x}^2+\dot{z}^2} \end{bmatrix}$$
#
# We have only one parameter of the theory, $k$ (in addition to $g$, which we will treat as a constant without error).
#
# This constant has units of 1/m if we measure distance in meters. Indeed, $c$ is dimensionless, the density of air $\rho$ has units of kg/m$^3$, the cross-section m$^2$, and the mass kg.

# In the absence of air resistance, we can solve the equations to get
#
# $$ z= v_0 \sin\theta\, t - \frac{1}{2} g t^2 $$
# $$ x = v_0 \cos\theta\, t$$
#
# and the distance traveled when the projectile hits the ground is
#
# $$D=\frac{v_0^2}{g}\sin(2\theta)$$
#
# We thus want to aim at $45^\circ$ to get the maximum distance, in which case the projectile should travel $D=\frac{v_0^2}{g}$. Let's assume that we can choose an initial speed of 1000 m/s and $g=9.82$ m/s$^2$, which would result in $D\approx 102$ km.
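# A quick check of the drag-free estimate (a sketch; the numbers are those quoted above):

# In[ ]:

import numpy as np

v0, g = 1000., 9.82
angles = [30, 40, 45, 50, 60]
D = v0**2/g*np.sin(2*np.radians(angles))     # drag-free range for a few launch angles
print(dict(zip(angles, np.round(D))))        # the maximum, about 1.02e5 m, is at 45 degrees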
# Next we set up the integration of these equations with $k$ chosen to be a small constant ($10^{-4}$ m$^{-1}$). We will address a better estimate of $k$ and its error below.

# In[323]:

from scipy.integrate import odeint   # we will first use odeint, which we are already familiar with
from numpy import *

def dy(u, t):   # signature appropriate for odeint
    """The derivative for projectile motion with air resistance."""
    g, k = 9.82, 1e-4
    x, z, xdot, zdot = u
    dx = xdot
    dz = zdot
    dxdot = -k * xdot * sqrt(xdot**2 + zdot**2)
    dzdot = -g - k * zdot * sqrt(xdot**2 + zdot**2)
    return array([dx, dz, dxdot, dzdot])

# choose an initial state
v0 = 1000.                                  # velocity magnitude in m/s
x0 = [0, 0, v0*cos(pi/4), v0*sin(pi/4)]     # initial velocity at 45 degrees
t0f = (0, 100)                              # initial and final time
t = linspace(t0f[0], t0f[1], 250)           # create a linear mesh of time points

# solve the ODE problem
y = odeint(dy, x0, t)
plt.plot(y[:,0], y[:,1], '.-')

# Surprisingly, this more realistic trajectory suggests only around 16 km of range, rather than over 100 km in the absence of air resistance.

# Next we will use the more appropriate solver `solve_ivp`, which chooses the time steps automatically and can resolve certain events with high accuracy; here, the time at which the projectile hits the ground.

# In[324]:

from scipy.integrate import solve_ivp
help(solve_ivp)

# In[325]:

from scipy.integrate import solve_ivp

def dx(t, x):   # signature appropriate for solve_ivp
    return dy(x, t)

def hit_target(t, u):
    # We've hit the target if the z-coordinate is 0.
    return u[1]
# Stop the integration when we hit the target.
hit_target.terminal = True
# We must be moving downwards (don't stop before we begin moving upwards!)
hit_target.direction = -1

def max_height(t, u):
    # The maximum height is obtained when the z-velocity is zero.
    return u[3]

# solve the ODE problem
sol = solve_ivp(dx, t0f, x0, dense_output=True, events=(hit_target, max_height))
print(sol)
print('Time to target = {:.2f} s'.format(sol.t_events[0][0]))
print('Time to highest point = {:.2f} s'.format(sol.t_events[1][0]))
print('Distance hit = {:.4f} m'.format(sol.y_events[0][0,0]))

# We can plot the projectile motion. Using `sol.y` we get only the points at which the calculation was performed. If we need a more precise curve, we can use `sol.sol()` (available because we passed `dense_output=True`) evaluated on a fine mesh.

# In[326]:

import matplotlib.pyplot as plt

tfine = linspace(0, sol.t_events[0][0], 100)
soln = sol.sol(tfine)
plt.plot(sol.y[0], sol.y[1], '.-')
plt.plot(soln[0], soln[1])
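# As a sanity check of the integrator (a minimal, self-contained sketch), setting $k=0$ should reproduce the analytic drag-free range $v_0^2/g$:

# In[ ]:

import numpy as np
from scipy.integrate import solve_ivp

g0, v0_ = 9.82, 1000.

def dx_nodrag(t, u):                  # the same equations of motion with k = 0
    x, z, xdot, zdot = u
    return [xdot, zdot, 0., -g0]

def hit_ground(t, u):
    return u[1]
hit_ground.terminal = True
hit_ground.direction = -1

u_init = [0, 0, v0_*np.cos(np.pi/4), v0_*np.sin(np.pi/4)]
s0 = solve_ivp(dx_nodrag, (0, 200), u_init, events=hit_ground, max_step=1.)
print(s0.y_events[0][0,0], v0_**2/g0)   # both should be close to 1.02e5 m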
# Next we estimate more realistic values for the drag coefficient $k$.

# A typical shell used in WW1 had a diameter of 155 mm, a length of 1 m, and weighed about 50 kg.
#
# While most of the time the projectile moves such that the air is cut with the smallest cross-section, at times the projectile might tilt a bit and expose the body of the shell. We estimate that the probability for the tilt angle $\theta$ of the projectile's direction is distributed exponentially, and that the probability of turning by 90$^\circ$ is only one in a hundred million.
#
# The air density at normal atmospheric conditions is 1.293 kg/m$^3$, but the air becomes less dense as the projectile moves through higher parts of the atmosphere. We estimate that the density drops down to 1.0 kg/m$^3$ at a height of 2000 m. We can assume that the density distribution is constant between 1.0 kg/m$^3$ and 1.293 kg/m$^3$. A more precise calculation would allow the air density to be height dependent, but since the density also depends strongly on the weather conditions and clouds, we would not necessarily get much more realistic values.
#
# Assume that the drag coefficient $c$ of our projectile is 0.45, but that it is normally distributed with standard deviation 0.005. In practice, this coefficient depends strongly on the "aerodynamics" of the object and can be measured in wind-tunnel experiments.
#
# Approximating the shell by a cylinder of radius $r$ and length $l$, the cross-section as a function of the tilt angle is roughly:
#
# $$A = \pi r^2 + 2 r l \sin\theta$$
#
# The probability for the angle is distributed exponentially, hence we can write:
#
# $$P(\theta) = C e^{-\alpha \theta}$$
#
# The normalization constant $C$ can be eliminated by requiring normalization of the probability:
#
# $$1=\int_0^{\pi/2} d\theta P(\theta) = \frac{C}{\alpha}(1-e^{-\alpha \pi/2})\rightarrow C=\frac{\alpha}{1-e^{-\alpha \pi/2}}$$
#
# Hence a closed-form probability for the angle is:
#
# $$P(\theta) = \frac{\alpha e^{-\alpha \theta}}{1-e^{-\alpha \pi/2}}$$
#
# The unknown coefficient $\alpha$ can be obtained from the statement that only one in a hundred million shells turns by 90$^\circ$:
#
# $$p_0=10^{-8}=\frac{\alpha e^{-\alpha \pi/2}}{1-e^{-\alpha \pi/2}}=\frac{\alpha}{e^{\alpha \pi/2}-1}$$
#
# Next we want to calculate the average cross-section $<A>$ and its variance. The average is:
#
# $$<A> =\int_0^{\pi/2} d\theta P(\theta) (\pi r^2 + 2 r l \sin\theta)=\pi r^2 + 2r l <\sin\theta>$$
#
# where the integral over the angle $\theta$ to get $<\sin\theta>$ will need to be carried out numerically.
#
# To get the variance, we also need:
#
# $$<A^2>=\int_0^{\pi/2} d\theta P(\theta) (\pi r^2 + 2 r l \sin\theta)^2=(\pi r^2)^2+2\pi r^2\, 2 r l <\sin\theta>+(2 r l)^2 <\sin^2\theta>$$
#
# The standard deviation is the square root of the variance, i.e.,
#
# $$\sigma_A = \sqrt{<A^2>-<A>^2}=2 r l \sqrt{<\sin^2\theta>-<\sin\theta>^2}$$
#
# $$A \approx \pi r^2 + 2 r l <\sin\theta> \pm 2 r l \sqrt{<\sin^2\theta>-<\sin\theta>^2}$$

# The air density has maximum $\rho_{max}=1.293$ kg/m$^3$ and minimum $\rho_{min}=1$ kg/m$^3$, with average $<\rho>=(\rho_{max}+\rho_{min})/2=1.1465$ kg/m$^3$ and $\sigma_\rho=(\rho_{max}-\rho_{min})/(2\sqrt{3})=0.0846$ kg/m$^3$, because the probability distribution is constant (uniform).
#
# The coefficient $c$ has average $0.45$ with $\sigma_c=0.005$.
#
# We next compute $<\sin\theta>$ and $<\sin^2\theta>$:

# In[330]:

from scipy.integrate import quad
from scipy.optimize import root_scalar

# solve p0 = alpha/(exp(alpha*pi/2)-1) = 1e-8 for alpha
sr = root_scalar(lambda a: a/(exp(a*pi/2)-1) - 10**-8, bracket=[0.01, 20.], method='brentq')
print(sr)
a = sr.root

def P(th):
    return a*exp(-a*th)/(1-exp(-a*pi/2))

norm = quad(lambda th: P(th), 0, pi/2)[0]   # checking that the probability is normalized
print('Norm=', norm)
tha  = quad(lambda th: sin(th)*P(th), 0, pi/2)[0]     # average <sin(theta)>
tha2 = quad(lambda th: sin(th)**2*P(th), 0, pi/2)[0]  # average <sin(theta)^2>
print('<sin(th)>=', tha)
print('<sin(th)^2>=', tha2)
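# As a cross-check (a sketch in the spirit of the stochastic solution above; it reuses the coefficient `a` from the previous cell), we can also obtain $<\sin\theta>$ and $<\sin^2\theta>$ by sampling $\theta$ from $P(\theta)$ with inverse-transform sampling:

# In[ ]:

import numpy as np

uu = np.random.uniform(0, 1, 1_000_000)
th = -np.log(1 - uu*(1 - np.exp(-a*np.pi/2)))/a     # inverse of the cumulative distribution of P(theta)
print(np.mean(np.sin(th)), np.mean(np.sin(th)**2))  # should agree with the quad results above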
# We are now in a position to estimate the drag coefficient $k$.

# In[332]:

m = 50.                   # mass in kg, measured very precisely
c = ufloat(0.45, 0.005)   # drag coefficient, known only approximately
r = 155e-3/2              # radius of the 155 mm shell
l = 1.                    # length of the shell in m
A = ufloat(pi*r**2 + 2*r*l*tha, 2*r*l*sqrt(tha2-tha**2))   # cross-section due to the finite tilt angle
rhomax = 1.293
rhomin = 1.0
rho = ufloat((rhomax+rhomin)/2, (rhomax-rhomin)/(2*sqrt(3.)))
print('c=', c, 'sigma/<>=', c.std_dev/c.nominal_value)
print('A=', A, 'sigma/<>=', A.std_dev/A.nominal_value)
print('rho=', rho, 'sigma/<>=', rho.std_dev/rho.nominal_value)
ka = c*rho*A/(2*m)
print('ka=', ka)

# We see that the worst error is in the projectile's cross-section, meaning that the projectile is not very stable in the air. This is where some engineering of the shape is necessary to improve the stability of the direction.
#
# The second largest error is in the density of the air. Here it is hard to improve: unless the missile is guided with high-tech equipment, the air density cannot be controlled. Of course, the aerodynamics of the projectile could reduce $c$ substantially, which is a major goal of all realistic missile engineering.

# Next we calculate the trajectory with $k$ varied around its average value to estimate the derivative. (Note that the `wrap` function from the `uncertainties` module does not work here.)
#
# We are interested in the horizontal distance at which the projectile falls ($x$ at the point where $z=0$) and its error estimate.
#
# We are thus computing $D(k)$, and we will estimate
#
# $$\sigma_D = \sigma_k \left|\frac{d D(k)}{dk}\right|$$

# In[333]:

def dxn(t, u, k):   # signature appropriate for solve_ivp
    """The derivative for projectile motion with air resistance."""
    g = 9.82
    x, z, xdot, zdot = u
    dx = xdot
    dz = zdot
    dxdot = -k * xdot * sqrt(xdot**2 + zdot**2)
    dzdot = -g - k * zdot * sqrt(xdot**2 + zdot**2)
    return array([dx, dz, dxdot, dzdot])

def hit_target(t, u, k):
    # We've hit the target if the z-coordinate is 0.
    return u[1]
hit_target.terminal = True    # Stop the integration when we hit the target.
hit_target.direction = -1     # We must be moving downwards (don't stop before we begin moving upwards!)

def max_height(t, u, k):
    # The maximum height is obtained when the z-velocity is zero.
    return u[3]

def odes(t0f, x0, k):
    # solve the ODE problem for a given k and return the distance at which the projectile lands
    sol = solve_ivp(dxn, t0f, x0, dense_output=True, args=(k,), events=(hit_target, max_height))
    print(sol)
    return sol.y_events[0][0,0]

# choose an initial state
v0 = 1000.                                  # velocity magnitude in m/s
x0 = [0, 0, v0*cos(pi/4), v0*sin(pi/4)]     # initial velocity at 45 degrees
t0f = (0, 100)

k0 = ka.nominal_value
k1 = k0 + 0.2*ka.std_dev
k2 = k0 - 0.2*ka.std_dev
print([k0, k1, k2])

fk = odes(t0f, x0, k0)
dfdk = (odes(t0f, x0, k1) - odes(t0f, x0, k2))/(k1 - k2)
print('Distance where projectile hits=', fk)
print('Error of the distance=', abs(dfdk)*ka.std_dev)

# Clearly, this error is enormous. The distance is 11.5 km with a 3.3 km error. This is unacceptable even for WW1 ammunition. Hence, improving the aerodynamic shape so that the cross-sectional area is constant, and the coefficient $c$ is reduced, was essential.
#
# If we eliminate the error of the cross-section, we get an error in distance of around 870 m, which is much more realistic for classical weaponry. Of course, a decrease in $c$ could further decrease the error and increase the reach.
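# One way to arrive at an estimate of this kind is sketched below (an illustrative calculation, reusing `c`, `rho`, `r`, `m`, `odes`, `x0`, and `t0f` from the cells above): pin the cross-section to $A=\pi r^2$ so that only $c$ and $\rho$ contribute to the error of $k$.

# In[ ]:

A0 = pi*r**2                   # cross-section with the tilt wobble removed
ka0 = c*rho*A0/(2*m)           # drag coefficient with only c and rho uncertain
print('ka0=', ka0)

k0 = ka0.nominal_value
dk = 0.2*ka0.std_dev
dDdk = (odes(t0f, x0, k0+dk) - odes(t0f, x0, k0-dk))/(2*dk)
print('Distance where projectile hits=', odes(t0f, x0, k0))
print('Error of the distance=', abs(dDdk)*ka0.std_dev)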
# ## Homework:
#
# In addition to the uncertainty in the drag coefficient $k$, imagine that we also have some uncertainty in the velocity $v_0$ and in its launching angle $\theta$. First, eliminate the error in the cross-section area, taking $A=\pi r^2$. Next, assume that $v_0$ still has an average of 1000 m/s, but that its distribution is triangular between 999 m/s and 1001 m/s. The angle $\theta$ is much harder to control on uneven ground under constant explosions. Assume that $\theta$ is distributed normally with a standard deviation of 2 degrees.
#
# How large is the error of the projectile's reach?

# In[ ]: