# Sampling and Integration - From Gaussians to Maxwell and Boltzmann Distributions¶

Doruk Efe Gökmen -- 01/08/2018 -- Ankara

## Sampling and Integration¶

Consider the 1D Gaussian integral $I$. We calculate it as follows: $I^2=\int^\infty_{-\infty}\frac{\text{d}y}{\sqrt{2\pi}} \int^\infty_{-\infty}\frac{\text{d}x}{\sqrt{2\pi}}e^{-(x^2+y^2)/2}=\int^{2\pi}_0 \frac {\text{d}\phi}{2\pi}\int^\infty_0 r e^{-r^2/2} \text{d}r = \int^{2\pi}_0 \frac {\text{d}\phi}{2\pi}\int^1_0 \text{d}\Upsilon=1$, where the last step uses the substitution $\Upsilon = e^{-r^2/2}$.

These variable transformations define a mapping between the uniformly distributed random variables $\Upsilon, \phi$ and the Gaussian-distributed random variables $x, y$. Changes of variables in integrals also apply to samples: the relation between the integration variables in the exponent of the Gaussian carries over to a recipe for generating Gaussian-distributed random variables from uniform ones.

In [91]:
%pylab inline
import random, math, pylab

def gauss_test(sigma):
    #the second pair of transformed random variables is uniformly distributed
    phi = random.uniform(0.0, 2.0 * math.pi)
    Upsilon = random.uniform(0.0, 1.0)
    #the first transformed variable
    Psi = - math.log(Upsilon)
    r = sigma * math.sqrt(2.0 * Psi)
    #the original variables
    x = r * math.cos(phi)
    y = r * math.sin(phi)
    return [x, y]

#exact Gaussian distribution:
list_x = [i * 0.1 for i in xrange(-40, 40)]
list_y = [math.exp(- x ** 2 / 2.0) / (math.sqrt(2.0 * math.pi)) for x in list_x]

#sampled distribution:
n_sampled_pairs = 50000
data = []
for sample in xrange(n_sampled_pairs):
    data += gauss_test(1.0)
#graphics output
pylab.plot(list_x, list_y, color='b', label='exact')
pylab.hist(data, bins=150, normed=True, color='r', histtype='step', label='sampled')
pylab.legend()
pylab.title('Sampling of the gaussian distribution')
pylab.xlabel('$x$', fontsize=14)
pylab.ylabel('$\pi(x)$', fontsize=14)
pylab.savefig('plot-gauss_test.png')

Populating the interactive namespace from numpy and matplotlib


The plot of independently distributed Gaussian random variables is given below. In the figure, each point is an independent random sample in $x$ and $y$. The resulting 2D distribution is isotropic. The reason can be seen in the previous section, where the variable transformation turned the Gaussian integral $I$ into an integral over the angle $\phi$ on the $xy$ plane, which corresponds to a random number uniformly distributed between $0$ and $2\pi$. This isotropy is specific to Gaussian distributions and holds for Gaussians in any dimension, as the three-variable case (plotted in the second figure) also suggests.

In [15]:
%pylab inline
import random, math, pylab, mpl_toolkits.mplot3d

nsamples = 10000
x_list, y_list, z_list = [], [], []
for sample in range(nsamples):
    #three Gaussian-distributed random variables:
    x_list.append(random.gauss(0.0, 1.0))
    y_list.append(random.gauss(0.0, 1.0))
    z_list.append(random.gauss(0.0, 1.0))

#begin graphics output
#2d distribution
pylab.plot(x_list, y_list, color='black', marker='.', linestyle='')
pylab.title('Samples from the 2D Gaussian distribution')
pylab.xlabel('$x$', fontsize=14)
pylab.ylabel('$y$', fontsize=14)
pylab.xlim(-4.0, 4.0)
pylab.ylim(-4.0, 4.0)
pylab.axes().set_aspect('equal') # set the aspect ratio of the plot
pylab.savefig('plot-gauss_2d.png')

#3d distribution
fig = pylab.figure()
ax = fig.gca(projection='3d')
ax.set_aspect('equal') # set the aspect ratio of the plot
pylab.plot(x_list, y_list, z_list, '.')
pylab.title('Samples from the 3D gaussian distribution')
ax.set_xlabel('$x$', fontsize=14)
ax.set_ylabel('$y$', fontsize=14)
ax.set_zlabel('$z$', fontsize=14)
pylab.show()

Populating the interactive namespace from numpy and matplotlib

In [257]:
%pylab inline
import random, math, pylab, mpl_toolkits.mplot3d

#we can also give the distribution a "haircut"! Let us normalise each point to a radius equal to 1:
nsamples = 5000
x_list_n2, y_list_n2, x_list_n3, y_list_n3, z_list_n3 = [], [], [], [], []

for sample in xrange(nsamples / 5):
    x, y, z = random.gauss(0.0, 1.0), random.gauss(0.0, 1.0), random.gauss(0.0, 1.0)
    radius_2 = math.sqrt(x ** 2 + y ** 2)
    x_list_n2.append(x / radius_2)
    y_list_n2.append(y / radius_2)

    radius_3 = math.sqrt(x ** 2 + y ** 2 + z ** 2)
    x_list_n3.append(x / radius_3)
    y_list_n3.append(y / radius_3)
    z_list_n3.append(z / radius_3)

#figure output
pylab.plot(x_list_n2, y_list_n2, color='black', marker='.', linestyle='')
pylab.title('Uniform sampling of the circle')
pylab.xlabel('$x$', fontsize=14)
pylab.ylabel('$y$', fontsize=14)
pylab.xlim(-2.0, 2.0)
pylab.ylim(-2.0, 2.0)
pylab.grid()
pylab.axes().set_aspect('equal') # set the aspect ratio of the plot

fig = pylab.figure()
ax = fig.gca(projection='3d')
ax.set_aspect('equal')
pylab.plot(x_list_n3, y_list_n3, z_list_n3, '.')
pylab.title('Uniform sampling of the sphere surface')
ax.set_xlabel('$x$', fontsize=14)
ax.set_ylabel('$y$', fontsize=14)
ax.set_zlabel('$z$', fontsize=14)
pylab.show()

Populating the interactive namespace from numpy and matplotlib


Here, the renormalized distributions show that the rescaled Gaussian samples are uniformly distributed on the surface of the (hyper)sphere. We can also directly sample points inside the 3D sphere of radius 1, as done by the following program.

In [7]:
%pylab inline
import random, math, pylab, mpl_toolkits.mplot3d

x_list, y_list, z_list = [],[],[]
nsamples = 1000
for sample in xrange(nsamples):
    x, y, z = random.gauss(0.0, 1.0), random.gauss(0.0, 1.0), random.gauss(0.0, 1.0)
    length = random.uniform(0.0, 1.0) ** (1.0 / 3.0) / math.sqrt(x ** 2 + y ** 2 + z ** 2)
    x, y, z = x * length, y * length, z * length
    #optional filter to show only three z intervals:
    #if z < 0.075 and z > -0.075 or z > 0.85 or z < -0.85:
    x_list.append(x)
    y_list.append(y)
    z_list.append(z)
# graphics output
fig = pylab.figure()
ax = fig.gca(projection='3d')
ax.set_aspect('equal')
pylab.title('Uniform sampling inside the sphere')
ax.set_xlabel('$x$', fontsize=14)
ax.set_ylabel('$y$', fontsize=14)
ax.set_zlabel('$z$', fontsize=14)
pylab.plot(x_list, y_list, z_list, '.')
pylab.show()

Populating the interactive namespace from numpy and matplotlib


The following program samples $d$-dimensional Gaussian random variables (which, after rescaling, distribute uniformly on the surface of the $d$-dimensional hypersphere) and calculates the corresponding radius distributions $\pi_d(r)$.

In [266]:
# %pylab inline
import random, math, pylab, numpy

nsamples = 100000

colors = ['0.1','0.2','0.3','0.4','0.5','0.6','0.7','0.8','0.9']
colors2 = ['r','g','b','y','c','m','0.7','0.8','0.9']
i = 0
for dimension in [2, 4, 6, 10]:
    r_samp = []
    i += 1
    #analytic distribution of r
    dd = dimension
    #normalisation: 2^(d/2 - 1) * Gamma(d/2)
    norm = math.gamma(dd / 2) * math.pow(2, (dd / 2)) / 2
    x = [j * 0.1 for j in xrange(1, 400)]
    gamma = [math.pow(r, (dd - 1)) * math.exp(- r ** 2 / 2) / norm for r in x]
    #figure output
    pylab.plot(x, gamma, color=colors2[i - 1], label='analytic: d=%.0f' % dd)

    #sampled distribution of r
    for sample in range(nsamples):
        R = [random.gauss(0.0, 1.0) for d in range(dimension)]
        radius = math.sqrt(sum(x ** 2 for x in R))
        r_samp.append(radius)
    print numpy.var(r_samp) #the variance of r for each dimension

    #print [x / radius for x in R] #points rescaled onto the surface of the sphere
    pylab.hist(r_samp, bins=150, normed=True, color=colors[i], histtype='step', label='sampling: d=%.0f' % dimension)
pylab.legend()
pylab.xlim(0, 7)
pylab.title('$r$ distributions in different dimensions, $\sigma=1$')
pylab.xlabel('$r$', fontsize=15)
pylab.ylabel('$\pi_d(r,\sigma=1)$', fontsize=15)
pylab.show()

0.42951734083905774
0.46408913359880166
0.4790036671204991
0.4890500521524456


Using a similar strategy as in 1D, one can evaluate a $d$-dimensional Gaussian integral by variable transformations and get $1=\left(\frac{1}{\sqrt{2\pi}}\right)^d\int_0^\infty \text{d}r\, r^{d-1} e^{-r^2/2} \int \text{d}\Omega_d$. Hence the radius $r$ in $d$ dimensions is distributed according to $\pi_d(r)\propto r^{d-1} e^{-r^2/2}$. In other words, the squared radius $\rho=r^2$ is distributed according to the gamma distribution $\Gamma(\rho)\propto\rho^{\frac{d-2}{2}}e^{-\rho/2}$ (shape $k=d/2$, scale $\theta=2$). This is confirmed by the following program. Furthermore, intriguingly, each distribution is approximately centered around the value $r^2=d$. This means that to renormalise a Gaussian random sample in $d$ dimensions, the most likely successful rescaling is by a factor of about $\sqrt{d}$. This implies that in very high dimensions, renormalising points onto the surface of the unit hypersphere becomes unnecessary if the variance $\sigma$ of the Gaussian is chosen correctly.

In [265]:
# %pylab inline
import random, math, pylab

nsamples = 20000

colors = ['0.1','0.2','0.3','0.4','0.5','0.6','0.7','0.8','0.9']
colors2 = ['r','g','b','y','c','m','0.7','0.8','0.9']
i = 0
for dimension in [4, 6, 8, 10]:
    i += 1

    dd = dimension
    #plot the analytic distribution: a gamma distribution with k=d/2 and theta=2
    norm = math.gamma(dd / 2) * math.pow(2, (dd / 2))
    x = [j * 0.1 for j in xrange(1, 400)]
    gamma = [math.pow(rho, (dd / 2 - 1)) * math.exp(- rho / 2) / norm for rho in x]
    #figure output
    pylab.plot(x, gamma, color=colors2[i - 1], label='analytic: d=%.0f' % dd)

    r2 = []
    for sample in range(nsamples):
        R = [random.gauss(0.0, 1.0) for d in range(dimension)]
        radius2 = sum(x ** 2 for x in R)
        r2.append(radius2)

    #figure output
    pylab.hist(r2, bins=150, normed=True, color=colors[i], histtype='step', label='sampling: d=%.0f' % dimension)
pylab.legend()
pylab.title('$r^2$ distributions in different dimensions with $\sigma=1$')
pylab.xlabel('$r^2$', fontsize=15)
pylab.ylabel('$\pi_d(r^2,\sigma=1)=\Gamma_{k=d/2,\\theta=2}(r^2)$', fontsize=15)
pylab.show()


In this case, we may rescale all points not by their actual distance from the origin, but instead by this central (most likely) value $\sqrt{d}$, i.e. $x_i\rightarrow y_i=x_i/\sqrt{d}$. This corresponds to transforming the Gaussian distribution as $\pi(x_i)\propto e^{-x_i^2/2}\rightarrow \pi(y_i)\propto e^{-y_i^2/2\sigma_y^2}=e^{-y_i^2 d/2}$, or in other words, taking the variance from $\sigma_x=1$ to $\sigma_y=\sqrt{\frac{1}{d}}$. Remarkably, as we increase $d$ the variance decreases: instead of true renormalisation, if we choose a fixed scaling factor of $\sqrt{d}$, the risk of having the wrong scaling decreases with increasing number of dimensions!

In [264]:
# %pylab inline
import random, math, pylab, numpy

nsamples = 200000

colors = ['0.1','0.2','0.3','0.4','0.5','0.6','0.7','0.8','0.9']
colors2 = ['r','g','b','y','c','m','0.7','0.8','0.9']
i = 0
for dimension in [3, 5, 10, 18]:
    i += 1

    #sampling with variance sigma^2 = 1/d
    r2 = []
    for sample in range(nsamples):
        R = [random.gauss(0.0, math.sqrt(1.0 / dimension)) for d in range(dimension)]
        radius2 = sum(x ** 2 for x in R)
        r2.append(radius2)
    print numpy.var(r2) #the variance of r^2 for each dimension

    #figure output
    pylab.hist(r2, bins=150, normed=True, color=colors[i], histtype='step', label='d=%.0f' % dimension)
pylab.legend()
pylab.xlim(0, 5)
pylab.title('Rescaled $r^2$ distributions in different dimensions, $\sigma=\sqrt{1/d}$')
pylab.xlabel('$r^2$', fontsize=15)
pylab.ylabel('$\pi_d(r^2,\sigma=\sqrt{1/d})$', fontsize=15)
pylab.show()

0.6645927747878635
0.40020680536587616
0.20079653557590216
0.11102109574157934


For a given kinetic energy $E_k$, the speeds of $N$ particles are constrained by the equation $E_k=\frac{m}{2}\sum_{i=0}^{N-1}v_i^2$, where in $d$ dimensions the speed is given in terms of the components as $v_i^2=\sum_{j=1}^{d}(v_i^j)^2$, with $d$ equal to $1$, $2$ or $3$. The energy constraint therefore implies that a legal set of velocities is a point on the surface of the $dN$-dimensional hypersphere of radius $r=\sqrt{\frac{2E_k}{m}}$. Hence the problem of finding the velocity distribution of a statistical ensemble is reduced to sampling a random point on the surface of a hypersphere.

On the other hand, since an ensemble contains about $6\times 10^{23}$ particles, we are dealing with a hypersphere of a very large number of dimensions. Following the previous argument, for a sampled velocity component $v_i^j$, the suitable fixed scaling towards the surface of the hypersphere of radius $\sqrt{\frac{2E_k}{m}}$ is $\sqrt{\frac{2E_k}{mdN}}$, with certainty approaching unity as $N\rightarrow \infty$. (Note that this implies that the kinetic energy is partitioned uniformly, i.e. distributed equally to each degree of freedom, that is, to each velocity component of each particle.) In other words, instead of sampling velocities as Gaussian variables each with variance $\sigma=1$, if we choose a variance that scales each velocity towards the surface of the kinetic-energy hypersphere, the probability that we are mistaken about the value of a specific velocity component decreases with increasing $N$!

If we take the mean energy per degree of freedom at a given temperature $T$ as $\frac{E_k}{dN}=\frac{1}{2}k_BT$, where $k_B$ is the Boltzmann constant, we arrive at the celebrated Maxwell velocity distribution $\pi(v_x)\text{d}v_x=\sqrt{\frac{m}{2\pi k_BT}}\exp{\left(-\frac{1}{2}\frac{mv_x^2}{k_B T}\right)}\text{d}v_x$. The speed distributions then have gamma-like profiles similar to those shown previously. The equally famous Boltzmann distribution for energies, $\pi(E)\propto\exp{\left(-\frac{E}{k_BT}\right)}$, is also readily obtained from this approach.
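As a quick numerical check of this statement, one can sample each velocity component as an independent Gaussian of variance $k_BT/m$ and verify equipartition. The sketch below assumes units with $k_BT=1$ and $m=1$; the helper names `maxwell_velocities` and `kinetic_energy` are ours, not part of the programs above.

```python
import random, math

def maxwell_velocities(n_particles, d, kT=1.0, m=1.0):
    #each velocity component is an independent Gaussian with variance k_B T / m
    sigma = math.sqrt(kT / m)
    return [[random.gauss(0.0, sigma) for _ in range(d)]
            for _ in range(n_particles)]

def kinetic_energy(velocities, m=1.0):
    #total kinetic energy (m/2) * sum of squared speeds
    return 0.5 * m * sum(sum(v ** 2 for v in vel) for vel in velocities)

random.seed(42)
vels = maxwell_velocities(20000, 3)
#equipartition: the mean kinetic energy per degree of freedom approaches k_B T / 2
e_per_dof = kinetic_energy(vels) / (20000 * 3)
```

For $20000$ particles in three dimensions, `e_per_dof` should fluctuate around $k_BT/2 = 0.5$ within a fraction of a percent.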

## Sampling discrete and continuous 1D distributions¶

A naive random number generator (NRNG): the linear congruential random number generator.

Observe that $\text{idum}\in [0,m-1]$ because of the $\mod{m}$ operation. The method generates an erratic sequence which can contain all integers in $[0,m-1]$. For a fixed seed, e.g. 1000, the NRNG yields the same sequence of numbers at each run. However, if we set the initial seed "idum" from e.g. the system time, then the sequence of random numbers is different at every run.

Note that the output of the program is periodic: once a number recurs, the generator reproduces exactly the same sequence as it did the first time.

In [3]:
m = 134456 #modulus
n = 8121 #multiplier
k = 28411 #increment
idum = 1000 #seed (initial integer value)
for iteration in xrange(200000):
    idum = (idum * n + k) % m #congruential rule: fold back into the range [0, m-1]
    ran = idum / float(m) #divide by m to obtain a "random" number between 0 and 1
    #print idum, ran, iteration
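The period can also be measured directly, by iterating the congruential rule until a value recurs. The helper below is not part of the original program; for the particular constants above, the full-period (Hull-Dobell) conditions happen to hold, so the cycle runs through all $m = 134456$ residues regardless of the seed.

```python
def lcg_period(seed, n=8121, k=28411, m=134456):
    #iterate idum -> (idum * n + k) % m until a value recurs;
    #the cycle length is the period of the generator
    idum = seed
    seen = {}
    step = 0
    while idum not in seen:
        seen[idum] = step
        idum = (idum * n + k) % m
        step += 1
    return step - seen[idum]

period = lcg_period(1000)
```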


### Rejection sampling¶

Direct sampling of discrete and one-dimensional continuous distributions is an important subject.

Here we sample the 1D Gaussian distribution with a Markov chain and an accept/reject step. (This uses the same accept/reject idea as the calculation of $\pi$ in the first week.)

In [129]:
import random, math, pylab

x = 0.0 #start at x = 0
delta = 0.5 #maximum step size
data = []
for k in range(50000):
    x_new = x + random.uniform(-delta, delta) #propose a random move with step size between +/- delta

    #accept the move with the Metropolis acceptance probability pi(x_new)/pi(x)
    if random.uniform(0.0, 1.0) < \
            math.exp(- x_new ** 2 / 2.0) / math.exp(- x ** 2 / 2.0):
        x = x_new
    data.append(x)

pylab.hist(data, 100, normed=True) #histogram of the sample
x = [a / 10.0 for a in range(-50, 51)]
y = [math.exp(- a ** 2 / 2.0) / math.sqrt(2.0 * math.pi) for a in x]
pylab.plot(x, y, c='red', linewidth=2.0)
pylab.title('Theoretical Gaussian distribution $\pi(x)$ and \
\nnormalized histogram for '+str(len(data))+' Markov-chain samples', fontsize = 18)
pylab.xlabel('$x$', fontsize = 30)
pylab.ylabel('$\pi(x)$', fontsize = 30)
pylab.savefig('plot_markov_gauss.png')
pylab.show()


Let us now do the same sampling directly, on a limited range given by a rectangle. (Note that we have to introduce an inelegant cutoff $\pm x_\text{cut}$ for the $x$ range of the rectangle.)

In [144]:
import random, math, pylab

y_max = 1.0 / math.sqrt(2.0 * math.pi)
x_cut = 5.0
n_data = 10000
n_accept = 0
data = []
while n_accept < n_data:
    #select a random position within the rectangle
    y = random.uniform(0.0, y_max)
    x = random.uniform(-x_cut, x_cut)
    if y < math.exp(- x ** 2 / 2.0) / math.sqrt(2.0 * math.pi): #is the sample below the Gaussian curve?
        n_accept += 1
        data.append(x)

pylab.hist(data, 100, normed = 'True') #histogram of the sample
x = [a / 10.0 for a in range(-50, 51)]
y = [math.exp(- a ** 2 / 2.0) / math.sqrt(2.0 * math.pi) for a in x]
pylab.plot(x, y, c='red', linewidth=2.0)
pylab.title('Theoretical Gaussian distribution $\pi(x)$ and \
\nnormalized histogram for '+str(len(data))+' direct samples', fontsize = 18)
pylab.xlabel('$x$', fontsize = 30)
pylab.ylabel('$\pi(x)$', fontsize = 30)
pylab.savefig('plot_direct_gauss.png')
pylab.show()


Rejection sampling can be problematic. The problem is not the presence of rejections per se, but that the rejection rate can become enormous, even prohibitive. To see this, let us look at a distribution that is not quite as friendly as a Gaussian: $\pi(x)=\frac{1}{2\sqrt{x}}$. Here we do not know what box size to choose: what should $x_\text{cut}$ be? With increasing $x_\text{cut}$, the rejection rate increases enormously!
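The growth of the rejection rate can be estimated empirically. In the rough sketch below, the helper `acceptance_rate`, the box height `y_max = 100` and the cutoff values are our arbitrary choices; the accepted fraction decays roughly like $1/(y_\text{max}\sqrt{x_\text{cut}})$.

```python
import random, math

def acceptance_rate(x_cut, y_max=100.0, n_trials=100000):
    #fraction of uniform samples in the box [0, x_cut] x [0, y_max]
    #that fall below the curve pi(x) = 1 / (2 sqrt(x))
    n_accept = 0
    for _ in range(n_trials):
        x = random.uniform(0.0, x_cut)
        y = random.uniform(0.0, y_max)
        if x > 0.0 and y < 1.0 / (2.0 * math.sqrt(x)):
            n_accept += 1
    return n_accept / float(n_trials)

random.seed(1)
#the acceptance rate shrinks as the box grows
rates = [acceptance_rate(c) for c in (1.0, 4.0, 16.0)]
```

With these parameters only about one sample in a hundred is accepted even for $x_\text{cut}=1$, and quadrupling $x_\text{cut}$ roughly halves the acceptance rate.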

Below we compare the results from direct sampling with a cut-off and Markov-chain sampling methods.

In [135]:
import random, math, pylab

y_max = 100.0
x_cut = 1.0
n_data = 10000
data = []
n_accept = 0
while n_accept < n_data:
    y = random.uniform(0.0, y_max)
    x = random.uniform(0.0, x_cut)
    if y < 1.0 / (2.0 * math.sqrt(x)):
        n_accept += 1
        data.append(x)

pylab.hist(data, bins=100, normed='True')
x = [a / 100.0 for a in xrange(1, 100)]
y = [1.0 / (2.0 * math.sqrt(a)) for a in x]
pylab.plot(x, y, 'red', linewidth = 2)
pylab.title('Theoretical distribution $\pi(x)={1}/{(2 \sqrt{x})}$ and normalized\
\n histogram for '+str(n_accept)+' accepted direct samples',fontsize=16)
pylab.xlabel('$x$', fontsize=18)
pylab.ylabel('$\pi(x)$', fontsize=18)
pylab.show()

In [134]:
import random, math, pylab

x = 0.2
delta = 0.5
data = []
y_max = 0
n_trials = 10000
for k in xrange(n_trials):
    x_new = x + random.uniform(-delta, delta)
    if x_new > 0.0 and x_new < 1.0:
        if random.uniform(0.0, 1.0) < math.sqrt(x) / math.sqrt(x_new):
            x = x_new
    if 1.0 / math.sqrt(x) > y_max:
        y_max = 1.0 / math.sqrt(x)
        print y_max, x, k #print the running maximum of y and the corresponding x
    data.append(x)

pylab.hist(data, bins=100, normed='True')
pylab.xlabel('$x$', fontsize=16)
pylab.ylabel('$\pi(x)$', fontsize=16)
x = [a / 100.0 for a in xrange(1, 101)]
y = [0.5 / math.sqrt(a) for a in x]
pylab.plot(x, y, linewidth=1.5, color='r')
pylab.title('Theoretical distribution $\pi(x)={1}/{(2 \sqrt{x})}$ and normalized\
\n histogram for '+str(len(data))+' Markov-chain samples',fontsize=16)
pylab.show()

2.2360679775 0.2 0
4.911152595 0.0414603671158 21
7.58973327104 0.0173598901546 68
11.3758766412 0.00772734410312 196
14.7582356921 0.00459125184976 369
29.4346418263 0.00115420377131 930
32.9788774849 0.000919450305395 3647
46.9006508302 0.000454613429938 6548
106.487244559 8.81870512329e-05 8023
122.781918606 6.63332336615e-05 8855


### Discrete distributions¶

Here we discuss rejection-free algorithms for sampling discrete distributions.

#### Tower sampling¶

Tower sampling is rejection-free, but its computational complexity is logarithmic in the number of choices, which is not optimal.

In [ ]:
import random

# bisection search to find the bin corresponding to eta
def bisection_search(eta, w_cumulative):
    kmin = 0
    kmax = len(w_cumulative)
    while True:
        k = int((kmin + kmax) / 2)
        if w_cumulative[k] < eta:
            kmin = k
        elif w_cumulative[k - 1] > eta:
            kmax = k
        else:
            return k - 1

# sample an integer according to the weights
def tower_sample(weights):
    sum_w = sum(weights)
    w_cumulative = [0.0]
    for l in xrange(len(weights)):
        w_cumulative.append(w_cumulative[l] + weights[l])
    eta = random.random() * sum_w
    sampled_choice = bisection_search(eta, w_cumulative)
    return sampled_choice

weights = [0.4, 0.3, 0.8, 0.1, 0.2]
n_samples = 20
for sample in xrange(n_samples):
    print tower_sample(weights)


#### Walker's algorithm¶

Walker's method is rejection-free and samples in constant time, which is optimal.

In [137]:
import random, pylab

N = 5
pi = [[1.1 / 5.0, 0], [1.9 / 5.0, 1], [0.5 / 5.0, 2], [1.25 / 5.0, 3], [0.25 / 5.0, 4]]
x_val = [a[1] for a in pi]
y_val = [a[0] for a in pi]
pi_mean = sum(y_val) / float(N)
long_s = []
short_s = []
for p in pi:
    if p[0] > pi_mean:
        long_s.append(p)
    else:
        short_s.append(p)
table = []
for k in range(N - 1):
    e_plus = long_s.pop()
    e_minus = short_s.pop()
    table.append((e_minus[0], e_minus[1], e_plus[1]))
    e_plus[0] = e_plus[0] - (pi_mean - e_minus[0])
    if e_plus[0] < pi_mean:
        short_s.append(e_plus)
    else:
        long_s.append(e_plus)
if long_s != []:
    table.append((long_s[0][0], long_s[0][1], long_s[0][1]))
else:
    table.append((short_s[0][0], short_s[0][1], short_s[0][1]))
print table
samples = []
n_samples = 10000
for k in xrange(n_samples):
    Upsilon = random.uniform(0.0, pi_mean)
    i = random.randint(0, N - 1)
    if Upsilon < table[i][0]:
        samples.append(table[i][1])
    else:
        samples.append(table[i][2])

pylab.figure()
pylab.hist(samples, bins=N, range=(-0.5, N-0.5), normed=True)
pylab.plot(x_val, y_val,'ro', ms=8)
pylab.title("Histogram using Walker's method for a discrete distribution\n\
on $N=$"+str(N)+" choices ("+str(n_samples)+" samples)",fontsize=14)
pylab.xlabel('$k$',fontsize=20)
pylab.ylabel('$\pi(k)$',fontsize=20)
pylab.show()

[(0.05, 4, 3), (0.09999999999999998, 3, 1), (0.1, 2, 1), (0.17999999999999997, 1, 0), (0.19999999999999998, 0, 0)]


### Continuous distributions¶

Let us investigate the continuum limit of the tower sampling method.

For instance, for a Gaussian distribution $\pi(x) \propto e^{-x^2/2}$, to obtain the position corresponding to a particular value $\Upsilon$ of the cumulative probability distribution $\Phi(x)\equiv \int^x_{-\infty}\text{d}x'\,\pi(x')$, we need the inverse error function, $x=\sqrt{2}\,\text{erf}^{-1}{(2\Phi-1)}$, which cannot be obtained analytically.

Below we pick a random value on the "tower" (i.e. we choose a value $\Upsilon$ of the cumulative distribution), and find the position that it corresponds to by numerically inverting the error function. The sampling is given as a histogram.

In [215]:
import scipy.special, random, math, pylab

n_trials = 100000
data = []
for trial in xrange(n_trials):
    Upsilon = random.uniform(0.0, 1.0) #pick a value of the cumulative distribution (a level on the tower)
    x = math.sqrt(2.0) * scipy.special.erfinv(2.0 * Upsilon - 1.0) #invert erf numerically to recover x
    data.append(x)

pylab.hist(data, 100, normed = 'True') #histogram of the sample
x = [a / 10.0 for a in range(-50, 51)]
y = [math.exp(- a ** 2 / 2.0) / math.sqrt(2.0 * math.pi) for a in x]
pylab.plot(x, y, c='red', linewidth=2.0)
pylab.title('Theoretical Gaussian distribution $\pi(x)$ and \
\nnormalized histogram for '+str(len(data))+' "tower" samples', fontsize = 18)
pylab.xlabel('$x$', fontsize = 20)
pylab.ylabel('$\pi(x)\propto\exp{(-x^2/2)}$', fontsize = 20)
pylab.savefig('plot_tower_gauss.png')
pylab.show()


Let us now consider a divergent distribution $\pi(x)\propto x^\gamma$, $0<x\leq1$. For $-1<\gamma < 0$, $\pi(x)$ blows up at $x=0$. However, the cumulative distribution $\Phi(x)= x^{\gamma+1}$ remains finite and can be inverted directly.

In [146]:
import random, math, pylab

gamma = -0.5
n_trials = 10000
data = []

for trial in xrange(n_trials):
    x = (random.uniform(0.0, 1.0)) ** (1.0 / (gamma + 1.0)) #simply invert the cumulative distribution
    data.append(x)

pylab.hist(data, bins=100, normed='True')
pylab.xlabel('$x$', fontsize=16)
pylab.ylabel('$\pi(x)$', fontsize=16)
x = [a / 100.0 for a in xrange(1, 101)]
y = [0.5 / math.sqrt(a) for a in x]
pylab.plot(x, y, linewidth=1.5, color='r')
pylab.title('Theoretical distribution $\pi(x)={1}/{(2 \sqrt{x})}$ and normalized\
\n histogram for '+str(len(data))+' direct samples',fontsize=16)
pylab.show()


We have found efficient rejection-free direct-sampling programs for discrete and continuous one-dimensional distributions; in these cases, Markov-chain methods can generally be avoided. The situation is much more complicated in high dimensions, for example in the hard-sphere simulations that we did last week, or in the quantum problem that we will turn our attention to next week.

## Calculation of High Dimensional Integrals Using Monte Carlo Methods¶

### Volume of the $d$-dimensional unit hypersphere $V_\text{sph}(d)$¶

We know a priori that the volume of the $d$-dimensional hypersphere ($d$-sphere) of radius $r$ is proportional to $r^{d}$, i.e. $V_d(r) \equiv V_\text{sph}(r,d)= C_d r^{d}$. It follows that the surface area is $S_d(r)\equiv S_\text{sph}(r,d)=\frac{\text{d}V_\text{sph}(r,d)}{\text{d}r}=dC_dr^{d-1}$. Therefore, the task of calculating the volume is reduced to finding the proportionality constant $C_d$. Writing the volume of all space both in cartesian and in spherical coordinates, $\int_{-\infty}^\infty \int_{-\infty}^\infty \cdots \int_{-\infty}^\infty \text{d}x_0\text{d}x_1 \cdots\text{d}x_{d-1} = \int \text{d}\Omega_d \int_{0}^\infty r^{d-1}\,\text{d}r = C_d\, d \int_0^{\infty} r^{d-1} \text{d}r$, where $\text{d}\Omega_d$ is the $d$-dimensional solid-angle differential, we obtain the relation $\int \text{d}\Omega_d = C_d\, d$.

Alternatively, we consider the Gaussian $e^{-r^2}$, where $r^2=x_0^2+\cdots+x_{d-1}^2$. We then have $I^d = \left( \int_{-\infty}^\infty e^{-x^2} \text{d}x \right)^d = \int \text{d}\Omega_d \int_{0}^\infty r^{d-1} e^{-r^2} \text{d}r = C_d\, d \int_0^\infty r^{d-1}e^{-r^2} \text{d}r$ by the above relation. Hence it follows that $C_d=\frac{\left( \int_{-\infty}^\infty e^{-x^2} \text{d}x \right)^d}{d \int_0^\infty r^{d-1}e^{-r^2} \text{d}r}=\frac{2\pi^{d/2}}{d\,\Gamma(d/2)}=\frac{2\pi^{d/2}}{d\left(\frac{d}{2}-1\right)!}$. Using this, we finally obtain the results we were after: $S_d(r)=dC_dr^{d-1}=\frac{2\pi^{d/2}r^{d-1}}{\left(\frac{d}{2}-1\right)!}$ and $V_d(r)=\int_0^r S_d(r') \text{d}r' = \frac{\pi^{d/2}r^d}{\left(\frac{d}{2}\right)!}$. Therefore the volume of the unit $d$-sphere is $V_d(r=1) = V_\text{sph}(d) = \frac{\pi^{d/2}}{\left(\frac{d}{2}\right)!}$.

In [375]:
import math, pylab

volume, dimension = [], []

def V_sph(dim):
    return math.pi ** (dim / 2.0) / math.gamma(dim / 2.0 + 1.0)

for d in range(0, 20):
    dimension.append(d)
    volume.append(V_sph(d))

pylab.plot(dimension, volume, 'bo-')
pylab.title('Volume of the unit hypersphere in $d$ dimensions')
pylab.xlabel('$d$', fontsize=15)
pylab.ylabel('$V_{sph}(d)$', fontsize=15)
pylab.xlim(0, 20)
pylab.ylim(0, 6)
for i in range(0, 20):
    pylab.annotate(round(volume[i], 2), (dimension[i], volume[i]), xytext=(10, 0), ha='left',
                   textcoords='offset points')
pylab.grid()
pylab.show()


Although we can indeed calculate the exact expression, the gamma function $\Gamma(d)$ becomes difficult to evaluate for large $d$, so it is not pointless to develop a numerical approach to this problem. We can numerically calculate $V_\text{sph}(d)$ from the ratio between the volumes of unit hyperspheres in successive dimensions $d-1$ and $d$.
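For reference, the target ratios can be written down analytically from the closed-form volume. A small sketch (using the same `V_sph` formula that appears later in this notebook):

```python
import math

def V_sph(dim):
    #closed-form volume of the unit hypersphere in `dim` dimensions
    return math.pi ** (dim / 2.0) / math.gamma(dim / 2.0 + 1.0)

def Q(dim):
    #ratio of unit-hypersphere volumes in successive dimensions
    return V_sph(dim) / V_sph(dim - 1)

#e.g. Q(3) = (4 pi / 3) / pi = 4/3 and Q(4) = (pi^2 / 2) / (4 pi / 3) = 3 pi / 8
```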

We first calculate $\langle Q(d=3) \rangle$, where $Q(d)\equiv V_\text{sph}(d)/V_\text{sph}(d-1)$, by a Markov-chain Monte Carlo program that samples points $(x, y, z)$ inside the unit cylinder and also counts the samples that land inside the unit sphere.

We see that the obtained value agrees with the analytical value $Q(3)=\frac{4}{3}$ (cf. the volumes annotated in the figure above).

In [1]:
import random

x, y, z = 0.0, 0.0, 0.0 #initial point
delta = 0.1
Q_avg_3 = 0.0 #initialise the average Q(3)
Q_3 = 4.0 / 3.0

n_trials = 4000
n_runs = 1000

for j in range(n_runs):
    n_hits = 0
    for i in range(n_trials):
        del_x, del_y = random.uniform(-delta, delta), random.uniform(-delta, delta)
        if (x + del_x) ** 2 + (y + del_y) ** 2 < 1.0: #stay inside the unit disk
            x, y = x + del_x, y + del_y
        z = random.uniform(-1.0, 1.0) #sample the z coordinate (inside the unit cylinder)

        #count the samples that also fall inside the unit sphere
        if x ** 2 + y ** 2 + z ** 2 < 1.0: n_hits += 1
    Q_avg_3 += 2.0 * n_hits / float(n_trials) / n_runs #running average

print '<Q(3)> =', Q_avg_3
print '<Q(3)>/Q(3)=<Q(3)>/(4/3) =', Q_avg_3 / Q_3 #compare with the actual value

<Q(3)> = 1.33341
<Q(3)>/Q(3)=<Q(3)>/(4/3) = 1.0000575


Now we do the same for $\langle Q(4) \rangle$ by a straightforward tweak:

In [4]:
import random, math

x, y, z, t = 0.0, 0.0, 0.0, 0.0 #initial point
delta = 0.1
Q_avg_4 = 0.0 #initialise the average Q(4)
Q_4= 3.0 * math.pi / 8.0

n_trials = 4000
n_runs = 1000

for j in range(n_runs):
    n_hits = 0
    for i in range(n_trials):
        del_x, del_y, del_z = random.uniform(-delta, delta), random.uniform(-delta, delta), random.uniform(-delta, delta)
        if (x + del_x) ** 2 + (y + del_y) ** 2 + (z + del_z) ** 2 < 1.0: #stay inside the unit sphere
            x, y, z = x + del_x, y + del_y, z + del_z
        t = random.uniform(-1.0, 1.0) #sample the t coordinate (inside the unit cylinder)

        #count the samples that also fall inside the unit 4-sphere
        if x ** 2 + y ** 2 + z ** 2 + t ** 2 < 1.0: n_hits += 1
    Q_avg_4 += 2.0 * n_hits / float(n_trials) / n_runs #running average

print '<Q(4)> =', Q_avg_4
print '<Q(4)>/Q(4) =', Q_avg_4 / Q_4 #compare with the actual value

<Q(4)> = 1.175211
<Q(4)>/Q(4) = 0.997550079072


Obtain the volume of the 4-sphere $V_\text{sph}(4)$ from $\langle Q(4) \rangle$:

In [386]:
print 'Analytical: V_sph(4) = pi^2/2 =', math.pi ** 2 / 2 #analytical
print 'Numerical: V_sph(3)*<Q_4> =', 4.0 / 3.0 * math.pi * Q_avg_4
print 'Numerical: V_sph(2)*<Q_3>*<Q_4> = pi*<Q_3>*<Q_4> =', math.pi * Q_avg_3 * Q_avg_4

Analytical: V_sph(4) = pi^2/2 = 4.93480220054
Numerical: V_sph(3)*<Q_4> = 4.92681524536
Numerical: V_sph(2)*<Q_3>*<Q_4> = pi*<Q_3>*<Q_4> = 4.93103198336


Let us now generalise this program efficiently to sample uniformly distributed points inside the $d$-dimensional unit sphere and to calculate $\langle Q(d)\rangle$. Note that here, instead of modifying all components of $x$ at once as we did previously, we modify only one component at each iteration $i$ (with $i=0, 1, 2,\cdots, n_\text{trials}-1$).

As seen previously, the distribution of radii $\pi(r)$ for random sampling inside the $d$-sphere is proportional to $r^{d-1}$ for $0<r<1$. In the following, we verify our generalised program for $d=4$, $20$ and $200$ by comparing the radius histograms with the analytic approximations to the radius distributions, e.g. $\pi_4(r) \approx 4 r ^3$ and $\pi_{20}(r) \approx 20 r ^{19}$ for small $r$.

In [159]:
%pylab inline
import pylab, random, math

#analytic formulae:
def V_sph(dim): #volume of the dim-dimensional unit sphere
    return math.pi ** (dim / 2.0) / math.gamma(dim / 2.0 + 1.0)

def Q(dim):
    return V_sph(dim) / V_sph(dim - 1)

#Monte Carlo algorithm:
def markov_sphere(d, n_runs, n_trials):
    x = [0.0] * d #initial point
    delta = 0.1
    Q_avg_d = 0.0 #initialise the average Q(d)
    old_radius_square = sum(i ** 2 for i in x)
    data = []
    for j in range(n_runs):
        n_hits = 0
        if d == 1:
            Q_avg_d = 1
            break
        for i in range(n_trials):
            k = random.randint(0, d - 2)
            x_old_k = x[k]
            x_new_k = x_old_k + random.uniform(-delta, delta)
            new_radius_square = old_radius_square + x_new_k ** 2 - x_old_k ** 2
            if new_radius_square < 1.0: #is the move inside the unit (d-1)-sphere?
                old_radius_square = new_radius_square #update the radius
                x[k] = x_new_k #update one component
            x[d - 1] = random.uniform(-1, 1) #sample the d-th coordinate (inside the unit cylinder)
            if old_radius_square + x[d - 1] ** 2 < 1.0: #is x inside the unit d-sphere?
                n_hits += 1 #count the accepted sample
            data.append(math.sqrt(old_radius_square + x[d - 1] ** 2)) #radius histogram data
        Q_avg_d += 2.0 * n_hits / float(n_trials) / n_runs #running average of Q

    print 'Analytical: Q(d=%i) =' % d, Q(d)
    if d != 1:
        print 'Numerical: <Q(d=%i)> =' % d, Q_avg_d
    else:
        print 'There is no cylinder in 1 dimension!'
    return Q_avg_d, data

n_trials = 500
n_runs = 500

# 4-dimensional sphere radius distribution
Q_avg_d, data = markov_sphere(4, n_trials, n_trials)
pylab.hist(data, 100, normed = 'True') #histogram of the sample
x = [a / 100.0 for a in range(0, 100)]
y = [4 * a**3 for a in x]
pylab.plot(x, y, c='red', linewidth=2.0, label='$P_4(r) \sim 4 r^3, 0<r<1$')
pylab.xlabel('$x$', fontsize = 20)
pylab.ylabel('$\pi_4(x)$', fontsize = 20)
pylab.legend()
pylab.grid()
pylab.savefig('plot_radius_4.png')
pylab.show()

# 20-dimensional sphere radius distribution
Q_avg_d, data = markov_sphere(20, n_trials, n_trials)
pylab.hist(data, 100, normed = 'True') #histogram of the sample
x = [a / 100.0 for a in range(0, 100)]
y = [20 * a**19 for a in x]
pylab.plot(x, y, c='red', linewidth=2.0, label='$P_{20}(r) \sim 20 r^{19}, 0<r<1$')
pylab.xlabel('$x$', fontsize = 20)
pylab.ylabel('$\pi_{20}(x)$', fontsize = 20)
pylab.legend()
pylab.grid()
pylab.savefig('plot_radius_20.png')
pylab.show()

# 200-dimensional sphere radius distribution
Q_avg_d, data = markov_sphere(200, n_trials, n_trials)
pylab.hist(data, 100, normed = 'True') #histogram of the sample
x = [a / 100.0 for a in range(0, 100)]
y = [200 * a**199 for a in x]
pylab.plot(x, y, c='red', linewidth=2.0, label='$P_{200}(r) \sim 200 r^{199}, 0<r<1$')
pylab.xlabel('$x$', fontsize = 20)
pylab.ylabel('$\pi_{200}(x)$', fontsize = 20)
pylab.legend()
pylab.grid()
pylab.savefig('plot_radius_200.png')
pylab.show()

Populating the interactive namespace from numpy and matplotlib
Analytical: Q(d=4) 1.1780972451
Numerical: <Q(d=4)> = 1.186448

Analytical: Q(d=20) 0.553539364154
Numerical: <Q(d=20)> = 0.551448

Analytical: Q(d=200) 0.177023967696
Numerical: <Q(d=200)> = 0.17836


We have now reached the point where we can numerically calculate $V_\text{sph}(n)=Q(n) \cdots Q(3)Q(2)\,V_\text{sph}(1) = 2\, Q(n) \cdots Q(3)Q(2)$ for any dimension $n$ through successive estimates of $Q(d)$ for $d\in [2,n]$.
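The telescoping identity itself can be sanity-checked exactly against the $\Gamma$-function formula before any Monte Carlo noise enters; a self-contained sketch (redefining V_sph and Q as in the cells above):

```python
import math

def V_sph(dim):  # volume of the unit dim-dimensional sphere
    return math.pi ** (dim / 2.0) / math.gamma(dim / 2.0 + 1.0)

def Q(dim):
    return V_sph(dim) / V_sph(dim - 1)

# V_sph(n) = 2 * Q(2) * Q(3) * ... * Q(n), since V_sph(1) = 2
n = 20
product = 2.0
for d in range(2, n + 1):
    product *= Q(d)

print(product)   # telescoping product of the exact Q's
print(V_sph(n))  # direct Gamma-function value, ~0.0258
```

Every intermediate volume cancels in the product, so the two numbers agree to floating-point precision; the Monte Carlo version below simply replaces each exact $Q(d)$ by its sampled estimate $\langle Q(d)\rangle$.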

In [8]:
import random, math

#Calculate the volume of the unit "dim"-dimensional sphere through the formula V_sph(n) = Q(n)...Q(2) V_sph(1)
def V_sph_markov(dim, n_runs, n_trials):
    V = []
    V_d = V_sph(1) #initialise with the volume of the unit 1-sphere, V_sph(1) = 2
    print 'The analytical value for the volume of the unit %i-sphere is =' % dim, V_sph(dim), '.'
    print '___________________________________________'
    for d in range(1, dim + 1):
        Q_avg_d, data = markov_sphere(d, n_runs, n_trials)
        V_d *= Q_avg_d
        V.append(V_d) #save the volume at each dimension d on the way up to d_max
        print 'Analytical: V_sph(d=%i)' % d, V_sph(d)
        print 'Numerical: V_sph(d=%i)' % d, V_d
        print '___________________________________________'
    print 'After %i runs' % n_runs, 'each consisting of %i trials' % n_trials, ', the numerical result for the volume of the unit %i-sphere (with analytical value' % dim, V_sph(dim), ') is found to be', V_d, '.'
    return V

d_max = 200 #maximum dimension (the dimension of the sphere in question)
n_trials = 1000 #number of Monte Carlo moves in each run
n_runs = 500
V = V_sph_markov(d_max, n_runs, n_trials)

The analytical value for the volume of the unit 200-sphere is = 5.55883284203e-109 .
___________________________________________
Analytical: Q(d=1) 2.0
There is no cylinder in 1 dimension!
Analytical: V_sph(d=1) 2.0
Numerical: V_sph(d=1) 2.0
___________________________________________
Analytical: Q(d=2) 1.57079632679
Numerical: <Q(d=2)> = 1.567504
Analytical: V_sph(d=2) 3.14159265359
Numerical: V_sph(d=2) 3.135008
___________________________________________
Analytical: Q(d=3) 1.33333333333
Numerical: <Q(d=3)> = 1.33864
Analytical: V_sph(d=3) 4.18879020479
Numerical: V_sph(d=3) 4.19664710912
___________________________________________
Analytical: Q(d=4) 1.1780972451
Numerical: <Q(d=4)> = 1.172344
Analytical: V_sph(d=4) 4.93480220054
Numerical: V_sph(d=4) 4.91991405849
___________________________________________
Analytical: Q(d=5) 1.06666666667
Numerical: <Q(d=5)> = 1.058728
Analytical: V_sph(d=5) 5.26378901391
Numerical: V_sph(d=5) 5.20885077132
___________________________________________
Analytical: Q(d=6) 0.981747704247
Numerical: <Q(d=6)> = 0.989828
Analytical: V_sph(d=6) 5.16771278005
Numerical: V_sph(d=6) 5.15586634128
___________________________________________
Analytical: Q(d=7) 0.914285714286
Numerical: <Q(d=7)> = 0.913492
Analytical: V_sph(d=7) 4.72476597033
Numerical: V_sph(d=7) 4.70984265582
___________________________________________
Analytical: Q(d=8) 0.859029241216
Numerical: <Q(d=8)> = 0.865424
Analytical: V_sph(d=8) 4.05871212642
Numerical: V_sph(d=8) 4.07601087057
___________________________________________
Analytical: Q(d=9) 0.812698412698
Numerical: <Q(d=9)> = 0.808216
Analytical: V_sph(d=9) 3.29850890274
Numerical: V_sph(d=9) 3.29429720177
___________________________________________
Analytical: Q(d=10) 0.773126317094
Numerical: <Q(d=10)> = 0.770212
Analytical: V_sph(d=10) 2.55016403988
Numerical: V_sph(d=10) 2.53730723637
___________________________________________
Analytical: Q(d=11) 0.738816738817
Numerical: <Q(d=11)> = 0.725876
Analytical: V_sph(d=11) 1.88410387939
Numerical: V_sph(d=11) 1.84177042751
___________________________________________
Analytical: Q(d=12) 0.708699124003
Numerical: <Q(d=12)> = 0.705664
Analytical: V_sph(d=12) 1.33526276885
Numerical: V_sph(d=12) 1.29967108696
___________________________________________
Analytical: Q(d=13) 0.681984681985
Numerical: <Q(d=13)> = 0.672892
Analytical: V_sph(d=13) 0.910628754783
Numerical: V_sph(d=13) 0.874538277045
___________________________________________
Analytical: Q(d=14) 0.658077758003
Numerical: <Q(d=14)> = 0.66044
Analytical: V_sph(d=14) 0.599264529321
Numerical: V_sph(d=14) 0.577580059691
___________________________________________
Analytical: Q(d=15) 0.636519036519
Numerical: <Q(d=15)> = 0.627748
Analytical: V_sph(d=15) 0.381443280823
Numerical: V_sph(d=15) 0.362574727311
___________________________________________
Analytical: Q(d=16) 0.616947898128
Numerical: <Q(d=16)> = 0.619088
Analytical: V_sph(d=16) 0.235330630359
Numerical: V_sph(d=16) 0.224465662782
___________________________________________
Analytical: Q(d=17) 0.599076740253
Numerical: <Q(d=17)> = 0.603332
Analytical: V_sph(d=17) 0.140981106917
Numerical: V_sph(d=17) 0.135427317257
___________________________________________
Analytical: Q(d=18) 0.582673014898
Numerical: <Q(d=18)> = 0.57796
Analytical: V_sph(d=18) 0.0821458866111
Numerical: V_sph(d=18) 0.0782715722821
___________________________________________
Analytical: Q(d=19) 0.567546385503
Numerical: <Q(d=19)> = 0.563264
Analytical: V_sph(d=19) 0.0466216010301
Numerical: V_sph(d=19) 0.0440875588899
___________________________________________
Analytical: Q(d=20) 0.553539364154
Numerical: <Q(d=20)> = 0.547916
Analytical: V_sph(d=20) 0.02580689139
Numerical: V_sph(d=20) 0.0241562789167
___________________________________________
Analytical: Q(d=21) 0.540520367146
Numerical: <Q(d=21)> = 0.542652
Analytical: V_sph(d=21) 0.013949150409
Numerical: V_sph(d=21) 0.0131084530667
___________________________________________
Analytical: Q(d=22) 0.528378483965
Numerical: <Q(d=22)> = 0.525592
Analytical: V_sph(d=22) 0.00737043094571
Numerical: V_sph(d=22) 0.00688969806424
___________________________________________
Analytical: Q(d=23) 0.517019481618
Numerical: <Q(d=23)> = 0.51786
Analytical: V_sph(d=23) 0.00381065638685
Numerical: V_sph(d=23) 0.00356789903955
___________________________________________
Analytical: Q(d=24) 0.5063627138
Numerical: <Q(d=24)> = 0.502836
Analytical: V_sph(d=24) 0.0019295743094
Numerical: V_sph(d=24) 0.00179406808145
___________________________________________
Analytical: Q(d=25) 0.496338702353
Numerical: <Q(d=25)> = 0.495672
Analytical: V_sph(d=25) 0.000957722408823
Numerical: V_sph(d=25) 0.000889269314068
___________________________________________
Analytical: Q(d=26) 0.486887224807
Numerical: <Q(d=26)> = 0.487064
Analytical: V_sph(d=26) 0.000466302805768
Numerical: V_sph(d=26) 0.000433131069187
___________________________________________
Analytical: Q(d=27) 0.477955787451
Numerical: <Q(d=27)> = 0.475656
Analytical: V_sph(d=27) 0.000222872124721
Numerical: V_sph(d=27) 0.000206021391845
___________________________________________
Analytical: Q(d=28) 0.46949839535
Numerical: <Q(d=28)> = 0.472048
Analytical: V_sph(d=28) 0.000104638104925
Numerical: V_sph(d=28) 9.72519859778e-05
___________________________________________
Analytical: Q(d=29) 0.461474553401
Numerical: <Q(d=29)> = 0.463496
Analytical: V_sph(d=29) 4.82878227389e-05
Numerical: V_sph(d=29) 4.50759064928e-05
___________________________________________
Analytical: Q(d=30) 0.453848448838
Numerical: <Q(d=30)> = 0.454848
Analytical: V_sph(d=30) 2.19153534478e-05
Numerical: V_sph(d=30) 2.05026859164e-05
___________________________________________
Analytical: Q(d=31) 0.446588277485
Numerical: <Q(d=31)> = 0.442332
Analytical: V_sph(d=31) 9.78713994674e-06
Numerical: V_sph(d=31) 9.06899406678e-06
___________________________________________
Analytical: Q(d=32) 0.439665684812
Numerical: <Q(d=32)> = 0.439904
Analytical: V_sph(d=32) 4.30306958703e-06
Numerical: V_sph(d=32) 3.98948676595e-06
___________________________________________
Analytical: Q(d=33) 0.433055299379
Numerical: <Q(d=33)> = 0.43276
Analytical: V_sph(d=33) 1.86346708826e-06
Numerical: V_sph(d=33) 1.72649029283e-06
___________________________________________
Analytical: Q(d=34) 0.426734341141
Numerical: <Q(d=34)> = 0.429012
Analytical: V_sph(d=34) 7.95205400148e-07
Numerical: V_sph(d=34) 7.4068505351e-07
___________________________________________
Analytical: Q(d=35) 0.420682290826
Numerical: <Q(d=35)> = 0.422132
Analytical: V_sph(d=35) 3.34528829411e-07
Numerical: V_sph(d=35) 3.12666863008e-07
___________________________________________
Analytical: Q(d=36) 0.414880609443
Numerical: <Q(d=36)> = 0.41544
Analytical: V_sph(d=36) 1.38789524622e-07
Numerical: V_sph(d=36) 1.29894321568e-07
___________________________________________
Analytical: Q(d=37) 0.409312499182
Numerical: <Q(d=37)> = 0.416892
Analytical: V_sph(d=37) 5.68082871833e-08
Numerical: V_sph(d=37) 5.41519035072e-08
___________________________________________
Analytical: Q(d=38) 0.403962698668
Numerical: <Q(d=38)> = 0.405004
Analytical: V_sph(d=38) 2.29484289973e-08
Numerical: V_sph(d=38) 2.1931737528e-08
___________________________________________
Analytical: Q(d=39) 0.398817306895
Numerical: <Q(d=39)> = 0.397208
Analytical: V_sph(d=39) 9.15223065016e-09
Numerical: V_sph(d=39) 8.71146160003e-09
___________________________________________
Analytical: Q(d=40) 0.393863631201
Numerical: <Q(d=40)> = 0.395044
Analytical: V_sph(d=40) 3.60473079746e-09
Numerical: V_sph(d=40) 3.44141063632e-09
___________________________________________
Analytical: Q(d=41) 0.389090055507
Numerical: <Q(d=41)> = 0.3891
Analytical: V_sph(d=41) 1.40256490607e-09
Numerical: V_sph(d=41) 1.33905287859e-09
___________________________________________
Analytical: Q(d=42) 0.384485925696
Numerical: <Q(d=42)> = 0.384944
Analytical: V_sph(d=42) 5.39266466261e-10
Numerical: V_sph(d=42) 5.15460371297e-10
___________________________________________
Analytical: Q(d=43) 0.380041449565
Numerical: <Q(d=43)> = 0.383916
Analytical: V_sph(d=43) 2.0494360954e-10
Numerical: V_sph(d=43) 1.97893483907e-10
___________________________________________
Analytical: Q(d=44) 0.375747609203
Numerical: <Q(d=44)> = 0.376504
Analytical: V_sph(d=44) 7.7007071306e-11
Numerical: V_sph(d=44) 7.45076882649e-11
___________________________________________
Analytical: Q(d=45) 0.371596084019
Numerical: <Q(d=45)> = 0.373652
Analytical: V_sph(d=45) 2.86155261391e-11
Numerical: V_sph(d=45) 2.78399467355e-11
___________________________________________
Analytical: Q(d=46) 0.367579182916
Numerical: <Q(d=46)> = 0.366528
Analytical: V_sph(d=46) 1.05184717169e-11
Numerical: V_sph(d=46) 1.02041199971e-11
___________________________________________
Analytical: Q(d=47) 0.363689784359
Numerical: <Q(d=47)> = 0.361596
Analytical: V_sph(d=47) 3.82546071052e-12
Numerical: V_sph(d=47) 3.68976897447e-12
___________________________________________
Analytical: Q(d=48) 0.359921283272
Numerical: <Q(d=48)> = 0.36048
Analytical: V_sph(d=48) 1.37686472804e-12
Numerical: V_sph(d=48) 1.33008791992e-12
___________________________________________
Analytical: Q(d=49) 0.356267543862
Numerical: <Q(d=49)> = 0.353968
Analytical: V_sph(d=49) 4.90532214888e-13
Numerical: V_sph(d=49) 4.70808560837e-13
___________________________________________
Analytical: Q(d=50) 0.352722857607
Numerical: <Q(d=50)> = 0.352916
Analytical: V_sph(d=50) 1.73021924584e-13
Numerical: V_sph(d=50) 1.66155874056e-13
___________________________________________
Analytical: Q(d=51) 0.349281905747
Numerical: <Q(d=51)> = 0.350532
Analytical: V_sph(d=51) 6.04334275546e-14
Numerical: V_sph(d=51) 5.82429508447e-14
___________________________________________
Analytical: Q(d=52) 0.34593972573
Numerical: <Q(d=52)> = 0.342352
Analytical: V_sph(d=52) 2.09063233531e-14
Numerical: V_sph(d=52) 1.99395907076e-14
___________________________________________
Analytical: Q(d=53) 0.34269168111
Numerical: <Q(d=53)> = 0.343668
Analytical: V_sph(d=53) 7.16442309573e-15
Numerical: V_sph(d=53) 6.85259925929e-15
___________________________________________
Analytical: Q(d=54) 0.339533434512
Numerical: <Q(d=54)> = 0.339988
Analytical: V_sph(d=54) 2.43256117999e-15
Numerical: V_sph(d=54) 2.32980151697e-15
___________________________________________
Analytical: Q(d=55) 0.336460923272
Numerical: <Q(d=55)> = 0.337352
Analytical: V_sph(d=55) 8.18461780536e-16
Numerical: V_sph(d=55) 7.85963201352e-16
___________________________________________
Analytical: Q(d=56) 0.333470337468
Numerical: <Q(d=56)> = 0.331608
Analytical: V_sph(d=56) 2.7293272616e-16
Numerical: V_sph(d=56) 2.60631685274e-16
___________________________________________
Analytical: Q(d=57) 0.330558100057
Numerical: <Q(d=57)> = 0.332512
Analytical: V_sph(d=57) 9.02201234027e-17
Numerical: V_sph(d=57) 8.66631629338e-17
___________________________________________
Analytical: Q(d=58) 0.327720848891
Numerical: <Q(d=58)> = 0.329124
Analytical: V_sph(d=58) 2.95670154285e-17
Numerical: V_sph(d=58) 2.85229268374e-17
___________________________________________
Analytical: Q(d=59) 0.324955420395
Numerical: <Q(d=59)> = 0.327268
Analytical: V_sph(d=59) 9.6079619284e-18
Numerical: V_sph(d=59) 9.33464122023e-18
___________________________________________
Analytical: Q(d=60) 0.322258834742
Numerical: <Q(d=60)> = 0.3227
Analytical: V_sph(d=60) 3.0962506153e-18
Numerical: V_sph(d=60) 3.01228872177e-18
___________________________________________
Analytical: Q(d=61) 0.319628282356
Numerical: <Q(d=61)> = 0.32114
Analytical: V_sph(d=61) 9.8964926591e-19
Numerical: V_sph(d=61) 9.67366400109e-19
___________________________________________
Analytical: Q(d=62) 0.317061111601
Numerical: <Q(d=62)> = 0.316496
Analytical: V_sph(d=62) 3.13779296345e-19
Numerical: V_sph(d=62) 3.06167596169e-19
___________________________________________
Analytical: Q(d=63) 0.314554817556
Numerical: <Q(d=63)> = 0.315032
Analytical: V_sph(d=63) 9.87007893147e-20
Numerical: V_sph(d=63) 9.64525901563e-20
___________________________________________
Analytical: Q(d=64) 0.312107031733
Numerical: <Q(d=64)> = 0.312644
Analytical: V_sph(d=64) 3.08052103827e-20
Numerical: V_sph(d=64) 3.01553235968e-20
___________________________________________
Analytical: Q(d=65) 0.309715512671
Numerical: <Q(d=65)> = 0.312776
Analytical: V_sph(d=65) 9.5408515266e-21
Numerical: V_sph(d=65) 9.43186149332e-21
___________________________________________
Analytical: Q(d=66) 0.307378137312
Numerical: <Q(d=66)> = 0.307188
Analytical: V_sph(d=66) 2.93264917062e-21
Numerical: V_sph(d=66) 2.89735466841e-21
___________________________________________
Analytical: Q(d=67) 0.305092893079
Numerical: <Q(d=67)> = 0.305388
Analytical: V_sph(d=67) 8.9473041985e-22
Numerical: V_sph(d=67) 8.84817347476e-22
___________________________________________
Analytical: Q(d=68) 0.302857870587
Numerical: <Q(d=68)> = 0.300768
Analytical: V_sph(d=68) 2.70976149705e-22
Numerical: V_sph(d=68) 2.66124743966e-22
___________________________________________
Analytical: Q(d=69) 0.300671256947
Numerical: <Q(d=69)> = 0.301144
Analytical: V_sph(d=69) 8.14747395346e-23
Numerical: V_sph(d=69) 8.01418698968e-23
___________________________________________
Analytical: Q(d=70) 0.298531329579
Numerical: <Q(d=70)> = 0.302312
Analytical: V_sph(d=70) 2.43227623203e-23
Numerical: V_sph(d=70) 2.42278489722e-23
___________________________________________
Analytical: Q(d=71) 0.296436450511
Numerical: <Q(d=71)> = 0.298316
Analytical: V_sph(d=71) 7.21015332887e-24
Numerical: V_sph(d=71) 7.227554994e-24
___________________________________________
Analytical: Q(d=72) 0.294385061112
Numerical: <Q(d=72)> = 0.296176
Analytical: V_sph(d=72) 2.12256142835e-24
Numerical: V_sph(d=72) 2.1406283279e-24
___________________________________________
Analytical: Q(d=73) 0.292375677217
Numerical: <Q(d=73)> = 0.291912
Analytical: V_sph(d=73) 6.20585335048e-25
Numerical: V_sph(d=73) 6.24875096455e-25
___________________________________________
Analytical: Q(d=74) 0.290406884611
Numerical: <Q(d=74)> = 0.292244
Analytical: V_sph(d=74) 1.80222253786e-25
Numerical: V_sph(d=74) 1.82615997688e-25
___________________________________________
Analytical: Q(d=75) 0.288477334854
Numerical: <Q(d=75)> = 0.2891
Analytical: V_sph(d=75) 5.19900354536e-26
Numerical: V_sph(d=75) 5.27942849317e-26
___________________________________________
Analytical: Q(d=76) 0.286585741392
Numerical: <Q(d=76)> = 0.287824
Analytical: V_sph(d=76) 1.48996028555e-26
Numerical: V_sph(d=76) 1.51954622662e-26
___________________________________________
Analytical: Q(d=77) 0.284730875959
Numerical: <Q(d=77)> = 0.287028
Analytical: V_sph(d=77) 4.24237697249e-27
Numerical: V_sph(d=77) 4.36152314334e-27
___________________________________________
Analytical: Q(d=78) 0.282911565221
Numerical: <Q(d=78)> = 0.282004
Analytical: V_sph(d=78) 1.20021750954e-27
Numerical: V_sph(d=78) 1.22996697251e-27
___________________________________________
Analytical: Q(d=79) 0.281126687656
Numerical: <Q(d=79)> = 0.28234
Analytical: V_sph(d=79) 3.37413172925e-28
Numerical: V_sph(d=79) 3.4726887502e-28
___________________________________________
Analytical: Q(d=80) 0.279375170655
Numerical: <Q(d=80)> = 0.283056
Analytical: V_sph(d=80) 9.42648627674e-29
Numerical: V_sph(d=80) 9.82965386876e-29
___________________________________________
Analytical: Q(d=81) 0.277655987809
Numerical: <Q(d=81)> = 0.276744
Analytical: V_sph(d=81) 2.61732035873e-29
Numerical: V_sph(d=81) 2.72029773026e-29
___________________________________________
Analytical: Q(d=82) 0.275968156379
Numerical: <Q(d=82)> = 0.278548
Analytical: V_sph(d=82) 7.22297074053e-30
Numerical: V_sph(d=82) 7.57733492167e-30
___________________________________________
Analytical: Q(d=83) 0.274310734943
Numerical: <Q(d=83)> = 0.276316
Analytical: V_sph(d=83) 1.98133841231e-30
Numerical: V_sph(d=83) 2.09373887622e-30
___________________________________________
Analytical: Q(d=84) 0.272682821184
Numerical: <Q(d=84)> = 0.272624
Analytical: V_sph(d=84) 5.40276947989e-31
Numerical: V_sph(d=84) 5.7080346739e-31
___________________________________________
Analytical: Q(d=85) 0.271083549826
Numerical: <Q(d=85)> = 0.27008
Analytical: V_sph(d=85) 1.4646019295e-31
Numerical: V_sph(d=85) 1.54162600473e-31
___________________________________________
Analytical: Q(d=86) 0.269512090705
Numerical: <Q(d=86)> = 0.269208
Analytical: V_sph(d=86) 3.94727928071e-32
Numerical: V_sph(d=86) 4.1501805348e-32
___________________________________________
Analytical: Q(d=87) 0.267967646955
Numerical: <Q(d=87)> = 0.266368
Analytical: V_sph(d=87) 1.05774314073e-32
Numerical: V_sph(d=87) 1.10547528869e-32
___________________________________________
Analytical: Q(d=88) 0.266449453311
Numerical: <Q(d=88)> = 0.269508
Analytical: V_sph(d=88) 2.8183508159e-33
Numerical: V_sph(d=88) 2.97934434106e-33
___________________________________________
Analytical: Q(d=89) 0.264956774517
Numerical: <Q(d=89)> = 0.266176
Analytical: V_sph(d=89) 7.46741141638e-34
Numerical: V_sph(d=89) 7.93029959325e-34
___________________________________________
Analytical: Q(d=90) 0.26348890383
Numerical: <Q(d=90)> = 0.264404
Analytical: V_sph(d=90) 1.96758004855e-34
Numerical: V_sph(d=90) 2.09680293365e-34
___________________________________________
Analytical: Q(d=91) 0.26204516161
Numerical: <Q(d=91)> = 0.26082
Analytical: V_sph(d=91) 5.15594831803e-35
Numerical: V_sph(d=91) 5.46888141155e-35
___________________________________________
Analytical: Q(d=92) 0.260624894005
Numerical: <Q(d=92)> = 0.263304
Analytical: V_sph(d=92) 1.34376848388e-35
Numerical: V_sph(d=92) 1.43997835119e-35
___________________________________________
Analytical: Q(d=93) 0.259227471701
Numerical: <Q(d=93)> = 0.257688
Analytical: V_sph(d=93) 3.48341706628e-36
Numerical: V_sph(d=93) 3.71065141361e-36
___________________________________________
Analytical: Q(d=94) 0.25785228875
Numerical: <Q(d=94)> = 0.25882
Analytical: V_sph(d=94) 8.98207063212e-37
Numerical: V_sph(d=94) 9.6039079887e-37
___________________________________________
Analytical: Q(d=95) 0.256498761472
Numerical: <Q(d=95)> = 0.256448
Analytical: V_sph(d=95) 2.30388999259e-37
Numerical: V_sph(d=95) 2.46290299589e-37
___________________________________________
Analytical: Q(d=96) 0.255166327409
Numerical: <Q(d=96)> = 0.257024
Analytical: V_sph(d=96) 5.87875148164e-38
Numerical: V_sph(d=96) 6.33025179615e-38
___________________________________________
Analytical: Q(d=97) 0.253854444344
Numerical: <Q(d=97)> = 0.255
Analytical: V_sph(d=97) 1.49234719081e-38
Numerical: V_sph(d=97) 1.61421420802e-38
___________________________________________
Analytical: Q(d=98) 0.252562589374
Numerical: <Q(d=98)> = 0.254532
Analytical: V_sph(d=98) 3.76911070755e-39
Numerical: V_sph(d=98) 4.10869170795e-39
___________________________________________
Analytical: Q(d=99) 0.251290258037
Numerical: <Q(d=99)> = 0.251712
Analytical: V_sph(d=99) 9.47140802271e-40
Numerical: V_sph(d=99) 1.03420700719e-39
___________________________________________
Analytical: Q(d=100) 0.25003696348
Numerical: <Q(d=100)> = 0.252564
Analytical: V_sph(d=100) 2.36820210188e-40
Numerical: V_sph(d=100) 2.61203458564e-40
___________________________________________
Analytical: Q(d=101) 0.24880223568
Numerical: <Q(d=101)> = 0.249248
Analytical: V_sph(d=101) 5.89213977491e-41
Numerical: V_sph(d=101) 6.51044396403e-41
___________________________________________
Analytical: Q(d=102) 0.247585620701
Numerical: <Q(d=102)> = 0.248128
Analytical: V_sph(d=102) 1.45880908343e-41
Numerical: V_sph(d=102) 1.61542343991e-41
___________________________________________
Analytical: Q(d=103) 0.246386679994
Numerical: <Q(d=103)> = 0.246268
Analytical: V_sph(d=103) 3.59431126811e-42
Numerical: V_sph(d=103) 3.97827099699e-42
___________________________________________
Analytical: Q(d=104) 0.245204989733
Numerical: <Q(d=104)> = 0.24584
Analytical: V_sph(d=104) 8.81343057595e-43
Numerical: V_sph(d=104) 9.78018141899e-43
___________________________________________
Analytical: Q(d=105) 0.244040140185
Numerical: <Q(d=105)> = 0.243864
Analytical: V_sph(d=105) 2.15083083326e-43
Numerical: V_sph(d=105) 2.38503416156e-43
___________________________________________
Analytical: Q(d=106) 0.242891735113
Numerical: <Q(d=106)> = 0.243004
Analytical: V_sph(d=106) 5.22419033025e-44
Numerical: V_sph(d=106) 5.79572841396e-44
___________________________________________
Analytical: Q(d=107) 0.241759391211
Numerical: <Q(d=107)> = 0.242156
Analytical: V_sph(d=107) 1.26299707381e-44
Numerical: V_sph(d=107) 1.40347040981e-44
___________________________________________
Analytical: Q(d=108) 0.240642737565
Numerical: <Q(d=108)> = 0.241412
Analytical: V_sph(d=108) 3.03931073379e-45
Numerical: V_sph(d=108) 3.38814598573e-45
___________________________________________
Analytical: Q(d=109) 0.239541415145
Numerical: <Q(d=109)> = 0.241616
Analytical: V_sph(d=109) 7.28040794237e-46
Numerical: V_sph(d=109) 8.18630280489e-46
___________________________________________
Analytical: Q(d=110) 0.238455076315
Numerical: <Q(d=110)> = 0.237576
Analytical: V_sph(d=110) 1.7360502315e-46
Numerical: V_sph(d=110) 1.94486907517e-46
___________________________________________
Analytical: Q(d=111) 0.237383384378
Numerical: <Q(d=111)> = 0.239772
Analytical: V_sph(d=111) 4.12109479403e-47
Numerical: V_sph(d=111) 4.66325147893e-47
___________________________________________
Analytical: Q(d=112) 0.236326013133
Numerical: <Q(d=112)> = 0.235712
Analytical: V_sph(d=112) 9.73921902418e-48
Numerical: V_sph(d=112) 1.0991843326e-47
___________________________________________
Analytical: Q(d=113) 0.235282646463
Numerical: <Q(d=113)> = 0.235548
Analytical: V_sph(d=113) 2.29146922649e-48
Numerical: V_sph(d=113) 2.58910671175e-48
___________________________________________
Analytical: Q(d=114) 0.234252977931
Numerical: <Q(d=114)> = 0.23726
Analytical: V_sph(d=114) 5.36783490142e-49
Numerical: V_sph(d=114) 6.14291458431e-49
___________________________________________
Analytical: Q(d=115) 0.233236710407
Numerical: <Q(d=115)> = 0.231556
Analytical: V_sph(d=115) 1.25197615441e-49
Numerical: V_sph(d=115) 1.42242872948e-49
___________________________________________
Analytical: Q(d=116) 0.232233555707
Numerical: <Q(d=116)> = 0.23398
Analytical: V_sph(d=116) 2.90750874e-50
Numerical: V_sph(d=116) 3.32819874125e-50
___________________________________________
Analytical: Q(d=117) 0.231243234249
Numerical: <Q(d=117)> = 0.231212
Analytical: V_sph(d=117) 6.72341724645e-51
Numerical: V_sph(d=117) 7.69519487361e-51
___________________________________________
Analytical: Q(d=118) 0.230265474726
Numerical: <Q(d=118)> = 0.23286
Analytical: V_sph(d=118) 1.54817086404e-51
Numerical: V_sph(d=118) 1.79190307827e-51
___________________________________________
Analytical: Q(d=119) 0.229300013793
Numerical: <Q(d=119)> = 0.231596
Analytical: V_sph(d=119) 3.54995600478e-52
Numerical: V_sph(d=119) 4.14997585315e-52
___________________________________________
Analytical: Q(d=120) 0.22834659577
Numerical: <Q(d=120)> = 0.229544
Analytical: V_sph(d=120) 8.10620368827e-53
Numerical: V_sph(d=120) 9.52602057235e-53
___________________________________________
Analytical: Q(d=121) 0.227404972357
Numerical: <Q(d=121)> = 0.227776
Analytical: V_sph(d=121) 1.84339102565e-53
Numerical: V_sph(d=121) 2.16979886189e-53
___________________________________________
Analytical: Q(d=122) 0.226474902362
Numerical: <Q(d=122)> = 0.225484
Analytical: V_sph(d=122) 4.1748180255e-54
Numerical: V_sph(d=122) 4.89254926574e-54
___________________________________________
Analytical: Q(d=123) 0.225556151444
Numerical: <Q(d=123)> = 0.226272
Analytical: V_sph(d=123) 9.4165588681e-55
Numerical: V_sph(d=123) 1.10704690746e-54
___________________________________________
Analytical: Q(d=124) 0.224648491859
Numerical: <Q(d=124)> = 0.226892
Analytical: V_sph(d=124) 2.11541574822e-55
Numerical: V_sph(d=124) 2.51180086927e-55
___________________________________________
Analytical: Q(d=125) 0.223751702232
Numerical: <Q(d=125)> = 0.22336
Analytical: V_sph(d=125) 4.73327874594e-56
Numerical: V_sph(d=125) 5.6103584216e-56
___________________________________________
Analytical: Q(d=126) 0.222865567321
Numerical: <Q(d=126)> = 0.222448
Analytical: V_sph(d=126) 1.054884853e-56
Numerical: V_sph(d=126) 1.24801301017e-56
___________________________________________
Analytical: Q(d=127) 0.221989877805
Numerical: <Q(d=127)> = 0.222652
Analytical: V_sph(d=127) 2.34173759616e-57
Numerical: V_sph(d=127) 2.7787259274e-57
___________________________________________
Analytical: Q(d=128) 0.221124430076
Numerical: <Q(d=128)> = 0.22286
Analytical: V_sph(d=128) 5.1781539134e-58
Numerical: V_sph(d=128) 6.1926686018e-58
___________________________________________
Analytical: Q(d=129) 0.220269026039
Numerical: <Q(d=129)> = 0.22174
Analytical: V_sph(d=129) 1.14058691918e-58
Numerical: V_sph(d=129) 1.37316233576e-58
___________________________________________
Analytical: Q(d=130) 0.219423472922
Numerical: <Q(d=130)> = 0.219764
Analytical: V_sph(d=130) 2.50271542977e-59
Numerical: V_sph(d=130) 3.01771647557e-59
___________________________________________
Analytical: Q(d=131) 0.218587583092
Numerical: <Q(d=131)> = 0.219572
Analytical: V_sph(d=131) 5.4706251696e-60
Numerical: V_sph(d=131) 6.62606041973e-60
___________________________________________
Analytical: Q(d=132) 0.217761173884
Numerical: <Q(d=132)> = 0.217624
Analytical: V_sph(d=132) 1.19128975882e-60
Numerical: V_sph(d=132) 1.44198977278e-60
___________________________________________
Analytical: Q(d=133) 0.21694406743
Numerical: <Q(d=133)> = 0.217112
Analytical: V_sph(d=133) 2.58443245765e-61
Numerical: V_sph(d=133) 3.13073283549e-61
___________________________________________
Analytical: Q(d=134) 0.216136090497
Numerical: <Q(d=134)> = 0.217476
Analytical: V_sph(d=134) 5.58589127551e-62
Numerical: V_sph(d=134) 6.8085925413e-62
___________________________________________
Analytical: Q(d=135) 0.215337074338
Numerical: <Q(d=135)> = 0.218112
Analytical: V_sph(d=135) 1.20284948484e-62
Numerical: V_sph(d=135) 1.48503573637e-62
___________________________________________
Analytical: Q(d=136) 0.214546854538
Numerical: <Q(d=136)> = 0.214568
Analytical: V_sph(d=136) 2.58067573454e-63
Numerical: V_sph(d=136) 3.18641147881e-63
___________________________________________
Analytical: Q(d=137) 0.213765270876
Numerical: <Q(d=137)> = 0.21528
Analytical: V_sph(d=137) 5.51658847436e-64
Numerical: V_sph(d=137) 6.85970663158e-64
___________________________________________
Analytical: Q(d=138) 0.212992167186
Numerical: <Q(d=138)> = 0.2138
Analytical: V_sph(d=138) 1.17499013463e-64
Numerical: V_sph(d=138) 1.46660527783e-64
___________________________________________
[... analogous output for d = 139, ..., 199 ...]
___________________________________________
Analytical: Q(d=200) 0.177023967696
Numerical: <Q(d=200)> = 0.176428
Analytical: V_sph(d=200) 5.55883284203e-109
Numerical: V_sph(d=200) 8.86447682334e-109
After 500 runs of 1000 trials each, the numerical estimate for the volume of the unit 200-sphere is 8.86447682334e-109, compared with the analytical value 5.55883284203e-109.
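The quantity $Q(d)$ printed above is the volume ratio $V_{sph}(d)/V_{sph}(d-1)$ (for instance, $V_{sph}(139)/V_{sph}(138) \approx 0.212$, matching the listed $Q(139)$). As a minimal self-contained sketch, with illustrative names `v_sph_exact` and `q_estimate` (not the notebook's own `V_sph`/`V_sph_markov`), this ratio can be estimated by direct sampling in the cylinder $B_{d-1}\times[-1,1]$, whose volume is $2\,V_{sph}(d-1)$:

```python
import math, random

def v_sph_exact(d):
    #analytical volume of the unit d-sphere (d-ball)
    return math.pi ** (d / 2.0) / math.gamma(d / 2.0 + 1.0)

def q_estimate(d, n_trials, rng=random):
    #Monte Carlo estimate of Q(d) = V_sph(d) / V_sph(d-1):
    #a point drawn uniformly in the cylinder B_{d-1} x [-1, 1]
    #(volume 2 V_sph(d-1)) lands inside the unit d-ball with
    #probability V_sph(d) / (2 V_sph(d-1)) = Q(d) / 2
    hits = 0
    for _ in range(n_trials):
        #uniform point in the unit (d-1)-ball: Gaussian direction,
        #radius u ** (1 / (d - 1))
        g = [rng.gauss(0.0, 1.0) for _ in range(d - 1)]
        norm = math.sqrt(sum(c * c for c in g))
        r = rng.uniform(0.0, 1.0) ** (1.0 / (d - 1))
        x_sq = sum((r * c / norm) ** 2 for c in g)
        z = rng.uniform(-1.0, 1.0)
        if x_sq + z * z <= 1.0:
            hits += 1
    return 2.0 * float(hits) / n_trials

print('V_sph(3) = %f' % v_sph_exact(3))        # 4.188790 (= 4*pi/3)
print('Q(3) estimate = %f' % q_estimate(3, 100000))  # ~4/3
```

Chaining such ratio estimates from $d=1$ upward is what makes very high dimensions reachable: each $Q(d)$ stays of order $0.2$, whereas directly sampling the $d$-ball inside the enclosing hypercube would almost never score a hit for large $d$.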


As the following plot indicates, the Monte Carlo algorithm tracks the analytical volume of the unit hypersphere closely (on a logarithmic scale) up to very high dimensions, even as the volume itself drops by more than a hundred orders of magnitude.

In [11]:
import pylab

#analytical volumes of the unit d-sphere for d = 0, ..., 199
volume, dimension = [], []
for d in range(200):
    dimension.append(d)
    volume.append(V_sph(d))

#V holds the Monte Carlo estimates computed in the previous cell
pylab.semilogy(dimension, volume, 'b', label='Analytical', linewidth=4.0)
pylab.semilogy(dimension, V, 'r--', label='Monte Carlo', linewidth=3.0)
pylab.title('Volume of the unit hypersphere in $d$ dimensions')
pylab.xlabel('$d$', fontsize=15)
pylab.ylabel('$V_{sph}(d)$', fontsize=15)
pylab.legend()
pylab.grid()
pylab.savefig('hypersphere_volumes.png')
pylab.show()

In [164]:
import sys, os, math

class HiddenPrints:
    #context manager that temporarily redirects stdout to os.devnull,
    #suppressing the print output of V_sph_markov
    def __enter__(self):
        self._original_stdout = sys.stdout
        sys.stdout = open(os.devnull, 'w')

    def __exit__(self, exc_type, exc_val, exc_tb):
        sys.stdout.close()
        sys.stdout = self._original_stdout

#Error calculation:
d = 20 #dimension of the sphere
n_runs = 10
Error = []
trials = [10, 100, 1000, 10000, 100000]

for trial in trials:
    V_avg, V_avg_square = 0.0, 0.0
    for run in range(n_runs):
        with HiddenPrints():
            v = V_sph_markov(d, 1, trial)
        V_avg += v[d - 1] / n_runs
        V_avg_square += v[d - 1] ** 2.0 / n_runs
    print V_avg
    #standard error of the mean over the n_runs independent runs
    Error.append(math.sqrt(V_avg_square - V_avg ** 2) / math.sqrt(n_runs))

print Error
Dif = []
for i in range(len(trials)):
    Dif.append(abs(Error[i] - V_sph(d)))
print Dif

910059.1104
160100.804858
1.6837129243
0.0445860220085
0.0271548828196
[31358.716731116598, 13488.652683320564, 0.30053926397132036, 0.0030817988258178227, 0.0006134494292910471]
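The shrinking errors in the output above reflect the usual $1/\sqrt{N}$ scaling of direct-sampling Monte Carlo. A minimal self-contained sketch of this scaling, using the hypothetical helper `mc_disk_area` (direct sampling of the unit-disk area, whose exact value is $\pi$) rather than the notebook's `V_sph_markov`:

```python
import math, random

def mc_disk_area(n_trials, rng=random):
    #direct-sampling estimate of the unit-disk area (exact value: pi):
    #fraction of points in [-1, 1]^2 that land inside the disk, times 4
    hits = sum(1 for _ in range(n_trials)
               if rng.uniform(-1.0, 1.0) ** 2 + rng.uniform(-1.0, 1.0) ** 2 <= 1.0)
    return 4.0 * float(hits) / n_trials

n_runs = 100
for n_trials in [100, 1000, 10000]:
    estimates = [mc_disk_area(n_trials) for _ in range(n_runs)]
    mean = sum(estimates) / n_runs
    var = sum((e - mean) ** 2 for e in estimates) / n_runs
    #the empirical spread shrinks roughly as 1 / sqrt(n_trials)
    print('%6d  %f  %f' % (n_trials, mean, math.sqrt(var)))
```

Each hundredfold increase in the number of trials cuts the spread of the estimates by roughly a factor of ten, consistent with the error columns printed by the cell above.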