This tutorial uses the same example as the problem formulation.
Lagrange polynomials are not a method for creating orthogonal polynomials. Instead, they form an interpolation scheme: each polynomial in the expansion evaluates to 1 at exactly one of the interpolation points and to 0 at all the other interpolation points.
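This defining property can be illustrated with a minimal, self-contained sketch using plain NumPy (the nodes below are arbitrary placeholders, not the tutorial's samples): evaluating every basis polynomial at every node yields the identity matrix.

```python
import numpy as np

# Minimal 1D sketch of the Lagrange property: basis polynomial i
# is 1 at node i and 0 at every other node. Nodes are arbitrary.
nodes = np.array([1.0, 2.0, 4.0])

def lagrange_basis(i, x):
    """Evaluate the i-th Lagrange basis polynomial at x."""
    others = np.delete(nodes, i)
    return np.prod((x - others) / (nodes[i] - others))

matrix = np.array([[lagrange_basis(i, x) for x in nodes]
                   for i in range(len(nodes))])
print(matrix.round(8))  # identity matrix
```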
To summarize, we need to do the following: generate samples or nodes, construct a Lagrange polynomial expansion from them, evaluate the model at those points, and combine the evaluations with the expansion.
In chaospy, low-discrepancy sequences can be created using the distribution.sample method, and quadrature points using chaospy.generate_quadrature.
For example:
from problem_formulation import joint
samples = joint.sample(5, rule="halton")
from matplotlib import pyplot
pyplot.rc("figure", figsize=[6, 4])
pyplot.scatter(*samples)
pyplot.xlabel("Parameter $I$ (normal)")
pyplot.ylabel("Parameter $a$ (uniform)")
pyplot.axis([1, 2, 0.1, 0.2])
pyplot.show()
Creating an expansion of Lagrange polynomial terms can be done using the following constructor:
import chaospy
polynomial_expansion = chaospy.expansion.lagrange(samples)
polynomial_expansion[0].round(2)
polynomial(-694.24*q1**2+28.97*q0*q1+170.1*q1-6.65*q0-5.96)
One can verify that the returned polynomials satisfy the property of evaluating to 0 at all but one of the samples used in the construction:
polynomial_expansion(*samples).round(8)
array([[ 1., -0., -0., -0., -0.],
       [ 0.,  1.,  0.,  0.,  0.],
       [ 0.,  0.,  1.,  0.,  0.],
       [-0., -0., -0.,  1.,  0.],
       [-0., -0., -0., -0.,  1.]])
Fitting the polynomials to the model evaluations does not require regression. Instead, it is enough to multiply the Lagrange polynomial expansion with the evaluations and sum up the terms:
import numpy
from problem_formulation import model_solver

model_evaluations = numpy.array(
    [model_solver(sample) for sample in samples.T])
model_approximation = chaospy.sum(
    model_evaluations.T*polynomial_expansion, axis=-1).T
model_approximation.shape
(1000,)
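The multiply-and-sum fit works because of the Kronecker-delta property above. Here is a one-dimensional sketch with plain NumPy (toy nodes and a stand-in model, not the tutorial's problem_formulation) showing that the resulting approximation reproduces the evaluations exactly at the nodes:

```python
import numpy as np

# Toy 1D sketch (assumed nodes and a stand-in model): build Lagrange
# basis coefficients from the inverse Vandermonde matrix, then fit by
# multiplying evaluations with the basis and summing.
nodes = np.array([0.0, 0.5, 1.0])
evals = np.sin(nodes)  # stand-in for model evaluations

# Column i of the inverse Vandermonde matrix holds the monomial
# coefficients of the i-th Lagrange basis polynomial.
vandermonde = np.vander(nodes, increasing=True)
coeffs = np.linalg.inv(vandermonde)

def approximation(x):
    powers = np.array([x**k for k in range(len(nodes))])
    basis = powers @ coeffs  # basis[i] evaluates L_i at x
    return np.sum(evals * basis)

# At the nodes, the approximation matches the evaluations exactly.
recovered = np.array([approximation(x) for x in nodes])
print(np.allclose(recovered, evals))  # True
```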
The results are close to what we are used to from the other methods:
expected = chaospy.E(model_approximation, joint)
variance = chaospy.Var(model_approximation, joint)
from problem_formulation import coordinates
pyplot.fill_between(coordinates, expected-variance**0.5,
                    expected+variance**0.5, alpha=0.3)
pyplot.plot(coordinates, expected)
pyplot.axis([0, 10, 0, 2])
pyplot.xlabel("coordinates $t$")
pyplot.ylabel("Model evaluations $u$")
pyplot.show()
It is worth noting that the construction of Lagrange polynomials is not always numerically stable. For example, when using nodes from a structured grid, the expansion construction most often fails:
nodes, _ = chaospy.generate_quadrature(1, joint)
try:
    chaospy.expansion.lagrange(nodes)
except numpy.linalg.LinAlgError as err:
    error = err.args[0]
error
'Lagrange abscissas resulted in invertible matrix'
It is impossible to avoid the issue entirely, but in the case of a structured grid it is often possible to work around the problem. To do so, the following steps can be taken:

1. Construct Lagrange polynomials along each marginal direction separately.
2. Rotate the polynomial variables so that each marginal expansion gets its own dimension.
3. Combine the marginal expansions into a joint expansion using an outer product.
We begin by constructing the marginal Lagrange polynomials:
nodes, _ = list(zip(*[chaospy.generate_quadrature(1, marginal)
                      for marginal in joint]))
expansions = [chaospy.expansion.lagrange(nodes_)
              for nodes_ in nodes]
[expansion_.round(4) for expansion_ in expansions]
[polynomial([-0.3041*q0+0.9562, 0.3041*q0+0.0438]),
 polynomial([-10.0*q0+2.0, 10.0*q0-1.0])]
Then we rotate the dimensions:
vars_ = chaospy.variable(len(joint))
expansions = [
    expans(q0=var)
    for expans, var in zip(expansions, vars_)]
[expans.round(4) for expans in expansions]
[polynomial([-0.3041*q0+0.9562, 0.3041*q0+0.0438]),
 polynomial([-10.0*q1+2.0, 10.0*q1-1.0])]
And finally construct the final expansion using an outer product:
expansion = chaospy.outer(*expansions).flatten()
expansion.round(2)
polynomial([3.04*q0*q1-9.56*q1-0.61*q0+1.91,
            -3.04*q0*q1+9.56*q1+0.3*q0-0.96,
            -3.04*q0*q1-0.44*q1+0.61*q0+0.09,
            3.04*q0*q1+0.44*q1-0.3*q0-0.04])
The end result can be verified to have the property we are looking for:
nodes, _ = chaospy.generate_quadrature(1, joint)
expansion(*nodes).round(8)
array([[ 1., -0., -0., -0.],
       [ 0.,  1.,  0.,  0.],
       [ 0.,  0.,  1.,  0.],
       [ 0.,  0.,  0.,  1.]])