#!/usr/bin/env python
# coding: utf-8

# # Adaptive grid discretizations
# ## A set of tools for discretizing anisotropic PDEs on Cartesian grids
#
# Volume: Algorithmic tools

# This volume presents some of the algorithmic tools at the foundation of our numerical schemes, in particular *tensor decomposition* and *automatic differentiation*.
#
# **Tensor decomposition.** Tensors come in various flavors, and admit numerous decompositions and normal forms. Our numerical schemes are based on a specific technique, known as Voronoi's first reduction of quadratic forms. It allows one to decompose a symmetric positive definite tensor $D \in S_d^{++}$ in the form
# $$
# D = \sum_{1 \leq i \leq I} \rho_i e_i e_i^T,
# $$
# where the weights $\rho_i \geq 0$ are non-negative, and the offsets $e_i \in Z^d\setminus \{0\}$ have integer coordinates. In addition, among all such possible decompositions, this one maximizes the sum of the coefficients
# $$
# \sum_{1 \leq i \leq I} \rho_i.
# $$
# This decomposition is particularly suitable for the construction of finite difference schemes on Cartesian grids.
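# The sketch below illustrates this decomposition in dimension two, using Selling's algorithm (the special case of Voronoi's first reduction in dimensions two and three, presented in notebook A.I below). It is a minimal, self-contained NumPy implementation written for illustration only; the function name and interface are ad hoc, and the AGD package provides its own, more complete routines.

# In[ ]:


import numpy as np

def selling_decomposition_2d(D, maxiter=100):
    """Illustrative sketch of Selling's algorithm: decompose a 2x2 symmetric
    positive definite matrix D as sum_i rho_i e_i e_i^T, with non-negative
    weights rho_i and integer offsets e_i. (Not the AGD implementation.)"""
    D = np.asarray(D, dtype=float)
    # Start from the canonical superbase: three vectors summing to zero.
    b = [np.array([1, 0]), np.array([0, 1]), np.array([-1, -1])]
    for _ in range(maxiter):
        # Look for a pair of superbase vectors which is not D-obtuse.
        for i, j, k in [(0, 1, 2), (0, 2, 1), (1, 2, 0)]:
            if b[i] @ D @ b[j] > 1e-12:
                # Selling's update: flip b[i], and replace b[k] with b[i] - b[j].
                b[i], b[j], b[k] = -b[i], b[j], b[i] - b[j]
                break
        else:
            break  # The superbase is D-obtuse: the reduction is complete.
    perp = lambda v: np.array([-v[1], v[0]])  # rotation by 90 degrees
    # The weight of the pair {i, j} is the negated scalar product <e_i, D e_j>,
    # and the associated offset is the perpendicular of the third vector.
    weights = np.array([-(b[1] @ D @ b[2]), -(b[0] @ D @ b[2]), -(b[0] @ D @ b[1])])
    offsets = np.array([perp(b[0]), perp(b[1]), perp(b[2])])
    return weights, offsets

# Example: decompose a tensor and check the reconstruction.
D = np.array([[2., 1.], [1., 3.]])
weights, offsets = selling_decomposition_2d(D)
assert np.allclose(D, sum(w * np.outer(e, e) for w, e in zip(weights, offsets)))
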
# **Automatic differentiation.** Numerical analysis is, more often than not, about differentiating various kinds of functions. These functions may depend on just one or on a million variables, and may have varying degrees of complexity. Sometimes first order differentiation is enough, while in other circumstances second order or higher order derivatives are needed.
#
# The derivatives of a given function can in many cases be computed by hand, and implemented numerically as such. However, this manual process is time consuming, and it is one of the most frequent sources of programming bugs.
#
# Automatic differentiation is a numerical tool which abstracts function differentiation. It often works by subclassing the numerical data, in our case numpy *ndarray* tensors, so as to replace scalars with first or second order Taylor expansions.
# Automatic differentiation can be extremely powerful, the tradeoff being a modest increase of CPU time, and a huge reduction of development time. The first encounter with such tools, in my experience, feels almost magical.
#
# However, in contrast with what is often claimed, automatic differentiation is not a *one size fits all* or *univocal* procedure. Several variants exist, depending on the choice of the following features:
# * *First order* or *Second order*.
# Depending on this choice, a basic scalar $a_0 \in R$ is replaced with the Taylor expansion
# $$
# a_0 + \langle a_1, h \rangle + O(h^2), \quad \text{ or } \quad a_0 + \langle a_1, h \rangle + \frac 1 2 \langle h, a_2 h \rangle + O(h^3),
# $$
# where $a_1 \in R^n$ is an $n$-dimensional vector, and $a_2 \in S_n(R)$ is an $n\times n$ symmetric matrix.
# The variable $h\in R^n$ is an abstract $n$-dimensional perturbation, which has no existence numerically, in contrast with $a_0, a_1, a_2$.
# First order differentiation, a.k.a. the computation of a Jacobian or gradient, is the most common among numerical packages. Second order differentiation, a.k.a. the computation of a Hessian, is required e.g. for optimization problems solved with Newton's method. If needed, higher order differentiation can be achieved by recursive first or second order differentiation.
# * *Dense*, *Sparse* or *Reverse*. There are three main implementation strategies for automatic differentiation, and each is subject to a different limitation. A minimal sketch of the dense first order variant is given after this list.
#
# Method | Limitation
# --- | ---
# Dense | Low dimensional input
# Sparse | Simple function
# Reverse | Low dimensional output
#
# + Dense automatic differentiation stores the gradient $a_1 \in R^n$ and Hessian $a_2 \in S_n(R)$ in dense matrix form. It is adequate if the function of interest is differentiated w.r.t. only a small number $n$ of parameters.
# + Sparse automatic differentiation stores the gradient $a_1 \in R^n$ and Hessian $a_2 \in S_n(R)$ in sparse matrix form. It is adequate if the differentiated function is simple, in the sense that it involves mostly local operations, so that the sparsity of the derivatives is not destroyed.
# + Reverse automatic differentiation stores a history of the computation, and computes on demand the gradient $a_1\in R^n$ and the Hessian-vector product $a_2 v$ (where $v\in R^n$ is given) using adjoint operators. It is adequate if the function output is low dimensional, e.g. in an optimization problem.
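# As an illustration of the first of these strategies, the following is a minimal, self-contained sketch of dense first order (forward) automatic differentiation: a scalar is paired with its gradient, and each elementary operation propagates both. The class name `DenseAD1` and its tiny set of operations are purely illustrative assumptions, unrelated to the AGD implementation presented in the notebooks of section C, which subclasses numpy's *ndarray* and supports far more operations.

# In[ ]:


import numpy as np

class DenseAD1:
    """Illustrative dense first order AD: a value and its gradient, i.e. the
    Taylor expansion a0 + <a1, h> + O(h^2). Not the AGD implementation."""
    def __init__(self, value, grad):
        self.value, self.grad = float(value), np.asarray(grad, dtype=float)

    @classmethod
    def variable(cls, value, index, ndim):
        """Independent variable number `index` among `ndim` variables."""
        grad = np.zeros(ndim); grad[index] = 1.
        return cls(value, grad)

    def _lift(self, other):
        """Promote a plain number to a constant, with zero gradient."""
        if isinstance(other, DenseAD1): return other
        return DenseAD1(other, np.zeros_like(self.grad))

    def __add__(self, other):
        other = self._lift(other)
        return DenseAD1(self.value + other.value, self.grad + other.grad)
    __radd__ = __add__

    def __mul__(self, other):  # product rule
        other = self._lift(other)
        return DenseAD1(self.value * other.value,
                        self.value * other.grad + other.value * self.grad)
    __rmul__ = __mul__

    def sin(self):  # chain rule, for one elementary function
        return DenseAD1(np.sin(self.value), np.cos(self.value) * self.grad)

# Differentiate f(x, y) = x*y + sin(x) at the point (x, y) = (1, 2).
x = DenseAD1.variable(1., index=0, ndim=2)
y = DenseAD1.variable(2., index=1, ndim=2)
f = x * y + x.sin()
print(f.value)  # 2 + sin(1)
print(f.grad)   # gradient (y + cos(x), x) at (1, 2), i.e. [2 + cos(1), 1.]
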
# **Which AD methods are implemented in the AGD package?**
# We provide Python routines for the six possible flavors of automatic differentiation: First/Second order with Dense/Sparse/Reverse implementations. The HamiltonFastMarching C++ library, on the other hand, only provides First order automatic differentiation, with Dense/Reverse implementations.
#
# **Why did I not use an existing AD package?**
# There exist many high quality automatic differentiation packages. However, in our experience, they usually specialize in a particular implementation modality: for instance, Reverse First order automatic differentiation is extremely popular (for good reason) in optimization packages.
# In contrast, the numerical schemes presented in these notebooks require several of the AD implementations, in particular Sparse automatic differentiation, which is not so easily found. We also need the ability to compose the various AD methods together, so as to differentiate a given function *by parts*, using Dense or Sparse differentiation depending on the code section.
#
# **Should you use the AGD package for AD?**
# Only, I would say, if it provides functionality not available elsewhere, or if you are using one of the provided notebooks as a starting point for a personal project.

# **Github repository** to run and modify the examples on your computer.
# [AdaptiveGridDiscretizations](https://github.com/Mirebeau/AdaptiveGridDiscretizations)

# # Table of contents
# [**Main summary**](../Summary.ipynb), including the other volumes of this work.
#
# ### A. Tensor decomposition techniques
#
#  * I. [Selling's algorithm, in dimension 2 and 3](TensorSelling.ipynb)
#   1. Decomposing a tensor, or a tensor field
#   2. Under the hood: obtuse superbases
#   3. Properties of the decomposition
#   4. Smooth two-dimensional decomposition
#   5. Convolved decomposition
#   6. Smooth three-dimensional decomposition
#
#
#  * II. [Voronoi's reduction, in dimension 4 and 5](TensorVoronoi.ipynb)
#   1. Computing the decomposition of a tensor
#   2. Under the hood: Voronoi's first reduction of tensors
#   3. Properties of Voronoi's reduction
#
#
#  * III. [Voronoi's reduction, in dimension 6, application to Hooke tensor decomposition](TensorVoronoi6.ipynb)
#   1. Computing the decomposition of a tensor
#   2. Under the hood: Voronoi's first reduction
#   3. Voronoi's first reduction in lower dimensions
#
#
# ### B. Generalized acuteness
#
#  * I. [Finslerian norms and the Stern-Brocot tree](SternBrocot.ipynb)
#   1. Stencils and the Stern-Brocot tree
#   2. Angular measure associated with a norm
#
#
#  * II. [Riemannian norms and the Voronoi vectors](VoronoiVectors.ipynb)
#   1. Stencil and acuteness
#   2. Two dimensional construction
#   3. Three dimensional construction
#
#
#  * III. [Norms defined by a Hooke tensor](SeismicNorm.ipynb)
#   1. Geometric distortion
#   2. Convexity of the constraint
#   3. Computation of a frame of reference
#   4. GPU acceleration of the TTI projection
#
#
# ### C. Automatic differentiation
#
#  * I. [Dense automatic differentiation, and geodesic shooting](Dense.ipynb)
#   1. A word on Hamilton's equations
#   2. Automatic differentiation basics
#   3. Solving ordinary differential equations
#   4. Shooting geodesics
#
#
#  * II. [Sparse automatic differentiation](Sparse.ipynb)
#   1. A word on functional optimization
#   2. Sparse automatic differentiation
#   3. Composition of Dense and Sparse AD
#   4. Differentiating extrema using the envelope theorem
#
#
#  * III. [Reverse automatic differentiation](Reverse.ipynb)
#   1. Generating variables and requesting an expression's derivatives
#   2. Linear operators and their adjoints
#   3. Non-linear operators
#   4. Format of variables
#
#
#  * IV. [Known bugs and incompatibilities](ADBugs.ipynb)
#   1. Matrix multiplication and inversion
#   2. In place modifications and aliasing
#   3. CPU/GPU generic programming
#
#
# ### D. Domain representation
#
#  * I. [Subsets of $R^d$](SubsetRd.ipynb)
#   1. Domain description
#   2. Boundary-adapted finite differences
#   3. Mock boundary conditions
#
#
#  * II. [Finite differences, interpolation](FiniteDifferences.ipynb)
#   1. Degenerate elliptic finite differences
#   2. Higher order finite differences
#   3. Composite finite differences
#   4. Interpolation
#   5. Functions associated with an AD variable
#
#
# ### E. Convex functions and convex bodies
#
#  * I. [Minkowski and Meissner's problems](Meissner.ipynb)
#   1. Introduction
#   2. Two dimensional convex bodies
#   3. Geometry of three dimensional convex bodies
#   4. Optimization over three dimensional convex bodies

# In[1]:


#import sys; sys.path.append("..") # Allow imports from parent directory
#from Miscellaneous import TocTools; print(TocTools.displayTOCs('Algo'))