Important: Please read the installation page for details about how to install the toolboxes. $\newcommand{\dotp}[2]{\langle #1, #2 \rangle}$ $\newcommand{\enscond}[2]{\lbrace #1, #2 \rbrace}$ $\newcommand{\pd}[2]{ \frac{ \partial #1}{\partial #2} }$ $\newcommand{\umin}[1]{\underset{#1}{\min}\;}$ $\newcommand{\umax}[1]{\underset{#1}{\max}\;}$ $\newcommand{\umin}[1]{\underset{#1}{\min}\;}$ $\newcommand{\uargmin}[1]{\underset{#1}{argmin}\;}$ $\newcommand{\norm}[1]{\|#1\|}$ $\newcommand{\abs}[1]{\left|#1\right|}$ $\newcommand{\choice}[1]{ \left\{ \begin{array}{l} #1 \end{array} \right. }$ $\newcommand{\pa}[1]{\left(#1\right)}$ $\newcommand{\diag}[1]{{diag}\left( #1 \right)}$ $\newcommand{\qandq}{\quad\text{and}\quad}$ $\newcommand{\qwhereq}{\quad\text{where}\quad}$ $\newcommand{\qifq}{ \quad \text{if} \quad }$ $\newcommand{\qarrq}{ \quad \Longrightarrow \quad }$ $\newcommand{\ZZ}{\mathbb{Z}}$ $\newcommand{\CC}{\mathbb{C}}$ $\newcommand{\RR}{\mathbb{R}}$ $\newcommand{\EE}{\mathbb{E}}$ $\newcommand{\Zz}{\mathcal{Z}}$ $\newcommand{\Ww}{\mathcal{W}}$ $\newcommand{\Vv}{\mathcal{V}}$ $\newcommand{\Nn}{\mathcal{N}}$ $\newcommand{\NN}{\mathcal{N}}$ $\newcommand{\Hh}{\mathcal{H}}$ $\newcommand{\Bb}{\mathcal{B}}$ $\newcommand{\Ee}{\mathcal{E}}$ $\newcommand{\Cc}{\mathcal{C}}$ $\newcommand{\Gg}{\mathcal{G}}$ $\newcommand{\Ss}{\mathcal{S}}$ $\newcommand{\Pp}{\mathcal{P}}$ $\newcommand{\Ff}{\mathcal{F}}$ $\newcommand{\Xx}{\mathcal{X}}$ $\newcommand{\Mm}{\mathcal{M}}$ $\newcommand{\Ii}{\mathcal{I}}$ $\newcommand{\Dd}{\mathcal{D}}$ $\newcommand{\Ll}{\mathcal{L}}$ $\newcommand{\Tt}{\mathcal{T}}$ $\newcommand{\si}{\sigma}$ $\newcommand{\al}{\alpha}$ $\newcommand{\la}{\lambda}$ $\newcommand{\ga}{\gamma}$ $\newcommand{\Ga}{\Gamma}$ $\newcommand{\La}{\Lambda}$ $\newcommand{\si}{\sigma}$ $\newcommand{\Si}{\Sigma}$ $\newcommand{\be}{\beta}$ $\newcommand{\de}{\delta}$ $\newcommand{\De}{\Delta}$ $\newcommand{\phi}{\varphi}$ $\newcommand{\th}{\theta}$ $\newcommand{\om}{\omega}$ 
$\newcommand{\Om}{\Omega}$
This tour explores denoising of 3-D meshes using linear filtering, heat diffusion and Sobolev regularization.
from __future__ import division
import numpy as np
import scipy as scp
import pylab as pyl
import matplotlib.pyplot as plt
from nt_toolbox.general import *
from nt_toolbox.signal import *
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
%load_ext autoreload
%autoreload 2
The topology of a triangulation is defined via a set of indices $\Vv = \{1,\ldots,n\}$ that index the $n$ vertices, a set of edges $\Ee \subset \Vv \times \Vv$, and a set of $m$ faces $\Ff \subset \Vv \times \Vv \times \Vv$.
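For concreteness, here is a hypothetical toy triangulation (a tetrahedron, with 0-based Python indices), showing how the faces are stored as columns of a matrix and how candidate edges are extracted from them:

```python
import numpy as np

# Toy triangulation: a tetrahedron with n=4 vertices and m=4 faces.
# Faces are stored as columns of a (3, m) integer matrix.
F_toy = np.array([[0, 0, 0, 1],
                  [1, 1, 2, 2],
                  [2, 3, 3, 3]])

# Candidate edges: the 3 vertex pairs of each face, stacked as columns.
# Shared edges between adjacent faces appear several times.
E_toy = np.hstack((F_toy[[0, 1], :], F_toy[[1, 2], :], F_toy[[2, 0], :]))
print(E_toy.shape)  # (2, 12): 3 pairs per face, with duplicates
```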
We load a mesh. The set of faces $\Ff$ is stored in a matrix $F \in \{1,\ldots,n\}^{3 \times m}$. The positions $x_{0,i} \in \RR^3$, for $i \in \Vv$, of the $n$ vertices are stored in a matrix $X_0 = (x_{0,i})_{i=1}^n \in \RR^{3 \times n}$.
from nt_toolbox.read_mesh import *
X0,F = read_mesh("nt_toolbox/data/elephant-50kv.off")
Number $n$ of vertices and number $m$ of faces.
n = np.shape(X0)[1]
m = np.shape(F)[1]
Display the mesh in 3-D.
from nt_toolbox.plot_mesh import *
plt.figure(figsize=(10,10))
plot_mesh(X0, F)
We artificially generate a noisy mesh by a random displacement along the normal direction. We only perform normal displacements because tangential displacements do not impact the geometry of the mesh.
The parameter $\rho>0$ controls the amount of noise.
rho = 0.015
We compute the normals $N = (N_i)_{i=1}^n$ to the mesh. They are obtained by averaging the normals of the faces adjacent to each vertex.
from nt_toolbox.compute_normal import *
N = compute_normal(X0, F)
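The toolbox function `compute_normal` is used here as a black box. A minimal self-contained sketch of the standard averaging construction (face normals from cross products, accumulated per vertex and renormalized; the exact weighting used by the toolbox may differ) could look like:

```python
import numpy as np

def vertex_normals(X, F):
    """Approximate unit normals at the vertices of a triangle mesh.
    X: (3, n) vertex positions; F: (3, m) face indices (0-based).
    A sketch of the usual averaging construction, not the toolbox code."""
    n = X.shape[1]
    # Un-normalized face normals: cross product of two edge vectors.
    e1 = X[:, F[1]] - X[:, F[0]]
    e2 = X[:, F[2]] - X[:, F[0]]
    fn = np.cross(e1, e2, axis=0)            # shape (3, m)
    # Accumulate each face normal on its 3 vertices.
    N = np.zeros((3, n))
    for k in range(3):
        np.add.at(N.T, F[k], fn.T)
    # Renormalize to unit length (guarding against isolated vertices).
    N /= np.maximum(np.linalg.norm(N, axis=0), 1e-12)
    return N
```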
We create a noisy mesh by displacement of the vertices along the normal direction $$ x_i = x_{0,i} + \rho \epsilon_i N_i \in \RR^3 $$ where $\epsilon_i \sim \Nn(0,1)$ is a realization of a Gaussian random variable, and where $N_i \in \RR^3$ is the normal of the mesh for each vertex index $i$.
from numpy import random
X = X0 + np.tile(rho*random.randn(n), (3,1))*N
Display the noisy mesh.
plt.figure(figsize=(10,10))
plot_mesh(X, F)
We define linear operators that compute local averages and differences on the mesh.
First we compute the indices of the edges that are in the mesh, by extracting pairs of indices from the $F$ matrix.
E = np.hstack((F[[0,1],:],F[[1,2],:],F[[2,0],:]))
Add the reversed edges. This defines the set of edges $\Ee$ that is stored in a matrix $E \in \{1,\ldots,n\}^{2 \times p}$.
from nt_toolbox.unique_columns import *
E = unique_columns(np.hstack((E,E[[1,0],:])))
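`unique_columns` is a small toolbox helper. Assuming NumPy >= 1.13, an equivalent re-implementation (up to column ordering, which `np.unique` sorts lexicographically) is:

```python
import numpy as np

def unique_columns(A):
    # Keep one copy of each distinct column of a 2-D array.
    # np.unique with axis=1 returns the columns sorted lexicographically.
    return np.unique(A, axis=1)

E = np.array([[0, 1, 0, 1],
              [1, 0, 1, 2]])
print(unique_columns(E))  # columns (0,1), (1,0), (1,2), each kept once
```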
We keep only the oriented pairs of indices $(i,j)$ such that $i<j$, to avoid unnecessary computations.
E0 = E[:,E[0,:] < E[1,:]]
This defines a matrix $E \in \{1,\ldots,n\}^{2 \times p_0}$ where $p_0=p/2$.
p0 = np.shape(E0)[1]
Display statistics of the mesh.
print("#vertices = %i, #faces = %i, #edges = %i" %(n,m,p0))
#vertices = 24955, #faces = 49918, #edges = 74877
The weight matrix $W$ is the adjacency matrix defined by $$ W_{i,j} = \choice{ 1 \qifq (i,j) \in \Ee, \\ 0 \quad \text{otherwise.} } $$ Since most of the entries of $W$ are zero, we store it as a sparse matrix.
from scipy import sparse
W = sparse.coo_matrix((np.ones(np.shape(E)[1]),(E[0,:],E[1,:])))
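On a hypothetical toy edge list (a triangle, not the elephant mesh), the same construction yields a symmetric 0/1 adjacency matrix:

```python
import numpy as np
from scipy import sparse

# Toy edge set: triangle 0-1-2, with both orientations of each edge.
E = np.array([[0, 1, 1, 2, 2, 0],
              [1, 0, 2, 1, 0, 2]])
W = sparse.coo_matrix((np.ones(E.shape[1]), (E[0], E[1])))
print(np.allclose(W.toarray(), W.toarray().T))  # True: W is symmetric
d = np.ravel(W.sum(0))
print(d)  # [2. 2. 2.]: each vertex of the triangle has 2 neighbors
```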
Compute the connectivity weight vector $ d \in \NN^n $ $$ d_i = \sum_{j} W_{i,j} $$ i.e. $d_i$ is the number of edges connected to $i$.
d = np.ravel((W.sum(0)))
Display the statistics of mesh connectivity.
h = np.histogram(d, np.arange(np.min(d),np.max(d)+1))
plt.figure(figsize=(10,7))
plt.bar(h[1][:-1]-.5,h[0], color="darkblue", width=1)
plt.show()
Store in sparse diagonal matrices $D$ and $iD$ respectively $D=\text{diag}_i(d_i)$ and $D^{-1} = \text{diag}_i(1/d_i)$.
D = sparse.coo_matrix((d, (np.arange(0,n),np.arange(0,n))))
iD = sparse.coo_matrix((1/d, (np.arange(0,n),np.arange(0,n))))
The normalized weight matrix is defined as $$ \tilde W_{i,j} = \frac{1}{d_i} W_{i,j}, $$ and hence $\tilde W = D^{-1} W$.
tW = iD.dot(W)
It satisfies $$ \forall i , \quad \sum_j \tilde W_{i,j} = 1, $$ i.e. $\tilde W \text{I} = \text{I}$ where $\text{I} \in \RR^n$ is the constant vector with all entries equal to one.
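This row-stochastic property can be checked numerically on a small triangle graph (a hypothetical example, not the loaded mesh):

```python
import numpy as np
from scipy import sparse

# Triangle graph adjacency, with both orientations of each edge.
E = np.array([[0, 1, 1, 2, 2, 0],
              [1, 0, 2, 1, 0, 2]])
W = sparse.coo_matrix((np.ones(E.shape[1]), (E[0], E[1])))
d = np.ravel(W.sum(0))          # vertex degrees
tW = sparse.diags(1 / d).dot(W)  # normalized weights D^{-1} W
print(np.ravel(tW.sum(1)))       # [1. 1. 1.]: each row sums to 1
```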
The operator $\tilde W \in \RR^{n \times n} $, viewed as an operator $\tilde W : \RR^n \rightarrow \RR^n$, can be thought of as a low pass filter.
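To illustrate this low-pass behavior on a hypothetical toy example, one can iterate $\tilde W$ on a signal defined over a small (non-bipartite) cycle graph: repeated local averaging drives the signal toward its mean, killing the high frequencies.

```python
import numpy as np
from scipy import sparse

# Cycle graph C5 (odd length, hence non-bipartite): 2 neighbors per vertex.
n = 5
i = np.arange(n)
rows = np.r_[i, i]
cols = np.r_[(i + 1) % n, (i - 1) % n]
W = sparse.coo_matrix((np.ones(2 * n), (rows, cols)), shape=(n, n))
tW = sparse.diags(1 / np.ravel(W.sum(0))).dot(W)

f = np.array([3.0, -1.0, 4.0, 1.0, -5.0])  # an oscillating signal, mean 0.4
for _ in range(100):
    f = tW.dot(f)  # each step replaces f_i by the average of its neighbors
print(f)           # all entries close to the mean 0.4 of the initial signal
```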
The un-normalized Laplacian, on the contrary, is a symmetric high pass operator $$