Important: Please read the installation page for details about how to install the toolboxes. $\newcommand{\dotp}[2]{\langle #1, #2 \rangle}$ $\newcommand{\enscond}[2]{\lbrace #1, #2 \rbrace}$ $\newcommand{\pd}[2]{ \frac{ \partial #1}{\partial #2} }$ $\newcommand{\umin}[1]{\underset{#1}{\min}\;}$ $\newcommand{\umax}[1]{\underset{#1}{\max}\;}$ $\newcommand{\uargmin}[1]{\underset{#1}{argmin}\;}$ $\newcommand{\norm}[1]{\|#1\|}$ $\newcommand{\abs}[1]{\left|#1\right|}$ $\newcommand{\choice}[1]{ \left\{ \begin{array}{l} #1 \end{array} \right. }$ $\newcommand{\pa}[1]{\left(#1\right)}$ $\newcommand{\diag}[1]{{diag}\left( #1 \right)}$ $\newcommand{\qandq}{\quad\text{and}\quad}$ $\newcommand{\qwhereq}{\quad\text{where}\quad}$ $\newcommand{\qifq}{ \quad \text{if} \quad }$ $\newcommand{\qarrq}{ \quad \Longrightarrow \quad }$ $\newcommand{\ZZ}{\mathbb{Z}}$ $\newcommand{\CC}{\mathbb{C}}$ $\newcommand{\RR}{\mathbb{R}}$ $\newcommand{\EE}{\mathbb{E}}$ $\newcommand{\Zz}{\mathcal{Z}}$ $\newcommand{\Ww}{\mathcal{W}}$ $\newcommand{\Vv}{\mathcal{V}}$ $\newcommand{\Nn}{\mathcal{N}}$ $\newcommand{\NN}{\mathcal{N}}$ $\newcommand{\Hh}{\mathcal{H}}$ $\newcommand{\Bb}{\mathcal{B}}$ $\newcommand{\Ee}{\mathcal{E}}$ $\newcommand{\Cc}{\mathcal{C}}$ $\newcommand{\Gg}{\mathcal{G}}$ $\newcommand{\Ss}{\mathcal{S}}$ $\newcommand{\Pp}{\mathcal{P}}$ $\newcommand{\Ff}{\mathcal{F}}$ $\newcommand{\Xx}{\mathcal{X}}$ $\newcommand{\Mm}{\mathcal{M}}$ $\newcommand{\Ii}{\mathcal{I}}$ $\newcommand{\Dd}{\mathcal{D}}$ $\newcommand{\Ll}{\mathcal{L}}$ $\newcommand{\Tt}{\mathcal{T}}$ $\newcommand{\si}{\sigma}$ $\newcommand{\al}{\alpha}$ $\newcommand{\la}{\lambda}$ $\newcommand{\ga}{\gamma}$ $\newcommand{\Ga}{\Gamma}$ $\newcommand{\La}{\Lambda}$ $\newcommand{\Si}{\Sigma}$ $\newcommand{\be}{\beta}$ $\newcommand{\de}{\delta}$ $\newcommand{\De}{\Delta}$ $\newcommand{\phi}{\varphi}$ $\newcommand{\th}{\theta}$ $\newcommand{\om}{\omega}$ $\newcommand{\Om}{\Omega}$
This tour explores denoising of 3-D meshes using linear filtering, heat diffusion and Sobolev regularization.
options(repr.plot.width=3.5, repr.plot.height=3.5)
options(warn=-1) # turns off warnings, to turn on: "options(warn=0)"
library(Matrix)
library(geometry)
library(imager)
library(akima)
library(plot3D)
library(network)
# Importing the toolbox functions
for (f in list.files(path="nt_toolbox/toolbox_general/", pattern="*.R")) {
source(paste("nt_toolbox/toolbox_general/", f, sep=""))
}
for (f in list.files(path="nt_toolbox/toolbox_signal/", pattern="*.R")) {
source(paste("nt_toolbox/toolbox_signal/", f, sep=""))
}
The topology of a triangulation is defined via a set of indices $\Vv = \{1,\ldots,n\}$ that index the $n$ vertices, a set of edges $\Ee \subset \Vv \times \Vv$ and a set of $m$ faces $\Ff \subset \Vv \times \Vv \times \Vv$.
We load a mesh. The set of faces $\Ff$ is stored in a matrix $F \in \{1,\ldots,n\}^{3 \times m}$. The positions $x_i \in \RR^3$, for $i \in \Vv$, of the $n$ vertices are stored in a matrix $X_0 = (x_{0,i})_{i=1}^n \in \RR^{3 \times n}$.
mesh = read_mesh("nt_toolbox/data/elephant-50kv.off") # read the mesh once
X0 = mesh$X0
F = mesh$F0
Number $n$ of vertices and number $m$ of faces.
n = dim(X0)[2]
m = dim(F)[2]
Display the mesh in 3-D.
options(repr.plot.width=3.5, repr.plot.height=3.5)
plot_mesh(X0, F, "grey")
We artificially generate a noisy mesh by random displacement along the normal. We only perform normal displacements because tangential displacements do not impact the geometry of the mesh.
The parameter $\rho>0$ controls the amount of noise.
rho = 0.015
We compute the normals $N = (N_i)_{i=1}^n$ to the mesh, obtained by averaging the normals of the faces adjacent to each vertex.
N = compute_normal(X0, F)
We create a noisy mesh by displacement of the vertices along the normal direction $$ x_i = x_{0,i} + \rho \epsilon_i N_i \in \RR^3 $$ where $\epsilon_i \sim \Nn(0,1)$ is a realization of a Gaussian random variable, and where $N_i \in \RR^3$ is the normal of the mesh for each vertex index $i$.
epsilon = rnorm(n) # one Gaussian realization per vertex
X = X0 + rho*rbind(epsilon, epsilon, epsilon)*N # displace each vertex by rho*epsilon_i along its normal N_i
Display the noisy mesh.
plot_mesh(X, F, "grey")
We define linear operators that compute local averages and differences on the mesh.
First we compute the edges of the mesh, by extracting pairs of indices from the $F$ matrix.
E = cbind(F[1:2,],F[2:3,],rbind(F[3,],F[1,]))
Add the reversed edges. This defines the set of edges $\Ee$ that is stored in a matrix $E \in \{1,\ldots,n\}^{2 \times p}$.
E = unique_columns(cbind(E,rbind(E[2,],E[1,])))
We keep only oriented pairs of indices $(i,j)$ such that $i<j$, to avoid unnecessary computation.
E0 = E[,E[1,]<E[2,]]
This defines a matrix $E_0 \in \{1,\ldots,n\}^{2 \times p_0}$ where $p_0=p/2$.
p0 = dim(E0)[2]
Display statistics of the mesh.
print(paste("#vertices =" , n , ", #faces =" ,m , ", #edges = " , p0))
[1] "#vertices = 24955 , #faces = 49918 , #edges = 74877"
The weight matrix $W$ is the adjacency matrix defined by $$ W_{i,j} = \choice{ 1 \qifq (i,j) \in \Ee, \\ 0 \quad \text{otherwise.} } $$ Since most of the entries of $W$ are zero, we store it as a sparse matrix.
W = sparseMatrix(i=E[1,]+1, j=E[2,]+1, x=1)
Compute the connectivity weight vector $ d \in \NN^n $ $$ d_i = \sum_{j} W_{i,j} $$ i.e. $d_i$ is the number of edges connected to $i$.
d = colSums(W)
Display the statistics of mesh connectivity.
h = hist(d, xlim=c(min(d),max(d)), col="DarkBlue", main="", breaks=5)
Store in sparse diagonal matrices $D$ and $iD$ the operators $D=\text{diag}_i(d_i)$ and $D^{-1} = \text{diag}_i(1/d_i)$ respectively.
D = sparseMatrix(i=seq(from=1, to=n), j=seq(from=1, to=n), x=d)
iD = sparseMatrix(i=seq(from=1, to=n), j=seq(from=1, to=n), x=1/d)
The normalized weight matrix is defined as $$ \tilde W_{i,j} = \frac{1}{d_i} W_{i,j}, $$ and hence $\tilde W = D^{-1} W$.
tW = iD %*% W
It satisfies $$ \forall i , \quad \sum_j \tilde W_{i,j} = 1, $$ i.e. $\tilde W \text{I} = \text{I}$ where $\text{I} \in \RR^n$ is the vector with all entries equal to one.
The operator $\tilde W \in \RR^{n \times n} $, viewed as an operator $\tilde W : \RR^n \rightarrow \RR^n$, can be thought of as a low-pass filter.
The un-normalized Laplacian is, on the contrary, a symmetric high-pass operator $$ L = D-W \in \RR^{n \times n}. $$ It satisfies $L \text{I} = 0$.
L = D - W
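Both properties can be checked numerically: the two quantities below measure the deviation from $\tilde W \text{I} = \text{I}$ and $L \text{I} = 0$, and should be zero up to machine precision.
print(max(abs(rowSums(tW) - 1))) # rows of tW sum to 1
print(max(abs(L %*% rep(1, n)))) # L maps the constant vector to 0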
The gradient operator computes directional derivatives along edges. It can be used to factor the Laplacian operator, although in practice it rarely needs to be computed explicitly for numerical computations; we build it here to check the factorization.
To represent the gradient, we index the set of (oriented) edges $ \Ee_0 = (e_k)_{k=1}^{p_0} $ where each edge is $e_k = (i,j) \in \{1,\ldots,n\}^2$ with $i<j$.
The gradient operator is a matrix $G \in \RR^{p_0 \times n}$ defined as, for all $e_k=(i,j)$ and all $\ell \notin \{i,j\}$, $$ G_{k,i}=1, \quad G_{k,j}=-1, \quad G_{k,\ell}=0. $$
It is stored as a sparse matrix, and can be thought of as a derivative operator $G : \RR^n \rightarrow \RR^{p_0} $ that maps signals defined on vertices to differences located along directed edges.
G = sparseMatrix(i = c(1:p0, 1:p0), # row k holds +1 at vertex i and -1 at vertex j for edge e_k = (i,j)
                 j = c(E0[1,], E0[2,]) + 1,
                 x = c(rep(1, p0), rep(-1, p0)))
Display the non-zero entries of $G$ and $W$.
plot(E[1,]+1, E[2,]+1, col="DarkBlue", xlim=c(0,max(E[1,]+1)), ylim=c(0,max(E[2,]+1)), xlab="", ylab="", type="p", main="W", pch=20, cex=0.5)
plot(rep(1:p0, 2), c(E0[1,], E0[2,])+1, xlab="", ylab="", type="p", cex=0.5,
     col="DarkBlue", main="G", pch=20)
The Laplacian can be factored as follows $$ L = G^* G $$ where $G^*$ is the transposed matrix (i.e. the adjoint operator, which can be thought of as some kind of divergence).
Check numerically that the factorization indeed holds.
err = norm(t(G)%*%G - L)
print(paste("Factorization error (should be 0) =" , err))
[1] "Factorization error (should be 0) = 0"
Note that this factorization shows that $L$ is a positive semi-definite operator, i.e. it satisfies
$$ \dotp{L f}{f} = \norm{G f}^2 \geq 0. $$ If the mesh is connected, then only constant signals $f \in \RR^n$ satisfy $Lf=0$.
Note that this convention is the opposite of the usual convention of differential calculus, in which the Laplacian is a negative operator.
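We can also check the identity $\dotp{L f}{f} = \norm{G f}^2$ on a random test signal (f_test is a throwaway vector introduced only for this check); the printed difference should be negligible.
f_test = rnorm(n)
lhs = sum(as.vector(L %*% f_test) * f_test) # <Lf, f>
rhs = sum(as.vector(G %*% f_test)^2) # ||Gf||^2
print(lhs - rhs)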
A signal defined on the mesh is a vector $f \in \RR^n$, where $f_i \in \RR$ is the value at vertex $1 \leq i \leq n$.
Load a texture image $I$.
M = load_image("nt_toolbox/data/lena.png", n=256)
Compute spherical coordinates $ (\theta_i,\phi_i)$ for each vertex $x_{0,i}$ on the mesh.
v = X0 - Matrix(rowMeans(X0), nrow=dim(X0)[1], ncol=dim(X0)[2])
theta = acos(v[1,]/sqrt(colSums(v**2)))/pi
phi = (atan2(v[2,], v[3,])/pi+1)/2
Interpolate the texture on the mesh.
x = seq(from=0, to=1, by=1/(dim(M)[1]-1))
f = rescale(bicubic(x, x, t(as.matrix(M)), theta, phi)$z)
Display the textured mesh.
options(repr.plot.width=6, repr.plot.height=5)
points3D(matrix(X0[1,]), matrix(X0[2,]), matrix(X0[3,]), axis=FALSE, grid=FALSE, box=FALSE, colkey=FALSE, type="p", pch=20,cex=0.05,
col=as.color(f))
par(new=TRUE)
The operator $\tilde W : \RR^n \rightarrow \RR^n$ can be used to smooth a function $f$, simply by computing $\tilde W f \in \RR^n$.
To further smooth the function, it is possible to iterate this process, by defining $f^{(0)} = f$ and
$$ f^{(\ell+1)} = \tilde W f^{(\ell)}.$$Note that one has $ f^{(\ell)} = \tilde W^{\ell} f, $ but it is preferable to use the iterative algorithm to do the computations.
Exercise 1
Display the evolution of the image on the mesh as the number of iterations increases.
source("nt_solutions/meshproc_3_denoising/exo1.R")
## Insert your code here.
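For reference, here is a minimal sketch of one possible solution (the iteration counts 1, 4, 8 and 16 are arbitrary choices).
f1 = f
for (i in 1:16){
    f1 = as.vector(tW %*% f1) # one filtering step f^(l+1) = tW f^(l)
    if (i %in% c(1, 4, 8, 16)){
        points3D(matrix(X0[1,]), matrix(X0[2,]), matrix(X0[3,]), colkey=FALSE, type="p", pch=20, cex=0.05, col=as.color(f1))
    }
}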
The quality of a noisy mesh is improved by applying local averaging, which removes noise but also tends to smooth features.
The operator $\tilde W : \RR^n \rightarrow \RR^n$ can be used to smooth a function, but it can also be applied to smooth the positions $X \in \RR^{3 \times n} $. Since the positions are stored as rows of the matrix, one applies $\tilde W^*$ (the transposed matrix) on the right side: $$ X^{(0)} = X \qandq X^{(\ell+1)} = X^{(\ell)} \tilde W^*. $$
niter = 5
X1 = X
for (i in (1:niter)){
X1 = t(tW %*% t(X1))
}
We can compute the errors in dB with respect to the clean mesh, using $$ \text{SNR}(X,Y) = -20 \log_{10} \pa{ \norm{X-Y}/\norm{Y} }. $$
pnoisy = snr(X0, X)
pfilt = snr(X0, X1)
print(paste("Noisy =", pnoisy, "dB", "Denoised = " , pfilt," dB" ))
[1] "Noisy = 26.8231946547605 dB Denoised = 39.971730546081 dB"
Display the results.
plot_mesh(data.matrix(X1), F, "grey")
Exercise 2
Determine the optimal number of iterations that maximizes the SNR. Record, for each iteration number $i$, the SNR in $err(i)$.
source("nt_solutions/meshproc_3_denoising/exo2.R")
## Insert your code here.
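A minimal sketch of one possible solution (the cap of 50 iterations is an arbitrary choice):
niter = 50
err = rep(0, niter)
X1 = X
for (i in 1:niter){
    X1 = t(tW %*% t(X1)) # one filtering step on the positions
    err[i] = snr(X0, as.matrix(X1)) # SNR with respect to the clean mesh
}
print(paste("Optimal number of iterations =", which.max(err)))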
Plot the error as a function of the number of iterations.
plot(err, type="l", col="blue", xlab = "Iteration", ylab = "SNR")
Iterative filtering is closely related to heat diffusion. The heat diffusion is a linear partial differential equation (PDE) that computes a continuous denoising result for an arbitrary time $t$. It is thus more precise than simple iterative filtering.
This PDE defines a function $f_t \in \RR^n$ parameterized by the time $t>0$ as $$ \forall t>0, \quad \pd{f_t}{t} = -\tilde L f_t \qandq f_0 = f, $$ where $ \tilde L $ is the normalized Laplacian defined as $$ \tilde L = D^{-1} L = \text{Id}_n - \tilde W. $$
tL = iD %*% L
This PDE is applied to the three components of a 3-D mesh to define a surface evolution $$ \forall t>0, \quad \pd{X_t}{t} = -X_t \tilde L^* \qandq X_0 = X. $$
One can approximate the solution to this PDE using an explicit finite difference in time (explicit Euler scheme) $$ X^{(\ell+1)} = X^{(\ell)} - \tau X^{(\ell)} \tilde L^* = (1-\tau) X^{(\ell)} + \tau X^{(\ell)} \tilde W^* $$ where $0 < \tau < 1$ is a (small enough) time step and $X^{(\ell)}$ is intended to be an approximation of $X_t$ at time $t=\tau \ell$. The smaller $\tau$, the better the approximation.
One can see that with $\tau=1$, one recovers the iterative filtering method.
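This can be checked numerically: with $\tau=1$, one Euler step coincides with one filtering step, so the quantity below should vanish up to machine precision.
print(max(abs( (X - t(tL %*% t(X))) - t(tW %*% t(X)) )))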
Time step $\tau$.
tau = .2
Maximum time of resolution.
Tmax = 40
Number of iterations needed to reach this time.
niter = as.integer(ceiling(Tmax/tau))
Initial solution at time $t=0$.
Xt = X
We use an explicit discretization in time of the PDE. Here is one iteration.
Xt = Xt - tau*t(tL %*% t(Xt))
Exercise 3
Compute the linear heat diffusion. Monitor the denoising SNR $err(l)$ between $X_t$ and $X_0$ at iteration index $l$.
source("nt_solutions/meshproc_3_denoising/exo3.R")