Important: Please read the installation page for details about how to install the toolboxes. $\newcommand{\dotp}[2]{\langle #1, #2 \rangle}$ $\newcommand{\enscond}[2]{\lbrace #1, #2 \rbrace}$ $\newcommand{\pd}[2]{ \frac{ \partial #1}{\partial #2} }$ $\newcommand{\umin}[1]{\underset{#1}{\min}\;}$ $\newcommand{\umax}[1]{\underset{#1}{\max}\;}$ $\newcommand{\uargmin}[1]{\underset{#1}{argmin}\;}$ $\newcommand{\norm}[1]{\|#1\|}$ $\newcommand{\abs}[1]{\left|#1\right|}$ $\newcommand{\choice}[1]{ \left\{ \begin{array}{l} #1 \end{array} \right. }$ $\newcommand{\pa}[1]{\left(#1\right)}$ $\newcommand{\diag}[1]{{diag}\left( #1 \right)}$ $\newcommand{\qandq}{\quad\text{and}\quad}$ $\newcommand{\qwhereq}{\quad\text{where}\quad}$ $\newcommand{\qifq}{ \quad \text{if} \quad }$ $\newcommand{\qarrq}{ \quad \Longrightarrow \quad }$ $\newcommand{\ZZ}{\mathbb{Z}}$ $\newcommand{\CC}{\mathbb{C}}$ $\newcommand{\RR}{\mathbb{R}}$ $\newcommand{\EE}{\mathbb{E}}$ $\newcommand{\Zz}{\mathcal{Z}}$ $\newcommand{\Ww}{\mathcal{W}}$ $\newcommand{\Vv}{\mathcal{V}}$ $\newcommand{\Nn}{\mathcal{N}}$ $\newcommand{\NN}{\mathcal{N}}$ $\newcommand{\Hh}{\mathcal{H}}$ $\newcommand{\Bb}{\mathcal{B}}$ $\newcommand{\Ee}{\mathcal{E}}$ $\newcommand{\Cc}{\mathcal{C}}$ $\newcommand{\Gg}{\mathcal{G}}$ $\newcommand{\Ss}{\mathcal{S}}$ $\newcommand{\Pp}{\mathcal{P}}$ $\newcommand{\Ff}{\mathcal{F}}$ $\newcommand{\Xx}{\mathcal{X}}$ $\newcommand{\Mm}{\mathcal{M}}$ $\newcommand{\Ii}{\mathcal{I}}$ $\newcommand{\Dd}{\mathcal{D}}$ $\newcommand{\Ll}{\mathcal{L}}$ $\newcommand{\Tt}{\mathcal{T}}$ $\newcommand{\si}{\sigma}$ $\newcommand{\al}{\alpha}$ $\newcommand{\la}{\lambda}$ $\newcommand{\ga}{\gamma}$ $\newcommand{\Ga}{\Gamma}$ $\newcommand{\La}{\Lambda}$ $\newcommand{\Si}{\Sigma}$ $\newcommand{\be}{\beta}$ $\newcommand{\de}{\delta}$ $\newcommand{\De}{\Delta}$ $\newcommand{\phi}{\varphi}$ $\newcommand{\th}{\theta}$ $\newcommand{\om}{\omega}$ $\newcommand{\Om}{\Omega}$
This tour uses the Stein Unbiased Risk Estimator (SURE) to optimize the value of parameters in denoising algorithms.
using PyPlot
using NtToolBox
using Autoreload
arequire("NtToolBox")
We consider a simple generative model of noisy images $F = f_0+W$ where $f_0 \in \RR^N$ is a deterministic image of $N$ pixels, and $W$ is a Gaussian white noise distributed according to $\Nn(0,\si^2 \text{Id}_N)$, where $\si^2$ is the noise variance.
The goal of denoising is to define an estimator $h(F)$ of $f_0$ that depends only on $F$, where $h : \RR^N \rightarrow \RR^N$ is a potentially non-linear mapping.
Note that while $f_0$ is a deterministic image, both $F$ and $h(F)$ are random variables (hence the capital letters).
The goal of denoising is to reduce as much as possible the denoising error given some prior knowledge on the (unknown) image $f_0$. A mathematical way to measure this error is the quadratic risk $\EE_W(\norm{h(F) - f_0}^2)$, where the expectation is taken with respect to the distribution of the noise $W$.
In real-life applications, one does not have access to the underlying image $f_0$. In this tour, we however assume that $f_0$ is known, and $f = f_0 + w \in \RR^N$ is generated using a single realization of the noise $w$ drawn from $W$. We define the estimated deterministic image as $h(f)$, which is a realization of the random vector $h(F)$.
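Before moving to images, here is a minimal Monte Carlo sketch of this risk on a toy 1-D signal; the names (f0_toy, h_toy) and the 3-tap averaging denoiser are illustrative assumptions, not the setup used in the rest of the tour.
# Estimate the quadratic risk E_W(||h(F) - f0||^2) by averaging
# over many realizations of the noise W (toy 1-D example).
f0_toy  = sin.(2pi*(0:63)/64)        # deterministic toy signal
sig_toy = 0.1                        # noise level
h_toy   = f -> (circshift(f, 1) .+ f .+ circshift(f, -1))/3  # 3-tap averaging
ntrials = 1000
# (on Julia >= 1.0, mean requires: using Statistics)
risk = mean([sum(abs2, h_toy(f0_toy .+ sig_toy*randn(64)) .- f0_toy)
             for t in 1:ntrials])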
Number $N = n \times n$ of pixels.
n = 128*2
N = n^2
65536
First we load the clean image $f_0 \in \RR^N$, where $N=n \times n$ is the number of pixels.
f0 = load_image("NtToolBox/src/data/hibiscus.png", n)
256×256 Array{Float32,2} (pixel values omitted)
Display it.
figure(figsize = (5, 5))
imageplot(f0)
Standard deviation $\si$ of the noise.
sigma = .08
0.08
Then we add Gaussian noise $w$ to obtain $f=f_0+w$.
using Distributions
f = f0 .+ sigma.*rand(Normal(), n, n)
256×256 Array{Float64,2} (pixel values omitted)
Display the noisy image. Note the use of the clamP function to clamp the result to $[0,1]$, which avoids a loss of contrast in the display.
figure(figsize = (5, 5))
imageplot(clamP(f), @sprintf("Noisy, SNR = %.1f dB", snr(f0, f)))
The Stein Unbiased Risk Estimator (SURE) associated to the mapping $h$ is defined as
$$ \text{SURE}(f) = -N\si^2 + \norm{h(f)-f}^2 + 2\si^2 \text{df}(f) $$where df stands for degree of freedom, and is defined as
$$ \text{df}(f) = \text{div}\, h(f) = \sum_i \pd{h_i}{f_i}(f). $$It was introduced in:
Stein, Charles M. (November 1981). "Estimation of the Mean of a Multivariate Normal Distribution". The Annals of Statistics 9 (6): 1135-1151.
And it has been applied to wavelet-based non-linear denoising in:
Donoho, David L.; Iain M. Johnstone (December 1995). "Adapting to Unknown Smoothness via Wavelet Shrinkage". Journal of the American Statistical Association 90 (432): 1200-1244.
If the mapping $f \mapsto h(f)$ is differentiable outside a set of zero measure (or more generally weakly differentiable), then SURE defines an unbiased estimate of the quadratic risk:
$$ \EE_W(\text{SURE}(F)) = \EE_W( \norm{f_0-h(F)}^2 ). $$This is especially useful, since evaluating SURE does not require knowledge of the clean signal $f_0$ (it does, however, require knowledge of the noise level $\si$).
In practice, one replaces $\text{SURE}(F)$ by its empirical evaluation $\text{SURE}(f)$ on a single realization $f$. One can then minimize $\text{SURE}(f)$ with respect to a parameter $\la$ that parameterizes the denoiser $h=h_\la$.
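As a quick sanity check of this unbiasedness, one can compare the empirical means of the true error and of SURE for a denoiser whose degrees of freedom are known exactly. Below is a minimal sketch for the linear shrinkage $h(f) = a f$, for which $\text{df} = aN$ exactly; all names (N_t, sig_t, a_t, ...) are illustrative.
# Toy check: for h(f) = a*f, df = a*N, so SURE has a closed form.
N_t, sig_t, a_t = 256, 0.1, 0.7
f0_t = randn(N_t)                     # arbitrary "clean" toy signal
errs, sures = Float64[], Float64[]
for t in 1:2000
    ft  = f0_t .+ sig_t*randn(N_t)
    hft = a_t*ft
    push!(errs,  sum(abs2, hft .- f0_t))
    push!(sures, -N_t*sig_t^2 + sum(abs2, hft .- ft) + 2*sig_t^2*a_t*N_t)
end
println(mean(errs), " ≈ ", mean(sures))  # the two means should be close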
We consider a translation-invariant linear denoising operator, which is thus a convolution
$$ h(f) = g \star f $$where $g \in \RR^N$ is a low-pass kernel, and $\star$ denotes the periodic 2-D convolution.
Since we use periodic boundary conditions, we compute the convolution as a multiplication in the Fourier domain:
$$ \forall \om, \quad \hat h(f)(\om) = \hat f(\om) \hat g(\om) $$where $\hat g(\om)$ is the coefficient at frequency $\om$ of the discrete 2-D Fourier transform of $g$ (computed here with fft).
convol = (f, g) -> real(ifft(fft(f) .* fft(g))) # periodic 2-D convolution via FFT
We define a parametric kernel $g_\la$ parameterized by its bandwidth $\la>0$. We use here a Gaussian kernel
$$ g_\la(a) = \frac{1}{Z_\la} e^{ -\frac{\norm{a}^2}{2 \la^2} } $$where $Z_\la$ ensures that $\sum_a g_\la(a) = 1$.
include("ndgrid.jl")
normalize = f -> f./sum(f)
x = [collect(0 : Base.div(n, 2)); collect(-Base.div(n, 2) + 1 : -1)]
(Y, X) = meshgrid(x, x)
g = lambd -> normalize(exp.(-(X.^2 .+ Y.^2)/(2*lambd^2))) # Gaussian kernel g_λ, normalized to sum to 1
Define our denoising operator $h=h_\la$ (we make explicit the dependency on $\la$): $$ h_\la(f) = g_\la \star f. $$
h = (f, lambd) -> convol(f, g(lambd))
Example of denoising result.
lambd = 1.5
figure(figsize = (5, 5))
imageplot(clamP(h(f, lambd)))
For a linear operator, the degree of freedom is equal to the trace of the operator; in our case, it is thus the sum of the Fourier coefficients of the kernel $$ \text{df}_\la(f) = \text{tr}(h_\la) = \sum_{\om} \hat g_\la(\om). $$ Note that we have made explicit the dependency of df on $\la$, and that df$(f)$ does not actually depend on $f$.
df = lambd -> real(sum(fft(g(lambd)))) # trace of h_λ = sum of the Fourier coefficients of g_λ
We can now define the $\text{SURE}_\la$ operator, as a function of $f$, $h(f)$ and $\la$.
SURE = (f ,hf, lambd) -> -N*sigma^2 + vecnorm(hf - f)^2 + 2*sigma^2*df(lambd) # vecnorm is for Frobenius norm
Exercise 1
For a given $\lambda$, display the histogram of the distribution of the quadratic error $\norm{y-h(y)}^2$ and of $\text{SURE}(y)$. Compute these distributions using Monte Carlo simulation (you need to generate many different realizations of the noise $W$). In particular, display the location of the mean of these quantities.
#run -i nt_solutions/denoisingadv_9_sure/exo1
include("Exos\\denoisingadv_9_sure\\exo1.jl") #It takes time to run
# ntrials = 100
# nlaunch = 20
# E0 = []
# E = []
# for i in 1:nlaunch
# f = repeat(f0, inner = [1, 1, ntrials]) + sigma.*rand(Normal(), n, n, ntrials)
# hf = h(f, lambd)
# #quadratic error
# e = sum((hf - repeat(f0, inner = [1, 1, ntrials])).^2, (1, 2))
# E0 = [E0; e[:]]
# #sure error
# e = -N*sigma^2 + sum((hf - f).^2, (1, 2)) + 2*sigma^2*df(lambd)
# E = [E; e[:]]
# end
# v_true = mean(E0)
# v_sure = mean(E)
# a = v_true - 8*stdm(E0, mean(E0))
# b = v_true + 8*stdm(E0, mean(E0))
# t = linspace(a, b, 31)
# mybar = e -> hist(e[collect((i > a) & (i < b) for i in e)], t)
# figure(figsize = (10, 7))
# subplot(2,1,1)
# s = mybar(E0)[2]
# s = [s; 0]
# bar(t[1 : end], s, width = (b-a)/31, color = "darkblue", edgecolor = "white")
# axvline(v_true, color = "red", linewidth = 3)
# subplot(2,1,2)
# s = mybar(E)[2]
# s = [s; 0]
# bar(t[1 : end], s, width = (b-a)/31, color = "darkblue",edgecolor = "white")
# axvline(v_sure, color = "red", linewidth = 3)
# show()
## Insert your code here.
In practice, SURE is used to set the value of $\la$ from a single realization $f=f_0+w$, by minimizing $\text{SURE}_\la(f)$.
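In code this is a one-dimensional grid search. Here is a minimal sketch using the h, df and SURE defined above (the search grid lambd_list is an arbitrary illustrative choice; the exercises below carry this out and display the results):
# Select λ* by minimizing SURE(f) over a grid of bandwidths λ.
lambd_list = linspace(0.3, 3, 30)     # on Julia >= 1.0: range(0.3, 3, length = 30)
E = [SURE(f, h(f, la), la) for la in lambd_list]
lambd_star = lambd_list[indmin(E)]    # indmin is called argmin on Julia >= 0.7
f_star = h(f, lambd_star)             # best denoising according to SURE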
Exercise 2
Compute, for a single realization $f=f_0+w$, the evolution of
$$ E(\la) = \text{SURE}_\la(f) \qandq E_0(\lambda) = \norm{f-h_\la(f)}^2 $$as a function of $\lambda$.
# run -i nt_solutions/denoisingadv_9_sure/exo2
include("Exos\\denoisingadv_9_sure\\exo2.jl")
## Insert your code here.
Exercise 3
Display the best denoising result $h_{\la^*}(f)$ where $$\la^* = \uargmin{\la} \text{SURE}_\la(f) $$
# run -i nt_solutions/denoisingadv_9_sure/exo3
include("Exos\\denoisingadv_9_sure\\exo3.jl")
## Insert your code here.
In order to enhance the denoising results for piecewise regular signals and images, it is possible to use non-linear thresholding in an orthogonal wavelet basis $ \Bb = \{ \psi_m \}_{m} $ where $\psi_m \in \RR^N$ is a wavelet element.
Re-generate a noisy image.
f = f0 + sigma.*rand(Normal(), n, n)
256×256 Array{Float64,2} (pixel values omitted)
The soft-thresholding estimator thus reads $$ h_\la(f) = \sum_m s_\la( \dotp{f}{\psi_m} ) \psi_m \qwhereq s_\la(\al) = \max\pa{0, 1-\frac{\la}{\abs{\al}}} \al. $$ It can be conveniently written as $$ h_\la = \Ww^* \circ S_\la \circ \Ww $$ where $\Ww$ and $\Ww^*$ are forward and inverse wavelet transform $$ \Ww(f) = ( \dotp{f}{\psi_m} )_m \qandq \Ww^*(x) = \sum_m x_m \psi_m, $$ and $ S_\la $ is the diagonal soft thresholding operator $$ S_\la(x) = ( s_\la(x_m) )_m. $$
Define the wavelet transform and its inverse.
h_daub = compute_wavelet_filter("Daubechies", 4)
W = f1 -> NtToolBox.perform_wavortho_transf(f1, 0, +1, h_daub) # forward transform (qualified with NtToolBox. because perform_wavortho_transf is not exported)
Ws = x -> NtToolBox.perform_wavortho_transf(x, 0, -1, h_daub) # inverse transform (TODO: review the functions ported from signal.jl, especially perform_wavortho_transf)
Display the wavelet transform $\Ww(f_0)$ of the original image.
figure(figsize = (10,10))
plot_wavelet(W(f0), 1)
show()
Define the soft thresholding operator.
S = (x, lambd) -> max.(0, 1 .- lambd ./ max.(1e-9, abs.(x))) .* x; # soft thresholding s_λ, applied coefficient-wise
Define the denoising operator.
h = (f1, lambd) -> Ws(S(W(f1), lambd))
Example of denoising result.
lambd = 3*sigma/2
figure(figsize = (5, 5))
imageplot(clamP(h(f, lambd)))
Since $\Ww$ is an orthogonal transform, one has $$ \text{df}(f) = \text{div}( S_\la )( \Ww(f) ) = \sum_m s_\la'( \dotp{f}{\psi_m} ) = \norm{\Ww(h(f))}_0 $$ where $ s_\la' $ is the derivative of the 1-D function $s_\la$, and $\norm{\cdot}_0$ is the $\ell^0$ pseudo-norm $$ \norm{x}_0 = \abs{ \enscond{m}{x_m \neq 0} }. $$
To summarize, the degree of freedom is equal to the number of non-zero coefficients in the wavelet coefficients of $h(f)$.
df = (hf, lambd) -> sum(abs.(W(hf)) .> 1e-8) # number of nonzero wavelet coefficients
We can now define the SURE operator, as a function of $f, h(f), \lambda$.
SURE = (f, hf, lambd) -> -N*sigma^2 + vecnorm(hf - f)^2 + 2*sigma^2*df(hf, lambd)
Exercise 4
For a given $\lambda$, display the histogram of the distribution of the quadratic error $\norm{y-h(y)}^2$ and of $\text{SURE}(y)$. Compute these distributions using Monte Carlo simulation (you need to generate many different realizations of the noise $W$). In particular, display the location of the mean of these quantities. Hint: you can do the computation directly in the wavelet domain, i.e. consider that the noise is added to the wavelet coefficients.
# run -i nt_solutions/denoisingadv_9_sure/exo4
## Insert your code here.
Exercise 5
Compute, for a single realization $f=f_0+w$, the evolution of
$$ E(\la) = \text{SURE}_\la(f) \qandq E_0(\lambda) = \norm{f-h_\la(f)}^2 $$as a function of $\lambda$.
# run -i nt_solutions/denoisingadv_9_sure/exo5
include("Exos\\denoisingadv_9_sure\\exo5.jl")
## Insert your code here.
Exercise 6
Display the best denoising result $h_{\la^*}(f)$ where $$\la^* = \uargmin{\la} \text{SURE}_\la(f) $$
# run -i nt_solutions/denoisingadv_9_sure/exo6
include("Exos\\denoisingadv_9_sure\\exo6.jl")
# Note: the estimate h(f) passed to the snr function is computed with W and Ws, which rely on perform_wavortho_transf
SNR = 19.7 dB
## Insert your code here.
To improve the result of soft thresholding, it is possible to threshold blocks of coefficients.
We define a partition $ \{1,\ldots,N\} = \cup_k b_k $ of the set of wavelet coefficient indexes. The block thresholding is defined as
$$ h_\la(f) = \sum_k \sum_{m \in b_k} a_\la( e_k ) \dotp{f}{\psi_m} \psi_m \qwhereq e_k = \sum_{m \in b_k} \abs{\dotp{f}{\psi_m}}^2, $$where we use the James-Stein attenuation threshold $$ a_\la(e) = \max\pa{ 0, 1 - \frac{\la^2}{e} }. $$
The block size $q$.
q = 4
4
A function to extract blocks.
include("ndgrid.jl")
(X, Y, dX, dY) = ndgrid(1:q:n-q+1, 1:q:n-q+1, 0:q-1, 0:q-1)
# I = X + dX + (Y + dY - 1).*n
# I
for i in 1:Base.div(n, q)
for j in Base.div(n, q)
I[i,j, :, :] = transpose(I[i,j, :, :])
end
end
blocks = fw -> fw[I]
A function to reconstruct an image from blocks.
function assign(M, I, H)
    M_temp = copy(M) # copy so the input array is not mutated
    M_temp[I] = H
    return reshape(M_temp, n, n)
end
unblock = H -> assign(zeros(n,n), I, H)
Compute the average energy of each block, and duplicate.
function energy(H)
    # replace each block by its mean squared coefficient, duplicated over the block
    H_tmp = copy(H)
    for i in 1:Base.div(n, q)
        for j in 1:Base.div(n, q)
            H_tmp[i, j, :, :] = mean(H_tmp[i, j, :, :].^2).*ones(q, q)
        end
    end
    return H_tmp
end
Threshold the blocks. We use here Stein block thresholding: all values within a block are attenuated by the same factor.
S = (H, lambd) -> max.(1 .- lambd^2 ./ energy(H), 0) .* H # same attenuation factor within each block
Block thresholding estimator $h_\lambda(f)$.
h = (f, lambd) -> Ws(unblock(S(blocks(W(f)), lambd)))
Example of block denoising.
lambd = 1.1*sigma
figure(figsize = (5, 5))
imageplot(clamP(h(f, lambd)))
Since the block thresholding operates in a block-diagonal manner over the wavelet coefficients, its degree of freedom is a sum of the divergences of each block James-Stein operator $$ \text{df}(f) = \sum_{ e_k > \la^2 } \text{tr}( \partial \phi (a_k) ) $$ where $ a_k = (\dotp{f}{\psi_m})_{m \in b_k} $ is the set of coefficients inside a block, which satisfies $\norm{a_k}^2=e_k$, and where $$ \phi(a) = \pa{ 1 - \frac{\la^2}{\norm{a}^2} } a. $$ One can compute explicitly the derivative of $\phi$ $$ \partial \phi(a) = \pa{ 1 - \frac{\la^2}{\norm{a}^2} } \text{Id} + 2 \frac{\la^2}{\norm{a}^2} \Pi_a $$ where $\Pi_a$ is the orthogonal projector on $a$.
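This derivative formula can be checked numerically with finite differences; below is a small standalone sketch, where d plays the role of the block size $q^2$ and all names (la_t, eps_fd, ...) are illustrative.
# Compare tr(∂φ(a)) computed by finite differences with the closed form
# (1 - λ²/‖a‖²)·d + 2λ²/‖a‖², using tr(Id) = d and tr(Π_a) = 1.
la_t, d, eps_fd = 0.5, 16, 1e-6
a = randn(d)
e = sum(abs2, a)                      # ‖a‖²
phi = v -> (1 - la_t^2/sum(abs2, v)) .* v
tr_fd = sum((phi(a .+ eps_fd*((1:d) .== i))[i] - phi(a)[i])/eps_fd for i in 1:d)
tr_th = (1 - la_t^2/e)*d + 2*la_t^2/e
println(tr_fd, " ≈ ", tr_th)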
This gives the following formula for the degree of freedom $$ \text{df}(f) = \sum_{e_k > \la^2} \pa{ q^2 - \frac{\la^2 (q^2-2)}{e_k} }. $$ Note that this degree of freedom differs from that of the soft thresholding: it is not in general an integer.
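A sketch of this computation on top of the block machinery above; df_block is an illustrative name, and it recomputes each block energy as the sum of squares $e_k = \norm{a_k}^2$ matching the formula (note that the energy helper above stores the mean of squares instead).
# Degree of freedom of block thresholding:
# df(f) = Σ_{e_k > λ²} ( q² - λ²(q²-2)/e_k ).
function df_block(f1, lambd)
    H = blocks(W(f1))                 # wavelet coefficients, grouped by block
    dfv = 0.0
    for i in 1:Base.div(n, q), j in 1:Base.div(n, q)
        e_k = sum(H[i, j, :, :].^2)   # block energy ‖a_k‖²
        if e_k > lambd^2              # blocks that are not fully zeroed
            dfv += q^2 - lambd^2*(q^2 - 2)/e_k
        end
    end
    return dfv
end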