This numerical tour explores the use of Sobolev and TV regularization to perform image deconvolution.
Important: Please read the installation page for details about how to install the toolboxes. $\newcommand{\dotp}[2]{\langle #1, #2 \rangle}$ $\newcommand{\enscond}[2]{\lbrace #1, #2 \rbrace}$ $\newcommand{\pd}[2]{ \frac{ \partial #1}{\partial #2} }$ $\newcommand{\umin}[1]{\underset{#1}{\min}\;}$ $\newcommand{\umax}[1]{\underset{#1}{\max}\;}$ $\newcommand{\umin}[1]{\underset{#1}{\min}\;}$ $\newcommand{\uargmin}[1]{\underset{#1}{argmin}\;}$ $\newcommand{\norm}[1]{\|#1\|}$ $\newcommand{\abs}[1]{\left|#1\right|}$ $\newcommand{\choice}[1]{ \left\{ \begin{array}{l} #1 \end{array} \right. }$ $\newcommand{\pa}[1]{\left(#1\right)}$ $\newcommand{\diag}[1]{{diag}\left( #1 \right)}$ $\newcommand{\qandq}{\quad\text{and}\quad}$ $\newcommand{\qwhereq}{\quad\text{where}\quad}$ $\newcommand{\qifq}{ \quad \text{if} \quad }$ $\newcommand{\qarrq}{ \quad \Longrightarrow \quad }$ $\newcommand{\ZZ}{\mathbb{Z}}$ $\newcommand{\CC}{\mathbb{C}}$ $\newcommand{\RR}{\mathbb{R}}$ $\newcommand{\EE}{\mathbb{E}}$ $\newcommand{\Zz}{\mathcal{Z}}$ $\newcommand{\Ww}{\mathcal{W}}$ $\newcommand{\Vv}{\mathcal{V}}$ $\newcommand{\Nn}{\mathcal{N}}$ $\newcommand{\NN}{\mathcal{N}}$ $\newcommand{\Hh}{\mathcal{H}}$ $\newcommand{\Bb}{\mathcal{B}}$ $\newcommand{\Ee}{\mathcal{E}}$ $\newcommand{\Cc}{\mathcal{C}}$ $\newcommand{\Gg}{\mathcal{G}}$ $\newcommand{\Ss}{\mathcal{S}}$ $\newcommand{\Pp}{\mathcal{P}}$ $\newcommand{\Ff}{\mathcal{F}}$ $\newcommand{\Xx}{\mathcal{X}}$ $\newcommand{\Mm}{\mathcal{M}}$ $\newcommand{\Ii}{\mathcal{I}}$ $\newcommand{\Dd}{\mathcal{D}}$ $\newcommand{\Ll}{\mathcal{L}}$ $\newcommand{\Tt}{\mathcal{T}}$ $\newcommand{\si}{\sigma}$ $\newcommand{\al}{\alpha}$ $\newcommand{\la}{\lambda}$ $\newcommand{\ga}{\gamma}$ $\newcommand{\Ga}{\Gamma}$ $\newcommand{\La}{\Lambda}$ $\newcommand{\si}{\sigma}$ $\newcommand{\Si}{\Sigma}$ $\newcommand{\be}{\beta}$ $\newcommand{\de}{\delta}$ $\newcommand{\De}{\Delta}$ $\newcommand{\phi}{\varphi}$ $\newcommand{\th}{\theta}$ $\newcommand{\om}{\omega}$ 
$\newcommand{\Om}{\Omega}$
using PyPlot
using NtToolBox
using Autoreload
arequire("NtToolBox")
This tour is concerned with the deconvolution problem. The measurements are assumed to be blurred and noisy: $$y=\Phi f_0 + w = h \star f_0 + w$$
where $h$ is a (low-pass) filter and $w$ some noise (here assumed to be white Gaussian).
We consider variational deconvolution methods, which compute a regularized solution through convex optimization: $$f^\star \in \text{argmin}_f \frac{1}{2}\|y-\Phi f\|^2 + \lambda J(f)$$
where $J(f)$ is a prior energy. In this tour we consider a simple L2 prior (the image is assumed to have bounded energy), a Sobolev prior (the image is uniformly smooth), and an approximate total variation prior (the image has edges of bounded perimeter).
Note that the parameter $\lambda$ should be carefully chosen to fit the noise level.
Deconvolution corresponds to removing a blur from an image. We use here a Gaussian blur.
First we load the image to be deconvolved.
n = 256
name = "NtToolBox/src/data/hibiscus.png"
f0 = load_image(name, n)
Initial image $f_0$.
imageplot(f0, "Image f0", [1, 2, 1])
We build a convolution kernel. Since we are going to use Fourier to compute the convolution, we set the center of the kernel in the (1,1) pixel location.
Width $s$ of the kernel, in pixels.
s = 3
3
Define the convolution kernel $h$.
include("ndgrid.jl")
x = [collect(0:n/2 - 1); collect(-n/2:-1)]
(Y, X) = meshgrid(x, x)
h = exp.((-X.^2 - Y.^2)/(2*s^2))
h = h/sum(h)
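As a cross-check, the same kernel construction can be sketched in NumPy (Python is used here only for illustration; `fft_gaussian_kernel` is a hypothetical helper name):

```python
import numpy as np

def fft_gaussian_kernel(n, s):
    """Gaussian kernel of width s, centered at pixel (0,0) for FFT-based convolution."""
    # Same frequency-ordered grid as the tour: 0, ..., n/2-1, -n/2, ..., -1.
    x = np.concatenate((np.arange(0, n // 2), np.arange(-n // 2, 0)))
    X, Y = np.meshgrid(x, x)
    h = np.exp((-X**2 - Y**2) / (2 * s**2))
    return h / h.sum()  # normalize so that the filter preserves the image mean
```

With this ordering the kernel is even-symmetric, so its discrete Fourier transform is real up to numerical precision.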
Useful for later: the Fourier transform of the kernel (it should be real because of the symmetry of $h$).
hF = real(fft(h))
Display the kernel and its transform. We use |fftshift| to center the filter for display.
imageplot(fftshift(h), "Filter", [1, 2, 1])
imageplot(fftshift(hF), "Fourier transform", [1, 2, 2])
We use this shorthand for the filtering. Note that this is a symmetric operator.
Phi = (x, h) -> real(ifft(fft(x) .* fft(h)))
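The same FFT-based filtering can be sketched in NumPy (Python is used here only for illustration; the tour's code is Julia). The operator is symmetric because the Gaussian kernel is even, which the assertion below checks via the inner-product identity $\dotp{\Phi a}{b} = \dotp{a}{\Phi b}$:

```python
import numpy as np

def Phi(x, h):
    """Circular convolution of x with h, computed in the Fourier domain."""
    return np.real(np.fft.ifft2(np.fft.fft2(x) * np.fft.fft2(h)))
```

Because the kernel is stored with its center at pixel (0,0), this computes exactly the periodic convolution $h \star x$ used throughout the tour.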
Apply the filter.
y0 = Phi(f0, h)
Display the filtered observation.
imageplot(y0, "Observation without noise", [1, 2, 2])
Variance $\sigma^2$ of the noise $w$.
sigma = .02
0.02
Add some noise to obtain the measurements $y = \Phi f_0 + w$.
y = y0 .+ randn(n, n).*sigma
Display.
imageplot(clamP(y), "Observation with noise", [1, 2, 2])
imageplot(y, string("Observation, SNR = ", string(round(snr(f0, y),2)), "dB"))
Naive deconvolution is obtained by dividing the Fourier transform of $y$ by $\hat h$: $$\hat f^\star(\omega) = \frac{\hat y(\omega)}{\hat h(\omega)} = \hat f_0(\omega) + \frac{\hat w(\omega)}{\hat h(\omega)}$$
Unfortunately this creates an explosion of the noise, since $\hat h(\omega)$ is very small at high frequencies.
To avoid this explosion, we consider a simple regularization. $$f^{\star} = \text{argmin}_f \: \|y-\Phi f\|^2 + \lambda \|f\|^2$$
Since the filtering is diagonalized over Fourier, the solution is simply computed over the Fourier domain as: $$\hat f^\star(\omega) = \frac{\hat y(\omega) \hat h(\omega)}{ \|\hat h(\omega)\|^2 + \lambda }$$
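For illustration, this closed-form L2 deconvolution fits in a few lines of NumPy (a sketch, not the tour's code; `deconv_l2` is a hypothetical name, and `np.conj` is used for generality even though `hF` is real here). The test below checks that, with noiseless data and a tiny $\lambda$, the formula inverts the blur:

```python
import numpy as np

def deconv_l2(y, h, lam):
    """L2-regularized deconvolution, computed entirely in the Fourier domain."""
    hF = np.fft.fft2(h)
    yF = np.fft.fft2(y)
    # hat f = hat y * conj(hat h) / (|hat h|^2 + lambda)
    return np.real(np.fft.ifft2(yF * np.conj(hF) / (np.abs(hF)**2 + lam)))
```

As $\lambda \to 0$ this tends to the naive inversion (noise explosion); larger $\lambda$ damps the frequencies where $|\hat h|$ is small.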
Useful for later: Fourier transform of the observations.
yF = fft(y)
Select a value for the regularization parameter.
Lambda = 0.02
0.02
Perform the inversion.
fL2 = real(ifft(yF .* hF ./ (abs.(hF).^2 .+ Lambda)))
Display.
imageplot(clamP(fL2), string("L2 deconvolution, SNR = ", string(round(snr(f0, fL2), 2)), "dB"))
Exercise 1: Find the optimal solution fL2 by testing several values of $\lambda$.
# run -i nt_solutions/inverse_2_deconvolution_variational/exo1
include("Exos\\inverse_2_deconvolution_variational\\exo1.jl")
Display optimal result.
imageplot(clamP(fL2), string("L2 deconvolution, SNR = ", string(round(snr(f0, fL2), 2)), "dB") )
L2 regularization did not perform any denoising. To remove some noise, we can penalize the high frequencies using Sobolev regularization (quadratic growth).
The Sobolev prior reads (note the equivalence between the spatial domain and the Fourier domain) $$J(f) = \sum_x \|\nabla f(x)\|^2 = \sum_{\omega} S(\omega) \|\hat f(\omega)\|^2 $$ where $S(\omega)=\|\omega\|^2$.
Since this prior can be written over the Fourier domain, one can compute the solution to the deblurring with Sobolev prior simply with the Fourier coefficients: $$\hat f^\star(\omega) = \frac{\hat y(\omega) \hat h(\omega)}{ \|\hat h(\omega)\|^2 + \lambda S(\omega) }$$
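A NumPy sketch of this Sobolev deconvolution (illustrative only; `deconv_sobolev` is a hypothetical name). Since $S(0)=0$, the mean of the image is untouched, which the test checks on noiseless data:

```python
import numpy as np

def deconv_sobolev(y, h, lam):
    """Sobolev-regularized deconvolution in the Fourier domain."""
    n = y.shape[0]
    x = np.concatenate((np.arange(0, n // 2), np.arange(-n // 2, 0)))
    X, Y = np.meshgrid(x, x)
    S = (X**2 + Y**2) * (2 / n)**2          # Sobolev weights S(omega), rescaled as in the tour
    hF = np.fft.fft2(h)
    yF = np.fft.fft2(y)
    return np.real(np.fft.ifft2(yF * np.conj(hF) / (np.abs(hF)**2 + lam * S)))
```

Compared with the plain L2 formula, the penalty $\lambda S(\omega)$ grows with the frequency, so high frequencies (and hence noise) are damped more strongly.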
Compute the Sobolev prior penalty |S| (rescaled to $[0,1]$).
S = (X.^2 + Y.^2) .* (2/n).^2
Regularization parameter:
Lambda = 0.2
0.2
Perform the inversion.
fSob = real(ifft(yF .* hF ./ (abs.(hF).^2 .+ Lambda.*S)))
Display the result.
imageplot(clamP(fSob), string("Sobolev deconvolution, SNR = ", string( round(snr(f0, fSob), 2) ), "dB") )
Exercise 2: Find the optimal solution fSob by testing several values of $\lambda$.
# run -i nt_solutions/inverse_2_deconvolution_variational/exo2
include("Exos\\inverse_2_deconvolution_variational\\exo2.jl")
Display optimal result.
imageplot(clamP(fSob), string("Sobolev deconvolution, SNR = ", string( round(snr(f0, fSob), 2) ), "dB") )
Sobolev regularization performs denoising, but it also tends to blur the edges, thus producing poor results on cartoon images.
The TV prior is able to better reconstruct sharp edges. It reads: $$J(f) = \sum_x \| \nabla f(x)\|$$
With respect to the Sobolev energy, it simply corresponds to measuring the L1 norm of the gradient magnitude instead of its squared L2 norm, thus dropping the square in the functional.
Unfortunately, the TV functional $J(f)$ is not a smooth function of the image $f$. Minimizing it thus requires advanced convex optimization methods.
An alternative is to replace the absolute value by a smooth absolute value. The smoothed TV norm reads: $$J(f) = \sum_x \sqrt{\|\nabla f(x)\|^2+\varepsilon^2}$$
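A minimal NumPy sketch of this smoothed TV energy, using forward differences with periodic boundary conditions (an assumption for illustration; the tour's `grad` may use different boundary handling):

```python
import numpy as np

def grad(f):
    """Forward-difference gradient with periodic boundary; output shape (2, n, n)."""
    return np.stack((np.roll(f, -1, axis=0) - f,
                     np.roll(f, -1, axis=1) - f))

def tv_smoothed(f, epsilon):
    """Smoothed TV: J(f) = sum_x sqrt(|grad f(x)|^2 + epsilon^2)."""
    g = grad(f)
    return np.sum(np.sqrt(np.sum(g**2, axis=0) + epsilon**2))
```

Note that a constant image has smoothed TV equal to $n^2 \varepsilon$ rather than $0$: the smoothing adds a small bias, which vanishes as $\varepsilon \to 0$.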
Smoothing parameter $\epsilon$ for the TV norm:
epsilon = 0.4*1e-2
0.004
When $\epsilon$ gets close to zero, the smoothed energy becomes closer to the original total variation, but the optimization becomes more difficult. When $\epsilon$ becomes large, the smoothed energy becomes closer to the Sobolev energy, thus blurring the edges.
Unfortunately, this prior is non-quadratic, and cannot be expressed over the Fourier domain. One thus needs to use an iterative scheme, such as gradient descent, to approximate the solution.
An iteration of the gradient descent reads: $$f^{(k+1)} = f^{(k)} - \tau \left( h \star (h \star f^{(k)} - y) + \lambda \text{Grad} J(f^{(k)}) \right)$$
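The full descent loop can be sketched in NumPy (illustrative only; names are hypothetical), using the standard expression $\text{Grad} J(f) = -\text{div}\pa{\nabla f / \sqrt{\|\nabla f\|^2+\varepsilon^2}}$ for the gradient of the smoothed TV, with forward/backward differences and periodic boundaries as an assumption:

```python
import numpy as np

def grad(f):
    """Forward differences, periodic boundary; shape (2, n, n)."""
    return np.stack((np.roll(f, -1, axis=0) - f, np.roll(f, -1, axis=1) - f))

def div(g):
    """Backward differences, periodic boundary; negative adjoint of grad."""
    return (g[0] - np.roll(g[0], 1, axis=0)) + (g[1] - np.roll(g[1], 1, axis=1))

def tv_deconv(y, h, lam, epsilon, tau, niter):
    """Gradient descent on 1/2 |y - h*f|^2 + lam * J_eps(f)."""
    Phi = lambda x: np.real(np.fft.ifft2(np.fft.fft2(x) * np.fft.fft2(h)))
    f = y.copy()
    for _ in range(niter):
        g = grad(f)
        norm = np.sqrt(np.sum(g**2, axis=0) + epsilon**2)
        grad_J = -div(g / norm)                    # gradient of the smoothed TV
        f -= tau * (Phi(Phi(f) - y) + lam * grad_J)  # h is symmetric, so Phi* = Phi
    return f
```

With $\tau$ chosen below the stability bound given next, each iteration decreases the energy.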
Regularization parameter.
Lambda = 0.06
0.06
The value of $\tau$, the step size, should be smaller than twice the inverse of the Lipschitz constant of the gradient of the functional to be minimized, hence: $$ \tau< \frac{2}{1 + \lambda 8/\varepsilon }.$$
tau = 1.9 / (1 + Lambda * 8 / epsilon)
0.015702479338842973
Initialization.
fTV = y
256×256 Array{Float64,2} (array values omitted)
Number of iterations (quite a large number is required).
niter = 300
300
We first check that the discretized grad and $-\text{div}$ operators are adjoint to each other.
a = rand(n,n)
b = rand(n, n, 2)
dotp = (x, y) -> sum(x[:].*y[:])
print(string("Should be 0: ", string(dotp(grad(a),b) + dotp(a, NtToolBox.div(b)))) ) #We use NtToolBox.div instead of div because otherwise there will be a conflict with the function div of package Base.
Should be 0: -5.684341886080802e-14
The gradient of the smoothed TV energy is: $$\text{Grad} J(f) = -\text{div}\left( \frac{\nabla f}{ \sqrt{\|\nabla f\|^2+\varepsilon^2} } \right)$$
Compute the gradient of the smoothed TV functional.
repeat3 = x -> cat(3, x, x) # stack two copies along dimension 3, to match the two components of the gradient
Gr = grad(fTV)
d = sqrt(epsilon^2 .+ sum(Gr.^2, 3))[:, :]
G = -NtToolBox.div(Gr ./ repeat3(d) ) #We use NtToolBox.div instead of div because otherwise there will be a conflict with the function div of package Base.
256×256 Array{Float64,2} (array values omitted)
Compute the TV norm, useful to keep track of its decay through the iterations.
tv = sum(d)
print(string("Smoothed TV norm of the image: ", string(round(tv,2))) )
Smoothed TV norm of the image: 2481.38
Perform a step of gradient descent for the inversion.
e = Phi(fTV, h) - y
fTV = fTV - tau*(Phi(e, h) + Lambda*G)
256×256 Array{Float64,2} (array values omitted)
Exercise 3: Perform the deblurring by a gradient descent. Keep track of the function being minimized.
# run -i nt_solutions/inverse_2_deconvolution_variational/exo3
include("Exos\\inverse_2_deconvolution_variational\\exo3.jl")
Display the result.
imageplot(clamP(fTV))
Explore the different values of $\lambda$ to find the optimal solution. Display the SNR as a function of $\lambda$.
# run -i nt_solutions/inverse_2_deconvolution_variational/exo4
# include("Exos\\inverse_2_deconvolution_variational\\exo4.jl")
niter = 300
lambda_list = linspace(1e-6, .01, 20)
tau = 1.9 / (1 + maximum(lambda_list) * 8 / epsilon)
fBest = y
fTV = y
err = zeros(length(lambda_list), 1)
for it in 1 : length(lambda_list)
    Lambda = lambda_list[it]
    for i in 1 : niter
        # Compute the gradient of the smoothed TV functional.
        Gr = grad(fTV)
        d = sqrt(epsilon^2 + sum(Gr.^2, 3))
        G = -NtToolBox.div(Gr ./ repeat3(d))
        # Gradient descent step.
        e = Phi(fTV, h) - y
        fTV = fTV - tau .* (Phi(e, h) + Lambda .* G)
    end
    err[it] = snr(f0, fTV)
    if err[it] > snr(f0, fBest)
        fBest = fTV
    end
end
clf()
plot(lambda_list, err)
axis("tight")
xlabel(L"\lambda")
ylabel("SNR")
fTV = fBest
256×256 Array{Float64,2} (array values omitted)
Display the result.
imageplot(clamP(fBest), string("TV deconvolution, SNR = ", string( round(snr(f0, fBest), 2) ), "dB") )