In [22]:

```
!python -m pip install numpy scipy matplotlib PyWavelets
```

In [23]:

```
import numpy as np
import matplotlib.pyplot as plt
from scipy.sparse import csc_matrix
from scipy.optimize import linprog, milp, Bounds, LinearConstraint
```

Compressed Sensing (CS, also known as "Compressive Sensing") refers to a set of sparse signal processing techniques for reconstructing signals that have been sampled well below the Nyquist rate. Many image processing and reconstruction applications benefit greatly from faster acquisitions (at the cost of potentially more computationally expensive post-processing). Many natural images and signals, including those in Magnetic Resonance Imaging and astronomy, are sparse (they have few non-zero values), either latently or when transformed into another domain (Sparse MRI, Astronomy). The wavelet, discrete cosine, and finite-difference transforms commonly produce sparse representations of images and signals. JPEG 2000 compression also takes advantage of the fact that many images are sparse in wavelet domains:

In [24]:

```
# example from PyWavelet's documentation: https://pywavelets.readthedocs.io/en/latest/
import pywt
import pywt.data

# Load image
original = pywt.data.camera()

# Wavelet transform of image, and plot approximation and details
titles = ['Approximation', 'Horizontal detail',
          'Vertical detail', 'Diagonal detail']
coeffs2 = pywt.dwt2(original, 'bior1.3')
LL, (LH, HL, HH) = coeffs2

fig = plt.figure(figsize=(12, 3))
for i, a in enumerate([LL, LH, HL, HH]):
    ax = fig.add_subplot(1, 4, i + 1)
    ax.imshow(a, interpolation="nearest", cmap=plt.cm.gray)
    ax.set_title(titles[i], fontsize=10)
    ax.set_xticks([])
    ax.set_yticks([])
fig.tight_layout()
plt.show()
```

In [25]:

```
all_coefficients = np.concatenate((LL, LH, HL, HH))
mag = np.abs(all_coefficients.flatten())
mag /= np.max(mag)
plt.plot(np.sort(mag))
plt.title("Sorted, normalized coefficient magnitudes")
plt.xticks([])
plt.yticks([])
plt.show()
```

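To make the sparsity claim above concrete, here is a small, self-contained sketch (the synthetic image and the Haar wavelet are illustrative choices, not part of the cells above): a piecewise-constant image whose block edges fall on even pixel boundaries is exactly sparse under a single-level Haar transform, and the inverse transform recovers it perfectly from those few non-zero coefficients:

```
import numpy as np
import pywt

# Synthetic piecewise-constant image: every aligned 2x2 block is constant,
# so all Haar detail coefficients (LH, HL, HH) are exactly zero
img = np.zeros((256, 256))
img[64:192, 64:192] = 1.0
img[96:160, 96:160] = 2.0

LL, (LH, HL, HH) = pywt.dwt2(img, 'haar')
flat = np.concatenate([c.ravel() for c in (LL, LH, HL, HH)])
sparsity = np.count_nonzero(flat) / flat.size
print(f"fraction of nonzero coefficients: {sparsity:.4f}")  # 0.0625 for this image

# Reconstruction from the sparse coefficient set is exact
approx = pywt.idwt2((LL, (LH, HL, HH)), 'haar')
rel_err = np.linalg.norm(approx - img) / np.linalg.norm(img)
```

Natural images are only approximately sparse, as the sorted-magnitude plot above shows, but the same principle applies: most coefficients are negligibly small.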

CS differs from classical Nyquist sampling theory in a few important ways (see Eldar et al.):

- Classical sampling theory usually considers infinite-dimensional, continuous-time signals, while CS is usually restricted to finite, `n`-dimensional signals
- CS formulations usually do not sample the signal at specific, periodic points in time as is done classically; measurements also typically include some intentionally random components
- Classical formulations rely on sinc interpolation, whereas CS signal reconstruction often relies on non-linear reconstruction and interpolation methods

Formally, CS can be understood as a method of finding an exact (or close) solution to the highly under-determined system:

$$ Ax = b $$

with $A$ being an $M \times N$ matrix such that $M \ll N$.
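A minimal sketch of such an underdetermined measurement setup (the dimensions, sparsity level, and seed here are illustrative assumptions, not values from this notebook): a random Gaussian sensing matrix maps a long, sparse signal to a much shorter measurement vector:

```
import numpy as np

rng = np.random.default_rng(0)
M, N = 32, 128               # far fewer measurements than unknowns
A = rng.normal(size=(M, N))  # random Gaussian sensing matrix

# A k-sparse signal: only k of the N entries are nonzero
k = 5
x = np.zeros(N)
support = rng.choice(N, size=k, replace=False)
x[support] = rng.normal(size=k)

b = A @ x  # M measurements of the N-dimensional signal
print(A.shape, b.shape)
```

Infinitely many vectors satisfy $Ax = b$ here; CS recovery methods single out the sparse one.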

Often we can solve inverse problems directly using the matrix inverse (i.e., $\hat{x} = A^{-1}b$) as long as $A$ is square and full rank:

In [26]:

```
A = np.random.normal(loc=0, scale=1, size=(128, 128))
x = np.random.rand(128)
b = A @ x

# A is square and (almost surely) full rank, so the system has a unique solution
x_hat = np.linalg.solve(A, b)

plt.plot(x, label="x")
plt.plot(x_hat, ":", label="x_hat")
plt.legend()
plt.title("Full rank reconstruction")
plt.xticks([])
plt.show()
```

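It is worth seeing why this direct approach breaks down when $M \ll N$. A minimal sketch (the dimensions and seed are illustrative assumptions): the minimum-$\ell_2$-norm solution given by the pseudoinverse matches the measurements exactly, yet bears little resemblance to the true sparse signal:

```
import numpy as np

rng = np.random.default_rng(1)
M, N = 32, 128
A = rng.normal(size=(M, N))          # underdetermined: M << N

x = np.zeros(N)                      # true signal is 5-sparse
x[rng.choice(N, size=5, replace=False)] = 1.0
b = A @ x

# Minimum-l2-norm solution: consistent with b, but dense and far from x
x_hat = np.linalg.pinv(A) @ b
residual = np.linalg.norm(A @ x_hat - b)
recovery_error = np.linalg.norm(x_hat - x) / np.linalg.norm(x)
print(f"residual: {residual:.2e}, recovery error: {recovery_error:.2f}")
```

Recovering the sparse $x$ instead requires a prior on sparsity, which is where the optimization machinery imported above (e.g. `linprog`) comes in.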