This Jupyter notebook is part of a collection of notebooks in the bachelor's module Signals and Systems, Communications Engineering, Universität Rostock. Please direct questions and suggestions to Sascha.Spors@uni-rostock.de.
It was shown previously that the convolution is an important operation in the theory of signals and linear time-invariant (LTI) systems. The linear convolution of two discrete signals $s[k]$ and $g[k]$ is defined as
\begin{equation} \sum_{\kappa = -\infty}^{\infty} s[\kappa] \cdot g[k - \kappa] = s[k] * g[k] \end{equation}where $*$ is a common short-hand notation for the convolution. The general properties of the linear convolution are discussed, followed by a geometrical interpretation of the operation.
For the discrete signals $s[k]$, $g[k]$, $h[k] \in \mathbb{C}$, the convolution has the following properties:
The Dirac impulse is the identity element of the convolution \begin{equation} s[k] * \delta[k] = s[k] \end{equation}
The convolution is commutative \begin{equation} s[k] * g[k] = g[k] * s[k] \end{equation}
The convolution is associative \begin{equation} \left( s[k] * g[k] \right) * h[k] = s[k] * \left( g[k] * h[k] \right) \end{equation}
The convolution is distributive \begin{equation} s[k] * \left( g[k] + h[k] \right) = s[k] * g[k] + s[k] * h[k] \end{equation}
Multiplication with a scalar $a \in \mathbb{C}$ \begin{equation} a \cdot \left( s[k] * g[k] \right) = \left( a \cdot s[k] \right) * g[k] = s[k] * \left( a \cdot g[k] \right) \end{equation}
The first property is a consequence of the sifting property of the Dirac impulse; the second to fifth properties can be proven from the definition of the convolution.
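These properties can also be checked numerically for finite-length sequences. The following sketch uses np.convolve and arbitrary example sequences; the names s, g, h and delta are chosen only for this check.
import numpy as np

s = np.array([1.0, 2.0, -1.0, 0.5])  # arbitrary example sequences
g = np.array([0.5, -1.0, 2.0])
h = np.array([1.0, 0.0, -0.5])
delta = np.array([1.0])              # discrete Dirac impulse
a = 3.0                              # arbitrary scalar

# identity element
assert np.allclose(np.convolve(s, delta), s)
# commutativity
assert np.allclose(np.convolve(s, g), np.convolve(g, s))
# associativity
assert np.allclose(np.convolve(np.convolve(s, g), h), np.convolve(s, np.convolve(g, h)))
# distributivity (g and h have equal length)
assert np.allclose(np.convolve(s, g + h), np.convolve(s, g) + np.convolve(s, h))
# multiplication with a scalar
assert np.allclose(a * np.convolve(s, g), np.convolve(a * s, g))
assert np.allclose(a * np.convolve(s, g), np.convolve(s, a * g))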
The convolution can be interpreted in a graphical manner. This provides valuable insights into its calculation and allows one to estimate the result. The calculation of the linear convolution
\begin{equation} y[k] = x[k] * h[k] = \sum_{\kappa = -\infty}^{\infty} x[\kappa] \cdot h[k - \kappa] \end{equation}can be decomposed into four subsequent steps:
substitute $k$ by $\kappa$ in both $x[k]$ and $h[k]$,
time-reverse $h[\kappa]$ (reflection at vertical axis),
shift $h[-\kappa]$ by $k$ to yield $h[k - \kappa]$, i.e. a shift to the right for $k>0$ or a shift to the left for $k<0$,
check for which $k$ the time-reversed and shifted $h[k - \kappa]$ overlaps with $x[\kappa]$ and evaluate the convolution sum for all relevant $k$ to yield $y[k]$ (a short numerical sketch is given below)
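For a single time instant $k$, these steps reduce to summing the products $x[\kappa] \cdot h[k - \kappa]$ over all $\kappa$ for which both signals are non-zero. A minimal numerical sketch with two arbitrary short causal sequences (the names x_ex and h_ex are used only here):
import numpy as np

x_ex = np.array([1.0, 2.0, 3.0])  # arbitrary example: x[k] for k = 0, 1, 2
h_ex = np.array([1.0, 0.5])       # arbitrary example: h[k] for k = 0, 1

k = 2  # time instant for which the convolution sum is evaluated
y_k = sum(x_ex[kap] * h_ex[k - kap]
          for kap in range(len(x_ex)) if 0 <= k - kap < len(h_ex))
print(y_k)                          # 4.0
print(np.convolve(x_ex, h_ex)[k])   # same value obtained with np.convolve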
The graphical interpretation of the convolution is illustrated by means of the following example.
Example
The procedure is illustrated with the signals
\begin{align} h[k] &= \epsilon[k] \cdot e^{- \frac{k}{2}} \\ x[k] &= \frac{3}{4} \text{rect}_N[k] \end{align}for $N=6$. Before proceeding, some helper functions and the signals are defined.
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
def heaviside(k):
    # Heaviside (unit step) sequence: one for k >= 0, zero otherwise
    return np.where(k >= 0, 1.0, 0.0)

def rect(k, N):
    # rectangular signal of length N: one for 0 <= k < N, zero otherwise
    return np.where((0 <= k) & (k < N), 1.0, 0.0)

def x(k):
    # input signal x[k] = 3/4 * rect_6[k]
    return 3/4 * rect(k, 6)

def h(k):
    # impulse response h[k] = eps[k] * exp(-k/2)
    return heaviside(k) * np.exp(- k/2)
def plot_signals(k, x, h, xlabel, hlabel, klabel):
plt.figure(figsize=(8, 4))
plt.stem(k, x, linefmt='C0-', markerfmt='C0o', label=xlabel)
plt.stem(k, h, linefmt='C1-', markerfmt='C1o', label=hlabel)
plt.xlabel(klabel)
plt.legend()
plt.ylim([0, 1.5])
Now let's compute and plot the signals $x[k]$ and $h[k]$
k = np.arange(-20, 21)
plot_signals(k, x(k), h(k), r'$x[k]$', r'$h[k]$', r'$k$')
The first step is to substitute $k$ by $\kappa$ in both $x[k]$ and $h[k]$ to yield $x[\kappa]$ and $h[\kappa]$. Note that the horizontal axis of the plot now represents $\kappa$, which serves as the summation variable of the convolution sum
kappa = np.arange(-20, 21)
x1 = x(kappa)
h1 = h(kappa)
plot_signals(kappa, x1, h1, r'$x[\kappa]$', r'$h[\kappa]$', r'$\kappa$')
The second step is to time-reverse $h[\kappa]$ to yield $h[-\kappa]$
h2 = h(-kappa)
plot_signals(kappa, x1, h2, r'$x[\kappa]$', r'$h[-\kappa]$', r'$\kappa$')
In the third step the impulse response $h[-\kappa]$ is shifted by $k$ samples to yield $h[k - \kappa]$. The shift is performed to the right for $k>0$ and to the left for $k<0$.
For the fourth step it is often useful to split the calculation of the result according to the overlap between $h[k-\kappa]$ and $x[\kappa]$. For the given signals three different cases may be considered
The first case, no overlap, is illustrated for $k= - 5$
h3 = h(-5 - kappa)
plot_signals(kappa, x1, h3, r'$x[\kappa]$', r'$h[-5 -\kappa]$', r'$\kappa$')
From this it becomes clear that the convolution result for the first case is given as
\begin{equation} y[k] = 0 \qquad \text{for } k < 0 \end{equation}The second case, partial overlap, is illustrated for $k = 3$
h4 = h(3 - kappa)
plot_signals(kappa, x1, h4, r'$x[\kappa]$', r'$h[3 -\kappa]$', r'$\kappa$')
Hence, for the second case the convolution sum degenerates to
\begin{equation} y[k] = \frac{3}{4} \sum_{\kappa=0}^{k} e^{-\frac{k - \kappa}{2}} \qquad \text{for } 0 \leq k < 6 \end{equation}The third case, full overlap, is illustrated for $k = 10$
h5 = h(10 - kappa)
plot_signals(kappa, x1, h5, r'$x[\kappa]$', r'$h[10 -\kappa]$', r'$\kappa$')
For the third case the convolution sum degenerates to
\begin{equation} y[k] = \frac{3}{4} \sum_{\kappa=0}^{5} e^{-\frac{k - \kappa}{2}} \qquad \text{for } k \geq 6 \end{equation}The overall result is composed of the three individual results. As an alternative, and in order to plot the result, the convolution is evaluated numerically
def y(k):
    # numerical evaluation of the convolution; for the symmetric index vector k
    # used here, mode='same' keeps the result aligned with k
    return np.convolve(x(k), h(k), mode='same')
plt.stem(k, y(k), linefmt='C2-', markerfmt='C2o')
plt.xlabel(r'$k$')
plt.ylabel(r'$y[k]$')
plt.ylim([0, 2])
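The piecewise results derived for the three cases can be cross-checked against this numerical evaluation. The helper y_analytic below is a hypothetical name introduced only for this check; it re-uses the index vector k and the function y(k) defined above.
def y_analytic(k):
    # piecewise result derived from the graphical interpretation above
    y = np.zeros(len(k))
    for i, ki in enumerate(k):
        if ki < 0:        # no overlap
            y[i] = 0
        elif ki < 6:      # partial overlap
            y[i] = 3/4 * np.sum(np.exp(-(ki - np.arange(0, ki + 1)) / 2))
        else:             # full overlap
            y[i] = 3/4 * np.sum(np.exp(-(ki - np.arange(0, 6)) / 2))
    return y

print(np.allclose(y_analytic(k), y(k)))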
The entire process is illustrated in the following animation. The upper plot shows the two sequences $x[\kappa]$ and $h[k-\kappa]$ whose products are summed in the convolution sum, the lower plot shows the result $y[k] = x[k] * h[k]$ of the convolution. The red dot in the lower plot indicates the particular time instant $k$ for which the result of the convolution sum is computed. The time instant $k$ is varied in the animation.
from animation import animate_discrete_convolution
plt.rcParams['animation.html'] = 'jshtml'
kappa = np.arange(-5, 15)
anim = animate_discrete_convolution(x, h, y, k, kappa)
anim
The length of a discrete signal $x[k]$ is defined as the total number of samples between the first sample which is not zero and the last sample which is not zero, both samples included. A finite-length signal is a signal whose length is finite. The Dirac impulse $\delta[k]$ is, for instance, a finite-length signal of length one.
The convolution of two finite-length signals is of practical importance since the convolution can only be evaluated numerically for finite-length signals. Any infinite-length signal can be truncated to a finite length by multiplying it with the rectangular signal $\text{rect}_N[k]$. It is hence sufficient to consider the convolution of two rectangular signals of length $N \in \mathbb{N}$ and $M \in \mathbb{N}$
\begin{equation} x[k] = \text{rect}_N[k] * \text{rect}_M[k] \end{equation}in order to gain insight into the convolution of two arbitrary finite-length signals. By following the above geometrical interpretation of the linear convolution, the result for $N \leq M$ can be found as
\begin{equation} x[k] = \begin{cases} 0 & \text{for } k < 0 \\ k+1 & \text{for } 0 \leq k < N \\ N & \text{for } N \leq k < M \\ N+M-1-k & \text{for } M \leq k < N+M-1\\ 0 & \text{for } k \geq N+M-1 \end{cases} \end{equation}The convolution of two rectangular signals results in a finite-length signal. The length of the signal is $N+M-1$. This result can be generalized to the convolution of two arbitrary finite-length signals by following the above reasoning: the convolution of two finite-length signals of lengths $N$ and $M$, respectively, results in a finite-length signal of length $N+M-1$.
For two causal signals $s[k]$ and $g[k]$ of finite length $N$ and $M$ the convolution reads
\begin{equation} s[k] * g[k] = \sum_{\kappa = 0}^{N-1} s[\kappa] \cdot g[k - \kappa] = \sum_{\kappa = 0}^{M-1} s[k - \kappa] \cdot g[\kappa] \end{equation}for $0 \leq k < N+M-1$. The computation of each output sample by the first sum requires up to $N$ multiplications and $N-1$ additions. The numerical complexity for the computation of all $N+M-1$ output samples is therefore in the order of $\mathcal{O}(N \cdot M)$, i.e. $\mathcal{O}(N^2)$ for $M$ in the order of $N$.
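The direct evaluation of this sum can be sketched as a pair of nested loops; conv_direct is a hypothetical helper name used only for this illustration, with np.convolve as a reference. The inner loop performs up to $N$ multiplications for each of the $N+M-1$ output samples.
import numpy as np

def conv_direct(s, g):
    # direct evaluation of the convolution sum for two causal finite-length signals
    N, M = len(s), len(g)
    y = np.zeros(N + M - 1)
    for k in range(N + M - 1):     # N+M-1 output samples
        for kappa in range(N):     # up to N multiplications per output sample
            if 0 <= k - kappa < M:
                y[k] += s[kappa] * g[k - kappa]
    return y

s = np.random.uniform(size=8)
g = np.random.uniform(size=8)
print(np.allclose(conv_direct(s, g), np.convolve(s, g)))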
Example
The convolution of two rectangular signals $x[k] = \text{rect}_N[k] * \text{rect}_M[k]$ of length $N$ and $M$ is computed in the following. The resulting signal is plotted for illustration.
N = 7
M = 15
x = np.convolve(np.ones(N), np.ones(M), mode='full')
plt.stem(x)
plt.xlabel('$k$')
plt.ylabel('$x[k]$')
plt.ylim([0, N+.5])
plt.xlim([-5, 25])
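For a quick consistency check, the numerical result can be compared with the piecewise expression derived above; k_out and x_ref are hypothetical names used only for this check.
k_out = np.arange(N + M - 1)
# piecewise expression from above, evaluated for 0 <= k < N+M-1 (assuming N <= M)
x_ref = np.where(k_out < N, k_out + 1, np.where(k_out < M, N, N + M - 1 - k_out))

print(len(x) == N + M - 1)       # the result has length N+M-1
print(np.allclose(x, x_ref))     # values match the piecewise formula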
Exercise
Copyright
The notebooks are provided as Open Educational Resource. Feel free to use the notebooks for your own educational purposes. The text is licensed under Creative Commons Attribution 4.0, the code of the IPython examples under the MIT license. Please attribute the work as follows: Lecture Notes on Signals and Systems by Sascha Spors.