Deconvolution: Inverse Convolution

Posted by Jan Wilczek on July 23, 2021 · 13 mins read

Can we invert the effect of convolution?


The Convolution Series

  1. Definition of convolution and intuition behind it
  2. Mathematical properties of convolution
  3. Convolution property of Fourier, Laplace, and z-transforms
  4. Identity element of the convolution
  5. Star notation of the convolution
  6. Circular vs. linear convolution
  7. Fast convolution
  8. Convolution vs. correlation
  9. Convolution in MATLAB, NumPy, and SciPy
  10. Deconvolution: Inverse convolution
  11. Convolution in probability: Sum of independent random variables

Deconvolution Definition

Given the output of the convolution operation $y[n]$

$$y[n] = x[n] \ast h[n], \quad (1)$$

where $x[n]$ is the input signal and $h[n]$ is the impulse response of a linear time-invariant (LTI) system, we may want to estimate

  1. $x[n]$ given $h[n]$ (the input signal given the system),
  2. $h[n]$ given $x[n]$ (the system given the input signal, so-called system identification), or
  3. both $x[n]$ and $h[n]$ (the input signal and the system simultaneously, so-called blind deconvolution).

While tasks 1 and 2 are somewhat similar thanks to the commutativity of convolution (identify one signal given two others), task 3 poses a significant challenge that is an active area of research.

This article contains a brief description of various methods used to accomplish deconvolution. By no means is this list complete nor are the explanations in-depth. Nevertheless, it will give you an overview of the methodologies used and when to use them.

But before I give you a tour of the deconvolution methods, I will present two vivid use cases of deconvolution to motivate the topic.

Example Application of Non-Blind Deconvolution

A simple example of a deconvolution application is the frequency response measurement of a loudspeaker. The excitation signal in this case cannot be an impulse because it could damage the loudspeaker. A frequently used alternative is an exponential sweep: a signal whose frequency rises exponentially over time. Having recorded the response of the loudspeaker to the exponential sweep excitation, we need to deconvolve it to obtain just the loudspeaker’s impulse response. The loudspeaker is our unknown $h[n]$, $x[n]$ is the exponential sweep, and $y[n]$ is the recorded response. Thus, it is a system identification problem.

Example Application of Blind Deconvolution

Imagine a smart home system. Whenever one of the users (residents) speaks up, the system needs to record that speech, perform automatic speech recognition, understand the message conveyed by the speech, and ultimately decide what action to take. All of these tasks are significantly more difficult when the recorded speech is reverberant, i.e., bears the impact of room acoustics. The system knows neither the speech utterance nor the room impulse response (which varies with the user and smart device positions). Thus, it needs to use statistical or machine learning algorithms to infer which is which and improve the quality of the recorded speech.

In this case, $x[n]$ is the speech signal, $h[n]$ is the room’s impulse response, and $y[n]$ is the signal recorded by the smart device.

A Catalogue of Deconvolution Methods

What follows is a quick tour of some of the deconvolution approaches where either the input signal or the system (but not both) is known. If you are interested in the topic, you are welcome to follow the sources specified at the end.

Here’s a navigable table of contents:

  1. Deconvolution Using Frequency-Domain Division
    1. Deconvolution Functions in Numerical Software
  2. Deconvolution Via (Pseudo-)Inverse of the Convolution Matrix
  3. Wiener Filtering (Wiener Deconvolution)
  4. Deconvolution Using Complex Cepstrum Liftering

Deconvolution Using Frequency-Domain Division

As we know from the convolution property of the z-transform, a convolution of time-domain signals is equivalent to a multiplication of their z-transforms. Thus, why not try to deconvolve the signals in the z-domain?

In the following, we will assume that capital letters denote the z-transforms of the time-domain signals. We have

$$y[n] = x[n] \ast h[n], \quad (2)$$
$$Y(z) = X(z)H(z). \quad (3)$$

With this formulation, we can easily obtain the desired time-domain signal $h[n]$ if we know $x[n]$

$$h[n] = \mathcal{Z}^{-1} \left\lbrace H(z) \right\rbrace = \mathcal{Z}^{-1} \left\lbrace \frac{Y(z)}{X(z)} \right\rbrace. \quad (4)$$

There are two caveats to this approach:

  1. $X(z)$ must not be zero for any $z$ (we cannot divide by 0),
  2. The inverse z-transform in Equation (4) must exist.
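
To make Equation (4) concrete, here is a minimal NumPy sketch of frequency-domain deconvolution for the system identification case (the signals and their values are made up for this illustration). The DFT stands in for the z-transform evaluated on the unit circle; zero-padding both signals to the length of y makes the circular convolution implied by the DFT match the linear convolution in Equation (2).

import numpy as np

# Toy input signal and impulse response; y simulates the recorded output.
x = np.array([1.0, 0.5, -0.3, 0.2, 0.1])
h = np.array([1.0, -0.4, 0.08])
y = np.convolve(x, h)                          # y = x * h (linear convolution)

# Divide in the frequency domain, as in Equation (4).
N = len(y)
H_est = np.fft.fft(y, N) / np.fft.fft(x, N)    # caveat 1: fails if X has zeros
h_est = np.real(np.fft.ifft(H_est))[:len(h)]

print(np.allclose(h_est, h))                   # True for this well-behaved x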

With that in mind, we can present two numerical software functions that use the above approach.

Deconvolution Functions in Numerical Software

Deconvolution in numerical software is achieved through polynomial division in the z-domain, as in Equation (4).

In SciPy and Matlab, we have two very similar functions for deconvolution:

quotient, remainder = scipy.signal.deconvolve(signal, divisor)
[quotient, remainder] = deconv(signal, divisor)

In these functions, the divisor is deconvolved from signal to obtain the quotient. The remainder is the signal that could not be properly deconvolved (typically because of numerical precision). For these operations, the following identities should hold

signal = convolve(divisor, quotient) + remainder
signal = conv(divisor, quotient) + remainder

Keep in mind the caveats above: if the divisor signal has zeros in its z-transform, then the application of the above functions will lead to noise amplification and, ultimately, worthless output.
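
As a quick, hedged usage sketch of the SciPy variant (with made-up signals), the identity above can be verified directly:

import numpy as np
from scipy.signal import deconvolve

divisor = np.array([1.0, 0.5, 0.25])                  # known signal, e.g., the input
signal = np.convolve(divisor, [2.0, -1.0, 0.5])       # observed output

quotient, remainder = deconvolve(signal, divisor)
print(quotient)                                       # approximately [2., -1., 0.5]
print(np.allclose(signal, np.convolve(divisor, quotient) + remainder))  # True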

Deconvolution Via (Pseudo-)Inverse of the Convolution Matrix

If we write the convolution in Equation (1) in matrix form, it should be easier to reason about it. First, let’s write $x[n]$ in vector form

$$\pmb{x}[n] = [x[n], x[n-1], \dots, x[n-M-N+2]]^\top, \quad (5)$$

where $M$ is the length of the impulse response $h$ and $N$ is the length of the observation window $\pmb{y}[n]$.

Second, we can write

$$\pmb{y}[n] = \mathbf{H} \pmb{x}[n], \quad (6)$$

where $\mathbf{H}$ is an $N \times (M+N-1)$ convolution matrix which has a Toeplitz structure (identical elements along each diagonal of the matrix)

$$\mathbf{H} = \begin{bmatrix} h[0] & h[1] & \dots & h[M-1] & 0 & \dots & 0 \\ 0 & h[0] & h[1] & \dots & h[M-1] & \dots & 0 \\ \vdots & & \ddots & & & & \vdots \\ 0 & & \dots & h[0] & h[1] & \dots & h[M-1] \end{bmatrix}. \quad (7)$$

To ensure proper understanding, let’s write out the elements of $\pmb{y}[n]$ analogously to $\pmb{x}[n]$

$$\pmb{y}[n] = [y[n], y[n-1], \dots, y[n-N+1]]^\top. \quad (8)$$

Since there is no additive noise, we can obtain $\pmb{x}[n]$ or its estimate $\tilde{\pmb{x}}[n]$ by

  • inverting $\mathbf{H}$ if $\mathbf{H}$ is square, full-rank, and not ill-conditioned

    $$\pmb{x}[n] = \mathbf{H}^{-1} \pmb{y}[n], \quad (9)$$

  • computing the Moore-Penrose pseudoinverse of $\mathbf{H}$ if $\mathbf{H}$ cannot be easily inverted

    $$\tilde{\pmb{x}}[n] = (\mathbf{H}^\top\mathbf{H} + \delta \mathbf{I})^{-1}\mathbf{H}^\top \pmb{y}[n], \quad (10)$$

    where $\delta$ is a small constant added for numerical stability (so-called Tikhonov regularization). This is the least-squares solution minimizing the squared difference between the samples of $\pmb{y}[n]$ and $\mathbf{H}\tilde{\pmb{x}}[n]$.
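
Below is a small NumPy/SciPy sketch of the regularized least-squares solution in Equation (10). The signals, their lengths, and the value of δ are illustrative assumptions; for simplicity, the samples are ordered forward in time rather than as in Equations (5) and (8), which only changes the layout of the matrix, not the method.

import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(0)
x = rng.standard_normal(50)           # input signal (known here only to check the result)
h = np.array([1.0, -0.6, 0.2])        # known impulse response
y = np.convolve(x, h)                 # observed output of length len(x) + len(h) - 1

# Build the Toeplitz convolution matrix so that y = H @ x.
H = toeplitz(np.r_[h, np.zeros(len(x) - 1)], np.r_[h[0], np.zeros(len(x) - 1)])

delta = 1e-8                          # Tikhonov regularization constant (assumed value)
x_est = np.linalg.solve(H.T @ H + delta * np.eye(H.shape[1]), H.T @ y)

print(np.allclose(x_est, x, atol=1e-4))   # True for this well-conditioned example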

Wiener Filtering (Wiener Deconvolution)

What if $y[n]$ in Equation (1) is not a perfect convolution of $x[n]$ and $h[n]$ but contains some additive noise? Such situations typically occur if $y[n]$ is measured and real-world phenomena influence the result. We could write it as

$$y[n] = x[n] \ast h[n] + w[n], \quad (11)$$

where $w[n]$ denotes the noise signal that is not correlated with either $x[n]$ or $h[n]$; the cross-correlation between $w[n]$ and either $x[n]$ or $h[n]$ is 0. That means that the noise is not similar to either $x$ or $h$ in any significant way.

Assuming we know $h[n]$, we can obtain an estimate $\tilde{x}[n]$ of $x[n]$ by using a Wiener deconvolution filter [3]. This filter is defined in the frequency domain, so we have

$$\tilde{x}[n] = \mathcal{F}^{-1} \{\tilde{X}(j\omega)\} = \mathcal{F}^{-1} \{G(j\omega)Y(j\omega)\}, \quad (12)$$

where $\tilde{X}(j\omega)$ and $Y(j\omega)$ denote the Fourier transforms of $\tilde{x}[n]$ and $y[n]$ respectively, and $G(j\omega)$ is the inverse filter specified in the frequency domain.

The formula for $G(j\omega)$ is

$$G(j\omega) = \frac{H^*(j\omega)S_{XX}(j\omega)}{|H(j\omega)|^2 S_{XX}(j\omega) + S_{WW}(j\omega)}, \quad (13)$$

where $(\cdot)^*$ denotes the complex conjugate, $H(j\omega)$ is the Fourier transform of $h[n]$, and $S_{XX}(j\omega)$ and $S_{WW}(j\omega)$ are the mean power spectral densities (PSDs) of $x[n]$ and $w[n]$ respectively. (If you don’t know what a PSD is, don’t worry; think of it as a probabilistic version of the Fourier transform. In practice, it can be estimated by suitably averaging the squared magnitude of the short-time Fourier transform.)

I think of the quotient in Equation (13) as the fraction of the clean signal present in the output $Y(j\omega)$. After all, if there were no noise, then $S_{WW}(j\omega) = 0$, and accordingly $G(j\omega) = \frac{1}{H(j\omega)}$, which brings Equation (12) back to Equation (4).

Surprisingly, I wasn’t able to find the derivation of Equation 13; it probably can be found in the Wiener’s original works from the 1940s.
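
To make Equations (12) and (13) concrete, here is a rough NumPy sketch of Wiener deconvolution with a known impulse response. The signals, the noise level, and especially the PSDs (taken here as idealized constants rather than estimated from data) are assumptions made for the example.

import numpy as np

rng = np.random.default_rng(1)
n_samples = 1024
x = rng.standard_normal(n_samples)                   # clean input signal
h = np.array([1.0, 0.5, 0.25, 0.125])                # known impulse response
noise = 0.1 * rng.standard_normal(n_samples + len(h) - 1)
y = np.convolve(x, h) + noise                        # noisy observation, Equation (11)

N = len(y)
H = np.fft.fft(h, N)
Y = np.fft.fft(y)

# Idealized PSDs: white input of unit variance, white noise of variance 0.01.
# In practice, both would have to be estimated (e.g., by averaging STFT frames).
S_xx = np.ones(N)
S_ww = 0.01 * np.ones(N)

G = np.conj(H) * S_xx / (np.abs(H) ** 2 * S_xx + S_ww)   # Equation (13)
x_est = np.real(np.fft.ifft(G * Y))[:n_samples]          # Equation (12)

print(np.mean((x_est - x) ** 2))     # should be small relative to the signal power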

Deconvolution Using Complex Cepstrum Liftering

The complex cepstrum of a discrete signal $x[n]$ is defined as a stable sequence $\hat{x}[n]$ whose z-transform is [1]

$$\hat{X}(z) = \log X(z), \quad (14)$$

where $X(z)$ is the z-transform of $x[n]$. Thus, $\hat{x}[n]$ can be expressed as

$$\hat{x}[n] = \frac{1}{2 \pi} \int\limits_{-\pi}^{\pi} \log(X(e^{j\omega}))\, e^{j\omega n}\, d\omega = \frac{1}{2 \pi} \int\limits_{-\pi}^{\pi} \left(\log |X(e^{j\omega})| + j \angle X(e^{j\omega})\right) e^{j\omega n}\, d\omega. \quad (15)$$

As we know from the convolution property of the z-transform, a convolution of time-domain signals is equivalent to a multiplication of their z-transforms. If we apply the logarithm to the product of these transforms, we obtain the sum of the logarithms of the individual transforms. Mathematically speaking, if

$$y[n] = x[n] \ast h[n], \quad (16)$$

then

$$Y(z) = X(z)H(z) \quad (17)$$

and

$$\hat{y}[n] = \hat{x}[n] + \hat{h}[n]. \quad (18)$$

If the nonzero values of $\hat{x}[n]$ and $\hat{h}[n]$ occupy different ranges of the $n$ index (different quefrencies of the cepstrum), we can zero out the elements of $\hat{y}[n]$ corresponding to, for example, $\hat{h}[n]$, and, after computing the inverse cepstrum, obtain the signal $x[n]$.

The operation of manipulating the elements of the cepstrum is called liftering.
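
Here is a simplified NumPy sketch of the idea. It is only an illustration: the toy signals and the cutoff quefrency are assumptions, and a careful complex-cepstrum implementation must additionally handle phase unwrapping and the linear-phase component more robustly than shown here.

import numpy as np

def complex_cepstrum(signal, n_fft):
    # Simplified Equation (15): log magnitude plus unwrapped phase.
    spectrum = np.fft.fft(signal, n_fft)
    log_spectrum = np.log(np.abs(spectrum)) + 1j * np.unwrap(np.angle(spectrum))
    return np.fft.ifft(log_spectrum)

def inverse_complex_cepstrum(cepstrum):
    return np.real(np.fft.ifft(np.exp(np.fft.fft(cepstrum))))

n_fft = 1024
x = 0.9 ** np.arange(32)                       # toy "excitation": a short decaying pulse
h = np.zeros(256)
h[0], h[100] = 1.0, 0.5                        # toy "system": direct path plus one echo
y = np.convolve(x, h)                          # observed signal

y_hat = complex_cepstrum(y, n_fft)

# Liftering: keep only the low quefrencies, assuming the pulse's cepstrum lives
# there while the echo contributes peaks at quefrency 100 and its multiples.
cutoff = 50
lifter = np.zeros(n_fft)
lifter[:cutoff] = 1.0
lifter[-cutoff + 1:] = 1.0                     # negative quefrencies
x_est = inverse_complex_cepstrum(y_hat * lifter)[:len(x)]
# x_est approximates the pulse when the two cepstra occupy separate quefrency ranges.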

Deconvolution via complex cepstrum liftering can be used, for example, to extract the glottal excitation from a recording of a human voice. In this case, $x[n]$ is the glottal excitation, $h[n]$ is the vocal tract impulse response (because the vocal tract acts as a filter), and $y[n]$ is the recorded speech signal. Note that this is another example of blind deconvolution (we initially know neither $x[n]$ nor $h[n]$).

Of course, $\hat{x}[n]$ and $\hat{h}[n]$ will typically overlap in the cepstral domain, which makes the task more difficult and renders perfect deconvolution with this method impossible.

Summary

In this article, we discussed four deconvolution techniques, of which one works in the presence of noise. For further details, please refer to the sources below.

Bibliography

[1] Alan V. Oppenheim, Ronald W. Schafer, Discrete-Time Signal Processing, 3rd Edition, Pearson 2010.

[2] Simon Haykin, Adaptive Filter Theory, 5th Edition, Pearson 2014.

[3] Wiener Deconvolution on Wikipedia

[4] scipy.signal.deconvolve documentation

[5] Matlab’s deconv documentation
