How to convolve and do nothing at the same time?
The Convolution Series
- Definition of convolution and intuition behind it
- Mathematical properties of convolution
- Convolution property of Fourier, Laplace, and z-transforms
- Identity element of the convolution
- Star notation of the convolution
- Circular vs. linear convolution
- Fast convolution
- Convolution vs. correlation
- Convolution in MATLAB, NumPy, and SciPy
- Deconvolution: Inverse convolution
- Convolution in probability: Sum of independent random variables
Table of Contents
- Justification of the need for an identity element
- Identity element of the discrete convolution
- Identity element of the continuous convolution
- The sifting property
- Signal representation using the delay
For any operation, a very important concept is the neutral or identity element. Adding 0 to any number results in the same number. Multiplying a number by 1 results in the same number. These trivial facts are extensively used to prove numerous theorems of mathematics, especially in engineering. Particularly popular is adding and subtracting a variable or a constant (effectively adding 0) to introduce a desired element in the inspected (in)equality.
More formally, a neutral element or an identity element with respect to a binary operation $\circ$ defined on a set $S$ is an element $e \in S$ such that [1]

$$e \circ a = a \circ e = a \quad \text{for all } a \in S.$$
What is the identity element of convolution?
Why do we need a neutral element?
We often want to represent a “do nothing” operation in our processing, regardless of the domain. Examples of such operations are “add 0” for addition and “multiply by 1” for multiplication, as mentioned in the introduction. Another example is the NOP (“no operation”) instruction of processors used, for instance, for memory alignment.
Imagine that you would like to identify the impulse response of a certain system. As we know from one of the previous articles, the output of an LTI system is the convolution of its impulse response with the input. What if the system does nothing? We need a way to represent its impulse response (Figure 1).
Figure 1. How to represent a system that does not alter our signal at all?
Identity element of the discrete convolution
Let’s focus on the discrete convolution first. We are looking for a discrete signal, let’s denote it by $\delta[n]$, such that for any signal $x[n]$ it holds (according to Equation 1) that

$$(x \ast \delta)[n] = \sum_{k=-\infty}^{\infty} x[k]\,\delta[n-k] = x[n].$$

From the above it is clear that $\delta[n-k]$ should be equal to 1 if $k = n$ and 0 for every other $k$. In this way, we can pick out $x[n]$, and only $x[n]$, untouched from the infinite sum.

If $\delta[n-k] = 1$ for $k = n$, then $\delta[0] = 1$. Thus,

$$\delta[n] = \begin{cases} 1, & n = 0, \\ 0, & n \neq 0. \end{cases} \tag{3}$$

And so we have found our neutral element! The signal defined in Equation 3 is called a unit sample sequence, a discrete-time impulse, or just an impulse. I have also often stumbled upon the name discrete (Dirac) delta (impulse).
The definition in Equation 3 makes sense also from a different perspective. In the first article in the convolution series, we said that convolution in the context of filtering means delaying and scaling the impulse response by the samples of the input signal. If the impulse response consists of a single sample with value 1, convolving a signal with it should yield nothing but the successive input samples, i.e., just the input signal.
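This is easy to verify numerically. Below is a minimal NumPy sketch with an arbitrary example signal (the values are made up for illustration):

```python
import numpy as np

# An arbitrary example signal and the unit impulse delta[n] from Equation 3.
x = np.array([3.0, -1.0, 2.0, 5.0])
delta = np.array([1.0])  # delta[0] = 1; all other samples are 0

# Convolving with the impulse returns the signal unchanged.
y = np.convolve(x, delta)
print(y)  # [ 3. -1.  2.  5.]
```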
Identity element of the continuous convolution
How does the neutral element look in the case of continuous convolution? According to Equation 2, we obtain

$$(x \ast \delta)(t) = \int_{-\infty}^{\infty} x(\tau)\,\delta(t - \tau)\,d\tau = x(t). \tag{4}$$

Are you able to extract the formula for $\delta(t)$ out of this equation? Me neither. So how is our $\delta(t)$ defined, then?
It turns out that there exists no ordinary function satisfying Equation 4. We need another type of entity, called a generalized function or a distribution. Then our $\delta(t)$ is called the Dirac delta function and can be approximated by a unit-area pulse that becomes arbitrarily narrow [1, Eq. 15.33a]

$$\delta(t) = \lim_{\varepsilon \to 0} \delta_\varepsilon(t), \qquad \delta_\varepsilon(t) = \begin{cases} \frac{1}{\varepsilon}, & |t| \leq \frac{\varepsilon}{2}, \\ 0, & \text{otherwise}. \end{cases} \tag{5}$$
How to tackle this definition? I try to think about $\delta(t)$ as a function that is 0 everywhere apart from $t = 0$. At $t = 0$, it tends to infinity like an infinitesimally narrow impulse of infinite height. But that is just an intuition; a rigorous mathematical definition is beyond the scope of this article.
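This intuition can be checked numerically. The rough sketch below (an approximation, not a rigorous construction) replaces $\delta(t)$ with a rectangular pulse of width $\varepsilon$ and height $1/\varepsilon$ and evaluates the convolution-style integral for the example signal $x(t) = \cos(t)$ at $t_0 = 0.5$:

```python
import numpy as np

# Approximate the Dirac delta by a unit-area rectangular pulse of width eps.
def rect_delta(t, eps):
    return np.where(np.abs(t) < eps / 2, 1.0 / eps, 0.0)

t = np.linspace(-5.0, 5.0, 200001)  # fine time grid
dt = t[1] - t[0]
t0 = 0.5

# As eps shrinks, the integral of cos(t) * delta_eps(t - t0) approaches cos(t0).
for eps in (1.0, 0.1, 0.01):
    integral = np.sum(np.cos(t) * rect_delta(t - t0, eps)) * dt
    print(eps, integral)  # tends to cos(0.5) ≈ 0.8776
```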
The Dirac delta function is ubiquitous in mathematics and engineering. It is often used to define empirical probability distributions, i.e., the ones resulting directly from data [3]. Additionally, I have seen it in action when defining the excitation function of partial differential equations (PDEs), e.g., representing the influence of a hammer striking a piano string in physical modeling sound synthesis [4].
The sifting property
The Dirac delta function has a valuable property:

$$\int_{-\infty}^{\infty} x(t)\,\delta(t - t_0)\,dt = x(t_0). \tag{7}$$
Substituting $t_0 = t$ and renaming the integration variable to $\tau$ (which we can do, because $\delta$ is even) yields exactly our desired Equation 4.
The property in Equation 7 is called the sifting property of the $\delta$ function, because the $\delta$ function “sifts” our signal $x(t)$ only to return its value at the point where the argument of $\delta$ is equal to 0.
In the discrete case, the sifting property was shown in action in Equation 2; there we extracted a single element out of the (possibly infinite) sequence.
What happens if we shift the argument of the discrete-time impulse by 1?

$$x[n] \ast \delta[n - 1] = \sum_{k=-\infty}^{\infty} x[k]\,\delta[n - 1 - k] = x[n - 1] \tag{8}$$
Looking at the discrete time instant $n$, the convolution with an argument-shifted impulse, $x[n] \ast \delta[n-1]$, yielded $x[n-1]$, i.e., a sample that was already “known” to us (we are at time $n$, so we already observed $x[n-1]$ at time $n-1$). That is the concept of a unit delay.
By adjusting the argument shift $M$ of $\delta[n - M]$ and convolving the result with a signal, we can obtain an arbitrarily delayed signal $x[n - M]$. If $M < 0$, we can even obtain “samples from the future”, i.e., $x[n + |M|]$. $M$ is called the delay length or simply the delay.
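A short NumPy sketch (with arbitrary example values) shows the delay in action: convolving with $\delta[n - 2]$ shifts the signal by two samples.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])

# delta[n - M]: an impulse shifted by M = 2 samples.
M = 2
delta_shifted = np.zeros(M + 1)
delta_shifted[M] = 1.0

# The convolution delays x by M samples.
y = np.convolve(x, delta_shifted)
print(y)  # [0. 0. 1. 2. 3. 4.]
```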
The concept of the delay and its application in digital signal processing and audio programming is very profound. Delay is an inherent property of any filter, or more generally, any LTI system. You may have stumbled upon the “Delay effect” as an audio plug-in to a digital audio workstation (DAW); the underlying principle relies on delaying the input signal and possibly adding it to the original. Delaying one channel with respect to the other helps to set up panning based on interaural time difference (ITD). Examples of other applications of the delay, just in the domain of audio effects, include artificial reverberation, comb filter, flanger, chorus, and Karplus-Strong synthesis.
In DSP diagrams, the delay by $M$ samples is marked with a box labeled $z^{-M}$ (Figure 2).
Figure 2. Representation of a delay by $M$ samples as a functional block in a DSP diagram.
That is because the $z$-transform of $\delta[n - M]$ is equal to

$$\mathcal{Z}\{\delta[n - M]\} = \sum_{n=-\infty}^{\infty} \delta[n - M]\,z^{-n} = z^{-M}. \tag{9}$$
Notice that Equation 9 could be viewed as an application of the sifting property: from an infinite “stream” of powers $z^{-n}$ we pick out only the one for which $n = M$.
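We can confirm this numerically by evaluating the $z$-transform sum of a shifted impulse at an arbitrary test point $z$ (a quick sanity check, not a symbolic proof):

```python
import numpy as np

M = 3
z = 0.9 * np.exp(1j * 0.7)  # an arbitrary complex test point

# X(z) = sum_n delta[n - M] * z**(-n); only the n = M term survives.
n = np.arange(10)
delta_shifted = (n == M).astype(float)
X = np.sum(delta_shifted * z ** (-n))
print(np.allclose(X, z ** (-M)))  # True
```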
Arranging delays in a series
From the associativity property of the convolution, which we derived in one of the previous articles, it can be inferred that arranging delays in a series results in a delay of length equal to the sum of the individual delay lengths. That is because

$$\delta[n - M_1] \ast \delta[n - M_2] = \delta[n - (M_1 + M_2)]$$
($\delta[k - M_1]\,\delta[n - k - M_2] \neq 0$ only if $k = M_1$ and $n - k = M_2$, which results in $n = M_1 + M_2$).
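A quick NumPy check of this identity with example delay lengths $M_1 = 2$ and $M_2 = 3$:

```python
import numpy as np

def shifted_impulse(M, length):
    """delta[n - M] as a finite array of the given length."""
    d = np.zeros(length)
    d[M] = 1.0
    return d

M1, M2 = 2, 3
combined = np.convolve(shifted_impulse(M1, M1 + 1), shifted_impulse(M2, M2 + 1))

# The result is a single impulse delayed by M1 + M2 samples.
print(np.argmax(combined))  # 5
```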
That means we can stack the delays one after another to increase the delay length (Figure 3).
Figure 3. Appending a delay element to the system results in adding its delay length to the original delay of the system.
Unsurprisingly, the notation in Figure 3 results directly from the convolution property of the $z$-transform, which we discussed in the previous article:

$$\mathcal{Z}\{\delta[n - M_1] \ast \delta[n - M_2]\} = z^{-M_1}\,z^{-M_2} = z^{-(M_1 + M_2)}.$$
What is a signal, really?
Let’s recap once again the convolutional sum of Equation 2 [2, Eq. 2.5]:

$$x[n] = \sum_{k=-\infty}^{\infty} x[k]\,\delta[n - k] \tag{12}$$
Let’s evaluate it for a few concrete values of $n$.
As the value of $n$ changes, the corresponding shift of the delta argument must change as well to make that argument equal to 0 and produce a single result $x[n]$. We could think of this change of $n$ as a change of the delay length.
Let’s now assume that $x$ starts at 0, i.e., $x[n] = 0$ for $n < 0$. Writing down the sum in Equation 12 explicitly yields

$$x[n] = x[0]\,\delta[n] + x[1]\,\delta[n - 1] + x[2]\,\delta[n - 2] + \dots \tag{13}$$
Can you see the beauty of it? Equation 13 already contains all possible samples of the sequence $x[n]$; we just need to delay the impulse properly to retrieve the desired sample. In other words, any discrete-time signal is a convolutional sum: a weighted sum of delayed impulses. Fixing the index $n$ to some concrete value sets the delay lengths accordingly so as to return the signal value for that particular $n$.
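The decomposition in Equation 13 can be sketched in NumPy with an arbitrary example signal: summing the weighted, delayed impulses reconstructs the signal exactly.

```python
import numpy as np

x = np.array([4.0, -2.0, 7.0, 1.0])  # arbitrary example signal
N = len(x)

# Build x[n] as a weighted sum of delayed impulses x[k] * delta[n - k].
reconstruction = np.zeros(N)
for k in range(N):
    delta_k = np.zeros(N)
    delta_k[k] = 1.0  # delta[n - k]
    reconstruction += x[k] * delta_k

print(np.array_equal(reconstruction, x))  # True
```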
In this article, we examined the identity element of the convolution, i.e., $\delta[n]$ for the discrete convolution (Equation 3) and $\delta(t)$ for the continuous convolution (Equation 5). The former is much more easily tractable mathematically.
We introduced the sifting property of the delta impulse and interpreted it as the delay in the context of digital signal processing.
Finally, we looked at a discrete-time signal as a weighted sum of delayed impulses.
[1] I. N. Bronshtein et al., Handbook of Mathematics, 5th Edition, Springer, 2007.
[2] A. V. Oppenheim, R. W. Schafer, Discrete-Time Signal Processing, 3rd Edition, Pearson, 2010.
[3] I. Goodfellow, Y. Bengio, A. Courville, Deep Learning, MIT Press, 2016, https://www.deeplearningbook.org/.
[4] M. Schäfer, Simulation of Distributed Parameter Systems by Transfer Function Models, Ph.D. dissertation, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), 2020.