Evaluating transfer functions


The article on Z-Transforms introduced a difference equation for discrete stable causal Linear Time Invariant (LTI) systems, which from here on we will refer to as an LTI system, or system for short.

$$ \begin{align} \sum_{k=0}^N a_k\,y[n-k]&= \sum_{k=0}^M b_k\,x[n-k]\quad\Rightarrow \nonumber \\[10mu] a_0y[n]+a_1y[n-1]+a_2y[n-2]+\ldots &= b_0x[n]+b_1x[n-1]+b_2x[n-2]+\ldots \label{eq:diffequation} \end{align}\nonumber $$

The article Discrete Transfer Functions showed us the discrete transfer function \(H(z)\) for causal LTI systems. $$ \begin{align} H(z) &= \frac{b_0+b_1z^{-1}+b_2z^{-2}+\cdots+b_Mz^{-M}}{a_0+a_1z^{-1}+a_2z^{-2}+\cdots+a_Nz^{-N}} \label{eq:tf_polynomial} \\[10mu] &= K\,z^{\small N-\small M}\frac{(z-q_1)(z-q_2)\cdots(z-q_{\small M})}{(z-p_1)(z-p_2)\cdots(z-p_{\small N})},&K=\frac{b_M}{a_N} \label{eq:tf_factors} \end{align} $$

Here we will evaluate the response of discrete transfer functions to sinusoidal inputs, introduce stability criteria and give methods to transfer a response back to the time-domain.

We will follow the notation used in our piece on Z Transforms, where \( \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\ztransform{\lfz{\mathcal{Z}}} \ztransform\) denotes a unilateral Z-transform, equivalent to the more common notation \(\mathfrak{Z}\left\{\,f[n]\,\right\}\), and \(f[n]\) is defined as the sample taken at time \(nT\), or \(f(nT)\). The terms filter and system will be used interchangeably.
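As an aside, the \(a_k\) and \(b_k\) coefficients of the difference equation map directly onto the coefficient vectors that GNU/Octave's filter() function expects. A minimal sketch, with assumed example coefficients:

% assumed example: y[n] - 0.5 y[n-1] = x[n] + x[n-1]
B = [1 1];                   % feedforward coefficients b0, b1
A = [1 -0.5];                % feedback coefficients a0, a1
x = [1 2 3 4 5];             % some input samples
y = filter(B, A, x)          % solves the difference equation sample by sample
% y = 1   3.5   6.75   10.375   14.1875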

Filter Types

In general, the term electronic filters refers to circuits that perform signal processing to remove unwanted frequency components from a signal and/or to enhance wanted components. Examples include enhancing X-ray images at airports and extracting radio signals from far away space probes. An electronic control system typically refers to a circuit that processes one signal into another to give the desired system response. E.g. modern thermostats learn the characteristics of your house, and chemical plants measure fluid levels to control flow pumps. In the context of this writing we refer to filters and systems interchangeably.

In equation \(\eqref{eq:diffequation}\), the \(b_i\) coefficients are called feedforward coefficients, and the \(a_i\) coefficients are called feedback coefficients.

We classify filters based on whether or not they use any previous value of the output, in which case we say they have feedback. Based on this, we classify filters into two groups: finite impulse response and infinite impulse response filters.

Finite Impulse Response

As the name implies, Finite Impulse Response (FIR) filters have a finite response to an input. If the filter order is \(M\), then the maximum delay from input to output is \(M\) samples. In other words, given an impulse input, the output returns to \(0\) after \(M\) samples.

In FIR filters, the filter output does not depend on any previous value of the output and the coefficients \(a_i=0\) for all \(i\gt 0\) in equation \(\eqref{eq:diffequation}\).

The figure below shows the signal flow of a FIR filter, where each \(z^{-1}\) block represents a one-sample delay.

Signal Flow Graph for a FIR filter

  • Output: \(y[n]\) depends on \(x[n-M]\cdots x[n]\)
  • Impulse response: finite duration
  • Coefficients: \(b_i\)
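A minimal GNU/Octave sketch of a FIR filter, using an assumed 4-tap moving average; note that the impulse response indeed returns to \(0\) after \(M=3\) samples:

b = [1 1 1 1] / 4;           % feedforward coefficients b0..b3, order M = 3
x = [1 zeros(1, 9)];         % impulse input
y = filter(b, 1, x)          % no feedback coefficients (a = 1)
% y = 0.25   0.25   0.25   0.25   0   0   0   0   0   0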

Infinite Impulse Response

A filter is said to be recursive when \(a_i\neq 0\) for some \(i\gt 0\). Recursive filters are also called Infinite Impulse Response (IIR) filters.

Signal Flow Graph for an IIR filter, Direct Form I

  • Output: \(y[n]\) depends on \(x[n-M]\ldots x[n]\) and \(y[n-N]\ldots y[n]\)
  • Impulse response: duration depends on the feedback
  • Coefficients: \(a_i, b_i\)
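For comparison, a minimal GNU/Octave sketch of a first-order recursive (IIR) filter with assumed coefficients; its impulse response decays geometrically but never becomes exactly \(0\):

b = 0.1;                     % feedforward coefficient b0
a = [1 -0.9];                % feedback coefficients a0, a1
x = [1 zeros(1, 9)];         % impulse input
y = filter(b, a, x)          % y[n] = 0.9 y[n-1] + 0.1 x[n]
% y = 0.1   0.09   0.081   0.0729   ...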

We may view an IIR filter \(H(z)\) as a series combination of two subsystems \(H_1(z)\) and \(H_2(z)\).

$$ H(z) = H_1(z)\,H_2(z)=H_2(z)\,H_1(z) $$

The commutative property of multiplication allows the order of the subsystems to be reversed. If we draw such a circuit, it becomes apparent that each delay element \(z^{-1}\) sits next to another delay element with the same input. We may then replace each such pair of delay elements by a single delay element with the same input. The resulting signal flow graph is called “Direct Form II”, as depicted below.

Signal Flow Graph, IIR filter, Direct Form II
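A minimal Direct Form II sketch in GNU/Octave, with assumed example coefficients; it keeps a single shared delay line \(w\):

b = [0.2 0.4 0.2];  a = [1 -0.6 0.2];   % assumed example coefficients (a0 = 1)
x = [1 zeros(1, 9)];                    % impulse input
N = length(a) - 1;                      % filter order
w = zeros(N, 1);                        % shared delay line: w[n-1] ... w[n-N]
y = zeros(size(x));
for n = 1:length(x)
  wn   = x(n) - a(2:end) * w;           % feedback taps produce w[n]
  y(n) = b(1) * wn + b(2:end) * w;      % feedforward taps produce y[n]
  w    = [wn; w(1:end-1)];              % shift the delay line
end
% with a(1) = 1 and equal-length b and a, y matches filter(b, a, x)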

Impulse response

The response of a system to any input can be calculated by the time convolution or the frequency product of the impulse response of the system and the input signal.

We derived the Z-Transform for the impulse function \(\delta[n]\) as

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\ztransform{\lfz{\mathcal{Z}}} \delta[n] \ztransform 1 \triangleq \Delta(z)\nonumber \label{eq:impulse} $$

When applying this input \(X(z)=\Delta(z)\) to a filter with transfer function \(H(z)\), the output \(Y(z)\) is

$$ Y(z) = X(z)\,H(z) = 1\,H(z) $$

In other words, the response to an impulse input, is simply the transfer function \(H(z)\) itself. That is not very surprising considering that \(h[n]\) was defined as the response to an impulse input function.

$$ \shaded{ Y(z)=H(z) } $$

Since the transfer function \(H(z)\) equals the Z-transform of the impulse response, these terms are often used interchangeably.
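Numerically, the impulse response is simply the filter's response to a unit impulse. A minimal GNU/Octave sketch with assumed example coefficients (the signal package's impz() does the same):

B = [1 1];  A = [1 -0.5];               % assumed example coefficients
h = filter(B, A, [1 zeros(1, 7)])       % first 8 samples of h[n]
% h = 1   1.5   0.75   0.375   0.1875   0.09375   0.046875   0.0234375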

Frequency response

Discrete Transfer Functions introduced the concept of poles and zeros and their effect on the transfer function. Here we will take it a step further by evaluating specific \(z\) values.

The frequency response of a linear time invariant system is defined as the steady state response to a sinusoidal input. In other words the output after all transients have died out.

As part of the Z-transform, we defined:

$$ z\triangleq\mathrm{e}^{sT} \nonumber $$ where \(s=\sigma+j\omega\)

To find the frequency response, we follow the same methodology as we did for the Continuous Frequency Response and evaluate \(H(z)\) along \(s=j\omega\)

$$ z = \left.\mathrm{e}^{sT}\right|_{s=j\omega}=\mathrm{e}^{j\omega T} \label{eq:zunitcircle} $$

Substitute \(\eqref{eq:zunitcircle}\) in \(\eqref{eq:tf_factors}\)

$$ \begin{align} H(\mathrm{e}^{j\omega T}) &= K\,\mathrm{e}^{j(\small N-\small M)\omega T}\frac{(\mathrm{e}^{j\omega T}-q_1)(\mathrm{e}^{j\omega T}-q_2)\dots(\mathrm{e}^{j\omega T}-q_{\small M})}{(\mathrm{e}^{j\omega T}-p_1)(\mathrm{e}^{j\omega T}-p_2)\dots(\mathrm{e}^{j\omega T}-p_{\small N})},&K=\frac{b_M}{a_N} \label{eq:tf_unitcircle} \end{align} $$
this will only converge when the ROC includes the unit circle where \(|z|=1\).

Expressing equation \(\eqref{eq:tf_unitcircle}\) in polar form, helps us distinguish between the amplitude and phase response

$$ \shaded{ \begin{align} H\left(\mathrm{e}^{j\omega T}\right) &= r\,\mathrm{e}^{j\varphi} \nonumber \\ \text{where}\quad r&=\left|H\left(\mathrm{e}^{j\omega T}\right)\right|&\text{amplitude response} \nonumber \\ \text{and}\quad\varphi&=\angle{H\left(\mathrm{e}^{j\omega T}\right)}&\text{phase response} \nonumber \\ \end{align} } \label{eq:tf_polarform} $$

Remember that in the \(z\)-plane, angular frequencies are shown in normalized form, where the normalized angular frequency \(\omega T\) is the angle with the positive horizontal axis.

Amplitude response

We find the amplitude response as the magnitude \(|H(\mathrm{e}^{j\omega T})|\) when substituting equation \(\eqref{eq:tf_unitcircle}\) in \(\eqref{eq:tf_polarform}\)

$$ \begin{align} \left|H(\mathrm{e}^{j\omega T})\right| =& \left|K\,\mathrm{e}^{j(\small N-\small M)\omega T}\frac{(\mathrm{e}^{j\omega T}-q_1)(\mathrm{e}^{j\omega T}-q_2)\dots(\mathrm{e}^{j\omega T}-q_{\small M})}{(\mathrm{e}^{j\omega T}-p_1)(\mathrm{e}^{j\omega T}-p_2)\dots(\mathrm{e}^{j\omega T}-p_{\small N})}\right|,&K=\frac{b_M}{a_N}\nonumber\\[14mu] =& \left|K\right|\,\left|\mathrm{e}^{j(\small N-\small M)\omega T}\right| \frac{\left|\mathrm{e}^{j\omega T}-q_1\right|\cdot\left|\mathrm{e}^{j\omega T}-q_2\right|\dots\left|\mathrm{e}^{j\omega T}-q_{\small M}\right|} {\left|\mathrm{e}^{j\omega T}-p_1\right|\cdot\left|\mathrm{e}^{j\omega T}-p_2\right|\dots\left|\mathrm{e}^{j\omega T}-p_{\small N}\right|},&K=\frac{b_M}{a_N} \end{align} $$

With \(\left|\mathrm{e}^{j(\small N-\small M)\omega T}\right|=1\), according to Euler’s formula, the amplitude response follows as

$$ \begin{align} \shaded{ \left|H(\mathrm{e}^{j\omega T})\right| = |K|\,\frac{\prod_{i=1}^{M}\left|\mathrm{e}^{j\omega T}-q_i\right| } {\prod_{i=1}^{N}\left|\mathrm{e}^{j\omega T}-p_i\right|}}, & & K=\frac{b_M}{a_N} \end{align} $$

The amplitude response can be visualized with the lengths of the vectors from the poles and zeros to the point \(z\) on the unit circle corresponding to the normalized angular frequency for which the function is evaluated.

The product of the vector lengths from the zeros divided by that of the poles times \(K\) represents the amplitude response for that normalized angular frequency.
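A minimal GNU/Octave sketch of this, with assumed example coefficients padded to equal length so that \(M=N\); the amplitude response from the product of vector lengths matches the direct evaluation of the polynomials (the gain of the factored form is taken here as \(b_0/a_0\), which makes it equal the polynomial form when \(M=N\)):

B = [0.5 0.5];  A = [1 -0.5];               % assumed example coefficients (M = N = 1)
q = roots(B);  p = roots(A);                % zeros and poles
wT = linspace(0, pi, 512);                  % normalized angular frequency
z  = exp(1j*wT);                            % points on the unit circle
mag1 = abs(polyval(B, z) ./ polyval(A, z)); % directly from the polynomials in z
mag2 = abs(B(1)/A(1)) * prod(abs(z - q), 1) ./ prod(abs(z - p), 1);  % vector lengths
% mag1 and mag2 agree to machine precision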

Phase response

Along the same line of thought, the phase response is the angle \(\angle\left(H(\mathrm{e}^{j\omega T})\right)\) when substituting equation \(\eqref{eq:tf_unitcircle}\) in \(\eqref{eq:tf_polarform}\)

$$ \begin{align} \angle H(\mathrm{e}^{j\omega T}) =\,& \angle\left(K\,\mathrm{e}^{j(\small N-\small M)\omega T}\frac{(\mathrm{e}^{j\omega T}-q_1)(\mathrm{e}^{j\omega T}-q_2)\dots(\mathrm{e}^{j\omega T}-q_{\small M})}{(\mathrm{e}^{j\omega T}-p_1)(\mathrm{e}^{j\omega T}-p_2)\dots(\mathrm{e}^{j\omega T}-p_{\small N})}\right),&K=\frac{b_M}{a_N} \nonumber \\ =\,& \angle K+\angle\mathrm{e}^{j(\small N-\small M)\omega T} \nonumber \\ & +\angle\left(\mathrm{e}^{j\omega T}-q_1\right) + \angle\left(\mathrm{e}^{j\omega T}-q_2\right)+\dots +\angle\left(\mathrm{e}^{j\omega T}-q_{\small M}\right) \nonumber \\ & -\angle\left(\mathrm{e}^{j\omega T}-p_1\right) - \angle\left(\mathrm{e}^{j\omega T}-p_2\right) -\dots -\angle\left(\mathrm{e}^{j\omega T}-p_{\small N}\right),&K=\frac{b_M}{a_N} \nonumber \end{align} $$

With \(K\) a positive real-valued scalar, so that \(\angle K=0\), and \(\angle\mathrm{e}^{j\phi}=\phi\) according to Euler’s formula

$$ \begin{align} \angle H(\mathrm{e}^{j\omega T}) =&(N-M)\omega T\nonumber\\ &+\angle\left(\mathrm{e}^{j\omega T}-q_1\right) + \angle\left(\mathrm{e}^{j\omega T}-q_2\right)+\dots +\angle\left(\mathrm{e}^{j\omega T}-q_{\small M}\right) \nonumber \\ &-\angle\left(\mathrm{e}^{j\omega T}-p_1\right) - \angle\left(\mathrm{e}^{j\omega T}-p_2\right) -\dots -\angle\left(\mathrm{e}^{j\omega T}-p_{\small N}\right) \nonumber \end{align} $$

The phase response \(\angle H(\mathrm{e}^{j\omega T})\) follows as

$$ \shaded{\angle H(\mathrm{e}^{j\omega T}) =(N-M)\omega T +\sum_{i=1}^{M}\angle\left(\mathrm{e}^{j\omega T}-q_i\right) -\sum_{i=1}^{N}\angle\left(\mathrm{e}^{j\omega T}-p_i\right) } $$

The phase response can be visualized using the angles of the vectors from the poles and zeros to the point \(z\), measured with respect to a horizontal line, where \(z\) is the point on the unit circle (\(|z|=1\)) for which the function is evaluated. The phase response is the sum of the angles from the zeros, minus the sum of the angles from the poles, plus \((N-M)\omega T\).
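And similarly for the phase response, a minimal GNU/Octave sketch with the same assumed coefficients; since \(M=N\) here, the \((N-M)\omega T\) term is zero:

B = [0.5 0.5];  A = [1 -0.5];               % assumed example coefficients (M = N = 1)
q = roots(B);  p = roots(A);                % zeros and poles
wT = linspace(0, pi, 512);
z  = exp(1j*wT);
ph1 = angle(polyval(B, z) ./ polyval(A, z));           % directly from the polynomials
ph2 = sum(angle(z - q), 1) - sum(angle(z - p), 1);     % zero angles minus pole angles
% ph1 and ph2 agree, up to multiples of 2*pi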

Inverse Z-transform

Eventually there comes a time to return to the time-domain using an inverse Z-transform. The article Z-Transforms mentioned some of the techniques for the inverse Z-Transform:

  • using the binomial theorem
  • using the convolution theorem
  • performing long division
  • using the initial-value theorem
  • partial fractions expansion

We already used the binomial theorem to prove the binomial scaled pair. Here we use long division to reduce the order of the numerator, and partial fraction expansion to split up the remaining fraction.

Partial Fraction Expansion (and long division)

As we have seen in equation \(\eqref{eq:tf_polynomial}\), the impulse response \(Y(z)=H(z)\) is a rational fraction with \(N\) poles and \(M\) zeroes

$$ \begin{align} Y(z) &= \frac{b_0+b_1z^{-1}+b_2z^{-2}+\ldots+b_Mz^{-M}}{a_0+a_1z^{-1}+a_2z^{-2}+\ldots+a_Nz^{-N}},&a_0=1 \nonumber \end{align} \nonumber $$

This rational fraction is proper when the degree of the numerator polynomial is less than the degree of the denominator polynomial. To make the function proper, we use long division of the numerator by the denominator until the order of the numerator is less than that of the denominator.

Let’s call the quotient from the long division \(F(z)\), and the ratio of the remainder/denominator \(G(z)\)

$$ Y(z) = F(z)+G(z) $$
if \(Y(z)\) was already proper (\(M\lt N\)), we can skip the long division and set the term \(F(z)\) to \(0\).

Determine \(F(z)\), the FIR part

The \(F(z)\) part will be a polynomial in \(z^{-1}\) of the order \(M-N\).

$$ F(z) = f_0+f_1z^{-1}+f_2z^{-2}+\ldots+f_{\color{purple}{M-N}}z^{-(\color{purple}{M-N})}=\sum_{k=0}^{M-N}f_k\,z^{-k} \label{eq:firpart5} $$

Recall the delay from the Z-transform pairs

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\ztransform{\lfz{\mathcal{Z}}} \delta[n-a]\,\ztransform\, z^{-a}\nonumber $$

Using this transform, \(F(z)\) transforms to a parallel combination of delayed impulses in the time-domain

$$ \begin{align} \shaded{f[n] = \sum_{k=0}^{M-N}f_k\ \delta[n-k]} & & \text{FIR part} \label{eq:firpart} \end{align} $$
we call this the FIR part, because it does not depend on any value of the output.

Determine \(G(z)\), the IIR part

The proper rational function \(G(z)\) is the remainder of the long division, divided by the denominator. Note that the numerator uses \(\dot{b}_i\) coefficients, and the constant \(\dot{K}\) brings the numerator and denominator into unity (leading coefficient one) form

$$ \begin{align} G(z) &= \dot{K}\,\frac{1+\dot{b}_{1}z^{-1}+\dot{b}_{2}z^{-2}+\ldots+\dot{b}_{\color{red}{N-1}}z^{-(\color{red}{N-1})}} {1+a_{1}z^{-1}+a_{2}z^{-2}+\ldots+a_{\color{red}{N}}z^{-N}},&\dot{K}=\frac{\dot{b}_0}{\dot{a}_0} \nonumber \\[10mu] &= \dot{K}\,\frac{1+\dot{b}_1z^{-1}+\dot{b}_2z^{-2}+\ldots+\dot{b}_{\color{red}{N-1}}z^{-(\color{red}{N-1})}}{(z-r_1)(z-r_2)(z-r_3)\ldots(z-r_N)},&\shaded{\dot{K}=\frac{\dot{b}_0}{\dot{a}_0}} \label{eq:gfactors} \end{align} $$

This proper fraction can be split up into a sum of simpler fractions as introduced by Oliver Heaviside and described in Partial Fraction Expansion (PFE).

At this point we need to decide on the format of the time-domain function \(g[n]\). We will consider \(G(z)\) with only single (non-repeated) poles and show you three forms. The same can be done with multiple poles, but it is a bit more involved.

While all formats lead to the same output sequence, some may be more intuitive than others. For example, if you expect an exponentially decaying response, you may want to work towards that form and see how well it matches.

Choice 1: The obvious

When \(G(z)\) only has single poles, PFE gives a summation of partial fractions in the form \(\frac{c}{z-a}\)

$$ G(z) = \frac{c_1}{z-r_1}+\frac{c_2}{z-r_2}+\cdots+\frac{c_N}{z-r_N}=\sum_{k=1}^N\,\frac{c_k}{z-r_k} \label{eq:iir1} $$

Thanks to the linearity property of the Z-transform, each of these simpler fractions can be transformed to the time-domain separately and the results summed. The Z-transform for the scaled delay pair is found in the table of Z-transform pairs as

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\ztransform{\lfz{\mathcal{Z}}} \begin{align} a^{n-1}\gamma[n-1] \ztransform \dfrac{1}{z-a}, & & |z|\gt|a| \nonumber \end{align} \nonumber $$

Therefore, \(G(z)\) transforms to a parallel combination of delayed scaled step functions in the time-domain

$$ \shaded{ \begin{align} g[n]&=\left(c_1(r_1)^{n-1}+c_2(r_2)^{n-1}+\cdots+c_{\small N}(r_{\small N})^{\,n-1}\right)\,\color{grey}{\gamma[n\color{black}{-1}]} \nonumber \\ &= \sum_{k=1}^{N}c_k(r_k)^{n-1}\, \color{grey}{\gamma[n\color{black}{-1}]} \nonumber \end{align} } \label{eq:iir0} $$
note the \(-1\) in the step function \(\gamma[n\color{black}{-1}]\); the next choice avoids that delay.

Choice 2: Work towards \(\frac{z}{z-a}\) partial fractions

This time, we decide to work towards partial fractions in the form \(\frac{z}{z-a}\) that transform to \(a^n\,\gamma[n]\) in the time-domain.

Once more, start with equation \(\eqref{eq:gfactors}\), but this time preserve a power of \(z\) by dividing both sides by \(z\).

$$ \begin{align} \frac{G(z)}{\color{blue}{z}}&=\dot{K}\,\frac{1+\dot{b}_1z^{-1}+\dot{b}_2z^{-2}+\ldots+\dot{b}_{\color{red}{N-1}}z^{-(\color{red}{N-1})}}{\color{blue}{z}(z-r_1)(z-r_2)(z-r_3)\ldots(z-r_N)} \label{eq:choice2} \end{align} $$

When \(\frac{G(z)}{z}\) has only single poles, according to Heaviside, it expands to the summation

$$ \begin{align} \frac{G(z)}{\color{blue}z}&=\color{blue}{\frac{c_0}{z}}+\frac{c_1}{z-r_1}+\frac{c_2}{z-r_2}+\cdots+\frac{c_{\small N}}{z-r_N}=\color{blue}{\frac{c_0}{z}}+\sum_{k=1}^N\,c_k\,\frac{1}{z-r_k}\quad\Rightarrow \nonumber \\[6mu] G(z) &= \color{blue}{c_0}+\frac{c_1z}{z-r_1}+\frac{c_2z}{z-r_2}+\cdots+\frac{c_{\small N}z}{z-r_N}=\color{blue}{c_0}+\sum_{k=1}^N\,c_k\,\frac{\color{blue}{z}}{z-r_k} \end{align} $$

The Z-transforms for the constant and the scaled pair are found in the table of Z-transform pairs as

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\ztransform{\lfz{\mathcal{Z}}} \begin{align} \small{a\delta[n]\triangleq\begin{cases}a,&n=0\\0,&n\neq0\end{cases}} \ztransform a \nonumber \\ a^{n}\gamma[n] \ztransform \dfrac{z}{z-a},&&|z|\gt|a| \nonumber \end{align} \nonumber $$

Thus \(G(z)\) transforms to a parallel combination of impulse and scaled step functions in the time-domain

$$ \shaded{ \begin{align} g[n] &= c_0\,\color{grey}{\delta[n]}+\left(c_1(r_1)^n+c_2(r_2)^n+\cdots+c_{\small N}(r_{\small N})^{\,n}\right)\,\color{grey}{\gamma[n]} \nonumber \\ &= c_0\color{grey}{\delta[n]}+\sum_{k=1}^{N}c_k(r_k)^{n}\,\color{grey}{\gamma[n]} \nonumber \end{align} } \label{eq:choise2b} $$

Choice 3: delayed form

In this variation, the IIR part begins after the FIR part has finished. This can be more accurate in signal modeling applications, as the IIR part may be delayed so that its impulse response begins where that of the FIR part has died out.

Start again from the polynomial form of the transfer function, and multiply the numerator by \(z^{\small M}\) (the inverse of its highest power of \(z^{-1}\)), compensating with a factor \(\color{blue}{z^{-M}}\)

$$ \begin{align} H(z) &= \frac{b_0+b_1z^{-1}+b_2z^{-2}+\cdots+b_Mz^{-M}}{a_0+a_1z^{-1}+a_2z^{-2}+\cdots+a_Nz^{-N}} \nonumber \\[10mu] &= \color{blue}{z^{-M}}\,\frac{b_0z^{\color{blue}M}+b_1z^{\color{blue}{M}-1}+b_2z^{\color{blue}{M}-2}+\cdots+b_M}{a_0+a_1z^{-1}+a_2z^{-2}+\cdots+a_Nz^{-N}} \end{align} $$

Similar to before, split the expression in \(F(z)\) and \(G(z)\) parts using long division. Call the quotient from the long division \(F(z)\), and the ratio of the remainder/denominator \(G(z)\)

The \(F(z)\) part will be a polynomial in \(z\) of the order \(M-N\).

$$ F(z) = f_0+f_1z+f_2z^{2}+\ldots+f_{\color{purple}{M-N}}z^{(\color{purple}{M-N})} = \sum_{k=0}^{M-N}f_k\,z^{k} $$

Determine the \(G(z)\) part by first bringing it back to a polynomial in \(z^{-1}\)

$$ \begin{align} G(z) &= \dot{K}\,\color{blue}{z^{-M}}\,\frac{\ddot{b}_0+\ddot{b}_1z^{-1}+\ddot{b}_2z^{-2}+\ldots+\ddot{b}_{{N-1}}z^{-({N-1})}} {1+a_1z^{-1}+a_2z^{-2}+\ldots+a_{N}z^{-N}},&\dot{K}=\frac{\ddot{b}_0}{\dot{a}_0} \nonumber \\[10mu] &= \dot{K}\,\color{blue}{z^{-M}}\,\frac{\ddot{b}_0+\ddot{b}_1z^{-1}+\ddot{b}_2z^{-2}+\ldots+\ddot{b}_{{N-1}}z^{-({N-1})}}{(z-r_1)(z-r_2)(z-r_3)\ldots(z-r_N)},&\shaded{\dot{K}=\frac{\ddot{b}_0}{\dot{a}_0}} \end{align} $$

Split up the proper fraction into a sum of simpler fractions using PFE, using the “obvious” choice and assuming \(G(z)\) only has single poles

$$ \begin{align} G(z)&=\color{blue}{z^{-M}}\left(\frac{c_1}{z-r_1}+\frac{c_2}{z-r_2}+\cdots+\frac{c_N}{z-r_N}\right) \nonumber \\[8mu] &=\color{blue}{z^{-M}}\left(\sum_{k=1}^N\,\frac{c_k}{z-r_k}\right) \end{align} $$

Therefore, \(G(z)\) transforms to a parallel combination of delayed scaled step functions in the time-domain

$$ \shaded{ \begin{align} g[n]&=\left(c_1(r_1)^{n-M-1}+c_2(r_2)^{n-M-1}+\cdots+c_{\small N}(r_{\small N})^{\,n-M-1}\right)\,\color{grey}{\gamma[n\color{black}{-M-1}]} \nonumber \\ &= \sum_{k=1}^{N}c_k(r_k)^{n-M-1}\, \color{grey}{\gamma[n\color{black}{-M-1}]} \nonumber \end{align} } $$

Examples for these forms are given in the appendix.

Stability

As we have seen in Z-transforms, the convergence of a transfer function depends on its magnitude, while its phase has no effect. A system is stable if the magnitude of its impulse response \(h[n]\) decays to \(0\) as \(n\to\infty\).

As we have seen, every finite-order LTI filter can be expressed as FIR and IIR parts. We will now examine how these parts contribute to the stability of the system.

FIR part

The FIR part \(F(z)\) from equation \(\eqref{eq:firpart5}\) is a finite-order polynomial in \(z^{-1}\)

$$ \begin{align} F(z) &= f_0+f_1z^{-1}+f_2z^{-2}+\ldots+f_Kz^{-K},\ \ \ K=M-N& \forall_{M\geq N} \end{align} $$

In the time-domain this transforms to \(\eqref{eq:firpart}\)

$$ f[n]=\sum_{k=0}^{M-N}f_k\ \delta[n-k]\nonumber $$
This is always stable, because it consists of a finite number of terms.

IIR part

As shown in Choice 1 above, the IIR part can be expressed as a summation of \(\frac{1}{z-a}\) terms \(\eqref{eq:iir1}\), which transform to the time-domain as \(\eqref{eq:iir0}\).

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\ztransform{\lfz{\mathcal{Z}}} y_{\tiny{IIR}}[n]=\sum_{k=1}^{N}\,c_k\,(r_k)^{n-1}\,\gamma[n-1] \ztransform \sum_{k=1}^{N}c_k\,\frac{1}{z-r_k} $$
where \(c_k\) is a constant (or, for repeated poles, a finite-order polynomial in \(n\)) and \(r_k\) is the \(k\)th pole of the filter.

If all the poles are inside the unit circle in the \(z\)-plane, then the IIR part is stable and consequently the transfer function \(H(z)\) is stable. More formally:

An irreducible transfer function \(H(z)\) is stable if and only if all its poles have a magnitude less than one.
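In GNU/Octave, this criterion reduces to checking the magnitudes of the roots of the denominator polynomial; a minimal sketch with assumed example coefficients:

A = [1 -1.2 0.5];            % assumed feedback coefficients a0, a1, a2
p = roots(A);                % poles of H(z)
stable = all(abs(p) < 1)     % true: both poles have magnitude ~0.71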

Appendix

Example 1

This example shows an Inverse Z-Transform for a rational function where the numerator and denominator have the same degree \(N=M=3\).

$$ Y(z)=\frac{2z^3+z^2-z+4}{(z-2)^3} $$

This conveniently matches Example 2 in the Partial Fraction Expansion article.

$$ \frac{\color{green}{2}x^3+\color{green}{x}^2\color{green}{-1}x+\color{green}{4}}{(x-2)^3}=\color{blue}{-\frac{1}{2}}+\color{blue}{11}\frac{x}{(x-2)^3}+\color{blue}{8}\frac{x}{(x-2)^2}+\color{blue}{\frac{5}{2}}\frac{x}{x-2}\nonumber $$

This implies that \(Y(z)\) can be expressed in partial fractions as

$$ \begin{align} Y(z) &= \color{blue}{-\frac{1}{2}}+\color{blue}{\frac{5}{2}}\frac{z}{(z-2)}+\color{blue}{8}\frac{z}{(z-2)^2}+\color{blue}{11}\frac{z}{(z-2)^3} \end{align} $$

These terms are readily found in the Z-transform pairs table

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\ztransform{\lfz{\mathcal{Z}}} \begin{align} \delta[n] &\,\ztransform\, 1 \nonumber \\ a^n\,\gamma[n] &\,\ztransform \frac{z}{z-a},&|z|\gt |a| \nonumber\\[6mu] n\,a^n\,\gamma[n] &\,\ztransform \frac{az}{(z-a)^2},&|z|\gt |a| \nonumber \\ \tfrac{1}{2}{n(n-1)}\,a^n\,\gamma[n] &\,\ztransform \frac{a^2z}{(z-a)^3},&|z|\gt |a| \nonumber \end{align} \nonumber $$

Using these transform pairs, the time-domain response is

$$ \begin{align} y[n] &= \color{blue}{-\tfrac{1}{2}}\delta[n]+\color{blue}{\tfrac{5}{2}}2^n\gamma[n]+\tfrac{1}{2}\color{blue}{8}n2^n\gamma[n]+\tfrac{1}{2^2}\color{blue}{11}\tfrac{1}{2}n(n-1)2^n\gamma[n], & n\geq0\nonumber\\[8mu] &= -\tfrac{1}{2}\delta[n]+\tfrac{5}{2}2^n\gamma[n]+4n2^n\gamma[n]+\tfrac{11}{8}n(n-1)2^n\gamma[n], &n\geq0 \end{align} $$

Example 2

This example shows an Inverse Z-Transform for a rational function where the degree of the numerator is one more than that of the denominator (\(M=3\), \(N=2\)). [CCRMA]

Examine the impulse response of a filter with transfer function

$$ H(z) = \frac{\color{teal}{2}+\color{teal}{6}z^{-1}+\color{teal}{6}z^{-2}+\color{teal}{2}z^{-3}}{\color{olive}{1}\color{olive}{-2}z^{-1}+\color{olive}{1}z^{-2}} \label{eq:example2_def} $$

Solve it by using long division to bring the order of the numerator down to \(N-1\), so we can use partial fraction expansion on the remaining IIR part.

To help with the notation, define \(d\triangleq z^{-1}\) and do the long division.
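Carrying out the division of \(2d^3+6d^2+6d+2\) by \(d^2-2d+1\)

$$ \begin{align} \frac{2d^3+6d^2+6d+2}{d^2-2d+1} &= 2d+\frac{10d^2+4d+2}{d^2-2d+1} \nonumber \\[6mu] &= 2d+10+\frac{24d-8}{d^2-2d+1} \nonumber \end{align} $$

so that, with \(d=z^{-1}\), the quotient is \(10+2z^{-1}\) and the remainder is \(-8+24z^{-1}\).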

This brought the order of the numerator down to one less than that of the denominator (\(M=1\), \(N=2\))

$$ H(z) = \underbrace{\color{purple}{10}+\color{purple}{2}z^{-1}}_{F(z)}+\underbrace{\frac{\color{green}{-8}+\color{green}{24}z^{-1}}{1-2z^{-1}+z^{-2}}}_{G(z)} $$

Examine the IIR part, \(G(z)\)

$$ G(z) = \frac{\color{green}{-8}+\color{green}{24}z^{-1}}{1-2z^{-1}+z^{-2}} \label{eq:example2g} $$

This matches Example 1 in the partial fraction expansion article.

$$ \frac{\color{green}{-8}+\color{green}{24}x}{1-2x+x^2}=\frac{\color{blue}{-24}}{1-x}+\frac{\color{blue}{16}}{\left(1-x\right)^{2}}\nonumber $$

Analogous to that example, the fraction \(G(z)\) can be expressed in partial fractions as

$$ G(z) = \frac{\color{blue}{-24}}{1-z^{-1}}+\frac{\color{blue}{16}}{\left(1-z^{-1}\right)^{2}} $$

With \(H(z) = F(z)+G(z)\), and the impulse response \(Y(z) = \Delta(z)\,H(z)=H(z)\)

$$ \shaded{ Y(z)=\color{purple}{10}+\color{purple}{2}z^{-1}\color{blue}{-}\frac{\color{blue}{24}}{1-z^{-1}}+\frac{\color{blue}{16}}{\left(1-z^{-1}\right)^{2}} } $$

These terms are readily found in the Z-transform pairs table

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\ztransform{\lfz{\mathcal{Z}}} \begin{align} \delta[n] &\ztransform 1 \nonumber \\ \delta[n-a] &\ztransform \small{\begin{cases}z^{-a},&a\geq0\\0,&a\lt0\end{cases}} &z\neq0 \nonumber \\ a^n\,\color{grey}{\gamma[n]} &\ztransform {\frac{1}{1-az^{-1}}},&|z|\gt|a| \nonumber \\ \small{\left(\begin{array}{c}n+m-1\\m-1\end{array}\right)}\,a^n\,\color{grey}{\gamma[n]} &\ztransform\dfrac{1}{(1-az^{-1})^m}&|z|\gt|a| \nonumber \\ n\,\color{grey}{\gamma[n]} &\ztransform \frac{z^{-1}}{\left(1-z^{-1}\right)^2},&|z|\gt1 \nonumber \end{align}\nonumber $$

The impulse response \(y[n]\) follows as

$$ \begin{align} y[n] &= \color{purple}{10}\delta[n]+\color{purple}{2}\delta[n-1]\color{blue}{-24}\gamma[n]+\color{blue}{16}(n+1)\gamma[n] \\[8mu] &= \{2,10,24,40,\dots\} \end{align} $$
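As a quick numeric check in GNU/Octave, the first samples of the impulse response match this sequence:

B = [2 6 6 2];  A = [1 -2 1];
h = filter(B, A, [1 zeros(1, 5)])
% h = 2   10   24   40   56   72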

Delayed version

The delayed version is found by first multiplying numerator and denominator of equation \(\eqref{eq:example2_def}\) with \(z^3\), to make them a polynomial in \(z\) instead of \(z^{-1}\)

$$ H(z) = \frac{\color{grey}{2}z^3+\color{grey}{6}z^{2}+\color{grey}{6}z+\color{grey}{2}}{z(\color{purple}{1}z^2\color{purple}{-2}z+\color{purple}{1})} $$

Do a long division to reduce the order of the numerator
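Carrying out the division, divide \(2z^3+6z^2+6z+2\) by \(z(z^2-2z+1)=z^3-2z^2+z\), continuing into negative powers of \(z\) until the remainder is of lower degree than \(z^2-2z+1\)

$$ \begin{align} \frac{2z^3+6z^2+6z+2}{z^3-2z^2+z} &= 2+\frac{10z^2+4z+2}{z^3-2z^2+z} \nonumber \\[6mu] &= 2+10z^{-1}+\frac{24z-8}{z^3-2z^2+z} \nonumber \end{align} $$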

This brought the order of the numerator down to less than that of the denominator (\(M=1\), \(N=3\))

$$ \begin{align} H(z) &= \frac{\color{purple}{2}z+\color{purple}{10}}{z}+\frac{\color{green}{24}z\color{green}{-8}}{z(\color{purple}{1}z^2\color{purple}{-2}z+\color{purple}{1})} \nonumber \\[6mu] &= \underbrace{\color{purple}{2}+\color{purple}{10}z^{-1}}_{\triangleq F(z)}+\underbrace{z^{-1}\frac{\color{green}{24}z\color{green}{-8}}{z^2\color{purple}{-2}z+\color{purple}{1}}}_{\triangleq G(z)} \end{align} $$

Examine the IIR part \(G(z)\), by bringing it back to a polynomial in \(z^{-1}\)

$$ \begin{align} G(z)&=z^{-1}\frac{\color{green}{24}z\color{green}{-8}}{z^2\color{purple}{-2}z+\color{purple}{1}} =z^{-2}\frac{\color{green}{24}\color{green}{-8}z^{-1}}{\color{purple}{1}\color{purple}{-2}z^{-1}+\color{purple}{1}z^{-2}} \end{align} $$

Partial fraction expansion, left as an exercise to the reader

$$ \begin{align} G(z) &= z^{-2}\frac{\color{green}{24}\color{green}{-8}z^{-1}}{\color{purple}{1}\color{purple}{-2}z^{-1}+\color{purple}{1}z^{-2}} = z^{-2}\left(\frac{\color{blue}{8}}{1-z^{-1}}+\frac{\color{blue}{16}}{(1-z^{-1})^{\color{pink}{2}}}\right) \end{align} $$

With \(H(z)=F(z)+G(z)\), and the impulse response \(Y(z)=\Delta(z)\,H(z)=H(z)\)

$$ \shaded{ Y(z)=F(z)+G(z)=\color{purple}{2}+\color{purple}{10}z^{-1}+z^{-2}\left(\frac{\color{blue}{8}}{1-z^{-1}}+\frac{\color{blue}{16}}{(1-z^{-1})^{\color{magenta}{2}}}\right) } $$

Using GNU/Octave

The GNU/Octave residuez function returns the FIR part as \(\color{purple}{f}\), the filter-pole residues as \(\color{blue}{r}\), the filter poles as \(\color{brown}{p}\), and the pole multiplicities as \(\color{magenta}{m}\)

pkg load signal
B = [2 6 6 2];  A = [1 -2 1];
[r, p, f, m] = residuez(B, A)
% r = -24, 16    (residues)
% p = 1, 1       (poles)
% f = 10, 2      (FIR part)
% m = 1, 2       (pole multiplicities)

In other words

$$ H(z) = \color{purple}{10}+\color{purple}{2}z^{-1}+\frac{\color{blue}{-24}}{\left(\color{brown}{1}-z^{-1}\right)^{\color{magenta}{1}}}+\frac{\color{blue}{16}}{\left(\color{brown}1-z^{-1}\right)^{\color{magenta}{2}}} $$

We can also use GNU/Octave to determine the delayed form of the IIR

For the same example, the residued function returns

[r, p, f, m] = residued(B, A)
% r = 8, 16      (residues)
% p = 1, 1       (poles)
% f = 2, 10      (FIR part)
% m = 1, 2       (pole multiplicities)

In other words

$$ H(z) = \color{purple}{2}+\color{purple}{10}z^{-1}+z^{-2}\left(\frac{\color{blue}{8}}{\left(\color{brown}{1}-z^{-1}\right)^{\color{magenta}{1}}}+\frac{\color{blue}{16}}{\left(\color{brown}1-z^{-1}\right)^{\color{magenta}{2}}}\right) $$

Transfer functions


The article on Z-Transforms introduced a difference equation for discrete stable causal Linear Time Invariant (LTI) systems, which from here on we will refer to as an LTI system, or system for short.

$$ \begin{align} \sum_{k=0}^N a_k\,y[n-k] &= \sum_{k=0}^M b_k\,x[n-k]\quad\Rightarrow \nonumber \\[10mu] a_0y[n]+a_1y[n-1]+a_2y[n-2]+\ldots &= b_0x[n]+b_1x[n-1]+b_2x[n-2]+\ldots \nonumber \end{align} \nonumber $$

Here we will focus on the black box with discrete input signal \(x[n]\) and output \(y[n]\), where \(n\) is the sample number, sampled every \(T\) seconds.

Blackbox model

We will follow the notation from Z-transforms, where \(\def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\ztransform{\lfz{\mathcal{Z}}} \ztransform\) is equivalent to the more common notation \(\mathfrak{Z}\left\{\,f[n]\,\right\}\), and \(f[n]\) is defined as the sample taken at time \(nT\). The terms filter and system will be used interchangeably.

Linear Time Invariant System

The blackbox that we will examine is a Stable Causal Linear Time Invariant (LTI) system. Most practical systems can be modeled as LTI systems, or at least approximated by one around a nominal operating point. Examples include stereo engineering to counter the effect of a stadium on the music, or process control engineering in chemical plants.

The relation between the causal LTI system \(L\), its input \(x[n]\) and output \(y[n]\) can be expressed as

$$ y[n]=\color{purple}{L\Big(\color{black}{x[n]}\Big)} \label{eq:ynLx} $$

Definition

Before we start exploring the properties of stable causal LTI systems, let’s take a moment to define what a stable causal LTI system is.

A Causal system:

Is one in which changes in output do not precede changes in input.

The Linearity property states:

If input \(x_{1}(t)\), produces response \(L\big(x_{1}(t)\big)\), and input \(x_{2}(t)\), produces response \(L\big(x_{2}(t)\big)\), and \(a_i\) are real scalars, then a scaled and summed input produces the scaled and summed response
$$ \color{purple}{L\Big(\color{green}{a_1}\color{blue}{x_1(t)}\color{black}{+}\color{green}{a_2}\color{blue}{x_2(t)}\Big)} = \color{green}{a_1} \color{purple}{L\Big(\color{blue}{x_1(t)}\Big)}+\color{green}{a_2} \color{purple}{L\Big(\color{blue}{x_2(t)}\Big)} \label{eq:linear} $$

The Time Invariance property states:

When we apply an input to the system now, or \(\tau\) seconds from now, the output will be identical except for a time delay of \(\tau\) seconds. That is, if the output due to input \(x(t)\) is \(y(t)\), then the output due to input \(x(t-\tau)\) is
$$ \color{purple}{L\Big(\color{blue}{x(t-\tau)}\Big)} = \color{blue}{y(t-\tau)} \label{eq:timeinvariance} $$

Furthermore, the system needs to be stable:

An LTI system is bounded-input, bounded-output (BIBO) stable if all bounded inputs result in bounded outputs.

From here on we will refer to a stable causal linear time invariant system as an LTI system, or system for short.

Combining systems

The two common configurations when combining filters are: series and parallel.

The figure below shows a series, or cascade, connection of filter \(H_1(z)\) and \(H_2(z)\), where the output from the first filter feeds the input of the next filter.

Two filters in series

The transfer function for the series circuit is

$$ H(z) = \frac{V(z)}{X(z)}\cdot\frac{Y(z)}{V(z)}= H_1(z)\,H_2(z) = H_2(z)\,H_1(z) $$
where the commutative property of multiplication implies that the order of the filters may be reversed.

The other common configuration is called parallel as shown below. In a parallel circuit, both filters get the same input signal and their outputs are summed.

Two filters in parallel

The transfer function for the parallel circuit is

$$ H(z) = \frac{Y_1(z)}{X(z)}+\frac{Y_2(z)}{X(z)}=H_1(z)+H_2(z)=H_2(z)+H_1(z) $$
where the commutative property of addition implies that the order of the filters may be reversed.
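In GNU/Octave, both combinations can be sketched with the coefficient polynomials, since polynomial multiplication corresponds to convolving the coefficient vectors (assumed example coefficients):

B1 = [1 1];   A1 = [1 -0.5];        % H1(z)
B2 = [1 -1];  A2 = [1 0.25];        % H2(z)
% series: H(z) = H1(z) H2(z)
Bs = conv(B1, B2);  As = conv(A1, A2);
% parallel: H(z) = H1(z) + H2(z), after bringing to a common denominator
Bp = conv(B1, A2) + conv(B2, A1);
Ap = conv(A1, A2);
% filter(Bs, As, x) behaves as filter(B2, A2, filter(B1, A1, x))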

Output of a causal LTI system

In the time-domain, we find the output of a causal LTI system by passing the input signal as a parameter to the system equation, as shown in equation \(\eqref{eq:ynLx}\)

$$ y[n] = \color{purple}{L\Big(\color{black}{x[n]}\Big)} \nonumber $$

Convolution in the time-domain

The article on Z-transforms showed how any discrete input signal \(x[n]\) can be expressed as a summation of scaled impulses.

$$ x[n] = \sum_{k=0}^{\infty}{\color{green}{x[k]}\ \color{blue}{\delta[n-k]}} \nonumber $$

Consider this \(x[n]\) to be the input to LTI system \(L\). The output of the system follows from substituting the equation for signal \(x[n]\) in \(\eqref{eq:ynLx}\)

$$ y[n] = \color{purple}{L\Big(\color{black}{x[n]}\Big)}=\color{purple}{L\left(\color{black}{\sum_{k=0}^{\infty}\color{green}{x[k]}\ \underbrace{\color{blue}{\delta[n-k]}}_{\text{variable}}}\right)} \label{eq:yn} $$

In equation \(\eqref{eq:yn}\), each term is a scalar \(\color{green}{x[k]}\) multiplying a signal \(\color{blue}{\delta[n-k]}\) that is a function of time \(t=\color{blue}{n}T\). This implies that we can use the part of the linearity property \(\eqref{eq:linear}\) that states:

$$ \color{purple}{L\Big(\color{green}{a}\ \color{blue}{x[n]}\Big)}=\color{green}{a}\ \color{purple}{L\Big(\color{blue}{x[n]}\Big)} $$

Apply the linearity property to bring the time independent part out of the system function \(L\) in equation \(\eqref{eq:yn}\)

$$ \begin{align} y[n]=\color{purple}{L}\left(x[n]\right)=&\color{purple}{L}\left(\sum_{k=0}^{\infty}{\color{green}{x[k]}\,\ \color{blue}{\delta[n-k]}}\right) \nonumber \\ &= \sum_{k=0}^{\infty}{x[k]\,\ \underbrace{\color{purple}L\Big(\delta[n-k]\Big)}_{\color{blue}{\text{?}}}} \label{eq:yn2} \end{align} $$

Let’s examine the expression \(L\big(\delta[n-k]\big)\). Suppose we know how \(L\) acts on one impulse function \(\delta[n]\), and define it as

$$ \shaded{ h[n]\triangleq L\Big(\delta[n]\Big) } \label{eq:hn} $$
this so-called impulse response \(h[n]\) fully describes any LTI system, just like the difference equation coefficients do. The Z-transform of \(h[n]\) is the transfer function of the system, and the two terms are often used interchangeably.

Applying the time invariance property \(\eqref{eq:timeinvariance}\) to \(\eqref{eq:hn}\) yields

$$ L\Big(\delta[n-k]\Big)=h[n-k] \label{eq:ldelta} $$

Substituting \(\eqref{eq:ldelta}\) in \(\eqref{eq:yn2}\), gives the convolution sum for LTI systems

$$ \shaded{ y[n] = \sum_{k=0}^{\infty}h[n-k]\,x[k]\triangleq x[n]\ast h[n]\triangleq (x\star h)[n] } $$
or written out
$$ y[n] = h[0]x[n]+h[1]x[n-1]+h[2]x[n-2]+\ldots+h[n]x[0] $$

In summary, when we model the transfer function of a LTI black box as \(h[n]\), in the time-domain the output signal \(y[n]\) is a convolution ‘\(\ast\)’ of input \(x[n]\) and the system impulse response \(h[n]\).
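A minimal GNU/Octave sketch of the convolution sum, with assumed short example sequences:

h = [1 0.5 0.25];            % assumed impulse response samples h[0..2]
x = [1 2 3 4];               % assumed input samples x[0..3]
y = conv(x, h)               % discrete convolution, length(x)+length(h)-1 samples
% y = 1   2.5   4.25   6   2.75   1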

Multiplication in the \(z\)-domain

Likewise, in the \(z\)-domain, the transfer function fully describes how the output signal \(Y(z)\) responds to an arbitrary input signal \(X(z)\).

As we have seen in Z-Transforms, the convolution in the time-domain transforms to a multiplication in the \(z\)-domain.

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\ztransform{\lfz{\mathcal{Z}}} (f\ast g)[n]\,\color{grey}{\gamma[n]} \ztransform F(z)\,G(z) \nonumber $$

That implies that the output \(Y(z)\) is the result of the input signal \(X(z)\) multiplied with the impulse response \(H(z)\) of the filter.

$$ \shaded{ Y(z)=X(z)\,H(z) } \label{eq:yhz} $$
This is very convenient because it lets one determine the system response without having to solve the convolution.

Transfer function

Time to take a closer look at the transfer function of the LTI system. We will express the transfer function as a ratio of polynomials and show it in its factorized form.

The Z-Transforms article opened with a generic form of Linear Constant-Coefficient Difference Equation (LCCDE) that expresses the relation between input \(x[n]\) and output \(y[n]\)

$$ \begin{align} \sum_{k=0}^N a_k\,\color{blue}{y[n-k]} &= \sum_{k=0}^M b_k\,\color{blue}{x[n-k]} \label{eq:lccde} \end{align} $$

Recall the time delay property

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\ztransform{\lfz{\mathcal{Z}}} f[n-a]\, \color{grey}{\gamma[n} – a \color{grey}{]} \,\ztransform z^{-a}F(z) \nonumber $$

Apply the time delay property to transform both sides of equation \(\eqref{eq:lccde}\) to the \(z\)-domain

$$ \begin{align} \sum_{k=0}^N a_k\,\color{blue}{z^{-k}\,Y(z)}&=\sum_{k=0}^M b_k\,\color{blue}{z^{-k}X(z)}\quad\Rightarrow \nonumber \\ \color{blue}{Y(z)}\sum_{k=0}^N a_k\,\color{blue}{z^{-k}}&=\color{blue}{X(z)}\sum_{k=0}^M b_k\,\color{blue}{z^{-k}} \end{align} $$

Bring \(X(z)\) and \(Y(z)\) to the left and the summations to the right, yields the generic form of the transfer function \(H(z)\)

$$ \shaded{ H(z) =\frac{Y(z)}{X(z)} =\frac{\sum_{k=0}^{M}b_k\,z^{-k}}{\sum_{k=0}^{N}a_k\,z^{-k}} =\frac{b_0+b_1z^{-1}+b_2z^{-2}+\cdots+b_Mz^{-M}}{a_0+a_1z^{-1}+a_2z^{-2}+\cdots+a_Nz^{-N}} } \label{eq:tf_polynominal} $$
note that \(a_0\) is typically assigned the value \(1\). Also note that some authors write this equation with the \(a_{1\ldots{\small N}}\) terms subtracted, thereby negating those coefficients.

The \(b_k\) coefficients are called feedforward coefficients, and the \(a_k\) coefficients are called feedback coefficients.
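With the coefficients in hand, the transfer function can be evaluated numerically; a minimal sketch using freqz() from the GNU/Octave signal package, with assumed example coefficients:

pkg load signal
B = [1 1];  A = [1 -0.5];             % assumed b_k and a_k coefficients
[H, w] = freqz(B, A, 512);            % H(e^{jwT}) at 512 frequencies from 0 to pi
magnitude = abs(H);
phase = angle(H);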

Factorized form

It is often convenient to factor the polynomials of the transfer function \(\eqref{eq:tf_polynominal}\), and write the function in terms of those factors.

The first step towards a factorized form is to rewrite \(H(z)\) in a standard form, so that the highest order terms of the numerator and denominator are unity.

$$ \begin{align} H(z) &=K\,\frac{\frac{b_0}{b_M}+\frac{b_1}{b_M}z^{-1}+\frac{b_2}{b_M}z^{-2}+\frac{b_3}{b_M}z^{-3}+\cdots+z^{-M}}{\frac{a_0}{a_N}+\frac{a_1}{a_N}z^{-1}+\frac{a_2}{a_N}z^{-2}+\frac{a_3}{a_N}z^{-3}+\cdots+z^{-N}}, &K=\frac{b_M}{a_N} \end{align} $$

Factorize the polynomials

$$ \begin{align} H(z)=K\,\frac{N(z)}{D(z)} &= K\,\frac{(1-q_1z^{-1})(1-q_2z^{-1})\cdots(1-q_{\small M}z^{-1})}{(1-p_1z^{-1})(1-p_2z^{-1})\cdots(1-p_{\small N}z^{-1})}&\times z^{\small N-M}\frac{z^{\small M}}{z^{\small N}} \nonumber \\[14mu] &= K\,z^{\small N-M}\,\frac{\prod_{i=1}^M(z-q_i)}{\prod_{i=1}^N(z-p_i)},\quad \text{ where }K=\frac{b_M}{a_N}\label{eq:tf_prod} \end{align} $$
where the factor \(z^{\small N-M}\) appears when the number of poles is not equal to the number of zeros.

The \(q_i\)’s are the roots of the equation \(N(z)=0\) and are called the system zeros. The \(p_i\)’s are the roots of the equation \(D(z)=0\) and are defined as the system poles. The filter order equals the number of poles or zeros in the transfer function, whichever is greater.

If one would make a 3D plot with the \(z\)-plane being the base and \(|H(z)|\) on the vertical axis, then the poles will show as thin “poles” pointing up and the zeros will show as dips pointing down.

The (complex) poles and zeros are properties of the transfer function, and therefore of the difference equation. Together with the gain constant \(K\) and the factor \(z^{\small N-M}\), they give a complete description of the filter.

Visualization

The article Z-transforms introduced the normalized angular frequency \(\omega T\) and the \(z\)-plane. Here we will visualize the poles, zeroes and their evaluation for complex variable \(z\).

Poles and zeros

As we have seen in equation \(\eqref{eq:tf_prod}\), the factorized transfer function can be written as

$$ H(z) = K \frac{\prod_{i=1}^M(z-q_i)}{\prod_{i=1}^N(z-p_i)} \label{eq:visualpoleszeros} $$

The system may be represented graphically by plotting the poles and zeros in the complex \(z\)-plane. In these so called pole-zero plots, it is customary to mark a zero location with a circle (\(\circ\)) and a pole location with a cross (\(\times\)).

Poles and zeroes in \(z\)-plane
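A minimal GNU/Octave sketch that finds the poles and zeros from assumed example coefficients and plots them (zplane() is from the signal package):

pkg load signal
B = [1 0 -1];                % numerator  b0 + b1 z^-1 + b2 z^-2
A = [1 -0.9 0.81];           % denominator a0 + a1 z^-1 + a2 z^-2
q = roots(B)                 % zeros: +1 and -1
p = roots(A)                 % poles: 0.45 +/- 0.7794j, magnitude 0.9
zplane(B, A)                 % circles mark zeros, crosses mark poles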

Evaluating Poles and zeros

We will run ahead of ourselves and describe how the poles and zeros affect the system response; later we will come back to this subject and explore it further.

Each of the pole factors \((z-p_i)\) and zero factors \((z-q_i)\) makes a unique contribution to the transfer function. To give you an insight, we will evaluate the factor \((z-p_i)\); the same applies to the zeros.

$$ \begin{align} (z-p_i)&={\Large(}\Re(z)+j\,\Im(z){\Large)}-{\Large(}\Re(p_i)+j\,\Im(p_i){\Large)} \nonumber \\[6mu] &= {\Large(}\Re(z)-\Re(p_i){\Large)}+j\,{\Large(}\Im(z)-\Im(p_i){\Large)} \end{align} $$

this factor can be visualized with a vector drawn from \(p_i\) to \(z\). Each of the factors in the numerator and denominator may be interpreted as a vector in the \(z\)-plane, originating from the zero \(q_i\) or pole \(p_i\) and directed to the point \(z\) at which the function is evaluated.

\(p\) evaluated at point \(z\)

The length and angle of these factors represent their contribution to the transfer function. For example for a pole \(p_i=\Re(p_i)+j\,\Im(p_i)\), the magnitude and angle of the vector to the variable \(z=\Re(z)+j\,\Im(z)\) are

$$ \begin{aligned} |z-p_i| &= \sqrt{{\Large(}\Re(z)-\Re(p_i){\Large)}^2+{\Large(}\Im(z)-\Im(p_i){\Large)}^2} \\[6mu] \angle(z-p_i) &= \mathrm{atan2}{\Large(}\Im(z)-\Im(p_i),\,\Re(z)-\Re(p_i){\Large)} \end{aligned} $$
where \(\mathrm{atan2}\) is defined as
$$ \mathrm{atan2}(y,x)\triangleq \begin{cases} \arctan\left(\frac{y}{x}\right) & x \gt 0 \\ \arctan\left(\frac{y}{x}\right)+\pi & x \lt 0 \land y \geq 0 \\ \arctan\left(\frac{y}{x}\right)-\pi & x \lt 0 \land y \lt 0 \\ \frac{\pi}{2} & x= 0 \land y \gt 0 \\ -\frac{\pi}{2} & x= 0 \land y \lt 0 \\ \text{undefined} & x= 0 \land y = 0 \end{cases} $$

Multiplication and division

While we are on the subject, recall multiplication and division of complex numbers as it is most easily done in polar form

$$ \begin{align} Z_1Z_2& = |Z_1|e^{j\angle{Z_1}}\ |Z_2|e^{j\angle{Z_2}} = |Z_1||Z_2|e^{j(\angle{Z_1}+\angle{Z_2})} &\text{multiplication} \\[10mu] \frac{Z_1}{Z_2} &= \frac{|Z_1|e^{j\angle{Z_1}}}{|Z_2|e^{j\angle{Z_2}}} = \frac{|Z_1|}{|Z_2|}e^{j(\angle{Z_1}-\angle{Z_2})} &\text{division} \end{align} $$
Refer to Complex Arithmetic Formulas for an overview of various complex operations.

Applying \(|K|=K\) and \(\angle{K}=\mathrm{atan2}(0,K)=0\), valid for \(K\gt0\), the magnitude and angle of the complete transfer function \(H(z)\) may be written as

$$ \shaded{ \begin{align} H(z) &= |H(z)|\ e^{j\angle{H(z)}} \nonumber \\[10mu] \text{where }\quad|H(z)| &= K \frac{\prod_{i=1}^m\left|(z-q_i)\right|}{\prod_{i=1}^n\left|(z-p_i)\right|} \nonumber \\[10mu] \text{and }\quad\angle{H(z)}&=\sum_{i=1}^m\angle(z-q_i)-\sum_{i=1}^n\angle(z-p_i) \nonumber \end{align} } $$

Suggested next reading is Evaluating Discrete Transfer Functions.

Inverse Z-transform


The forward Z-transform helped us express samples in time as an analytic function on which we can use our algebra tools. Eventually, we have to return to the time domain using the Inverse Z-transform.

The inverse Z-transform can be derived using Cauchy’s integral theorem.

Start with the definition of the Z-transform

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\ztransform{\lfz{\mathcal{Z}}} \begin{align} f[m]\,\ztransform\,&F(z)=\sum_{m=0}^\infty z^{-m}\ f[m]\nonumber \end{align} \nonumber $$

Multiply both sides by \(z^{n-1}\)

$$ \begin{align} F(z)\,\color{purple}{z^{n-1}}&=\sum_{m=0}^\infty z^{-m\color{purple}{+n-1}}\,f[m] \end{align} $$

Integrate with a counterclockwise contour integral for which the contour encloses the origin and lies entirely within the region of convergence of \(F(z)\)

$$ \begin{align} \color{purple}{\frac{1}{2\pi j}\oint_C} F(z)\,z^{n-1}\,\color{purple}{\mathrm{d}z}&=\color{purple}{\frac{1}{2\pi j}\oint_C} \sum_{m=0}^\infty z^{-m+n-1}\,f[m]\,\color{purple}{\mathrm{d}z} \nonumber \\ &= \sum_{m=0}^\infty\,f[m]\ \underbrace{\frac{1}{2\pi j}\oint_C \,z^{-(m-n+1)}\,\mathrm{d}z}_{\color{blue}{\text{?}}} \end{align} $$

A special case of Cauchy’s integral theorem states

$$ \frac{1}{2\pi j}\,\oint_Cz^{-l}\,\mathrm{d}z\,=\, \begin{cases} 1 & l = 1 \\ 0 & l\neq1 \end{cases} \nonumber $$

That implies we can replace the integral with a \(\delta\) function.

$$ \begin{align} \frac{1}{2\pi j}\oint_C F(z)\,z^{n-1}\,\mathrm{d}z &= \sum_{m=0}^\infty\,f[m]\ \underbrace{\frac{1}{2\pi j}\oint_C \,z^{-(m-n+1)}\,\mathrm{d}z}_{\color{purple}{=1\text{ only when m-n+1=1}}} \nonumber \\ &= \sum_{m=0}^\infty\,f[m]\ \color{purple}{\delta[m-n]} \end{align} $$

The Inverse Z-transform follows as

$$ \shaded{ f[n] = \frac{1}{2\pi j}\oint_C F(z)\,z^{n-1}\,\mathrm{d}z } $$
where the contour integral is taken over a counter-clockwise closed contour \(C\) in the region of convergence (ROC) that encircles the origin. For a causal \(F(z)\), this means the contour \(C\) encircles all the poles of \(F(z)\).

Cauchy’s Residue Theorem

When the transfer function is rational (a ratio of polynomials), we may use the method described below to calculate the Inverse Z-transform.

Denote the unique poles of \(F(z)\) as \(p_{1\ldots{\small N}}\) and their algebraic multiplicities as \(m_{1\ldots{\small N}}\). As long as \(N\) is finite, which is the case if \(F(z)\) is rational, we can evaluate the inverse Z-Transform via Cauchy’s residue theorem that states

$$ f[n] = \frac{1}{2\pi j}\oint_C F(z)\,z^{n-1}\,\mathrm{d}z=\sum_{p_k\text{ inside }C}\mathrm{Res}\large{(}\,F(z)\,z^{n-1},\,p_k,\,m_k\,{\large{)}} \nonumber $$ where $$ \mathrm{Res}{\Large(}\,F(z)\,z^{n-1},\,p_k,\,m_k\,{\Large)}=\frac{1}{(m_k-1)!}\left[\frac{\text{d}^{(m_k-1)}}{\text{d}z^{(m_k-1)}}\,{\Large[}\,(z-p_k)^{m_k}\,F(z)\,z^{n-1}\,{\Large]}\right]_{z=p_k} \nonumber $$ for a single pole $$ \mathrm{Res}{\Large(}\,F(z)\,z^{n-1},\,p_k,\,1\,{\Large)}=\left.\,(z-p_k)\,F(z)\,z^{n-1}\,\right|_{z=p_k} \nonumber $$

Cauchy’s residue theorem allows us to compute the contour integral by computing derivatives, however tedious.
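As a quick check of the single-pole formula, take \(F(z)=\frac{z}{z-a}\); for \(n\geq0\) the integrand \(F(z)\,z^{n-1}=\frac{z^n}{z-a}\) has its only pole at \(z=a\), and

$$ f[n] = \mathrm{Res}{\Large(}\,F(z)\,z^{n-1},\,a,\,1\,{\Large)} = \left.\,(z-a)\,\frac{z^n}{z-a}\,\right|_{z=a} = a^n $$

which matches the familiar pair for \(a^n\,\gamma[n]\).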

Other techniques

Given that the \(z\)-transform is a particular type of Laurent series, and the Laurent series in a given annulus of convergence is unique, any technique can be used to generate a power series for \(F(z)\) that converges in the outermost annulus of convergence to obtain the inverse \(z\)-transform.

Inversion techniques available are

  • using the binomial theorem
  • using the convolution theorem
  • performing long division
  • using the initial-value theorem
  • expanding \(F(z)\) in partial fractions
  • power series expansion (for non-rational z-transforms).

Inverse Unilateral Z-Transform

The inverse Z-transform can be evaluated using Cauchy’s integral, which is taken over a counter-clockwise closed contour \(C\) in the region of convergence of \(Y(z)\). When the ROC is causal, the path \(C\) must encircle all the poles of \(Y(z)\).

$$ y[n] = \frac{1}{2\pi j}\oint_C Y(z)\,z^{n-1}\,\mathrm{d}z $$

Let’s try some simplifications:

  1. When all poles of \(Y(z)\) are inside the unit circle, \(Y(z)\) is stable and \(C\) can be the unit circle. Thus the contour integral simplifies to the inverse discrete-time Fourier transform (DTFT) of the periodic values of the Z-transform around the unit circle. To prove this, we take the unit circle \(|z|=1\), and parameterize contour \(C\) by \(z(\omega)=\mathrm{e}^{j\omega}\), with \(-\pi\leq \omega\leq\pi\), so \(\frac{\text{d}z}{\text{d}\omega}=j\mathrm{e}^{j\omega}\)
    $$ \begin{align} y[n] &= \frac{1}{2\pi j}\oint_C Y(z)\,z^{n-1}\,\mathrm{d}z \nonumber \\ &= \frac{1}{2\pi \bcancel{j}}\int_{-\pi}^{\pi} Y(\mathrm{e}^{j\omega})\,(\mathrm{e}^{j\omega})^{n\cancel{-1}}\bcancel{j}\cancel{{\mathrm{e}^{j\omega}}}\,\,\mathrm{d}\omega \nonumber \\ &= \frac{1}{2\pi}\int_{-\pi}^{\pi} Y(\mathrm{e}^{j\omega })\,\mathrm{e}^{j\omega n}\,\mathrm{d}\omega \nonumber \end{align} $$
  2. If a system is represented by a linear constant-coefficient difference equation (LCCDE), it is said to be rational. The output is of the form (with \(N\gt M\))
    $$ \sum_{k=0}^N a_k y[n-k]=\sum_{k=0}^M b_k x[n-k] \label{eq:rational} $$
    this allows us to find the impulse response \(h[n]\) and frequency response \(H(\mathrm{e}^{j\omega})\) of this LTI system, similarly to the methods used to solve continuous LCCDE problems.

For rational systems captured by equation \(\eqref{eq:rational}\), the output in the \(z\)-domain can be expressed as

$$ Y(z)=\frac{b_0+b_1z^{-1}+b_2z^{-2}+\ldots+b_Mz^{-M}}{a_0+a_1z^{-1}+a_2z^{-2}+\ldots+a_Nz^{-N}} $$

We will examine solution methods for rational systems in the following sections.

Long Division

Long division of the polynomials is a simple, but not so practical, method for obtaining a power series expansion for \(Y(z)\). Using the definition of the Z-transform, the terms of the sequence can then be identified one at a time. The problem with this method is that it is labor intensive and does not produce a closed-form expression for \(y[n]\).

Direct Computation

When \(x[n]=\delta[n]\), \(y[n]=h[n]\). Consider, as an example, the first-order difference equation \(y[n]-a\,y[n-1]=x[n]\). For \(n=0\), we obtain the initial condition:

$$ h[0]-ah[-1]=h[0]=\delta[0]=1 $$

For \(n>0\), we plug the general solution \(h[n]=Az^n\) into the DE and get

$$ Az^n-aAz^{n-1}=\delta[n]=0,\ \ n\gt 0 $$

From which we get \(z=a\) and \(h[n]=Aa^n\). But as \(h[0]=1\), we have \(A=1\) and

$$ h[n]=a^n \gamma[n] $$

The Fourier spectrum of \(h[n]\) is the corresponding frequency response

$$ \begin{align} H(e^{j\omega})&= {\cal F}\big[h[n]\big]=\sum_{n=-\infty}^\infty h[n]e^{-jn\omega}\\ &=\sum_{n=0}^\infty a^n e^{-jn\omega}=\frac{1}{1-ae^{-j\omega}} \end{align} $$

src: http://fourier.eng.hmc.edu/e101/lectures/handout3/node9.html
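A quick numeric check in GNU/Octave, with an assumed value \(a=0.8\):

a = 0.8;
n = 0:9;
h = filter(1, [1 -a], [1 zeros(1, 9)]);   % impulse response of y[n] - a y[n-1] = x[n]
max(abs(h - a.^n))                        % ~0, i.e. h[n] = a^n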

Partial Fraction Expansion

2BD: see “Discrete Transfer Functions”

Eigenequation method ??

Consider a linear time invariant system \(H\) with impulse response \(h\) operating on some space of infinite length continuous time signals. Recall that the output \(H\big(x(t)\big)\) of the system for a given input \(x(t)\) is given by the continuous time convolution of the impulse response with the input

$$ H\big(x(t)\big)=\int_{-\infty}^{\infty}h(\tau)\,x(t−\tau)\,d\tau $$

Consider the input \(x(t)=\mathrm{e}^{st}\) where \(s\in \mathbb{C}\), the output

$$ \begin{align} H\big(\mathrm{e}^{st}\big) &= \int_{-\infty}^{\infty}h(\tau)\,\mathrm{e}^{s(t-\tau)}\,d\tau \nonumber \\ &= \int_{-\infty}^{\infty}h(\tau)\,\mathrm{e}^{st}\mathrm{e}^{-s\tau}\,d\tau \nonumber \\ &= \mathrm{e}^{st}\int_{-\infty}^{\infty}h(\tau)\,\mathrm{e}^{-s\tau}\,d\tau \end{align} $$

Define

$$ \lambda_s=\int_{-\infty}^{\infty}h(\tau)\,\mathrm{e}^{-s\tau}\,d\tau $$

The eigenvalue follows as

$$ H\big(\mathrm{e}^{st}\big)=\lambda_s\mathrm{e}^{st} $$
corresponding with eigenvector \(\mathrm{e}^{st}\).

This makes it particularly easy to calculate the output of a system when an eigenfunction is the input because the output is simply the eigenfunction scaled by the associated eigenvalue.

src: http://pilot.cnxproject.org/content/collection/col10064/latest/module/m34639/latest

Use the eigenequation of the LTI system.

If the input is the complex exponential

$$ x[n]= z^n $$

an eigenfunction of the LTI system, then the output is

$$ y[n]=H(z)\,z^n $$

Substitute \(z=e^{j\omega}\)

$$ y[n]=e^{j\omega n}\,H(e^{j\omega}) $$

Substituting \(x[n]\) and \(y[n]\) into the given DE, we can obtain \(H(e^{j\omega})\).

Fourier transform ??

Take Fourier transform on both sides of the given DE, and use the linearity and time-shifting properties:

$$ {\cal F}[\sum_{k=0}^N a_k y[n-k]]={\cal F}[\sum_{k=0}^M b_k x[n-k]] $$

Due to the linearity property, this becomes

$$ \sum_{k=0}^N a_k {\cal F}[y[n-k]]=\sum_{k=0}^M b_k {\cal F}[x[n-k]] $$

and due to the time shifting property, we get

$$ Y(e^{j\omega})[\sum_{k=0}^N a_k e^{-jk\omega}] = X(e^{j\omega})[\sum_{k=0}^M b_k e^{-jk\omega}] $$

From which we find

$$ H(e^{j\omega})=\frac{Y(e^{j\omega})}{X(e^{j\omega})} = \frac{\sum_{k=0}^M b_k e^{-jk\omega}}{\sum_{k=0}^N a_k e^{-jk\omega}} $$

2BD …

src: http://fourier.eng.hmc.edu/e101/lectures/handout3/node9.html

Similar to Laplace: http://ocw.usu.edu/Electrical_and_Computer_Engineering/Signals_and_Systems/node2.html

… page 111 in http://web.stanford.edu/~kairouzp/teaching/ece310/secure/Chapter5.pdf

Digital filters can be designed using the Z-transform, similar to how analog filters are designed using the Laplace transform. The properties and functions listed in the tables provide a foundation for the design and analysis of such digital filters.

Initial/final value proofs


Proofs for the Z-transform initial and final value theorems used in signal processing, presented in the Z-Transforms article.

Proofs for Initial and Final Values Theorem

Initial Value Theorem

The initial value theorem is similar to that in the Laplace transform. As \(z\to\infty\), all terms except \(f[0]z^0\) approach zero, leaving only \(f[0]\)

Let \(f[n]=0\) for \(n\lt0\)

$$ \begin{align} \lim_{z\to \infty}F(z) &=\lim_{z\to \infty}\sum_{n=-\infty}^{\infty}z^{-n}\,f[n]\nonumber\\ &=\sum_{n=-\infty}^{\infty}f[n]\,\lim_{z\to \infty}z^{-n}\nonumber\\ &=f[0]+\sum_{n=1}^{\infty}f[n]\,\cancelto{0}{\lim_{z\to \infty}z^{-n}}\nonumber\\ \end{align} $$

The initial value follows as

$$ \shaded{ f[0]=\lim_{z\to\infty}F(z) } $$

Final Value Theorem

Consider the difference between a shifted version of a function, \(f[n+1]\), and the function itself, \(f[n]\)

$$ f[n+1]-f[n] \label{eq:final0} $$

1) Apply the Z-transform and take the limit as \(z\to1\) on both sides

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\ztransform{\lfz{\mathcal{Z}}} \begin{align} \lim_{z\to1}\left(f[n+1]-f[n]\right)\ztransform & \lim_{z\to1}\left(\sum_{n=0}^{\infty}z^{-n}\left(f[n+1]-f[n]\right)\right)= \nonumber \\ & \sum_{n=0}^{\infty}\left(f[n+1]-f[n]\right)\,\cancelto{1}{\lim_{z\to1}z^{-n}} \end{align} $$

write out the summation to find common terms

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\ztransform{\lfz{\mathcal{Z}}} \newcommand\ccancel[2][black]{\color{#1}{\cancel{\color{black}{#2}}}} \newcommand\ccancelto[3][black]{\color{#1}{\cancelto{#2}{\color{black}{#3}}}} \begin{split} \lim_{z\to1}\left( f[n+1]-f[n]\right)\ztransform \lim_{n\to\infty}&\left({\ccancel[red]{f[1]}}+{\ccancel[blue]{f[2]}}+{\ccancel[grey]{f[3]}}\ldots \\ +{\ccancel[orange]{f[n-1]}}+{\ccancel[teal]{f[n]}}+{f[n+1]}\\ -f[0]-{\ccancel[red]{f[1]}}-{\ccancel[blue]{f[2]}}-\ldots \\ -{\ccancel[grey]{f[n-2]}} - {\ccancel[orange]{f[n-1]}} - {\ccancel[teal]{f[n]}} \right)\\ = &-f[0] + \lim_{n\to\infty}f[n] \end{split} \label{eq:final1} $$

2) Apply the Z-transform to \(\eqref{eq:final0}\) using the time advance property and take the limit for \(z\to1\)

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\ztransform{\lfz{\mathcal{Z}}} \newcommand\ccancel[2][black]{\color{#1}{\cancel{\color{black}{#2}}}} \newcommand\ccancelto[3][black]{\color{#1}{\cancelto{#2}{\color{black}{#3}}}} \begin{align} \lim_{z\to1}f[n+1]-f[n]\ztransform &\lim_{z\to1}\left((zF(z)-zf[0])-F(z)\right)=\nonumber\\ &\lim_{z\to1}\left((z-1)F(z)-zf[0]\right)=\nonumber\\ &\lim_{z\to1}(z-1)F(z)-\cancelto{1}{\lim_{z\to1}z}f[0]=\nonumber\\ &-f[0]+\lim_{z\to1}(z-1)F(z) \label{eq:final2} \end{align} $$

Equating \(\eqref{eq:final1}\) and \(\eqref{eq:final2}\) we get

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\lfzraised#1{\raise{10mu}{#1}} \def\laplace{\lfz{\mathscr{L}}} \def\fourier{\lfz{\mathcal{F}}} \def\ztransform{\lfz{\mathcal{Z}}} \newcommand\ccancel[2][black]{\color{#1}{\cancel{\color{black}{#2}}}} \newcommand\ccancelto[3][black]{\color{#1}{\cancelto{#2}{\color{black}{#3}}}} -{\ccancel[red]{f[0]}} + \lim_{n\to\infty}f[n] = -{\ccancel[red]{f[0]}}+\lim_{z\to1}(z-1)F(z) $$

The final value theorem, for when \(\lim_{n\to\infty}f[n]\) exists, follows as

$$ \newcommand\ccancel[2][black]{\color{#1}{\cancel{\color{black}{#2}}}} \newcommand\ccancelto[3][black]{\color{#1}{\cancelto{#2}{\color{black}{#3}}}} \shaded{\lim_{n\to\infty}f[n]=\lim_{z\to1}(z-1)F(z)} $$
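
As a quick sanity check (a sketch, not part of the original proofs; it assumes SymPy is available), both theorems can be verified symbolically for the unit-step transform \(F(z)=\frac{z}{z-1}\) derived elsewhere in this article:

```python
# Verify the initial and final value theorems on the unit-step transform.
import sympy as sp

z = sp.symbols('z')
F = z / (z - 1)                        # Z{gamma[n]}, the unit step

f0 = sp.limit(F, z, sp.oo)             # initial value theorem: f[0]
f_inf = sp.limit((z - 1) * F, z, 1)    # final value theorem: lim_{n->oo} f[n]

print(f0, f_inf)                       # both print 1, as expected for the unit step
```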

Suggested next reading is Discrete transfer functions.

Function proofs

Lotfi Zadeh (proof)

Proofs for Z-transform pairs (transforms of common input functions), presented in the Z-Transforms article.\(\)

Proofs for pairs

Impulse

The discrete impulse function \(\delta[n]\) is different from the continuous impulse function. The impulse function is commonly used as a theoretical input signal to study a filter’s behavior.

The definition is

$$ \delta[n] = \begin{cases} 1, & n=0 \\ 0, & n\neq0 \end{cases}\label{eq:impuls_def1} $$

Apply the unilateral Z-transforms definition to equation \(\eqref{eq:impuls_def1}\)

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\ztransform{\lfz{\mathcal{Z}}} \delta[n] \ztransform \Delta(z)=\sum_{n=0}^{\infty}z^{-n}\ \delta[n] $$

Since the impulse is \(0\) everywhere but at \(n=0\), the summation simplifies to

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\ztransform{\lfz{\mathcal{Z}}} \delta[n] \ztransform \Delta(z)= \cancelto{1}{z^{-0}}\ \cancelto{1}{\delta[0]} $$

The unilateral Z-Transform of the Unit Impulse Function follows as

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\ztransform{\lfz{\mathcal{Z}}} \begin{align} \shaded{\delta[n] \ztransform 1\triangleq\Delta(z) }, && \text{all }z,\text{ including }\infty \end{align} \label{eq:impulse} $$

This is very similar to the Laplace transform of the continuous impulse function.

Delayed Impulse

Consider the discrete delayed impulse function \(\delta[n-a]\)

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\lfzraised#1{\raise{10mu}{#1}} \def\laplace{\lfz{\mathscr{L}}} \def\fourier{\lfz{\mathcal{F}}} \def\ztransform{\lfz{\mathcal{Z}}} \newcommand\ccancel[2][black]{\color{#1}{\cancel{\color{black}{#2}}}} \newcommand\ccancelto[3][black]{\color{#1}{\cancelto{#2}{\color{black}{#3}}}} \delta[n-a]=\begin{cases} 1, & n=a \\ 0, & n\neq a \end{cases}\label{eq:delayedimpuls_def1} $$

The proof follows from the sifting property: in the unilateral Z-transform summation \(\sum_{n=0}^{\infty}z^{-n}\,\delta[n-a]\), the only nonzero term is at \(n=a\) (provided \(a\geq0\)), contributing \(z^{-a}\); for \(a\lt0\) the impulse falls outside the summation range and the transform is \(0\).

The unilateral Z-Transform of the Delayed Unit Impulse Function follows as

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\ztransform{\lfz{\mathcal{Z}}} \begin{align} \shaded{\delta[n-a]\,\ztransform\, \shaded{\begin{cases}z^{-a},&a\geq0\\0,&a\lt0\end{cases}}},&&z\neq0 \end{align} \label{eq:delayedimpulse} $$

Unit Step

The discrete unit or Heaviside step function, denoted with \(\gamma[n]\) is defined as

$$ \gamma[n] = \begin{cases} 0, & n\lt 0 \\ 1, & n\geq 0 \end{cases} \label{eq:unitstep_def} $$

The unilateral Z-transforms of \(\eqref{eq:unitstep_def}\) follows

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\ztransform{\lfz{\mathcal{Z}}} \begin{align} \gamma[n] \ztransform \Gamma(z) &= \sum_{n=0}^\infty z^{-n}\,\cancelto{1}{\gamma[n] } =\sum_{n=0}^\infty\,z^{-n} \nonumber \\ &= \sum_{n=0}^\infty\,\underbrace{\left(z^{-1}\right)^n}_{\color{blue}{r^n}} \label{eq:unitstep0} \end{align} $$

Apply the power series

$$ \begin{align} \sum_{n=0}^{\infty}r^n = \frac{1}{1-r},&&|r|\lt1 \nonumber \end{align} \nonumber $$

to \(\eqref{eq:unitstep0}\)

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\ztransform{\lfz{\mathcal{Z}}} \begin{align} \gamma[n] \ztransform \Gamma(z) = \frac{1}{1-z^{-1}},&&|z^{-1}|\lt1 \end{align} $$

The unilateral Z-transform for the step function follows as

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\ztransform{\lfz{\mathcal{Z}}} \begin{align} \shaded{ \gamma[n] \ztransform \frac{z}{z-1}\triangleq\Gamma(z) } , && |z|\gt1 \end{align} \label{eq:step} $$

Scaled

Consider the discrete power function starting at \(n=0\)

$$ f[n] = \begin{cases} 0, & n\lt0 \\ a^n, & n\geq 0 \end{cases} \label{eq:scaled_def} $$

Apply the unilateral Z-transforms and the power series

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\ztransform{\lfz{\mathcal{Z}}} \begin{align} a^n\,\color{grey}{\gamma[n]} \ztransform &\sum_{n=0}^\infty z^{-n}\,a^n\nonumber\\ \ztransform &\sum_{n=0}^\infty (az^{-1})^n\nonumber\\ \ztransform &\frac{1}{1-az^{-1}},&|z|\gt|a| \end{align} $$

The unilateral Z-Transform of the scaled function follows

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\ztransform{\lfz{\mathcal{Z}}} \begin{align} \shaded{ a^n\,\color{grey}{\gamma[n]} \ztransform {\frac{z}{z-a}} } ,&&|z|\gt|a|\label{eq:scaled} \end{align} $$

The Binomial scaled proof comes to this same transform.
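
The pairs above can be spot-checked numerically by truncating the summation definition. This is a sketch (assuming NumPy is available); the evaluation point \(z_0\) and the constant \(a\) are arbitrary choices inside the region of convergence:

```python
# Compare the truncated summation definition against the closed forms
# z/(z-1) (unit step) and z/(z-a) (scaled).
import numpy as np

def z_transform(f, z, N=500):
    """Truncated unilateral Z-transform: sum_{n=0}^{N-1} f[n] z^{-n}."""
    n = np.arange(N)
    return np.sum(f(n) * z ** (-n.astype(float)))

z0 = 1.5 + 0.5j                 # arbitrary point with |z0| > 1 > |a|
a = 0.8

print(z_transform(lambda n: np.ones_like(n, dtype=float), z0), z0 / (z0 - 1))  # unit step
print(z_transform(lambda n: a ** n, z0), z0 / (z0 - a))                        # scaled
```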

Scaled delayed

Recall the delay property, and the scaled pair

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\ztransform{\lfz{\mathcal{Z}}} \begin{align} f[n-b]\,\color{grey}{\gamma[n}-b\color{grey}{]}\, & \ztransform z^{-b}F(z) \nonumber \\ a^n\ \color{grey}{\gamma[n]} & \ztransform \frac{z}{z-a},&|z|\gt|a| \nonumber \end{align} \nonumber $$

The unilateral Z-Transform of the scaled delayed function follows

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\ztransform{\lfz{\mathcal{Z}}} \begin{align} \shaded{ a^{n-1}\gamma[n-1] \ztransform \dfrac{1}{z-a} } ,&&|z|\gt|a| \end{align} $$

\(n\) scaled

Consider the discrete ramp function starting at \(n=0\)

$$ f[n] = \begin{cases} 0, & n\lt0 \\ n\,a^n, & n\geq 0 \end{cases} \label{eq:nscaled_def} $$

Apply the unilateral Z-transforms definition and expand

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\ztransform{\lfz{\mathcal{Z}}} \begin{align} n\,a^n \ztransform F(z)=&\sum_{n=0}^\infty z^{-n}\ n\,a^n\nonumber\\ \ztransform &\left[\cancel{0}+\cancel{1}az^{-1}+2a^2z^{-2}+3a^3z^{-3}+\cdots\right]\nonumber\\ \ztransform &\left[az^{-1}+2a^2z^{-2}+3a^3z^{-3}+\cdots\right]\label{eq:ramp1} \end{align} $$

To collapse the infinite series into a closed form, introduce \(az^{-1}F(z)\)

$$ az^{-1}\,F(z)=\left[a^2z^{-2}+2a^3z^{-3}+3a^4z^{-4}+\cdots\right] \label{eq:ramp2} $$

Subtract \(\eqref{eq:ramp2}\) from \(\eqref{eq:ramp1}\)

$$ \newcommand\ccancel[2][black]{\color{#1}{\cancel{\color{black}{#2}}}} \newcommand\ccancelto[3][black]{\color{#1}{\cancelto{#2}{\color{black}{#3}}}} \begin{align} F(z)-az^{-1}\,F(z) &=\left(az^{-1}+\ccancel[red]{2}a^2z^{-2}+\ccancel[green]{3}a^3z^{-3}+\cdots\right)-\\ &\quad\quad\left(\ccancel[red]{a^2z^{-2}}+\ccancel[green]{2a^3z^{-3}}+\ccancel[orange]{3a^4z^{-4}}+\cdots\right)\Rightarrow \nonumber \\[12mu] (1-az^{-1})\,F(z) &=\left(az^{-1}+a^2z^{-2}+a^3z^{-3}+\cdots\right) \nonumber \\[6mu] &=az^{-1}\left(1+az^{-1}+a^2z^{-2}+\cdots\right) \nonumber \\[6mu] &=az^{-1}\sum_{n=0}^{\infty}\underbrace{\left(az^{-1}\right)^n}_{\color{blue}{r^n}} \label{eq:ramp3} \end{align} $$

Apply the power series

$$ \begin{align} \sum_{n=0}^{\infty}r^n=\frac{1}{1-r},&&|r|\lt1\nonumber \end{align} \nonumber $$

to \(\eqref{eq:ramp3}\) where \(r=az^{-1}\)

$$ \begin{align} (1-az^{-1})\,F(z) &= az^{-1}\,\frac{1}{1-az^{-1}},&|z|\gt|a| \nonumber \\ F(z) &= \frac{az^{-1}}{(1-az^{-1})^2},&|z|\gt|a| \end{align} $$

The unilateral Z-Transform of the \(n\) scaled function follows

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\ztransform{\lfz{\mathcal{Z}}} \begin{align} \shaded{ n\,a^n\ \color{grey}{\gamma[n]} \ztransform \frac{az}{\left(z-a\right)^2} } ,&&|z|\gt|a|\label{eq:timescaled} \end{align} $$

The Binomial scaled proof comes to this same transform.

Ramp

This is a special case for the \(n\) scaled \(z\)-transform, where \(a=1\)

Substitute \(a=1\) in

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\ztransform{\lfz{\mathcal{Z}}} \begin{align} n\,a^n\ \color{grey}{\gamma[n]} \ztransform \frac{az}{\left(z-a\right)^2}, && |z|\gt1 \nonumber \end{align} \nonumber $$

The unilateral Z-Transform of the discrete ramp function follows

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\lfzraised#1{\raise{10mu}{#1}} \def\laplace{\lfz{\mathscr{L}}} \def\fourier{\lfz{\mathcal{F}}} \def\ztransform{\lfz{\mathcal{Z}}} \newcommand\ccancel[2][black]{\color{#1}{\cancel{\color{black}{#2}}}} \newcommand\ccancelto[3][black]{\color{#1}{\cancelto{#2}{\color{black}{#3}}}} \begin{align} \shaded{ n\ \color{grey}{\gamma[n]} \ztransform \frac{z}{\left(z-1\right)^2} } ,&&|z|\gt1 \end{align} \label{eq:ramp} $$

Binomial scaled, \(|z|\gt |a|\)

We will do this proof starting from the \(z\)-domain

$$ F(z) = \frac{z^m}{(z-a)^m}=\frac{1}{(1-az^{-1})^m} $$
where \(m\) is a positive integer, and \(a\) is a constant, possibly complex.

Recall the Negative Binomial Series

$$ \begin{align} (1-x)^{-m}&=\sum_{n=0}^{\infty}{n+m-1 \choose n}\,x^n,&|x|\lt1\nonumber\\ \end{align} \nonumber $$

Work towards the form \((1-x)^{-m}\)

$$ \begin{align} F(z) &= (1-\underbrace{az^{-1}}_{\color{blue}{=x}})^{-m} \end{align} $$

Use the Binomial Series where \(x=az^{-1}\)

$$ \begin{align} F(z)=\left(1-(az^{-1})\right)^{-m}&=\sum_{n=0}^{\infty}{n+m-1 \choose n}\,(az^{-1})^n, \quad |az^{-1}|\lt1\nonumber\\ &=\sum_{n=0}^{\infty}{n+m-1 \choose n}\,a^n\,z^{-n}, \quad |z|\gt|a| \end{align} $$

Apply the Symmetry Rule for Binomial Coefficients \({n \choose k}={n \choose n-k}\)

$$ \begin{align} F(z) &= \sum_{n=0}^{\infty}\underbrace{{n+m-1 \choose m-1}\,a^n}_{\color{blue}{=f[n]}}\,z^{-n},&|z|\gt|a| \end{align} $$

The unilateral Z-Transform follows

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\ztransform{\lfz{\mathcal{Z}}} \begin{align} \shaded{{n+m-1 \choose m-1}\,a^n\,\color{grey}{\gamma[n]} \ztransform \frac{z^m}{(z-a)^m}=\frac{1}{(1-az^{-1})^m}}, \quad |z|\gt |a|\label{eq:binomialscaled} \end{align} $$

Recall the delay property

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\ztransform{\lfz{\mathcal{Z}}} f[n-b]\,\color{grey}{\gamma[n}-b\color{grey}{]}\,\ztransform\,z^{-b}F(z)\nonumber $$

Combining equation \(\eqref{eq:binomialscaled}\) with the delay property

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\ztransform{\lfz{\mathcal{Z}}} \begin{align} {n+m-1 \choose m-1}\,a^{n-m+1}\,\color{grey}{\gamma[n-m+1]} \,\ztransform\, &z^{-(m-1)}\frac{z^m}{(z-a)^m}&|z|\gt |a| \nonumber\\ \end{align} $$

So that in general

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\ztransform{\lfz{\mathcal{Z}}} \begin{align} \shaded{ {n+m-1 \choose m-1}\,a^{n}\,\color{grey}{\gamma[n-m+1]} \,\ztransform\,\frac{a^{m-1}z}{(z-a)^m} }&&|z|\gt |a| \nonumber \\ \end{align} $$

For the case where \(m=1\)

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\ztransform{\lfz{\mathcal{Z}}} \begin{align} {n \choose 0}\,a^n\,\gamma[n] \,\ztransform\, &\frac{z}{z-a}&|z|\gt |a|\nonumber\\ \frac{\ccancel[red]{n!}}{0!\ccancel[red]{(n-0)!}}\,a^n\,\gamma[n] \,\ztransform \nonumber\\ a^n\,\gamma[n] \,\ztransform \end{align} $$

so that

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\ztransform{\lfz{\mathcal{Z}}} \begin{align} \shaded{ a^n\,\gamma[n] \,\ztransform \frac{z}{z-a} } &&|z|\gt |a| \end{align} $$

For the case where \(m=2\)

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\ztransform{\lfz{\mathcal{Z}}} \begin{align} {n \choose 1}\,a^{n-1}\,\gamma[n-1] \,\ztransform\, &\frac{z}{(z-a)^2}&|z|\gt |a|\nonumber\\ \frac{\ccancelto[red]{n}{n!}}{1!\ccancel[red]{(n-1)!}}\,a^{n-1}\,\gamma[n] \,\ztransform\nonumber\\ n\,a^{n-1}\,\gamma[n] \,\ztransform \end{align} $$

so that

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\ztransform{\lfz{\mathcal{Z}}} \begin{align} \shaded{n\,a^n\,\gamma[n] \,\ztransform \frac{az}{(z-a)^2}}&&|z|\gt |a| \end{align} $$

For the case where \(m=3\)

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\ztransform{\lfz{\mathcal{Z}}} \begin{align} {n \choose 2}\,a^{n-2}\,\gamma[n-2] \,\ztransform\, &\frac{z}{(z-a)^3}&|z|\gt |a|\nonumber\\ \frac{\ccancelto[red]{n(n-1)}{n!}}{2!\ccancel[red]{(n-2)!}}\,a^{n-2}\,\gamma[n] \,\ztransform\nonumber\\[6mu] \frac{n(n-1)}{2}\,a^{n-2}\,\gamma[n] \,\ztransform \end{align} $$

so that

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\ztransform{\lfz{\mathcal{Z}}} \begin{align} \shaded{\tfrac{1}{2}{n(n-1)}\,a^n\,\gamma[n] \,\ztransform \frac{a^2z}{(z-a)^3}}&&|z|\gt |a| \end{align} $$

Binomial scaled, \(|z|\lt |a|\)

Similar to the previous proof, we will do this starting from the \(z\)-domain

$$ F(z) = \frac{z^m}{(z-a)^m}=\frac{1}{(1-az^{-1})^m} $$
where \(m\) is a positive integer, and \(a\) is a constant, possibly complex.

Recall the Binomial Series

$$ \begin{align} (1+x)^r&=\sum_{k=0}^{\infty}{r \choose k}\,x^k,&|x|\lt1\nonumber\\ \end{align}\nonumber $$

Work towards the form \((1-x)^{-m}\)

$$ \begin{align} F(z) &= (1\underbrace{-az^{-1}}_{\color{blue}{=x}})^{-m} \end{align} $$

Use the Binomial Series where \(x=-az^{-1}\)

$$ \begin{align} F(z) = (1-az^{-1})^{-m}&=\sum_{n=0}^{\infty}{-m \choose n}\,(-az^{-1})^n \nonumber \\ &=\sum_{n=0}^{\infty}{-m \choose n}\,(-1)^n\,a^nz^{-n} \nonumber \end{align} $$

Apply the Moving Top Index to Bottom in Binomial Coefficient \({n \choose m}=(-1)^{n-m}\,{-(m+1) \choose n-m}\)

$$ \begin{align} F(z) = (1-az^{-1})^{-m} &= \sum_{n=0}^{\infty}{-(n+1) \choose -m-n}\,(-1)^{-m-n}\,(-1)^n\,a^nz^{-n} \nonumber \\ &= \sum_{n=0}^{\infty}{-n-1 \choose -m-n}\,(-1)^{-m}\,a^nz^{-n} \nonumber \end{align} $$

Apply the Symmetry Rule for Binomial Coefficients \({n \choose k}={n \choose n-k}\)

$$ \begin{align} F(z)=(1-az^{-1})^{-m} &= \sum_{n=0}^{\infty}{-n-1 \choose \cancel{-n}-1+m\cancel{+n}}\,(-1)^{-m}\,a^nz^{-n} \nonumber \\ &=\sum_{n=0}^{\infty}\,\underbrace{(-1)^{-m}\,{-n-1 \choose m-1}\,a^n}_{\color{blue}{=f[n]}}\,z^{-n} \nonumber \end{align} $$

The unilateral Z-Transform follows

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\ztransform{\lfz{\mathcal{Z}}} \begin{align} \shaded{ (-1)^{-m}\,{-n-1 \choose m-1}\,a^n\,\color{grey}{\gamma[n]} \ztransform \frac{z^m}{(z-a)^m}=\frac{1}{(1-az^{-1})^m} } && |z|\lt |a| \end{align} $$

Exponential

Consider the discrete exponential function starting at \(n=0\)

$$ f[n]=\begin{cases} 0, & n\lt 0 \\ \mathrm{e}^{-anT}, & n\geq 0 \end{cases} \label{eq:exponential_def} $$

Apply the unilateral Z-transforms definition

$$ \begin{align} \mathrm{e}^{-anT}\ \gamma[n] \ztransform & \sum_{n=0}^\infty z^{-n}\ \mathrm{e}^{-anT} = \nonumber \\ &\sum_{n=0}^\infty {\underbrace{\left(z^{-1}\ \mathrm{e}^{-aT}\right)}_{\color{blue}{r}}}^n \label{eq:exponential0} \end{align} $$

Apply the power series

$$ \begin{align} \sum_{n=0}^{\infty}r^n=\frac{1}{1-r},&&|r|\lt1 \nonumber \end{align} \nonumber $$

to \(\eqref{eq:exponential0}\), where \(r=z^{-1}\ \mathrm{e}^{-aT}\)

$$ \begin{align} \mathrm{e}^{-anT}\ \gamma[n] \ztransform &\frac{1}{1-z^{-1}\ \mathrm{e}^{-aT}}\ ,&|\mathrm{e}^{-aT}|\lt |z| \end{align} $$

The unilateral Z-Transform of the exponential function follows

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\ztransform{\lfz{\mathcal{Z}}} \begin{align} \shaded{ \mathrm{e}^{-anT}\ \gamma[n] \ztransform \frac{z}{z-\mathrm{e}^{-aT}} },\ \ &&{|\mathrm{e}^{-aT}|\lt |z|} \end{align} \label{eq:exponential} $$

Sine

The Z-transform of the sine is derived like that of the cosine function below, except that it uses the Euler identity for sine

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\ztransform{\lfz{\mathcal{Z}}} \begin{align} \shaded{ \sin(\omega nT)\,\color{grey}{\gamma[n]}\ztransform\frac{z\sin(\omega T)}{z^2-2z\cos(\omega T)+1} } ,&& |z|\gt1\\ \end{align} \label{eq:sine} $$

Cosine

A common notation is to use \(\Omega\) to represent frequency in the \(z\)-domain, and \(\omega\) for frequency in the \(s\)-domain. Here we use \(\omega\) to represent both types of frequency. Another notation that you may encounter is \(\omega_0\) to represent \(\omega T\).

Consider the cosine function starting at \(n=0\)

$$ f[n] = \cos(\omega nT)\,\gamma[n] \label{eq:cos_def} $$

The unilateral Z-transforms of the cosine is

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\ztransform{\lfz{\mathcal{Z}}} \cos(\omega nT)\ \gamma[n] \ztransform \sum_{n=0}^\infty z^{-n}\ \cos(\omega nT) $$

Recall the Euler identity for cosine

$$ \cos\varphi = \frac{\mathrm{e}^{j\varphi}+\mathrm{e}^{-j\varphi}}{2} \nonumber $$

Apply the identity for cosine, so that the summation splits into two geometric series

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\ztransform{\lfz{\mathcal{Z}}} \begin{align} \cos(\omega nT)\ \gamma[n] \ztransform & \sum_{n=0}^\infty z^{-n}\ \frac{\mathrm{e}^{j\omega nT}+\mathrm{e}^{-j\omega nT}}{2} \nonumber \\ =\, & \frac{1}{2}\sum_{n=0}^\infty \left(z^{-1}\mathrm{e}^{j\omega T}\right)^n + \frac{1}{2}\sum_{n=0}^\infty \left(z^{-1}\mathrm{e}^{-j\omega T}\right)^n \end{align} $$

Recall the power series

$$ \begin{align} \sum_{n=0}^{\infty}r^n=\frac{1}{1-r},&&|r|\lt1\nonumber \end{align}\nonumber $$

Apply the geometric power series with \(r=z^{-1}\mathrm{e}^{\pm j\omega T}\), which converges for \(|z^{-1}\mathrm{e}^{\pm j\omega T}|\lt1\), i.e. \(|z|\gt1\)

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\ztransform{\lfz{\mathcal{Z}}} \begin{align} \cos(\omega nT)\ \gamma[n] \ztransform &\frac{1}{2}\left(\frac{1}{1-z^{-1}\ e^{j\omega T}}+\frac{1}{1-z^{-1}\ e^{-j\omega T}}\right),& |z|\gt1 \nonumber \\ =\, &\frac{1}{2}\left(\frac{z}{z-e^{j\omega T}}+\frac{z}{z-e^{-j\omega T}}\right),& |z|\gt1 \end{align} $$

Bring over a common denominator and regroup

$$ \begin{align} F(z) &= \frac{1}{2}\left(\frac{z(z-e^{-j\omega T})}{(z-e^{j\omega T})(z-e^{-j\omega T})}+\frac{z(z-e^{j\omega T})}{(z-e^{j\omega T})(z-e^{-j\omega T})}\right),& |z|\gt 1 \nonumber \\ &= \frac{1}{2}\left(\frac{z(z-e^{-j\omega T})+z(z-e^{j\omega T})}{z^2-z\left(e^{-j\omega T}+e^{j\omega T}\right)+e^{j\omega T}e^{-j\omega T}}\right),& |z|\gt 1 \nonumber \\ &= \frac{1}{2}\left(\frac{z^2-ze^{-j\omega T}+z^2-ze^{j\omega T}}{z^2-z\left(e^{-j\omega T}+e^{j\omega T}\right)+e^0}\right),& |z|\gt 1 \nonumber \\ &= \frac{z^2-z\frac{1}{2}\left(e^{-j\omega T}+e^{j\omega T}\right)}{z^2-z\left(e^{-j\omega T}+e^{j\omega T}\right)+1},& |z|\gt 1 \end{align} $$

Once more, apply the Euler identity for cosine

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\ztransform{\lfz{\mathcal{Z}}} \begin{align} \shaded{ \cos(\omega nT)\,\color{grey}{\gamma[n]}\ztransform\frac{z^2-z\cos(\omega T)}{z^2-2z\cos(\omega T)+1} } ,&& |z|\gt1 \end{align} \label{eq:cosine} $$

Decaying Sine

Consider a decaying sine function for \(n\geq0\)

$$ a^n \sin(\omega n)\,\color{grey}{\gamma[n]} $$

The proof follows from the modulation property, under which \(a^n\,f[n]\) transforms to \(F(a^{-1}z)\): substituting \(z/a\) for \(z\) in the sine pair \(\eqref{eq:sine}\) and clearing fractions gives the result below.

The unilateral Z-Transform of the decaying sine follows as

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\ztransform{\lfz{\mathcal{Z}}} \begin{align} \shaded{ a^n \sin(\omega n)\,\color{grey}{\gamma[n]} \ztransform \dfrac{az\sin(\omega)}{z^2-2az\cos(\omega)+a^2} } ,&&|z|\gt|a| \end{align} $$

Decaying Cosine

Consider a decaying cosine function for \(n\geq0\)

$$ a^n \cos(\omega n)\,\color{grey}{\gamma[n]} $$

Like the decaying sine, the proof follows from the modulation property: substituting \(z/a\) for \(z\) in the cosine pair \(\eqref{eq:cosine}\) and clearing fractions gives the result below.

The unilateral Z-Transform of the decaying cosine follows as

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\ztransform{\lfz{\mathcal{Z}}} \begin{align} \shaded{ a^n \cos(\omega n)\,\color{grey}{\gamma[n]} \ztransform \dfrac{z^2-az\cos(\omega)}{z^2-2az\cos(\omega)+a^2} } ,&&|z|\gt|a| \end{align} $$
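
As a numeric spot-check of this pair (a sketch; the values of \(a\), \(\omega\) and \(z_0\) below are arbitrary assumptions, chosen with \(|z_0|\gt|a|\) so the series converges):

```python
# Compare the truncated summation of a^n cos(w n) z^{-n} with the closed form.
import numpy as np

a, w, z0, N = 0.7, 0.9, 1.3 + 0.4j, 500
n = np.arange(N)

series = np.sum(a ** n * np.cos(w * n) * z0 ** (-n.astype(float)))
closed = (z0 ** 2 - a * z0 * np.cos(w)) / (z0 ** 2 - 2 * a * z0 * np.cos(w) + a ** 2)

print(series, closed)   # the two values agree to within the truncation error
```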

Proofs continue at Z-transform initial and final value proofs. Or, if you want to skip ahead, I suggest Discrete Transfer Functions.

Property proofs

Lotfi Zadeh (proof)

Proofs for Z-transform properties, presented in the Z-Transforms article.\(\)

Proofs for Properties

Linearity

Consider the time-domain function

$$ a\,f[n]+b\,g[n] $$

This function transforms to the \(z\)-domain as

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\ztransform{\lfz{\mathcal{Z}}} \begin{align} a\,f[n]+b\,g[n] \ztransform &\sum_{n=0}^{\infty}\left(a\,f[n]+ b\,g[n] \right) z^{-n}= \nonumber\\ &a\underbrace{\sum_{n=0}^{\infty} f[n]\,z^{-n}}_{F(z)} + b\underbrace{\sum_{n=0}^{\infty} g[n]\,z^{-n}}_{G(z)} \end{align} $$

From which follows

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\ztransform{\lfz{\mathcal{Z}}} \shaded{ a\,f[n]+b\,g[n] \ztransform a\,F(z) + b\,G(z) } \label{eq:linearity} $$

Time Delay

Consider a sequence truncated at n=0, and delayed by \(a\) samples, where \(a\gt0\)

$$ f[n-a]\,\color{grey}{\gamma[n}-a\color{grey}{]} \label{eq:delay0} $$

Where the delayed step function \(\gamma[n-a]\) is defined as

$$ \gamma[n-a] = \begin{cases} 0 & n\lt a \\ 1 & n\geq a \\ \end{cases} $$

The unilateral Z-transforms of \(\eqref{eq:delay0}\) is

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\ztransform{\lfz{\mathcal{Z}}} f[n-a]\,\gamma[n-a]\, \ztransform \sum_{n=0}^\infty z^{-n}\ f[n-a]\,\gamma[n-a] \label{eq:timedelay1} $$

Apply \(\gamma[n-a]=0\) for all \(n\lt a\) by changing the start value of the summation

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\ztransform{\lfz{\mathcal{Z}}} f[n-a]\,\gamma[n-a]\, \ztransform \sum_{\color{blue}{n=\mathbf{a}}}^{\color{blue}{\infty}} z^{-n}\ f[n-a]\label{eq:timedelay1b} $$

Make the summation start at \(0\) by shifting the summation index: substitute \(m=n-a\)

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\ztransform{\lfz{\mathcal{Z}}} \begin{align} f[n-a]\,\gamma[n-a]\, \ztransform &\sum_{\color{blue}{m=0}}^\infty z^{-(\color{blue}{m+a})}\ f[\color{blue}{m}]\nonumber\\ \ztransform &z^{-a}\underbrace{\sum_{m=0}^\infty z^{-m}\ f[m]}_{\color{blue}{=F(z)}} \end{align} $$

The unilateral Z-transform of the positive delay property follows as

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\ztransform{\lfz{\mathcal{Z}}} \shaded{ f[n-a]\,\color{grey}{\gamma[n}-a\color{grey}{]}\, \ztransform z^{-a}F(z) } \label{eq:timedelay} $$

Time Delay #2

Consider the sequence from the previous Time Delay, where the values \(f[-a]\ldots f[-1]\) are also known. Once more, it is delayed by \(a\) samples, where \(a\gt0\)

$$ f[n-a]\,\color{grey}{\gamma[n]} $$

The unilateral Z-transforms is

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\ztransform{\lfz{\mathcal{Z}}} f[n-a]\,\gamma[n]\, \ztransform \sum_{n=0}^\infty z^{-n}\ f[n-a]\,\gamma[n] $$

Apply the definition of the unit step function \(\eqref{eq:unitstep_def}\): \(\gamma[n]=0,\ \ \forall_{n\lt 0}\)

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\ztransform{\lfz{\mathcal{Z}}} f[n-a]\,\gamma[n]\, \ztransform \sum_{n=0}^\infty z^{-n}\ f[n-a] $$

Substitute \(m=n-a\) to remove the \(-a\) offset from \(f[n-a]\)

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\ztransform{\lfz{\mathcal{Z}}} \begin{align} f[n-a]\,\gamma[n]\, \ztransform & \sum_{m=-a}^\infty z^{-(m+a)}\ f[m] \nonumber \\ \ztransform & z^{-a}\,\sum_{m=-a}^\infty z^{-m}\ f[m] \end{align} $$

To make the summation start at \(0\), subtract the first \(a\) terms, and then add these points back again.

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\ztransform{\lfz{\mathcal{Z}}} \begin{align} f[n-a]\,\gamma[n]\, \ztransform &\ z^{-a}\sum_{m=-a}^\infty z^{-m}\ f[m]\ \overbrace{\color{red}{-}z^{-a}\sum_{m=-a}^{-1} z^{-m}\ f[m]\color{blue}{+}z^{-a}\sum_{m=-a}^{-1} z^{-m}\ f[m]}^{\text{=0}}= \nonumber \\[10mu] &\ z^{-a}\,\underbrace{\sum_{m=0}^\infty z^{-m}\ f[m]}_{\color{blue}{=F(z)}}+z^{-a}\sum_{m=1}^{a} z^{m}\ f[-m] \end{align} $$

The unilateral Z-transform of this positive time delay follows as

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\ztransform{\lfz{\mathcal{Z}}} \shaded{ f[n-a]\,\color{grey}{\gamma[n]}\, \ztransform z^{-a}\left(F(z)+\sum_{m=1}^{a} z^m\ f[-m]\right) } \label{eq:timedelay2} $$

Time Advance

Consider a sequence advanced by \(a\) samples, where \(a\gt0\), and then truncated at time \(0\)

$$ f[n+a]\,\color{grey}{\gamma[n]}\label{eq:timeadv_def} $$

The unilateral Z-transforms of \(\eqref{eq:timeadv_def}\)

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\ztransform{\lfz{\mathcal{Z}}} f[n+a]\,\gamma[n]\, \ztransform \sum_{n=0}^\infty z^{-n}\ f[n+a]\,\gamma[n] $$

Apply the definition of the unit step function \(\eqref{eq:unitstep_def}\): \(\gamma[n]=0,\ \ \forall_{n\lt 0}\)

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\ztransform{\lfz{\mathcal{Z}}} f[n+a]\,\gamma[n]\, \ztransform \sum_{n=0}^\infty z^{-n}\ f[n+a] $$

Substitute \(m=n+a\) to remove the \(+a\) offset from \(f[n+a]\)

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\ztransform{\lfz{\mathcal{Z}}} \begin{align} f[n+a]\,\gamma[n]\, \ztransform & \sum_{\color{blue}{m=a}}^\infty z^{-(\color{blue}{m-a})}\ f[\color{blue}{m}] \nonumber \\ \ztransform & z^a\,\sum_{m=a}^\infty z^{-m}\ f[m] \end{align} $$

To make the summation start at \(0\), add the first \(a\) terms, and then subtract these points again.

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\ztransform{\lfz{\mathcal{Z}}} \begin{align} f[n+a]\,\gamma[n]\, \ztransform & \ z^a\sum_{m=a}^\infty z^{-m}\ f[m]\ \overbrace{\color{blue}{+}\,z^a\sum_{m=0}^{a-1} z^{-m}\ f[m]\,\color{red}{-}\,z^a\sum_{m=0}^{a-1} z^{-m}\ f[m]}^{\text{=0}} \nonumber \\ \ztransform &\ z^a\Big(\underbrace{\sum_{m=0}^\infty z^{-m}\ f[m]}_{\color{blue}{=F(z)}}-\sum_{m=0}^{a-1} z^{-m}\ f[m]\Big) \end{align} $$

The unilateral Z-transform of the positive time advance follows as

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\ztransform{\lfz{\mathcal{Z}}} \shaded{ f[n+a]\,\color{grey}{\gamma[n]}\, \ztransform z^a\Big( F(z)-\sum_{m=0}^{a-1} z^{-m}\ f[m]\Big) } \label{eq:timeadvance} $$

Time Multiply

Consider a sequence multiplied by the sample number \(n\), and truncated at \(n=0\)

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\ztransform{\lfz{\mathcal{Z}}} n\,f[n]\,\gamma[n] $$

The proof follows from differentiating the Z-transform definition term by term: \(\frac{\text{d}F(z)}{\text{d}z}=\sum_{n=0}^{\infty}(-n)\,f[n]\,z^{-n-1}\), so that \(-z\frac{\text{d}F(z)}{\text{d}z}=\sum_{n=0}^{\infty}n\,f[n]\,z^{-n}\), which is the Z-transform of \(n\,f[n]\). See also the “n scaled” pair.

The unilateral Z-transform

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\ztransform{\lfz{\mathcal{Z}}} \shaded{ n\,f[n]\,\color{grey}{\gamma[n]} \ztransform -z\frac{\text{d}F(z)}{\text{d}z} } $$
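
The property is easy to check symbolically for a concrete pair (a sketch, assuming SymPy is available, using the scaled pair \(a^n\) whose transform \(\frac{z}{z-a}\) was derived above):

```python
# Check the time-multiply property for f[n] = a^n: -z dF/dz should equal the
# "n scaled" pair az/(z-a)^2.
import sympy as sp

z, a = sp.symbols('z a')
F = z / (z - a)                          # Z{a^n}

lhs = sp.simplify(-z * sp.diff(F, z))    # -z dF(z)/dz
rhs = a * z / (z - a) ** 2               # Z{n a^n} from the pairs table

print(sp.simplify(lhs - rhs))            # prints 0
```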

Modulation

Consider a sequence multiplied with complex scalar \(a^n\), and truncated at \(n=0\)

$$ a^n\,f[n]\,\color{grey}{\gamma[n]} $$

Take the unilateral Z-transforms

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\ztransform{\lfz{\mathcal{Z}}} \begin{align} a^n\,f[n]\,\gamma[n] \ztransform &\sum_{n=0}^{\infty}z^{-n}a^nf[n]\nonumber\\ \ztransform &\sum_{n=0}^{\infty}f[n]\left(\underbrace{a^{-1}z}_{\color{blue}{=r}}\right)^{-n}\label{eq:scaling_def} \end{align} $$

Recall the power series

$$ \begin{align} \sum_{n=0}^{\infty}r^n = \frac{1}{1-r},&&|r|\lt1\nonumber \end{align} \nonumber $$

Apply the power series to equation \(\eqref{eq:scaling_def}\)

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\ztransform{\lfz{\mathcal{Z}}} \begin{align} a^n\,f[n]\,\gamma[n] \ztransform & \sum_{n=0}^{\infty}\,f[n]\,\left(za^{-1}\right)^{-n},&|za^{-1}|\gt 1 \end{align} $$

The unilateral Z-transform of the modulation follows as

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\lfzraised#1{\raise{10mu}{#1}} \def\laplace{\lfz{\mathscr{L}}} \def\fourier{\lfz{\mathcal{F}}} \def\ztransform{\lfz{\mathcal{Z}}} \newcommand\ccancel[2][black]{\color{#1}{\cancel{\color{black}{#2}}}} \newcommand\ccancelto[3][black]{\color{#1}{\cancelto{#2}{\color{black}{#3}}}} \begin{align} \shaded{ a^n\,f[n]\,\color{grey}{\gamma[n]} \ztransform F\left(a^{-1}z\right) } \end{align} $$

For example, modulating the unit step, whose transform is \(\Gamma(z)=\frac{z}{z-1}\), gives \(\Gamma\left(a^{-1}z\right)=\frac{z}{z-a}\): the zero stays at \(z=0\) and the pole moves from \(z=1\) to \(z=a\).

Note that the modulation scales the region of convergence and all pole and zero locations by a factor of \(a\).

Convolution

Convolution is used to calculate an output signal when the input signal and transfer function are known in the time domain. Convolution is related to autocorrelation that we used in Arduino Pitch Detector.

Consider a convolution of two sequences truncated at the origin (\(n=0\))

$$ \begin{align} (f\ast g)[n]\,\color{grey}{\gamma[n]} &=\sum_{m=-\infty}^{\infty}f[m]\,g[n-m]\,\gamma[n]\nonumber\\ &=\ \sum_{m=0}^{\infty}\ f[m]\,g[n-m]\\ \end{align} $$

Apply the unilateral Z-transforms definition

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\ztransform{\lfz{\mathcal{Z}}} \begin{align} (f\ast g)[n]\,\gamma[n] \ztransform &\color{purple}{\sum_{n=0}^{\infty}}\left(z^{-n}\color{blue}{\sum_{m=0}^{\infty}}f[m]\,g[n-m]\right)&\text{reverse } \tiny\sum\nonumber\\[8mu] \ztransform &\color{blue}{\sum_{m=0}^{\infty}}\left(\color{purple}{\sum_{n=0}^{\infty}}z^{-n}f[m]\,g[n-m]\right)&f[m]\text{ indep.}\nonumber\\[8mu] \ztransform &\sum_{m=0}^{\infty}\left(f[m]\,\underbrace{\sum_{n=0}^{\infty}z^{-n}\,g[n-m]}_{\color{green}{z^{-m}G(z)}}\right) \label{eq:convolution0} \end{align} $$

Recall the equations \((\ref{eq:timedelay1b}, \ref{eq:timedelay})\) from the Delay Property

$$ \sum_{n=0}^\infty z^{-n}\ g[n-m]=\color{green}{z^{-m}G(z)} \label{eq:convolution1} $$

Apply \(\eqref{eq:convolution1}\) to \(\eqref{eq:convolution0}\)

$$ \begin{align} (f\ast g)[n]\,\gamma[n] \ztransform & \sum_{m=0}^{\infty}f[m]\ \color{green}{z^{-m}G(z)}&\overset{G(z)\text{ indep.}}{\Rightarrow} \nonumber \\ \ztransform & \ G(z)\underbrace{\sum_{m=0}^{\infty}f[m]\ z^{-m}}_{F(z)} \end{align} $$

The unilateral Z-transform of a convolution in the time-domain simplifies to a multiplication in the \(z\)-domain.

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\ztransform{\lfz{\mathcal{Z}}} \shaded{ (f\ast g)[n]\,\color{grey}{\gamma[n]} \ztransform F(z)\,G(z) } \label{eq:convolution} $$
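
A small numeric check of this property (a sketch; the two finite sequences and the evaluation point \(z_0\) are arbitrary assumptions):

```python
# For finite sequences the transforms are exact polynomials in z^{-1}, so the
# transform of the convolution must equal the product of the transforms.
import numpy as np

f = np.array([1.0, 0.5, 0.25, 0.125])       # f[n], zero afterwards
g = np.array([2.0, -1.0, 0.5])              # g[n], zero afterwards
fg = np.convolve(f, g)                      # (f*g)[n]

def Z(x, z):
    n = np.arange(len(x))
    return np.sum(x * z ** (-n.astype(float)))

z0 = 1.2 + 0.3j
print(Z(fg, z0), Z(f, z0) * Z(g, z0))       # the two values agree
```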

Conjugation

Consider conjugation

$$ f^\star[n] $$

Take the unilateral Z-transforms

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\ztransform{\lfz{\mathcal{Z}}} \begin{align} f^\star[n] \ztransform & \sum_{n=0}^{\infty}z^{-n}f^*[n] \nonumber \\ f^\star[n] \ztransform & \sum_{n=0}^{\infty}\left(\left(z^\star\right)^{-n} f[n]\right)^\star \nonumber \\ f^\star[n] \ztransform &\left(\sum_{n=0}^{\infty}\left(z^\star\right)^{-n} f[n]\right)^\star \end{align} $$

The unilateral Z-transform follows as

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\ztransform{\lfz{\mathcal{Z}}} \shaded{ f^\star[n] \ztransform F^{\star}(z^{\star}) } $$

First Difference

Differencing in the discrete time domain is analogous to differentiation in the continuous time domain.

Consider the difference, and apply the linearity and time delay properties (assuming \(f[-1]=0\))

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\ztransform{\lfz{\mathcal{Z}}} f[n]-f[n-1]\ztransform F(z)-z^{-1}F(z) $$

The unilateral \(z\)-transform of the first difference follows as

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\ztransform{\lfz{\mathcal{Z}}} \shaded{ f[n]-f[n-1]\ztransform \left(1-z^{-1}\right)F(z) } $$

Accumulation

Accumulation in the discrete time domain is analogous to integration in the continuous time domain.

Consider the accumulation

$$ \sum_{k=-\infty}^{n}f[k]\label{eq:accum_def} $$

Take the unilateral Z-transforms of equation \(\eqref{eq:accum_def}\)

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\ztransform{\lfz{\mathcal{Z}}} \begin{align} \sum_{k=-\infty}^{n}f[k] \ztransform &\sum_{n=0}^{\infty}\left(z^{-n}\sum_{k=-\infty}^{n}f[k]\right)\nonumber\\ \ztransform &\sum_{n=0}^{\infty}z^{-n}\left(f[-\infty]+\cdots+f[n-2]+f[n-1]+f[n]\right)\nonumber\\ \ztransform &\sum_{n=0}^{\infty}\left(z^{-n}f[n]+z^{-n}f[n-1]+z^{-n}f[n-2]+\cdots+z^{-n}f[-\infty]\right)\label{eq:accum1} \end{align} $$

Take the delay property from equation \(\eqref{eq:timedelay}\)

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\ztransform{\lfz{\mathcal{Z}}} f[n-a]\,\color{grey}{\gamma[n}-a\color{grey}{]}\, \ztransform \,z^{-a}F(z) \nonumber $$

Apply the delay property to \(\eqref{eq:accum1}\)

$$ \begin{align} \sum_{k=-\infty}^{n}f[k] \ztransform &\sum_{n=0}^{\infty}\left(z^{-n}f[n]+z^{-n}f[n-1]+z^{-n}f[n-2]+\ldots\right) \\[6mu] \ztransform & \left(F(z)+z^{-1}F(z)+z^{-2}F(z)+\ldots\right) \nonumber \\[4mu] \ztransform & F(z)\left(1+z^{-1}+z^{-2}+\ldots\right)=F(z)\sum_{k=0}^{\infty}z^{-k} \end{align} $$

The unilateral Z-transform of accumulation follows from applying the power series

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\ztransform{\lfz{\mathcal{Z}}} \shaded{ \sum_{i=0}^{n}f[i] \ztransform F(z)\,\frac{z}{z-1} } $$
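
A numeric spot-check of the accumulation property (a sketch; \(f[n]=a^n\) and the evaluation point \(z_0\) are assumptions, with \(|z_0|\gt1\) so the truncated sums converge):

```python
# The transform of the running sum of f[n] = a^n should equal F(z) z/(z-1).
import numpy as np

a, z0, N = 0.6, 1.4 + 0.2j, 200
n = np.arange(N)
f = a ** n
acc = np.cumsum(f)                          # sum_{k=0}^{n} f[k]

zn = z0 ** (-n.astype(float))
lhs = np.sum(acc * zn)                      # Z{ running sum }
rhs = np.sum(f * zn) * z0 / (z0 - 1)        # F(z0) z0/(z0 - 1)

print(lhs, rhs)                             # agree to within the truncation error
```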

Double poles

We will take a different approach for this proof.

Recall the Quotient Rule from calculus

$$ \frac{\text{d}}{\text{d}x}\left(\frac{u}{v}\right)=\frac{v\frac{\text{d}u}{\text{d}x}-u\frac{\text{d}v}{\text{d}x}}{v^2}\nonumber $$

Take the first derivative in \(z^{-1}\) of \(\frac{1}{{1-pz^{-1}}}\)

$$ \frac{\text{d}}{\text{d}z^{-1}}\left(\frac{1}{{1-pz^{-1}}}\right)=\frac{(1-pz^{-1})\cdot0-1\cdot(-p)}{(1-pz^{-1})^2}=\frac{p}{(1-pz^{-1})^2} $$

Apply the power series and differentiate

$$ \begin{align} \frac{1}{(1-pz^{-1})^2} &=\frac{1}{p}\ \frac{\text{d}}{\text{d}z^{-1}}\left(\frac{1}{{1-pz^{-1}}}\right), \quad \text{power series} \nonumber \\ &=\frac{1}{p}\ \frac{\text{d}}{\text{d}z^{-1}}\left(\sum_{n=0}^{\infty}\left(pz^{-1}\right)^n\right) \nonumber \\ &=\frac{1}{p}\ \frac{\text{d}}{\text{d}z^{-1}}\left(1+pz^{-1}+p^2z^{-2}+p^3z^{-3}+\cdots\right),\quad \tfrac{\mathrm{d}}{\mathrm{d}z^{-1}} \nonumber \\ &=\frac{1}{p}\ \left(\cancel{0}+p+2p^2z^{-1}+3p^3z^{-2}+\cdots\right)\nonumber\\ &=1+2p^1z^{-1}+3p^2z^{-2}+\cdots \end{align} $$

We recognize this as the Z-transform of \((n+1)p^n\)

$$ \begin{align} \frac{1}{(1-pz^{-1})^2} &=\sum_{n=0}^{\infty}\underbrace{(n+1)p^n}_{\color{blue}{=f[n]}}\,z^{-n}\\ \end{align} $$

The unilateral Z-transform of \((n+1)p^n\) follows as

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\ztransform{\lfz{\mathcal{Z}}} \shaded{ (n+1)p^n\,\color{grey}{\gamma[n]} \ztransform \frac{1}{(1-pz^{-1})^2} } $$

Proofs continue at Z-transform functions proofs. Or, if you want to skip ahead, I suggest Discrete Transfer Functions.

Overview

Lotfi Zadeh (colorized)

Overview of the unilateral Z-transform properties, pairs and initial/final value theorems. Includes links to the proofs.\(\)

The tables below introduce commonly used properties, common input functions and initial/final value theorems that I collected over time.

The time-domain function is usually given in terms of a discrete index \(n\), rather than time. Since \(t=nT\), we may replace \(f[n]\) with \(f(nT)\) where \(T\) is the sampling period.

Unilateral Z-Transform properties

Unilateral Z-Transform properties
Time domain \(z\)-domain
Z-transform \(f[n]\) \( \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\lfzraised#1{\raise{10mu}{#1}} \def\ztransform{\lfz{\mathcal{Z}}} \lfzraised\ztransform\) \(F(z)=\sum_{n=0}^{\infty}z^{-n}f[n]\) proof
Linearity \(a\,f[n]+b\,g[n]\) \(\lfzraised\ztransform\) \(a\,F(z)+b\,G(z)\) proof
Time delay \(f[n-a]\,\color{grey}{\gamma[n}-a\color{grey}{]}\) \(\lfzraised\ztransform\) \(z^{-a}F(z)\) proof
Time delay #2 \(f[n-a]\,\color{grey}{\gamma[n]}\) \(\lfzraised\ztransform\) \(z^{-a}\left(F(z)+\sum_{m=1}^{a} z^m\ f[-m]\right)\) proof
Time advance \(f[n+a]\,\color{grey}{\gamma[n]}\) \(\lfzraised\ztransform\) \(z^a\left(F(z)-\sum_{m=0}^{a-1} z^{-m}\ f[m]\right)\) proof
Time advance (\(a=1\)) \(f[n+1]\,\color{grey}{\gamma[n]}\) \(\lfzraised\ztransform\) \(z\left(F(z)-f[0]\right)\) proof
Time multiply \(n\,f[n]\,\color{grey}{\gamma[n]}\) \(\lfzraised\ztransform\) \(-z\dfrac{\text{d}F(z)}{\text{d}z}\) proof
Modulation \(a^n\,f[n]\,\color{grey}{\gamma[n]}\) \(\lfzraised\ztransform\) \(F(a^{-1}z)\) proof
Convolution \((f\ast g)[n]\,\color{grey}{\gamma[n]}\) \(\lfzraised\ztransform\) \(F(z)\ G(z)\) proof
Conjugation \(f^\star[n]\) \(\lfzraised\ztransform\) \(F^{\star}(z^{\star})\) proof
First difference \(f[n]-f[n-1]\) \(\lfzraised\ztransform\) \(\left(1-z^{-1}\right)F(z)\) proof
Accumulation \(\sum_{i=0}^{n}f[i]\) \(\lfzraised\ztransform\) \(F(z)\,\dfrac{z}{z-1}\) proof
Double poles \((n+1)p^n\,\color{grey}{\gamma[n]}\) \(\lfzraised\ztransform\) \(\dfrac{1}{(1-pz^{-1})^2}\) proof
Real part \(\Re{f[n]}\) \(\lfzraised\ztransform\) \(\dfrac{F(z)+F^{\star}(z^{\star})}{2}\)
Imaginary part \(\Im{f[n]}\) \(\lfzraised\ztransform\) \(\dfrac{F(z)-F^{\star}(z^{\star})}{2j}\)

Unilateral Z-transform pairs

Unilateral Z-transform pairs
Time domain \(z\)-domain (\(z\)) \(z\)-domain (\(z^{-1}\)) ROC
Impulse \(\small{\delta[n]\triangleq\begin{cases}1,&n=0\\0,&n\neq0\end{cases}}\) \(\lfzraised\ztransform\) \(1\) \(1\) all \(z\) proof
Delayed impulse \(\delta[n-a]\) \(\lfzraised\ztransform\) \(\small{\begin{cases}z^{-a},&a\geq0\\0,&a\lt0\end{cases}}\) \(\small{\begin{cases}z^{-a},&a\geq0\\0,&a\lt0\end{cases}}\) \(z\neq0\) proof
Unit step \(\small{\gamma[n]\triangleq\begin{cases} 0,&n\lt0\\1,&n\geq 0\end{cases}}\) \(\lfzraised\ztransform\) \(\dfrac{z}{z-1}\) \(\dfrac{1}{1-z^{-1}}\) \(|z|\gt1\) proof
Scaled \(a^n\,\color{grey}{\gamma[n]}\) \(\lfzraised\ztransform\) \(\dfrac{z}{z-a}\) \(\dfrac{1}{1-az^{-1}}\) \(|z|\gt |a|\) proof
Delayed scaled \(a^{n-1}\,\gamma[n-1]\) \(\lfzraised\ztransform\) \(\dfrac{1}{z-a}\) \(\dfrac{z^{-1}}{1-az^{-1}}\) \(|z|\gt |a|\) proof
n-scaled \(n\,a^n\,\color{grey}{\gamma[n]}\) \(\lfzraised\ztransform\) \(\dfrac{az}{(z-a)^2}\) \(\dfrac{az^{-1}}{(1-az^{-1})^2}\) \(|z|\gt |a|\) proof
Ramp \(n\,\color{grey}{\gamma[n]}\) \(\lfzraised\ztransform\) \(\dfrac{z}{(z-1)^2}\) \(\dfrac{z^{-1}}{(1-z^{-1})^2}\) \(|z|\gt1\) proof
Binomial scaled, \(|z|\gt |a|\) \(n^2a^n\,\color{grey}{\gamma[n]}\) \(\lfzraised\ztransform\) \(\dfrac{az(z+a)}{(z-a)^3}\) \(\dfrac{az^{-1}\left(1+az^{-1}\right)}{(1-az^{-1})^3}\) \(|z|\gt |a|\) proof
Binomial scaled, \(|z|\gt |a|\) \(\tfrac{1}{2}n(n-1)a^n\,\color{grey}{\gamma[n]}\) \(\lfzraised\ztransform\) \(\dfrac{a^2z}{(z-a)^3}\) \(\dfrac{a^2z^{-2}}{(1-az^{-1})^3}\) \(|z|\gt |a|\) proof
Binomial scaled, \(|z|\gt |a|\) \(\small{\left(\begin{array}{c}n+m-1\\m-1\end{array}\right)}\,a^n\,\color{grey}{\gamma[n]}\) \(\lfzraised\ztransform\) \(\dfrac{z^m}{(z-a)^m}\) \(\dfrac{1}{(1-az^{-1})^m}\) \(|z|\gt |a|\) proof
Binomial scaled, \(|z|\lt |a|\) \(\small{(-1)^m\left(\begin{array}{c}-n-1\\m-1\end{array}\right)}\,a^n\,\color{grey}{\gamma[n]}\) \(\lfzraised\ztransform\) \(\dfrac{z^m}{(z-a)^m}\) \(\dfrac{1}{(1-az^{-1})^m}\) \(|z|\lt |a|\) proof
Exponential \(\mathrm{e}^{-anT}\ \gamma[n]\) \(\lfzraised\ztransform\) \(\dfrac{z}{z-\mathrm{e}^{-aT}}\) \(\dfrac{1}{1-\mathrm{e}^{-aT}z^{-1}}\) \(|z|\gt|\mathrm{e}^{-aT}|\) proof
Sine \(\sin(\omega n)\,\color{grey}{\gamma[n]}\) \(\lfzraised\ztransform\) \(\dfrac{z\sin(\omega)}{z^2-2z\cos(\omega)+1}\) \(\dfrac{z^{-1}\sin(\omega)}{1-2z^{-1}\cos(\omega)+z^{-2}}\) \(|z|\gt1\) proof
Cosine \(\cos(\omega n)\,\color{grey}{\gamma[n]}\) \(\lfzraised\ztransform\) \(\dfrac{z^2-z\cos(\omega)}{z^2-2z\cos(\omega)+1}\) \(\dfrac{1-z^{-1}\cos(\omega)}{1-2z^{-1}\cos(\omega)+z^{-2}}\) \(|z|\gt1\) proof
Decaying Sine \(a^n\sin(\omega n)\,\color{grey}{\gamma[n]}\) \(\lfzraised\ztransform\) \(\dfrac{az\sin(\omega)}{z^2-2az\cos(\omega)+a^2}\) \(\dfrac{az^{-1}\sin(\omega)}{1-2az^{-1}\cos(\omega)+a^2z^{-2}} \) \(|z|\gt|a|\) proof
Decaying Cosine \(a^n\cos(\omega n)\,\color{grey}{\gamma[n]}\) \(\lfzraised\ztransform\) \(\dfrac{z^2-az\cos(\omega)}{z^2-2az\cos(\omega)+a^2}\) \(\dfrac{1-az^{-1}\cos(\omega)}{1-2az^{-1}\cos(\omega)+a^2z^{-2}}\) \(|z|\gt|a|\) proof

The binomial coefficient, used in the table above, is defined as

$$ \left(\begin{array}{c}a\\ b\end{array}\right)\triangleq\frac{a!}{b!(a-b)!} \nonumber $$

Initial and final value theorem

Initial and final value theorem
Time domain \(z\)-domain
Initial Value \(f(0^+)\) \(f[0]=\lim_{z\to\infty}F(z)\) proof
Final Value \(f(\infty)\) \(\lim_{n\to\infty}f[n]=\lim_{z\to1}(z-1)F(z)\) proof

The proofs for these transforms can be found in the post Z-Transforms Proofs.

I recommend reading through the proofs for these Z-transforms. If you want to skip ahead, I suggest Discrete Transfer Functions for follow-up reading.

Z-transform

Lotfi Zadeh (colorized)

What is now called the Z-transform (named in honor of Lotfi Zadeh) was known to the mathematician and astronomer Pierre-Simon Laplace around 1785. With the introduction of digitally sampled data, the transform was re-discovered by Hurewicz in 1947, and developed by Lotfi Zadeh and John Ragazzini around 1952, as a way to solve linear, constant-coefficient difference equations.\(\)

As we will see, the convolution property makes the Z-transform a powerful tool in analyzing sampled-data systems.

Introduction

Just as causal continuous systems are controlled by differential equations

$$ \begin{align} \sum_{k=0}^{N}a_k\frac{\text{d}^ky(t)}{\text{d}t^k} &= \sum_{k=0}^{M}b_k\frac{\text{d}^kx(t)}{\text{d}t^k}\quad\Rightarrow \nonumber \\[25mu] a_0y(t)+a_1y^{\prime}(t)+a_2y^{\prime\prime}(t)+\ldots &= b_0x(t)+b_1x^{\prime}(t)+b_2x^{\prime\prime}(t)+\ldots \nonumber \end{align} \nonumber $$
Linear constant-coefficient differential equation
expresses the relation between input \(x(t)\) and output \(y(t)\)

Causal discrete systems operate in accordance with difference equations.

$$ \begin{align} \sum_{k=0}^N a_k\,y[n-k] &= \sum_{k=0}^M b_k\,x[n-k]\quad\Rightarrow \nonumber \\[10mu] a_0y[n]+a_1y[n-1]+a_2y[n-2]+\ldots \nonumber &=b_0x[n]+b_1x[n-1]+b_2x[n-2]+\ldots \nonumber \end{align} $$
Linear constant-coefficient difference equation
expresses the relation between input \(x[n]\) and output \(y[n]\)

From the difference equation we can derive the system characteristics such as the impulse response, step response and frequency response.

Unlike the continuous-time case, causal difference equations can be iterated just like a computer program would do. All one needs to do is rewrite the difference equation so that the term \(y[n]\) is on the left, and then iterate forward in time. This gives each value of the output sequence without ever obtaining a general expression for \(y[n]\). In this article, however, we will look for a general analytical expression for \(y[n]\) using the Z-transform.
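
As a minimal sketch of such an iteration (the first-order difference equation \(y[n]=0.5\,y[n-1]+x[n]\) and its coefficient are assumptions chosen for illustration):

```python
# Iterate a difference equation forward in time, without ever deriving a
# general expression for y[n].
def iterate(x):
    y = []
    for n in range(len(x)):
        y_prev = y[n - 1] if n > 0 else 0.0   # causal system, at rest for n < 0
        y.append(0.5 * y_prev + x[n])         # y[n] rewritten to be on the left
    return y

impulse = [1.0] + [0.0] * 9                   # x[n] = delta[n]
print(iterate(impulse))                       # impulse response: 1, 0.5, 0.25, ...
```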

The Z-transform can be thought of as an operator that transforms a discrete sequence into a continuous algebraic function of the complex variable \(z\). As we will see, one of the nice features of this transform is that a convolution in time transforms to a simple multiplication in the \(z\)-domain.

Unilateral Z-Transform

We solve a difference equation by taking the Z-transform on both sides, solving the resulting algebraic equation for the output \(Y(z)\), and then taking the inverse transform to obtain \(y[n]\).

Assuming causal filters, the output of the filter will be zero for \(t\lt 0\).

Sampling creates a discontinuous function

For digital systems, time is not continuous but passes at discrete intervals. When a continuous-time signal is measured every \(T\) seconds, the resulting signal is said to be discrete with sampling period \(T\).

Sampling music to be recorded on Compact Disc

To help understand the sampling process, assume a continuous function \(x_c(t)\) as shown below

Example of continuous function, \(x_c(t)\)

To work toward a mathematical representation of the sampling process, consider a train of evenly spaced impulse functions starting at \(t=0\). This so-called Dirac comb, \(s(t)\), has a spacing of \(T\gt0\) and includes an impulse at \(t=0\).

Dirac comb, \(s(t)\)

The Dirac comb \(s(t)\) can be expressed as

$$ s(t) = \sum_{n=0}^\infty{\delta(t-nT)} \label{eq:combcond} $$
where the impulse function \(\delta(t-nT)\) must satisfy the condition
$$ \int_{-\infty}^{\infty}\delta(t-nT)\,\mathrm{d}t=1 $$

Multiplying the Dirac comb \(s(t)\) with a continuous-time signal \(x_c(t)\) scales each impulse in the comb by the value of \(x_c(t)\) at that instant

Impulse train modulator

Example of Dirac comb applied to \(x_c(t)\)

The resulting signal \(x_s(t)\) follows from substituting \(s(t)\) from equation \(\eqref{eq:combcond}\) as

$$ \begin{align} x_s(t)&\triangleq x_c(t)\,s(t)\label{eq:fstar0} \\ &= x_c(t)\sum_{n=0}^{\infty}{\delta(t-nT)} \nonumber \\ &= \sum_{n=0}^{\infty}{x_c(t)\ \underbrace{\delta(t-nT)}_{\text{delayed impulse}}} \label{eq:fstar} \end{align} $$

The impulse function \(\delta(t-nT)\) is \(0\) everywhere but at \(t=nT\), the so-called sifting property, so under each impulse we may replace \(x_c(t)\) with its sample value

$$ \begin{align} x_c(t)\,\delta(t-nT) &= x_c(nT)\,\delta(t-nT)\label{eq:ft}\\ \end{align} $$
so that
$$ x_s(t)=\sum_{n=0}^{\infty}{x_c(nT)\ \delta(t-nT)} \label{eq:fstarnT} $$

Work towards a continuous function

The goal is to form a continuous algebraic expression, so we can use algebra to manipulate the difference equations.

Start with the Laplace transform of sampled signal \(x_s(t)\) from equation \(\eqref{eq:fstarnT}\)

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\laplace{\lfz{\mathscr{L}}} \begin{align} x_s(t) \laplace X_s(s)\triangleq&\int_{0^-}^\infty e^{-st}\overbrace{\sum_{n=0}^{\infty}x_c(nT)\ \delta(t-nT)}^{x_s(t)}\ \mathrm{d}t \nonumber \\ =& \int_{0^-}^\infty \sum_{n=0}^{\infty}e^{-st}\,x_c(nT)\ \delta(t-nT)\ \mathrm{d}t \label{eq:laplace0} \end{align} $$

Once more, the impulse function \(\delta(t-nT)\) is \(0\) everywhere but at \(t=nT\), the “sifting property”, so we can replace \(\mathrm{e}^{-st}\) with

$$ \mathrm{e}^{-st}=\mathrm{e}^{-snT}\label{eq:est} $$

After substituting \(\eqref{eq:est}\) in \(\eqref{eq:laplace0}\), the terms \(\mathrm{e}^{-snT}\) and \(x(nT)\) are independent of \(t\) and can be taken outside of the integration.

$$ \begin{align} X_s(s)&=\int_{0^-}^\infty \sum_{n=0}^\infty e^{-s\color{blue}{nT}}x_c(\color{blue}{nT})\ \delta(t-nT)\ \mathrm{d}t \nonumber \\ &= \sum_{n=0}^\infty e^{-s{nT}}x_c(nT)\underbrace{\int_{0^-}^\infty \delta(t-nT)\ \mathrm{d}t}_{\text{=1 according to equation (\ref{eq:combcond})}} \nonumber \\ &= \sum_{n=0}^\infty\ e^{-s{nT}}\ x_c(nT) \label{eq:Fs} \end{align} $$

The Z-transform follows

Define \(z\) and \(x[n]\) as

$$ \shaded{ \begin{align} z &\triangleq e^{sT} \nonumber \\ x[n] &\triangleq x_c(nT) \nonumber \end{align} } \label{eq:z} $$

The scaling with sample period \(T\) in the form \(x[n]\) matches the notation for computer arrays. Anytime you see an \([n]\), you can translate it to seconds by replacing it with \((nT)\). Be careful with integer expressions, such as \([n-k]\), which stands for \(((n-k)T)\) seconds, not \((nT-k)\).

From equation \(\eqref{eq:z}\) follows

$$ \ln z = sT\ \Rightarrow\ s=\tfrac{1}{T}\ln z \label{eq:s} $$

Substitute \((\ref{eq:z},\ref{eq:s})\) in \(\eqref{eq:Fs}\) and apply the power rule \(x^{ab}=(x^a)^b\). Call the function \(X(z)\), because \(z\) is the only variable after the substitution \(s=\tfrac{1}{T}\ln z\)

$$ X(z) = \sum_{n=0}^\infty\ \overbrace{z^{-n}}^{(e^{sT})^{-n}}\ x[n] $$

The unilateral Z-transform of the discrete function \(x[n]\) follows as

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\ztransform{\lfz{\mathcal{Z}}} \shaded{ x[n] \ztransform X(z)=\sum_{n=0}^\infty z^{-n}\ x[n] } \label{eq:ztransform} $$
in this power series in \(z^{-1}\), \(z\) is a complex variable, \(z\in\mathbb{C}\).

Note that we use the notation \(\def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\ztransform{\lfz{\mathcal{Z}}} \ztransform\) as equivalent to the more common Z-transform notation \(\mathfrak{Z}\left\{\,x[n]\,\right\}\).

In review: the Z-transform maps a sequence \(x[n]\) to a continuous function \(X(z)\) of the complex variable \(z\).

Normalized frequency

The discrete signal only exists at times \(t=nT\), where \(n={0,1,2,\ldots}\). By normalizing time \(t\) with the sampling interval \(T\), using definition \(\eqref{eq:z}\), we get the natural time, measured in “samples”, on the time-axis

$$ x[n]\triangleq x(nT) \nonumber $$ We highlight the normalized time by using the \([n]\) notation.

When using normalized time, other time-dependent variables should be normalized as well. The angular frequency \(\omega\), measured in [rad/s], normalizes to the normalized angular frequency with units of [rad/sample] by multiplying it with the sample period \(T\) [s/sample]

Conversion to/from normalized
regular normalized
time [s] \(\xrightarrow{\div T}\) [sample]
angular frequency [rad/s] \(\xrightarrow{\times T}\) [rad/sample]
natural frequency [cycles/s] \(\xrightarrow{\times T}\) [cycles/sample]

Using normalized frequency allows an author to present concepts independent of sample rate, but it comes at a loss of clarity as \(T\) and \(f_s\) are omitted from expressions.

When visualizing variable \(z\), the normalized angular frequency \(\omega T\) corresponds to the angle with the positive horizontal axis.

Formulas expressed in terms of \(f_{s}\) and/or \(T\) are not normalized and can be readily converted to normalized frequency by setting those parameters to \(1\). The inverse is accomplished by replacing instances of the angular frequency parameter \(\omega\), with \(\omega T\).

Note that some authors use \(\omega\) for normalized angular frequency in [rad/sample], and \(\Omega\) for angular frequency in [rad/s]. Here we avoid using \(\omega\) for natural frequencies. Instead we use the product \(\omega T\) to refer to natural angular frequency. By doing so, it is clear that the angular frequency \(\omega\) is scaled by the sample time \(T\).

Nyquist–Shannon sampling theorem

The frequency-domain representation of a sampled signal teaches us about the limitations of using a discrete signal. We will show this by doing a Fourier transform on the sampled signal \(x_s(t)\).

Recall equation \(\eqref{eq:fstarnT}\) and \(\eqref{eq:fstar0}\), but this time bilateral

$$ \begin{align} x_s(t)=x_c(t)\,s(t) &= \sum_{n=-\infty}^{\infty}{x_c(nT)\ \delta(t-nT)} \nonumber \\ &= \underbrace{x_c(t)}_{f(t)}\ \ \underbrace{\sum_{n=-\infty}^{\infty}\delta(t-nT)}_{g(t)} \nonumber \end{align} \nonumber $$

The convolution theorem states that multiplication in the time-domain corresponds to convolution in the frequency-domain

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\fourier{\lfz{\mathcal{F}}} f(t)\,g(t)\fourier \frac{1}{2\pi}{\Large(}F(\omega)*G(\omega){\Large)} \nonumber $$ where \(*\) is the convolution sign

Apply the Fourier transform of a product to \(x_s(t)\) and call it \(X_s(\omega)\)

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\fourier{\lfz{\mathcal{F}}} \begin{align} x_s(t) = x_c(t)\,s(t) \fourier&\frac{1}{2\pi}{\Large(}\color{purple}{X_c(\omega)}*S(\omega){\Large)}\triangleq X_s(\omega) \label{eq:fstar2} \end{align} $$

Recall the Fourier transform of the Dirac comb \(s(t)\)

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\fourier{\lfz{\mathcal{F}}} s(t)=\sum _{n=-\infty }^{\infty }\delta(t-nT) \fourier {\frac {2\pi }{T}}\sum _{k=-\infty }^{\infty }\delta \left(\omega -{\frac {2\pi k}{T}}\right)\triangleq S(\omega) \nonumber $$

Substituting the Fourier transform of the Dirac comb in \(\eqref{eq:fstar2}\) where \(\frac{2\pi}{T}=2\pi f_s=\omega_s\), the angular sample frequency

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\fourier{\lfz{\mathcal{F}}} \newcommand\ccancel[2][black]{\color{#1}{\cancel{\color{black}{#2}}}} \newcommand\ccancelto[3][black]{\color{#1}{\cancelto{#2}{\color{black}{#3}}}} \begin{align} x_s(t) \fourier X_s(\omega) &= \frac{1}{\ccancel[red]{2\pi}}\color{purple}{X_c(\omega)} * \left(\frac{\ccancel[red]{2\pi}}{T}\sum_{k=-\infty}^{\infty}\delta(\omega-k\omega_s)\right) \nonumber \\ &= \frac{1}{T}\sum_{k=-\infty}^{\infty}\color{purple}{X_c(\omega)} * \delta(\omega-k\omega_s)\label{eq:XsjOmega0} \end{align} $$

Recall the convolution with a delayed impulse function \(\delta(\omega-a)\)

$$ F(\omega)*\delta(\omega-a) = F(\omega-a)\nonumber $$

Apply the convolution to \(\eqref{eq:XsjOmega0}\)

$$ \begin{align} X_s(\omega) &= \frac{1}{T}\sum_{k=-\infty}^{\infty}\color{purple}{X_c(\omega}-k\omega_s\color{purple}{)} \label{eq:XsjOmega} \end{align} $$

Equation \(\eqref{eq:XsjOmega}\) implies that the Fourier transform of \(x_s(t)\) consists of periodically repeated copies of the Fourier transform of \(x_c(t)\). The copies of \(\color{purple}{X_c(\omega)}\) are shifted by integer multiples of the sampling frequency and then superimposed as depicted below.

Frequency-domain representation of sampling

Plot (a) represents the frequency spectrum of the continuous signal \(\color{purple}{X_c(\omega)}\) where \(\omega_{\small N}\) is the highest frequency component. Plot (b) shows the frequency spectrum of the Dirac comb \(S(\omega)\). Finally, plot (c) shows \(X_s(\omega)\), the result of the convolution between \(\color{purple}{X_c(\omega)}\) and the \(S(\omega)\).

From (c), we see that the replicas of \(\color{purple}{X_c(\omega)}\) do not overlap when

$$ \omega_s-\omega_{\small N}\geq\omega_{\small N}\quad\Rightarrow\quad\shaded{\omega_s\geq2\,\omega_{\small N}} \label{eq:ineq} $$

Consequently, \(x_c(t)\) can be recovered from \(x_s(t)\) with an ideal low-pass filter \(H_r\).

Impulse train modulator and LTI \(H_r\)

Inequality \(\eqref{eq:ineq}\) is captured in the Nyquist–Shannon sampling theorem

Let \(x_c(t)\) be a bandlimited signal with $$ \begin{align} X_c(\omega)&=0,&|\omega|\gt \omega_{\small N}\nonumber \end{align} \nonumber $$ then \(x_c(t)\) is uniquely determined by its samples $$ \begin{align} x[n] &= x_c(nT),&n\in\mathbb{Z}\nonumber \end{align} \nonumber $$ if $$ \omega_s = \frac{2\pi}{T}\geq2\,\omega_{\small N} \nonumber $$ where \(\omega_{\small N}\) is called the Nyquist frequency and \(2\omega_{\small N}\) is called the Nyquist rate.

When the inequality of \(\eqref{eq:ineq}\) does not hold, i.e. when the sampling frequency \(\omega_s\) is less than twice the maximum frequency \(\omega_{\small N}\), the copies of \(X_c(\omega)\) overlap, and \(X_c(\omega)\) is no longer recoverable by low-pass filtering, as shown below. The resulting distortion is called aliasing.

Frequency-domain representation of sampling with aliasing
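
A small numeric illustration of aliasing (a sketch with assumed values): sampled at \(f_s=10\,\mathrm{Hz}\), a \(7\,\mathrm{Hz}\) cosine produces exactly the same samples as a \(3\,\mathrm{Hz}\) cosine, because \(7\,\mathrm{Hz}\) lies above the Nyquist frequency \(f_s/2=5\,\mathrm{Hz}\).

```python
# The samples of a 7 Hz cosine and a 3 Hz cosine are indistinguishable at fs = 10 Hz.
import numpy as np

fs = 10.0                            # sampling frequency [Hz]
n = np.arange(10)                    # sample index
x7 = np.cos(2 * np.pi * 7 * n / fs)  # 7 Hz cosine, sampled
x3 = np.cos(2 * np.pi * 3 * n / fs)  # 3 Hz cosine, sampled

print(np.allclose(x7, x3))           # True: the 7 Hz tone aliases onto 3 Hz
```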

\(z\)-plane

We introduced the normalized angular frequency \(\omega T\) and how it maps to a polar representation of the complex variable \(z\). Here we will extend this to include the magnitude \(|z|\).

Let’s start with the definition of \(z\) from equation \(\eqref{eq:z}\) and split \(s\) in its real and imaginary parts \(s=\sigma+j\omega\)

$$ z\triangleq\mathrm{e}^{sT} = \mathrm{e}^{(\sigma+j\omega)T} = \mathrm{e}^{\sigma T}\,\mathrm{e}^{j\omega T} \label{eq:zalt} $$

The polar notation for the complex variable \(z\) is a function of the natural angular frequency \(\omega T\) and \(\mathrm{e}^{\sigma T}\)

$$ \shaded{ z\triangleq|z|\,\mathrm{e}^{j\omega T}\quad\text{where}\quad |z|\triangleq \mathrm{e}^{\sigma T} } \label{eq:zphi} $$

This conveniently matches the topology of the \(z\)-plane, where the modulus \(|z|\) corresponds to the length of a vector from the origin to \(z\), and \(\omega T\) corresponds to the angle of that vector with the positive horizontal axis.

According to the Nyquist–Shannon sampling theorem, a discrete signal can only have frequencies \(|\omega|\) between \(0\) and half the sampling frequency

$$ 0\leq|\omega|\leq\frac{\omega_s}{2}\label{eq:zphirange} $$

The maximum frequency in the discrete signal, the so-called Nyquist frequency, corresponds to \(\pi\) radians: with sample period \(T=\frac{1}{f_s}=\frac{2\pi}{\omega_s}\) and Nyquist frequency \(\omega_{\small N}=\frac{\omega_s}{2}\), the natural angular frequency at the Nyquist frequency follows as

$$ \omega_{\small N}T = \frac{\ccancel[green]{\omega_s}}{\ccancel[red]{2}}\,\frac{\ccancel[red]{2}\pi}{\ccancel[green]{\omega_s}} = \pi $$
the range for \(\omega T\) follows as
$$ \shaded{-\pi\leq\omega T\leq\pi} $$

For those familiar with the Laplace transform, we will map specific features between the \(s\)- and \(z\)-domains (a numerical check follows the list):

  • The origin \(s=0\) of the \(s\)-plane is mapped to \(z=e^0=1\) on the real axis in \(z\)-plane.
  • Each vertical line \(\sigma=\sigma_0\) in \(s\)-plane is mapped to a circle \(|z|=e^{\sigma_0}\) centered about the origin in \(z\)-plane. E.g.
    • The leftmost vertical line \(\sigma\to-\infty\) is mapped to the origin, where \(|z|=\mathrm{e}^{-\infty}=0\)
    • The imaginary axis \(\sigma=0\) is mapped to the unit circle, where \(|z|=\mathrm{e}^0=1\)
    • The rightmost vertical line \(\sigma\to\infty\) is mapped to a circle with infinite radius, where \(|z|=\mathrm{e}^{\infty}=\infty\).
  • Each horizontal line \(j\omega=j\omega_0\) in the \(s\)-plane is mapped to a ray from the origin in the \(z\)-plane, at angle \(\omega_0 T\) with respect to the positive horizontal direction.
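
The mapping \(z=\mathrm{e}^{sT}\) behind this list is easy to check numerically. The sketch below (assuming numpy is available; the sample period \(T=1\,\mathrm{ms}\) is an arbitrary example value) confirms that the origin maps to \(z=1\), that the imaginary axis maps onto the unit circle, and that the left half-plane maps inside it.

```python
import numpy as np

T = 1e-3                                    # example sample period [s], made up for illustration

def s_to_z(sigma, omega):
    """Map a point s = sigma + j*omega in the s-plane to z = exp(s T)."""
    return np.exp((sigma + 1j * omega) * T)

print(s_to_z(0.0, 0.0))                     # origin s = 0    -> z = 1
print(abs(s_to_z(0.0, 2 * np.pi * 100)))    # imaginary axis  -> |z| = 1 (unit circle)
print(abs(s_to_z(-500.0, 2 * np.pi * 100))) # left half-plane -> |z| = e^(-0.5) < 1
```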

Region of convergence

Recall the definition of the Z-transform from equations \((\ref{eq:ztransform},\ref{eq:zalt},\ref{eq:zphi})\)

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\ztransform{\lfz{\mathcal{Z}}} f[n]\ztransform\sum_{n=0}^\infty z^{-n}\ f[n]\\ \text{where } z\triangleq|z|\,\mathrm{e}^{j\varphi}\text{, } |z|\triangleq \mathrm{e}^{\sigma T}\text{, } \varphi\triangleq \omega T $$

This converges depending on the duration and magnitude of \(f[n]\) as well as on the magnitude \(|z|\). The phase \(\varphi\) has no effect on the convergence.

The power series for the Z-transform is called a Laurent series. The Laurent series represents an analytic function at every point inside the region of convergence. Therefore, the Z-transform and all its derivatives must be continuous functions of \(z\) inside the region of convergence.

Laurent series converge in an annular (=ring shaped) region of the \(z\)-plane, bounded by poles. The set of values of \(z\) for which the Z-transform converges is called the region of convergence (ROC). This means that anytime we use the Z-transform, we need to keep the region of convergence in mind.
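
As a small numeric illustration (a sketch with the made-up signal \(f[n]=0.9^n\), assuming numpy is available): the partial sums of the Z-transform settle for \(|z|\gt0.9\), but grow without bound for \(|z|\lt0.9\), consistent with a region of convergence bounded by the pole at \(z=0.9\).

```python
import numpy as np

def partial_sum(z, a=0.9, terms=200):
    """Partial sum of the Z-transform of f[n] = a^n, i.e. sum over n of a^n z^(-n)."""
    n = np.arange(terms)
    return np.sum((a / z) ** n)

# inside the region of convergence (|z| > 0.9) the series converges to z/(z-a)
print(partial_sum(1.2), 1.2 / (1.2 - 0.9))   # both approximately 4.0

# outside the region of convergence (|z| < 0.9) the partial sums blow up
print(partial_sum(0.5))                      # a huge number; no convergence
```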

Continue reading about inverse Z-transforms.

Evaluating continuous transfer functions

\(\)The transfer function can be evaluated using different inputs. We commonly use the impulse, step and sinusoidal input functions.

Let \(H\) be a stable system with transfer function \(H(s)\), input signal \(x(t)\), and output \(y(t)\). Here, “stable” implies that the poles are in the left half of the \(s\)-plane.

Transfer Function in s-domain

Impulse Response

Earlier, we derived the Laplace transform for the impulse function as $$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\laplace{\lfz{\mathscr{L}}} \delta(t)\laplace\Delta(s) = 1 $$

Substituting this input function $$ Y(s) = H(s)\,\Delta(s) $$

The response to an impulse input function is the transfer function \(H(s)\) itself $$ \shaded{ Y(s)=H(s) } $$

Unit Step Response

The page Laplace Transforms gives the Laplace transform for the unit step function as $$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\laplace{\lfz{\mathscr{L}}} \gamma(t) \laplace \Gamma(s)=\frac{1}{s} $$

Substitute the input function $$ Y(s) = H(s)\,\Gamma(s) $$

The response to a unit step input function follows as $$ \shaded{ Y(s) = \frac{H(s)}{s} } $$
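
Both results are easy to verify numerically. The sketch below (assuming numpy and scipy are available) uses the made-up first-order example \(H(s)=\frac{1}{s+2}\), whose impulse response is \(e^{-2t}\) and whose unit step response is \(\frac{1}{2}\left(1-e^{-2t}\right)\), i.e. the inverse transforms of \(H(s)\) and \(H(s)/s\).

```python
import numpy as np
from scipy import signal

# example system H(s) = 1 / (s + 2), made up for illustration
H = signal.lti([1.0], [1.0, 2.0])
t = np.linspace(0, 3, 301)

# impulse response: inverse Laplace transform of Y(s) = H(s)
_, y_imp = signal.impulse(H, T=t)
print(np.max(np.abs(y_imp - np.exp(-2 * t))))               # close to 0

# unit step response: inverse Laplace transform of Y(s) = H(s)/s
_, y_step = signal.step(H, T=t)
print(np.max(np.abs(y_step - 0.5 * (1 - np.exp(-2 * t)))))  # close to 0
```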

Frequency Response

The frequency response is defined as the steady-state response of a system to a sinusoidal input.

Given sinusoidal input and transfer function $$ \left\{\begin{align} x(t) &= \sin(\omega t)\gamma(t) &\mathrm{input\ signal} \nonumber \\ H(s) &= |H(s)|\,e^{j\angle H(s)} &\mathrm{transfer\ function} \nonumber \end{align}\right. $$

Find the output signal \(y(t)\), using the Laplace transform of the sinusoidal input function $$ \left.\begin{align} X(s) &= \frac{\omega}{s^2+\omega^2} \nonumber \\[8mu] Y(s) &= H(s)\,X(s)\nonumber \end{align}\right\} $$

Substitute \(X(s)\) in \(Y(s)\) $$ \begin{align} Y(s) &= H(s)\,\frac{\omega}{s^2+\omega^2} \nonumber \\ &= H(s)\,\frac{\omega}{(s+j\omega)(s-j\omega)} \end{align} $$

According to Heaviside, this can be expressed as partial fractions [swarthmore, MIT-cu], where the term \(C_h(s)\) represents the transient response resulting from \(H(s)\). This term is independent of \(j\omega\), and dies out for \(t\to\infty\). $$ Y(s) = \frac{\omega}{(s+j\omega)(s-j\omega)}H(s)=\underbrace{\frac{c_0}{s+j\omega}+\frac{c_1}{s-j\omega}}_{Y_{ss}(s)=\mathrm{steady\ state\ response}}+\underbrace{C_h(s)}_\mathrm{transient\ response} \label{eq:partialfractions} $$

Find \(c_0\) $$ \begin{align} H(s)\,\frac{\omega\cancel{(s+j\omega)}}{\cancel{(s+j\omega)}(s-j\omega)} &= \frac{c_0\cancel{(s+j\omega)}}{\cancel{s+j\omega}}+\frac{c_1(s+j\omega)}{s-j\omega}+C_h(s)(s+j\omega) \nonumber \\ \Rightarrow\, H(s)\,\frac{\omega}{s-j\omega} &= \left. c_0 + c_1\frac{s+j\omega}{s-j\omega} + C_h(s)(s+j\omega)\right|_{s=-j\omega} \nonumber \\ \Rightarrow\, H(-j\omega)\,\frac{\omega}{-j\omega-j\omega} &= c_0+c_1\frac{\cancelto{0}{-j\omega+j\omega}}{s-j\omega}+C_h(j\omega)(\cancelto{0}{-j\omega+j\omega}) \nonumber \\ \Rightarrow\, c_0 &= H(-j\omega)\,\frac{\cancel{\omega}}{-2j\cancel{\omega}} = \frac{H(-j\omega)}{-2j} \end{align} $$

Similarly, find \(c_1\) $$ \begin{align} H(s)\,\frac{\omega\cancel{(s-j\omega)}}{(s+j\omega)\cancel{(s-j\omega)}} &= \frac{c_0(s-j\omega)}{s+j\omega}+\frac{c_1\cancel{(s-j\omega)}}{\cancel{s-j\omega}}+C_h(s)(s-j\omega) \nonumber \\ \Rightarrow\, H(s)\,\frac{\omega}{s+j\omega} &= \left. c_0\frac{s-j\omega}{s-j\omega}+c_1+C_h(s)(s-j\omega)\right|_{s=j\omega} \nonumber \\ \Rightarrow\, H(j\omega)\,\frac{\omega}{j\omega+j\omega} &= c_0\frac{\cancelto{0}{j\omega-j\omega}}{s-j\omega}+c_1+C_h(j\omega)(\cancelto{0}{j\omega-j\omega}) \nonumber \\ \Rightarrow\, c_1 &= H(j\omega)\,\frac{\cancel{\omega}}{2j\cancel{\omega}} = \frac{H(j\omega)}{2j} \end{align} $$

Inverse Laplace transform of \(\eqref{eq:partialfractions}\) back to the time domain $$ y(t) = \underbrace{c_0e^{-j\omega t}+c_1e^{j\omega t}}_{y_{ss}(t)=\mathrm{steady\ state\ response}}+\underbrace{\cancelto{0\mathrm{\ as\ }t \to\infty}{\mathfrak{L}^{-1}\left\{C_h(s)\right\}}}_{\mathrm{transient\ response}} $$

Substitute \(c_0\) and \(c_1\) in \(y_{ss}(t)\) $$ \begin{align} y_{ss}(t) &= \frac{H(-j\omega)}{-2j}e^{-j\omega t}+\frac{H(j\omega)}{2j}e^{j\omega t} \nonumber \\ &= \frac{H(j\omega)e^{j\omega t}-H(-j\omega)e^{-j\omega t}}{2j}\label{eq:yss} \end{align} $$

Based on Euler’s formula we can express \(H(s)\) in polar coordinates

$$ \left\{ \begin{align} H(s) &= |H(s)|\,e^{j\angle H(s)} \nonumber \\ |H(j\omega)| &= K \frac{\prod_{i=1}^m \sqrt{\left(\Re\{{z_i}\}\right)^2+\left(\omega+\Im\{z_i\}\right)^2}}{\prod_{i=1}^n \sqrt{\left(\Re\{{p_i}\}\right)^2+\left(\omega+\Im\{p_i\}\right)^2}} \nonumber \\ \angle{H(s)} &= \sum_{i=1}^m\mathrm{atan2}\left(\omega+\Im\{z_i\}, \Re\{{z_i}\}\right) -\sum_{i=1}^n\mathrm{atan2}\left(\omega+\Im\{p_i\}, \Re\{{p_i}\}\right) \nonumber \\ \end{align} \right. \label{eq:euler} $$

Substitute the polar representation of \(H(s)\), in \(y_{ss}\) $$ \begin{align} y_{ss}(t) &= |H(j\omega)|\left( \frac{e^{j\angle H(j\omega)}e^{j\omega t}-e^{j\angle H(-j\omega)}e^{-j\omega t}}{2j} \right) \nonumber \\ &= |H(j\omega)|\left( \frac{e^{j\left(\angle H(j\omega)+\omega t\right)}-e^{j\left(\angle H(-j\omega)-\omega t\right)}}{2j} \right) \nonumber \\ &= |H(j\omega)|\, \underbrace{\frac{e^{j(\omega t + \angle{H(j\omega)})}-e^{-j(\omega t + \angle H(j\omega))}}{2j}}_{\sin(\omega t+\angle H(j\omega))} \end{align} $$

In this, we recognize the Laplace transform of a sinusoidal function

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\laplace{\lfz{\mathscr{L}}} \sin(\omega t)\gamma(t) \laplace \frac{\omega}{s^2+\omega^2} \nonumber $$

The frequency response follows as $$ \shaded{ y_{ss}(t) = |H(j\omega)|\,\sin(\omega t+\angle H(j\omega))\,\gamma(t) } $$

In other words, for a linear system a sinusoidal input generates a sinusoidal output with the same frequency, but different amplitude and phase.
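
The sketch below (assuming numpy and scipy are available) checks this numerically, again with the made-up example \(H(s)=\frac{1}{s+2}\): once the transient has died out, the simulated response to a sinusoidal input matches \(|H(j\omega)|\sin(\omega t+\angle H(j\omega))\).

```python
import numpy as np
from scipy import signal

H = signal.lti([1.0], [1.0, 2.0])     # example system H(s) = 1/(s+2), made up for illustration
w = 5.0                               # input frequency [rad/s]

t = np.linspace(0, 20, 20001)
u = np.sin(w * t)                     # sinusoidal input x(t)
_, y, _ = signal.lsim(H, U=u, T=t)    # time-domain simulation of the output

Hjw = 1.0 / (2.0 + 1j * w)            # H(s) evaluated at s = j*omega
y_ss = np.abs(Hjw) * np.sin(w * t + np.angle(Hjw))

mask = t > 10                         # compare well after the transient has decayed
print(np.max(np.abs(y[mask] - y_ss[mask])))   # close to 0
```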

Suggested next reading is Impedance.

Continuous transfer functions

Transfer function Laplace icon

\(\)Consider a black box with input signal \(x(t)\) and output \(y(t)\). This black box processes the input signal and it produces the output signal.

Black box model (source: Wikipedia)

The black box

When we model the transfer function of this black box as \(h(t)\), the output signal \(y(t)\) is a convolution ‘\(\ast\)’ of the input \(x(t)\) and the transfer function \(h(t)\). $$ y(t) = h(t)*x(t) $$

Likewise, in the \(s\)-domain, the transfer function describes how the output signal \(Y(s)\) responds to an arbitrary input signal \(X(s)\). The convolution in the time domain becomes a multiplication in the \(s\)-domain, which is easier to solve. $$ Y(s) = H(s)\,X(s) $$

It allows one to determine the system response characteristics without having to solve the convolution.

Albeit for the discrete case, the article Discrete Transfer Functions describes why convolution is used in the time domain, and multiplication in the \(z\)-domain.

Poles and zeroes

The generic form of the transfer function is $$ H(s) = \frac{Y(s)}{X(s)}=\frac{b_ms^m+b_{m-1}s^{m-1}+\dots+b_1s+b_0}{a_ns^n+a_{n-1}s^{n-1}+\dots+a_1s+a_0} \label{eq:tf_polynominal} $$

where \(s=\sigma+j\omega\). \(X(s)\) and \(Y(s)\) are the Laplace transforms of the input and output signals \(x(t)\) and \(y(t)\). The highest power of the variable \(s\) determines the order of the system, usually corresponding to the total number of capacitors and inductors in the circuit.

It can be convenient to factor the polynomials in the numerator and denominator of the transfer function, and to write the function in terms of those factors [MIT] $$ \begin{array}{cr} H(s)=K\frac{N(s)}{D(s)}=K\frac{(s-z_1)(s-z_2)\dots(s-z_m)}{(s-p_1)(s-p_2)\dots(s-p_n)},&K=\frac{b_m}{a_n} \end{array} \label{eq:tf_factors} $$

The \(z_i\)’s are the roots of the equation \(N(s)=0\) and are defined as the system zeros.  The \(p_i\)’s are the roots of the equation \(D(s)=0\) and are defined as the system poles.

The (complex) poles and zeros are properties of the transfer function, and therefore of the differential equation describing the input-output system dynamics. Together with the gain constant \(K\) they completely characterize the differential equation, and provide a complete description of the system.
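
Moving between the polynomial form and the factored form is a root-finding problem. The sketch below (assuming numpy is available) does this for a made-up second-order example, \(H(s)=\frac{s+3}{s^2+3s+2}\).

```python
import numpy as np

# example H(s) = (s + 3) / (s^2 + 3 s + 2), coefficients made up for illustration
num = [1.0, 3.0]            # b1 s + b0
den = [1.0, 3.0, 2.0]       # a2 s^2 + a1 s + a0

zeros = np.roots(num)       # roots of N(s) = 0  ->  [-3]
poles = np.roots(den)       # roots of D(s) = 0  ->  [-2, -1]
K = num[0] / den[0]         # gain constant K = b_m / a_n

print(zeros, poles, K)

# and back: rebuild the denominator polynomial from its roots
print(np.poly(poles))       # [1. 3. 2.]
```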

Pole-Zero plot

The system dynamics may be represented graphically by plotting the pole and zero locations on the complex \(s\)-plane, whose axes represent the real and imaginary parts of the complex variable \(s\). Such plots are known as pole-zero plots.

It is usual to mark a zero location by a circle (\(\circ\)) and a pole location by a cross (\(\times\)). The locations of the poles and zeros provide qualitative insights into the response characteristics of a system. [MIT]

Poles and zeroes in the \(s\)-plane

Transfer function

The transfer function may be evaluated for any value of \(s=\sigma+j\omega\). It is common to express the complex value of the transfer function in polar form. $$H(s)=\left|H(s)\right|e^{j\angle H(s)}\label{eq:tf}$$

where magnitude \(|H(s)|\) and phase \(\angle{H(s)}\) are given by $$|H(s)| \equiv \sqrt{\Re\left\{H(s)\right\}^2 + \Im\left\{H(s)\right\}^2}$$ $$\angle{H(s)} \equiv \mathrm{atan2}\left( \Im\left\{H(s)\right\}, \Re\left\{H(s)\right\} \right)$$

where \(\Re\) is the real operator, and \(\Im\) is the imaginary operator, and \(\mathrm{atan2}\) returns a value between \(-\pi\) and \(\pi\) [wiki], as defined in $$ \mathrm{atan2}(y,x) = \begin{cases} \arctan\left(\frac{y}{x}\right) & x \gt 0 \\ \arctan\left(\frac{y}{x}\right)+\pi & x \lt 0 \land y \geq 0 \\ \arctan\left(\frac{y}{x}\right)-\pi & x \lt 0 \land y \lt 0 \\ \frac{\pi}{2} & x= 0 \land y \gt 0 \\ -\frac{\pi}{2} & x= 0 \land y \lt 0 \\ \text{undefined} & x= 0 \land y = 0 \end{cases} $$
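
For a concrete value of \(s\), the magnitude and phase follow directly from the real and imaginary parts of \(H(s)\). A minimal sketch (assuming numpy is available; the transfer function \(H(s)=\frac{1}{s+2}\) and the evaluation point are made up for illustration) shows that numpy’s abs() and angle() reproduce exactly these definitions.

```python
import numpy as np

def H(s):
    """Example transfer function H(s) = 1 / (s + 2), made up for illustration."""
    return 1.0 / (s + 2.0)

s = 1.0 + 3.0j                            # arbitrary evaluation point sigma + j*omega
Hs = H(s)

magnitude = np.sqrt(Hs.real ** 2 + Hs.imag ** 2)
phase = np.arctan2(Hs.imag, Hs.real)      # atan2(Im{H}, Re{H}), result in (-pi, pi]

print(magnitude, np.abs(Hs))              # identical values
print(phase, np.angle(Hs))                # identical values
```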

Visualization

The Laplace transform’s \(s\)-domain uses a rectangular coordinate system by defining \(s\triangleq\sigma+j\omega\), where \(\sigma\) on the horizontal axis represents the exponential decay, and \(\omega\) on the vertical axis represents the frequency.

The factorized transfer function \(\eqref{eq:tf_factors}\) can be written as $$ H(s)=K \frac{\prod_{i=1}^m(s-z_i)}{\prod_{i=1}^n(s-p_i)} $$

In the complex plane, the difference between two numbers \(s_1\) and \(s_2\) can be visualized by a vector from \(s_2\) to \(s_1\) $$ \begin{align} s_1-s_2 &= (\sigma_1+j\omega_1)-(\sigma_2+j\omega_2) \nonumber \\ &= (\sigma_1-\sigma_2)+j(\omega_1-\omega_2) \end{align} $$

This can be visualized with a vector drawn from the tip of \(s_2\) to the tip of \(s_1\). Note that the length of the vector is unaffected by translation away from the origin, but the angle of the vector must be measured relative to a translated copy of the real axis.

Therefore, each of the factors in the numerator and denominator may be interpreted as a vector in the s-plane, originating from the zero \(z_i\) or pole \(p_i\) and directed to the point \(s\) at which the function is to be evaluated.

Pole ‘\(p\)’ evaluated at point ‘\(s\)’

Each of these vectors may be written in polar form, for example for a pole \(p_i=\sigma_i+j\omega_i\), the magnitude and angle of the vector to the point \(s=\sigma+j\omega\) are $$ \begin{aligned} |s-p_i| &= \sqrt{(\sigma-\sigma_i)^2+(\omega-\omega_i)^2} \\ \angle(s-p_i) &= \mathrm{atan2}\left(\omega-\omega_i,\sigma-\sigma_i\right) \end{aligned} $$

Multiplication and division

While we’re on the subject, a quick note: multiplication and division of complex numbers is most easily done in polar form $$ \begin{align} Z_1Z_2 &= |Z_1|e^{j\angle{Z_1}}\ |Z_2|e^{j\angle{Z_2}} = |Z_1||Z_2|\,e^{j(\angle{Z_1}+\angle{Z_2})} \nonumber \\ \frac{Z_1}{Z_2} &= \frac{|Z_1|e^{j\angle{Z_1}}}{|Z_2|e^{j\angle{Z_2}}} = \frac{|Z_1|}{|Z_2|}\,e^{j(\angle{Z_1}-\angle{Z_2})} \nonumber \end{align} $$

Applying \(|K|=K\) and \(\angle{K}=\mathrm{atan2}(0,K)=0\), the magnitude and angle of the complete transfer function \(H(s)\) may be written as $$ \left\{ \begin{align} H(s) &= |H(s)|\ e^{j\angle{H(s)}}\nonumber\\ |H(s)| &= K \frac{\prod_{i=1}^m\left|(s-z_i)\right|}{\prod_{i=1}^n\left|(s-p_i)\right|} \nonumber \\ \angle{H(s)}&=\sum_{i=1}^m\angle(s-z_i)-\sum_{i=1}^n\angle(s-p_i) \nonumber \end{align} \right. $$
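
The sketch below (assuming numpy is available, with a made-up gain, zero and pole pair) evaluates a transfer function both directly and through the pole/zero vectors, confirming that the product of the vector magnitudes and the sum of the vector angles give the same \(|H(s)|\) and \(\angle H(s)\).

```python
import numpy as np

# example: H(s) = K (s - z1) / ((s - p1)(s - p2)), values made up for illustration
K = 2.0
zeros = np.array([-3.0])
poles = np.array([-1.0 + 2.0j, -1.0 - 2.0j])

s = 4.0j                                      # evaluate on the j*omega axis

# graphical evaluation: product of vector lengths, sum of vector angles
mag = K * np.prod(np.abs(s - zeros)) / np.prod(np.abs(s - poles))
ang = np.sum(np.angle(s - zeros)) - np.sum(np.angle(s - poles))

# direct evaluation for comparison
H = K * np.prod(s - zeros) / np.prod(s - poles)

print(np.isclose(mag, np.abs(H)))                                      # True
print(np.isclose(np.angle(H), np.arctan2(np.sin(ang), np.cos(ang))))   # True (angles compared modulo 2*pi)
```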

Notes

  • A time-continuous system is unstable when poles are in the right half of the \(s\)-plane.
  • In the \(s\)-plane, the values of \(H(s)\) along the imaginary axis give the frequency response of the system. That is, the Fourier transform is the Laplace transform evaluated at \(\sigma=0\).

Suggested next reading is Evaluating Transfer Functions.

Mechanical systems

\(\)Using Laplace transforms to solve mechanical ordinary differential equations.

Elements

Before we look at examples of mechanical systems, let’s recall the equations for the mechanical elements.

Spring

According to Hooke’s law, the reactive force is linearly proportional to the displacement and opposite to the direction of the force

Displacement
$$ f_s(t)=S\cdot x(t) \label{eq:spring} $$ where \(S\) is the spring stiffness [N/m].

Resistance

A dashpot provides a friction force linearly proportional to the velocity and opposite to the direction of the force.

Resistance
$$ f_r(t)=R\cdot \frac{\mathrm{d}x(t)}{\mathrm{d}t} \label{eq:damper} $$ where \(R\) is the resistance, or damping coefficient [Ns/m]

Mass

According to Newton’s second law of motion, the reactive force is linearly proportional to the acceleration and opposite to the direction of the force

Mass
$$ f_m(t)=M\cdot \frac{\mathrm{d}^2x(t)}{\mathrm{d}t^2} \label{eq:mass} $$ where \(M\) is the mass [kg]

First Order Example

A constant force is applied to a horizontal parallel damper and spring, starting at \(t=0\)

Spring and resistance in parallel
$$ f_{e}(t) = F_0\gamma(t) \label{eq:applied} $$

The sum of the forces must be zero. This combines the equation for the external force \(\eqref{eq:applied}\) with the equations for the spring \(\eqref{eq:spring}\) and damper \(\eqref{eq:damper}\). $$ \begin{align} f_r(t) + f_s(t) &= f_e(t) \nonumber \\ R\frac{\mathrm{d}x(t)}{\mathrm{d}t} + S x(t) &= F_0\gamma(t) \end{align} $$

Transform this ordinary differential equation to the Laplace domain, knowing that the system starts from rest, therefore \(\frac{\mathrm{d}x(0)}{\mathrm{d}t}=0\) and \(x(0)=0\).

$$ R\,s\,X(s)+ S X(s) = F_0\frac{1}{s} $$

Solve for \(X(s)\) $$ \begin{align} X(s) (s R +S)&=F_0\frac{1}{s} \nonumber \\ \Rightarrow\ X(s)&=\frac{F_0}{s(s R +S)} \end{align} $$

To return to the time domain \(x(t)\), we need the inverse Laplace transform. There is no table entry for this expression, but according to Heaviside it can be expressed as partial fractions. [swarthmore] $$ X(s)=\frac{F_0}{s(sR+S)}\equiv\frac{c_0}{s}+\frac{c_1}{sR+S} \label{eq:heaviside} $$

The constants \(c_{0,1}\) are found using Heaviside’s Cover-up Method [swarthmore, MIT-cu]: multiply \(\eqref{eq:heaviside}\) with respectively \(s\) and \((sR+S)\). $$ \left\{ \begin{align} \frac{\cancel{s}F_0}{\cancel{s}(sR+S)} &\equiv\frac{\cancel{s}c_0}{\cancel{s}}+\frac{sc_1}{sR+S} \nonumber\\ \frac{\cancel{(sR+S)}F_0}{\cancel{(sR+S)}s}&\equiv\frac{(sR+S)c_0}{s}+\frac{\cancel{(sR+S)}c_1}{\cancel{sR+S}}\nonumber \end{align} \right. $$

Given that these equations are true for any value of \(s\), choose two convenient values to find \(c_0\) and \(c_1\) $$ \left\{ \begin{eqnarray} c_0=\left.\frac{F_0}{sR+S}-\frac{s\,c_1}{sR+S}\right|_{s=0}&=\frac{F_0}{S}-\frac{0}{0+S}&=\frac{1}{S}F_0\label{eq:constants1}\\ c_1=\left.\frac{F_0}{s}-\frac{(sR+S)c_0}{s}\right|_{s=-\frac{S}{R}}&=-\frac{R}{S}F_0-\frac{0}{-\frac{S}{R}}&=-\frac{R}{S}F_0\nonumber \end{eqnarray} \right. $$

The unit step response \(x(t)\) follows from the inverse Laplace transform of \(\eqref{eq:heaviside}\) $$ \begin{align} x(t)&= \mathcal{L}^{-1}\left\{\frac{c_0}{s}\right\}+\mathcal{L}^{-1}\left\{\frac{c_1}{sR+S}\right\} & t\geq0 \nonumber \\ &= \mathcal{L}^{-1}\left\{\frac{c_0}{s}\right\}+\frac{1}{R}\mathcal{L}^{-1}\left\{\frac{c_1}{s+\frac{S}{R}}\right\} & t\geq0 \nonumber \\ &= c_0+\frac{1}{R}c_1e^{-\frac{S}{R}t} & t\geq0 \end{align} $$

Substituting the constants \(\eqref{eq:constants1}\) gives the unit step response $$ \begin{align} x(t)&=\frac{1}{S}F_0+\frac{1}{\cancel{R}}(-\frac{\cancel{R}}{S}F_0)e^{-\frac{S}{R}t} & t\geq0 \nonumber \\ &=\frac{1}{S}F_0\left(1-e^{-\frac{S}{R}t}\right) & t\geq0\\ \end{align} $$
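
The result can be cross-checked numerically. The sketch below (assuming numpy and scipy are available; the values of \(R\), \(S\) and \(F_0\) are made up) simulates the step response of \(\frac{X(s)}{F(s)}=\frac{1}{sR+S}\), scales it by \(F_0\), and compares it against \(\frac{F_0}{S}\left(1-e^{-\frac{S}{R}t}\right)\).

```python
import numpy as np
from scipy import signal

R, S, F0 = 2.0, 5.0, 1.5          # damping, stiffness, force amplitude (made-up values)

# displacement over force: X(s)/F(s) = 1 / (s R + S)
H = signal.lti([1.0], [R, S])

t = np.linspace(0, 5, 501)
_, x_unit = signal.step(H, T=t)   # response to a unit step force
x_num = F0 * x_unit               # scale by the applied force F0

x_analytic = (F0 / S) * (1 - np.exp(-(S / R) * t))
print(np.max(np.abs(x_num - x_analytic)))   # close to 0
```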

Laplace transform and proofs

\(\)Around 1785, Pierre-Simon marquis de Laplace, a French mathematician and physicist, pioneered a method for solving differential equations using an integral transform. This Laplace transform turns differential equations in time, into algebraic equations in the Laplace domain thereby making them easier to solve.

Definition

Pierre-Simon Laplace introduced a more general form of the Fourier Analysis that became known as the Laplace transform.  It transforms a time-domain function, \(f(t)\), into the \(s\)-plane by taking the integral of the function multiplied by \(e^{-st}\) from \(0^-\) to \(\infty\), where \(s\) is a complex number with the form \(s=\sigma +j\omega\). Coordinates in the \(s\)-plane use ‘\(j\)’ to designate the imaginary component, in order to distinguish it from the ‘\(i\)’ used in the normal complex plane. [wiki]

The one-sided Laplace transform is defined as

$$ \shaded{ \mathfrak{L}\left\{\,f(t)\,\right\}=F(s)=\int_{0^-}^\infty e^{-st}f(t)\ \mathrm{d}t } \label{eq:laplace} $$

In this equation

  • \(\mathfrak{L}\) symbolizes the Laplace transform.  \(F(s)\) is the Laplace domain equivalent of the time domain function \(f(t)\).
  • The lower limit of \(0^-\) emphasizes that the value at \(t=0\) is entirely captured by the transform.
  • Since the upper limit of the integral is \(\infty\), we must ask ourselves if the Laplace transform, \(F(s)\), even exists. It does, provided that the function \(f(t)\) doesn’t grow faster than an exponential function.

Overview

The sections below introduce commonly used properties, common input functions and initial/final value theorems, referred to from my various Electronics articles.  Many are based on the excellent notes from the linear physics group at Swarthmore College, and reproduced here mainly for my own understanding and reference.

Properties

Properties in time and Laplace domains
Time domain Laplace domain
Linearity $$a\cdot f(t)+b\cdot g(t)\nonumber$$ $$a\cdot F(s) + b\cdot G(s)\nonumber$$ proof
First Derivative $$\tfrac{\mathrm{d}}{\mathrm{d}t}f(t)\nonumber$$ $$s\,F(s)-f(0^-)\nonumber$$ proof
Second Derivative $$\tfrac{\mathrm{d}^2}{\mathrm{d}t^2}f(t)\nonumber$$ $$s^2F(s)-sf(0^-)-f'(0^-)\nonumber$$ proof
Integration $$\int_{0^-}^t f(\tau)\,\mathrm{d}\tau\nonumber$$ $$\frac{1}{s}F(s)\nonumber$$ proof
Convolution $$f(t)\ast g(t)\nonumber$$ $$F(s)\,G(s)\nonumber$$ proof

Functions

Functions in time and Laplace domains
Time domain Laplace domain
Impulse $$\delta(t)\nonumber$$ $$1\nonumber$$ proof
Unit Step $$\gamma(t)\nonumber$$ $$\frac{1}{s}\nonumber$$ proof
Ramp $$t\,\gamma(t)\nonumber$$ $$\frac{1}{s^2}\nonumber$$ proof
Exponential $$e^{-at}\gamma(t)\nonumber$$ $$\frac{1}{s+a},\ \forall_{a>0}\nonumber$$ proof
Sine $$\sin(\omega t)\,\gamma(t)\nonumber$$ $$\frac{\omega}{s^2+\omega^2}\nonumber$$ proof
Cosine $$\cos(\omega t)\,\gamma(t)\nonumber$$ $$\frac{s}{s^2+\omega^2}\nonumber$$ proof
Decaying Sine $$e^{-\alpha t}\sin(\omega t)\,\gamma(t)\nonumber$$ $$\frac{\omega}{(s+\alpha)^2+\omega^2}\nonumber$$ proof
Decaying Cosine $$e^{-\alpha t}\cos(\omega t)\,\gamma(t)\nonumber$$ $$\frac{s+\alpha}{(s+\alpha)^2+\omega^2}\nonumber$$ proof
Time Delayed $$f(t-a)\,\gamma(t-a)\nonumber$$ $$e^{-sa}F(s)\nonumber$$ proof
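
Several of these table entries can be reproduced symbolically. A sketch using sympy (assuming it is available) verifies the unit step, sine, and decaying cosine entries.

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
w, a = sp.symbols('omega alpha', positive=True)

# unit step gamma(t)  ->  1/s
print(sp.laplace_transform(sp.Heaviside(t), t, s, noconds=True))

# sin(w t) gamma(t)  ->  w / (s^2 + w^2)
print(sp.laplace_transform(sp.sin(w * t), t, s, noconds=True))

# e^(-a t) cos(w t) gamma(t)  ->  (s + a) / ((s + a)^2 + w^2)
F = sp.laplace_transform(sp.exp(-a * t) * sp.cos(w * t), t, s, noconds=True)
print(sp.simplify(F))
```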

Initial and Final Value Theorem

Initial and final value theorem
Time domain Laplace domain
Initial Value $$f(0^+)\nonumber$$ $$\lim_{s\to\infty}s\,F(s)\nonumber$$ proof
Final Value $$f(\infty)\nonumber$$ $$\lim_{s\to0}s\,F(s)\nonumber$$ proof

The proof for each of these transforms can be found below.

Property proofs

Linearity Property

The linearity property in the time domain

$$ u(t) = a\cdot f(t)+b\cdot g(t) $$

Transformed to the Laplace domain

$$ \begin{align} \mathfrak{L}\left\{\,a\cdot f(t)+b\cdot g(t)\,\right\} &=\int_{0^-}^{\infty}\left( a\cdot f(t)+ b\cdot g(t) \right) e^{-st}\,\mathrm{d}t \nonumber\\ &= a\underbrace{\int_{0^-}^{\infty} f(t)\, e^{-st}\,\mathrm{d}t}_{F(s)} + b\underbrace{\int_{0^-}^{\infty} g(t)\, e^{-st}\,\mathrm{d}t}_{G(s)} \end{align} $$

From which follows

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\laplace{\lfz{\mathscr{L}}} \shaded{ a\cdot f(t)+b\cdot g(t) \laplace a\cdot F(s) + b\cdot G(s) } \label{eq:linearity} $$

First Derivative Property

The first derivative in time is used in deriving the Laplace transform for capacitor and inductor impedance. The general formula

$$ u(t) = \frac{\mathrm{d}}{\mathrm{d}t}f(t) $$

Transformed to the Laplace domain using \(\eqref{eq:laplace}\)

$$ \mathfrak{L}\left\{\tfrac{\mathrm{d}}{\mathrm{d}t}f(t)\right\} = \int_{0^-}^{\infty} e^{-st}\tfrac{\mathrm{d}f(t)}{\mathrm{d}t} \mathrm{d}t = \int_{0^-}^{\infty} \underbrace{e^{-st}}_{u(t)} \underbrace{\tfrac{\mathrm{d}f(t)}{\mathrm{d}t}}_{v'(t)} \mathrm{d}t\Rightarrow \label{eq:derivative_} $$

Recall integration by parts, based on the product rule, from your favorite calculus class

$$ \left\{ \begin{align} \int_a^b u(t)\ v'(t)\ \mathrm{d}t&=\left[ u(t)\ v(t)\right]_a^b -\int_a^b u'(t)\ v(t)\ \mathrm{d}t \nonumber \\ u(t)&=e^{-st} \Rightarrow u'(t)=-s\,e^{-st} \nonumber \\ v'(t)&=\tfrac{\mathrm{d}}{\mathrm{d}t}f(t) \Rightarrow v(t)=f(t) \nonumber \end{align} \right. \label{eq:intbyparts} $$

Solve \(\eqref{eq:derivative_}\) using integration by parts

$$ \begin{align} \mathfrak{L}\left\{\tfrac{\mathrm{d}}{\mathrm{d}t}f(t)\right\}&= \left[ e^{-st}f(t)\right]_{0^-}^{\infty} - \int_{0^-}^\infty (-s)e^{-st}f(t)\mathrm{d}t\nonumber\\ &= \cancel{e^{-s\infty}f(\infty)} - \bcancel{e^{-s0^-}}f(0^-)+ s \underbrace{\int_{0^-}^\infty e^{-st}f(t)\mathrm{d}t}_{\mathfrak{L}f(t) = F(s)} \end{align} $$

The first term goes to zero because \(e^{-st}f(t)\) vanishes as \(t\to\infty\), which is a condition for existence of the transform. The last term is simply the definition of the Laplace transform multiplied by \(s\).

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\laplace{\lfz{\mathscr{L}}} \shaded{ \tfrac{\mathrm{d}}{\mathrm{d}t}f(t) \laplace s\,F(s)-f(0^-) } \label{eq:derivative} $$
The initial condition is taken at \(t=0^-\). This means that we only need to know the initial condition from just before the input signal is applied.
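
As a quick sanity check of this property (a sketch using sympy, assuming it is available, with the made-up signal \(f(t)=e^{-3t}\), which is continuous at \(t=0\)): \(\mathfrak{L}\left\{\frac{\mathrm{d}}{\mathrm{d}t}f(t)\right\}\) should equal \(s\,F(s)-f(0^-)\).

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

f = sp.exp(-3 * t)                           # made-up example signal, continuous at t = 0
F = sp.laplace_transform(f, t, s, noconds=True)

lhs = sp.laplace_transform(sp.diff(f, t), t, s, noconds=True)   # L{ df/dt }
rhs = s * F - f.subs(t, 0)                                      # s F(s) - f(0^-)

print(sp.simplify(lhs - rhs))                # 0
```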

Second Derivative Property

The second derivative in time is found using the Laplace transform for the first derivative \(\eqref{eq:derivative}\). The general formula

$$ u(t) = \frac{\mathrm{d}^2}{\mathrm{d}t^2}f(t) $$

Introduce \(g(t)=\frac{\mathrm{d}}{\mathrm{d}t}f(t)\)

$$ \left\{ \begin{align} u(t) &= \frac{\mathrm{d}}{\mathrm{d}t}g(t) \nonumber \\ g(t) &= \frac{\mathrm{d}}{\mathrm{d}t}f(t) \nonumber \end{align} \right. $$

From the transform of the first derivative \(\eqref{eq:derivative}\), we find the Laplace transforms of \(\frac{\mathrm{d}}{\mathrm{d}t}g(t)\) and \(\frac{\mathrm{d}}{\mathrm{d}t}f(t)\)

$$ \left. \begin{align} U(s) &= \mathfrak{L}\left\{\,\frac{\mathrm{d}}{\mathrm{d}t}g(t)\,\right\} = s\,G(s)-g(0^-) \nonumber \\ G(s)&=\mathfrak{L}\left\{\,\frac{\mathrm{d}}{\mathrm{d}t}f(t)\,\right\} = s\,F(s)-f(0^-) \nonumber \end{align} \right\} \overset{subst} \Rightarrow $$

Substitute \(G(s)\) in \(U(s)\)

$$ \begin{align} U(s)&=s\left(sF(s)-f(0^-) \right) - g(0^-)\nonumber\\ &=s^2F(s)-sf(0^-) - \left.\frac{\mathrm{d}}{\mathrm{d}t}f(t)\right|_{0^-} \end{align} $$

This brings us to the Laplace transform of the second derivative of \(f(t)\)

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\laplace{\lfz{\mathscr{L}}} \shaded{ \tfrac{\mathrm{d}^2}{\mathrm{d}t^2}f(t) \laplace s^2F(s)-sf(0^-)-f'(0^-) } \label{eq:secondderivative} $$

The initial conditions are taken at \(t=0^-\). This means that we only need to know the initial conditions from just before the input signal is applied.

Integration Property

Determine the Laplace transform of the integral

$$ u(t) = \int_{0^-}^t f(\tau)\mathrm{d}\tau $$

Apply the Laplace transform definition \(\eqref{eq:laplace}\)

$$ \mathfrak{L}\left\{ \int_{0^-}^t f(\tau)\,\mathrm{d}\tau \right\} = \int_{0^-}^{\infty} \underbrace{\left( \int_{0^-}^t f(\tau)\mathrm{d}\tau \right)}_{u(t)} \underbrace{e^{-st}}_{v'(t)} \mathrm{d}t\Rightarrow $$

$$ \left. \begin{align} \mathfrak{L}\left\{ \int_{0^-}^t f(\tau)\,\mathrm{d}\tau \right\}&=\int_{0^-}^{\infty}u(t)\ v'(t)\ \mathrm{d}t \nonumber \\ \int_a^b u(t)\ v'(t)\ \mathrm{d}t&=\left[ u(t)\ v(t)\right]_a^b - \int_a^b u'(t)\ v(t)\ \mathrm{d}t \nonumber \\ u(t)&=\int_{0^-}^t f(\tau)\mathrm{d}\tau \Rightarrow u'(t)=f(t) \nonumber \\ v'(t)&=e^{-st} \Rightarrow v(t)=-\tfrac{1}{s}e^{-st} \nonumber \end{align} \right\} $$

Again, solve using integration by parts

$$ \begin{align} \mathfrak{L}\left\{ \int_{0^-}^t f(\tau)\,\mathrm{d}\tau \right\} &= \int_{0^-}^{\infty} \underbrace{\left( \int_{0^-}^t f(\tau)\mathrm{d}\tau \right)}_{u(t)} \underbrace{e^{-st}}_{v'(t)} \mathrm{d}t \nonumber\\ &= \left[ \left(\int_{0^-}^t f(\tau)\mathrm{d}\tau\right) \left(-\frac{1}{s}e^{-st }\right) \right]_{0^-}^{\infty} -\int_{0^-}^\infty f(t) \left( -\frac{1}{s}e^{-st} \right) \mathrm{d}t \nonumber \\ &= -\frac{1}{s} \left[ e^{-st } \int_{0^-}^t f(\tau)\mathrm{d}\tau \right]_{0^-}^{\infty} + \frac{1}{s} \underbrace{ \int_{0^-}^\infty f(t) e^{-st} \mathrm{d}t }_{\mathfrak{L}f(t)=F(s)} \nonumber \\ &= -\frac{1}{s} \left( \cancel{e^{-s\infty }\int_{0^-}^\infty f(\tau)\mathrm{d}\tau} \ -\ e^{-s0^- }\cancel{\int_{0^-}^{0^-} f(\tau)\mathrm{d}\tau} \right) + \frac{1}{s}F(s) \end{align} $$

The first term goes to zero because \(e^{-st}\) vanishes as \(t\to\infty\), a condition for existence of the transform. In the second term, the exponential goes to one and the integral is \(0\) because the limits are equal. The last term is simply the definition of the Laplace transform multiplied by \(\frac{1}{s}\).

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\laplace{\lfz{\mathscr{L}}} \shaded{ \int_{0^-}^t f(\tau)\,\mathrm{d}\tau \laplace \frac{1}{s}F(s) } \label{eq:integration} $$

Convolution Property

Just to show the strength of the Laplace transform, we show the convolution property in the time domain of two causal functions

$$ u(t)=f(t) \ast g(t) = \int_{-\infty}^{\infty}f(\lambda)\,g(t-\lambda)\,\mathrm{d}\lambda $$
where \(\ast\) is the convolution operator.

Transformed to the Laplace domain

$$ \begin{align} \mathfrak{L}\left\{\,f(t) \ast g(t)\,\right\} &=\int_{0^-}^{\infty} \left( \int_{-\infty}^{\infty}f(\lambda)\,g(t-\lambda)\,\mathrm{d}\lambda \right) e^{-st}\mathrm{d}t \nonumber\\ &= \int_{-\infty}^{\infty} \int_{0^-}^{\infty}f(\lambda)\,g(t-\lambda)\, e^{-st}\mathrm{d}t\,\mathrm{d}\lambda &\mathrm{change\ order\ of\ integration}\nonumber\\ &= \int_{-\infty}^{\infty} f(\lambda) \int_{0^-}^{\infty}g\underbrace{(t-\lambda)}_{u}\, e^{-st}\mathrm{d}t\,\mathrm{d}\lambda &f(\lambda) \mathrm{\ independent\ of\ }t \end{align} $$

Substitute \(u=t-\lambda\)

$$ \begin{align} \mathfrak{L}\left\{\,f(t) \ast g(t)\,\right\} &= \int_{-\infty}^{\infty} f(\lambda) \int_{\underline{(-\lambda)^-}}^{\infty}g(u)\, e^{-s(u+\lambda)}\mathrm{d}u\,\mathrm{d}\lambda &g(u)=0,\ \forall u\lt 0 \nonumber\\ &=\int_{-\infty}^{\infty} f(\lambda) \int_{0}^{\infty}g(u)\, e^{-su}\underline{e^{-s\lambda}}\mathrm{d}u\,\mathrm{d}\lambda &e^{-s\lambda}\mathrm{\ independent\ of\ }u \nonumber\\ &=\int_{-\infty}^{\infty} f(\lambda)e^{-s\lambda} \underline{\int_{0}^{\infty}g(u) e^{-su}\mathrm{d}u}\,\mathrm{d}\lambda &\mathrm{inner\ integral\ independent\ of\ }\lambda \nonumber\\ &=\int_{\underline{-\infty}}^{\infty} f(\lambda)e^{-s\lambda} \mathrm{d}\lambda\ \int_{0}^{\infty}g(u) e^{-su}\mathrm{d}u &f(\lambda)=0,\ \forall \lambda\lt 0 \nonumber\\ &=\underbrace{\int_{0^-}^{\infty} f(\lambda)e^{-s\lambda} \mathrm{d}\lambda}_{F(s)}\ \underbrace{\int_{0}^{\infty}g(u) e^{-su}\mathrm{d}u}_{G(s)} &\mathrm{these\ are\ Laplace\ transforms} \end{align} $$

This gives us the Laplace transform of the convolution property

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\laplace{\lfz{\mathscr{L}}} \shaded{ f(t)\ast g(t) \laplace F(s)\,G(s) } \label{eq:convolution} $$
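
A numeric spot check of this property (a sketch assuming numpy is available, with the made-up causal signals \(f(t)=e^{-t}\gamma(t)\) and \(g(t)=e^{-2t}\gamma(t)\), evaluated at a single real value of \(s\)): the transform of the numerically computed convolution approximates \(F(s)\,G(s)=\frac{1}{(s+1)(s+2)}\).

```python
import numpy as np

dt = 1e-2
t = np.arange(0, 20, dt)

f = np.exp(-1.0 * t)                        # f(t) = e^(-t)  gamma(t)
g = np.exp(-2.0 * t)                        # g(t) = e^(-2t) gamma(t)

conv = np.convolve(f, g)[:len(t)] * dt      # numerical convolution (f * g)(t)

s = 0.7                                     # evaluate the transforms at a real s > 0

def laplace(x):
    """Riemann-sum approximation of the one-sided Laplace transform at s."""
    return np.sum(x * np.exp(-s * t)) * dt

# the three numbers agree up to the discretization error
print(laplace(conv), laplace(f) * laplace(g), 1 / ((s + 1) * (s + 2)))
```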

Function proofs

Impulse Function

The impulse function \(\delta(t)\) is often used as a theoretical input signal to study system behavior.  The definition is

$$ u(t)=\delta(t) = \begin{cases} \mathrm{undefined}, & t=0 \\ 0, & t\neq 0 \end{cases} \label{eq:impuls_def1} $$
and satisfies the condition
$$ \int_{-\infty}^{\infty}\delta(t)\,\mathrm{d}t = 1 \label{eq:impuls_def2} $$

In other words, the area is \(1\), so that \(\delta(t)\) is as high as \(\mathrm{d}t\) is narrow.

Apply the Laplace transform definition \(\eqref{eq:laplace}\)

$$ \mathcal{L}\left\{\delta(t)\right\} = \Delta(s) = \int_{0^-}^{\infty}e^{-st}\delta(t)\,\mathrm{d}t $$

Since the impulse is \(0\) everywhere but at \(t=0\), the upper limit of the integral can be changed to \(0^+\).

$$ \Delta(s) = \int_{0^-}^{0^+}e^{-st}\delta(t)\,\mathrm{d}t $$

The function \(e^{-st}\) is continuous at \(t=0\), and may be replaced by its value at \(t=0\)

$$ \Delta(s)=\left.e^{-st}\right|_{t=0}\int_{0^-}^{0^+}\delta(t)\,\mathrm{d}t = \int_{0^-}^{0^+}\delta(t)\,\mathrm{d}t $$

Substituting the condition \(\int_{-\infty}^{\infty}\delta(t)\,\mathrm{d}t=1\) from \(\eqref{eq:impuls_def2}\) gives us the Laplace transform of the impulse function

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\laplace{\lfz{\mathscr{L}}} \shaded{ \delta(t) \laplace 1 } \label{eq:impulse} $$

Unit Step Function

The unit or Heaviside step function, denoted \(\gamma(t)\), is defined as

$$ u(t) = \gamma(t) = \begin{cases} 0 & t<0 \\ 1 & t\geq 0 \\ \end{cases} \label{eq:unitstep_def_a} $$

The unit step function is related to the impulse function as

$$ \gamma(t) = \int_{-\infty}^t\delta(\tau)\,\mathrm{d}\tau $$

Apply the Laplace transform definition \(\eqref{eq:laplace}\)

$$ \begin{align} \Gamma(s)\,&=\int_{0^-}^\infty e^{-st}\,\gamma(t)\,\mathrm{d}t \nonumber \\ &= \int_{0^-}^\infty\,e^{-st}\,1\,\mathrm{d}t \nonumber \\ &= -\frac{1}{s}\left[e^{-st}\right]_{0^-}^\infty \end{align} $$

The term at the upper limit of the integral only goes to zero if the real part of the complex variable \(s\) is positive, so that \(\left.e^{-st}\right|_{t\to\infty}=0\)

$$ \begin{align} \Gamma(s)\,&=-\frac{1}{s}\left(e^{-s\infty}-e^{-s0}\right) = -\frac{1}{s}\left(0-1\right) \end{align} $$

This gives us the Laplace transform of the unit step function

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\laplace{\lfz{\mathscr{L}}} \shaded{ \gamma(t) \laplace \frac{1}{s} } \label{eq:unitstep} $$

Ramp Function

The ramp function is defined below [swarthmore]. As before, we use \(\gamma(t)\) to avoid confusion with the European symbol for a voltage source \(u(t)\), where \(u\) stands for “Potentialunterschied”, meaning potential difference. The capital letter of \(\gamma\) is \(\Gamma\), which looks a bit like the step function.

$$ u(t) = t\,\gamma(t) \label{eq:ramp_def_a} $$

The ramp function is related to the unit step  function as

$$ u(t) = \int\gamma(t)\,\mathrm{d}t $$

Apply the Laplace transform definition \(\eqref{eq:laplace}\)

$$ U(s) = \mathcal{L}\left\{\,t\,\right\}\,=\int_{0^-}^\infty \underbrace{e^{-st}}_{v'(t)}\,\underbrace{t}_{u(t)}\,\mathrm{d}t \label{eq:ramp1} $$

Use integration by parts

$$ \left\{ \begin{align} \int_a^b u(t)\ v'(t)\ \mathrm{d}t&=\left[ u(t)\ v(t)\right]_a^b - \int_a^b u'(t)\ v(t)\ \mathrm{d}t\nonumber\\ u(t) &= t \Rightarrow u'(t)=1 \nonumber \\ v'(t) &= e^{-st} \Rightarrow v(t)=-\tfrac{1}{s}e^{-st} \nonumber \end{align} \right. \label{eq:intbyparts2} $$

Solve \(\eqref{eq:ramp1}\) using integration by parts

$$ \begin{align} \mathfrak{L}\left\{\,t\,\right\}&= \left[ (t) \cdot (-\frac{1}{s}e^{-st})\right]_{0^-}^{\infty} -\int_{0^-}^\infty 1\cdot (-\frac{1}{s}e^{-st})\mathrm{d}t\nonumber\\ &= -\left[ \frac{t}{s}e^{-st}\right]_{0^-}^{\infty} +\frac{1}{s}\underbrace{\int_{0^-}^\infty e^{-st}\mathrm{d}t}_{\Gamma(s)=\frac{1}{s}}\nonumber\\ &= -\left(\frac{\infty}{s}e^{-s\infty} -\cancel{\frac{0}{s}e^{-s0}} \right) +\frac{1}{s^2}\nonumber\\ &= -\left(\cancel{\frac{\infty}{se^{s\infty}}} -0 \right) +\frac{1}{s^2} \end{align} $$

This gives us the Laplace transform of the ramp function

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\laplace{\lfz{\mathscr{L}}} \shaded{ t\,\gamma(t) \laplace \frac{1}{s^2} } \label{eq:ramp} $$

Exponential Function

An exponential function in the time domain, starting at \(t=0\)

$$ u(t)= e^{-at}\cdot \gamma(t) $$

The step function becomes 1 at the lower limit of the integral, and is \(0\) before that

$$ \begin{align} \mathfrak{L}\left\{\, e^{-at}\gamma(t) \right\} &= \int_{0^-}^{\infty} e^{-at}\gamma(t)\, e^{-st}\mathrm{d}t \nonumber\\ &= \int_{0^-}^{\infty} e^{-(s+a)t}\mathrm{d}t \nonumber\\ &= \left[ \frac{1}{-(s+a)}e^{-(s+a) t} \right]_{0^-}^{\infty} \nonumber\\ &= -\frac{1}{s+a}\left( \bcancel{e^{-(s+a) \infty}} - \cancelto{1}{e^{-(s+a) 0^-}} \right) \nonumber\\ &= \frac{1}{s+a} & \Re(s)+a\gt0 \end{align} $$

Gives us the Laplace transform of the exponential time function

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\laplace{\lfz{\mathscr{L}}} \shaded{ e^{-at}\gamma(t) \laplace \frac{1}{s+a} },\ \forall_{a>0 } \label{eq:exponential} $$

Sine Function

Another popular input signal is the sine wave, starting at \(t=0\)

$$ u(t) = f(t) = \sin(\omega t)\,\gamma(t) \label{eq:sin_def} $$

Apply the definition of the Laplace transform \(\eqref{eq:laplace}\)

$$ \begin{align} \mathcal{L}\left\{f(t)\right\}=F(s) &=\int_{0^-}^{\infty}e^{-st}\sin(\omega t)\gamma(t) \,\mathrm{d}t \nonumber \\ &=\int_{0^-}^{\infty}e^{-st}\sin(\omega t) \,\mathrm{d}t \end{align}\label{eq:sinlaplace} $$

Apply the Euler identity for sine

$$ \begin{align} F(s)&=\int_{0^-}^{\infty}e^{-st}\,\frac{e^{j\omega t}-e^{-j\omega t}}{2j} \,\mathrm{d}t \nonumber \\ &=\frac{1}{2j}\int_{0^-}^{\infty}e^{-st}\,\left(e^{j\omega t}-e^{-j\omega t}\right) \,\mathrm{d}t \nonumber \\ &=\frac{1}{2j}\int_{0^-}^{\infty}e^{-st}\,e^{j\omega t}\,\mathrm{d}t -\frac{1}{2j}\int_{0^-}^{\infty}e^{-st}\,e^{-j\omega t} \,\mathrm{d}t \nonumber \\ &=\frac{1}{2j}\int_{0^-}^{\infty}e^{(-s+j\omega) t}\,\mathrm{d}t -\frac{1}{2j}\int_{0^-}^{\infty}e^{(-s-j\omega)t} \,\mathrm{d}t \end{align} \label{eq:sin2} $$

The simple definite integral \(\int_{0^-}^{\infty}e^{-(s+a) t}\,\mathrm{d}t\) was already solved as part of \(\eqref{eq:exponential}\)

$$ \int_{0^-}^{\infty} e^{-(s+a) t}\,\mathrm{d}t = \frac{1}{s+a} \label{eq:sin3} $$

Substitute \(\eqref{eq:sin3}\)

$$ \begin{align} F(s) &= \frac{1}{2j}\left( \frac{1}{s-j\omega} \right) - \frac{1}{2j}\left( \frac{1}{s+j\omega} \right) \nonumber \\ &= \frac{1}{2j}\left( \frac{1}{s-j\omega} - \frac{1}{s+j\omega} \right) \end{align} $$

Bring it under a common denominator

$$ \begin{align} F(s)&= \frac{1}{2j}\left( \frac{1}{(s-j\omega)} \frac{(s+j\omega)}{(s+j\omega)} - \frac{1}{(s+j\omega)} \frac{(s-j\omega)}{(s-j\omega)} \right) \nonumber \\ &= \frac{1}{2j} \frac{(s+j\omega)-(s-j\omega)}{s^2-js\omega+js\omega-j^2\omega^2} \nonumber\\ &= \frac{1}{\bcancel{2j}} \frac{\bcancel{2j}\omega}{s^2\cancel{-js\omega+js\omega}+\omega^2} \end{align} $$

Et voilà, the Laplace transform of the sine function

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\laplace{\lfz{\mathscr{L}}} \shaded{ \sin(\omega t)\,\gamma(t) \laplace \frac{\omega}{s^2+\omega^2} } \label{eq:sine} $$

Cosine Function

Yet another popular input signal is the cosine wave, starting at \(t=0\)

$$ u(t) = f(t) = \cos(\omega t)\,\gamma(t) \label{eq:cos_def} $$

The Laplace transform of the cosine is similar to that of the sine function, except that it uses Euler’s identity for cosine

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\laplace{\lfz{\mathscr{L}}} \shaded{ \cos(\omega t)\,\gamma(t) \laplace \frac{s}{s^2+\omega^2} } \label{eq:cosine} $$

Decaying Sine Function

Consider a decaying sine wave, starting at \(t=0\)

$$ u(t) = f(t) = e^{-\alpha t}\sin(\omega t)\,\gamma(t) \label{eq:decayingsine_def} $$

Apply the Euler identity for sine

$$ \begin{align} f(t) &= e^{-\alpha t}\sin(\omega t)\,\gamma(t) \nonumber \\ &= e^{-\alpha t}\frac{e^{j\omega t}-e^{-j\omega t}}{2j}\,\gamma(t)\nonumber \\ &= \frac{e^{(j\omega-\alpha)t}-e^{-(j\omega+\alpha) t}}{2j}\,\gamma(t) \nonumber \\ &= \frac{1}{2j}\left(e^{(j\omega-\alpha)t}-e^{-(j\omega+\alpha) t}\right)\gamma(t) \end{align} $$

We recognize the exponential functions, and apply their Laplace transforms \(\eqref{eq:exponential}\)

$$ \begin{align} F(s) &= \frac{1}{2j}\left( \frac{1}{s-(j\omega-\alpha)}- \frac{1}{s+(j\omega+\alpha)} \right)\nonumber\\ &= \frac{1}{2j}\left( \frac{1}{s+\alpha-j\omega}- \frac{1}{s+\alpha+j\omega} \right) \end{align} $$

Put over a common denominator

$$ \begin{align} F(s) &= \frac{1}{2j}\left( \frac{1}{s+\alpha-j\omega}- \frac{1}{s+\alpha+j\omega} \right) \nonumber \\ &= \frac{1}{2j}\left( \frac{1}{(s+\alpha-j\omega)}\frac{(s+\alpha+j\omega)}{(s+\alpha+j\omega)} - \frac{1}{(s+\alpha+j\omega)}\frac{(s+\alpha-j\omega)}{(s+\alpha-j\omega)} \right) \nonumber \\ &= \frac{1}{2j}\left( \frac{(s+\alpha+j\omega) -(s+\alpha-j\omega)}{(s+\alpha)^2-(j\omega)^2} \right) =\frac{1}{2j}\left( \frac{\cancel{s}\cancel{+\alpha}+j\omega\cancel{-s}\cancel{-\alpha}+j\omega}{(s+\alpha)^2+\omega^2} \right) \nonumber \\ &= \frac{1}{\cancel{2j}}\left( \frac{\cancel{2j}\omega}{(s+\alpha)^2+\omega^2} \right) \end{align} $$

The Laplace transform of the decaying sine

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\laplace{\lfz{\mathscr{L}}} \shaded{ e^{-\alpha t}\sin(\omega t)\,\gamma(t) \laplace \frac{\omega}{(s+\alpha)^2+\omega^2} } \label{eq:decayingsine} $$

Decaying Cosine Function

Consider a decaying cosine wave, starting at \(t=0\)

$$ u(t) = f(t) = e^{-\alpha t}\cos(\omega t)\,\gamma(t) \label{eq:decayingcosine_def} $$

The Laplace transform of the decaying cosine is similar to that of the decaying sine function, except that it uses Euler’s identity for cosine.

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\laplace{\lfz{\mathscr{L}}} \shaded{ \begin{align*} e^{-\alpha t}\cos(\omega t)\,\gamma(t) \laplace \frac{s+\alpha}{(s+\alpha)^2+\omega^2} \end{align*} } \label{eq:decayingcosine} $$

Time Delayed Function

A delayed function in the time domain, starting at \(t=a\)

$$ u(t) = f(t-a)\cdot \gamma(t-a) $$

The delayed step function \(\gamma(t-a)\)

$$ \gamma(t-a) = \begin{cases} 0 & t\lt a \\ 1 & t\geq a \\ \end{cases} $$

The delayed step function simplifies the Laplace transform because \(\gamma(t-a)\) is \(1\) starting at \(t=a\), and is \(0\) before

$$ \begin{align} \mathfrak{L}\left\{\,f(t-a)\cdot \gamma(t-a)\,\right\} &=\int_{0^-}^{\infty}\left(\, f(t-a)\cdot \gamma(t-a)\, \right) e^{-st}\mathrm{d}t \nonumber \\ &= \int_{a^-}^{\infty} f(t-a)\cdot e^{-st}\mathrm{d}t \end{align} $$

Substitute \(u=t-a\)

$$ \begin{align} \mathfrak{L}\left\{\,f(t-a)\cdot \gamma(t-a)\,\right\} &= \int_{a^-}^{\infty} f(u) e^{-s(u+a)}\mathrm{d}u,&u=t-a \nonumber \\ &= e^{-sa} \underbrace{\int_{0^-}^{\infty} f(u) e^{-su}\mathrm{d}u}_{F(s)} \end{align} $$

The last integral is simply the definition of the Laplace transform.  Together it gives us the Laplace transform of a time delayed function.

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\laplace{\lfz{\mathscr{L}}} \shaded{ f(t-a)\,\gamma(t-a) \laplace e^{-sa}F(s) } \label{eq:timedelay} $$

Initial Value Theorem

The right-sided initial value of a function, \(f(0^+)\), follows from the Laplace transform of its derivative \(\eqref{eq:derivative}\)

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\laplace{\lfz{\mathscr{L}}} \begin{align} \tfrac{\mathrm{d}}{\mathrm{d}t}f(t) \laplace s\,F(s)-f(0^-) \end{align} $$

Invoke the definition of the Laplace transform for the First Derivative theorem \(\eqref{eq:derivative}\), and split the integral

$$ \begin{align} s\,F(s)-f(0^-)&=\mathfrak{L}\left\{\, \tfrac{\mathrm{d}}{\mathrm{d}t}f(t) \,\right\} \nonumber\\ &= \int_{0^-}^{\infty} \underbrace{\tfrac{\mathrm{d}}{\mathrm{d}t}f(t)}_{f'(t)}\,e^{-st}\mathrm{d}t &f'=\tfrac{\mathrm{d}}{\mathrm{d}t}f(t) \nonumber\\ &= \int_{0^-}^{\infty} f'(t)\,e^{-st}\mathrm{d}t &\mathrm{split\ integral} \nonumber\\ &= \int_{0^-}^{0^+} f'(t)\,e^{-st}\mathrm{d}t + \int_{0^+}^{\infty} f'(t)\,e^{-st}\mathrm{d}t \end{align} $$

Take the limit as \(s\to\infty\)

$$ \begin{align} \lim_{s\to\infty}\left(s\,F(s)-f(0^-)\right) &= \lim_{s\to\infty}\left( \int_{0^-}^{0^+} f'(t)\,e^{-st}\mathrm{d}t + \int_{0^+}^{\infty} f'(t)\,e^{-st}\mathrm{d}t \right) \end{align} $$

Take the terms that don’t depend on \(s\) out of the limit. When substituting \(s\to\infty\), the second integral goes to \(0\); in the first integral, \(e^{-st}\to1\) because the interval \([0^-,0^+]\) is infinitesimally short.

$$ \begin{align} \lim_{s\to\infty}\left(s\,F(s)\right) - f(0^-) &= \int_{0^-}^{0^+} f'(t)\,e^{-st}\mathrm{d}t + \lim_{s\to\infty}\left( \int_{0^+}^{\infty} f'(t)\,\cancelto{0}{e^{-st}}\mathrm{d}t \right) \nonumber\\ &= \int_{0^-}^{0^+} f'(t)\,e^{-st}\mathrm{d}t, & \mathrm{where\ }\int f'(t)\,\mathrm{d}t=f(t) \nonumber\\ &= \left[f(t)\right]_{0^-}^{0^+} \nonumber\\ \lim_{s\to\infty}\left(s\,F(s)\right) - \cancel{f(0^-)} &= f(0^+)-\cancel{f(0^-)} \end{align} $$

The initial value theorem follows as

$$ \shaded{ f(0^+) = \lim_{s\to\infty}\left(s\,F(s)\right) } \label{eq:initialvalue} $$

Final Value Theorem

The final value of a function, \(f(\infty)\), follows from the Laplace transform of its derivative \(\eqref{eq:derivative}\).  Note that functions such as sine and cosine don’t have a final value

$$ \def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}} \def\laplace{\lfz{\mathscr{L}}} \begin{align} \tfrac{\mathrm{d}}{\mathrm{d}t}f(t) \laplace s\,F(s)-f(0^-) \end{align} $$

Similarly to the initial value theorem, we start with the First Derivative \(\eqref{eq:derivative}\) and apply the definition of the Laplace transform \(\eqref{eq:laplace}\), but this time with the left and right of the equal sign swapped

$$ \begin{align} s\,F(s)-f(0^-) &= \mathfrak{L}\left\{\, \tfrac{\mathrm{d}}{\mathrm{d}t}f(t) \,\right\} \nonumber \\ &= \int_{0^-}^{\infty} \tfrac{\mathrm{d}}{\mathrm{d}t}f(t) \,e^{-st}\mathrm{d}t \end{align} $$

Take the limit as \(s\to 0\)

$$ \lim_{s\to0}\left( s\,F(s)-f(0^-) \right) = \lim_{s\to0}\left( \int_{0^-}^{\infty} \tfrac{\mathrm{d}}{\mathrm{d}t}f(t) \,e^{-st}\mathrm{d}t \right) $$

Take the terms out of the limit that don’t depend on \(s\), and \(\lim_{s\to0}e^{-st}=1\) inside the integral

$$ \lim_{s\to0}\left( s\,F(s)\underline{-f(0^-)} \right) = \lim_{s\to0}\left( \int_{0^-}^{\infty} \tfrac{\mathrm{d}}{\mathrm{d}t}f(t) \,\cancelto{1}{e^{-st}}\mathrm{d}t \right) $$

The integral doesn’t depend on \(s\)

$$ \begin{align} \lim_{s\to0}\left( s\,F(s) \right)-f(0^-) &= \int_{0^-}^{\infty} \tfrac{\mathrm{d}}{\mathrm{d}t}f(t) \,\mathrm{d}t ,&\int\tfrac{\mathrm{d}}{\mathrm{d}t}f(t)\,\mathrm{d}t=f(t) \nonumber\\ &= [f(t)]_{0^-}^{\infty} \nonumber\\ \lim_{s\to0}\left( s\,F(s) \right)\cancel{-f(0^-)} &= f(\infty)\cancel{-f(0^-)} \end{align} $$

The final value theorem follows as

$$ \shaded{ f(\infty) = \lim_{s\to0}\left( s\,F(s) \right) } \label{eq:finalvalue} $$
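
Both theorems are easy to apply. A sketch using sympy (assuming it is available), with the made-up \(F(s)=\frac{1}{s(s+2)}\), the unit step response of a first-order system: the limits give \(f(0^+)=0\) and \(f(\infty)=\frac{1}{2}\), matching the inverse transform \(f(t)=\frac{1}{2}\left(1-e^{-2t}\right)\gamma(t)\).

```python
import sympy as sp

s, t = sp.symbols('s t', positive=True)

F = 1 / (s * (s + 2))                   # made-up example F(s)

print(sp.limit(s * F, s, sp.oo))        # initial value f(0+) = 0
print(sp.limit(s * F, s, 0))            # final value   f(oo) = 1/2

# cross-check against the inverse transform
print(sp.inverse_laplace_transform(F, s, t))
```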

Suggested next reading is Transfer Functions.

Copyright © 1996-2022 Coert Vonk, All Rights Reserved