Z-transform

\(
\require{AMSsymbols}
\def\lfz#1{\overset{\Large#1}{\,\circ\kern-6mu-\kern-7mu-\kern-7mu-\kern-6mu\bullet\,}}
\def\lfzraised#1{\raise{10mu}{#1}}
\def\laplace{\lfz{\mathscr{L}}}
\def\fourier{\lfz{\mathcal{F}}}
\def\ztransform{\lfz{\mathcal{Z}}}
\require{cancel}
\newcommand\ccancel[2][black]
{\color{#1}{\cancel{\color{black}{#2}}}}
\newcommand\ccancelto[3][black]
{\color{#1}{\cancelto{#2}{\color{black}{#3}}}}
\)What is now called the Z-transform (named in honor of Lotfi Zadeh) was known to the mathematician and astronomer Pierre-Simon Laplace around 1785. With the introduction of digital sampled-data systems, the transform was rediscovered by Witold Hurewicz in 1947, and developed by Lotfi Zadeh and John Ragazzini around 1952 as a way to solve linear, constant-coefficient difference equations.

As we will see, the convolution property makes the Z-transform a powerful tool in analyzing sampled-data systems.

Introduction

Just as causal continuous systems are controlled by differential equations

$$
\begin{align}
\sum_{k=0}^{N}a_k\frac{\text{d}^ky(t)}{\text{d}t^k}&=
\sum_{k=0}^{M}b_k\frac{\text{d}^kx(t)}{\text{d}t^k}\quad\Rightarrow\nonumber\\[25mu]
a_0y(t)+a_1y^{\prime}(t)+a_2y^{\prime\prime}(t)+\ldots&=
b_0x(t)+b_1x^{\prime}(t)+b_2x^{\prime\prime}(t)+\ldots\nonumber
\end{align}\nonumber
$$
Linear constant-coefficient differential equation
expresses the relation between input \(x(t)\) and output \(y(t)\)

causal discrete systems operate in accordance with difference equations.

$$
\begin{align}
\sum_{k=0}^N a_k\,y[n-k]&=
\sum_{k=0}^M b_k\,x[n-k]\quad\Rightarrow\nonumber\\[10mu]
a_0y[n]+a_1y[n-1]+a_2y[n-2]+\ldots\nonumber
&=b_0x[n]+b_1x[n-1]+b_2x[n-2]+\ldots\nonumber\\
\end{align}
$$
Linear constant-coefficient difference equation
expresses the relation between input \(x[n]\) and output \(y[n]\)

From the difference equation we can derive the system characteristics such as the impulse response, step response and frequency response.

Unlike the continuous-time case, causal difference equations can be iterated just as a computer program would. All one needs to do is rewrite the difference equation so that the term \(y[n]\) is on the left, and then iterate forward in time. This gives each value of the output sequence without ever obtaining a general expression for \(y[n]\). In this article, however, we will look for a general analytical expression for \(y[n]\) using the Z-transform.
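For instance, the hypothetical first-order difference equation \(y[n]-\tfrac{1}{2}\,y[n-1]=x[n]\), driven by a unit step, can be iterated as follows:

```python
def iterate(num_samples, a=0.5):
    # y[n] = x[n] + a*y[n-1], with x[n] the unit step gamma[n]
    y = []
    y_prev = 0.0                 # causal: y[n] = 0 for n < 0
    for n in range(num_samples):
        y_n = 1.0 + a * y_prev   # x[n] = 1 for n >= 0
        y.append(y_n)
        y_prev = y_n
    return y

print(iterate(5))                # [1.0, 1.5, 1.75, 1.875, 1.9375]
```

Iteration produces the output values one at a time; the Z-transform approach instead yields the closed form \(y[n]=2-\left(\tfrac{1}{2}\right)^n\) directly.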

The Z-transform can be thought of as an operator that transforms a discrete sequence into a continuous algebraic function of the complex variable \(z\). As we will see, one of the nice features of this transform is that a convolution in time transforms to a simple multiplication in the \(z\)-domain.

Unilateral Z-Transform

We solve difference equations by taking the Z-transform of both sides, solving the resulting algebraic equation for the output \(Y(z)\), and then applying the inverse transform to obtain \(y[n]\).

Assuming causal filters, the output of the filter will be zero for \(t\lt 0\).

Sampling creates a discontinuous function

For digital systems, time is not continuous but passes at discrete intervals. When a continuous-time signal is measured every \(T\) seconds, the resulting signal is said to be discrete with sampling period \(T\).

Digital System Sampling
Sampling music to be recorded on Compact Disc

To help understand the sampling process, assume a continuous function \(x_c(t)\) as shown below

LFZ Bilateral Continuous Function
Example of continuous function, \(x_c(t)\)

To work toward a mathematical representation of the sampling process, consider a train of evenly spaced impulse functions starting at \(t=0\). This so called Dirac comb, \(s(t)\), has a spacing of \(T>0\) and contains \(t=0\).

LFZ Unilateral Dirac Comb Function
Dirac comb, \(s(t)\)

The Dirac comb \(s(t)\) can be expressed as
$$
s(t)=\sum_{n=0}^\infty{\delta(t-nT)}\label{eq:combcond}
$$ where the impulse function \(\delta(t-nT)\) must satisfy the condition
$$
\int_{-\infty}^{\infty}\delta(t-nT)\,\mathrm{d}t=1
$$

When multiplying the Dirac comb \(s(t)\) with a continuous-time signal \(x_c(t)\), the signal \(x_c(t)\) scales each impulse of the comb

Impulse train modulator
Impulse train modulator
LFZ Unilateral Sampled Function
Example of Dirac comb applied to \(x_c(t)\)

The resulting signal \(x_s(t)\) follows from substituting \(s(t)\) from equation \(\eqref{eq:combcond}\) as
$$
\begin{align}
x_s(t)&\triangleq x_c(t)\,s(t)\label{eq:fstar0}\\
&=x_c(t)\sum_{n=0}^{\infty}{\delta(t-nT)}\nonumber\\
&=\sum_{n=0}^{\infty}{x_c(t)\ \underbrace{\delta(t-nT)}_{\text{delayed impulse}}}\label{eq:fstar}
\end{align}
$$

The impulse function \(\delta(t-nT)\) is \(0\) everywhere except at \(t=nT\), so by the so-called sifting property we can replace \(x_c(t)\) inside the sum with
$$
\begin{align}
x_c(t)\,\delta(t-nT)&=x_c(nT)\,\delta(t-nT)\label{eq:ft}
\end{align}
$$ so that $$
x_s(t)=\sum_{n=0}^{\infty}{x_c(nT)\ \delta(t-nT)}\label{eq:fstarnT}
$$

Work towards a continuous function

The goal is to form a continuous algebraic expression, so we can use algebra to manipulate the difference equations.

Start with the Laplace transform of sampled signal \(x_s(t)\) from equation \(\eqref{eq:fstarnT}\)
$$
\begin{align}
x_s(t) \laplace
X_s(s)\triangleq&\int_{0^-}^\infty e^{-st}\overbrace{\sum_{n=0}^{\infty}x_c(nT)\ \delta(t-nT)}^{x_s(t)}\ \mathrm{d}t\nonumber\\
=&\int_{0^-}^\infty \sum_{n=0}^{\infty}e^{-st}\,x_c(nT)\ \delta(t-nT)\ \mathrm{d}t\label{eq:laplace0}
\end{align}
$$

Once more, the impulse function \(\delta(t-nT)\) is \(0\) everywhere except at \(t=nT\), the “sifting property”, so under the integral we can replace \(\mathrm{e}^{-st}\) with
$$
\mathrm{e}^{-st}=\mathrm{e}^{-snT}\label{eq:est}
$$

After substituting \(\eqref{eq:est}\) in \(\eqref{eq:laplace0}\), the terms \(\mathrm{e}^{-snT}\) and \(x_c(nT)\) are independent of \(t\) and can be taken outside of the integration.
$$
\begin{align}
X_s(s)&=\int_{0^-}^\infty \sum_{n=0}^\infty e^{-s\color{blue}{nT}}x_c(\color{blue}{nT})\ \delta(t-nT)\ \mathrm{d}t\nonumber\\
&=\sum_{n=0}^\infty e^{-s{nT}}x_c(nT)\underbrace{\int_{0^-}^\infty \delta(t-nT)\ \mathrm{d}t}_{=1\text{, unit-area impulse}}\nonumber\\
&=\sum_{n=0}^\infty\ e^{-s{nT}}\ x_c(nT)\label{eq:Fs}
\end{align}
$$

The Z-transform follows

Define \(z\) and \(x[n]\) as
$$
\shaded{\begin{align}
z&\triangleq e^{sT}\nonumber\\
x[n]&\triangleq x_c(nT)\nonumber
\end{align}}\label{eq:z}
$$

The form \(x[n]\) hides the scaling with sample period \(T\) and matches the notation for computer arrays. Anytime you see an \([n]\), you can translate to seconds by replacing it with \((nT)\). Be careful with integer expressions such as \([n-k]\), which stands for \(((n-k)T)\) seconds, not \((nT-k)\).
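A quick illustration of this convention, with hypothetical values for \(T\), \(n\) and \(k\):

```python
# The sample period T and the indices are hypothetical values,
# chosen only to illustrate the indexing convention.
T = 0.25                 # sample period [s]
n, k = 10, 3

t = (n - k) * T          # [n-k] stands for ((n-k)T) seconds
wrong = n * T - k        # not (nT - k)
print(t)                 # 1.75
print(wrong)             # -0.5
```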

From equation \(\eqref{eq:z}\) follows
$$
\ln z=sT\ \Rightarrow\ s=\tfrac{1}{T}\ln z\label{eq:s}
$$

Substitute \((\ref{eq:z},\ref{eq:s})\) in \(\eqref{eq:Fs}\) and apply the power rule \(x^{ab}=(x^a)^b\). Call the function \(X(z)\), because \(z\) is the only variable left after the substitution \(s=\tfrac{1}{T}\ln z\)
$$
X(z)=\sum_{n=0}^\infty\ \overbrace{z^{-n}}^{(e^{sT})^{-n}}\ x[n]
$$

The unilateral Z-transform of the discrete function \(x[n]\) follows as
$$
\shaded{
x[n]
\ztransform
X(z)=\sum_{n=0}^\infty z^{-n}\ x[n]}
\label{eq:ztransform}
$$ In this complex power series, \(z\) may be any complex number, \(z\in\mathbb{C}\).

Note that we use the notation \(\ztransform\) as equivalent to the more common Z-transform notation \(\mathfrak{Z}\left\{\,x[n]\,\right\}\).

In review: the Z-transform maps a sequence \(x[n]\) to a continuous polynomial \(X(z)\) of the complex variable \(z\).
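As a numeric sanity check of this definition: for the geometric sequence \(x[n]=a^n\), the partial sums of the series should approach the closed form \(\frac{z}{z-a}\) whenever \(|z|\gt|a|\). The values of \(a\) and \(z\) below are arbitrary illustrative choices.

```python
import cmath

# Partial sums of X(z) = sum_{n>=0} x[n] z^(-n) for x[n] = a^n,
# compared against the closed form z/(z-a), valid for |z| > |a|.
a = 0.5
z = 2 * cmath.exp(0.3j)                   # any z with |z| > |a|

partial_sum = sum(a**n * z**(-n) for n in range(200))
closed_form = z / (z - a)
print(abs(partial_sum - closed_form))     # ~0
```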

Normalized frequency

The discrete signal only exists at times \(t=nT\) where \(n=0,1,2,\ldots\) By normalizing time \(t\) with the sampling interval \(T\), using definition \(\eqref{eq:z}\), we get the normalized time, measured in “samples”, on the time axis

$$
x[n]\triangleq x_c(nT)\nonumber
$$ We highlight the normalized time by using the \([n]\) notation.

When using normalized time, other time-dependent variables should be normalized as well. The angular frequency \(\omega\), measured in [rad/s], normalizes to the normalized angular frequency with units of [rad/sample] by multiplying it with the sample period \(T\) [s/sample]

Conversion to/from normalized
  • time: [s] \(\xrightarrow{\div T}\) [sample]
  • angular frequency: [rad/s] \(\xrightarrow{\times T}\) [rad/sample]
  • natural frequency: [cycles/s] \(\xrightarrow{\times T}\) [cycles/sample]

Using normalized frequency allows an author to present concepts independent of sample rate, but it comes at a loss of clarity as \(T\) and \(f_s\) are omitted from expressions.

When visualizing variable \(z\), the normalized angular frequency \(\omega T\) corresponds to the angle with the positive horizontal axis.

z-plane frequency

Formulas expressed in terms of \(f_{s}\) and/or \(T\) are not normalized, and can readily be converted to normalized frequency by setting those parameters to \(1\). The inverse is accomplished by replacing instances of the angular frequency \(\omega\) with \(\omega T\).

Note that some authors use \(\omega\) for normalized angular frequency in [rad/sample], and \(\Omega\) for angular frequency in [rad/s]. Here we avoid overloading \(\omega\), and instead use the product \(\omega T\) to refer to normalized angular frequency. By doing so, it is clear that the angular frequency \(\omega\) is scaled by the sample time \(T\).
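A small worked example of these conversions; the sample rate \(f_s=48\,\mathrm{kHz}\) is an assumed value, chosen only for illustration:

```python
import math

# Convert a physical frequency to normalized angular frequency and back,
# for an assumed sample rate of fs = 48000 Hz.
fs = 48_000.0                    # sample rate [Hz]
T = 1.0 / fs                     # sample period [s/sample]

f = 1_000.0                      # a 1 kHz tone [cycles/s]
omega = 2 * math.pi * f          # angular frequency [rad/s]
omega_T = omega * T              # normalized angular frequency [rad/sample]

print(omega_T)                            # ~0.1309 rad/sample
print(math.isclose(omega_T / T, omega))   # divide by T to convert back: True
```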

Nyquist–Shannon sampling theorem

The frequency-domain representation of a sampled signal teaches us about the limitations of using a discrete signal. We will show this by doing a Fourier transform on the sampled signal \(x_s(t)\).

Recall equations \(\eqref{eq:fstarnT}\) and \(\eqref{eq:fstar0}\), but this time bilateral

$$
\begin{align}
x_s(t)=x_c(t)\,s(t)
&=\underbrace{x_c(t)}_{f(t)}\ \ \underbrace{\sum_{n=-\infty}^{\infty}\delta(t-nT)}_{g(t)}\nonumber\\
&=\sum_{n=-\infty}^{\infty}{x_c(nT)\ \delta(t-nT)}\nonumber
\end{align}\nonumber
$$

The convolution theorem states that multiplication in the time-domain corresponds to convolution in the frequency-domain

$$
f(t)\,g(t)\fourier \frac{1}{2\pi}{\Large(}F(\omega)*G(\omega){\Large)}\nonumber
$$ where \(*\) is the convolution sign

Apply the Fourier transform of a product to \(x_s(t)\) and call it \(X_s(\omega)\)
$$
\begin{align}
x_s(t)
=x_c(t)\,s(t)
\fourier&\frac{1}{2\pi}{\Large(}\color{purple}{X_c(\omega)}*S(\omega){\Large)}\triangleq X_s(\omega)\label{eq:fstar2}
\end{align}
$$

Recall the Fourier transform of the Dirac comb \(s(t)\)

$$
s(t)=\sum _{n=-\infty }^{\infty }\delta(t-nT)
\fourier
{\frac {2\pi }{T}}\sum _{k=-\infty }^{\infty }\delta \left(\omega -{\frac {2\pi k}{T}}\right)\triangleq S(\omega)\nonumber
$$

Substituting the Fourier transform of the Dirac comb in \(\eqref{eq:fstar2}\), where \(\frac{2\pi}{T}=2\pi f_s=\omega_s\) is the angular sample frequency
$$
\begin{align}
x_s(t)\fourier
X_s(\omega)&=\frac{1}{\ccancel[red]{2\pi}}\color{purple}{X_c(\omega)} * \left(\frac{\ccancel[red]{2\pi}}{T}\sum_{k=-\infty}^{\infty}\delta(\omega-k\omega_s)\right)\nonumber\\
&=\frac{1}{T}\sum_{k=-\infty}^{\infty}\color{purple}{X_c(\omega)} * \delta(\omega-k\omega_s)\label{eq:XsjOmega0}
\end{align}
$$

Recall the convolution with a delayed impulse function \(\delta(\omega-a)\)

$$
F(\omega)*\delta(\omega-a) = F(\omega-a)\nonumber
$$

Apply the convolution to \(\eqref{eq:XsjOmega0}\)
$$
\begin{align}
X_s(\omega)&=\frac{1}{T}\sum_{k=-\infty}^{\infty}\color{purple}{X_c(\omega}-k\omega_s\color{purple}{)}\label{eq:XsjOmega}
\end{align}
$$

Equation \(\eqref{eq:XsjOmega}\) implies that the Fourier transform of \(x_s(t)\) consists of periodically repeated copies of the Fourier transform of \(x_c(t)\). The copies of \(\color{purple}{X_c(\omega)}\) are shifted by integer multiples of the sampling frequency and then superimposed as depicted below.

Frequency Domain sampling no-aliasing
Frequency-domain representation of sampling

Plot (a) represents the frequency spectrum of the continuous signal \(\color{purple}{X_c(\omega)}\), where \(\omega_{\small N}\) is the highest frequency component. Plot (b) shows the frequency spectrum of the Dirac comb \(S(\omega)\). Finally, plot (c) shows \(X_s(\omega)\), the result of the convolution of \(\color{purple}{X_c(\omega)}\) with \(S(\omega)\).

From (c), we see that the replicas of \(\color{purple}{X_c(\omega)}\) do not overlap when
$$
\omega_s-\omega_{\small N}\geq\omega_{\small N}\quad\Rightarrow\quad\shaded{\omega_s\geq2\,\omega_{\small N}}\label{eq:ineq}
$$

Consequently, \(x_c(t)\) can be recovered from \(x_s(t)\) with an ideal low-pass filter \(H_r\).

Impulse train modulator and LTI Hr
Impulse train modulator and LTI \(H_r\)

Inequality \(\eqref{eq:ineq}\) is captured in the Nyquist–Shannon sampling theorem

Let \(x_c(t)\) be a bandlimited signal with
$$
\begin{align}
X_c(\omega)&=0,&|\omega|\gt \omega_{\small N}\nonumber
\end{align}\nonumber
$$ then \(x_c(t)\) is uniquely determined by its samples $$
\begin{align}
x[n]&=x_c(nT),&n\in\mathbb{Z}\nonumber
\end{align}\nonumber
$$ if $$
\omega_s=\frac{2\pi}{T}\geq2\,\omega_{\small N}\nonumber
$$ where \(\omega_{\small N}\) is called the Nyquist frequency and \(2\omega_{\small N}\) is called the Nyquist rate.

When the inequality of \(\eqref{eq:ineq}\) does not hold, i.e. when the sampling frequency \(\omega_s\) is less than twice the maximum frequency \(\omega_{\small N}\), the copies of \(X_c(\omega)\) overlap, and \(X_c(\omega)\) is no longer recoverable by low-pass filtering, as shown below. The resulting distortion is called aliasing.

Frequency Domain sampling aliasing
Frequency-domain representation of sampling with aliasing
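Aliasing is easy to demonstrate numerically. In this sketch (the sample rate and tone frequencies are assumed values), a tone above \(f_s/2\) produces exactly the same samples as a tone \(f_s\) lower, so the two are indistinguishable after sampling:

```python
import math

# A 9 kHz tone sampled at 8 kHz aliases to 1 kHz, because
# sin(2*pi*(f+fs)*n*T) = sin(2*pi*f*n*T + 2*pi*n).
fs = 8_000.0             # sampling rate [Hz] (assumed)
T = 1.0 / fs

def samples(f, count):
    return [math.sin(2 * math.pi * f * n * T) for n in range(count)]

low = samples(1_000.0, 16)     # 1 kHz, below fs/2 = 4 kHz
high = samples(9_000.0, 16)    # 9 kHz = 1 kHz + fs, above fs/2

print(max(abs(a - b) for a, b in zip(low, high)))   # ~0
```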

\(z\)-plane

We introduced the normalized angular frequency \(\omega T\) and how it maps to a polar representation of the complex variable \(z\). Here we will extend this to include the magnitude \(|z|\).

Let’s start with the definition of \(z\) from equation \(\eqref{eq:z}\) and split \(s\) in its real and imaginary parts \(s=\sigma+j\omega\)
$$
z\triangleq\mathrm{e}^{sT}
=\mathrm{e}^{(\sigma+j\omega)T}
=\mathrm{e}^{\sigma T}\,\mathrm{e}^{j\omega T}\label{eq:zalt}
$$

The polar notation for the complex variable \(z\) is a function of the normalized angular frequency \(\omega T\) and \(\mathrm{e}^{\sigma T}\)
$$
\shaded{
z\triangleq|z|\,\mathrm{e}^{j\omega T}\quad\text{where}\quad
|z|\triangleq \mathrm{e}^{\sigma T}}
\label{eq:zphi}
$$

This conveniently matches the topology of the \(z\)-plane, where the modulus \(|z|\) corresponds to the length of a vector from the origin to \(z\), and \(\omega T\) corresponds to the angle of that vector with the positive horizontal axis.

z-plane, re and im, magnitude and phase

According to the Nyquist–Shannon sampling theorem, a discrete signal can only have frequencies \(|\omega|\) between \(0\) and half the sampling frequency
$$
0\leq|\omega|\leq\frac{\omega_s}{2}\label{eq:zphirange}
$$

The maximum frequency in the discrete signal, the so-called Nyquist frequency, corresponds to \(\pi\) radians: with sample period \(T=\frac{1}{f_s}=\frac{2\pi}{\omega_s}\) and Nyquist frequency \(\omega_{\small N}=\frac{\omega_s}{2}\), the normalized angular frequency at the Nyquist frequency follows as
$$
\omega_{\small N}T=\frac{\ccancel[green]{\omega_s}}{\ccancel[red]{2}}\,\frac{\ccancel[red]{2}\pi}{\ccancel[green]{\omega_s}}=\pi
$$ the range for \(\omega T\) follows as
$$
\shaded{-\pi\leq\omega T\leq\pi}
$$

For those familiar with the Laplace transform, we will map specific features between the \(s\) and \(z\)-domain:

  • The origin \(s=0\) of the \(s\)-plane is mapped to \(z=e^0=1\) on the real axis in \(z\)-plane.
  • Each vertical line \(\sigma=\sigma_0\) in \(s\)-plane is mapped to a circle \(|z|=e^{\sigma_0}\) centered about the origin in \(z\)-plane. E.g.
    • Leftmost vertical line \(\sigma\to-\infty\) is mapped to the origin, where \(|z|=\mathrm{e}^{-\infty}=0\)
    • The imaginary axis \(\sigma=0\) is mapped to the unit circle, where \(|z|=\mathrm{e}^0=1\)
    • Rightmost vertical line \(\sigma\to\infty\) is mapped to a circle with infinite radius, where \(|z|=\mathrm{e}^{\infty}=\infty\).
  • Each horizontal line \(j\omega=j\omega_0\) in the \(s\)-plane is mapped to a ray from the origin in the \(z\)-plane at angle \(\omega_0 T\) with respect to the positive horizontal direction.
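A quick numeric check of these mappings; the sample period \(T=1\,\mathrm{ms}\) below is an assumed value:

```python
import cmath, math

# Map s-plane points to the z-plane via z = e^{sT}.
T = 1.0e-3                             # assumed sample period [s]

def s_to_z(s):
    return cmath.exp(s * T)

print(s_to_z(0))                       # origin s = 0 maps to z = 1
z = s_to_z(1j * 2 * math.pi * 100)     # on the imaginary axis (sigma = 0)
print(abs(z))                          # ~1.0: on the unit circle
print(cmath.phase(z))                  # angle = omega*T = 2*pi*100*T
```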

Region of Convergence

Recall the definition of the Z-transform from equations \((\ref{eq:ztransform},\ref{eq:zalt},\ref{eq:zphi})\)

$$
f[n]\ztransform\sum_{n=0}^\infty z^{-n}\ f[n]\\
\text{where }
z\triangleq|z|\,\mathrm{e}^{j\varphi}\text{, }
|z|\triangleq \mathrm{e}^{\sigma T}\text{, }
\varphi\triangleq \omega T
$$

Whether this converges depends on the duration and magnitude of \(f[n]\), as well as on the magnitude \(|z|\). The phase \(\varphi\) has no effect on the convergence.

The power series for the Z-transform is called a Laurent series. A Laurent series represents an analytic function at every point inside its region of convergence. Therefore, the Z-transform and all its derivatives are continuous functions of \(z\) inside the region of convergence.

Laurent series converge in an annular (ring-shaped) region of the \(z\)-plane, bounded by poles. The set of values of \(z\) for which the Z-transform converges is called the region of convergence (ROC). This means that anytime we use the Z-transform, we need to keep the region of convergence in mind.
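A minimal numeric sketch of the ROC, using the causal pair \(f[n]=a^n\,\gamma[n]\) with arbitrary illustrative values: the defining series settles inside the ROC \(|z|\gt|a|\) and diverges outside it.

```python
# Partial sums of sum a^n z^(-n); converges only for |z| > |a|.
a = 0.9

def partial(z, terms):
    return sum(a**n * z**(-n) for n in range(terms))

inside = abs(partial(2.0, 200) - partial(2.0, 100))   # |z| > |a|: settled
outside = partial(0.5, 200) - partial(0.5, 100)       # |z| < |a|: growing
print(inside)    # ~0
print(outside)   # astronomically large
```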

Inverse Z-Transform

The forward Z-transform helped us express samples in time as an analytic function on which we can use our algebra tools. Eventually, we have to return to the time domain using the Inverse Z-transform.

The inverse Z-transform can be derived using Cauchy’s integral theorem.

Start with the Z-transform definition from equation \(\eqref{eq:ztransform}\)

$$
\begin{align}
f[m]\,\ztransform\,&F(z)=\sum_{m=0}^\infty z^{-m}\ f[m]\nonumber
\end{align}\nonumber
$$

Multiply both sides by \(z^{n-1}\)
$$
\begin{align}
F(z)\,\color{purple}{z^{n-1}}&=\sum_{m=0}^\infty z^{-m\color{purple}{+n-1}}\,f[m]
\end{align}
$$

Integrate with a counterclockwise contour integral for which the contour encloses the origin and lies entirely within the region of convergence of \(F(z)\)
$$
\begin{align}
\color{purple}{\frac{1}{2\pi j}\oint_C} F(z)\,z^{n-1}\,\color{purple}{\mathrm{d}z}&=\color{purple}{\frac{1}{2\pi j}\oint_C} \sum_{m=0}^\infty z^{-m+n-1}\,f[m]\,\color{purple}{\mathrm{d}z}\nonumber\\
&=\sum_{m=0}^\infty\,f[m]\ \underbrace{\frac{1}{2\pi j}\oint_C \,z^{-(m-n+1)}\,\mathrm{d}z}_{\color{blue}{\text{?}}}
\end{align}
$$

A special case of Cauchy’s integral theorem states

$$
\frac{1}{2\pi j}\,\oint_Cz^{-l}\,\mathrm{d}z\,=\,\begin{cases}1 & l = 1\\ 0 & l\neq1\end{cases}\nonumber
$$

That implies we can replace the integral with a \(\delta\) function.
$$
\begin{align}
\frac{1}{2\pi j}\oint_C F(z)\,z^{n-1}\,\mathrm{d}z
&=\sum_{m=0}^\infty\,f[m]\ \underbrace{\frac{1}{2\pi j}\oint_C \,z^{-(m-n+1)}\,\mathrm{d}z}_{\color{purple}{=1\text{ only when }m-n+1=1}}\nonumber\\
&=\sum_{m=0}^\infty\,f[m]\ \color{purple}{\delta[m-n]}
\end{align}
$$

The Inverse Z-transform follows as
$$
\shaded{f[n]=\frac{1}{2\pi j}\oint_C F(z)\,z^{n-1}\,\mathrm{d}z}
$$ where the contour integral is taken over a counterclockwise closed contour \(C\) that encircles the origin and lies entirely within the region of convergence (ROC) of \(F(z)\).
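As a sanity check, this contour integral can be approximated numerically on a circle inside the ROC. Here is a sketch using the known pair \(a^n\,\gamma[n]\leftrightarrow \frac{z}{z-a}\); the value of \(a\) and the contour radius are illustrative choices.

```python
import cmath, math

def inverse_z(F, n, radius=1.0, points=4096):
    # f[n] = 1/(2 pi j) * contour integral of F(z) z^{n-1} dz,
    # parametrized as z = r e^{j theta}, so dz = j z d(theta)
    total = 0j
    for k in range(points):
        z = radius * cmath.exp(2j * math.pi * k / points)
        total += F(z) * z**(n - 1) * 1j * z
    return total * (2 * math.pi / points) / (2j * math.pi)

a = 0.5
F = lambda z: z / (z - a)     # table pair for a^n gamma[n]
print([round(inverse_z(F, n).real, 6) for n in range(4)])   # [1.0, 0.5, 0.25, 0.125]
```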

Cauchy’s Residue Theorem

When the transfer function is rational, a ratio of polynomials, we may use the method described below to calculate the Inverse Z-transform.

Denote the unique poles of \(F(z)\) as \(p_{1\ldots{\small N}}\) and their algebraic multiplicities as \(m_{1\ldots{\small N}}\). As long as \(N\) is finite, which is the case if \(F(z)\) is rational, we can evaluate the inverse Z-transform via Cauchy’s residue theorem, which states

$$
f[n]=\frac{1}{2\pi j}\oint_C F(z)\,z^{n-1}\,\mathrm{d}z=\sum_{p_k\text{ inside }C}\mathrm{Res}\large{(}\,F(z)\,z^{n-1},\,p_k,\,m_k\,{\large{)}}\nonumber
$$ where $$
\mathrm{Res}{\Large(}\,F(z)\,z^{n-1},\,p_k,\,m_k\,{\Large)}=\frac{1}{(m_k-1)!}\left[\frac{\text{d}^{(m_k-1)}}{\text{d}z^{(m_k-1)}}\,{\Large[}\,(z-p_k)^{m_k}\,F(z)\,z^{n-1}\,{\Large]}\right]_{z=p_k}\nonumber
$$ for a single pole $$
\mathrm{Res}{\Large(}\,F(z)\,z^{n-1},\,p_k,\,1\,{\Large)}=\left.\,(z-p_k)\,F(z)\,z^{n-1}\,\right|_{z=p_k}\nonumber
$$

Cauchy’s residue theorem allows us to compute the contour integral by computing derivatives, however tedious that may be.
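To make the residue formula concrete, here is a sketch that evaluates it at a double pole, approximating the \(\frac{\text{d}}{\text{d}z}\) derivative with a central difference; the pole location \(a\) and the index \(n\) are illustrative values.

```python
# From the table, F(z) = az/(z-a)^2 pairs with f[n] = n a^n gamma[n];
# its single pole p = a has multiplicity m = 2.
a, n = 0.5, 3
F = lambda z: a * z / (z - a)**2
g = lambda z: (z - a)**2 * F(z) * z**(n - 1)   # (z-p)^m F(z) z^{n-1} = a z^n
h = 1e-6
residue = (g(a + h) - g(a - h)) / (2 * h)      # d/dz of g(z) at z = a
print(residue)                                 # ~ n * a**n = 0.375
```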

Other techniques

Given that the Z-transform is a particular type of Laurent series, and the Laurent series in a given annulus of convergence is unique, any technique that generates a power series for \(F(z)\) converging in the outermost annulus of convergence can be used to obtain the inverse Z-transform.

Inversion techniques available are

  • using the binomial theorem
  • using the convolution theorem
  • performing long division
  • using the initial-value theorem
  • expanding \(F(z)\) in partial fractions
  • power series expansion (for non-rational z-transforms).

We’ll explore these techniques further in Evaluating Discrete Transfer Functions.

Z-transform tables

The tables below introduce commonly used properties, common input functions and initial/final value theorems that I collected over time.

The time-domain function is usually given in terms of a discrete index \(n\), rather than time. Since \(t=nT\), we may replace \(f[n]\) with \(f(nT)\) where \(T\) is the sampling period.

Unilateral Z-Transform properties

Unilateral Z-Transform properties
Time domain \(z\)-domain
Z-transform \(f[n]\) \(\lfzraised\ztransform\) \(F(z)=\sum_{n=0}^{\infty}z^{-n}f[n]\) proof
Linearity \(a\,f[n]+b\,g[n]\) \(\lfzraised\ztransform\) \(a\,F(z)+b\,G(z)\) proof
Time delay \(f[n-a]\,\color{grey}{\gamma[n}-a\color{grey}{]}\) \(\lfzraised\ztransform\) \(z^{-a}F(z)\) proof
Time delay #2 \(f[n-a]\,\color{grey}{\gamma[n]}\) \(\lfzraised\ztransform\) \(z^{-a}\left(F(z)+\sum_{m=1}^{a} z^m\ f[-m]\right)\) proof
Time advance \(f[n+a]\,\color{grey}{\gamma[n]}\) \(\lfzraised\ztransform\) \(z^a\left(F(z)-\sum_{m=0}^{a-1} z^{-m}\ f[m]\right)\) proof
Time advance (\(a=1\)) \(f[n+1]\,\color{grey}{\gamma[n]}\) \(\lfzraised\ztransform\) \(z\left(F(z)-f[0]\right)\) proof
Time multiply \(n\,f[n]\,\color{grey}{\gamma[n]}\) \(\lfzraised\ztransform\) \(-z\dfrac{\text{d}F(z)}{\text{d}z}\) proof
Modulation \(a^n\,f[n]\,\color{grey}{\gamma[n]}\) \(\lfzraised\ztransform\) \(F(a^{-1}z)\) proof
Convolution \((f\ast g)[n]\,\color{grey}{\gamma[n]}\) \(\lfzraised\ztransform\) \(F(z)\ G(z)\) proof
Conjugation \(f^\star[n]\) \(\lfzraised\ztransform\) \(F^{\star}(z^{\star})\) proof
First difference \(f[n]-f[n-1]\) \(\lfzraised\ztransform\) \(\left(1-z^{-1}\right)F(z)\) proof
Accumulation \(\sum_{i=0}^{n}f[i]\) \(\lfzraised\ztransform\) \(F(z)\,\dfrac{z}{z-1}\) proof
Double poles \((n+1)p^n\,\color{grey}{\gamma[n]}\) \(\lfzraised\ztransform\) \(\dfrac{1}{(1-pz^{-1})^2}\) proof
Real part \(\Re{f[n]}\) \(\lfzraised\ztransform\) \(\dfrac{F(z)+F^{\star}(z^{\star})}{2}\)
Imaginary part \(\Im{f[n]}\) \(\lfzraised\ztransform\) \(\dfrac{F(z)-F^{\star}(z^{\star})}{2j}\)

Unilateral Z-transform pairs

Unilateral Z-transform pairs
Time domain \(z\)-domain (\(z\)) \(z\)-domain (\(z^{-1}\)) ROC
\(\small{\delta[n]\triangleq\begin{cases}1,&n=0\\0,&n\neq0\end{cases}}\) \(\lfzraised\ztransform\) \(1\) \(1\) all \(z\) proof
\(\delta[n-a]\) \(\lfzraised\ztransform\) \(\small{\begin{cases}z^{-a},&a\geq0\\0,&a\lt0\end{cases}}\) \(\small{\begin{cases}z^{-a},&a\geq0\\0,&a\lt0\end{cases}}\) \(z\neq0\) proof
\(\small{\gamma[n]\triangleq\begin{cases} 0,&n\lt0\\1,&n\geq 0\end{cases}}\) \(\lfzraised\ztransform\) \(\dfrac{z}{z-1}\) \(\dfrac{1}{1-z^{-1}}\) \(|z|\gt1\) proof
\(a^n\,\color{grey}{\gamma[n]}\) \(\lfzraised\ztransform\) \(\dfrac{z}{z-a}\) \(\dfrac{1}{1-az^{-1}}\) \(|z|\gt |a|\) proof
\(a^{n-1}\,\gamma[n-1]\) \(\lfzraised\ztransform\) \(\dfrac{1}{z-a}\) \(\dfrac{z^{-1}}{1-az^{-1}}\) \(|z|\gt |a|\) proof
\(n\,\color{grey}{\gamma[n]}\) \(\lfzraised\ztransform\) \(\dfrac{z}{(z-1)^2}\) \(\dfrac{z^{-1}}{(1-z^{-1})^2}\) \(|z|\gt1\) proof
\(n\,a^n\,\color{grey}{\gamma[n]}\) \(\lfzraised\ztransform\) \(\dfrac{az}{(z-a)^2}\) \(\dfrac{az^{-1}}{(1-az^{-1})^2}\) \(|z|\gt |a|\) proof
\(n^2a^n\,\color{grey}{\gamma[n]}\) \(\lfzraised\ztransform\) \(\dfrac{az(z+a)}{(z-a)^3}\) \(\dfrac{az^{-1}\left(1+az^{-1}\right)}{(1-az^{-1})^3}\) \(|z|\gt |a|\)
\(\tfrac{1}{2}n(n-1)a^n\,\color{grey}{\gamma[n]}\) \(\lfzraised\ztransform\) \(\dfrac{a^2z}{(z-a)^3}\) \(\dfrac{a^2z^{-2}}{(1-az^{-1})^3}\) \(|z|\gt |a|\) proof
\(\small{\left(\begin{array}{c}n+m-1\\m-1\end{array}\right)}\,a^n\,\color{grey}{\gamma[n]}\) \(\lfzraised\ztransform\) \(\dfrac{z^m}{(z-a)^m}\) \(\dfrac{1}{(1-az^{-1})^m}\) \(|z|\gt |a|\) proof
\(\small{(-1)^m\left(\begin{array}{c}-n-1\\m-1\end{array}\right)}\,a^n\,\color{grey}{\gamma[-n-1]}\) \(\lfzraised\ztransform\) \(\dfrac{z^m}{(z-a)^m}\) \(\dfrac{1}{(1-az^{-1})^m}\) \(|z|\lt |a|\) proof
\(\sin(\omega n)\,\color{grey}{\gamma[n]}\) \(\lfzraised\ztransform\) \(\dfrac{z\sin(\omega)}{z^2-2z\cos(\omega)+1}\) \(\dfrac{z^{-1}\sin(\omega)}{1-2z^{-1}\cos(\omega)+z^{-2}}\) \(|z|\gt1\) proof
\(\cos(\omega n)\,\color{grey}{\gamma[n]}\) \(\lfzraised\ztransform\) \(\dfrac{z^2-z\cos(\omega)}{z^2-2z\cos(\omega)+1}\) \(\dfrac{1-z^{-1}\cos(\omega)}{1-2z^{-1}\cos(\omega)+z^{-2}}\) \(|z|\gt1\) proof
\(a^n\sin(\omega n)\,\color{grey}{\gamma[n]}\) \(\lfzraised\ztransform\) \(\dfrac{az\sin(\omega)}{z^2-2az\cos(\omega)+a^2}\) \(\dfrac{az^{-1}\sin(\omega)}{1-2az^{-1}\cos(\omega)+a^2z^{-2}}\) \(|z|\gt|a|\) proof
\(a^n\cos(\omega n)\,\color{grey}{\gamma[n]}\) \(\lfzraised\ztransform\) \(\dfrac{z^2-az\cos(\omega)}{z^2-2az\cos(\omega)+a^2}\) \(\dfrac{1-az^{-1}\cos(\omega)}{1-2az^{-1}\cos(\omega)+a^2z^{-2}}\) \(|z|\gt|a|\) proof

The binomial coefficient, used in the table above, is defined as

$$
\left(\begin{array}{c}a\\ b\end{array}\right)\triangleq\frac{a!}{b!(a-b)!}\nonumber
$$

Initial and Final Value Theorem

Initial and final value theorem
Time domain \(z\)-domain
Initial Value \(f[0]\) \(f[0]=\lim_{z\to\infty}F(z)\) proof
Final Value \(f[\infty]\) \(\lim_{n\to\infty}f[n]=\lim_{z\to1}(z-1)F(z)\) proof
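A quick numeric sanity check of both theorems, using the unit step pair \(\gamma[n]\leftrightarrow\frac{z}{z-1}\) from the table above and approaching the limits numerically:

```python
# gamma[n] has f[0] = 1 and lim f[n] = 1; both theorems should agree.
F = lambda z: z / (z - 1)

initial = F(1e12)            # z -> infinity: approaches f[0] = 1
z = 1 + 1e-9
final = (z - 1) * F(z)       # z -> 1: approaches lim f[n] = 1
print(initial, final)        # both ~1.0
```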

The proof for these transforms can be found in the post Z-Transforms Proofs.

What’s next

Digital filters can be designed using the Z-transform, similar to how analog filters are designed using the Laplace transform. The properties and functions listed in the tables above provide a foundation for the design and analysis of such digital filters.

I recommend reading through the proofs for these Z-transforms. If you want to skip ahead, I suggest Discrete Transfer Functions for follow-up reading.
