Listing of GNU/Octave code for the RLC low-pass filter, used to generate the graphs in the RLC low-pass filter article. Supplements the article RLC Low-pass Filter.
Appendix
Step response, in GNU/Octave
clc; close all; clear all; format short eng
L=47e-3; # 47mH
C=47e-9; # 47nF
#Rvector = [3.9e3]; # separate real poles
#Rvector = [2e3]; # coinciding real poles
#Rvector = [220 820 1500]; # conjugate complex poles
Rvector = [0]; # conjugate complex poles on im-axis
w=logspace(3,5,200); #*e.^(sigma*t)
f=w/(2*pi);
for R = Rvector
wn=1/sqrt(L*C);
zeta=R/2*sqrt(C/L)
t=linspace(0,2e-3,200);
if (zeta < 1) # complex conjugate poles
sigma=wn*zeta;
wd=wn*sqrt(1-zeta^2);
u=1 - sqrt(sigma^2+wd^2)/wd .* exp(-sigma*t) .* cos(wd*t-atan(sigma/wd));
hold on; h=plot(t,u); hold off;
endif
if (zeta == 1) # coinciding real poles
p=wn*(-zeta+sqrt(zeta^2-1));
u=1 + (p*t-1).*e.^(p*t);
hold on; h=plot(t,u); hold off;
endif
if (zeta > 1) # real poles
p1=wn*(-zeta+sqrt(zeta^2-1));
p2=wn*(-zeta-sqrt(zeta^2-1));
u = 1 + p2/(p1-p2)*e.^(p1*t) + p1/(p2-p1)*e.^(p2*t);
hold on; h=plot(t,u); hold off;
endif
endfor
axis([min(t) max(t) 0 2]); #1.75
xlabel('time [s]'); ylabel('|h(t)|');
t=['Step Response(t'];
t2=['), C=' num2str(C*1e9) 'nF, L=', num2str(L*1e3),'mH'];
if(length(Rvector)==1)
t=[t t2 ', R=' num2str(R/1e3) 'k\Omega']
else
t=[t ',R' t2];
legend(strread(num2str(Rvector,3),'%s'));
endif
t = [t ];
title(t, "fontsize", 15);
Bode magnitude, in GNU/Octave
clc; close all; clear all; format short eng
L=47e-3; # 47mH
C=47e-9; # 47nF
#Rvector = [3.9e3]; # separate real poles
#Rvector = [2e3]; # coinciding real poles
#Rvector = [220 820 1500]; # conjugate complex poles
Rvector = [0]; # conjugate complex poles on im-axis
f=logspace(1,6,200);
w=2*pi*f;
for R = Rvector
wn=1/sqrt(L*C);
zeta=R/2*sqrt(C/L)
if (zeta == 0) # complex conjugate poles on imaginary axis
u=20*log10(wn.^2) - 20*log10(abs(wn.^2-w.^2)); # abs() avoids log of a negative number for w > wn
hold on; h=semilogx(f,u); hold off;
hold on;
plot([wn/(2*pi) wn/(2*pi)], get(gca,'YLim'),'k--');
text(wn/(2*pi)*1.1,30,'|p|/2\pi');
hold off
poles = [-wn*zeta + wn*sqrt(1-zeta^2)*j, -wn*zeta - wn*sqrt(1-zeta^2)*j];
endif
if (zeta < 1 && zeta > 0) # complex conjugate poles on left side of s-plane
u=20*log10(wn.^2) - 20*log10(sqrt((wn.^2-w.^2).^2+(2*zeta*wn*w).^2));
hold on; h=semilogx(f,u); hold off;
hold on;
plot([wn/(2*pi) wn/(2*pi)], get(gca,'YLim'),'k--');
text(wn/(2*pi),25,'\omega_n/2\pi');
hold off
poles = [-wn*zeta + wn*sqrt(1-zeta^2)*j, -wn*zeta - wn*sqrt(1-zeta^2)*j];
endif
if (zeta == 1) # coinciding real poles
p=1/sqrt(L*C);
u=20*log10(p.^2) - 40*log10(sqrt(w.^2+p.^2));
hold on; h=semilogx(f,u);
plot([wn/(2*pi) wn/(2*pi)], get(gca,'YLim'),'k--');
text(wn/(2*pi),5,'|p|/2\pi');
f1=p/(2*pi);
fmax=max(f);
asymp=-40*log10((fmax-f1)/f1);
plot([min(f) f1 fmax],[0 0 asymp ],'k--');
hold off
poles=[-wn*zeta -wn*zeta];
endif
if (zeta > 1) # separate real poles
p1=wn*(-zeta+sqrt(zeta^2-1));
p2=wn*(-zeta-sqrt(zeta^2-1));
u=20*log10(p1*p2./(sqrt(w.^2+p1.^2).*sqrt(w.^2+p2.^2)));
figure(1);
hold on; h=semilogx(f,u); hold off;
hold on;
f1=-p1/(2*pi); f2=-p2/(2*pi);
plot([f1 f1], get(gca,'YLim'),'k--');
plot([f2 f2], get(gca,'YLim'),'k--');
fmax=max(f);
asymp1=0 - 20*log10((f2-f1)/f1);
asymp2=asymp1 - 40*log10((fmax-f2)/f2);
plot([min(f) f1 f2 fmax],[0 0 asymp1 asymp2],'k--');
text(f1,5,'|p1|/2\pi');
text(f2,5,'|p2|/2\pi');
hold off
poles=[p1+0j p2+0j];
endif
endfor
figure(1);
grid off;
axis([min(f) max(f) -80 40]);
xlabel('frequency [Hz]'); ylabel('20log|H(f)|');
leg=strread(num2str(Rvector,4),'%s');
if (zeta>=1)
leg=[leg;'asymptote'];
endif
t=['Bode Magnitude in dB(f'];
t2=['), C=' num2str(C*1e9) 'nF, L=', num2str(L*1e3),'mH'];
if(length(Rvector)==1)
t=[t t2 ', R=' num2str(R/1e3) 'k\Omega']
else
t=[t ',R' t2];
legend(leg);
endif
t = [t ];
title(t, "fontsize", 15);
Shows the math of an underdamped RLC low-pass filter. Visualizes the poles in the Laplace domain and the step and frequency response. Part of the article RLC Low-pass Filter.
Complex Poles (underdamped case)
For complex conjugate poles, the transfer function can be written as below. Given that \(\zeta\lt1\), the argument of the square root in the poles will be negative. Multiply this argument by \(-j^2\) to separate out the imaginary part.
Split the conjugate poles in their real and imaginary parts by defining the poles from equation \(\eqref{eq:case3a_transferpoles}\) as \(p,\,p^*\equiv -\sigma\pm j\omega_d\)
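In terms of the natural frequency \(\omega_n\) and damping ratio \(\zeta\), these real and imaginary parts are (as also computed in the Octave listing in the appendix)
$$
\sigma = \zeta\,\omega_n, \qquad \omega_d = \omega_n\sqrt{1-\zeta^2}
\nonumber
$$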
This equation indicates that the conjugate poles \(p, p^*\) lie in the left half of the \(s\)-plane. The length of the line segment from the origin to pole \(p\) represents the natural frequency \(\omega_n\), and the angle between that line and the imaginary axis is the \(\arcsin\) of the damping ratio \(\zeta\). [MIT-me]
\(s\)-plane for underdamped case
Unit Step Response
Multiplication of the Laplace transform of the unit step function, \(\Gamma(s)\), with the transfer function \(\eqref{eq:case3a_transferpoles}\) gives the unit step response \(Y(s)\).
The constants \(c_1\) and \(c_2\) are complex conjugates of each other since they are equivalent except for the sign on the imaginary part. To highlight this, substitute the values for the poles from \(\eqref{eq:sigmaomegad}\) and write these constants in polar notation
The unit step response \(y(t)\) follows from the inverse Laplace transform of \(\eqref{eq:case3a_heaviside}\), substituting \(c_{0,1,2}\) from \(\eqref{eq:case3a_constants}\), \(\eqref{eq:case3a_c2polar}\) and \(\eqref{eq:case3a_c3polar}\).
Apply the Euler identity for cosine, and reference \(|p|\) and \(\varphi\) from equations \(\eqref{eq:case3a_c2polar}\) and \(\eqref{eq:case3a_c3polar}\), \(\sigma\) and \(\omega_d\) from equation \(\eqref{eq:sigmaomegad}\), and \(\zeta\) and \(\omega_n\) from \(\eqref{eq:case3a_transferpoles}\)
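Working this out gives the closed form that is also plotted by the Octave listing in the appendix
$$
y(t)=\left(1 - \frac{\omega_n}{\omega_d}\,\mathrm{e}^{-\sigma t}\cos\!\left(\omega_d t - \arctan\frac{\sigma}{\omega_d}\right)\right)\gamma(t)
\nonumber
$$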
The graph shows the response for different values of \(R\). This underdamped circuit oscillates, with the amplitude exceeding that of the input (\(1\)).
Step response for underdamped case
For the extreme case, where \(R=0\), the response becomes \(\left(1-\cos(\omega_n t)\right)\gamma(t)\), oscillating with an amplitude reaching twice the input (\(1\)).
Frequency Response
The frequency response \(y_{ss}(t)\) is defined as the steady state response to a sinusoidal input signal
$$
u(t)=\sin(\omega t)\,\gamma(t)
$$
We can rewrite the transfer function by substituting the poles from \(\eqref{eq:sigmaomegad}\)
This transfer function with the poles at \(p\) and \(p^\ast\), evaluated for \(s=j\omega\) can be visualized with vectors from the poles to \(j\omega\).
Transfer function evaluated at \(s=j\omega\) for underdamped case
Substitute \(s=j\omega\) into the transfer function \(\eqref{eq:case3a_newhs}\)
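Taking the magnitude gives the expression that is evaluated in the Bode listing in the appendix
$$
\left|H(j\omega)\right| = \frac{\omega_n^2}{\sqrt{\left(\omega_n^2-\omega^2\right)^2+\left(2\,\zeta\,\omega_n\,\omega\right)^2}}
\nonumber
$$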
The graph shows the magnitude of the output for different values of \(R\). The magnitude of the frequency response demonstrates resonant behavior. Note the voltage amplification around the natural frequency \(\omega_n\).
Bode magnitude for underdamped case
The corresponding Nyquist plot shows that the system gets less stable as the resistor value decreases
Shows the math of a critically-damped RLC low-pass filter. Visualizes the poles in the Laplace domain and the step and frequency response. Part of the article RLC Low-pass Filter.
Coinciding Real Poles (critically-damped case)
In the critically-damped case, the two poles of the transfer polynomial coincide on the negative real axis.
Split up this complicated fraction into forms that are in the Laplace Transform table. According to Heaviside, this can be expressed as partial fractions. Note the factor \(\frac{c_2}{s-p}\). [swarthmore, MIT-cu]
Given \(c_0\) and \(c_1\), constant \(c_2\) can be found by substituting any numerical value (other than \(0\) or \(p\)) in equation \(\eqref{eq:case2a_heaviside}\). In this case, we substitute \(s=-p\) [MIT-ex4]
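Substituting the constants and taking the inverse Laplace transform gives the step response that the Octave listing in the appendix plots
$$
y(t) = \Bigl(1 + (p\,t - 1)\,\mathrm{e}^{p\,t}\Bigr)\gamma(t), \qquad p = -\omega_n
\nonumber
$$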
As shown in the graph below, this unit step response is a relatively fast rising exponential curve, demonstrating the shortest possible rise time without overshoot.
Step response for critically-damped case
Frequency Response
The frequency response \(y_{ss}(t)\) is defined as the steady state response to a sinusoidal input signal \(u(t)=\sin(\omega t)\,\gamma(t)\).
This transfer function with the double poles at \(p\), evaluated for \(s=j\omega\) can be visualized with vectors from the poles to \(j\omega\).
Transfer function evaluated at \(s=j\omega\) for critically-damped case
The square of the length of the vector corresponds to \(|H(j\omega)|\), and minus twice the angle with the real axis corresponds to the phase shift \(\angle H(j\omega)\).
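In terms of the pole location \(p\), the magnitude evaluates to
$$
\left|H(j\omega)\right| = \frac{p^2}{\omega^2+p^2}
\nonumber
$$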
Shows the math of an overdamped RLC low-pass filter. Visualizes the poles in the Laplace domain and calculates the step and frequency response. Part of the article RLC Low-pass Filter.
Two Different Real Poles (overdamped case)
The two poles of the transfer polynomial lie at separate locations on the negative real axis.
Note that \(p_1\lt p_2\lt0\) and \(|p_1|>|p_2|\), as visualized in the \(s\)-plane
\(s\)-plane for overdamped case
Unit Step Response
The unit step response shows how the system reacts to the input going from \(0\) to \(1\) volt at time \(t=0\). This input is called the Unit Step Function, here represented by \(u(t)=\gamma(t)\). The unit step response gives an impression of the system behavior in the time domain.
Split up this complicated fraction into forms that are in the Laplace Transform table. According to Heaviside, this can be expressed as partial fractions. Note that we need to set up a partial fraction for each descending power of the denominator. [swarthmore, MIT-cu]
The unit step response \(y(t)\) follows from the inverse Laplace transform of \(\eqref{eq:case1a_heaviside}\) and substituting the constants \(\eqref{eq:case1a_constants}\)
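The result, as plotted by the Octave listing in the appendix, is
$$
y(t)=\left(1 + \frac{p_2}{p_1-p_2}\,\mathrm{e}^{p_1 t} + \frac{p_1}{p_2-p_1}\,\mathrm{e}^{p_2 t}\right)\gamma(t)
\nonumber
$$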
As shown in the graph, the unit step response is a relatively slowly decaying exponential curve (with \(p_1\lt p_2\lt 0\)). The figure was generated using the source code listed in the appendix.
Unit step response for overdamped case
Frequency Response
The frequency response \(y_{ss}(t)\) is defined as the steady state response to a sinusoidal input signal \(u(t)=\sin(\omega t)\,\gamma(t)\). It describes how well the filter can distinguish between different frequencies.
This transfer function with poles at \(p_1\) and \(p_2\), evaluated for \(s=j\omega\) can be visualized with vectors from the poles to \(j\omega\).
Transfer function evaluated at \(s=j\omega\) for overdamped case
The product of the lengths of the vectors corresponds to \(|H(j\omega)|\), and minus the sum of the angles with the real axis corresponds to the phase shift \(\angle H(j\omega)\).
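Expressed in the pole locations, the magnitude is
$$
\left|H(j\omega)\right| = \frac{p_1\,p_2}{\sqrt{\omega^2+p_1^2}\,\sqrt{\omega^2+p_2^2}}
\nonumber
$$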
Solves the differential equation for an RC low-pass filter. Gives the homogeneous and particular solutions. This supplements the article RC Low-pass filter.
Appendix B
For old times' sake, we show the traditional method to solve the differential equation for the passive filter consisting of a resistor and capacitor in series.
The output is the voltage over the capacitor \(y(t)\) as shown in the schematic below.
Schematic RC filter
Assume a switch between the input and the resistor that closes at \(t=t_1\). Further assume \(y(t\leq t_1)=Y_0\).
According to Kirchhoff’s Voltage Law, for \(t\geq t_1\)
If \(u(t)\) is continuous, we can choose either differential equation, but when \(u(t)\) is non-continuous we can’t use \(\eqref{eq:bDV2}\).
Assume the non-homogeneous linear differential equation of a first-order high-pass RL filter, where \(u(t)=\hat{u}\cos(\omega t)\) is the forcing function and the current \(i(t)\) through the inductor is the response. The differential equation for this system is
The solution is a superposition of the natural response and a forced response. The so called, homogeneous solution \(y_h(t)\) and the particular solution \(y_p(t)\)
$$
y(t)=y_h(t)+y_p(t)\label{eq:bTrigRC_hp}
$$
Homogeneous solution
The homogeneous solution follows from the reduced (homogeneous) linear differential equation, where the forcing function is zero.
The solution base \(y_{h,1}(t)\) follows from substituting the root \(p\) from equation \(\eqref{eq:bTrigRC_p}\) back into the homogeneous differential equation \(\eqref{eq:bTrigRC_gen}\)
$$
y_{h,1}(t)=\mathrm{e}^{-\frac{t}{RC}}
$$
The homogeneous solution follows as a linear combination of the solution bases (only one in this case) as
where the constant \(c\) follows from the initial conditions.
Particular solution
We will show how to get the particular solution using both trigonometry and complex arithmetic.
Using the trigonometry method
If we force a signal \(\hat{u}\cos(\omega t)\) on a linear system, the output will have the same frequency but with a different phase \(\phi\) and amplitude \(A\).
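That is, we assume a particular solution of the form
$$
y_p(t)=A\cos(\omega t+\phi)
\nonumber
$$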
by assigning the two independent variables \(R\) and \(\omega L\) to two more convenient independent variables \(\gamma\cos\alpha\) and \(\gamma\sin\alpha\)
Divide \(\eqref{eq:bTrigRC_CsinAlpha}\) by \(\eqref{eq:bTrigRC_CcosAlpha}\) to solve for \(\alpha\), and apply the trigonometric identity \(\sin^2\alpha+\cos^2\alpha=1\) to \(\eqref{eq:bTrigRC_CsinAlpha}\) and \(\eqref{eq:bTrigRC_CcosAlpha}\) to solve for \(C\)
Using a complex forcing function \(\underline{u}(t)\) provides a less involved method of finding the particular solution as introduced in Linear Differential Equations. Using a complex forcing function
Since the forcing function was only the real part of \(\underline{u}(t)\), we are only interested in the real part of the complex particular solution \(\eqref{eq:bRLSol}\) as well
The general solution follows from substituting \(\eqref{eq:bTrigRC_hSolution}\) and \(\eqref{eq:bTrigRC_pSolution}\text{ or }\eqref{eq:bCaRC_pSolution}\) in equation \(\eqref{eq:bTrigRC_hp}\).
Listing of GNU/Octave code for the RC low-pass filter, used to generate the graphs in the RC low-pass filter article. This supplements the article RC Low-pass filter.
Appendix A
Unit Step Response in GNU/Octave
GNU/Octave code for RC low-pass filter
clc; close all; clear all; format short eng
R=100; # 100Ohm
C=470e-9; # 470nF
w=logspace(3,5,200);
f=w/(2*pi);
t=linspace(0,2e-3,200);
p=-1/(R*C);
u=1-e.^(p*t);
h=plot(t,u);
axis([min(t) max(t) 0 2]); #1.75
xlabel('time [s]'); ylabel('|h(t)|');
t=['Step Response(t), C=' num2str(C*1e9) 'nF, R=' num2str(R) '\Omega']
title(t, "fontsize", 15);
Frequency Response in GNU/Octave
GNU/Octave code for RC low-pass filter
clc; close all; clear all; format short eng
R=100; # 100Ohm
C=470e-9; # 470nF
f=logspace(1,6,200);
w=2*pi*f;
p=-1/(R*C);
u=-20*log10(sqrt(1+(w*R*C).^2));
h=semilogx(f,u); hold on;
wn=-p;
plot([wn/(2*pi) wn/(2*pi)], get(gca,'YLim'),'k--');
text(wn/(2*pi),5,'|p|/2\pi');
f1=-p/(2*pi);
fmax=max(f);
asymp=-20*log10((fmax-f1)/f1);
plot([min(f) f1 fmax],[0 0 asymp ],'k--');
hold off
poles=[-p -p];
figure(1);
grid off;
axis([min(f) max(f) -80 40]);
xlabel('frequency [Hz]'); ylabel('20log|H(f)|');
leg=[strread(num2str(R,1),'%s');'asymptote'];
t=['Bode Magnitude in dB(f), C=' num2str(C*1e9) 'nF, R=' num2str(R) '\Omega'];
title(t, "fontsize", 15);
hold off;
Nyquist Diagram in GNU/Octave
GNU/Octave code for RC low-pass filter
clc; close all; clear all; format short eng
pkg load control
R=100; # 100Ohm
C=470e-9; # 470nF
p=-1/(R*C);
H = tf([-p], [ 1 -p ]);
[mag, phi, w] = bode(H);
nyquist(H); h=gcf;
axis ([-0.2, 1.2, -.7, .7], "square");
Square Wave in GNU/Octave
clf;
t=linspace(0,2e-3,1e4); # t from 0 to 2 msec, with 10,000 steps
f=2e3; # input frequency [Hz]
R=100; # 100 Ohms
C=470e-9; # 470 nF
M=1e5; # number of harmonics
ut=0; # input signal (square wave)
yt=0; # output signal
w=2*pi*f; # omega
fc=1/(2*pi*R*C) # cutoff (-3dB) frequency
for n=1:2:M,
nwt = n*w*t;
nwRC = n*w*R*C;
ut = ut + 4/pi * sin(nwt) / n;
argH = 1 / sqrt( 1 + nwRC^2 );
angH = -atan(nwRC);
yt = yt + 4/pi * argH * sin(nwt + angH) / n;
end
plot(t*1e3,ut, 'b-',t*1e3,yt)
title('Square Wave input to RC filter')
xlabel('t [msec]')
ylabel('[Volt]')
grid on;
legend('u(t)','y(t)')
saveas(1,"square.svg")
Derives the frequency response of the RC low-pass filter using the Laplace transform. Part of a series about the properties of the RC low-pass filter.
Frequency Response
The frequency response \(y_{ss}(t)\) is defined as the steady state response to a sinusoidal input signal \(u(t)=\sin(\omega t)\,\gamma(t)\). It describes how well the filter can distinguish between different frequencies.
This frequency response for different frequencies can be visualized in a Bode plot or a Nyquist diagram. Each of these is a topic of the remaining sections.
Effect on Input with Harmonics
As a side step, we examine the effect of the filter on a square wave input signal. The Fourier series of the square wave shows that it consists of a base frequency and odd harmonics.
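For reference, the Fourier series that the Octave listing in Appendix A sums is
$$
u(t)=\frac{4}{\pi}\sum_{n=1,3,5,\ldots}\frac{\sin(n\,\omega\,t)}{n}
\nonumber
$$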
A Bode plot uses frequency as the horizontal axis and usually consists of two separate plots that show the magnitude and phase of the frequency response \(y_{ss}\). Since the range of magnitudes may be large, the amplitude scale is usually expressed in decibels, \(20\log_{10}\left|H(j\omega)\right|\). The frequency axis uses a logarithmic scale as well.
The magnitude of the frequency response has a relatively shallow drop-off.
Frequency response
The phase shift depends on the frequency, causing signals composed of multiple frequencies to be distorted.
The angular frequency \(\omega_c=|p|\) is known as the cutoff, break, -3dB or half-power frequency, because the magnitude of the transfer function \(\eqref{eq:polar2}\) equals \(1/\sqrt{2}\)
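For example, with the component values from the Octave listing in Appendix A (\(R=100\,\Omega\), \(C=470\,\rm{nF}\)), the cutoff frequency is
$$
f_c=\frac{\omega_c}{2\pi}=\frac{1}{2\pi R C}=\frac{1}{2\pi\cdot 100\cdot 470\times10^{-9}}\approx 3.4\,\rm{kHz}
\nonumber
$$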
This single-pole filter has a relatively shallow -20 dB/decade drop-off.
In general, the cutoff frequency is equal to the radial distance of the poles or zeros from the origin of the \(s\)-plane. For information on sketching the Bode magnitude plot from the poles and zeros, refer to Understanding Poles and Zeros [MIT 3.1].
Nyquist plot
The Nyquist plot displays both amplitude and phase angle on a single plot, using the angular frequency as the parameter. It helps visualize whether a system is stable or unstable.
Plot the frequency transfer function for \(-\infty\lt\omega\lt\infty\), indicating an increase of frequency using an arrow. A dashed line is used for negative frequencies. (The plot was generated using the GNU/Octave code shown in Appendix A.)
Nyquist diagram
From the plot we see that for \(\omega=0\) the gain is 1, and for \(\omega\to\infty\) the gain becomes 0. The high frequency portion of the plot approaches the origin at an angle of -90 degrees. For more information on Nyquist refer to Determining Stability using the Nyquist Plot [swarthmore].
Derives the unit step response of the RC low-pass filter. Part of a series about the properties of the RC low-pass filter.
Unit Step Response
The step response gives an impression of the system behavior when the input signal goes from \(0\) to \(1\) volt at time \(t=0\). This input is called the Unit Step Function, here represented by \(u(t)=\gamma(t)\).
Split up this complicated fraction into forms that are in the Laplace Transform table. According to Heaviside, this can be expressed as partial fractions. [swarthmore, MIT-cu]
Substitute \(K=-p\) from the transfer function and find expressions for the constants \(c_{0,1}\), by multiplying with respectively \(s\) and \((s-p)\)
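As a sketch of that decomposition, with \(K=-p\),
$$
Y(s)=\frac{-p}{s\,(s-p)}=\frac{c_0}{s}+\frac{c_1}{s-p},\qquad c_0=1,\quad c_1=-1
\quad\Rightarrow\quad
y(t)=\left(1-\mathrm{e}^{p t}\right)\gamma(t)
\nonumber
$$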
Derives the bandwidth and Q-factor of an RLC resonator. Visualizes the Bode magnitude for different values of \(\zeta\). Part of the article RLC resonator.
Bandwidth and Q-factor
Oscillators with a high quality factor oscillate within a smaller range of frequencies and are therefore more stable. The quality factor is defined as the natural frequency \(\omega_n\) multiplied by the ratio of the maximum energy stored to the power loss. The maximum energy stored can be calculated from the maximum energy in the inductor or capacitor. The equation below uses the maximum energy in the inductor, \(LI_{rms}^2\). At the natural frequency \(\omega_n\), the impedances of the capacitor and inductor cancel each other and power is only dissipated in the resistor, \(RI_{rms}^2\).
The Q factor also relates to the frequencies \(\omega_1\) and \(\omega_2\) where the dissipated power drops to half of its value at resonance. Consequently, the magnitude of the transfer function \(H(s)\) equals \(\frac{1}{\sqrt{2}}\) (-3dB), as shown in the illustration below
Power
The half-power bandwidth BW follows from solving the equation \(|H(j\omega)|=\frac{1}{\sqrt{2}}\)
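For the series RLC resonator considered here (consistent with the Q-factor below), this works out to
$$
\rm{BW}=\omega_2-\omega_1=\frac{R}{L}
\nonumber
$$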
The Q factor equals the ratio of resonant frequency \(\omega_n\) to half power bandwidth \(\omega_2-\omega_1\).
$$
Q = \frac{\omega_n}{\omega_2-\omega_1}=\frac{1}{R}\sqrt{\frac{L}{C}}
= \frac{1}{2\zeta}
\label{eq:qfactor2}
$$
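For example, with component values such as those in the Octave listing for the low-pass filter (\(L=47\,\rm{mH}\), \(C=47\,\rm{nF}\), \(R=220\,\Omega\)):
$$
Q=\frac{1}{R}\sqrt{\frac{L}{C}}=\frac{1}{220}\sqrt{\frac{47\times10^{-3}}{47\times10^{-9}}}\approx 4.5,
\qquad \zeta=\frac{1}{2Q}\approx 0.11
\nonumber
$$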
High quality factor \(Q>1\) results in a sharp resonance peak.
Bode magnitude for different \(\zeta\)
Note that the frequency-dependent definition can be used to describe circuits with a single capacitor or inductor, as opposed to the frequency-to-bandwidth ratio definition.
Shows the frequency response of an RLC resonator in the overdamped, critically-damped and underdamped cases. Part of the article RLC resonator.
Frequency response
The damping coefficient \(\zeta\) determines the behavior of the system. With the physical assumption that \(\frac{1}{LC}\gt 0\) and \(\frac{R}{L}\geq0\), we can identify the following classes of pole locations.
Effect of the damping coefficient on system behavior:
\(R\gt2\sqrt\frac{L}{C}\), \(\zeta\gt1\): poles at different locations on the negative real axis (overdamped)
\(R=2\sqrt\frac{L}{C}\), \(\zeta=1\): poles coincide on the negative real axis (critically damped)
\(R\lt 2\sqrt\frac{L}{C}\), \(\zeta\lt1\): complex conjugate poles in the left half of the \(s\)-plane (underdamped)
The remainder of this post determines the frequency response for each of these classes.
Two Different Real Poles (overdamped case)
In the overdamped case, the two poles lie at separate locations on the negative real axis.
Note that \(p_1\lt p_2\lt 0\) and \(|p_1|\gt |p_2|\), as visualized in the s-plane
\(s\)-plane
The frequency response is the magnitude (or gain) as a function of the frequency. It describes how well the filter can distinguish between different frequency signals.
A cosinusoidal input signal \(u_i(t)\) with angular frequency \(\omega\), amplitude \(1\) and with the value 1 at \(t=0\), can be expressed as
Therefore, the frequency response may be written in terms of the system poles and zeros by substituting \(j\omega\) for \(s\) directly into the factored form of the transfer function
The poles and zero may be interpreted as vectors in the s-plane, originating from the zero or poles \(p_i\) and directed to the point \(s=j\omega\) at which the function is to be evaluated
Evaluated at \(s=j\omega\)
The transfer function can be expressed in polar form
The frequency response has -20 dB/decade drop-offs, and a relatively wide band-pass for \(|p_2|\lt \omega\lt |p_1|\).
Frequency response
Coinciding Real Poles (critically-damped case)
In the critically-damped case the two poles coincide on the negative real axis.
$$
p = -\frac{R}{2L},\ R=2\sqrt\frac{L}{C}
$$
The poles and zero lie on the real axis, with \(p\lt 0\), as visualized in the s-plane
\(s\)-plane
The frequency response may be written in terms of the system poles and zeros by substituting \(j\omega\) for \(s\) directly into the factored form of the transfer function
The poles and zero may be interpreted as vectors in the s-plane, originating from the poles \(p\) or zero \(z=0\) and directed to the point \(s=j\omega\) at which the function is to be evaluated
Evaluated at \(s=j\omega\)
The transfer function can be expressed in polar form as
The poles and zero may be interpreted as vectors in the s-plane, originating from the poles \(p_i\) and zero and directed to the point \(s=j\omega\) at which the function is to be evaluated
Evaluated at \(s=j\omega\)
The transfer function can be expressed in polar form as
The graph shows the magnitude of the output for different values of \(R\). Note the voltage amplification around the natural frequency \(\omega_n\). The magnitude of the frequency response has -20 dB/decade drop-offs and a sharp resonance at \(|p|\).
My notes of the excellent lectures 21 and 22 by “Walter Lewin. 8.02 Electricity and Magnetism. Spring 2002. The material is neither affiliated with nor endorsed by MIT, https://youtube.com. License: Creative Commons BY-NC-SA.”
Electric fields can induce electric dipoles in materials. When the molecules or atoms themselves are permanent electric dipoles, an external electric field will try to align them. The degree of success depends entirely on how strong the external electric field is, and on the temperature. If the temperature is low, there is very little thermal agitation, making it easier to align those dipoles.
We have a similar situation with magnetic fields. An external magnetic field can induce magnetic dipoles in materials. It induces magnetic dipoles at the atomic scale. When the atoms/molecules themselves have a permanent magnetic dipole moment, the external field will try to align these dipoles. Again, the degree of success depends on the strength of the external field, and again on the temperature. The lower the temperature, the easier it is to align them.
So the material modifies the external field. We often call this external field, the vacuum field. When you bring material into a vacuum field, the field changes. The field inside is different from the external field.
Magnetic dipole moment
If we have a current in a loop, and the current is running clockwise as seen from below, and the area is \(A\), then the magnetic dipole moment \(\vec\mu\)
$$
\vec \mu = I\,\vec A
\tag{dipole moment}
\label{eq:dipolemoment}
$$
We define \(\vec A\) according to the right-hand corkscrew rule. With the current running clockwise as seen from below, \(\vec A\) is perpendicular to the surface and points upwards, so the magnetic dipole moment \(\vec\mu\) is also pointing upwards.
If we have \(N\) of these loops, then the magnetic dipole moment will be \(N\) times larger.
Diamagnetism
When you expose any material to a permanent external magnetic field, it will, to some degree, oppose that external field. On an atomic scale, the material will generate an EMF that opposes the external field. This has nothing to do with Lenz's law, and nothing to do with the free electrons in conductors that produce an eddy current in a changing magnetic field.
In other words: when we apply a permanent magnetic field, to any material, a magnetic dipole moment is induced to oppose that field. This can only be understood with quantum mechanics. Here, we’ll make no attempt to explain it, but we will accept it.
The magnetic field inside the material is always a little bit smaller than the external field, because the dipoles oppose the external field.
Paramagnetism
There are many substances where the atoms/molecules themselves have magnetic dipole moments. You can think of them as being little magnets. If you have no external field, then these dipoles are completely chaotically oriented. So the net magnetic field is zero. They are not permanent magnets.
If you bring a paramagnetic material in a magnetic field, its atomic magnetic dipoles will move their north poles a little bit in the direction of the external magnetic field. They align a little with the external field. The degree of success depends on the strength of that field and the temperature. The lower the temperature, the easier it is. If you remove the external field, immediately there is complete, total chaos. There is no permanent magnetism left.
In non-uniform field
If you bring a paramagnetic material in a non-uniform magnetic field, it will be pulled towards the strong side of that field. Suppose we have a magnet, and we bring paramagnetic material in its field. Let’s consider just one atom of the material.
Very much not to scale
The atom's magnetic dipole moment would like to align in the direction that supports the field. So if we look from above, the current in this atom/molecule runs in the clockwise direction. That would be the ideal alignment. This current loop will be attracted to the magnet. Let's look at a point on the left: at that point, the current goes into the screen, and the magnetic field points diagonally.
The Lorentz force will be in the direction \(\vec I\times\vec B\), pointing up and towards the left. At a point on the right, the force will be up and to the right. So everywhere around this loop, there is a force pointing diagonally outwards and up. Clearly, there is a net force up: the material wants to go towards the magnet.
Essential is that the external magnetic field is non-uniform. (In diamagnetic material, the current would be running in the opposite direction, because it opposes the external field.)
The exception
Diamagnetic materials are always pushed towards the weak part of the field. Paramagnetic and ferromagnetic materials experience a force towards the strong part of the field, if the field itself is non-uniform.
There is one interesting exception. Oxygen at one atmosphere and 300 K has a \(\chi_m\) of \(2\times10^{-6}\). But for liquid oxygen at 90 K, the \(\chi_m\) is about 1,800 times larger. Why?
Liquid, in general, is about a thousand times denser than gas at one atmosphere. So you have a thousand times more dipoles per cubic meter that, in principle, can align, and you expect a roughly 1:1 correspondence between the density and the value of \(\chi_m\). Indeed, you see that this value is substantially larger. The reason it is more than a factor of 1000 larger is that the temperature is also lower; that gives us another factor of about two.
Even though the value of \(\chi_m\) is extraordinarily high for a paramagnetic material, notice that the field inside would only be 0.35% higher than the vacuum field. But that is enough for liquid oxygen to be attracted by a very strong magnet, provided that it has a very non-uniform field outside the magnet. So the force with which liquid oxygen is pulled towards a magnet can be made larger than the weight of the liquid oxygen.
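That percentage is the susceptibility itself: with \(\chi_m=3.5\times10^{-3}\), the field inside is
$$
B_\text{inside}=(1+\chi_m)\,B_{vac}=(1+3.5\times10^{-3})\,B_{vac}\approx 1.0035\,B_{vac}
\nonumber
$$
i.e. about \(0.35\%\) above the vacuum field.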
Ferromagnetism
Again the atoms have permanent magnetic dipole moments. But this time, for reasons which can only be understood with quantum mechanics, there are domains the size of about \(0.1\) to \(0.3\,\rm{mm}\).
Domains
In the domains, the dipoles are all aligned. The number of atoms involved in such a domain is typically \(10^{17}\ldots10^{21}\) atoms. The domains are uniformly distributed throughout the ferromagnetic material. So there may not be any net magnetic field.
When we apply an external field, these domains will be forced to align themselves with the magnetic field. The domains as a whole can flip. The degree of success depends on the strength of the field and the temperature. The lower the temperature, the better, because there is less thermal agitation, which adds a certain randomness to the process. Inside the ferromagnetic material, the magnetic field can be thousands of times stronger than the external field.
If we remove the external field, some of those domains may stay aligned in the direction that the external field was forcing them. Undoubtedly some domains will flip back, because of the temperature, but some may remain oriented and therefore the material, once it has been exposed to an external magnetic field, may have become permanently magnetic.
One way to remove that permanent magnetism is to bang on it with a hammer; then these domains get very nervous and randomize themselves. Or you can heat the material up to undo the orientation of the domains. The domains themselves will remain, but they average out so as not to produce any permanent magnetic field.
In non-uniform field
For the same reason that paramagnetic material is pulled towards the strong field, ferromagnetic material will also be pulled towards the strong field. Except in the case of ferromagnetism, the forces with which the material is pulled towards the magnet are much larger than with paramagnetic material.
If we take a paperclip, you can hang it on the south pole of your magnet, or on the north pole: ferromagnetic material is always pulled towards the strong field. If we hang a few of those paperclips on there, and carefully and slowly remove them, you may notice that the paperclips themselves have become magnetic. You cannot do this with paramagnetic material, because the forces involved are only a few percent of the weight of the material itself. So if you try it with aluminum, it will not stick to the magnet.
Ferromagnetic material will be pulled, with huge forces, towards a strong magnetic field, provided that the magnetic field is non-uniform. When it has a strong gradient, it is pulled towards the strong side.
Dependencies
Relative permeability
In all cases, whether we have diamagnetic, paramagnetic or ferromagnetic material, the magnetic field inside is different from what the field would be without the material. In many cases, but not all, the field inside the material is proportional to the vacuum field. This proportionality constant is called the relative permeability, \(\kappa_m\).
$$
\vec B_\text{inside} = \kappa_m \, \vec B_\text{vac}
\tag{\(\kappa_m\)}
\label{eq:kappam}
$$
Now we can look at these values for the relative permeability, and we can understand the difference between diamagnetic material, paramagnetic material and ferromagnetic material.
Magnetic susceptibility
In the case of diamagnetic and paramagnetic material, the \(\vec B\)-field inside is only slightly different from the vacuum field. It is common to express \(\kappa_m\) as \(1\) plus something that we call the magnetic susceptibility, \(\chi_m\). Because \(\kappa_m\) is very close to \(1\), it is easier to simply list \(\chi_m\).
$$
\kappa_m = 1 + \chi_m
\nonumber
$$
Diamagnetic materials
For diamagnetic materials, the values for \(\chi_m\) are all negative, so \(\kappa_m\) is slightly smaller than \(1\): the induced dipoles oppose the external field.
Diamagnetic materials:
\(\rm{Bi}\): \(\chi_m = -1.7\times10^{-4}\)
\(\rm{Cu}\): \(\chi_m = -10^{-5}\)
\(\rm{H_2O}\): \(\chi_m = -10^{-5}\)
\(\rm{N_2}\,(1\,\rm{atm})\): \(\chi_m = -7\times 10^{-9}\)
Paramagnetic materials
For paramagnetic materials, \(\chi_m\) is positive, and again the values are small. Inside the material, the magnetic field is a little larger than the vacuum field.
Paramagnetic materials:
\(\rm{Al}\): \(\chi_m = +2\times10^{-5}\) at \(\approx 300\,\rm{K}\)
\(\rm{O_2}\,(1\,\rm{atm})\): \(\chi_m = +2\times10^{-6}\) at \(\approx 300\,\rm{K}\)
\(\rm{O_2}\,(\rm{liquid})\): \(\chi_m = +3.5\times10^{-3}\) at \(90\,\rm{K}\)
Ferromagnetic materials
For ferromagnetic materials, it would be absurd to list \(\chi_m\) because it is so large that you can forget about the \(1\). So \(\chi_m\) is about the same as \(\kappa_m\)
$$
\chi_m \approx \kappa_m \approx 10^2 \ldots 10^5
\nonumber
$$
If \(\kappa_m\) is ten thousand, you would have a field inside the ferromagnetic material that is \(10,000\times\) larger than the vacuum field. There is a limit, that we discuss next time.
The three most common ferromagnetic materials are cobalt, nickel and iron. Gadolinium is ferromagnetic in the winter, when the temperature is below 16 °C, but is paramagnetic in the summer.
Curie point
So paramagnetic and ferromagnetic properties depend on the temperature. (Diamagnetic properties do not depend on the temperature.)
At very low temperatures, there is very little thermal agitation, so it is easier to align those dipoles, and the values for \(\kappa_m\) will be different. If you cool ferromagnetic material, you expect \(\kappa_m\) to go up: you get a stronger field inside.
If you make the material very hot, it can lose its ferromagnetic properties completely. At a very precise temperature the domains fall apart; the domains themselves no longer exist. That is also something that you need quantum mechanics to understand. We call this the Curie temperature. For iron this is 1043 K (770 °C), where all of a sudden all the domains disappear and the material becomes paramagnetic.
In other words, if ferromagnetic material would be hanging on a magnet and you heat it up above the Curie point, it will fall off.
Maximum dipole moment
As we have seen: in paramagnetic and ferromagnetic materials, \(\kappa_m\) is the result of the intrinsic dipoles of atoms/molecules aligning with the external field.
The question is: how large can the magnetic dipole moment of a single atom be? How strong a field can we have inside ferromagnetic material? In other words, if we were able to align all the dipole moments of all the atoms, what is the maximum field that we can achieve?
Bohr magneton
To calculate the magnetic dipole moment of an atom, you have to do some quantum mechanics, and that is beyond our scope. We will derive it in a classical way.
Assuming a hydrogen atom, with a proton in the center, and an electron with orbit radius \(R\). The electron \(e^-\) moves with velocity \(v\), so the current goes in the opposite direction.
The mass, charge and Bohr radius of an electron
$$
\begin{align*}
m_e &= 9.1\times10^{-31}\,\rm{kg} \\
e &= 1.6\times 10^{-19}\,\rm{C} \\
R &= 5\times 10^{-11}\,\rm{m}
\end{align*}
$$
The current running around the proton creates a magnetic field, so the dipole moment \(\vec\mu\) is upwards
Recall the magnetic dipole moment from equation \(\eqref{eq:dipolemoment}\)
$$
\vec\mu = I \vec A
\nonumber
$$
The area \(A\) is simply
$$
A = \pi R^2 \approx 8\times10^{-21}\,\rm{m^2}
\nonumber
$$
For the current we have to combine knowledge of Newtonian mechanics and electromagnetism. The electron goes around because the proton and electron attract each other, so there is a Coulomb force \(\vec F\).
The Coulomb force is
$$
\begin{align*}
F &= \frac{q_1\,q_2}{4\pi\varepsilon_0\,r^2} \\
&= \frac{e^2}{4\pi\varepsilon_0\,r^2}
\end{align*}
$$
From Newtonian mechanics we know that the centripetal force (the force that holds the electron in orbit) is
$$
F = \frac{m\,v^2}{R}
\nonumber
$$
Combining these, allows us to calculate the velocity of the electron
$$
\begin{align*}
\frac{e^2}{4\pi\varepsilon_0\,R^{\cancel{2}}} &= \frac{m\,v^2}{\cancel{R}} \\
\implies v &= \sqrt{\frac{e^2}{m\,4\pi\,\varepsilon_0\,R}} \\
&\approx 2.3 \times 10^{6}\,\rm{m/s}
\end{align*}
\nonumber
$$
To find the current, we first find the time \(T\) that it takes the electron to go around
$$
\begin{align*}
T &= \frac{2\pi\,R}{v} \\
&\approx 1.4\times10^{-16}\,\rm{sec}
\end{align*}
$$
At any one point on the orbit, the electron goes by every \(1.4\times10^{-16}\) seconds. The definition of current is charge per unit time
$$
\begin{align*}
I &= \frac{e}{T} \\
&= 1.1\times 10^{-3}\,\rm{A}
\end{align*}
$$
This is a lot. One electron going around a proton represents a current of a milliampere! Now we have the magnetic moment \(\mu\)
$$
\begin{align*}
\mu &= I\,A \\
&\approx (1.1\times 10^{-3})(8\times10^{-21})
\end{align*}
$$
This \(\mu_b\) is called the Bohr magneton
$$
\shaded{
\mu_b\approx 9.3\times 10^{-24}\,\rm{Am^2}
}
\tag{Bohr magneton}
$$
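As a quick numerical cross-check of the numbers above, a few lines of GNU/Octave in the style of the filter appendices (the variable names are just for this sketch):
# classical estimate of the Bohr magneton, using the values from this section
me   = 9.1e-31;   # electron mass [kg]
qe   = 1.6e-19;   # elementary charge [C]; not named 'e', which Octave uses for Euler's number
R    = 5e-11;     # orbit radius [m]
eps0 = 8.85e-12;  # vacuum permittivity [F/m]
v  = sqrt(qe^2/(4*pi*eps0*me*R))  # orbital speed, ~2.3e6 m/s
T  = 2*pi*R/v                     # orbital period, ~1.4e-16 s
I  = qe/T                         # equivalent current, ~1.1e-3 A
mu = I*pi*R^2                     # magnetic dipole moment, ~9e-24 A m^2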
What we can’t understand with our current knowledge, but will with quantum mechanics, is that the magnetic moment of all electrons in orbit can only be a multiple of this number, nothing in between. It is quantization. It includes even zero, which is even harder to understand.
Spin
In addition to the dipole moment due to the electron going around the proton, the electron itself is a charge that spins around its own axis. That means that a charge is going around on the scale of the spinning electron. That spin magnetic dipole moment always has the same value, about one Bohr magneton \(\mu_b\).
So the net magnetic dipole moment of an atom or molecule is now the vectorial sum of all these dipole moments, of all these electrons going around, the orbital dipole moments, and now you have to add these spin dipoles.
Some of these cancel each other out. The net result is that most atoms/molecules have dipole moments that are either one or two Bohr magnetons. That is what we will use when discussing how strong a field we can create if we align all those magnetic dipoles.
The magnetic field \(\vec B\) that is produced inside a material when we expose it to an external field is the vacuum field that we can create with a solenoid, plus what we will call \(\vec B^\prime\)
$$
\vec B = \vec B_{vac} + \vec B^\prime
\tag{net field}
\label{eq:netfield}
$$
Here \(\vec B^\prime\) is the result of aligning these dipoles. The degree of success depends on the strength of the external field and of course the temperature.
A big if
If, and it is a big if, \(\vec B^\prime\) is linearly proportional to \(\vec B_{vac}\), then we can write what we saw earlier
$$
\begin{align*}
\vec B^\prime &= \chi_m\,\vec B_{vac}, & (\vec B = \vec B_{vac} + \vec B^\prime) \\
\implies \vec B &= (1 + \chi_m) \vec B_{vac} \\
&= \kappa_m\,\vec B_{vac}
\end{align*}
$$
This is only a meaningful equation if the sum of the alignment of all these dipoles can be written as linearly proportional with the external field. Let’s explore that in more detail.
Saturation
With paramagnetic material the linearity always holds up, but with ferromagnetic material that is not the case. With ferromagnetic material it is relatively easy to align those dipoles, because they already group in domains and the domains flip in unison.
As we will see, with ferromagnetic material we can go into what we call saturation, where all the dipoles are aligned in the same direction. That leaves the question: how strong would that field be?
We will make a rough calculation that gives a pretty good feeling for the numbers. We choose a material whereby the magnetic dipole moment is two Bohr magnetons
$$
\mu = 2\,\mu_b
\nonumber
$$
We take the situation where they’re all aligned. The illustration shows the electron path around the nuclei in a solid material, so the atoms are nicely packed
All the dipole moments are nicely aligned, so all the magnetic fields support each other. We want to find the magnetic field in one atom. Note that this looks like a solenoid, where you have windings and currents going around.
$$
B = \mu_0\,I\,\underline{\frac{N}{l}}
\nonumber
$$
We need to figure out what would be the factor \(\frac{N}{l}\). Let’s take a material where the atom density \(\mathcal N\) is
$$
\mathcal N = 10^{29}\,\rm{atoms/m^3}
\nonumber
$$
Now we have to introduce the magnetic moment, the two Bohr magnetons.
We take a length of one meter. Each loop has area \(A\), so the volume of this “solenoid” is
$$
\rm{vol} = A\,\rm{m^2}\times 1\,\rm{m} = A\,\rm{m^3}
\nonumber
$$
But the number of atoms per cubic meter is \(\mathcal N\), so the number of atoms (windings) in this solenoid per meter is
$$
\rm{windings} = A\,\mathcal N
\nonumber
$$
Now the factor \(A\mathcal N\) is our factor \(\frac{N}{l}\), so for this assumed material we can write
$$
\begin{align*}
B &= \mu_0 \, \underline{I \, A}\, \mathcal N, & (IA = \mu = 2\mu_b) \\
&= \mu_0\,2\mu_b\,\mathcal N \\
&\approx (1.25\times10^{-6})\,2\,(9.3\times10^{-24})(10^{29}) \\
&\approx 2.3\,\rm{T}
\end{align*}
$$
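The same arithmetic in a few lines of GNU/Octave, using the values from this section:
# saturation field for the assumed material with 2 Bohr magnetons per atom
mu0 = 4*pi*1e-7;   # vacuum permeability [T m/A]
mub = 9.3e-24;     # Bohr magneton [A m^2]
N   = 1e29;        # atom density [atoms/m^3]
Bsat = mu0 * 2*mub * N   # ~2.3 T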
Let’s use this number and understand what’s going to happen in that ferromagnetic material.
Inside the ferromagnetic material
If we take ferromagnetic material and expose it to an external field, a vacuum field. So we stick it in a solenoid, and we choose the current through the solenoid. The vacuum field is linearly proportional to the current through the solenoid
$$
B = \mu_0\,I\,\underline{\frac{N}{l}}
\nonumber
$$
Don't confuse this with the atomic-scale magnetic field.
When we stick the ferromagnetic material inside the solenoid, the magnetic field there is \(B\)
We can't plot it on a 1:1 scale because \(\kappa_m\) for ferromagnetic material is so large, say \(1000\): the field inside will be \(1000\times\) higher. If it were to scale, then \(\tan\alpha = \kappa_m \approx 10^3\).
Not on a 1:1-scale
At the beginning we get a nice linear curve, but slowly we begin to reach saturation, where all these dipoles are going to be aligned. What you see then is that the curve bends over and finally reaches the \(2.3\,\rm{T}\) that we just calculated for our imaginary material, plus \(B_{vac}\).
We will now call the \(2.3\,\rm{T}\) field \(B^\prime\). This is the field that is the result of the alignment of all those dipoles. So when I increase the vacuum field, \(B^\prime\) goes into saturation and settles at \(2.3\,\rm{T}\); it can no longer increase because all the magnetic dipoles are aligned.
So at point \(A\) the field is no longer \(1000\times\) stronger than the vacuum field. We’re no longer in the linear part. You could also think of it as \(\kappa_m\) being smaller than a thousand. It is no longer proportional to the value of one thousand.
If the temperature of the material is lower, it is easier to align the dipoles, so you will achieve saturation earlier and the curve will rise more steeply towards saturation.
Not on a 1:1-scale
When it reaches point \(B\), where \(B^\prime\) goes into saturation, we can only increase the \(B\)-field in the material by increasing the vacuum field, because \(B^\prime\) is not going to go up again. The total field will still go up, but very slowly, because the huge multiplication factor of \(1000\times\) is gone.
Hysteresis curve
What happens once we have driven the material into saturation? What happens when we change the current and make the vacuum field zero again? Now you get a very unusual behavior.
Let's assume that a positive current creates a vacuum field to the right, and a negative (reversed) current creates a vacuum field to the left.
Varying the external field
O to B – This part of the curve is called “virginal”. Again, we start with zero current and increase it until saturation. At that point \(B\), all the domains have flipped in the direction of the vacuum field. The material itself is now magnetic. We have created permanent magnetism.
B to P – We reduce the current to zero, so the vacuum field drops back to zero. At point \(P\)
\(B_{vac}=0\), but
\(B^\prime\) is still to the right, so
the net field \(B\) is to the right.
Here we have something remarkable: the vacuum field is zero, but there is still a magnetic field inside the material. The material has become permanently magnetic.
P to Q – We reverse the current, and slowly increase it from zero. So the vacuum field is now to the left. This brings us to location \(Q\), where we have something bizarre: the vacuum field is to the left, but there is no magnetic field inside the material.
\(B_{vac}\) is to the left, but
\(B^\prime\) is still to the right (because the domains are still aligned to the right), so
the net field \(B\) is zero
Q to R – We keep increasing the (reverse) current, so the vacuum field remains to the left. At point \(R\), the material goes into saturation again.
R to S – We bring the current back to zero and arrive at point \(S\). There we have permanent magnetism again, because some domains stay aligned towards the left.
the vacuum field is zero, and
\(B^\prime\) points to the left.
Again we have permanent magnetism.
S to P – We reverse the current back to the positive direction, and increase it until the material is back in saturation.
Memory
We call this the hysteresis curve. For a given value of the current (vacuum field), we have two possibilities for the magnetic field \(B\). When I expose this material to an external field, I can't calculate what the magnetic field inside the material will be: it depends on the history of the material.
\(\kappa_m\)
Looking at a point like \(Q\) and its counterpart on the other branch, asking for \(\kappa_m\) is almost a ridiculous question, because \(\kappa_m\) is defined in equation \(\eqref{eq:kappam}\)
At those points we have a vacuum field, but no net field inside, so \(\kappa_m\) has to be zero. In the second and fourth quadrant it is even more bizarre, because \(\kappa_m\) has to be negative.
Removing the permanent magnetism
To make the material virginal again:
One way is taking the material out and heating it up above the Curie point, so the domains completely fall apart. Then you cool it again below the Curie point.
Another way is banging it with a hammer and hoping for the best.
A third way is demagnetization. That is what normally happens when you check out a book at the library; when you steal a book, the alarm goes off because nobody has demagnetized the magnetic strip in the book. To demagnetize, one slowly reduces an AC current through the solenoid.
Effect on external field
If we bring ferromagnetic material in the vicinity of a magnet, we change the magnetic field configuration. Suppose we have a magnet and we bring ferromagnetic material close by
The material will see the vacuum field, so its domains try to align a bit. It will get a south and north pole and create a field in the same direction. It will support the external field. The net result is that the field inside the material becomes very strong.
The external field lines will get sucked into the ferromagnetic material. The external field elsewhere will weaken.
The last Maxwell’s equation
Let’s look at the Maxwell equations as we have them so far.
Gauss’s law
$$
\newcommand{oiint}{\subset\!\!\supset\kern-1.65em\iint}
\color{blue}
\phi = \oiint_S \vec E \cdot d\vec A = \frac{\sum_i Q_i}{\kappa\,\varepsilon_0}
\nonumber
$$
The electric flux through a closed surface is equal to all the charge inside divided by \(\kappa \varepsilon_0\).
With electric fields, the \(\kappa\) always lowers the field inside the material. But nothing is going to change here.
Gauss’s law for magnetism
$$
\newcommand{oiint}{\subset\!\!\supset\kern-1.65em\iint}
\color{brown}
\oiint_S \vec B \cdot d\vec A = 0
\nonumber
$$
This tells us that magnetic monopoles don’t exist. (Or at least we think they don’t exist.)
Faraday’s law
$$
\color{green}
\oint_C \vec E\cdot d\vec l = -\frac{d}{dt}\iint_R \vec B \cdot d\vec A
\nonumber
$$
When you move conducting loops in magnetic fields, you create electricity. This doesn’t require any adjustment in terms of \(\kappa_m\) either.
Ampère’s law amended by Maxwell
$$
\color{purple}
\oint \vec B \cdot d\vec l = \mu_0\left(I_\rm{encl} + \varepsilon_0\kappa \frac{d}{dt} \iint_S \vec E\cdot d\vec A \right)
\nonumber
$$
It tells us the magnetic field in vacuum. Now we know that is no longer the whole story.
Ampère’s law with Maxwell’s addition needs to be adjusted by a factor of \(\kappa_m\), the relative permeability.
$$
\shaded{
\oint \vec B \cdot d\vec l = \textcolor{purple}{\kappa_m}\, \mu_0\left(I_\rm{encl} + \varepsilon_0\kappa \frac{d}{dt} \iint_S \vec E\cdot d\vec A \right)
}
\tag{Ampère/Maxwell add}
\label{eq:maxwell4}
$$
This \(\kappa_m\) is perfectly kosher for paramagnetic and diamagnetic materials. But with ferromagnetic material, you have to be very careful, as we have seen with the hysteresis phenomenon. There are even situations where \(\kappa_m\) is negative, where \(\kappa_m\) is zero, and where \(\kappa_m\) can be as high as \(10^3\). So we have to be very careful when applying this equation without thinking.
This moment is very special, because we have all four Maxwell’s equations in place. Hope you can appreciate them.