Feedback Control of Dynamic Systems_P2

Published 2023-12-19 22:39:28 · Author: 李白的白

Problems for Section 5.4: Design Using Dynamic Compensation

5.21 Let

\[G(s) = \frac{1}{s^{2} + 7s + 12}\ \text{~}\text{and}\text{~}\ D_{c}(s) = K\frac{(s + a)}{s + b} \]

Using root-locus techniques, find the values for the parameters \(a,b\), and \(K\) of the compensation \(D_{c}(s)\) that will produce closed-loop poles at \(s = - 1.5 \pm 1.5j\) for the system shown in Fig. 5.59.

Figure 5.59

Unity feedback system

for Problems 5.21-5.27 and 5.32
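As a numerical sanity check of the kind of answer Problem 5.21 asks for, the root-locus magnitude and angle conditions can be verified directly at the desired pole. The parameter set below (\(a = 4\), \(b = 0\), \(K = 4.5\), obtained by cancelling the plant pole at \(s = -4\)) is one consistent choice used purely for illustration, not necessarily the intended design:

```python
# Numerical check of the root-locus conditions for Problem 5.21.
# The parameters a = 4, b = 0, K = 4.5 are one consistent choice
# (obtained by cancelling the plant pole at s = -4), shown only as
# an illustration of the check, not as the unique answer.
s0 = complex(-1.5, 1.5)             # desired closed-loop pole

def G(s):
    return 1.0 / (s**2 + 7*s + 12)  # plant poles at s = -3, -4

def Dc(s, K=4.5, a=4.0, b=0.0):
    return K * (s + a) / (s + b)

L = Dc(s0) * G(s0)                  # loop gain at the desired pole
# At a closed-loop pole, 1 + Dc(s)G(s) = 0, i.e. L = -1.
print(abs(L + 1))                   # ~0 -> s0 lies on the locus at this K
```

The same check works for any candidate \((a, b, K)\): if \(D_c(s_0)G(s_0) = -1\), then \(s_0\) is a closed-loop pole.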

5.22 Suppose in Fig. 5.59

\[G(s) = \frac{1}{s\left( s^{2} + 3s + 7 \right)}\ \text{~}\text{and}\text{~}\ D(s) = \frac{K}{s + 3} \]

Without using Matlab, sketch the root locus with respect to \(K\) of the characteristic equation for the closed-loop system, paying particular attention to points that generate multiple roots. Find the value of \(K\) at that point, state the location of the multiple roots, and determine how many multiple roots there are.
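In the spirit of the no-Matlab instruction, the real-axis multiple-root (breakaway) point can be located with a brute-force scan: on the real-axis locus segment, \(K(s) = -s(s+3)(s^2+3s+7)\), and a multiple root occurs where \(K(s)\) is stationary. A minimal sketch in plain Python (this finds only the real-axis candidate; complex multiple roots would require the full \(dK/ds = 0\) analysis):

```python
# Scan the real-axis locus segment (-3, 0) for the stationary point of
# K(s) = -s*(s+3)*(s**2 + 3*s + 7); the maximum marks a breakaway point.
def K_of(s):
    return -s * (s + 3) * (s**2 + 3*s + 7)

best_s = max((-3 + 3*i/100000 for i in range(1, 100000)), key=K_of)
print(round(best_s, 3), round(K_of(best_s), 4))   # s = -1.5, K = 10.6875
```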

5.23 Suppose the unity feedback system of Fig. 5.59 has an open-loop plant given by \(G(s) = \frac{1}{s(s + 1)}\). Design a lead compensation \(D_{c}(s) = K\frac{s + z}{s + p}\) to be added in cascade with the plant so that the dominant poles of the closed-loop system are located at \(s = - 3.2 \pm 3.2j\).

5.24 Assume that the unity feedback system of Fig. 5.59 has the open-loop plant

\[G(s) = \frac{s + 7}{s(s + 9)(s + 5)} \]

Design a lag compensation \(D_{c}(s) = K\frac{(s - z)}{s - p}\) to meet the following specifications:

  • The step response rise time is to be less than \(0.45sec\).

  • The step response overshoot is to be less than \(5\%\).

  • The steady-state error to a unit ramp input must not exceed \(10\%\).

5.25 A numerically controlled machine tool positioning servomechanism has a normalised and scaled transfer function given by

\[G(s) = \frac{1}{(s + 0.8)(s + 0.5)} \]

Performance specifications of the system in the unity feedback configuration of Fig. 5.59 are satisfied if the closed-loop poles are located at \(s = - 1 \pm j2\).

(a) Show that this specification cannot be achieved by choosing proportional control alone, \(D_{c}(s) = k_{p}\).

(b) Design a lead compensator \(D_{c}(s) = K\frac{s - z}{s - p}\) that will meet the specification.

5.26 A servomechanism position control has the plant transfer function

\[G(s) = \frac{10}{s(s + 1)(s + 10)}. \]

You are to design a series compensation transfer function \(D_{c}(s)\) in the unity feedback configuration to meet the following closed-loop specifications:

  • The response to a reference step input is to have no more than \(16\%\) overshoot.

Figure 5.60

Elementary magnetic suspension

  • The response to a reference step input is to have a rise time of no more than \(0.4sec\).

  • The steady-state error to a unit ramp at the reference input must be less than 0.05.

(a) Design a lead compensation that will cause the system to meet the dynamic response specifications, ignoring the error requirement.

(b) What is the velocity constant \(K_{v}\) for your design? Does it meet the error specification?

(c) Design a lag compensation to be used in series with the lead you have designed to cause the system to meet the steady-state error specification.

(d) Give the Matlab plot of the root locus of your final design.

(e) Give the Matlab response of your final design to a reference step.

5.27 Assume the closed-loop system of Fig. 5.59 has a feed forward transfer function \(G(s)\) given by

\[G(s) = \frac{1}{(s + 1)(s + 2)} \]

Design a lag compensation so that the dominant poles of the closed-loop system are located at \(s = - 1.5 \pm j1.5\) and the steady-state error to a unit step input is less than 0.1.

5.28 An elementary magnetic suspension scheme is depicted in Fig. 5.60. For small motions near the reference position, the voltage \(e\) on the photo detector is related to the ball displacement \(x\) (in meters) by \(e = 100x\). The upward force (in newtons) on the ball caused by the current \(i\) (in amperes) may be approximated by \(f = 0.5i + 20x\). The mass of the ball is \(20\text{ }g\) and the gravitational force is \(9.8\text{ }N/kg\). The power amplifier is a voltage-to-current device with an output (in amperes) of \(i = u + V_{0}\).

(a) Write the equations of motion for this set up.

(b) Give the value of the bias \(V_{0}\) that results in the ball being in equilibrium at \(x = 0\).

(c) What is the transfer function from \(u\) to \(e\) ?

(d) Suppose that the control input \(u\) is given by \(u = - Ke\). Sketch the root locus of the closed-loop system as a function of \(K\).

(e) Assume a lead compensation is available in the form \(\frac{U}{E} = D_{c}(s) =\) \(K\frac{s + z}{s + p}\). Give values of \(K,z\), and \(p\) that yield improved performance over the one proposed in part (d).
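Parts (b) and (c) can be spot-checked numerically from the linearized equation \(m\ddot{x} = 0.5(u + V_0) + 20x - mg\) with \(e = 100x\). The sketch below uses only the constants given in the problem statement; treat it as a check of the algebra rather than a full solution:

```python
# Equilibrium bias and u -> e transfer function for the magnetic levitator,
# from m*xdd = 0.5*(u + V0) + 20*x - m*g with e = 100*x (a sketch of the
# algebra using the constants in the problem statement).
m, g = 0.020, 9.8                 # kg, N/kg
V0 = m * g / 0.5                  # 0.5*V0 must cancel the weight m*g
print(round(V0, 3))               # 0.392 (amperes)

def E_over_U(s):
    # m*s^2*X = 0.5*U + 20*X  =>  X/U = 0.5/(m*s^2 - 20), and e = 100*x
    return 100 * 0.5 / (m * s**2 - 20)

# Equivalent normalized form: 2500/(s^2 - 1000), unstable pole at +sqrt(1000)
s = complex(0, 1)
assert abs(E_over_U(s) - 2500 / (s**2 - 1000)) < 1e-9
```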

5.29 A certain plant with the non-minimum phase transfer function

\[G(s) = \frac{7 - 3s}{s^{2} + s + 5} \]

is in a unity positive feedback system with the controller transfer function \(D_{c}(s)\).

(a) Use Matlab to determine a (negative) value for \(D_{c}(s) = K\) so that the closed-loop system with negative feedback has a damping ratio \(\zeta = 0.707\).

(b) Use Matlab to plot the system's response to a reference step.

5.30 Consider the rocket-positioning system shown in Fig. 5.61.

(a) Show that if the sensor that measures \(x\) has a unity transfer function, the lead compensator

\[H(s) = K\frac{s + 3}{s + 6} \]

stabilizes the system.

(b) Assume that the sensor transfer function is modelled by a single pole with a \(0.1sec\) time constant and unit DC gain, and \(H(s)\) is a PD controller with transfer function \(K(s + 1)\). Using the root-locus procedure, find a value for the gain \(K\) that provides the maximum damping ratio while keeping the step-response settling time as low as possible.

Figure 5.61

Block diagram for rocket-positioning control system

5.31 For the system in Fig. 5.62,

(a) Find the locus of closed-loop roots with respect to \(K\).

(b) Find the maximum value of \(K\) for which the system is stable. Assume \(K = 0.5\) for the remaining parts of this problem.

(c) What is the steady-state error \((e = r - y)\) for a step change in \(r\) ?

(d) What is the steady-state error in \(y\) for a constant disturbance \(w_{1}\) ?

(e) What is the steady-state error in \(y\) for a constant disturbance \(w_{2}\) ?

Figure 5.62

Control system for Problem 5.31


5.32 Consider the plant transfer function

\[G(s) = \frac{bs + k}{s^{2}\left\lbrack mMs^{2} + (M + m)bs + (M + m)k \right\rbrack} \]

to be put in the unity feedback loop of Fig. 5.59. This is the transfer function relating the input force \(u(t)\) of mass \(M\) in the non-collocated sensor and actuator problem. In this problem, we will use root-locus techniques to design a controller \(D_{c}(s)\) so that the closed-loop step response has a rise time of less than \(0.5sec\) and an overshoot of less than \(15\%\). You may use Matlab for any of the following questions:

(a) Approximate \(G(s)\) by assuming that \(m \approx 0\), and let \(M = 1,k = 1\), \(b = 0.5\), and \(D_{c}(s) = K\). Can \(K\) be chosen to satisfy the performance specifications? Why or why not?
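For part (a), note that setting \(m = 0\) collapses the bracketed factor to \(M(bs + k)\), which cancels the numerator and leaves \(G(s) \approx 1/(Ms^2)\), a pure double integrator. A quick numerical check of this limit:

```python
# For small m, G(s) = (b*s+k)/(s^2*(m*M*s^2 + (M+m)*b*s + (M+m)*k))
# approaches 1/(M*s^2): the (b*s + k) factor cancels.
M, k, b = 1.0, 1.0, 0.5
s = complex(-0.4, 1.1)                        # arbitrary test point

def G(s, m):
    return (b*s + k) / (s**2 * (m*M*s**2 + (M + m)*b*s + (M + m)*k))

exact_limit = 1.0 / (M * s**2)
print(abs(G(s, 1e-9) - exact_limit) < 1e-6)   # True: G -> 1/(M s^2)
```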

(b) Repeat part (a) assuming \(D_{c}(s) = K(s + z)\), and show that \(K\) and \(z\) can be chosen to meet the specifications.

(c) Repeat part (b) but with a practical controller given by the transfer function

\[D_{c}(s) = K\frac{p(s + z)}{s + p} \]

and using the value for \(z\) found in part (b), pick \(p\) and \(K\) so that a step response similar to that of part (b) is obtained.

(d) Now suppose that the small mass \(m\) is not negligible, but is given by \(m = M/10\). Check to see whether the controller you designed in part (c) still meets the given specifications. If not, adjust the controller parameters or suggest a new controller so that the specifications are met.

5.33 Consider the Type 1 system drawn in Fig. 5.63. We would like to design the compensation \(D_{c}(s)\) to meet the following requirements: (1) The steady-state value of \(y\) due to a constant unit disturbance \(w\) should be less than 0.1, and (2) the damping ratio \(\zeta > 0.7\). Using root-locus techniques,

(a) Show that proportional control alone is not adequate.

(b) Show that proportional-derivative control will work.

(c) Find values of the gains \(k_{p}\) and \(k_{D}\) for \(D_{c}(s) = k_{p} + k_{D}s\) that meet the design specifications with at least \(10\%\) margin.

Figure 5.63

Control system for Problem 5.33

\(\bigtriangleup \ 5.34\) Using a sample rate of \(10\text{ }Hz\), find the \(D_{c}(z)\) that is the discrete equivalent to your \(D_{c}(s)\) from Problem 5.7 using the trapezoid rule. Evaluate the time response using Simulink, and determine whether the damping ratio requirement is met with the digital implementation. (Note: The material to do this problem is covered in the online Appendix W4.5 at www.pearsonglobaleditions.com or in Chapter 8.)

Problems for Section 5.5: A Design Example Using the Root Locus

5.35 Consider the positioning servomechanism system shown in Fig. 5.64, where

\[\begin{matrix} e_{i} & \ = K_{pot}\theta_{i},\ e_{o} = K_{pot}\theta_{o},\ K_{pot} = 10\text{ }V/rad, \\ T & \ = \text{~}\text{motor torque}\text{~} = K_{t}i_{a}, \\ k_{m} & \ = K_{t} = \text{~}\text{torque constant}\text{~} = 0.1\text{ }N \cdot m/A, \\ K_{e} & \ = \text{~}\text{back emf constant}\text{~} = 0.1\text{ }V \cdot sec, \\ R_{a} & \ = \text{~}\text{armature resistance}\text{~} = 10\Omega, \\ \text{~}\text{Gear ratio}\text{~} & \ = 1:1, \\ J_{L} + J_{m} & \ = \text{~}\text{total inertia}\text{~} = 10^{- 3}\text{ }kg \cdot m^{2}, \\ v_{a} & \ = K_{A}\left( e_{i} - e_{o} \right). \end{matrix}\]

Figure 5.64

Positioning servomechanism

(a) What is the range of the amplifier gain \(K_{A}\) for which the system is stable? Estimate the upper limit graphically using a root-locus plot.

(b) Choose a gain \(K_{A}\) that gives roots at \(\zeta = 0.7\). Where are all three closed-loop root locations for this value of \(K_{A}\) ?

5.36 We wish to design a velocity control for a tape-drive servomechanism. The transfer function from current \(I(s)\) to tape velocity \(\Omega(s)\) (in millimeters per millisecond per ampere) is

\[\frac{\Omega(s)}{I(s)} = \frac{23\left( s^{2} + 0.5s + 0.7 \right)}{(s + 1)\left( s^{2} + 0.7s + 1 \right)} \]

We wish to design a Type 1 feedback system so that the response to a reference step satisfies

\[t_{r} \leq 6msec,\ t_{s} \leq 15msec,\ M_{p} \leq 0.15 \]
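The specifications can be translated to an approximate region of acceptable closed-loop pole locations with the usual second-order rules of thumb (\(t_r \approx 1.8/\omega_n\), \(t_s \approx 4.6/\sigma\), and the standard \(M_p\)–\(\zeta\) relation). A sketch of that mapping, approximate by nature:

```python
import math

# Map the step-response specs to an s-plane region with the usual
# second-order rules of thumb: t_r ~ 1.8/wn, t_s ~ 4.6/sigma, and
# M_p = exp(-pi*zeta/sqrt(1 - zeta^2)).  (Approximations only.)
t_r, t_s, M_p = 6.0, 15.0, 0.15      # msec, msec, fraction

wn_min = 1.8 / t_r                   # minimum natural frequency, rad/msec
sigma_min = 4.6 / t_s                # minimum pole real-part magnitude
zeta_min = -math.log(M_p) / math.hypot(math.pi, math.log(M_p))

print(round(wn_min, 3))              # 0.3 rad/msec
print(round(sigma_min, 3))           # 0.307 (1/msec)
print(round(zeta_min, 3))            # 0.517
```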

(a) Use the integral compensator \(k_{I}/s\) to achieve Type 1 behavior, and sketch the root locus with respect to \(k_{I}\). Show on the same plot the region of acceptable pole locations corresponding to the specifications. Is the integral compensator able to help satisfy all the specifications?

Figure 5.65

Figure of cart pendulum for Problem 5.37

(b) Assume a proportional-integral compensator of the form \(k_{p}(s + \alpha)/s\), and select the best possible values of \(k_{p}\) and \(\alpha\) you can find. Sketch the root-locus plot of your design, giving values for \(k_{p}\) and \(\alpha\), and the velocity constant \(K_{v}\) your design achieves. On your plot, indicate the closed-loop poles with a dot \(( \bullet )\) and include the boundary of the region of acceptable root locations.

5.37 The normalized, scaled equations of a cart as drawn in Fig. 5.65 of mass \(m_{c}\) holding an inverted uniform pendulum of mass \(m_{p}\) and length \(\ell\) with no friction are

\[\begin{matrix} \ddot{\theta} - \theta & \ = - v, \\ \ddot{y} + \beta\theta & \ = v, \end{matrix}\]

where \(\beta = \frac{3m_{p}}{4\left( m_{c} + m_{p} \right)}\) is a mass ratio bounded by \(0 < \beta < 0.75\). Time is measured in terms of \(\tau = \omega_{o}t\), where \(\omega_{o}^{2} = \frac{3g\left( m_{c} + m_{p} \right)}{\ell\left( 4m_{c} + m_{p} \right)}\). The cart motion \(y\) is measured in units of pendulum length as \(y = \frac{3x}{4\ell}\), and the input is force normalized by the system weight, \(v = \frac{u}{g\left( m_{c} + m_{p} \right)}\). These equations can be used to compute the transfer functions

\[\begin{matrix} & \frac{\Theta}{V} = - \frac{1}{s^{2} - 1}, \\ & \frac{Y}{V} = \frac{s^{2} - 1 + \beta}{s^{2}\left( s^{2} - 1 \right)}. \end{matrix}\]

In this problem, you are to design a control for the system by first closing a loop around the pendulum, Eq. (5.89), then, with this loop closed, closing a second loop around the cart plus pendulum, Eq. (5.90). For this problem, let the mass ratio be \(m_{c} = 5m_{p}\).
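The two transfer functions can be checked numerically against the state equations at an arbitrary complex frequency; with \(m_c = 5m_p\), the mass ratio is \(\beta = 1/8\). A minimal consistency check:

```python
# Numerical consistency check: the transfer functions Theta/V and Y/V
# should satisfy the normalized equations thdd - th = -v and
# ydd + beta*th = v.  beta = 3*m_p/(4*(m_c + m_p)) = 1/8 for m_c = 5*m_p.
beta = 3 / (4 * 6)                  # = 0.125
s = complex(0.7, 1.3)               # arbitrary test point

Theta = -1 / (s**2 - 1)                         # Theta/V, Eq. (5.89)
Y = (s**2 - 1 + beta) / (s**2 * (s**2 - 1))     # Y/V, Eq. (5.90)

assert abs((s**2 - 1) * Theta + 1) < 1e-12      # thdd - th = -v
assert abs(s**2 * Y + beta * Theta - 1) < 1e-12 # ydd + beta*th = v
print("transfer functions consistent")
```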

(a) Draw a block diagram for the system with \(V\) input and both \(Y\) and \(\theta\) as outputs.

(b) Design a lead compensation \(D_{c}(s) = K\frac{s + z}{s + p}\) for the \(\Theta\) loop to cancel the pole at \(s = - 1\) and place the two remaining poles at \(- 4 \pm j4\). The new control is \(U(s)\), where the force is \(V(s) = U(s) + D_{c}(s)\Theta(s)\). Draw the root locus of the angle loop.

(c) Compute the transfer function of the new plant from \(U\) to \(Y\) with \(D_{c}(s)\) in place.

(d) Design a controller \(D_{c}(s)\) for the cart position with the pendulum loop closed. Draw the root locus with respect to the gain of \(D_{c}(s)\).

(e) Use Matlab to plot the control, cart position, and pendulum position for a unit step change in cart position.

5.38 Consider the 270-ft U.S. Coast Guard cutter Tampa (902) shown in Fig. 5.66(a). Parameter identification based on sea-trials data (Trankle, 1987) was used to estimate the hydrodynamic coefficients in the equations of motion. The result is that the response of the heading angle of the ship \(\psi\) to rudder angle \(\delta\) and wind changes \(w\) can be described by the block diagram in Fig. 5.66(b) and the transfer functions

\[\begin{matrix} & G_{\delta}(s) = \frac{\psi(s)}{\delta(s)} = \frac{- 0.0184(s + 0.0068)}{s(s + 0.2647)(s + 0.0063)} \\ & G_{w}(s) = \frac{\psi(s)}{w(s)} = \frac{0.0000064}{s(s + 0.2647)(s + 0.0063)} \end{matrix}\]

where

\[\begin{matrix} \psi & \ = \text{~}\text{heading angle, rad}\text{~}, \\ \psi_{r} & \ = \text{~}\text{reference heading angle, rad,}\text{~} \\ r & \ = \text{~}\text{yaw rate,}\text{~}\overset{˙}{\psi},rad/sec, \\ \delta & \ = \text{~}\text{rudder angle,}\text{~}rad, \\ w & \ = \text{~}\text{wind speed}\text{~},m/sec. \end{matrix}\]

(a)

(b)

Figure 5.66

(a) USCG Tampa for Problem 5.38, (b) partial block diagram for the system

(a) Determine the open-loop settling time of \(r\) for a step change in \(\delta\).

(b) In order to regulate the heading angle \(\psi\), design a compensator that uses \(\psi\) and the measurement provided by a yaw-rate gyroscope (that is, by \(\overset{˙}{\psi} = r\) ). The settling time of \(\psi\) to a step change in \(\psi_{r}\) is specified
to be less than \(50sec\), and for a \(5^{\circ}\) change in heading, the maximum allowable rudder angle deflection is specified to be less than \(10^{\circ}\).

(c) Check the response of the closed-loop system you designed in part (b) to a wind gust disturbance of \(10\text{ }m/sec\). (Model the disturbance as a step input.) If the steady-state value of the heading due to this wind gust is more than \({0.5}^{\circ}\), modify your design so it meets this specification as well.

5.39 Golden Nugget Airlines has opened a free bar in the tail of their airplanes in an attempt to lure customers. In order to automatically adjust for the sudden weight shift due to passengers rushing to the bar when it first opens, the airline is mechanizing a pitch-attitude autopilot. Figure 5.67 shows the block diagram of the proposed arrangement. We will model the passenger moment as a step disturbance \(M_{p}(s) = M_{0}/s\), with a maximum expected value for \(M_{0}\) of 0.6.

(a) Assuming the bar has opened, and the passengers have rushed to it, what value of \(K\) is required to keep the steady-state error in \(\theta\) to less than \(0.02rad\left( \cong 1^{\circ} \right)\) ? (Assume the system is stable.)

(b) Draw a root locus with respect to \(K\).

(c) Based on your root locus, what is the value of \(K\) when the system becomes unstable?

(d) Suppose the value of \(K\) required for acceptable steady-state behavior is 600. Show that this value yields an unstable system with roots at

\[s = - 2.9, - 13.5, + 1.2 \pm 6.6j \]

(e) You are given a black box with rate gyro written on the side, and told that, when installed, it provides a perfect measure of \(\overset{˙}{\theta}\), with output \(K_{T}\overset{˙}{\theta}\). Assume \(K = 600\) as in part (d) and draw a block diagram indicating how you would incorporate the rate gyro into the autopilot. (Include transfer functions in boxes.)

(f) For the rate gyro in part (e), sketch a root locus with respect to \(K_{T}\).

(g) What is the maximum damping factor of the complex roots obtainable with the configuration in part (e)?

(h) What is the value of \(K_{T}\) for part (g)?

Figure 5.67

Golden Nugget Airlines autopilot

(i) Suppose you are not satisfied with the steady-state errors and damping ratio of the system with a rate gyro in parts (e) through (h). Discuss the advantages and disadvantages of adding an integral term and extra lead networks in the control law. Support your comments using Matlab or with rough root-locus sketches.

5.40 Consider the instrument servomechanism with the parameters given in Fig. 5.68. For each of the following cases, draw a root locus with respect to the parameter \(K\), and indicate the location of the roots corresponding to your final design:

(a) Lead network: Let

\[H(s) = 1,\ D_{c}(s) = K\frac{s + z}{s + p},\ \frac{p}{z} = 6 \]

Select \(z\) and \(K\) so the roots nearest the origin (the dominant roots) yield

\[\zeta \geq 0.4,\ - \sigma \leq - 7,\ K_{v} \geq 16\frac{2}{3}\sec^{- 1} \]

(b) Output-velocity (tachometer) feedback: Let

\[H(s) = 1 + K_{T}s\ \text{~}\text{and}\text{~}\ D_{c}(s) = K \]

Select \(K_{T}\) and \(K\) so the dominant roots are in the same location as those of part (a). Compute \(K_{v}\). If you can, give a physical reason explaining the reduction in \(K_{v}\) when output derivative feedback is used.

(c) Lag network: Let

\[H(s) = 1\ \text{~}\text{and}\text{~}\ D_{c}(s) = K\frac{s + 1}{s + p} \]

Using proportional control, is it possible to obtain a \(K_{v} = 12\) at \(\zeta = 0.4\) ? Select \(K\) and \(p\) so the dominant roots correspond to the proportional-control case but with \(K_{v} = 100\) rather than \(K_{v} = 12\).

Figure 5.68

Control system for

Problem 5.40

5.41 For the quadrotor shown in Figs. 2.13 and 2.14 (see Example 2.5),

(a) Describe what the commands should be to rotors 1, 2, 3, and 4 in order to produce a yaw torque, \(T_{\psi}\), that has no effect on pitch or roll and will not produce any net vertical thrust from the four rotors. In other words, find the relation between \(\delta T_{1},\delta T_{2},\delta T_{3},\delta T_{4}\) so that \(T_{\psi}\) produces the desired response.

(b) The system dynamics for the yaw motion of a quadrotor are given in Eq. (2.17). Assuming the value of \(I_{z} = 200\), find a compensation that gives a rise time of less than 0.2 seconds with an overshoot of less than \(20\%\).

Problems for Section 5.6: Extensions of the Root Locus Method

5.42 Plot the \(0^{\circ}\), or negative \(K\), locus for each of the following:

(a) The examples given in Problem 5.3

(b) The examples given in Problem 5.4

(c) The examples given in Problem 5.5

(d) The examples given in Problem 5.6

(e) The examples given in Problem 5.7

(f) The examples given in Problem 5.8

5.43 Suppose you are given the plant

\[L(s) = \frac{1}{s^{3} + 5s^{2} + (4 + \alpha)s + (1 + 2\alpha)} \]

where \(\alpha\) is a system parameter that is subject to variations. Use both positive and negative root-locus methods to determine what variations in \(\alpha\) can be tolerated before instability occurs.
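As a cross-check on the locus result, the characteristic polynomial can be rearranged as \((s^3 + 5s^2 + 4s + 1) + \alpha(s + 2) = 0\), and Routh's criterion applied directly to the cubic. The sketch below suggests the system stays stable for \(\alpha > -1/2\), which is worth confirming against your locus:

```python
# Routh test for s^3 + 5 s^2 + (4 + alpha) s + (1 + 2*alpha).
# A cubic a3 s^3 + a2 s^2 + a1 s + a0 is stable iff all coefficients are
# positive and a2*a1 > a3*a0.
def stable(alpha):
    a3, a2, a1, a0 = 1.0, 5.0, 4.0 + alpha, 1.0 + 2.0 * alpha
    return a2 > 0 and a1 > 0 and a0 > 0 and a2 * a1 > a3 * a0

print(stable(0.0))     # True  (nominal system)
print(stable(-0.499))  # True  (just above the boundary)
print(stable(-0.5))    # False (a0 = 0: a root reaches the origin)
print(stable(100.0))   # True  (large alpha: 5*104 > 201)
```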

\(\bigtriangleup \ \mathbf{5.44}\) Consider the system in Fig. 5.69.

(a) Use Routh's criterion to determine the regions in the \(K_{1},K_{2}\) plane for which the system is stable.

(b) Use sisotool to verify your answer to part (a).

Figure 5.69

Feedback system for

Problem 5.44

\(\bigtriangleup \ \mathbf{5.45}\) The block diagram of a positioning servomechanism is shown in Fig. 5.70.

(a) Sketch the root locus with respect to \(K\) when no tachometer feedback is present \(K_{T} = 0\).

Figure 5.70

Control system for

Problem 5.45

Figure 5.71

Control system for Problem 5.46

(b) Indicate the root locations corresponding to \(K = 16\) on the locus of part (a). For these locations, estimate the transient-response parameters \(t_{r}\), \(M_{p}\), and \(t_{s}\). Compare your estimates to measurements obtained using the step command in Matlab.

(c) For \(K = 16\), draw the root locus with respect to \(K_{T}\).

(d) For \(K = 16\) and with \(K_{T}\) set so \(M_{p} = 0.05\ (\zeta = 0.707)\), estimate \(t_{r}\) and \(t_{s}\). Compare your estimates to the actual values of \(t_{r}\) and \(t_{s}\) obtained using Matlab.

(e) For the values of \(K\) and \(K_{T}\) in part (d), what is the velocity constant \(K_{v}\) of this system?

\(\bigtriangleup \ \mathbf{5.46}\) Consider the mechanical system shown in Fig. 5.71, where \(g\) and \(a_{0}\) are gains. The feedback path containing \(gs\) controls the amount of rate feedback. For a fixed value of \(a_{0}\), adjusting \(g\) corresponds to varying the location of a zero in the \(s\)-plane.

(a) With \(g = 0\) and \(\tau = 1\), find a value for \(a_{0}\) such that the poles are complex.

(b) Fix \(a_{0}\) at this value, and construct a root locus that demonstrates the effect of varying \(g\).

\(\bigtriangleup \ \mathbf{5.47}\) Sketch the root locus with respect to \(K\) for the system in Fig. 5.72 using the Padé(1,1) approximation and the first-order lag approximation. For both approximations, what is the range of values of \(K\) for which the system is unstable? (Note: The material to answer this question is contained in Appendix W5.6.3 at www.pearsonglobaleditions.com.)

Figure 5.72

Control system for Problem 5.47

\(\bigtriangleup \mathbf{5.48}\) For the equation \(1 + KG(s) = 0\), where

\[G(s) = \frac{1}{s(s + p)\left\lbrack (s + 1)^{2} + 4 \right\rbrack} \]

use Matlab to examine the root locus as a function of \(K\) for \(p\) in the range from \(p = 1\) to \(p = 10\), making sure to include the point \(p = 2\).

The Frequency-Response Design Method

A Perspective on the Frequency-Response Design Method

The design of feedback control systems in industry is probably accomplished using frequency-response methods more often than any other. Frequency-response design is popular primarily because it provides good designs in the face of uncertainty in the plant model. For example, for systems with poorly known or changing high-frequency resonances, we can temper the feedback compensation to alleviate the effects of those uncertainties. Currently, this tempering is carried out more easily using frequency-response design than with any other method.

Another advantage of using frequency response is the ease with which experimental information can be used for design purposes. Raw measurements of the output amplitude and phase of a plant undergoing a sinusoidal input excitation are sufficient to design a suitable feedback control. No intermediate processing of the data (such as finding poles and zeros or determining system matrices) is required to arrive at the system model. The wide availability of computers has rendered this advantage less important now than it was years ago; however, for relatively simple systems, frequency response is often still the most cost-effective design method. Yet another advantage is that specifications for control systems are typically provided in terms of a system's frequency-response characteristics. Therefore, design in the frequency domain directly ensures that the specifications are met rather than having to transform them to other parameters.

The underlying theory for determining stability in all situations is somewhat challenging and requires a rather broad knowledge of complex variables. However, the methodology of frequency-response design does not require that the designer remembers the details of the theory and the stability rules are fairly straightforward.

Chapter Overview

The chapter opens with a discussion of how to obtain the frequency response of a system by analyzing its poles and zeros. An important extension of this discussion is how to use Bode plots to graphically display the frequency response. In Sections 6.2 and 6.3, we will discuss stability briefly, then in more depth the use of the Nyquist stability criterion. In Sections 6.4 through 6.6, we will introduce the notion of stability margins, discuss Bode's gain-phase relationship, and study the closed-loop frequency response of dynamic systems. The gain-phase relationship suggests a very simple rule for compensation design: Shape the frequency-response magnitude so it crosses magnitude 1 with a slope of \(-1\). As with our treatment of the root-locus method, we will describe how adding dynamic compensation can adjust the frequency response (see Section 6.7) and improve system stability and/or error characteristics.

In optional Sections 6.7.7 and 6.7.8, we will discuss issues of sensitivity that relate to the frequency response, including material on sensitivity functions and stability robustness. The next two sections on analyzing time delays in the system and Nichols charts will represent additional, somewhat advanced material that may also be considered optional. The final Section 6.10 is a short history of the frequency-response design method.

6.1 Frequency Response

The basic concepts of frequency response were discussed in Section 3.1.2. In this section, we will review those ideas and extend the concepts for use in control system design.

A linear system's response to sinusoidal inputs - called the system's frequency response - can be obtained from knowledge of its pole and zero locations.

To review the ideas, we consider a system described by

\[\frac{Y(s)}{U(s)} = G(s) \]

where the input \(u(t)\) is a sine wave with an amplitude \(A\) :

\[u(t) = A\sin\left( \omega_{o}t \right)1(t) \]

This sine wave has a Laplace transform

\[U(s) = \frac{A\omega_{o}}{s^{2} + \omega_{o}^{2}} \]

With zero initial conditions, the Laplace transform of the output is

\[Y(s) = G(s)\frac{A\omega_{o}}{s^{2} + \omega_{o}^{2}} \]

A partial-fraction expansion of Eq. (6.1) [assuming that the poles of \(G(s)\) are distinct] will result in an equation of the form

\[Y(s) = \frac{\alpha_{1}}{s - p_{1}} + \frac{\alpha_{2}}{s - p_{2}} + \cdots + \frac{\alpha_{n}}{s - p_{n}} + \frac{\alpha_{o}}{s + j\omega_{o}} + \frac{\alpha_{o}^{*}}{s - j\omega_{o}} \]

where \(p_{1},p_{2},\ldots,p_{n}\) are the poles of \(G(s),\alpha_{o}\) would be found by performing the partial-fraction expansion, and \(\alpha_{o}^{*}\) is the complex conjugate of \(\alpha_{o}\). The time response that corresponds to \(Y(s)\) is

\[y(t) = \alpha_{1}e^{p_{1}t} + \alpha_{2}e^{p_{2}t} + \cdots + \alpha_{n}e^{p_{n}t} + 2\left| \alpha_{o} \right|\cos\left( \omega_{o}t + \phi \right),\ t \geq 0 \]

where

\[\phi = \tan^{- 1}\left\lbrack \frac{Im\left( \alpha_{o} \right)}{Re\left( \alpha_{o} \right)} \right\rbrack \]

If all the poles of the system represent stable behavior (the real parts of \(p_{1},p_{2},\ldots,p_{n} < 0\)), the natural unforced response will die out eventually, and therefore the steady-state response of the system will be due solely to the sinusoidal term in Eq. (6.3), which is caused by the sinusoidal excitation. Example 3.7 determined the response of the system \(G(s) = \frac{1}{(s + 1)}\) to the input \(u = \sin 10t\) and showed that response in Fig. 3.5, which is repeated here as Fig. 6.1. It shows that \(e^{- t}\), the natural part of the response associated with \(G(s)\), disappears after several time constants, and the pure sinusoidal response is essentially all that remains. Example 3.7 showed that the remaining sinusoidal term in Eq. (6.3) can be expressed as

\[y(t) = AM\cos\left( \omega_{o}t + \phi \right) \]

Figure 6.1

Response of \(G(s) = \frac{1}{(s + 1)}\) to an input of \(\sin 10t\)

where

\[\begin{matrix} M & \ = \left| G\left( j\omega_{o} \right) \right| = |G(s)|_{s = j\omega_{o}} \\ & \ = \sqrt{\left\{ Re\left\lbrack G\left( j\omega_{o} \right) \right\rbrack \right\}^{2} + \left\{ Im\left\lbrack G\left( j\omega_{o} \right) \right\rbrack \right\}^{2}}, \\ \phi & \ = \tan^{- 1}\left\lbrack \frac{Im\left\lbrack G\left( j\omega_{o} \right) \right\rbrack}{Re\left\lbrack G\left( j\omega_{o} \right) \right\rbrack} \right\rbrack = \angle G\left( j\omega_{o} \right). \end{matrix}\]

In polar form,

\[G\left( j\omega_{o} \right) = Me^{j\phi} \]

Equation (6.4) shows that a stable system with transfer function \(G(s)\) excited by a sinusoid with unit amplitude and frequency \(\omega_{o}\) will, after the response has reached steady-state, exhibit a sinusoidal output with a magnitude \(M\left( \omega_{o} \right)\) and a phase \(\phi\left( \omega_{o} \right)\) at the frequency \(\omega_{o}\). The facts that the output \(y\) is a sinusoid with the same frequency as the input \(u\), and that the magnitude ratio \(M\) and phase \(\phi\) of the output are independent of the amplitude \(A\) of the input, are a consequence of \(G(s)\) being a linear constant system. If the system being excited were a nonlinear or time-varying system, the output might contain frequencies other than the input frequency, and the output-input ratio might be dependent on the input magnitude.
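For the system of Fig. 6.1, \(G(s) = 1/(s+1)\) driven at \(\omega_o = 10\), the magnitude and phase follow directly from Eqs. (6.5) and (6.6). A short numerical confirmation (Python used here in place of Matlab):

```python
import cmath
import math

# Evaluate M = |G(jw)| and phi = angle(G(jw)) for G(s) = 1/(s + 1) at
# w = 10, matching the steady-state sinusoidal response of Fig. 6.1.
def G(s):
    return 1 / (s + 1)

w = 10.0
Gjw = G(complex(0, w))
M = abs(Gjw)                          # = 1/sqrt(1 + w^2) ~ 0.0995
phi = math.degrees(cmath.phase(Gjw))  # = -atan(w) ~ -84.3 degrees

print(round(M, 4), round(phi, 1))     # 0.0995 -84.3
```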

More generally, the magnitude \(M\) is given by \(|G(j\omega)|\), and the phase \(\phi\) is given by \(\angle G(j\omega)\); that is, the magnitude and angle of the complex quantity \(G(s)\) are evaluated with \(s\) taking on values along the imaginary axis \((s = j\omega)\). The frequency response of a system consists of these functions of frequency that tell us how a system will respond to a sinusoidal input of any frequency. We are interested in analyzing the frequency response not only because it will help us understand how a system responds to a sinusoidal input, but also because evaluating \(G(s)\) with \(s\) taking on values along the \(j\omega\) axis will prove to be very useful in determining the stability of a closed-loop system. As we saw in Chapter 3,
the \(j\omega\) axis is the boundary between stability and instability; we will see in Section 6.4 that evaluating \(G(j\omega)\) provides information that allows us to determine closed-loop stability from the open-loop \(G(s)\).

Frequency-Response Characteristics of a Capacitor

Consider the capacitor described by the equation

\[i = C\frac{dv}{dt} \]

where \(v\) is the voltage input and \(i\) is the current output. Determine the sinusoidal steady-state response of the capacitor.

Solution. The transfer function of this circuit is

\[\frac{I(s)}{V(s)} = G(s) = Cs \]

so

\[G(j\omega) = Cj\omega. \]

Computing the magnitude and phase, we find that

\[M = |Cj\omega| = C\omega\ \text{~}\text{and}\text{~}\ \phi = \angle(Cj\omega) = 90^{\circ}. \]

For a unit-amplitude sinusoidal input \(v\), the output \(i\) will be a sinusoid with magnitude \(C\omega\), and the phase of the output will lead the input by \(90^{\circ}\). Note for this example, the magnitude is proportional to the input frequency while the phase is independent of frequency.
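The same evaluation can be carried out numerically. The capacitance value below is an arbitrary assumption for illustration; the conclusions (magnitude proportional to \(\omega\), phase fixed at \(90^{\circ}\)) do not depend on it:

```python
import cmath
import math

# Frequency response of G(s) = C*s for a capacitor.
C = 1e-6                              # farads (assumed value for illustration)

def G(s):
    return C * s

for w in (10.0, 1000.0):
    Gjw = G(complex(0, w))
    M, phi = abs(Gjw), math.degrees(cmath.phase(Gjw))
    print(w, M, phi)                  # M = C*w grows with w; phi = 90.0 always
```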

Recall from Chapter 5 [see Eq. (5.70)] the transfer function of the lead compensation, which is equivalent to

\[D_{c}(s) = K\frac{Ts + 1}{\alpha Ts + 1},\ \alpha < 1 \]

  1. Analytically determine its frequency-response characteristics and discuss what you would expect from the result.

  2. Use Matlab to plot \(D_{c}(j\omega)\) with \(K = 1\), \(T = 1\), and \(\alpha = 0.1\) for \(0.1 \leq \omega \leq 100\), and verify the features predicted from the analysis in 1. above.

Solution

  1. Analytical evaluation: Substituting \(s = j\omega\) into Eq. (6.8), we get

\[D_{c}(j\omega) = K\frac{Tj\omega + 1}{\alpha Tj\omega + 1}. \]

From Eqs. (6.5) and (6.6), the amplitude is

\[M = \left| D_{c} \right| = |K|\frac{\sqrt{1 + (\omega T)^{2}}}{\sqrt{1 + (\alpha\omega T)^{2}}}, \]

and the phase is given by

\[\begin{matrix} \phi & \ = \angle(1 + j\omega T) - \angle(1 + j\alpha\omega T) \\ & \ = \tan^{- 1}(\omega T) - \tan^{- 1}(\alpha\omega T). \end{matrix}\]

At very low frequencies, the amplitude is just \(|K|\), and at very high frequencies, it is \(|K/\alpha|\). Therefore, the amplitude is higher at very high frequency. The phase is zero at very low frequencies and goes back to zero at very high frequencies. At intermediate frequencies, evaluation of the \(\tan^{- 1}( \cdot )\) functions would reveal that \(\phi\) becomes positive. These are the general characteristics of lead compensation.

  2. Computer evaluation: A Matlab script for frequency-response evaluation was shown for Example 3.6. A similar script for the lead compensation:

s = tf('s');

sysD = (s + 1)/(s/10 + 1);

w = logspace(-1,2);  % determines frequencies over range of interest

[mag, phase] = bode(sysD,w);  % computes magnitude and phase over frequency range of interest

loglog(w,squeeze(mag)), grid;

axis([0.1 100 1 10])

semilogx(w,squeeze(phase)), grid;

axis([0.1 100 0 60])

produces the frequency-response magnitude and phase plots shown in Fig. 6.2.

The analysis indicated that the low-frequency magnitude should be \(K( = 1)\) and the high-frequency magnitude should be \(K/\alpha( = 10)\), which are both verified by the magnitude plot. The phase plot also verifies that the value approaches zero at high and low frequencies, and that the intermediate values are positive.
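The limiting values derived in part 1 can also be checked numerically. The following Python sketch (the chapter's scripts use Matlab) evaluates \(D_c(j\omega)\) with the same \(K = 1\), \(T = 1\), and \(\alpha = 0.1\):

```python
import cmath

K, T, alpha = 1.0, 1.0, 0.1  # values from part 2 of the example

def lead(omega):
    """Frequency response of Dc(jw) = K*(T*jw + 1)/(alpha*T*jw + 1)."""
    D = K * (T * 1j * omega + 1) / (alpha * T * 1j * omega + 1)
    return abs(D), cmath.phase(D) * 180 / cmath.pi

# Low-frequency magnitude approaches |K| = 1; high-frequency approaches |K/alpha| = 10
print(lead(1e-4)[0])   # ~1
print(lead(1e4)[0])    # ~10
# Phase is positive in between, peaking near w = 1/(T*sqrt(alpha)), ~55 degrees here
print(lead(1 / (T * alpha ** 0.5))[1])
```

The peak-phase frequency \(1/(T\sqrt{\alpha})\) is the standard lead-compensator result; the printed values match the plot in Fig. 6.2.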

In the cases for which we do not have a good model of the system, and wish to determine the frequency-response magnitude and phase experimentally, we can excite the system with a sinusoid varying in frequency. The magnitude \(M(\omega)\) is obtained by measuring the ratio of the output sinusoid to input sinusoid in the steady-state at each frequency. The phase \(\phi(\omega)\) is the measured difference in phase between input and output signals. \(\ ^{1}\)

Figure 6.2: (a) Magnitude; (b) phase for the lead compensation in Example 6.2

A great deal can be learned about the dynamic response of a system from knowledge of the magnitude \(M(\omega)\) and the phase \(\phi(\omega)\) of its transfer function. In the obvious case, if the signal is a sinusoid, then \(M\) and \(\phi\) completely describe the response. Furthermore, if the input is periodic, then a Fourier series can be constructed to decompose the input into a sum of sinusoids, and again \(M(\omega)\) and \(\phi(\omega)\) can be used with each component to construct the total response. For transient inputs, our best path to understanding the meaning of \(M\) and \(\phi\) is to relate the frequency response \(G(j\omega)\) to the transient responses calculated by the Laplace transform. For example, in Fig. 3.19(b), we plotted the step response of a system having the transfer function

\[G(s) = \frac{1}{\left( s/\omega_{n} \right)^{2} + 2\zeta\left( s/\omega_{n} \right) + 1}, \]

for various values of \(\zeta\). These transient curves were normalized with respect to time as \(\omega_{n}t\). In Fig. 6.3, we plot \(M(\omega)\) and \(\phi(\omega)\) for these same values of \(\zeta\) to help us see what features of the frequency response correspond to the transient-response characteristics. Specifically, Figs. 3.19(b) and 6.3 indicate the effect of damping on system time response and the corresponding effect on the frequency response. They show that the damping of the system can be determined from the transient-response overshoot or from the peak in the magnitude of the frequency response [Fig. 6.3 (a)]. Furthermore, from the frequency response, we see that \(\omega_{n}\) is approximately equal to the bandwidth - the frequency where the magnitude starts to fall off from its low-frequency value. (We will define bandwidth more formally in the next paragraph.) Therefore, the rise time can be estimated from the bandwidth. We also see that the magnitude of peak overshoot is approximately \(1/2\zeta\) for

Figure 6.3: Frequency response of Eq. (6.9); (a) magnitude; (b) phase

\(\zeta < 0.5\), so the peak overshoot in the step response can be estimated from the peak overshoot in the frequency response. Thus, we see that essentially the same information is contained in the frequency-response curve as is found in the transient-response curve.

Bandwidth

A natural specification for system performance in terms of frequency response is the bandwidth, defined to be the maximum frequency

Figure 6.4: Simplified system definition

Figure 6.5: Definitions of bandwidth and resonant peak

at which the output of a system will track an input sinusoid in a satisfactory manner. By convention, for the system shown in Fig. 6.4 with a sinusoidal input \(r\), the bandwidth is the frequency of \(r\) at which the output \(y\) is attenuated to a factor of 0.707 times the input. \(\ ^{2}\) Figure 6.5 depicts the idea graphically for the frequency response of the closed-loop transfer function

\[\frac{Y(s)}{R(s)} \triangleq \mathcal{T}(s) = \frac{KG(s)}{1 + KG(s)} \]

The plot is typical of most closed-loop systems, in that (1) the output follows the input \((|\mathcal{T}| \cong 1)\) at the lower excitation frequencies, and (2) the output ceases to follow the input \((|\mathcal{T}| < 1)\) at the higher excitation frequencies. The maximum value of the frequency-response magnitude is referred to as the resonant peak \(M_{r}\).

Bandwidth is a measure of speed of response, and is therefore similar to time-domain measures such as rise time and peak time or the \(s\)-plane measure of dominant-root(s) natural frequency. In fact, if the \(KG(s)\) in Fig. 6.4 is such that the closed-loop response is given by Fig. 6.3, we can see that the bandwidth will equal the natural frequency of the closed-loop root (that is, \(\omega_{BW} = \omega_{n}\) for a closed-loop damping ratio of \(\zeta = 0.7\)). For other damping ratios, the bandwidth is approximately equal to the natural frequency of the closed-loop roots, with an error typically less than a factor of 2.
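The claim that the bandwidth tracks \(\omega_n\) within about a factor of 2 can be tested numerically. This Python sketch (an illustrative check, not from the text) finds \(\omega_{BW}/\omega_n\) for the closed-loop response of Eq. (6.9) by bisection on the 0.707 level:

```python
import math

def closed_loop_mag(u, zeta):
    """|T(jw)| for T = 1/((s/wn)^2 + 2*zeta*(s/wn) + 1), with u = w/wn."""
    return 1.0 / math.hypot(1 - u * u, 2 * zeta * u)

def bandwidth_ratio(zeta):
    """Find w_BW/wn where |T| drops through 0.707, by bisection."""
    lo, hi = 0.0, 10.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if closed_loop_mag(mid, zeta) >= 0.707:
            lo = mid          # still inside the passband
        else:
            hi = mid
    return 0.5 * (lo + hi)

for zeta in (0.2, 0.5, 0.7071, 1.0):
    print(zeta, bandwidth_ratio(zeta))  # ~1 for zeta = 0.707; within a factor of 2 otherwise
```

Because the set of frequencies where \(|\mathcal{T}| \geq 0.707\) is a single interval from \(\omega = 0\), the bisection converges to the bandwidth even when a resonant peak is present.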

The definition of the bandwidth stated here is meaningful for systems that have a low-pass filter behavior, as is the case for most any physical control system. In other applications, the bandwidth may be defined differently. Also, if the ideal model of the system does not have a high-frequency roll-off (e.g., if it has an equal number of poles and zeros), the bandwidth is infinite; however, this does not occur in nature as nothing responds well at infinite frequencies.

In many cases, the designer's primary concern is the error in the system due to disturbances rather than the ability to track an input. For error analysis, we are more interested in one of the sensitivity functions defined in Section 4.1, \(\mathcal{S}(s)\), rather than \(\mathcal{T}(s)\). For most open-loop systems with high gain at low frequencies, \(\mathcal{S}(s)\) for a disturbance input will have very low values at low frequencies and grows as the frequency of the input or disturbance approaches the bandwidth. For analysis of either \(\mathcal{T}(s)\) or \(\mathcal{S}(s)\), it is typical to plot their response versus the frequency of the input. Either frequency response for control systems design can be evaluated using the computer, or can be quickly sketched for simple systems using the efficient methods described in the following Section 6.1.1. The methods described next are also useful to expedite the design process, as well as to perform sanity checks on the computer output.

6.1.1. Bode Plot Techniques

Display of frequency response is a problem that has been studied for a long time. Before computers, this was accomplished by hand; therefore, it was useful to be able to accomplish this quickly. The most useful technique for hand plotting was developed by H. W. Bode at Bell Laboratories between 1932 and 1942. This technique allows plotting that is quick and yet sufficiently accurate for control systems design. Most control systems designers now have access to computer programs that diminish the need for hand plotting; however, it is still important to develop good intuition so you can quickly identify erroneous computer results, and for this, you need the ability to perform a sanity check and in some cases to determine approximate results by hand.

The idea in Bode's method is to plot magnitude curves using a logarithmic scale and phase curves using a linear scale. This strategy allows us to plot a high-order \(G(j\omega)\) by simply adding the separate terms graphically, as discussed in Appendix WB. This addition is possible because a complex expression with zero and pole factors can be written in polar (or phasor) form as

\[G(j\omega) = \frac{{\overrightarrow{s}}_{1}{\overrightarrow{s}}_{2}}{{\overrightarrow{s}}_{3}{\overrightarrow{s}}_{4}{\overrightarrow{s}}_{5}} = \frac{r_{1}e^{j\theta_{1}}r_{2}e^{j\theta_{2}}}{r_{3}e^{j\theta_{3}}r_{4}e^{j\theta_{4}}r_{5}e^{j\theta_{5}}} = \left( \frac{r_{1}r_{2}}{r_{3}r_{4}r_{5}} \right)e^{j\left( \theta_{1} + \theta_{2} - \theta_{3} - \theta_{4} - \theta_{5} \right)} \]

Composite plot from individual terms
(The overhead arrow indicates a phasor.) Note from Eq. (6.10) that the phases of the individual terms are added directly to obtain the phase of the composite expression, \(G(j\omega)\). Furthermore, because

\[|G(j\omega)| = \frac{r_{1}r_{2}}{r_{3}r_{4}r_{5}}, \]

it follows that

\[\log_{10}|G(j\omega)| = \log_{10}r_{1} + \log_{10}r_{2} - \log_{10}r_{3} - \log_{10}r_{4} - \log_{10}r_{5}. \]

Bode plot

Decibel

Advantages of Bode plots
We see that addition of the logarithms of the individual terms provides the logarithm of the magnitude of the composite expression. The frequency response is typically presented as two curves; the logarithm of magnitude versus \(log\omega\) and the phase versus \(log\omega\). Together, these two curves constitute a Bode plot of the system. Because

\[\log_{10}Me^{j\phi} = \log_{10}M + j\phi\log_{10}e \]

we see the Bode plot shows the real and imaginary parts of the logarithm of \(G(j\omega)\). In communications, it is standard to measure the power gain in decibels (db), or "Power db":\(^{3}\)

\[|G|_{db} = 10\log_{10}\frac{P_{2}}{P_{1}} \]

Here \(P_{1}\) and \(P_{2}\) are the input and output powers. Because power is proportional to the square of the voltage, the power gain is also given by

\[|G|_{db} = 20\log_{10}\frac{V_{2}}{V_{1}} \]

Hence, we can present a Bode plot as the magnitude in decibels versus \(log\omega\), and the phase in degrees versus \(log\omega.\ ^{4}\) In this book, we give Bode plots in the form \(log|G|\) versus \(log\omega\); also, we mark an axis in decibels on the right-hand side of the magnitude plot to give you the choice of working with the representation you prefer. However, for frequency-response plots, we are not actually plotting power, and use of Eq. (6.14) can be somewhat misleading. If the magnitude data are derived in terms of \(log|G|\), it is conventional to plot them on a log scale but identify the scale in terms of \(|G|\) only (without "log"). If the magnitude data are given in decibels, the vertical scale is linear such that each decade of \(|G|\) represents \(20db\).
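A few anchor values make the \(20\log_{10}\) conversion of Eq. (6.14) concrete; this small Python helper (illustrative, not from the text) prints them:

```python
import math

def mag_to_db(g):
    """Convert a magnitude ratio |G| to decibels via Eq. (6.14)."""
    return 20 * math.log10(g)

# Handy anchor points for reading Bode plots:
print(mag_to_db(1.0))     # 0 db
print(mag_to_db(0.707))   # about -3 db (the bandwidth level)
print(mag_to_db(2.0))     # about +6 db
print(mag_to_db(10.0))    # 20 db: each decade of |G| spans 20 db
```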

Advantages of Working with Frequency Response in Terms of Bode Plots

  1. Dynamic compensator design can be based entirely on Bode plots.

  2. Bode plots can be determined experimentally.

  3. Bode plots of systems in series (or tandem) simply add, which is quite convenient.

  4. The use of a log scale permits a much wider range of frequencies to be displayed on a single plot than is possible with linear scales.

It is important for the control systems engineer to understand the Bode plot techniques for several reasons: This knowledge allows the engineer not only to deal with simple problems, but also to perform a sanity check on computer results for more complicated cases. Often approximations can be used to quickly sketch the frequency response and deduce stability, as well as to determine the form of the needed dynamic compensations. Finally, an understanding of the plotting method is useful in interpreting frequency-response data that have been generated experimentally.

Bode form of the transfer function

Classes of terms of transfer functions

In Chapter 5, we wrote the open-loop transfer function in the form

\[KG(s) = K\frac{\left( s - z_{1} \right)\left( s - z_{2} \right)\cdots}{\left( s - p_{1} \right)\left( s - p_{2} \right)\cdots} \]

because it was the most convenient form for determining the degree of stability from the root locus with respect to the gain \(K\). In working with frequency response, we are only interested in evaluating \(G(s)\) along the \(j\omega\) axis, so it is more convenient to replace \(s\) with \(j\omega\) and to write the transfer functions in the Bode form

\[KG(j\omega) = K_{o}(j\omega)^{n}\frac{\left( j\omega\tau_{1} + 1 \right)\left( j\omega\tau_{2} + 1 \right)\cdots}{\left( j\omega\tau_{a} + 1 \right)\left( j\omega\tau_{b} + 1 \right)\cdots} \]

This form also causes the gain \(K_{o}\) to be directly related to the transfer-function magnitude at very low frequencies. In fact, for systems with \(n = 0,K_{o}\) is the gain at \(\omega = 0\) in Eq. (6.16) and is also equal to the DC gain of the system. Although a straightforward calculation will convert a transfer function in the form of Eq. (6.15) to an equivalent transfer function in the form of Eq. (6.16), note \(K\) and \(K_{o}\) will not usually have the same value in the two expressions.
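The conversion from the root-locus form of Eq. (6.15) to the Bode form of Eq. (6.16) can be sketched in code. The following Python function (an illustration; the helper name is ours) handles real zeros and poles, with poles at the origin setting the exponent \(n\):

```python
def bode_gain(K, zeros, poles):
    """
    Convert K * prod(s - z) / prod(s - p) to the Bode form
    Ko * (jw)^n * prod(jw*tau + 1) / prod(jw*tau + 1), returning (Ko, n).
    Poles at s = 0 each contribute -1 to the exponent n; all other
    zeros/poles are assumed real and nonzero.
    """
    n = -sum(1 for p in poles if p == 0)
    Ko = K
    for z in zeros:
        Ko *= -z          # (s - z) = (-z)*(s/(-z) + 1) for z != 0
    for p in poles:
        if p != 0:
            Ko /= -p      # (s - p) = (-p)*(s/(-p) + 1)
    return Ko, n

# Example: KG(s) = 2000(s + 0.5) / (s(s + 10)(s + 50))
Ko, n = bode_gain(2000, zeros=[-0.5], poles=[0, -10, -50])
print(Ko, n)  # 2.0, -1: in Bode form Ko = 2, not the root-locus gain K = 2000
```

This illustrates the closing remark above: \(K = 2000\) in Eq. (6.15) form becomes \(K_o = 2\) in Eq. (6.16) form.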

Transfer functions can also be rewritten according to Eqs. (6.10) and (6.11). As an example, suppose

\[KG(j\omega) = K_{o}\frac{j\omega\tau_{1} + 1}{(j\omega)^{2}\left( j\omega\tau_{a} + 1 \right)}. \]

Then

\[\angle KG(j\omega) = \angle K_{o} + \angle\left( j\omega\tau_{1} + 1 \right) - \angle(j\omega)^{2} - \angle\left( j\omega\tau_{a} + 1 \right) \]

and

\[\begin{matrix} log|KG(j\omega)| = & log\left| K_{o} \right| + log\left| j\omega\tau_{1} + 1 \right| - log\left| (j\omega)^{2} \right| \\ & \ - log\left| j\omega\tau_{a} + 1 \right|. \end{matrix}\]

In decibels, Eq. (6.19) becomes

\[\begin{matrix} |KG(j\omega)|_{db} = & 20log\left| K_{o} \right| + 20log\left| j\omega\tau_{1} + 1 \right| - 20log\left| (j\omega)^{2} \right| \\ & \ - 20log\left| j\omega\tau_{a} + 1 \right|. \end{matrix}\]
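That the individual db contributions in Eq. (6.20) add up to the composite magnitude is easy to confirm numerically. A Python sketch with illustrative values for \(K_o\), \(\tau_1\), and \(\tau_a\):

```python
import math

Ko, tau1, taua = 2.0, 1.0, 0.1   # illustrative parameter values
w = 3.0                          # any test frequency

# Left side of Eq. (6.20): composite magnitude in db
G = Ko * (1j * w * tau1 + 1) / ((1j * w) ** 2 * (1j * w * taua + 1))
lhs = 20 * math.log10(abs(G))

# Right side: the db contributions of the individual terms, added
rhs = (20 * math.log10(Ko)
       + 20 * math.log10(abs(1j * w * tau1 + 1))
       - 20 * math.log10(abs((1j * w) ** 2))
       - 20 * math.log10(abs(1j * w * taua + 1)))

print(lhs, rhs)  # the two agree: term-by-term db values add
```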

All transfer functions for the kinds of systems we have talked about so far are composed of three classes of terms:

  1. \(K_{o}(j\omega)^{n}\).

  2. \((j\omega\tau + 1)^{\pm 1}\).

  3. \(\left\lbrack \left( \frac{j\omega}{\omega_{n}} \right)^{2} + 2\zeta\frac{j\omega}{\omega_{n}} + 1 \right\rbrack^{\pm 1}\).

Figure 6.6: Magnitude of \((j\omega)^{n}\)

Class 1: singularities at the origin

Class 2: first-order term

Break point

First, we will discuss the plotting of each individual term and how the terms affect the composite plot including all the terms; then, we will discuss how to draw the composite curve.

  1. \(K_{o}(j\omega)^{n}\) : Because

\[logK_{o}\left| (j\omega)^{n} \right| = logK_{o} + nlog|j\omega| \]

the magnitude plot of this term is a straight line with a slope \(n \times\) (20 db per decade). Examples for different values of \(n\) are shown in Fig. 6.6. \(K_{o}(j\omega)^{n}\) is the only class of term that affects the slope at the lowest frequencies, because all other terms are constant in that region. The easiest way to draw the curve is to locate \(\omega = 1\) and plot \(logK_{o}\) at that frequency. Then draw the line with slope \(n\) through that point.\(^{5}\) The phase of \((j\omega)^{n}\) is \(\phi = n \times 90^{\circ}\); it is independent of frequency and is thus a horizontal line: \(- 90^{\circ}\) for \(n = - 1, - 180^{\circ}\) for \(n = - 2, + 90^{\circ}\) for \(n = + 1\), and so forth.

  2. \((j\omega\tau + 1)\) : The magnitude of this term approaches one asymptote at very low frequencies and another asymptote at very high frequencies:

(a) For \(\omega\tau \ll 1,j\omega\tau + 1 \cong 1\).

(b) For \(\omega\tau \gg 1,j\omega\tau + 1 \cong j\omega\tau\).

If we call \(\omega = 1/\tau\) the break point, then we see that below the break point the magnitude curve is approximately constant \(( = 1)\), while above the break point the magnitude curve behaves approximately like the class 1 term \(K_{o}(j\omega)\). The example plotted in Fig. 6.7, \(G(s) = 10s + 1\), shows how the two asymptotes cross at the break point and how the actual magnitude curve lies above that point by a factor of 1.4 (or \(+ 3db\)). (If the term were in the denominator, it would be below the break point by a factor of 0.707 or \(- 3db\).) Note this term will have only a small effect on the composite magnitude curve below the break point, because its value is equal to \(1( = 0db)\) in this region. The slope at high frequencies is +1 (or \(+ 20db\) per decade). The phase curve can also be easily drawn by using the following low- and high-frequency asymptotes:

Figure 6.7: Magnitude plot for \(j\omega\tau + 1\); \(\tau = 10\)

(a) For \(\omega\tau \ll 1,\angle 1 = 0^{\circ}\).

(b) For \(\omega\tau \gg 1,\angle j\omega\tau = 90^{\circ}\).

(c) For \(\omega\tau \cong 1,\angle(j\omega\tau + 1) \cong 45^{\circ}\).

For \(\omega\tau \cong 1\), the \(\angle(j\omega\tau + 1)\) curve is tangent to an asymptote going from \(0^{\circ}\) at \(\omega\tau = 0.2\) to \(90^{\circ}\) at \(\omega\tau = 5\), as shown in Fig. 6.8. The figure also illustrates the three asymptotes (dashed lines) used for the phase plot and how the actual curve deviates from the asymptotes by \(11^{\circ}\) at their intersections. Both the composite phase and magnitude curves are unaffected by this class of term at frequencies below the break point by more than a factor of 10 because the term's magnitude is 1 (or \(0db\) ) and its phase is less than \(5^{\circ}\).

Figure 6.8: Phase plot for \(j\omega\tau + 1\); \(\tau = 10\)

Class 3: second-order term

Peak amplitude

Composite curve
3. \(\left\lbrack \left( j\omega/\omega_{n} \right)^{2} + 2\zeta\left( j\omega/\omega_{n} \right) + 1 \right\rbrack^{\pm 1}\) : This term behaves in a manner similar to the class 2 term, with differences in detail: The break point is now \(\omega = \omega_{n}\). The magnitude changes slope by a factor of +2 (or \(+ 40db\) per decade) at the break point (and -2 , or \(- 40db\) per decade, when the term is in the denominator). The phase changes by \(\pm 180^{\circ}\), and the transition through the breakpoint region varies with the damping ratio \(\zeta\). Figure 6.3 shows the magnitude and phase for several different damping ratios when the term is in the denominator. Note the magnitude asymptote for frequencies above the break point has a slope of -2 (or \(- 40db\) per decade), and that the transition through the break-point region has a large dependence on the damping ratio. A rough determination of this transition can be made by noting that

\[|G(j\omega)| = \frac{1}{2\zeta}\ \text{~}\text{at}\text{~}\ \omega = \omega_{n} \]

for this class of second-order term in the denominator. If the term was in the numerator, the magnitude would be the reciprocal of the curve plotted in Fig. 6.3(a).

No such handy rule as Eq. (6.21) exists for sketching in the transition for the phase curve; therefore, we would have to resort to Fig. 6.3(b) for an accurate plot of the phase. However, a very rough idea of the transition can be gained by noting that it is a step function for \(\zeta = 0\), while it obeys the rule for two first-order (class 2) terms when \(\zeta = 1\) with simultaneous break-point frequencies. All intermediate values of \(\zeta\) fall between these two extremes. The phase of a second-order term is always \(\pm 90^{\circ}\) at \(\omega_{n}\).
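The break-point corrections quoted above (the \(+3db\)/\(45^{\circ}\) values for a first-order term, and Eq. (6.21) for a second-order term) can be verified directly; a Python sketch:

```python
import math

# Class 2 term (jw*tau + 1): at the break point w = 1/tau, the true magnitude
# is sqrt(2) (about +3 db above the unit asymptote) and the phase is 45 degrees.
tau = 10.0
w_break = 1.0 / tau
term = 1j * w_break * tau + 1                          # = 1 + 1j
print(abs(term))                                       # sqrt(2) ~ 1.414
print(math.degrees(math.atan2(term.imag, term.real)))  # 45 degrees

# Class 3 term in the denominator: |G(j*wn)| = 1/(2*zeta), per Eq. (6.21).
for zeta in (0.1, 0.2, 0.5):
    u = 1j  # jw/wn evaluated at w = wn
    G = 1.0 / (u ** 2 + 2 * zeta * u + 1)
    print(zeta, abs(G))  # 5.0, 2.5, 1.0
```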

When the system has several poles and several zeros, plotting the frequency response requires that the components be combined into a composite curve. To plot the composite magnitude curve, it is useful to note that the slope of the asymptotes is equal to the sum of the slopes of the individual curves. Therefore, the composite asymptote curve has integer slope changes at each break-point frequency: +1 for a first-order term in the numerator, -1 for a first-order term in the denominator, and \(\pm 2\) for second-order terms. Furthermore, the lowest-frequency portion of the asymptote has a slope determined by the value of \(n\) in the \((j\omega)^{n}\) term and is located by plotting the point \(K_{o}\omega^{n}\) at \(\omega = 1\). Therefore, the complete procedure consists of plotting the lowest-frequency portion of the asymptote, then sequentially changing the asymptote's slope at each break point in order of ascending frequency, and finally drawing the actual curve by using the transition rules discussed earlier for classes 2 and 3.

The composite phase curve is the sum of the individual curves. Addition of the individual phase curves graphically is made possible by locating the curves so the composite phase approaches the individual curve as closely as possible. A quick but crude sketch of the composite phase can be found by starting the phase curve below the lowest
break point and setting it equal to \(n \times 90^{\circ}\). The phase is then stepped at each break point in order of ascending frequency. The amount of the phase step is \(\pm 90^{\circ}\) for a first-order term and \(\pm 180^{\circ}\) for a second-order term. Break points in the numerator indicate a positive step in phase, while break points in the denominator indicate a negative phase step. \(\ ^{6}\) The plotting rules so far have only considered poles and zeros in the left half-plane (LHP). Changes for singularities in the right half-plane (RHP) will be discussed at the end of the section.

Summary of Bode Plot Rules

  1. Manipulate the transfer function into the Bode form given by Eq. (6.16).

  2. Determine the value of \(n\) for the \(K_{o}(j\omega)^{n}\) term (class 1). Plot the low-frequency magnitude asymptote through the point \(K_{o}\) at \(\omega = 1\) with a slope of \(n\) (or \(n \times 20db\) per decade).

  3. Complete the composite magnitude asymptotes: Extend the low-frequency asymptote until the first frequency break point. Then step the slope by \(\pm 1\) or \(\pm 2\), depending on whether the break point is from a first- or second-order term in the numerator or denominator. Continue through all break points in ascending order.

  4. The approximate magnitude curve is increased from the asymptote value by a factor of \(1.4( + 3db)\) at first-order numerator break points, and decreased by a factor of \(0.707( - 3db)\) at first-order denominator break points. At second-order break points, the resonant peak (or valley) occurs according to Fig. 6.3(a), using the relation \(|G(j\omega)| = 1/2\zeta\) at denominator (or \(|G(j\omega)| = 2\zeta\) at numerator) break points.

  5. Plot the low-frequency asymptote of the phase curve, \(\phi = n \times 90^{\circ}\).

  6. As a guide, the approximate phase curve changes by \(\pm 90^{\circ}\) or \(\pm 180^{\circ}\) at each break point in ascending order. For first-order terms in the numerator, the change of phase is \(+ 90^{\circ}\); for those in the denominator the change is \(- 90^{\circ}\). For second-order terms, the change is \(\pm 180^{\circ}\).

  7. Locate the asymptotes for each individual phase curve so their phase change corresponds to the steps in the phase toward or away from the approximate curve indicated by Step 6. Each individual phase curve occurs as indicated by Fig. 6.8 or Fig. 6.3(b).

  8. Graphically add each phase curve. Use grids if an accuracy of about \(\pm 5^{\circ}\) is desired. If less accuracy is acceptable, the composite curve can be done by eye. Keep in mind that the curve will start at the lowest-frequency asymptote and end on the highest-frequency asymptote and will approach the intermediate asymptotes to an extent that is determined by how close the break points are to each other.

\(\ ^{6}\) This approximate method was pointed out to us by our Parisian colleagues.
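Rules 2 and 3 for the magnitude asymptotes can be captured in a few lines of code. This Python sketch (our own helper, limited to first-order break points) reproduces the asymptote levels for the system of Example 6.3:

```python
def asymptote_mag(w, Ko, n, num_breaks, den_breaks):
    """
    Composite magnitude asymptote per Bode-plot rules 2 and 3:
    start from Ko*w^n, then change the slope by +1 (numerator) or
    -1 (denominator) above each first-order break frequency.
    """
    mag = Ko * w ** n
    for wb in num_breaks:     # numerator break: slope +1 above wb
        if w > wb:
            mag *= w / wb
    for wb in den_breaks:     # denominator break: slope -1 above wb
        if w > wb:
            mag *= wb / w
    return mag

# Asymptotes for KG = 2(jw/0.5 + 1)/(jw (jw/10 + 1)(jw/50 + 1))
for w in (0.1, 1.0, 20.0, 100.0):
    print(w, asymptote_mag(w, Ko=2, n=-1, num_breaks=[0.5], den_breaks=[10, 50]))
```

Printing over a frequency grid shows the expected slope pattern: -1 below \(\omega = 0.5\), flat to \(\omega = 10\), -1 to \(\omega = 50\), and -2 beyond.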

EXAMPLE 6.3

Plot the Bode magnitude and phase for the system with the transfer function

\[KG(s) = \frac{2000(s + 0.5)}{s(s + 10)(s + 50)} \]

Solution

  1. We convert the function to the Bode form of Eq. (6.16):

\[KG(j\omega) = \frac{2\lbrack(j\omega/0.5) + 1\rbrack}{j\omega\lbrack(j\omega/10) + 1\rbrack\lbrack(j\omega/50) + 1\rbrack} \]

  2. We note the term in \(j\omega\) is first order and in the denominator, so \(n = - 1\). Therefore, the low-frequency asymptote is defined by the first term:

\[KG(j\omega) = \frac{2}{j\omega} \]

This asymptote is valid for \(\omega < 0.1\), because the lowest break point is at \(\omega = 0.5\). The magnitude plot of this term has the slope of -1 (or \(- 20db\) per decade). We locate the magnitude by passing through the value 2 at \(\omega = 1\) even though the composite curve will not go through this point because of the break point at \(\omega = 0.5\). This is shown in Fig. 6.9(a).

  3. We obtain the remainder of the asymptotes, also shown in Fig. 6.9(a): The first break point is at \(\omega = 0.5\) and is a first-order term in the numerator, which thus calls for a change in slope of +1. We therefore draw a line with 0 slope that intersects the original -1 slope. Then, we draw a -1 slope line that intersects the previous one at \(\omega = 10\). Finally, we draw a -2 slope line that intersects the previous -1 slope at \(\omega = 50\).

  4. The actual curve is approximately tangent to the asymptotes when far away from the break points, a factor of \(1.4( + 3db)\) above the asymptote at the \(\omega = 0.5\) break point, and a factor of \(0.7( - 3db)\) below the asymptote at the \(\omega = 10\) and \(\omega = 50\) break points.

  5. Because the phase of \(2/j\omega\) is \(- 90^{\circ}\), the phase curve in Fig. 6.9(b) starts at \(- 90^{\circ}\) at the lowest frequencies.

  6. The result is shown in Fig. 6.9(c).

  7. The individual phase curves, shown dashed in Fig. 6.9(b), have the correct phase change for each term and are aligned vertically so their phase change corresponds to the steps in the phase from the approximate curve in Fig. 6.9(c). Note the composite curve approaches each individual term.

  8. The graphical addition of each dashed curve results in the solid composite curve in Fig. 6.9(b). As can be seen from the figure, the vertical placement of each individual phase curve makes the required graphical addition particularly easy because the composite curve approaches each individual phase curve in turn.
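We can confirm the factor-of-1.4 and factor-of-0.7 corrections in step 4 by evaluating the exact magnitude at the break points; a Python check:

```python
def mag(w):
    """|KG(jw)| for KG(s) = 2000(s + 0.5)/(s(s + 10)(s + 50))."""
    s = 1j * w
    return abs(2000 * (s + 0.5) / (s * (s + 10) * (s + 50)))

# Asymptote values at the break points, from the sketch:
# w = 0.5: 2/0.5 = 4; the true curve sits a factor of ~1.4 above it.
print(mag(0.5) / 4.0)        # ~1.41

# w = 10: flat asymptote value 4; the true curve is ~0.7 of it.
print(mag(10.0) / 4.0)       # ~0.69
```

The ratios are not exactly \(\sqrt{2}\) and 0.707 because the other break points also contribute slightly at each frequency.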

Figure 6.9: Composite plots: (a) magnitude; (b) phase; (c) approximate phase

EXAMPLE 6.4

Bode Plot with Complex Poles

As a second example, draw the frequency response for the system

\[KG(s) = \frac{10}{s\left\lbrack s^{2} + 0.4s + 4 \right\rbrack} \]

Figure 6.10: Bode plot for a transfer function with complex poles: (a) magnitude; (b) phase

Solution. A system like this is more difficult to plot than the one in the previous example because the transition between asymptotes is dependent on the damping ratio; however, the same basic ideas illustrated in Example 6.3 apply.

This system contains a second-order term in the denominator. Proceeding through the steps, we convert Eq. (6.22) to the Bode form of Eq. (6.16):

\[KG(s) = \frac{10}{4}\frac{1}{s\left( s^{2}/4 + 2(0.1)s/2 + 1 \right)} \]

Starting with the low-frequency asymptote, we have \(n = - 1\) and \(|G(j\omega)| \cong 2.5/\omega\). The magnitude plot of this term has a slope of -1 ( \(- 20db\) per decade) and passes through the value of 2.5 at \(\omega = 1\), as shown in Fig. 6.10(a). For the second-order pole, note \(\omega_{n} = 2\) and \(\zeta = 0.1\). At the break-point frequency of the poles, \(\omega = 2\), the slope shifts to -3 ( \(- 60db\) per decade). At the pole break point, the magnitude ratio above the asymptote is \(1/2\zeta = 1/0.2 = 5\). The phase curve for this case starts at \(\phi = - 90^{\circ}\), corresponding to the \(1/s\) term, falls to \(\phi = - 180^{\circ}\) at \(\omega = 2\) due to the pole as shown in Fig. 6.10(b), then approaches \(\phi = - 270^{\circ}\) for higher frequencies. Because the damping is small, the stepwise approximation is a very good one. The true composite phase curve is shown in Fig. 6.10(b).

EXAMPLE 6.5

Bode Plot for Complex Poles and Zeros: Satellite with Flexible Appendages

As a third example, draw the Bode plots for a system with second-order terms. The transfer function represents a mechanical system with two equal masses coupled with a lightly damped spring. The applied force and position measurement are collocated on the same mass. For the transfer function, the time scale has been chosen so the resonant frequency of the complex zeros is equal to 1 . The transfer function is

\[KG(s) = \frac{0.01\left( s^{2} + 0.01s + 1 \right)}{s^{2}\left\lbrack \left( s^{2}/4 \right) + 0.02(s/2) + 1 \right\rbrack} \]

Solution. Proceeding through the steps, we start with the low-frequency asymptote, \(0.01/\omega^{2}\). It has a slope of \(- 2( - 40db\) per decade \()\) and passes through magnitude \(= 0.01\) at \(\omega = 1\), as shown in Fig. 6.11(a). At the break-point frequency of the zero, \(\omega = 1\), the slope shifts to zero until the break point of the pole, which is located at \(\omega = 2\), when the slope returns to a slope of -2 . To interpolate the true curve, we plot the point at the zero break point, \(\omega = 1\), with a magnitude ratio below the asymptote of \(2\zeta = 0.01\). At the pole break point, the magnitude ratio above the asymptote is \(1/2\zeta = 1/0.02 = 50\). The magnitude curve is a "doublet" of a negative pulse followed by a positive pulse. Figure 6.11(b) shows that the phase curve for this system starts at \(- 180^{\circ}\) (corresponding to the \(1/s^{2}\) term), jumps \(180^{\circ}\) to \(\phi = 0\) at \(\omega = 1\), due to the zeros, then falls \(180^{\circ}\) back to \(\phi = - 180^{\circ}\) at \(\omega = 2\), due to the pole. With such small damping ratios the stepwise approximation is quite good. (We haven't drawn this on Fig. 6.11(b), because it would not be easily distinguishable from the true phase curve.) Thus, the true composite phase curve is a nearly square pulse between \(\omega = 1\) and \(\omega = 2\).

In actual designs, Bode plots are made with a computer. However, acquiring the ability to determine how Bode plots should behave is a useful skill, because it gives the designer insight into how changes in the compensation parameters will affect the frequency response. This allows the designer to iterate to the best designs more quickly.

Figure 6.11: Bode plot for a transfer function with complex poles and zeros: (a) magnitude; (b) phase

EXAMPLE 6.6

Computer-Aided Bode Plot for Complex Poles and Zeros

Repeat Example 6.5 using Matlab.

Solution. To obtain Bode plots using Matlab, we call the function bode, with a script in the same style as the one in Example 6.2:

numG = 0.01*[1 0.01 1]; % numerator of the transfer function in Example 6.5

denG = [0.25 0.01 1 0 0]; % denominator: s^2(s^2/4 + 0.01s + 1)

sysG = tf(numG,denG);

w = logspace(-1,1); % determines frequencies over range of interest

[mag, phase] = bode(sysG,w);

loglog(w,squeeze(mag)), grid;

semilogx(w,squeeze(phase)), grid;

These commands will result in a Bode plot which matches that in Fig. 6.11 very closely. To obtain the magnitude plot in decibels, the last three lines can be replaced with

bode(sysG).

Nonminimum-Phase Systems

For a given magnitude plot, a system with a zero in the RHP undergoes a greater net change in phase, when evaluated for frequency inputs between zero and infinity, than it would if all its poles and zeros were in the LHP. Such a system is called nonminimum phase. As can be deduced from the construction in Fig. WA.3 in online Appendix WA,\(^{7}\) if the zero is in the RHP, then the phase decreases at the zero break point instead of exhibiting the usual phase increase that occurs for an LHP zero. Consider the transfer functions

\[\begin{matrix} & G_{1}(s) = 10\frac{s + 1}{s + 10}, \\ & G_{2}(s) = 10\frac{s - 1}{s + 10}. \end{matrix}\]

Both transfer functions have the same magnitude for all frequencies; that is,

\[\left| G_{1}(j\omega) \right| = \left| G_{2}(j\omega) \right|, \]

as shown in Fig. 6.12(a). But the phases of the two transfer functions are drastically different [see Fig. 6.12(b)]. A "minimum-phase" system (i.e., all zeros in the LHP) with a given magnitude curve will produce the smallest net change in the associated phase, as shown in \(G_{1}\), compared with what the nonminimum-phase system will produce, as shown by the phase of \(G_{2}\). The discrepancy between \(G_{1}\) and \(G_{2}\) with regard to the phase change would be greater if two or more zeros of the plant were in the RHP.
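A short numerical check (plain Python; \(G_{1}\) and \(G_{2}\) are the two transfer functions above) confirms the equal magnitudes and the very different phases:

```python
import cmath
import math

def G1(s: complex) -> complex:
    return 10 * (s + 1) / (s + 10)   # minimum phase: zero in the LHP

def G2(s: complex) -> complex:
    return 10 * (s - 1) / (s + 10)   # nonminimum phase: zero in the RHP

# |j*w + 1| equals |j*w - 1| for every real w, so the magnitudes match:
for w in (0.1, 1.0, 10.0, 100.0):
    assert abs(abs(G1(1j * w)) - abs(G2(1j * w))) < 1e-12

# The phases, however, differ by the contribution of the RHP zero:
w = 1.0
phase1 = math.degrees(cmath.phase(G1(1j * w)))
phase2 = math.degrees(cmath.phase(G2(1j * w)))
print(phase1, phase2)   # the nonminimum-phase G2 shows the much larger phase
```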

205.0.1. Steady-State Errors

We saw in Section 4.2 that the steady-state error of a feedback system decreases as the gain of the open-loop transfer function increases. In plotting a composite magnitude curve, we saw in Section 6.1.1 that the open-loop transfer function, at very low frequencies, is approximated by

\[KG(j\omega) \cong K_{o}(j\omega)^{n} \]

Therefore, we can conclude that the larger the value of the magnitude on the low-frequency asymptote, the lower the steady-state errors will be for the closed-loop system. This relationship is very useful in the design of compensation: Often we want to evaluate several alternate ways to improve stability, and to do so we want to be able to see quickly how changes in the compensation will affect the steady-state errors.

Position-error constant

For a system of the form given by Eq. (6.16) - that is, where \(n = 0\) in Eq. (6.23) (a Type 0 system) - the low-frequency asymptote is a constant, and the gain \(K_{o}\) of the open-loop system is equal to the position-error constant \(K_{p}\). For a unity feedback system with a

Figure 6.12

Bode plot of minimum- and nonminimum-phase systems: (a) magnitude; (b) phase

unit-step input, the Final Value Theorem (see Section 3.1.6) was used in Section 4.2.1 to show that the steady-state error is given by

\[e_{ss} = \frac{1}{1 + K_{p}} \]

For a unity-feedback system in which \(n = - 1\) in Eq. (6.23), defined to be a Type 1 system in Section 4.2.1, the low-frequency asymptote has a slope of -1 . The magnitude of the low-frequency asymptote is related to the gain according to Eq. (6.23); therefore, we can again read the gain, \(K_{o}/\omega\), directly from the Bode magnitude plot. Equation (4.37) tells us that the velocity-error constant

\[K_{v} = K_{o}, \]

where, for a unity-feedback system with a unit-ramp input, the steadystate error is

\[e_{ss} = \frac{1}{K_{v}} \]

The easiest way of determining the value of \(K_{v}\) in a Type 1 system is to read the magnitude of the low-frequency asymptote at \(\omega = 1\) rad/sec, because this asymptote is \(A(\omega) = K_{v}/\omega\). In some cases, the lowest-frequency break point will be below \(\omega = 1\) rad/sec; therefore, the asymptote needs to be extended to \(\omega = 1\) rad/sec in order to read \(K_{v}\) directly. Alternatively, we could read the magnitude at any frequency on the low-frequency asymptote and compute \(K_{v} = \omega A(\omega)\).

Figure 6.13

Determination of \(K_{v}\) from the Bode plot for the system \(KG(s) = \frac{10}{s(s + 1)}\)

206. Computation of \(K_{v}\)

As an example of the determination of steady-state errors, a Bode magnitude plot of an open-loop system is shown in Fig. 6.13. Assuming there is unity feedback as in Fig. 6.4, find the velocity-error constant, \(K_{v}\).

Solution. Because the slope at the low frequencies is \(- 1\), we know the system is Type 1. The extension of the low-frequency asymptote crosses \(\omega = 1\) rad/sec at a magnitude of 10. Therefore, \(K_{v} = 10\), and the steady-state error to a unit ramp for a unity-feedback system would be 0.1. Alternatively, at \(\omega = 0.01\) we have \(|A(\omega)| = 1000\); therefore, from Eq. (6.23) we have

\[K_{o} = K_{v} \cong \omega|A(\omega)| = 0.01(1000) = 10 \]
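The same computation is easy to script. A minimal Python sketch (assuming the open-loop system \(KG(s) = 10/(s(s + 1))\) of Fig. 6.13):

```python
def KG(s: complex) -> complex:
    return 10 / (s * (s + 1))   # Type 1 open-loop system from Fig. 6.13

# On the low-frequency asymptote, A(omega) ~ Kv/omega, so
# Kv ~ omega * |KG(j*omega)| for omega well below the first break point:
w = 0.01
Kv = w * abs(KG(1j * w))

ess = 1 / Kv    # steady-state error to a unit ramp
print(Kv, ess)  # approximately 10 and 0.1
```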

206.1. Neutral Stability

In the early days of electronic communications, most instruments were judged in terms of their frequency response. It is therefore natural that when the feedback amplifier was introduced, techniques to determine stability in the presence of feedback were based on this response.

Suppose the closed-loop transfer function of a system is known. We can determine the stability of a system by simply inspecting the denominator in factored form (because the factors give the system roots directly) to observe whether the real parts are positive or negative. However, the closed-loop transfer function is usually not known; in fact, the
whole purpose behind understanding the root-locus technique is to be able to find the factors of the denominator in the closed-loop transfer function, given only the open-loop transfer function. Another way to determine closed-loop stability is to evaluate the frequency response of the open-loop transfer function \(KG(j\omega)\), then perform a test on that response. Note that this method also does not require factoring the denominator of the closed-loop transfer function. In this section, we will explain the principles of this method.

Suppose we have a system defined by Fig. 6.14(a) and whose root locus behaves as shown in Fig. 6.14(b); that is, instability results if \(K\) is larger than 2. The neutrally stable points lie on the imaginary axis, that is, where \(K = 2\) and \(s = \pm j1.0\). Furthermore, we saw in Section 5.1 that all points on the locus have the property that

\[|KG(s)| = 1\ \text{~}\text{and}\text{~}\ \angle G(s) = 180^{\circ}\text{.}\text{~} \]

At the point of neutral stability, we see that these root-locus conditions hold for \(s = j\omega\), so

\[|KG(j\omega)| = 1\ \text{~}\text{and}\text{~}\ \angle G(j\omega) = 180^{\circ}\text{.}\text{~} \]

Thus, a Bode plot of a system that is neutrally stable (i.e., with \(K\) defined such that a closed-loop root falls on the imaginary axis) will satisfy the conditions of Eq. (6.24). Figure 6.15 shows the frequency response for the system whose root locus is plotted in Fig. 6.14(b) for various values of \(K\). The magnitude response corresponding to \(K = 2\) passes through 1 at the same frequency \((\omega = 1rad/sec)\) at which the phase passes through \(180^{\circ}\), as predicted by Eq. (6.24).
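This neutral-stability condition is easy to verify numerically. A small Python check (the plant is the one from Fig. 6.14):

```python
def G(s: complex) -> complex:
    return 1 / (s * (s + 1) ** 2)   # open-loop system of Fig. 6.14

val = G(1j * 1.0)   # evaluate at the neutral-stability frequency, w = 1
print(val)          # lands on the negative real axis at -0.5

# With K = 2, |K*G(j1)| = 1 and the angle is 180 degrees: Eq. (6.24) holds.
K = 2.0
assert abs(abs(K * val) - 1.0) < 1e-12
```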

Having determined the point of neutral stability, we turn to a key question: Does increasing the gain increase or decrease the system's stability? We can see from the root locus in Fig. 6.14(b) that any value of \(K\) less than the value at the neutrally stable point will result in a stable system. At the frequency \(\omega\) where the phase \(\angle G(j\omega) = - 180^{\circ}\) \((\omega = 1\ rad/sec)\), the magnitude \(|KG(j\omega)| < 1.0\) for stable values of \(K\) and \(> 1\) for unstable values of \(K\). Therefore, we have the following trial stability condition, based on the character of the open-loop frequency response:

\[|KG(j\omega)| < 1\text{~}\text{at}\text{~}\angle G(j\omega) = - 180^{\circ}. \]

Figure 6.14

Stability example: (a) system definition; (b) root locus

Figure 6.15

Frequency-response magnitude and phase for the system in Fig. 6.14

This stability criterion holds for all systems for which increasing gain leads to instability and \(|KG(j\omega)|\) crosses the magnitude \(( = 1)\) once, the most common situation. However, there are systems for which an increasing gain can lead from instability to stability; in this case, the stability condition is

\[|KG(j\omega)| > 1\text{~}\text{at}\text{~}\angle G(j\omega) = - 180^{\circ}. \]

There are also cases when \(|KG(j\omega)|\) crosses magnitude \(( = 1)\) more than once. One way to resolve the ambiguity that is usually sufficient is to perform a rough sketch of the root locus. Another more rigorous way to resolve the ambiguity is to use the Nyquist stability criterion, the subject of the next section. However, because the Nyquist criterion is fairly complex, it is important while studying it to bear in mind the theme of this section - namely, that for most systems a simple relationship exists between closed-loop stability and the open-loop frequency response.

207.1. The Nyquist Stability Criterion

For most systems, as we saw in the previous section, an increasing gain eventually causes instability. In the very early days of feedback control design, this relationship between gain and stability margins was assumed to be universal. However, designers found occasionally that in the laboratory the relationship reversed itself; that is, the amplifier would become unstable when the gain was decreased. The confusion caused by these conflicting observations motivated Harry Nyquist of the Bell Telephone Laboratories to study the problem in 1932. His study explained the occasional reversals, and resulted in a more sophisticated analysis without loopholes. Not surprisingly, his test has come to be called the Nyquist stability criterion. It is based on a result from complex variable theory known as the argument principle, \(\ ^{8}\) as we briefly explain in this section. More detail is contained in online Appendix WD.

The Nyquist stability criterion relates the open-loop frequency response to the number of closed-loop poles of the system in the RHP. Study of the Nyquist criterion will allow you to determine stability from the frequency response of a complex system, perhaps with one or more resonances, where the magnitude curve crosses 1 several times and/or the phase crosses \(180^{\circ}\) several times. It is also very useful in dealing with open-loop unstable systems, nonminimum-phase systems, and systems with pure delays (transportation lags).

207.1.1. The Argument Principle

Consider the transfer function \(H_{1}(s)\) whose poles and zeros are indicated in the s-plane in Fig. 6.16(a). We wish to evaluate \(H_{1}\) for values of \(s\) on the clockwise contour \(C_{1}\). (Hence this is called a contour evaluation.) We choose the test point \(s_{o}\) for evaluation. The resulting complex quantity has the form \(H_{1}\left( s_{o} \right) = \overrightarrow{v} = |\overrightarrow{v}|e^{j\alpha}\). The value of the argument of \(H_{1}\left( s_{o} \right)\) is

\[\alpha = \theta_{1} + \theta_{2} - \left( \phi_{1} + \phi_{2} \right) \]

As \(s\) traverses \(C_{1}\) in the clockwise direction starting at \(s_{o}\), the angle \(\alpha\) of \(H_{1}(s)\) in Fig. 6.16(b) will change (decrease or increase), but it will not undergo a net change of \(360^{\circ}\) as long as there are no poles or zeros within \(C_{1}\). This is because none of the angles that make up \(\alpha\) go through a net revolution. The angles \(\theta_{1},\theta_{2},\phi_{1}\), and \(\phi_{2}\) increase or decrease as \(s\) traverses around \(C_{1}\), but they return to their original values as \(s\) returns to \(s_{o}\) without rotating through \(360^{\circ}\). This means that the plot of \(H_{1}(s)\) [see Fig. 6.16(b)] will not encircle the origin. This conclusion follows from the fact that \(\alpha\) is the sum of the angles indicated in Fig. 6.16(a), so the only way that \(\alpha\) can be changed by \(360^{\circ}\) after \(s\) executes one full traverse of \(C_{1}\) is for \(C_{1}\) to contain a pole or zero.

Now consider the function \(H_{2}(s)\), whose pole-zero pattern is shown in Fig. 6.16(c). Note it has a singularity (pole) within \(C_{1}\). Again, we start

at the test point \(s_{o}\). As \(s\) traverses in the clockwise direction around \(C_{1}\), the contributions from the angles \(\theta_{1},\theta_{2}\), and \(\phi_{1}\) change, but they return to their original values as soon as \(s\) returns to \(s_{o}\). In contrast, \(\phi_{2}\), the angle from the pole within \(C_{1}\), undergoes a net change of \(- 360^{\circ}\) after one full traverse of \(C_{1}\). Therefore, the argument of \(H_{2}(s)\) undergoes the same change, causing \(H_{2}\) to encircle the origin in the counterclockwise direction, as shown in Fig. 6.16(d). The behavior would be similar if the contour \(C_{1}\) had enclosed a zero instead of a pole. The mapping of \(C_{1}\) would again enclose the origin once in the \(H_{2}(s)\)-plane, except it would do so in the clockwise direction.

Figure 6.16

Contour evaluations:

(a) \(s\)-plane plot of poles and zeros of \(H_{1}(s)\) and the contour \(C_{1}\);

(b) \(H_{1}(s)\) for \(s\) on \(C_{1}\);

(c) \(s\)-plane plot of poles and zeros of \(H_{2}(s)\) and the contour \(C_{1}\);

(d) \(H_{2}(s)\) for \(s\) on \(C_{1}\)

Thus we have the essence of the argument principle:

A contour map of a complex function will encircle the origin \(Z - P\) times, where \(Z\) is the number of zeros and \(P\) is the number of poles of the function inside the contour.

For example, if the number of poles and zeros within \(C_{1}\) is the same, the net angles cancel and there will be no net encirclement of the origin.
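The argument principle can be demonstrated numerically by marching a test function around a contour and accumulating its phase. In the sketch below (plain Python), the contour is taken to be the unit circle and \(H_{1}\), \(H_{2}\), \(H_{3}\) are illustrative functions of our choosing, not from the text:

```python
import cmath
import math

def encirclements(f, n=20000):
    """Net clockwise encirclements of the origin by f(s) as s traverses
    the unit circle once in the clockwise direction."""
    total = 0.0
    prev = cmath.phase(f(cmath.exp(-1j * 0.0)))
    for k in range(1, n + 1):
        s = cmath.exp(-1j * 2 * math.pi * k / n)   # clockwise traversal
        cur = cmath.phase(f(s))
        d = cur - prev
        # unwrap the jump across the branch cut at +/- pi
        if d > math.pi:
            d -= 2 * math.pi
        elif d < -math.pi:
            d += 2 * math.pi
        total += d
        prev = cur
    return round(-total / (2 * math.pi))   # clockwise counted positive

H1 = lambda s: (s - 0.5) / (s + 2)    # one zero inside, pole outside: Z - P = 1
H2 = lambda s: (s + 2) / (s - 0.5)    # one pole inside, zero outside: Z - P = -1
H3 = lambda s: (s - 0.5) / (s - 0.3)  # zero and pole both inside:     Z - P = 0

print(encirclements(H1), encirclements(H2), encirclements(H3))
```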

208.0.1. Application of The Argument Principle to Control Design

To apply the principle to control design, we let the \(C_{1}\) contour in the \(s\)-plane encircle the entire RHP, the region in the \(s\)-plane where a pole would cause an unstable system (see Fig. 6.17). The resulting evaluation of \(H(s)\) will encircle the origin only if \(H(s)\) has an RHP pole or zero.

Figure 6.17

An \(s\)-plane plot of a contour \(C_{1}\) that encircles the entire RHP

Figure 6.18

Block diagram for \(Y(s)/R(s) = KG(s)/\lbrack 1 + KG(s)\rbrack\)

Nyquist plot; polar plot

As stated earlier, what makes all this contour behavior useful is that a contour evaluation of an open-loop \(KG(s)\) can be used to determine stability of the closed-loop system. Specifically, for the system in Fig. 6.18, the closed-loop transfer function is

\[\frac{Y(s)}{R(s)} = \mathcal{T}(s) = \frac{KG(s)}{1 + KG(s)} \]

Therefore, the closed-loop roots are the solutions of

\[1 + KG(s) = 0 \]

and we apply the principle of the argument to the function \(1 + KG(s)\). If the evaluation contour of this function of \(s\) enclosing the entire RHP contains a zero or pole of \(1 + KG(s)\), then the evaluated contour of \(1 + KG(s)\) will encircle the origin. Notice \(1 + KG(s)\) is simply \(KG(s)\) shifted to the right 1 unit, as shown in Fig. 6.19. Therefore, if the plot of \(1 + KG(s)\) encircles the origin, the plot of \(KG(s)\) will encircle -1 on the real axis. Therefore, we can plot the contour evaluation of the open-loop \(KG(s)\), examine its encirclements of -1 , and draw conclusions about the origin encirclements of the closed-loop function \(1 + KG(s)\). Presentation of the evaluation of \(KG(s)\) in this manner is often referred to as a Nyquist plot, or polar plot, because we plot the magnitude of \(KG(s)\) versus the angle of \(KG(s)\).

To determine whether an encirclement is due to a pole or zero, we write \(1 + KG(s)\) in terms of poles and zeros of \(KG(s)\) :

\[1 + KG(s) = 1 + K\frac{b(s)}{a(s)} = \frac{a(s) + Kb(s)}{a(s)}. \]

Figure 6.19

Evaluations of \(KG(s)\) and \(1 + KG(s)\) : Nyquist plots

Equation (6.27) shows the poles of \(1 + KG(s)\) are also the poles of \(G(s)\). Because it is safe to assume the poles of \(G(s)\) [or factors of \(a(s)\) ] are known, the (rare) existence of any of these poles in the RHP can be accounted for. Assuming for now there are no poles of \(G(s)\) in the RHP, an encirclement of -1 by \(KG(s)\) indicates a zero of \(1 + KG(s)\) in the RHP, and thus an unstable root of the closed-loop system.

We can generalize this basic idea by noting that a clockwise contour \(C_{1}\) enclosing a zero of \(1 + KG(s)\)-that is, a closed-loop system root-will result in \(KG(s)\) encircling the -1 point in a clockwise direction. Likewise, if \(C_{1}\) encloses a pole of \(1 + KG(s)\)-that is, if there is an unstable open-loop pole-there will be a counterclockwise \(KG(s)\) encirclement of -1 . Furthermore, if two poles or two zeros are in the RHP, \(KG(s)\) will encircle -1 twice, and so on. The net number of clockwise encirclements, \(N\), equals the number of zeros (closed-loop system roots) in the RHP, \(Z\), minus the number of open-loop poles in the RHP, \(P\) :

\[N = Z - P. \]

This is the key concept of the Nyquist stability criterion.

A simplification in the plotting of \(KG(s)\) results from the fact that any \(KG(s)\) that represents a physical system will have zero response at infinite frequency (i.e., has more poles than zeros). This means that the big arc of \(C_{1}\) corresponding to \(s\) at infinity (see Fig. 6.17) results in \(KG(s)\) being a point of infinitesimally small value near the origin for that portion of \(C_{1}\). Therefore, we accomplish a complete evaluation of a physical system \(KG(s)\) by letting \(s\) traverse the imaginary axis from \(- j\infty\) to \(+ j\infty\) (actually, from \(- j\omega_{h}\) to \(+ j\omega_{h}\), where \(\omega_{h}\) is large enough that \(|KG(j\omega)|\) is much less than 1 for all \(\left. \ \omega > \omega_{h} \right)\). The evaluation of \(KG(s)\) from \(s = 0\) to \(s = j\infty\) has already been discussed in Section 6.1 under the context of finding the frequency response of \(KG(s)\). Because \(G( - j\omega)\) is the complex conjugate of \(G(j\omega)\), we can easily obtain the entire plot of \(KG(s)\) by reflecting the \(0 \leq s \leq + j\infty\) portion about the real axis, to get the \(- j\infty \leq s < 0\) portion. Hence, we see that
closed-loop stability can be determined in all cases by examination of the frequency response of the open-loop transfer function on a polar plot. In some applications, models of physical systems are simplified so as to eliminate some high-frequency dynamics. The resulting reducedorder transfer function might have an equal number of poles and zeros. In that case, the big arc of \(C_{1}\) at infinity needs to be considered.

In practice, many systems behave like those discussed in Section 6.2, so you need not carry out a complete evaluation of \(KG(s)\) with subsequent inspection of the -1 encirclements; a simple look at the frequency response may suffice to determine stability. However, in the case of a complex system for which the simplistic rules given in Section 6.2 become ambiguous, you will want to perform the complete analysis, summarized as follows:

209. Procedure for Determining Nyquist Stability

  1. Plot \(KG(s)\) for \(- j\infty \leq s \leq + j\infty\). Do this by first evaluating \(KG(j\omega)\) for \(\omega = 0\) to \(\omega_{h}\), where \(\omega_{h}\) is so large that the magnitude of \(KG(j\omega)\) is negligibly small for \(\omega > \omega_{h}\), then reflecting the image about the real axis and adding it to the preceding image. The magnitude of \(KG(j\omega)\) will be small at high frequencies for any physical system. The Nyquist plot will always be symmetric with respect to the real axis. The plot is normally created by the NYQUIST Matlab function.

  2. Evaluate the number of clockwise encirclements of -1 , and call that number \(N\). Do this by drawing a straight line in any direction from -1 to \(\infty\). Then count the net number of left-to-right crossings of the straight line by \(KG(s)\). If encirclements are in the counterclockwise direction, \(N\) is negative.

  3. Determine the number of unstable (RHP) poles of \(G(s)\), and call that number \(P\).

  4. Calculate the number of unstable closed-loop roots \(Z\) :

\[Z = N + P \]

For stability, we wish to have \(Z = 0\); that is, no characteristic equation roots in the RHP.
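The procedure can be scripted directly from sampled frequency-response data. A sketch in plain Python (the helper below is our illustration, not from the text; it approximates step 1 by a finite sweep and assumes the plant has no poles on the imaginary axis, so no detour arcs are needed):

```python
import cmath
import math

def count_N(G, K, wmax=1e3, n=200000):
    """Approximate net clockwise encirclements N of -1 by K*G(j*w),
    sweeping w from -wmax to +wmax (the arc at infinity maps near 0)."""
    total = 0.0
    prev = None
    for k in range(n + 1):
        w = -wmax + 2 * wmax * k / n
        v = K * G(1j * w) + 1.0          # vector from -1 to the Nyquist plot
        cur = cmath.phase(v)
        if prev is not None:
            d = cur - prev
            if d > math.pi:              # unwrap across the branch cut
                d -= 2 * math.pi
            elif d < -math.pi:
                d += 2 * math.pi
            total += d
        prev = cur
    return round(-total / (2 * math.pi))  # clockwise counted positive

# Illustration: G(s) = 1/(s+1)^2 with K = 10; no RHP open-loop poles, so P = 0.
G = lambda s: 1 / (s + 1) ** 2
N = count_N(G, K=10.0)
P = 0
Z = N + P
print(N, Z)    # no encirclements, no unstable closed-loop roots
```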

Let us now examine a rigorous application of the procedure for determining stability using Nyquist plots for some examples.

Nyquist Plot for a Second-Order System

Determine the stability properties of the system defined in Fig. 6.20.

Solution. The root locus of the system in Fig. 6.20 is shown in Fig. 6.21. It shows the system is stable for all values of \(K\). The magnitude of the frequency response of \(KG(s)\) is plotted in Fig. 6.22(a) for \(K = 1\), and the


Figure 6.20

Control system for

Example 6.8

Figure 6.21

Root locus of

\(G(s) = \frac{1}{(s + 1)^{2}}\) with respect to \(K\)

Figure 6.22

Open-loop Bode plot for \(G(s) = \frac{1}{(s + 1)^{2}}\): (a) magnitude; (b) phase

phase is plotted in Fig. 6.22(b); this is the typical Bode method of presenting frequency response and represents the evaluation of \(G(s)\) over the interesting range of frequencies. The same information is replotted in Fig. 6.23 in the Nyquist (polar) plot form. Note how the points \(A\), \(B\), \(C\), \(D\), and \(E\) are mapped from the Bode plot to the Nyquist plot in Fig. 6.23. The arc from \(G(s) = + 1(\omega = 0)\) to \(G(s) = 0(\omega = \infty)\) that lies below the real axis is derived from Fig. 6.22. The portion of the \(C_{1}\) arc at infinity from Fig. 6.17 transforms into \(G(s) = 0\) in Fig. 6.23; therefore, a continuous evaluation of \(G(s)\) with \(s\) traversing \(C_{1}\) is completed by simply reflecting the lower arc about the real axis. This creates the portion of the contour above the real axis and completes the Nyquist (polar) plot. Because the plot does not encircle \(- 1\), \(N = 0\). Also, there are no poles of \(G(s)\) in the RHP, so \(P = 0\). From Eq. (6.28), we conclude that \(Z = 0\), which indicates there are no unstable roots of the closed-loop system for \(K = 1\). Furthermore, different values of \(K\) would simply change the magnitude of the polar plot, but no positive value of \(K\) would cause the plot to encircle \(- 1\), because the polar plot reaches the negative real axis only where \(KG(s) = 0\). Thus the Nyquist stability criterion confirms what the root locus indicated: the closed-loop system is stable for all \(K > 0\).

Figure 6.23

Nyquist plot \(\ ^{9}\) of the evaluation of \(KG(s)\) for \(s = C_{1}\) and \(K = 1\)

The Matlab statements that will produce this Nyquist plot are

s = tf('s');

sysG = 1/(s + 1)^2;

nyquist(sysG);

Often the control systems engineer is more interested in determining a range of gains \(K\) for which the system is stable than in testing for stability at a specific value of \(K\). To accommodate this requirement, but to avoid drawing multiple Nyquist plots for various values of the gain, the test can be modified slightly. To do so, we scale \(KG(s)\) by \(K\) and examine \(G(s)\) to determine stability for a range of gains \(K\). This is possible because an encirclement of -1 by \(KG(s)\) is equivalent to

an encirclement of \(- 1/K\) by \(G(s)\). Therefore, instead of having to deal with \(KG(s)\), we need only consider \(G(s)\) and count the number of encirclements of the \(- 1/K\) point.

Applying this idea to Example 6.8, we see that the Nyquist plot cannot encircle the \(- 1/K\) point. For positive \(K\), the \(- 1/K\) point will move along the negative real axis, so it will never be encircled by \(G(s)\) for any value of \(K > 0\).

(There are also values of \(K < 0\) for which the Nyquist plot shows the system to be stable; specifically, \(- 1 < K < 0\). This result may be verified by drawing the \(0^{\circ}\) locus.)

210. EXAMPLE 6.9

Figure 6.24

Control system for Example 6.9

Nyquist Plot for a Third-Order System

As a second example, consider the system \(G(s) = 1/s(s + 1)^{2}\) for which the closed-loop system is defined in Fig. 6.24. Determine its stability properties using the Nyquist criterion.

Solution. This is the same system discussed in Section 6.2. The root locus in Fig. 6.14(b) shows this system is stable for small values of \(K\), but unstable for large values of \(K\). The magnitude and phase of \(G(s)\) in Fig. 6.25 are transformed into the Nyquist plot shown in Fig. 6.26. Note how the points \(A,B,C,D\), and \(E\) on the Bode plot of Fig. 6.25 map into those on the Nyquist plot of Fig. 6.26. Also note the large arc at infinity that arises from the open-loop pole at \(s = 0\). This pole creates an infinite magnitude of \(G(s)\) at \(\omega = 0\); in fact, a pole anywhere on the imaginary axis will create an arc at infinity. To correctly determine the number of \(- 1/K\) point encirclements, we must draw this arc in the proper half-plane: Should it cross the positive real axis, as shown in Fig. 6.26, or the negative one? It is also necessary to assess whether the arc should sweep out \(180^{\circ}\) (as in Fig. 6.26), \(360^{\circ}\), or \(540^{\circ}\).

A simple artifice suffices to answer these questions. We modify the \(C_{1}\) contour to take a small detour around the pole either to the right (see Fig. 6.27) or to the left. It makes no difference to the final stability question which way the detour goes around the pole, but it is more convenient to go to the right because then no poles are introduced within the \(C_{1}\) contour, keeping the value of \(P\) equal to 0 . Because the phase of \(G(s)\) is the negative of the sum of the angles from all of the poles, we see that the evaluation results in a Nyquist plot moving from \(+ 90^{\circ}\) for \(s\) just below the pole at \(s = 0\), across the positive real axis to \(- 90^{\circ}\) for

Figure 6.25

Bode plot for \(G(s) = 1/s(s + 1)^{2}\): (a) magnitude; (b) phase

Figure 6.26

Nyquist plot \(\ ^{10}\) for \(G(s) = \frac{1}{s(s + 1)^{2}}\)

\(\ ^{10}\) The shape of this Nyquist plot is a translated strophoid plane curve, meaning "a belt with a twist." The curve was first studied by Barrow in 1670 .

Figure 6.27

\(C_{1}\) contour enclosing

the RHP for the system

in Example 6.9

\(s\) just above the pole. Had there been two poles at \(s = 0\), the Nyquist plot at infinity would have executed a full \(360^{\circ}\) arc, and so on for three or more poles. Furthermore, for a pole elsewhere on the imaginary axis, a \(180^{\circ}\) clockwise arc would also result but would be oriented differently than the example shown in Fig. 6.26.

The Nyquist plot crosses the real axis at \(\omega = 1\) with \(|G| = 0.5\), as indicated by the Bode plot. For \(K > 0\), there are two possibilities for the location of \(- 1/K\): inside the two loops of the Nyquist plot, or outside the Nyquist contour completely. For large values of \(K\) (\(K_{l}\) in Fig. 6.26), \(- 0.5 < - 1/K_{l} < 0\) will lie inside the two loops; hence \(N = 2\), and therefore \(Z = 2\), indicating that there are two unstable roots. This happens for \(K > 2\). For small values of \(K\) (\(K_{s}\) in Fig. 6.26), \(- 1/K\) lies outside the loops; thus \(N = 0\), and all roots are stable. All this information is in agreement with the root locus in Fig. 6.14(b). (When \(K < 0\), \(- 1/K\) lies on the positive real axis; then \(N = 1\), which means \(Z = 1\) and the system has one unstable root. The \(0^{\circ}\) root locus will verify this result.)

For this and many similar systems, we can see that the encirclement criterion reduces to a very simple test for stability based on the open-loop frequency response: The system is stable if \(|KG(j\omega)| < 1\) when the phase of \(G(j\omega)\) is \(180^{\circ}\). Note this relation is identical to the stability criterion given in Eq. (6.25); however, by using the Nyquist criterion,

we don't require the root locus to determine whether \(|KG(j\omega)| < 1\) or \(|KG(j\omega)| > 1\).

We draw the Nyquist plot using Matlab, with

s = tf('s');

sysG = 1/(s*(s + 1)^2);

nyquist(sysG)

axis([-3 3 -3 3])

The axis command scaled the plot so only points between +3 and -3 on the real and imaginary axes were included. Without manual scaling, the plot would be scaled based on the maximum values computed by Matlab, and the essential features in the vicinity of the -1 region would be lost.

For systems that are open-loop unstable, care must be taken because now \(P \neq 0\) in Eq. (6.28). We shall see that the simple rules from Section 6.2 will need to be revised in this case.

211. EXAMPLE 6.10

Figure 6.28

Control system for Example 6.10

Figure 6.29

Root locus for \(G(s) = \frac{(s + 1)}{s(s/10 - 1)}\)

212. Nyquist Plot for an Open-Loop Unstable System

The third example is defined in Fig. 6.28. Determine its stability properties using the Nyquist criterion.

Solution. The root locus for this system is sketched in Fig. 6.29 for \(K > 1\). The open-loop system is unstable because it has a pole in the RHP. The open-loop Bode plot is shown in Fig. 6.30. Note in the Bode plot that \(|KG(j\omega)|\) behaves exactly as if the pole had been in the LHP. However, \(\angle G(j\omega)\) increases by \(90^{\circ}\) instead of the usual \(90^{\circ}\) decrease at a pole. Any system with a pole in the RHP is unstable; hence, it is difficult \(\ ^{11}\) to determine its frequency response experimentally, because the system never reaches a steady-state sinusoidal response for a sinusoidal input. It is, however, possible to compute the magnitude and phase of the transfer function according to the rules in Section 6.1. The pole in the RHP affects the Nyquist encirclement criterion, because the value of \(P\) in Eq. (6.28) is \(+ 1\).

Figure 6.30

Bode plot for \(G(s) = \frac{(s + 1)}{s(s/10 - 1)}\): (a) magnitude; (b) phase

We convert the frequency-response information of Fig. 6.30 into the Nyquist plot in Fig. 6.31(a) as in the previous examples. As before, the \(C_{1}\) detour around the pole at \(s = 0\) in Fig. 6.31(b) creates a large arc at infinity in Fig. 6.31(a). This arc crosses the negative real axis because of the \(180^{\circ}\) phase contribution of the pole in the RHP as shown by Fig. 6.31(b).

The real-axis crossing occurs at \(|G(s)| = 1\) because in the Bode plot \(|G(s)| = 1\) when \(\angle G(s) = + 180^{\circ}\), which happens to be at \(\omega \cong 3rad/sec\).

By expanding \(G(j\omega)\) into its real and imaginary parts, it can be seen that the real part approaches \(- 1.1\) as \(\omega \rightarrow 0^{\pm}\), so the branches heading to infinity hug a vertical asymptote at \(- 1.1\). This is shown to be the case as the asymptotes approach points A and C in Fig. 6.31(a).
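The expansion is easy to confirm numerically. A short Python check (our sketch; \(G\) is the plant of Example 6.10):

```python
def G(s: complex) -> complex:
    return (s + 1) / (s * (s / 10 - 1))   # plant of Example 6.10

# Near w = 0, G(jw) ~ -1.1 + j/w, so the branches heading to infinity
# hug the vertical line Re[G] = -1.1:
for w in (1e-1, 1e-2, 1e-3):
    print(w, G(1j * w).real)

# The finite real-axis crossing of the Nyquist plot: |G(j3)| is close to 1,
# consistent with the Bode plot reading at w of about 3 rad/sec.
print(abs(G(3j)))
```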

Figure 6.31

Example 6.10: (a) Nyquist plot of \(G(s) = \frac{(s + 1)}{s(s/10 - 1)}\); (b) \(C_{1}\) contour

The contour shows two different behaviors, depending on the values of \(K\ ( > 0)\). For large values of \(K\) (\(K_{l}\) in Fig. 6.31(a)), there is one counterclockwise encirclement of the \(- 1\) point (\(- 1/K_{l}\) in the figure); hence, \(N = - 1\). However, because \(P = 1\) from the RHP pole, \(Z = N + P = 0\), so there are no unstable system roots and the system is stable for \(K > 1\). For small values of \(K\) (\(K_{s}\) in Fig. 6.31(a)), \(N = + 1\) because of the clockwise encirclement of \(- 1\) (\(- 1/K_{s}\) in the figure), and \(Z = 2\), indicating two unstable roots. These results can be verified qualitatively by the root locus in Fig. 6.29, where we see that low values of \(K\) produce the portions of the loci that are in the RHP (unstable), and that both branches cross into the LHP (stable) for high values of \(K\).

If \(K < 0\), \(- 1/K\) is on the positive real axis, so \(N = 0\) and \(Z = 1\), indicating the system will have one unstable closed-loop pole. A \(0^{\circ}\) root locus will show a branch of the locus emanating from the pole at \(s = + 10\) to infinity, thus verifying that there will always be one unstable root.

As with all systems, the stability boundary occurs at \(|KG(j\omega)| = 1\) for the phase of \(\angle G(j\omega) = 180^{\circ}\). However, in this case, \(|KG(j\omega)|\) must be greater than 1 to yield the correct number of -1 point encirclements to achieve stability. This polarity reversal of the normal rules can be rigorously determined via the Nyquist plot; however, in practice, it is usually more expedient to sketch the root locus and to determine the correct rules based on its behavior.
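The gain boundary \(K = 1\) can also be confirmed directly from the closed-loop characteristic equation \(1 + KG(s) = 0\), which here reduces to \(s^{2}/10 + (K - 1)s + K = 0\). A quadratic-formula check in Python (our sketch):

```python
import cmath

def closed_loop_roots(K):
    # 1 + K(s + 1)/(s(s/10 - 1)) = 0  =>  s^2/10 + (K - 1)s + K = 0
    a, b, c = 0.1, K - 1.0, K
    disc = cmath.sqrt(b * b - 4 * a * c)
    return ((-b + disc) / (2 * a), (-b - disc) / (2 * a))

for K in (0.5, 2.0):
    roots = closed_loop_roots(K)
    unstable = sum(r.real > 0 for r in roots)
    print(K, roots, unstable)   # K = 0.5: two RHP roots; K = 2: none
```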

To draw the Nyquist plot using Matlab, use the following commands:

s = tf('s');

sysG = (s + 1)/(s*(s/10 - 1));

nyquist(sysG)

axis([-3 3 -3 3])

The existence of the RHP pole in Example 6.10 affected the Bode plotting rules of the phase curve and affected the relationship between encirclements and unstable closed-loop roots because \(P = 1\) in Eq. (6.28). But we apply the Nyquist stability criterion without any modifications. The same is true for systems with a RHP zero; that is, a nonminimum-phase zero has no effect on the Nyquist stability criterion, but the Bode plotting rules are affected.

Nyquist Plot Characteristics

Find the Nyquist plot for the second-order system

\[G(s) = \frac{s^{2} + 3}{(s + 1)^{2}} \]

and reconcile the plot with the characteristics of \(G(s)\). If \(G(s)\) is to be included in a feedback system as shown in Fig. 6.18, determine whether the system is stable for all positive values of \(K\).

Solution. To draw the Nyquist plot using Matlab, use the following commands:

```matlab
s = tf('s');
sysG = (s^2 + 3)/(s + 1)^2;
nyquist(sysG)
axis([-2 3 -3 3])
```

The result is shown in Fig. 6.32. Note there are no arcs at infinity for this case due to the lack of any poles at the origin or on the \(j\omega\) axis. Also note the Nyquist curve associated with the Bode plot \((s = + j\omega)\) starts

Figure 6.32

Nyquist plot\(^{12}\) for Example 6.11

at \((3,0)\), ends at \((1,0)\), and therefore starts and ends with a phase angle of \(0^{\circ}\). This is as it should be, since the numerator and denominator of \(G(s)\) are of equal order and there are no singularities at the origin; so the Bode plot should start and end with zero phase. Also note the Nyquist plot goes through \((0,0)\) as \(s\) goes through \(s = + j\sqrt{3}\), as it should, since the magnitude equals zero when \(s\) is at a zero of \(G(s)\). Furthermore, note the phase goes from \(- 120^{\circ}\) as \(s\) approaches \((0,0)\) to \(+ 60^{\circ}\) as \(s\) departs from \((0,0)\). This behavior follows because a Bode plot phase jumps by \(+ 180^{\circ}\) instantaneously as \(s\) passes through a zero on the \(j\omega\) axis. The phase initially decreases as the plot leaves the starting point at \((3,0)\) because the lowest-frequency singularity is the pole at \(s = - 1\).

Changing the gain \(K\) will increase or decrease the magnitude of the Nyquist plot, but the plot can never cross the negative real axis. Therefore, the closed-loop system will always be stable for positive \(K\). Exercise: Verify this result by making a rough root-locus sketch by hand.
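The claim that the plot never crosses the negative real axis can also be checked numerically. The following Python sketch evaluates \(G(j\omega)\) over a wide frequency range and confirms the endpoints at \((3,0)\) and \((1,0)\), the zero crossing at \(\omega = \sqrt{3}\), and the absence of negative-real-axis crossings:

```python
def G(s):
    # Plant from Example 6.11: G(s) = (s^2 + 3)/(s + 1)^2.
    return (s**2 + 3) / (s + 1)**2

# Sample the Nyquist path s = j*omega over several decades.
freqs = [0.001 * 1.01**k for k in range(1600)]
points = [G(1j * w) for w in freqs]

start = G(1j * 1e-9)        # should approach G(0) = 3
end = points[-1]            # should approach 1 (equal-order num/den)
at_zero = G(1j * 3**0.5)    # magnitude is zero at the zero s = j*sqrt(3)

# The curve never crosses the negative real axis: wherever the imaginary
# part is (nearly) zero, the real part is non-negative, so no -1 point
# encirclements occur for any positive gain K.
neg_axis_crossing = any(abs(q.imag) < 1e-3 and q.real < -1e-3 for q in points)
```

Since the plot cannot reach the negative real axis, no choice of \(K > 0\) places \(-1/K\) inside the curve, which is the numerical restatement of the stability conclusion above.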

6.4 Stability Margins

A large fraction of control system designs behave in a pattern roughly similar to that of the system in Section 6.2 and Example 6.9 in Section 6.3; that is, the system is stable for all small gain values and becomes unstable if the gain increases past a certain critical point. Knowing exactly what the margins are for which a control system remains stable is of critical importance. Two commonly used quantities that measure the stability margin for such systems are directly related

\(^{12}\) The shape of this Nyquist plot is a limaçon, a fact pointed out by the third author's son, who was in a 10th-grade trigonometry class at the time. Limaçon means "snail" in French, from the Latin "limax," and was first investigated by Dürer in 1525.

Gain margin

Phase margin

Figure 6.33

Nyquist plot for defining GM and PM

to the stability criterion of Eq. (6.25): gain margin and phase margin. In this section, we will define and use these two concepts to study system design. Another measure of stability, originally defined by Smith (1958), combines these two margins into one, called the vector margin (sometimes called the complex margin), which gives a better indication of stability for complicated cases.

The gain margin (GM) is the factor by which the gain can be increased (or decreased in certain cases) before instability results. For the typical case, it can be read directly from the Bode plot (see Fig. 6.15) by measuring the vertical distance between the \(|KG(j\omega)|\) curve and the magnitude \(= 1\) line at the frequency where \(\angle G(j\omega) = - 180^{\circ}\). We see from the figure that when \(K = 0.1\), \(GM = 20\) (or 26 db) because \(|KG(j\omega)| = 0.05\). When \(K = 2\), the system is neutrally stable with \(|KG(j\omega)| = 1\), thus \(GM = 1\) (0 db). For \(K = 10\), \(|KG(j\omega)| = 5\), so \(GM = 0.2\) (\(-14\) db) and the system is unstable. Note, for this typical system, the GM is the factor by which the gain \(K\) can be increased before instability results; therefore, \(GM < 1\) (or \(GM < 0\) db) indicates an unstable system. The GM can also be determined from a root locus with respect to \(K\) by noting two values of \(K\): (1) at the point where the locus crosses the \(j\omega\)-axis, and (2) at the nominal closed-loop poles. The GM is the ratio of these two values.

Another measure that is used to indicate the stability margin in a system is the phase margin (PM). It is the amount by which the phase of \(G(j\omega)\) exceeds \(- 180^{\circ}\) when \(|KG(j\omega)| = 1\), which is an alternative way of measuring the degree to which the stability conditions of Eq. (6.25) are met. For the case in Fig. 6.15, we see that \(PM \cong 80^{\circ}\) for \(K = 0.1\), \(PM = 0^{\circ}\) for \(K = 2\), and \(PM = - 35^{\circ}\) for \(K = 10\). A positive \(PM\) is required for stability.

The stability margins may also be defined in terms of the Nyquist plot. Figure 6.33 shows that GM and PM are measures of how close the complex quantity \(G(j\omega)\) comes to encircling the -1 point, which is another way of stating the neutral-stability point specified by Eq. (6.24). Again we can see that the GM indicates how much the gain can be raised before instability results in a system like the one in Example 6.9. The \(PM\) is the difference between the phase of \(G(j\omega)\) and \(180^{\circ}\) when \(KG(j\omega)\) crosses the circle \(|KG(s)| = 1\); the positive value of \(PM\) is assigned to

Figure 6.34

GM and PM from the magnitude and phase plot

the stable case (i.e., with no Nyquist encirclements). So we see that the two margins measure the distance between the Nyquist plot and the \(-1\) point in two dimensions; the GM measures along the horizontal axis, while the PM measures along the unit circle.

It is easier to determine these margins directly from the Bode plot than from the Nyquist plot. The term crossover frequency, \(\omega_{c}\), is often used to refer to the frequency at which the magnitude is unity, or 0 db. The crossover frequency is easily determined from the open-loop frequency-response plot, and it is highly correlated with the closed-loop system bandwidth and, therefore, the speed of response of the system. The closed-loop system bandwidth was defined in Section 6.1, and its detailed relationship to the crossover frequency will be discussed in Section 6.6.

The open-loop frequency-response data shown in Fig. 6.34 are the same data plotted in Fig. 6.25, but for the case with \(K = 1\). The PM \(( = 22^{\circ})\) and GM \(( = 2)\) are apparent from Fig. 6.34 and match those that could have been obtained (with more difficulty) from the Nyquist plot shown in Fig. 6.26. The real-axis crossing at \(-0.5\) corresponds to a GM

Figure 6.35

\(PM\) versus \(K\) from the frequency-response data

of \(1/0.5\) or 2 and the PM could be computed graphically by measuring the angle of \(G(j\omega)\) as it crosses the magnitude \(= 1\) circle.

One of the useful aspects of frequency-response design is the ease with which we can evaluate the effects of gain changes. In fact, we can determine the PM from Fig. 6.34 for any value of \(K\) without redrawing the magnitude or phase information. We need only indicate on the figure where \(|KG(j\omega)| = 1\) for selected trial values of \(K\), as has been done with dashed lines in Fig. 6.35. Now we can see that \(K = 5\) yields an unstable PM of \(- 22^{\circ}\), while a gain of \(K = 0.5\) yields a PM of \(+ 45^{\circ}\). Furthermore, if we wish a certain PM (say \(70^{\circ}\) ), we simply read the value of \(|G(j\omega)|\) corresponding to the frequency that would create the desired PM (here \(\omega = 0.2rad/sec\) yields \(70^{\circ}\), where \(|G(j\omega)| = 5\) ), and note that the magnitude at this frequency is \(1/K\). Therefore, a PM of \(70^{\circ}\) will be achieved with \(K = 0.2\).
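The plant behind Figs. 6.25 and 6.34 is not reproduced in this excerpt; a transfer function consistent with the quoted margins (PM \(\cong 22^{\circ}\) and GM \(= 2\) at \(K = 1\)) is \(G(s) = 1/[s(s+1)^{2}]\). Under that assumption, the gain sweep described above can be reproduced with a short Python sketch:

```python
import math

def mag_G(w):
    # Assumed plant G(s) = 1/(s(s+1)^2), so |G(jw)| = 1/(w*(1 + w^2)).
    return 1 / (w * (1 + w**2))

def phase_margin(K):
    # Bisect |K*G(jw)| = 1 for the crossover frequency wc; the magnitude
    # is monotonically decreasing in w for this plant.
    lo, hi = 1e-6, 100.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if K * mag_G(mid) > 1:
            lo = mid
        else:
            hi = mid
    wc = 0.5 * (lo + hi)
    # PM = 180 + phase(G), with phase(G(jw)) = -90 - 2*atan(w) degrees.
    return 90 - 2 * math.degrees(math.atan(wc))

def gain_margin(K):
    # Phase crosses -180 deg at w = 1 rad/sec, where |G| = 0.5, so GM = 2/K.
    return 1 / (K * mag_G(1.0))
```

With these, `phase_margin(1.0)` is about \(+22^{\circ}\), `phase_margin(5.0)` about \(-23^{\circ}\), and `phase_margin(0.5)` about \(+44^{\circ}\), matching the values read from the dashed-line construction of Fig. 6.35 to within plot-reading accuracy.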

The PM is more commonly used to specify control system performance because it is most closely related to the damping ratio of the system. This can be seen for the open-loop second-order system

\[G(s) = \frac{\omega_{n}^{2}}{s\left( s + 2\zeta\omega_{n} \right)} \]

which, with unity feedback, produces the closed-loop system

\[\mathcal{T}(s) = \frac{\omega_{n}^{2}}{s^{2} + 2\zeta\omega_{n}s + \omega_{n}^{2}} \]

It can be shown that the relationship between the PM and \(\zeta\) in this system is

\[PM = \tan^{- 1}\left\lbrack \frac{2\zeta}{\sqrt{\sqrt{1 + 4\zeta^{4}} - 2\zeta^{2}}} \right\rbrack \]

This function is plotted in Fig. 6.36. Note the function is approximately a straight line up to about \(PM = 60^{\circ}\). The dashed line shows a straight-line approximation to the function, where

\[\zeta \cong \frac{PM}{100} \]

It is clear that the approximation holds only for PM below about \(70^{\circ}\). Furthermore, Eq. (6.31) is only accurate for the second-order system of Eq. (6.30). In spite of these limitations, Eq. (6.32) is often used as a rule of thumb for relating the closed-loop damping ratio to PM. It is useful as a starting point; however, it is always important to check the actual damping of a design, as well as other aspects of the performance, before calling the design complete.
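The quality of the \(\zeta \cong PM/100\) rule of thumb is easy to check against Eq. (6.31). The Python sketch below evaluates the exact formula, confirms it against a direct crossover calculation for \(\omega_{n} = 1\), and compares it with the straight-line approximation:

```python
import math

def pm_exact(zeta):
    # Eq. (6.31): exact PM for G(s) = wn^2/(s(s + 2*zeta*wn)), unity feedback.
    inner = math.sqrt(1 + 4 * zeta**4) - 2 * zeta**2
    return math.degrees(math.atan2(2 * zeta, math.sqrt(inner)))

def pm_brute(zeta):
    # Independent check (wn = 1): |G(jw)| = 1/(w*sqrt(w^2 + 4*zeta^2)) = 1
    # gives w^2 = sqrt(4*zeta^4 + 1) - 2*zeta^2; then PM = 180 + phase(G).
    w = math.sqrt(math.sqrt(4 * zeta**4 + 1) - 2 * zeta**2)
    return 90 - math.degrees(math.atan(w / (2 * zeta)))
```

For \(\zeta = 0.2, 0.4, 0.6\) the exact PM is about \(23^{\circ}, 43^{\circ}, 59^{\circ}\), within a few degrees of \(100\zeta\), consistent with the dashed line in Fig. 6.36.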

The GM for the second-order system [given by Eq. (6.29)] is infinite \((GM = \infty)\), because the phase curve does not cross \(- 180^{\circ}\) as the frequency increases. This would also be true for any first- or second-order system.

Additional data to aid in evaluating a control system based on its PM can be derived from the relationship between the resonant peak \(M_{r}\) and \(\zeta\) seen in Fig. 6.3. Note this figure was derived for the same system [Eq. (6.9)] as Eq. (6.30). We can convert the information in Fig. 6.36 into a form relating \(M_{r}\) to the PM. This is depicted in Fig. 6.37, along with the step-response overshoot \(M_{p}\). Therefore, we see that, given the PM, one can determine the overshoot of the closed-loop step response for a

Figure 6.36

Damping ratio versus PM

Figure 6.37

Transient-response overshoot \(\left( M_{p} \right)\) and frequency-response resonant peak \(\left( M_{r} \right)\) versus \(PM\) for \(T(s) = \frac{\omega_{n}^{2}}{s^{2} + 2\zeta\omega_{n}s + \omega_{n}^{2}}\)

Importance of PM

Nichols Plot

second-order system with no zeros, which serves as a rough estimate for any system.

Many engineers think directly in terms of the PM when judging whether a control system is adequately stabilized. In these terms, a \(PM = 30^{\circ}\) is often judged to be the lowest adequate value. Furthermore, some value of the PM is often stated specifically as a required specification of the feedback system design. In addition to testing the stability of a system design using the PM, a designer would typically also be concerned with meeting a speed-of-response specification such as bandwidth, as discussed in Section 6.1. In terms of the frequency-response parameters discussed so far, the crossover frequency would best describe a system's speed of response. This idea will be discussed further in Sections 6.6 and 6.7.

In some cases, the PM and GM are not helpful indicators of stability. For first- and second-order systems, the phase never crosses the \(180^{\circ}\) line; hence, the GM is always \(\infty\) and not a useful design parameter. For higher-order systems, it is possible to have more than one frequency where \(|KG(j\omega)| = 1\) or where \(\angle KG(j\omega) = 180^{\circ}\), and the margins as previously defined need clarification. An example of this can be seen in Fig. 10.12, where the magnitude crosses 1 three times. In that case, a decision was made to define PM by the first crossing, because the PM at this crossing was the smallest of the three values and thus the most conservative assessment of stability. A Nyquist plot based on the data in Fig. 10.12 would show that the portion of the Nyquist curve closest to the -1 point was the critical indicator of stability, and therefore use of the crossover frequency yielding the minimum value of PM was the logical choice. Alternatively, the Nichols plot discussed in Section 6.9 can be used to resolve any uncertainty in the stability margins. At best, a designer needs to be judicious when applying the margin definitions described in Fig. 6.33. In fact, the actual stability margin of a system can be rigorously assessed only by examining the Nyquist or Nichols plots to determine its closest approach to the -1 point.

Vector margin

Conditionally stable systems

Figure 6.38

Definition of the vector margin on the Nyquist plot

Figure 6.39

Root locus for a conditionally stable system
To aid in this analysis, Smith (1958) introduced the vector margin (sometimes called the complex margin), which he defined to be the distance to the \(-1\) point from the closest approach of the Nyquist plot.\(^{13}\) Figure 6.38 illustrates the idea graphically. Because the vector margin is a single margin parameter, it removes all the ambiguities in assessing stability that come with using GM and PM in combination. In the past it has not been used extensively due to difficulties in computing it. However, with the widespread availability of computer aids, the idea of using the vector margin to describe the degree of stability is much more feasible.

There are certain practical examples in which an increase in the gain can make the system stable. As we saw in Chapter W3.8, these systems are called conditionally stable. A representative root-locus plot for such systems is shown in Fig. 6.39. For a point on the root locus, such as \(A\), an increase in the gain would make the system stable by bringing the unstable roots into the LHP. For point \(B\), either a gain increase or decrease could make the system become unstable. Therefore, several GMs exist that correspond to either gain reduction or gain increase, and the definition of the GM in Fig. 6.33 is not valid.

Figure 6.40

System in which increasing gain leads from instability to stability: (a) root locus; (b) Nyquist plot

(a)

(b)

EXAMPLE 6.12

Stability Properties for a Conditionally Stable System

Determine the stability properties as a function of the gain \(K\) for the system with the open-loop transfer function

\[KG(s) = \frac{K(s + 10)^{2}}{s^{3}} \]

Solution. This is a system for which increasing gain causes a transition from instability to stability. The root locus in Fig. 6.40(a) shows that the system is unstable for \(K < 5\) and stable for \(K > 5\). The Nyquist plot in Fig. 6.40(b) was drawn for the stable value \(K = 7\). Determination of the margins according to Fig. 6.33 yields \(PM = + 10^{\circ}\) (stable) and \(GM = 0.7\) (unstable). According to the rules for stability discussed earlier, these two margins yield conflicting signals on the system's stability.

We resolve the conflict by counting the Nyquist encirclements in Fig. 6.40(b). There is one clockwise encirclement and one counterclockwise encirclement of the -1 point. Hence there are no net encirclements, which confirms that the system is stable for \(K = 7\). For systems such as this, it is best to resort to the root locus and/or Nyquist plot (rather than the Bode plot) to determine stability.
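The root locus of Fig. 6.40(a) is not reproduced here, but the stability boundary at \(K = 5\) can be confirmed independently with a Routh-Hurwitz test on the closed-loop characteristic polynomial \(s^{3} + Ks^{2} + 20Ks + 100K\), sketched below in Python:

```python
def routh_stable(K):
    # Closed-loop characteristic polynomial for KG(s) = K(s+10)^2/s^3:
    #   s^3 + K*s^2 + 20*K*s + 100*K = 0
    a = [1.0, K, 20.0 * K, 100.0 * K]
    # Routh array first column for a cubic: a0, a1, (a1*a2 - a0*a3)/a1, a3.
    col = [a[0], a[1], (a[1] * a[2] - a[0] * a[3]) / a[1], a[3]]
    # Stable if and only if every first-column entry is positive.
    return all(c > 0 for c in col)
```

The third entry reduces to \(20K - 100\), which is positive exactly when \(K > 5\), in agreement with the root locus.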

EXAMPLE 6.13

Nyquist Plot for a System with Multiple Crossover Frequencies

Draw the Nyquist plot for the system

\[\begin{matrix} G(s) & \ = \frac{85(s + 1)\left( s^{2} + 2s + 43.25 \right)}{s^{2}\left( s^{2} + 2s + 82 \right)\left( s^{2} + 2s + 101 \right)} \\ & \ = \frac{85(s + 1)(s + 1 \pm 6.5j)}{s^{2}(s + 1 \pm 9j)(s + 1 \pm 10j)} \end{matrix}\]

and determine the stability margins.

Figure 6.41

Nyquist plot of the complex system in Example 6.13

Figure 6.42

Bode plot of the system in Example 6.13

Solution. The Nyquist plot (see Fig. 6.41) shows qualitatively that there are three crossings of the magnitude \(= 1\) circle; therefore, there will be three corresponding PM values. The Bode plot for this system (see Fig. 6.42) shows the three crossings of magnitude \(= 1\) at 0.75, 9.0, and 10.1 rad/sec, which indicate PMs of \(37^{\circ}\), \(80^{\circ}\), and \(40^{\circ}\), respectively. The key indicator of stability in this case is the proximity of the Nyquist plot as it approaches the \(-1\) point while crossing the real axis. Because there is only one crossing of the real axis of the Nyquist plot

(a)

(b)
(and, therefore, one crossing of the \(-180^{\circ}\) line of the phase plot), there is only one value of the GM. From the Bode plot, we see the phase crosses \(-180^{\circ}\) at \(\omega = 10.4\) rad/sec, where the magnitude \(= 0.79\). Therefore, \(GM = 1/0.79 = 1.26\), which is the most useful stability margin for this example. Note if there had been multiple crossings of \(-180^{\circ}\), the smallest value of the GM determined at the various \(-180^{\circ}\) crossings would be the correct value of GM, because that is where the system would become unstable as the gain is increased. (Tischler, 2012, p. 226.)
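These margins can be verified numerically. The Python sketch below counts the unity-magnitude crossings of \(|G(j\omega)|\) on a fine grid and bisects for the single \(-180^{\circ}\) crossing (where \(G(j\omega)\) is real and negative) to recover the GM:

```python
def G(w):
    # Example 6.13 plant evaluated at s = j*w.
    s = 1j * w
    return (85 * (s + 1) * (s**2 + 2*s + 43.25)) / (
        s**2 * (s**2 + 2*s + 82) * (s**2 + 2*s + 101))

# Count unity-magnitude crossings on a fine grid; three are expected.
grid = [0.05 + 0.005 * k for k in range(3000)]
mags = [abs(G(w)) for w in grid]
crossings = sum(1 for a, b in zip(mags, mags[1:]) if (a - 1) * (b - 1) < 0)

# Bisect for the -180 deg crossing: imag(G) changes sign near w = 10.3
# while real(G) < 0, which fixes the single gain margin.
lo, hi = 10.0, 10.6
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if G(mid).imag < 0:
        lo = mid
    else:
        hi = mid
w180 = 0.5 * (lo + hi)
gm = 1 / abs(G(w180))
```

The computed GM is about 1.25, in agreement with the \(1/0.79 = 1.26\) read from the Bode plot; the exact crossover frequencies differ slightly from the plot readings, as expected from graphical accuracy.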

In summary, many systems behave roughly like Example 6.9, and for them, the GM and PM are well defined and useful. There are also frequent instances of more complicated systems where the Bode plot has multiple magnitude 1 or \(- 180^{\circ}\) crossovers for which the stability criteria defined by Fig. 6.33 are less clear; therefore, we need to determine possible values of GM and PM, then revert back to the Nyquist stability criterion for an in-depth understanding and determination of the correct stability margins.

6.5 Bode's Gain-Phase Relationship

One of Bode's important contributions is the following theorem:

For any stable minimum-phase system (i.e., one with no RHP zeros or poles), the phase of \(G(j\omega)\) is uniquely related to the magnitude of \(G(j\omega)\).

When the slope of \(|G(j\omega)|\) versus \(\omega\) on a log-log scale persists at a constant value for approximately a decade of frequency, the relationship is particularly simple and is given by

\[\angle G(j\omega) \cong n \times 90^{\circ} \]

where \(n\) is the slope of \(|G(j\omega)|\) in units of decade of amplitude per decade of frequency. For example, in considering the magnitude curve alone in Fig. 6.43, we see Eq. (6.33) can be applied to the two frequencies \(\omega_{1} = 0.1\) (where \(n = - 2\) ) and \(\omega_{2} = 10\) (where \(n = - 1\) ), which are a decade removed from the change in slope, to yield the approximate values of phase, \(- 180^{\circ}\) and \(- 90^{\circ}\). The exact phase curve shown in the figure verifies that indeed the approximation is quite good. It also shows that the approximation will degrade if the evaluation is performed at frequencies closer to the change in slope.
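The system behind Fig. 6.43 is not shown in this excerpt; a hypothetical transfer function with the same slopes is \(G(s) = (s+1)/s^{2}\), which has \(n = -2\) below the break at \(\omega = 1\) rad/sec and \(n = -1\) above it. A quick Python check confirms how good the \(n \times 90^{\circ}\) approximation is a decade away from the slope change:

```python
import math, cmath

def phase_deg(w):
    # Hypothetical illustration: G(s) = (s + 1)/s^2 has magnitude slope
    # n = -2 well below w = 1 rad/sec and n = -1 well above it.
    s = 1j * w
    return math.degrees(cmath.phase((s + 1) / s**2))

approx_low = -2 * 90    # predicted phase near w = 0.1 (slope n = -2)
approx_high = -1 * 90   # predicted phase near w = 10 (slope n = -1)
```

The exact phases at \(\omega = 0.1\) and \(\omega = 10\) are about \(-174^{\circ}\) and \(-96^{\circ}\), within \(6^{\circ}\) of the predictions, and the error grows as the evaluation frequency moves toward the break point.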

An exact statement of the Bode gain-phase theorem is

\[\angle G\left( j\omega_{o} \right) = \frac{1}{\pi}\int_{- \infty}^{+ \infty}\mspace{2mu}\left( \frac{dM}{du} \right)W(u)du\text{~}\text{in radians,}\text{~} \]

where

\[\begin{matrix} M & \ = \text{~}\text{log magnitude}\text{~} = ln|G(j\omega)|, \\ u & \ = \text{~}\text{normalized frequency}\text{~} = ln\left( \omega/\omega_{o} \right), \\ dM/du & \ \cong \text{~}\text{slope}\text{~}n,\text{~}\text{as defined in Eq.}\text{~}(6.33), \\ W(u) & \ = \text{~}\text{weighting function}\text{~} = ln\left( coth\frac{|u|}{2} \right). \end{matrix}\]

Figure 6.44 is a plot of the weighting function \(W(u)\) and shows how the phase is most dependent on the slope at \(\omega_{o}\); it is also dependent, though to a lesser degree, on slopes at neighboring frequencies. The

Figure 6.43

An approximate gain-phase relationship demonstration

Figure 6.44

Weighting function in Bode's gain-phase theorem

Crossover frequency

Disturbance Rejection Bandwidth, \(\omega_{DRB}\)

figure also suggests that the weighting could be approximated by an impulse function centered at \(\omega_{o}\). We may approximate the weighting function as

\[W(u) \cong \frac{\pi^{2}}{2}\delta(u) \]

which is precisely the approximation made to arrive at Eq. (6.33), using the "sifting" property of the impulse function (and conversion from radians to degrees).

In practice, Eq. (6.34) is never used, but Eq. (6.33) is used as a guide to infer stability from \(|G(j\omega)|\) alone. When \(|KG(j\omega)| = 1\),

\[\begin{matrix} & \angle G(j\omega) \cong - 90^{\circ}\ \text{~}\text{if}\text{~}n = - 1 \\ & \angle G(j\omega) \cong - 180^{\circ}\ \text{~}\text{if}\text{~}n = - 2 \end{matrix}\]

For stability, we want \(\angle G(j\omega) > - 180^{\circ}\) for the PM to be \(> 0\). Therefore, we adjust the \(|KG(j\omega)|\) curve so it has a slope of \(-1\) at the "crossover" frequency, \(\omega_{c}\) (i.e., where \(|KG(j\omega)| = 1\)). If the slope is \(-1\) for a decade above and below the crossover frequency, then \(PM \cong 90^{\circ}\); however, to ensure a reasonable PM, it is usually necessary only to insist that a \(-1\) slope (\(-20\) db per decade) persist for a decade in frequency centered at the crossover frequency. We therefore see there is a very simple design criterion:

Adjust the slope of the magnitude curve \(|KG(j\omega)|\) so it crosses over magnitude 1 with a slope of -1 for a decade around \(\omega_{c}\).

This criterion will usually be sufficient to provide an acceptable PM, and hence provide adequate system damping. To achieve the desired speed of response, the system gain is adjusted so the crossover point is at a frequency that will yield the desired bandwidth or speed of response as determined by Eq. (3.68). Recall that the natural frequency \(\omega_{n}\), bandwidth, and crossover frequency are all approximately equal, as will be discussed further in Section 6.6.

EXAMPLE 6.14

Use of Simple Design Criterion for Spacecraft Attitude Control

For the spacecraft attitude-control problem defined in Fig. 6.45, find a suitable expression for \(KD_{c}(s)\) that will provide good damping and a bandwidth of approximately 0.2 rad/sec. Also determine the frequency where the sensitivity function \(|\mathcal{S}| = 0.7\) (\(= -3\) db). This frequency is often referred to as the "disturbance rejection bandwidth," \(\omega_{DRB}\).

Solution. The magnitude of the frequency response of the spacecraft (see Fig. 6.46) clearly requires some reshaping, because it has a slope of -2 (or \(- 40db\) per decade) everywhere. The simplest compensation

Figure 6.45

Spacecraft

attitude-control system

Figure 6.46

Magnitude of the spacecraft's frequency response

Figure 6.47

Compensated open-loop transfer function

to do the job consists of using proportional and derivative terms (a PD compensator), which produces the relation

\[KD_{c}(s) = K\left( T_{D}s + 1 \right) \]

We will adjust the gain \(K\) to produce the desired bandwidth, and adjust the break point \(\omega_{1} = 1/T_{D}\) to provide the \(-1\) slope at the crossover frequency. The actual design process to achieve the desired specifications is now very simple: We pick a value of \(K\) to provide a crossover at 0.2 rad/sec, and choose a value of \(\omega_{1}\) that is about four times lower than the crossover frequency, so the slope will be \(-1\) in the vicinity of the crossover. Figure 6.47 shows the steps we take to arrive at the final compensation:

Figure 6.48

Closed-loop frequency response of \(\mathcal{T}(s)\) and \(\mathcal{S}(s)\)

  1. Plot \(|G(j\omega)|\).

  2. Modify the plot to include \(\left| D_{c}(j\omega) \right|\), with \(\omega_{1} = 0.05\) rad/sec (\(T_{D} = 20\)), so the slope will be \(\cong - 1\) at \(\omega = 0.2\) rad/sec.

  3. Determine that \(\left| D_{c}G \right| = 100\), where the \(\left| D_{c}G \right|\) curve crosses the line \(\omega = 0.2rad/sec\), which is where we want magnitude 1 crossover to be.

  4. In order for crossover to be at \(\omega = 0.2rad/sec\), compute

\[K = \frac{1}{\left| D_{c}G \right|_{\omega = 0.2}} = \frac{1}{100} = 0.01 \]

Therefore,

\[KD_{c}(s) = 0.01(20s + 1) \]

will meet the specifications, thus completing the design.

If we were to draw the phase curve of \(KD_{c}G\), we would find that \(PM = 75^{\circ}\), which is certainly quite adequate. This result follows because the slope of -1 occurs for a decade centered around the crossover frequency. A plot of the closed-loop frequency-response magnitude, \(\mathcal{T}(s)\), (see Fig. 6.48) shows that, indeed, the crossover frequency and the bandwidth are almost identical in this case; therefore, the desired bandwidth of \(0.2rad/sec\) has been met. The sensitivity function was defined by Eq. (4.23) and for this problem is

\[\mathcal{S} = \frac{1}{1 + KD_{c}G} \]

\(\mathcal{S}(s)\) is also shown on Fig. 6.48, where it can be seen that \(|\mathcal{S}|\) has the value of 0.7 or \(- 3db\) at \(\omega = 0.15rad/sec\). The concept of a disturbance rejection characteristic at a certain frequency \(\left( \omega_{DRB} \right)\) is often specified as a requirement for an acceptable design of a feedback system.

Figure 6.49

Step response for PD compensation

Basically, \(\omega_{DRB}\) is the maximum frequency at which the disturbance rejection (i.e., the sensitivity function, \(\mathcal{S}\) ) is below a certain amount, in this case \(- 3db\); so in this example, \(\omega_{DRB} = 0.15rad/sec\).

The step response of the closed-loop system is shown in Fig. 6.49, and its \(14\%\) overshoot confirms the adequate damping.
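The crossover frequency, PM, and \(\omega_{DRB}\) quoted in this example can be reproduced numerically. The plant's gain is not shown in this excerpt; the Python sketch below assumes the spacecraft is the double integrator \(G(s) = 0.9/s^{2}\), a value consistent with the plot readings quoted above to within rounding:

```python
import math

def loop(w):
    # L(jw) = K*Dc(jw)*G(jw), with K*Dc(s) = 0.01*(20s + 1) and the
    # assumed plant G(s) = 0.9/s^2.
    return 0.01 * (1 + 20j * w) * 0.9 / (1j * w)**2

# Bisect |L(jw)| = 1 for the crossover frequency (|L| decreases with w).
lo, hi = 0.01, 1.0
for _ in range(80):
    mid = math.sqrt(lo * hi)
    if abs(loop(mid)) > 1:
        lo = mid
    else:
        hi = mid
wc = math.sqrt(lo * hi)

# PM = 180 deg + phase(L); for this L, phase = -180 + atan(20*wc) degrees.
pm = math.degrees(math.atan(20 * wc))

# Disturbance rejection bandwidth: bisect where |S| = 1/|1 + L| reaches 0.707.
lo2, hi2 = 0.05, 0.5
for _ in range(80):
    mid = 0.5 * (lo2 + hi2)
    if 1 / abs(1 + loop(mid)) < 0.707:
        lo2 = mid
    else:
        hi2 = mid
w_drb = 0.5 * (lo2 + hi2)
```

The computed crossover is just below 0.2 rad/sec with a PM of about \(75^{\circ}\), and \(\omega_{DRB}\) comes out near 0.14 rad/sec, matching the values read from Figs. 6.47 and 6.48 to within graphical accuracy.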

6.6 Closed-Loop Frequency Response

The closed-loop bandwidth was defined in Section 6.1 and in Fig. 6.5. Figure 6.3 showed that the natural frequency is always within a factor of two of the bandwidth for a second-order system. In Example 6.14, we designed the compensation so the crossover frequency was at the desired bandwidth and verified by computation that the bandwidth was identical to the crossover frequency. Generally, the match between the crossover frequency and the bandwidth is not as good as in Example 6.14. We can help establish a more exact correspondence by making a few observations. Consider a system in which \(|KG(j\omega)|\) shows the typical behavior

\[\begin{matrix} & \ |KG(j\omega)| \gg 1\text{~}\text{for}\text{~}\omega \ll \omega_{c}, \\ & \ |KG(j\omega)| \ll 1\text{~}\text{for}\text{~}\omega \gg \omega_{c}, \end{matrix}\]

where \(\omega_{c}\) is the crossover frequency. The closed-loop frequency-response magnitude is approximated by

\[|\mathcal{T}(j\omega)| = \left| \frac{KG(j\omega)}{1 + KG(j\omega)} \right| \cong \left\{ \begin{matrix} 1, & \omega \ll \omega_{c}, \\ |KG|, & \omega \gg \omega_{c}. \end{matrix} \right.\ \]

In the vicinity of crossover, where \(|KG(j\omega)| = 1\), \(|\mathcal{T}(j\omega)|\) depends heavily on the PM. A PM of \(90^{\circ}\) means that \(\angle G\left( j\omega_{c} \right) = - 90^{\circ}\), and therefore \(\left| \mathcal{T}\left( j\omega_{c} \right) \right| = 0.707\). On the other hand, \(PM = 45^{\circ}\) yields \(\left| \mathcal{T}\left( j\omega_{c} \right) \right| = 1.31\).
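Both quoted values follow directly from the fact that \(KG(j\omega_{c}) = e^{j(PM - 180^{\circ})}\) at crossover, as the following short check shows:

```python
import cmath, math

def T_at_crossover(pm_deg):
    # At crossover |KG| = 1, so KG(j*wc) = exp(j*(PM - 180 deg)) and
    # |T(j*wc)| = |KG/(1 + KG)| = 1/|1 + KG|.
    KG = cmath.exp(1j * math.radians(pm_deg - 180))
    return 1 / abs(1 + KG)
```

For \(PM = 90^{\circ}\) this gives \(1/\sqrt{2} \cong 0.707\), and for \(PM = 45^{\circ}\) it gives about 1.31, confirming the statements above.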

The exact evaluation of Eq. (6.36) was used to generate the curves of \(|\mathcal{T}(j\omega)|\) in Fig. 6.50. It shows that the bandwidth for smaller values

Figure 6.50

Closed-loop bandwidth with respect to \(PM\)

of PM is typically somewhat greater than \(\omega_{c}\), though usually it is less than \(2\omega_{c}\); thus

\[\omega_{c} \leq \omega_{BW} \leq 2\omega_{c} \]
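For the second-order system of Eq. (6.30), these bounds can be checked in closed form: with \(\omega_{n} = 1\), both the open-loop crossover frequency and the closed-loop bandwidth are explicit functions of \(\zeta\), and their ratio stays between 1 and 2, as sketched below:

```python
import math

def bw_over_crossover(zeta):
    # Open-loop crossover of G(s) = 1/(s(s + 2*zeta)) with wn = 1:
    # |G(jw)| = 1 gives w^2 = sqrt(4*zeta^4 + 1) - 2*zeta^2.
    wc = math.sqrt(math.sqrt(4 * zeta**4 + 1) - 2 * zeta**2)
    # Closed-loop bandwidth where |T(jw)| = 0.707 (standard 2nd-order result):
    wbw = math.sqrt(1 - 2 * zeta**2 +
                    math.sqrt((1 - 2 * zeta**2)**2 + 1))
    return wbw / wc
```

Evaluating the ratio for \(\zeta\) between 0.2 and 1 gives values between roughly 1.3 and 1.6, inside the stated bounds.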

Another specification related to the closed-loop frequency response is the resonant-peak magnitude \(M_{r}\), defined in Fig. 6.5. Figures 6.3 and 6.37 show that, for linear systems, \(M_{r}\) is generally related to the damping of the system. In practice, \(M_{r}\) is rarely used; most designers prefer to use the PM to specify the damping of a system, because the imperfections that make systems nonlinear or cause delays usually erode the phase more significantly than the magnitude.

As demonstrated in the last example, it is also important in the design to achieve certain error characteristics and these are often evaluated as a function of the input or disturbance frequency. In some cases, the primary function of the control system is to regulate the output to a certain constant input in the presence of disturbances. For these situations, the key item of interest for the design would be the closed-loop frequency response of the error with respect to disturbance inputs.

6.7 Compensation

As discussed in Chapters 4 and 5, dynamic elements (or compensation) are typically added to feedback controllers to improve the system's stability and error characteristics because the process itself cannot be made to have acceptable characteristics with proportional feedback alone.

Section 4.3 discussed the basic types of feedback: proportional, derivative, and integral. Section 5.4 discussed three kinds of dynamic compensation: lead compensation, which approximates proportional-derivative (PD) feedback; lag compensation, which approximates proportional-integral (PI) control; and notch compensation, which has special characteristics for dealing with resonances. In this section, we discuss these and other kinds of compensation in terms of their frequency-response characteristics.

PD compensation

Figure 6.51

Frequency response of PD control
The frequency-response stability analysis to this point has usually considered the closed-loop system to have the characteristic equation \(1 + KG(s) = 0\). With the introduction of compensation, the closed-loop characteristic equation becomes \(1 + KD_{c}(s)G(s) = 0\), and all the previous discussion in this chapter pertaining to the frequency response of \(KG(s)\) applies directly to the compensated case if we apply it to the frequency response of \(KD_{c}(s)G(s)\). We call this quantity \(L(s)\), the "loop gain," or open-loop transfer function of the system, where \(L(s) = KD_{c}(s)G(s)\).

6.7.1 PD Compensation

We will start the discussion of compensation design by using the frequency response with PD control. The compensator transfer function, given by

\[D_{c}(s) = \left( T_{D}s + 1 \right), \]

was shown in Fig. 5.22 to have a stabilizing effect on the root locus of a second-order system. The frequency-response characteristics of Eq. (6.37) are shown in Fig. 6.51. A stabilizing influence is apparent by the increase in phase and the corresponding +1 slope at frequencies above the break point \(1/T_{D}\). We use this compensation by locating \(1/T_{D}\)

Lead compensation

Figure 6.52

Lead-compensation frequency response with \(1/\alpha = 10\)

so the increased phase occurs in the vicinity of crossover (that is, where \(\left| KD_{c}(s)G(s) \right| = 1\)), thus increasing the PM.

Note the magnitude of the compensation continues to grow with increasing frequency. This feature is undesirable because it amplifies the high-frequency noise that is typically present in any real system and, as a continuous transfer function, cannot be realized with physical elements. It is also the reason, as stated in Section 5.4, that pure derivative compensation gives trouble.

6.7.2 Lead Compensation

In order to alleviate the high-frequency amplification of the PD compensation, a first-order pole is added in the denominator at frequencies substantially higher than the break point of the PD compensator. Thus the phase increase (or lead) still occurs, but the amplification at high frequencies is limited. The resulting lead compensation has a transfer function of

\[D_{c}(s) = \frac{T_{D}s + 1}{\alpha T_{D}s + 1},\ \alpha < 1 \]

where \(1/\alpha\) is the ratio between the pole/zero break-point frequencies. Figure 6.52 shows the frequency response of this lead compensation. Note a significant amount of phase lead is still provided, but with much less amplification at high frequencies. A lead compensator is generally

used whenever a substantial improvement in damping of the system is required.

The phase contributed by the lead compensation in Eq. (6.38) is given by

\[\phi = \tan^{- 1}\left( T_{D}\omega \right) - \tan^{- 1}\left( \alpha T_{D}\omega \right) \]

It can be shown (see Problem 6.44) that the frequency at which the phase is maximum is given by

\[\omega_{\max} = \frac{1}{T_{D}\sqrt{\alpha}} \]

The maximum phase contribution, that is, the peak of the \(\angle D_{c}(s)\) curve in Fig. 6.52, corresponds to

\[\sin\phi_{\max} = \frac{1 - \alpha}{1 + \alpha}, \]

or

\[\alpha = \frac{1 - \sin\phi_{\max}}{1 + \sin\phi_{\max}}. \]

Another way to look at this is the following: The maximum phase occurs at a frequency that lies midway between the two break-point frequencies (sometimes called corner frequencies) on a logarithmic scale,

\[\begin{matrix} \log\omega_{\max} & \ = \log\frac{1}{\sqrt{T_{D}}\sqrt{\alpha T_{D}}} \\ & \ = \log\frac{1}{\sqrt{T_{D}}} + \log\frac{1}{\sqrt{\alpha T_{D}}} \\ & \ = \frac{1}{2}\left\lbrack \log\left( \frac{1}{T_{D}} \right) + \log\left( \frac{1}{\alpha T_{D}} \right) \right\rbrack \end{matrix}\]

as shown in Fig. 6.52. Alternatively, we may state these results in terms of the pole-zero locations. Rewriting \(D_{c}(s)\) in the form used for root-locus analysis, we have

\[D_{c}(s) = \frac{s + z}{s + p}. \]

Problem 6.44 shows that

\[\omega_{\max} = \sqrt{|z||p|} \]

and

\[\log\omega_{\max} = \frac{1}{2}(\log|z| + \log|p|). \]

These results agree with the previous ones if we choose \(z = - 1/T_{D}\) and \(p = - 1/\alpha T_{D}\) in Eqs. (6.39) and (6.41).

For example, a lead compensator with a zero at \(s = -2\) \(\left( T_{D} = 0.5 \right)\) and a pole at \(s = -10\) \(\left( \alpha T_{D} = 0.1 \right)\) (and thus \(\alpha = \frac{1}{5}\)) would yield the maximum phase lead at

\[\omega_{\max} = \sqrt{2 \cdot 10} = 4.47\ rad/sec. \]
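These numbers are easy to verify; the following short check of Eqs. (6.39) and (6.40) uses Python/NumPy in place of the book's Matlab:

```python
import numpy as np

# Lead compensator with zero at s = -2 (T_D = 0.5) and pole at s = -10
# (alpha*T_D = 0.1), so alpha = 0.2 and the lead ratio 1/alpha = 5.
T_D = 0.5
alpha = 0.1 / T_D

# Eq. (6.39): frequency of maximum phase lead
w_max = 1.0 / (T_D * np.sqrt(alpha))

# Eq. (6.40): maximum phase lead
phi_max = np.degrees(np.arcsin((1 - alpha) / (1 + alpha)))

# Cross-check against the phase expression following Eq. (6.38)
phi = np.degrees(np.arctan(T_D * w_max) - np.arctan(alpha * T_D * w_max))

print(w_max, phi_max, phi)   # 4.472..., ~41.8 deg, ~41.8 deg
```

The computed \(41.8^{\circ}\) agrees with the roughly \(40^{\circ}\) read off Fig. 6.53 for a lead ratio of 5.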

The amount of phase lead at the midpoint depends only on \(\alpha\) in Eq. (6.40) and is plotted in Fig. 6.53. For \(\alpha = 1/5\), Fig. 6.53 shows that \(\phi_{\max} = 40^{\circ}\). Note from the figure that we could increase the phase lead up to \(90^{\circ}\) using higher\(\ ^{14}\) values of the lead ratio, \(1/\alpha\); however, Fig. 6.52 shows that increasing values of \(1/\alpha\) also produce higher amplification at high frequencies. Thus our task is to select a value of \(1/\alpha\) that is a good compromise between an acceptable PM and an acceptable noise sensitivity at high frequencies. Usually the compromise suggests that a lead compensation should contribute a maximum of \(70^{\circ}\) to the phase. If a greater phase lead is needed, then a double-lead compensation would be suggested, where

\[D_{c}(s) = \left( \frac{T_{D}s + 1}{\alpha T_{D}s + 1} \right)^{2}. \]

Figure 6.53

Maximum phase increase for lead compensation (plotted against the lead ratio \(1/\alpha\))

Even if a system had negligible amounts of noise present and the pure derivative compensation of Eq. (6.37) were acceptable, a continuous compensation would look more like Eq. (6.38) than Eq. (6.37) because of the impossibility of building a pure differentiator. No physical system, mechanical or electrical, responds with infinite amplitude at infinite frequencies, so there will be a limit in the frequency range (or bandwidth) for which derivative information (or phase lead) can be provided. This is also true with a digital implementation. Here, the sample rate limits the high-frequency amplification and essentially places a pole in the compensation transfer function.

220. EXAMPLE 6.15

Lead Compensation for a DC Motor

As an example of designing a lead compensator, let us repeat the design of compensation for the DC motor with the transfer function

\[G(s) = \frac{1}{s(s + 1)} \]

that was carried out in Section 5.4.1. This also represents the model of a satellite tracking antenna (see Fig. 3.60). This time we wish to obtain a steady-state error of less than 0.1 for a unit-ramp input. Furthermore, we desire an overshoot \(M_{p} < 25\%\). Determine the lead compensation satisfying the specifications.

\(\ ^{14}\) Lead ratio \(= 1/\alpha\).

Solution. The steady-state error to a ramp input is given by

\[e_{ss} = \lim_{s \rightarrow 0}\mspace{2mu} s\left\lbrack \frac{1}{1 + KD_{c}(s)G(s)} \right\rbrack R(s), \]

where \(R(s) = 1/s^{2}\) for a unit ramp, so Eq. (6.45) reduces to

\[e_{ss} = \lim_{s \rightarrow 0}\mspace{2mu}\left\{ \frac{1}{s + KD_{c}(s)\lbrack 1/(s + 1)\rbrack} \right\} = \frac{1}{KD_{c}(0)} \]

Figure 6.54

Frequency response for lead-compensation design

Figure 6.55

Root locus for lead-compensation design

Figure 6.56

Step response of lead-compensation design for Example 6.15

Therefore, we find that \(KD_{c}(0)\), the steady-state gain of the compensation, cannot be less than 10 \(\left( K_{v} \geq 10 \right)\) if it is to meet the error criterion, so we pick \(K = 10\). To relate the overshoot requirement to PM, Fig. 6.37 shows that a PM of \(45^{\circ}\) should suffice. The frequency response of \(KG(s)\) in Fig. 6.54 shows that the \(PM = 20^{\circ}\) if no phase lead is added by compensation. If it were possible to simply add phase without affecting the magnitude, we would need an additional phase of only \(25^{\circ}\) at the \(KG(s)\) crossover frequency of \(\omega = 3\ rad/sec\). However, maintaining the same low-frequency gain and adding a compensator zero would increase the crossover frequency; hence more than a \(25^{\circ}\) phase contribution will be required from the lead compensation. To be safe, we will design the lead compensator so it supplies a maximum phase lead of \(40^{\circ}\). Fig. 6.53 shows that \(1/\alpha = 5\) will accomplish that goal. We will derive the greatest benefit from the compensation if the maximum phase lead from the compensator occurs at the crossover frequency. With some trial and error, we determine that placing the zero at \(\omega = 2\ rad/sec\) and the pole at \(\omega = 10\ rad/sec\) causes the maximum phase lead to be at the crossover frequency. The compensation, therefore, is

\[KD_{c}(s) = 10\frac{s/2 + 1}{s/10 + 1} \]

The frequency-response characteristics of \(KD_{c}(s)G(s)\) in Fig. 6.54 can be seen to yield a PM of \(53^{\circ}\), which satisfies the design goals.

The root locus for this design, originally given as Fig. 5.24, is repeated here as Fig. 6.55, with the root locations marked for \(K = 10\). The locus is not needed for the frequency-response design procedure; it is presented here only for comparison with the root-locus design method presented in Chapter 5, which had an equivalent gain of \(K = 14\). For further comparison, Fig. 6.56 shows the time response of the system to a step command. Comparing it to Fig. 5.25, we see the current design is slightly slower, having a rise time \(t_{r} = 0.33\ sec\) compared to \(t_{r} = 0.26\ sec\) for Fig. 5.25.
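The \(53^{\circ}\) margin claimed for this design can also be checked numerically without the Bode plot. A minimal sketch (Python/NumPy in place of Matlab; the bisection assumes \(|KD_{c}G|\) decreases monotonically through 1, which it does here):

```python
import numpy as np

def L(w):
    """Open-loop KD_c(s)G(s) = 10(s/2 + 1)/[(s/10 + 1) s (s + 1)] at s = jw."""
    s = 1j * w
    return 10 * (s/2 + 1) / ((s/10 + 1) * s * (s + 1))

# |L(jw)| falls monotonically through 1, so bisect for the crossover frequency.
lo, hi = 0.1, 100.0
for _ in range(80):
    mid = np.sqrt(lo * hi)
    lo, hi = (mid, hi) if abs(L(mid)) > 1 else (lo, mid)
wc = np.sqrt(lo * hi)
pm = 180 + np.degrees(np.angle(L(wc)))
print(wc, pm)   # crossover near 4.8 rad/sec, PM near 53 deg
```

The result confirms the PM read from Fig. 6.54 and the crossover frequency of roughly \(5\ rad/sec\) mentioned in the discussion that follows.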

The design procedure used in Example 6.15 can be summarized as follows:

  1. Determine the low-frequency gain so the steady-state errors are within specification.

  2. Select the combination of lead ratio \(1/\alpha\) and zero values \(\left( 1/T_{D} \right)\) that achieves an acceptable PM at crossover.

  3. The pole location is then at \(\left( 1/\alpha T_{D} \right)\).

This design procedure will apply to many cases; however, keep in mind that the specific procedure followed in any particular design may need to be tailored to its particular set of specifications.

In Example 6.15, there were two specifications: peak overshoot and steady-state error. We transformed the overshoot specification into a PM, but the steady-state error specification we used directly. No speed-of-response type of specification was given; however, it would have impacted the design in the same way that the steady-state error specification did. The speed of response or bandwidth of a system is directly related to the crossover frequency, as we pointed out earlier in Section 6.6. Figure W6.1 shows that the crossover frequency was \(\sim 5rad/sec\). We could have increased it by raising the gain \(K\) and increasing the frequency of the lead compensator pole and zero in order to keep the slope of -1 at the crossover frequency. Raising the gain would also have decreased the steady-state error to be better than the specified limit. The GM was never introduced into the problem because the stability was adequately specified by the PM alone. Furthermore, the GM would not have been useful for this system because the phase never crossed the \(180^{\circ}\) line, and the GM was always infinite.

Design parameters for lead networks

In lead-compensation designs, there are three primary design parameters:

  1. The crossover frequency \(\omega_{c}\), which determines bandwidth \(\omega_{BW}\), rise time \(t_{r}\), and settling time \(t_{s}\);

  2. The PM, which determines the damping ratio \(\zeta\) and the overshoot \(M_{p}\);

  3. The low-frequency gain, which determines the steady-state error characteristics.

The design problem is to find the best values for the parameters, given the requirements. In essence, lead compensation increases the value of \(\omega_{c}/L(0)\) \(( = \omega_{c}/K_{v}\) for a Type 1 system\()\). That means that, if the low-frequency gain is kept the same, the crossover frequency will increase. Or, if the crossover frequency is kept the same, the low-frequency gain will decrease. Keeping this interaction in mind, the designer can assume a fixed value of one of these three design parameters, then adjust the other two iteratively until the specifications are met. One approach is to set the low-frequency gain to meet the error specifications and add a lead compensator to increase PM at the crossover frequency. An alternative is to pick the crossover frequency to meet a time response specification, then adjust the gain and lead characteristics so the PM specification is met. A step-by-step procedure is outlined next for these two cases. They apply to a sizable class of problems for which a single lead is sufficient. As with all such design procedures, it provides only a starting point; the designer will typically find it necessary to go through several design iterations in order to meet all the specifications.

Design Procedure for Lead Compensation

  1. Determine the gain \(K\) to satisfy error or bandwidth requirements:

(a) to meet error requirements, pick \(K\) to satisfy error constants $\left( K_{p},K_{v} \right.\ $, or \(\left. \ K_{a} \right)\) so \(e_{ss}\) error specification is met, or alternatively,

(b) to meet bandwidth requirements, pick \(K\) so the open-loop crossover frequency is a factor of two below the desired closed-loop bandwidth.

  2. Evaluate the PM of the uncompensated system using the value of \(K\) obtained from Step 1.

  3. Allow for extra margin (about \(10^{\circ}\)), and determine the needed phase lead \(\phi_{\max}\).

  4. Determine \(\alpha\) from Eq. (6.40) or Fig. 6.53.

  5. Pick \(\omega_{\max}\) to be at the crossover frequency; thus the zero is at \(1/T_{D} = \omega_{\max}\sqrt{\alpha}\) and the pole is at \(1/\alpha T_{D} = \omega_{\max}/\sqrt{\alpha}\).

  6. Draw the compensated frequency response and check the PM.

  7. Iterate on the design. Adjust compensator parameters (poles, zeros, and gain) until all specifications are met. Add an additional lead compensator (that is, a double-lead compensation) if necessary.

While these guidelines will not apply to all the systems you will encounter in practice, they do suggest a systematic trial-and-error process to search for a satisfactory compensator that will usually be successful.
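Steps 4 and 5 of the procedure reduce to two formulas: \(\alpha\) from Eq. (6.40), and the zero/pole placement centered on the chosen crossover. A small illustrative helper (the function name is ours, not from the text), run here on Example 6.15's targets of roughly \(40^{\circ}\) of lead near the final \(5\ rad/sec\) crossover:

```python
import numpy as np

def lead_from_phase(phi_max_deg, w_cross):
    """Steps 4-5: alpha from Eq. (6.40), then the zero/pole centered
    (geometrically) on the chosen crossover frequency."""
    phi = np.radians(phi_max_deg)
    alpha = (1 - np.sin(phi)) / (1 + np.sin(phi))   # Eq. (6.40)
    zero = w_cross * np.sqrt(alpha)                  # 1/T_D
    pole = w_cross / np.sqrt(alpha)                  # 1/(alpha T_D)
    return alpha, zero, pole

alpha, z, p = lead_from_phase(40.0, 5.0)
print(alpha, z, p)   # alpha ~0.217, zero ~2.3 rad/sec, pole ~10.7 rad/sec
```

The formula-based placement lands close to the zero at 2 and pole at 10 that Example 6.15 settled on by trial and error.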

221. EXAMPLE 6.16

Lead Compensator for a Temperature Control System

The third-order system

\[KG(s) = \frac{K}{(s/0.5 + 1)(s + 1)(s/2 + 1)} \]

is representative of a typical temperature control system. Design a lead compensator such that \(K_{p} = 9\) and the PM is at least \(25^{\circ}\).

Solution. Let us follow the design procedure:

  1. Given the specification for \(K_{p}\), we solve for \(K\) :

\[K_{p} = \lim_{s \rightarrow 0}\mspace{2mu} KG(s) = K = 9 \]

  2. The Bode plot of the uncompensated system, \(KG(s)\), with \(K = 9\) can be created by the Matlab statements below, and is shown in Fig. 6.57 along with the two compensated cases.

s = tf('s');

sysG = 9/((s/0.5 + 1)*(s + 1)*(s/2 + 1));

w = logspace(-1,1);

[mag, phase] = bode(sysG, w);

loglog(w, squeeze(mag)), grid;

semilogx(w, squeeze(phase)), grid;

It is difficult to read the PM and crossover frequencies accurately from the Bode plots; therefore, the Matlab command

[GM, PM, Wcg, Wcp] = margin(mag, phase, w);

can be invoked. The quantity PM is the phase margin and Wcp is the frequency at which the gain crosses magnitude 1. (GM and Wcg are the GM and the frequency at which the phase crosses \(- 180^{\circ}\), respectively.) For this example, the output is

\[GM = 1.25,\ PM = 7.14,\ Wcg = 1.87,\ Wcp = 1.68, \]

Figure 6.57

Bode plot for the lead-compensation design in Example 6.16

which says that the PM of the uncompensated system is \(7^{\circ}\) and that this occurs at a crossover frequency of \(1.7rad/sec\).

  3. Allowing for \(10^{\circ}\) of extra margin, we want the lead compensator to contribute \(25^{\circ} + 10^{\circ} - 7^{\circ} = 28^{\circ}\) at the crossover frequency. The extra margin is typically required because the lead will increase the crossover frequency from the open-loop case, at which point more phase increase will be required.

  4. From Fig. 6.53, we see that \(\alpha = 1/3\) will produce approximately \(30^{\circ}\) phase increase midway between the zero and pole.

  5. As a first cut, let's place the zero at \(1\ rad/sec\) \(\left( T_{D} = 1 \right)\) and the pole at \(3\ rad/sec\) \(\left( \alpha T_{D} = 1/3 \right)\), thus bracketing the open-loop crossover frequency and preserving the factor of 3 between pole and zero, as indicated by \(\alpha = 1/3\). The lead compensator is

\[D_{c1}(s) = \frac{s + 1}{s/3 + 1} = \frac{1}{0.333}\left( \frac{s + 1}{s + 3} \right)\text{.}\text{~} \]

  6. The Bode plot of the system with \(D_{c1}(s)\) (see Fig. 6.57, middle curve) has a PM of \(16^{\circ}\). We did not achieve the desired PM of \(30^{\circ}\), because the lead shifted the crossover frequency from \(1.7\ rad/sec\) to \(2.3\ rad/sec\), thus increasing the required phase increase from the lead. The step response of the system with \(D_{c1}(s)\) (see Fig. 6.58) shows a very oscillatory response, as we might expect from the low PM of \(16^{\circ}\).

  7. We repeat the design with extra phase increase and move the zero location slightly to the right so the crossover frequency won't be shifted so much. We choose \(\alpha = 1/10\) with the zero at \(s = - 1.5\), so

\[D_{c2}(s) = \frac{s/1.5 + 1}{s/15 + 1} = \frac{1}{0.1}\left( \frac{s + 1.5}{s + 15} \right)\text{.}\text{~} \]

This compensation produces a \(PM = 38^{\circ}\), and the crossover frequency lowers slightly to \(2.2rad/sec\). Figure 6.57 (upper curve) shows the frequency response of the revised design. Figure 6.58 shows a substantial reduction in the oscillations, which you should expect from the higher PM value.
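The margins quoted throughout this example can be reproduced without Matlab. A rough stand-in for margin (NumPy only; the grid-scan is our own sketch and assumes the magnitude crosses 1 only once, as it does for these loops):

```python
import numpy as np

G = lambda s: 9 / ((s/0.5 + 1) * (s + 1) * (s/2 + 1))   # KG(s) with K = 9
Dc1 = lambda s: (s + 1) / (s/3 + 1)                      # first-cut lead
Dc2 = lambda s: (s/1.5 + 1) / (s/15 + 1)                 # revised lead

def margins(L):
    """Grid-scan stand-in for Matlab's margin(); assumes |L| crosses 1 once."""
    w = np.logspace(-1, 1, 100001)
    Ljw = L(1j * w)
    mag = np.abs(Ljw)
    ph = np.degrees(np.angle(Ljw))
    i = np.argmin(np.abs(mag - 1))     # gain crossover (Wcp)
    j = np.argmin(np.abs(ph + 180))    # phase crossover (Wcg)
    return 1 / mag[j], 180 + ph[i], w[j], w[i]

GM, PM, Wcg, Wcp = margins(G)
pm1 = margins(lambda s: Dc1(s) * G(s))[1]
pm2 = margins(lambda s: Dc2(s) * G(s))[1]
print(GM, PM, Wcg, Wcp)   # ~1.25, ~7.1, ~1.87, ~1.68, matching margin()
print(pm1, pm2)           # ~16 deg and ~38 deg for the two lead designs
```

The uncompensated margins match the margin output quoted in Step 2, and the two lead designs reproduce the \(16^{\circ}\) and \(38^{\circ}\) figures above.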

Figure 6.58

Step response for lead-compensation design in Example 6.16

222. EXAMPLE 6.17

Lead-Compensator Design for a Type 1 Servomechanism System

Figure 6.59

Bode plot for the lead-compensation design in Example 6.17

Consider the third-order system

\[KG(s) = K\frac{10}{s(s/2.5 + 1)(s/6 + 1)} \]

This type of system would result for a DC motor with a lag in the shaft position sensor. Design a lead compensator so that the \(PM = 45^{\circ}\) and \(K_{v} = 10\).

Solution. Again, we follow the design procedure given earlier:

  1. As given, \(KG(s)\) will yield \(K_{v} = 10\) if \(K = 1\). Therefore, the \(K_{v}\) requirement is met by \(K = 1\) and the low-frequency gain of the compensation should be 1 .

  2. The Bode plot of the system is shown in Fig. 6.59. The PM of the uncompensated system (lower curve) is approximately \(- 4^{\circ}\), and the crossover frequency is at \(\omega_{c} \cong 4rad/sec\).

  3. Allowing for \(5^{\circ}\) of extra \(PM\), we need \(PM = 45^{\circ} + 5^{\circ} - \left( - 4^{\circ} \right) = 54^{\circ}\) to be contributed by the lead compensator.

  4. From Fig. 6.53 we find \(\alpha\) must be 0.1 to achieve a maximum phase lead of \(54^{\circ}\).

  5. The new gain crossover frequency will be higher than the open-loop value of \(\omega_{c} = 4\ rad/sec\), so let's select the pole and zero of the lead compensation to be at 20 and \(2\ rad/sec\), respectively. So the candidate compensator is

\[D_{c1}(s) = \frac{s/2 + 1}{s/20 + 1} = \frac{1}{0.1}\frac{s + 2}{s + 20} \]


Both Examples 6.16 and 6.17 are third order. Example 6.17 was more difficult to design compensation for, because the error requirement, \(K_{v}\), forced the crossover frequency, \(\omega_{c}\), to be so high that a single lead could not provide enough PM.

222.0.1. PI Compensation

In many problems, it is important to keep the bandwidth low and also to reduce the steady-state error. For this purpose, a proportional-integral (PI) or lag compensator is useful. From Eq. (4.73), we see that PI control has the transfer function

\[D_{c}(s) = \frac{K}{s}\left( s + \frac{1}{T_{I}} \right) \]

which results in the frequency-response characteristics shown in Fig. 6.60. The desirable aspect of this compensation is the infinite gain at zero frequency, which reduces the steady-state errors. This is accomplished, however, at the cost of a phase decrease at frequencies lower than the break point at \(\omega = 1/T_{I}\). Therefore, \(1/T_{I}\) is usually located at a frequency substantially less than the crossover frequency so the system's PM is not affected significantly.
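The Eq. (6.46) characteristics can be sampled directly. A small sketch (the values \(K = 1\) and \(T_{I} = 10\) are arbitrary illustrative choices, not from the text):

```python
import numpy as np

# PI compensation, Eq. (6.46): D_c(s) = (K/s)(s + 1/T_I),
# evaluated at s = jw; K = 1 and T_I = 10 are arbitrary.
K, T_I = 1.0, 10.0
Dc = lambda w: (K / (1j * w)) * (1j * w + 1 / T_I)

gain_low = abs(Dc(0.001))                      # grows without bound as w -> 0
ph_break = np.degrees(np.angle(Dc(1 / T_I)))   # -45 deg at the break point
ph_high = np.degrees(np.angle(Dc(10.0)))       # ~0 deg well above the break
print(gain_low, ph_break, ph_high)
```

The numbers show the two features just described: the gain climbs without bound toward zero frequency (integral action), while the phase lag is confined to frequencies at and below the break point \(1/T_{I}\).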

222.0.2. Lag Compensation

As we discussed in Section 5.4, lag compensation approximates PI control. Its transfer function was given by Eq. (5.72) for root-locus design, but for frequency-response design, it is more convenient to write the transfer function of the lag compensation alone in the Bode form

\[D_{c}(s) = \alpha\frac{T_{I}s + 1}{\alpha T_{I}s + 1},\ \alpha > 1 \]

where \(\alpha\) is the ratio between the zero/pole break-point frequencies. The complete controller will almost always include an overall gain \(K\) and perhaps other dynamics in addition to the lag compensation. Although Eq. (6.47) looks very similar to the lead compensation in Eq. (6.38), the fact is that \(\alpha > 1\) causes the pole to have a lower break-point frequency than the zero. This relationship produces the low-frequency increase in amplitude and phase decrease (lag) apparent in the frequency-response plot in Fig. 6.61 and gives the compensation the essential feature of integral control: an increased low-frequency gain. The typical objective of lag-compensation design is to provide additional gain of \(\alpha\) in the low-frequency range and to leave the system sufficient PM. Of course, phase lag is not a useful effect, and the pole and zero of the lag compensator are selected to be at much lower frequencies than the uncompensated system crossover frequency in order to keep the effect on the PM to a minimum. Thus, the lag compensator increases the open-loop DC gain, thereby improving the steady-state response characteristics, without changing the transient-response characteristics significantly. If the pole and zero are relatively close together and near the origin (that is, if the value of \(T_{I}\) is large), we can increase the low-frequency gain (and thus \(K_{p}\), \(K_{v}\), or \(K_{a}\)) by a factor \(\alpha\) without moving the closed-loop poles appreciably. Hence, the transient response remains approximately the same while the steady-state response is improved.

Figure 6.60

Frequency response of PI control

We now summarize a step-by-step procedure for lag-compensator design.

Figure 6.61

Frequency response of lag compensation with \(\alpha = 10\)

Design Procedure for Lag Compensation

  1. Determine the open-loop gain \(K\) that will meet the PM requirement without compensation.

  2. Draw the Bode plot of the uncompensated system with crossover frequency from Step 1, and evaluate the low-frequency gain.

  3. Determine \(\alpha\) to meet the low-frequency gain error requirement.

  4. Choose the corner frequency \(\omega = 1/T_{I}\) (the zero of the lag compensator) to be one octave to one decade below the new crossover frequency \(\omega_{c}\).

  5. The other corner frequency (the pole location of the lag compensator) is then \(\omega = 1/\alpha T_{I}\).

  6. Iterate on the design. Adjust compensator parameters (poles, zeros, and gain) to meet all the specifications.
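Steps 3-5 amount to two placements. A small illustrative helper (the function name and the factor-of-5 default separation are our own choices within the "one octave to one decade" guideline):

```python
def lag_from_gain(alpha, w_cross, sep=5.0):
    """Steps 3-5: place the zero a factor `sep` below the crossover
    frequency, and the pole a further factor alpha below the zero."""
    zero = w_cross / sep      # 1/T_I
    pole = zero / alpha       # 1/(alpha T_I)
    return zero, pole

# alpha = 3 and a ~1 rad/sec crossover (the numbers of the example below):
z, p = lag_from_gain(3.0, 1.0)
print(z, p)   # 0.2 rad/sec and 1/15 rad/sec
```

With \(\alpha = 3\) and a \(1\ rad/sec\) crossover, the helper reproduces the corner frequencies \(1/T_{I} = 0.2\) and \(1/\alpha T_{I} = 1/15\ rad/sec\).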

EXAMPLE 6.18

Lag-Compensator Design for the Temperature Control System

Again consider the third-order system of Example 6.16:

\[KG(s) = \frac{K}{\left( \frac{1}{0.5}s + 1 \right)(s + 1)\left( \frac{1}{2}s + 1 \right)} \]

Design a lag compensator so the \(PM\) is at least \(40^{\circ}\) and \(K_{p} = 9\).

Solution. We follow the design procedure previously enumerated.

  1. From the open-loop plot of \(KG(s)\), shown for \(K = 9\) in Fig. 6.57, it can be seen a \(PM > 40^{\circ}\) will be achieved if the crossover frequency \(\omega_{c} \lesssim 1rad/sec\). This will be the case if \(K = 3\). So we pick \(K = 3\) in order to meet the PM specification.

  2. The Bode plot of \(KG(s)\) in Fig. 6.62 with \(K = 3\) shows the PM is \(\approx 50^{\circ}\) and the low-frequency gain is now 3. Exact calculation of the PM using Matlab's margin shows that \(PM = 53^{\circ}\).

  3. The low-frequency gain should be raised by a factor of 3, which means the lag compensation needs to have \(\alpha = 3\).

  4. We choose the corner frequency for the zero to be approximately a factor of 5 slower than the expected crossover frequency - that is, at \(0.2rad/sec\). So, \(1/T_{I} = 0.2\), or \(T_{I} = 5\).

  5. We then have the value for the other corner frequency: \(\omega = 1/\alpha T_{I} =\) \(\frac{1}{(3)(5)} = 1/15rad/sec\). The compensator is thus

\[D_{c}(s) = 3\frac{5s + 1}{15s + 1} \]

The compensated frequency response is also shown in Fig. 6.62. The low-frequency gain of \(KD_{c}(0)G(0) = 3K = 9\), thus \(K_{p} = 9\) and the PM lowers slightly to \(44^{\circ}\), which satisfies the specifications. The step response of the system, shown in Fig. 6.63, illustrates the reasonable damping that we would expect from \(PM = 44^{\circ}\).

  6. No iteration is required in this case.
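As a numerical check of this design (our own sketch, NumPy only; the loop gain is \(KD_{c}(s)G(s)\) with \(K = 3\) from Step 1):

```python
import numpy as np

def L(w):
    """Loop gain K D_c(s) G(s), with K = 3 and D_c(s) = 3(5s+1)/(15s+1)."""
    s = 1j * w
    G = 1 / ((s/0.5 + 1) * (s + 1) * (s/2 + 1))
    Dc = 3 * (5*s + 1) / (15*s + 1)
    return 3 * Dc * G

# Bracket the crossover above the lag's pole/zero, where |L| falls through 1.
lo, hi = 0.3, 10.0
for _ in range(80):
    mid = np.sqrt(lo * hi)
    lo, hi = (mid, hi) if abs(L(mid)) > 1 else (lo, mid)
wc = np.sqrt(lo * hi)
pm = 180 + np.degrees(np.angle(L(wc)))
print(wc, pm)   # ~0.9 rad/sec and ~44 deg
```

The computed crossover near \(0.9\ rad/sec\) and PM near \(44^{\circ}\) agree with the values read from Fig. 6.62.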

Figure 6.62

Frequency response of lag-compensation design in Example 6.18


Figure 6.63

Step response of lag-compensation design in Example 6.18

Note that Examples 6.16 and 6.18 are both for the same plant, and both had the same steady-state error requirement. One was compensated with lead, and one was compensated with lag. The result is that the bandwidth of the lead-compensated design is higher than that of the lag-compensated design by approximately a factor of 3. This result can be seen by comparing the crossover frequencies of the two designs.

A beneficial effect of lag compensation, an increase in the low-frequency gain for better error characteristics, was just demonstrated in Example 6.18. However, in essence, lag compensation reduces the value of \(\omega_{c}/L(0)\) \(( = \omega_{c}/K_{v}\) for a Type 1 system\()\). That means that, if the crossover frequency is kept the same, the low-frequency gain will increase. Likewise, if the low-frequency gain is kept the same, the crossover frequency will decrease. Therefore, lag compensation could also be interpreted to reduce the crossover frequency and thus obtain a better PM. The procedure for design in this case is partially modified. First, pick the low-frequency gain to meet error requirements, then locate the lag compensation pole and zero in order to provide a crossover frequency with adequate PM. The next example illustrates this design procedure. The end result of the design will be the same no matter which procedure is followed.

EXAMPLE 6.19

Lag Compensation of the DC Motor

Repeat the design of the DC motor control in Example 6.15, this time using lag compensation. Fix the low-frequency gain in order to meet the error requirement of \(K_{v} = 10\); then use the lag compensation to meet the PM requirement of \(45^{\circ}\). Compare the open-loop Bode magnitude plots and the time responses for Examples 6.15 and 6.19.

Solution. The frequency response of the system \(KG(s)\), with the required gain of \(K = 10\), is shown in Fig. 6.64. The uncompensated system has a crossover frequency at approximately \(3rad/sec\) where the \(PM = 20^{\circ}\). The designer's task is to select the lag compensation break points so the crossover frequency is lowered and more favorable PM results. To prevent detrimental effects from the compensation phase lag, the pole and zero position values of the compensation need to be substantially lower than the new crossover frequency. One possible choice is shown in Fig. 6.64: The lag zero is at \(0.1rad/sec\), and the lag pole is at

Figure 6.64

Frequency response of lag-compensation design in Example 6.19

\(0.01rad/sec\). This selection of parameters produces a \(PM\) of \(50^{\circ}\), thus satisfying the specifications. Here the stabilization is achieved by keeping the crossover frequency to a region where \(G(s)\) has favorable phase characteristics. However, note \(\omega_{c} \cong 0.8rad/sec\) for this case compared to the \(\omega_{c} \cong 5rad/sec\) for the Example 6.15 where lead compensation was used. The criterion for selecting the pole and zero locations \(1/T_{I}\) is to make them low enough to minimize the effects of the phase lag from the compensation at the crossover frequency. Generally, however, the pole and zero are located no lower than necessary, because the additional system root (compare with the root locus of a similar system design in Fig. 5.28) introduced by the lag will be in the same frequency range as the compensation zero and will have some effect on the output response, especially the response to disturbance inputs.

The response of the system to a step reference input is shown in Fig. 6.65. It shows no steady-state error to a step input, because this is a Type 1 system. However, the introduction of the slow root from the lag compensation has caused the response to require about \(25\ sec\) to settle down to the zero steady-state value, and the rise time is \(t_{r} = 2\ sec\) compared to \(t_{r} = 0.33\ sec\) for Example 6.15. This difference in rise time is to be expected based on the difference in crossover frequencies. The overshoot \(M_{p}\) is somewhat larger than you would expect from the guidelines, based on a second-order system shown in Fig. 6.37 for a \(PM = 50^{\circ}\); however, the performance is adequate.

Figure 6.65

Step response of lag-compensation design in Example 6.19

Important caveat on design strategy

As we saw previously for a similar situation, Examples 6.15 and 6.19 meet an identical set of specifications for the same plant in very different ways. In the first case, the specifications are met with a lead compensation, and a crossover frequency \(\omega_{c} = 5\ rad/sec\) \(\left( \omega_{BW} \cong 6\ rad/sec \right)\) results. In the second case, the same specifications are met with a lag compensation, and \(\omega_{c} \cong 0.8\ rad/sec\) \(\left( \omega_{BW} \cong 0.9\ rad/sec \right)\) results. Clearly, had there been specifications for rise time or bandwidth, they would have influenced the choice of compensation (lead or lag). Likewise, if the slow settling to the steady-state value was a problem, it might have suggested the use of lead compensation instead of lag.

In more realistic systems, dynamic elements usually represent the actuator and sensor as well as the process itself, so it is typically impossible to raise the crossover frequency much beyond the value representing the speed of response of the components being used. Although linear analysis seems to suggest that almost any system can be compensated, in fact, if we attempt to drive a set of components much faster than their natural frequencies, the system will saturate, the linearity assumptions will no longer be valid, and the linear design will represent little more than wishful thinking. With this behavior in mind, we see that simply increasing the gain of a system and adding lead compensators to achieve an adequate PM may not always be possible. It may be preferable to satisfy error requirements by adding a lag network so that the closed-loop bandwidth is kept at a more reasonable frequency.

222.0.3. PID Compensation

For problems that need PM improvement at \(\omega_{c}\) and low-frequency gain improvement, it is effective to use both derivative and integral control. By combining Eqs. (6.37) and (6.46), we obtain PID control. A common way to write its transfer function is

\[D_{c}(s) = \frac{K}{s}\left\lbrack \left( T_{D}s + 1 \right)\left( s + \frac{1}{T_{I}} \right) \right\rbrack, \]

and its frequency-response characteristics are shown in Fig. 6.66. This form is slightly different from that given by Eq. (4.75); however, the effect of the difference is inconsequential. This compensation is roughly equivalent to combining lead and lag compensators in the same design, and so is sometimes referred to as a lead-lag compensator. Hence, it can provide simultaneous improvement in transient and steady-state responses.

Figure 6.66

Frequency response of PID compensation with \(\frac{T_{I}}{T_{D}} = 20\)
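A quick numerical look at Eq. (6.48) with the Fig. 6.66 ratio \(T_{I}/T_{D} = 20\) shows the lag-like, flat, and lead-like regions of the PID phase curve (the specific values \(T_{D} = 1\), \(T_{I} = 20\), \(K = 1\) are our illustrative choices):

```python
import numpy as np

# PID form of Eq. (6.48): D_c(s) = (K/s)(T_D s + 1)(s + 1/T_I),
# with T_I/T_D = 20 as in Fig. 6.66; T_D = 1, T_I = 20, K = 1 are ours.
T_D, T_I, K = 1.0, 20.0, 1.0
Dc = lambda w: (K / (1j * w)) * (T_D * 1j * w + 1) * (1j * w + 1 / T_I)

w_mid = np.sqrt((1 / T_I) * (1 / T_D))      # midway between the break points
ph_low = np.degrees(np.angle(Dc(0.005)))    # integral action dominates
ph_mid = np.degrees(np.angle(Dc(w_mid)))    # lag and lead contributions cancel
ph_high = np.degrees(np.angle(Dc(10.0)))    # derivative action dominates
print(ph_low, ph_mid, ph_high)   # ~ -84 deg, 0 deg, ~ +84 deg
```

The phase swings from near \(-90^{\circ}\) below \(1/T_{I}\) (PI-like), through \(0^{\circ}\) between the breaks, to near \(+90^{\circ}\) above \(1/T_{D}\) (PD-like), which is the lead-lag behavior just described.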

223. EXAMPLE 6.20

PID Compensation Design for Spacecraft Attitude Control

A simplified design for spacecraft attitude control was presented in Section 6.5; however, here we have a more realistic situation that includes a sensor lag and a disturbing torque. Figure 6.67 defines the system.

  1. Design a PID controller to have zero steady-state error to a constant-disturbance torque, a PM of \(65^{\circ}\), and as high a bandwidth as is reasonably possible.

Figure 6.67

Block diagram of spacecraft control using PID design in Example 6.20

  2. Plot the step response versus a command input and the step response to a constant disturbance torque.

  3. Plot the closed-loop frequency response, \(\frac{\Theta}{\Theta_{c}}\), and the sensitivity function, \(\mathcal{S}\).

  4. Determine \(\omega_{BW}\) and \(\omega_{DRB}\).

  5. For a torque disturbance from solar pressure that acts as a sinusoid at the orbital rate \((\omega = 0.001\ rad/sec\), or a period of \(\approx 100\) minutes\()\), comment on the usefulness of this controller in attenuating solar pressure effects.

Solution. First, let us take care of the steady-state error. For the spacecraft to be at a steady final value, the total input torque, \(T_{d} + T_{c}\), must equal zero. Therefore, if \(T_{d} \neq 0\), then \(T_{c} = - T_{d}\). The only way this can be true with no error \(\left( e_{ss} = 0 \right)\) is for \(D_{c}(s)\) to contain an integral term. Hence, including integral control in the compensation will meet the steady-state requirement. This could also be verified mathematically by use of the Final Value Theorem (see Problem 6.47).

The frequency response of the spacecraft and sensor, \(GH\), where

\[G(s) = \frac{0.9}{s^{2}}\ \text{~}\text{and}\text{~}H(s) = \left( \frac{2}{s + 2} \right) \]

is shown in Fig. 6.68. The slopes of \(-2\) (that is, \(-40\ db\) per decade) and \(-3\) \((-60\ db\) per decade\()\) show that the system would be unstable for any value of \(K\) if no derivative feedback were used. This is clear because of Bode's gain-phase relationship, which shows that the phase would be \(- 180^{\circ}\) for the \(-2\) slope and \(- 270^{\circ}\) for the \(-3\) slope, which would correspond to a PM of \(0^{\circ}\) or \(- 90^{\circ}\), respectively. Therefore, derivative control is required to bring the slope to \(-1\) at the crossover frequency; that was shown in Section 6.5 to be a requirement for stability. The problem now is to pick values for the three parameters in Eq. (6.48), namely \(K\), \(T_{D}\), and \(T_{I}\), that will satisfy the specifications.

The easiest approach is to work first on the phase so \(PM = 65^{\circ}\) is achieved at a reasonably high frequency. This can be accomplished primarily by adjusting \(T_{D}\), noting that \(T_{I}\) has a minor effect if sufficiently larger than \(T_{D}\). Once the phase is adjusted, we establish the crossover frequency; then we can easily determine the gain \(K\).

Figure 6.68

Compensation for PID design in Example 6.20

We examine the phase of the PID controller in Fig. 6.66 to determine what would happen to the compensated spacecraft system, \(D_{c}(s)G(s)\), as \(T_{D}\) is varied. If \(1/T_{D} \geq 2\ rad/sec\), the phase lead from the PID control would simply cancel the sensor phase lag, and the composite phase would never exceed \(-180^{\circ}\), an unacceptable situation. If \(1/T_{D} \leq 0.01\), the composite phase would approach \(-90^{\circ}\) for some range of frequencies and would exceed \(-115^{\circ}\) for an even wider range of frequencies; the latter threshold would provide a PM of \(65^{\circ}\). In the compensated phase curve shown in Fig. 6.68, \(1/T_{D} = 0.1\), which is the largest value of \(1/T_{D}\) that could provide the required PM of \(65^{\circ}\); the phase would never cross the \(-115^{\circ}\) (\(65^{\circ}\) PM) line for any \(1/T_{D} > 0.1\). For \(1/T_{D} = 0.1\), the crossover frequency \(\omega_{c}\) that produces the \(65^{\circ}\) PM is \(0.5\ rad/sec\). For a value of \(1/T_{D} \leq 0.05\), the phase essentially follows the dotted curve in Fig. 6.68, which indicates that the maximum possible \(\omega_{c}\) is approximately \(1\ rad/sec\) and is provided by \(1/T_{D} = 0.05\). Therefore, \(0.05 < 1/T_{D} < 0.1\) is the only sensible range for \(1/T_{D}\); anything less than 0.05 would provide no significant increase in bandwidth, while anything more than 0.1 could not meet the PM specification. Although the final choice is somewhat arbitrary, we have chosen \(1/T_{D} = 0.1\) for our final design.
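The reasoning above can be cross-checked numerically. The following sketch (plain Python, not from the text; it uses the values chosen in this example) scans the phase of \(D_c(j\omega)G(j\omega)H(j\omega)\) with \(1/T_D = 0.1\) and \(1/T_I = 0.005\) and locates its least-negative value. The peak comes out near \(-115^{\circ}\) around \(\omega \approx 0.5\ rad/sec\), which is what makes a \(65^{\circ}\) PM attainable there.

```python
import cmath, math

# Sketch (values assumed from this example, not the book's code): phase of
# the compensated loop Dc(jw)G(jw)H(jw) with 1/TD = 0.1 and 1/TI = 0.005.
# The gain K does not affect the phase, so K = 1 is used here.
def loop(s):
    Dc = (10*s + 1)*(s + 0.005)/s    # PID zeros at 0.1 and 0.005 rad/sec
    G  = 0.9/s**2                    # spacecraft
    H  = 2/(s + 2)                   # sensor
    return Dc*G*H

best_w, best_ph = 0.0, -1e9
w = 0.05
while w < 2.0:                       # scan range chosen to bracket the peak
    ph = math.degrees(cmath.phase(loop(1j*w)))
    if ph > best_ph:
        best_w, best_ph = w, ph
    w *= 1.02

print(best_w, best_ph)               # peak phase near -115 deg at w ~ 0.5
```

The implied maximum PM is \(180^{\circ} + \text{best\_ph} \approx 64^{\circ}\), consistent with the \(65^{\circ}\) read from the plot.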

Our choice for \(1/T_{I}\) is a factor of 20 lower than \(1/T_{D}\); that is, \(1/T_{I} = 0.005\). A factor less than 20 would negatively impact the phase at crossover, thus lowering the PM. Furthermore, it is generally desirable to keep the compensated magnitude as large as possible at frequencies below \(\omega_{c}\) in order to have a faster transient response and smaller errors; maintaining \(1/T_{D}\) and \(1/T_{I}\) at the highest possible frequencies will bring this about. An alternate approach for this problem would have been to pick \(1/T_{D} = 0.05\) in order to have a larger phase increase. This would have allowed a higher value of \(1/T_{I}\) which would have provided for a faster response of the integral portion of the controller. Note for this system that the sensor break point at \(2rad/sec\) is limiting how high \(1/T_{D}\) can be selected. Problem 6.63 examines alternate designs for this system.

The only remaining task is to determine the proportional part of the PID controller, or \(K\). Unlike the system in Example 6.18, where we selected \(K\) in order to meet a steady-state error specification, here we select a value of \(K\) that will yield a crossover frequency at the point corresponding to the required PM of \(65^{\circ}\). The basic procedure for finding \(K\) (discussed in Section 6.6) consists of plotting the compensated system amplitude with \(K = 1\), finding the amplitude value at crossover, then setting \(1/K\) equal to that value. Figure 6.68 shows that when \(K = 1\), \(\left| D_{c}(s)G(s) \right| = 20\) at the desired crossover frequency \(\omega_{c} = 0.5rad/sec\). Therefore,

\[\frac{1}{K} = 20,\ \text{~}\text{so}\text{~}\ K = \frac{1}{20} = 0.05 \]
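This gain-setting step can be checked directly. The sketch below (assumed values from this example, not the book's code) evaluates \(|D_c(j\omega)G(j\omega)H(j\omega)|\) with \(K = 1\) at the desired crossover \(\omega = 0.5\ rad/sec\); the text reads about 20 off the plot, while a direct calculation gives a value near 18, consistent to graphical accuracy, so \(K = 1/20 = 0.05\) is a reasonable choice.

```python
# Sketch: loop magnitude at the desired crossover with K = 1, using the
# example's Dc, G, and H (values assumed from the text).
s = 0.5j
Dc = (10*s + 1)*(s + 0.005)/s    # PID with K = 1, 1/TD = 0.1, 1/TI = 0.005
G  = 0.9/s**2
H  = 2/(s + 2)
mag = abs(Dc*G*H)
print(mag)                       # roughly 18; the plot reads ~20
```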

The compensation equation that satisfies all of the specifications is now complete:

\[D_{c}(s) = \frac{0.05}{s}\lbrack(10s + 1)(s + 0.005)\rbrack \]

It is interesting to note this system would become unstable if the gain were lowered so that \(\omega_{c} \leq 0.02rad/sec\), the region in Fig. 6.68 where the phase of the compensated system is less than \(- 180^{\circ}\). As mentioned in Section 6.4, this situation is referred to as a conditionally stable system. A root locus with respect to \(K\) for this and any conditionally stable system would show the portion of the locus corresponding to very low gains in the RHP.
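The conditional-stability remark can be verified with a short calculation (a sketch with assumed values from this example, not the book's code): with the completed design, the loop phase is below \(-180^{\circ}\) at low frequencies such as \(\omega = 0.02\ rad/sec\) while the gain there is still well above 1, so reducing the gain until crossover lands in that region would destabilize the system.

```python
import cmath, math

# Sketch: gain and phase of Dc(jw)G(jw)H(jw) for the final design
# Dc(s) = 0.05(10s + 1)(s + 0.005)/s.
def loop_mag_phase(w):
    s = 1j*w
    Dc = 0.05*(10*s + 1)*(s + 0.005)/s
    H  = 2/(s + 2)
    mag = abs(Dc) * abs(0.9/s**2) * abs(H)
    # G = 0.9/s**2 contributes exactly -180 deg; summing the component
    # phases avoids wrapping at the +/-180 deg boundary.
    ph = math.degrees(cmath.phase(Dc)) - 180.0 + math.degrees(cmath.phase(H))
    return mag, ph

mag, ph = loop_mag_phase(0.02)
print(mag, ph)   # gain well above 1, phase below -180 deg
```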

The response of the system for a unit step \(\theta_{com}\) is found from

\[\mathcal{T}(s) = \frac{\Theta}{\Theta_{c}} = \frac{D_{c}G}{1 + D_{c}GH} \]

and is shown in Fig. 6.69(a). It exhibits well damped behavior, as should be expected with a \(65^{\circ}PM\). The response of the system for a step disturbance torque of \(T_{d}\) is found from

Figure 6.69

Transient response for PID example: (a) unit step command response; (b) step torque disturbance response

\[\frac{\Theta}{T_{d}} = \frac{G}{1 + D_{c}GH} \]

Very low values of disturbance torque exist in space; for example, a constant \(T_{d} = 0.0175\text{ }N\)-m yields the response shown in Fig. 6.69(b). Note the integral control term does eventually drive the error to zero; however, it is slow due to the presence of a closed-loop pole and zero both in the vicinity of \(s = - 0.005\). These resulted from the integral break point \(1/T_{I}\) being placed at a low enough frequency not to impact the PM unduly. If the slow disturbance response is not acceptable, increasing \(1/T_{I}\) will speed up the response; however, it will also decrease the PM and damping of the system. Alternatively, it would also be possible to select a lower value of \(1/T_{D}\), thus gaining some extra PM and allowing a higher value of \(1/T_{I}\) without sacrificing the desired PM. Problem 6.63 provides the reader with the opportunity to examine other design possibilities for this system.

The sensitivity function, \(\mathcal{S}\), represents a general indication of the response of a system to errors and is often plotted along with the closed-loop frequency response. The frequency responses of \(\mathcal{T}(s)\) and \(\mathcal{S}(s)\) [Eqs. (4.12) and (4.13)] for the system are shown in Fig. 6.70, where

\[\mathcal{S}(s) = \frac{1}{1 + D_{c}GH} \]

When these two curves cross the magnitude \(0.707\) \((-3\ db)\) line, the values of \(\omega_{BW}\) and \(\omega_{DRB}\) are determined as shown in the figure. The result is that \(\omega_{BW} = 0.7\ rad/sec\) and \(\omega_{DRB} = 0.3\ rad/sec\). Most disturbances on satellites have a periodicity at the orbital rate of \(0.001\ rad/sec\). We see from the figure that the sensitivity function, \(\mathcal{S}\), is approximately \(10^{- 5}\) at that frequency, which implies a large attenuation of errors. The error attenuation decreases as the disturbance frequency increases, and there is almost no error attenuation at the system bandwidth of \(\approx 0.7\ rad/sec\), as you would expect. Another guide to the errors on orbit is

Figure 6.70

Frequency responses of the closed-loop transfer function, \(\mathcal{T}(j\omega)\), and the sensitivity function, \(\mathcal{S}(j\omega)\)

Summary of Compensation Characteristics

apparent from Fig. 6.69(b). Here we see the error due to a step disturbance essentially dies out to zero in approximately \(1000\ sec\) due to the integral control feature. This compares with the orbital period of \(100\ \min\), or \(6000\ sec\). Therefore, we see orbital disturbances will be heavily attenuated by this controller.
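The bandwidth and attenuation claims can be checked numerically. The sketch below (assumed values from this example, not the book's code) evaluates \(\mathcal{T} = D_cG/(1 + D_cGH)\) and \(\mathcal{S} = 1/(1 + D_cGH)\) for the final spacecraft design, confirming \(\mathcal{S} \approx 10^{-5}\) at the orbital rate and a \(-3\ db\) bandwidth near \(0.7\ rad/sec\).

```python
# Sketch: closed-loop magnitude and sensitivity for the final design.
def closed_loop(w):
    s = 1j*w
    Dc = 0.05*(10*s + 1)*(s + 0.005)/s
    G  = 0.9/s**2
    H  = 2/(s + 2)
    L  = Dc*G*H
    return abs(Dc*G/(1 + L)), abs(1/(1 + L))

_, S_orbit = closed_loop(0.001)    # sensitivity at the orbital rate

w = 0.1                            # march up to the -3 db point of |T|
while closed_loop(w)[0] > 0.707 and w < 10.0:
    w *= 1.01
w_bw = w
print(S_orbit, w_bw)   # S ~ 1e-5 at orbit rate; bandwidth ~ 0.7 rad/sec
```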

Note from the design process that the bandwidth was limited by the response characteristics of the sensor, which had a bandwidth of \(2rad/sec\). Therefore, the only way to improve the error characteristics would be to increase the bandwidth of the sensor. On the other hand, increasing the bandwidth of the sensor may introduce jitter from the high-frequency sensor noise. Thus we see one of the classic trade-off dilemmas: the designer has to make a judgment as to which feature (low errors due to disturbances or low errors due to sensor noise) is the more important to the overall system performance.

  1. \(PD\) control adds phase lead at all frequencies above the break point. If there is no change in gain on the low-frequency asymptote, PD compensation will increase the crossover frequency and the speed of response. The increase in magnitude of the frequency response at the higher frequencies will increase the system's sensitivity to noise.

  2. Lead compensation adds phase lead at a frequency band between the two break points, which are usually selected to bracket the crossover frequency. If there is no change in gain on the low-frequency asymptote, lead compensation will increase both the crossover frequency and the speed of response over the uncompensated system.

  3. PI control increases the frequency-response magnitude at frequencies below the break point, thereby decreasing steady-state errors. It also contributes phase lag below the break point, which must be kept at a low enough frequency to avoid degrading the stability excessively.

  4. Lag compensation increases the frequency-response magnitude at frequencies below the two break points, thereby decreasing steady-state errors. Alternatively, with suitable adjustments in \(K\), lag compensation can be used to decrease the frequency-response magnitude at frequencies above the two break points, so that \(\omega_{c}\) yields an acceptable PM. Lag compensation also contributes phase lag between the two break points, which must be kept at frequencies low enough to keep the phase decrease from degrading the PM excessively. This compensation will typically provide a slower response than lead compensation.

6.7.6 Design Considerations

We have seen in the preceding designs that characteristics of the open-loop Bode plot of the loop gain, \(L(s)\ \left( = KD_{c}G \right)\), determine performance with respect to steady-state errors, low-frequency errors, and dynamic response, including stability margins. Other properties of feedback, developed in Chapter 4, include reducing the effects of sensor noise and parameter changes on the performance of the system.

The consideration of steady-state errors or low-frequency errors due to command inputs and disturbances has been an important design component in the different design methods presented. Design for acceptable errors due to command inputs and disturbances can be thought of as placing a lower bound on the low-frequency gain of the open-loop system. Another aspect of the sensitivity issue concerns the high-frequency portion of the system. So far, Chapter 4 and Sections 5.4 and 6.7 have briefly discussed the idea that, to alleviate the effects of sensor noise, the gain of the system at high frequencies must be kept low. In fact, in the development of lead compensation, we added a pole to pure derivative control specifically to reduce the effects of sensor noise at the higher frequencies. It is not unusual for designers to place an extra pole in the compensation, that is, to use the relation

\[D_{c}(s) = \frac{T_{D}s + 1}{\left( \alpha T_{D}s + 1 \right)^{2}} \]

in order to introduce even more attenuation for noise reduction.
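The effect of the extra pole is easy to see numerically. The sketch below (illustrative values \(T_D = 1\) and \(\alpha = 0.1\), assumed here and not taken from the text) compares the usual single-pole lead term with the double-pole form quoted above: at high frequency the single-pole lead levels off at \(1/\alpha\), while the double-pole version keeps rolling off, providing the extra noise attenuation.

```python
# Sketch: high-frequency attenuation of a single-pole lead term versus the
# double-pole form shown above. TD and alpha are illustrative assumptions.
TD, alpha = 1.0, 0.1

def lead1(s):   # single pole: |lead1| -> 1/alpha = 10 at high frequency
    return (TD*s + 1) / (alpha*TD*s + 1)

def lead2(s):   # double pole: magnitude keeps falling roughly as 1/omega
    return (TD*s + 1) / (alpha*TD*s + 1)**2

w = 1000.0
m1, m2 = abs(lead1(1j*w)), abs(lead2(1j*w))
print(m1, m2)   # m1 near 10, m2 near 0.1
```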

A second consideration affecting high-frequency gains is that many systems have high-frequency dynamic phenomena, such as mechanical resonances, that could have an impact on the stability of a system.

Gain stabilization

Phase stabilization

Figure 6.71

Effect of high-frequency plant uncertainty
In very-high-performance designs, these high-frequency dynamics are included in the plant model, and a compensator is designed with specific knowledge of those dynamics. A standard approach to designing for unknown high-frequency dynamics is to keep the high-frequency gain low, just as we did for sensor-noise reduction. The reason for this can be seen from the gain-frequency relationship of a typical system, as shown in Fig. 6.71. The only way instability can result from high-frequency dynamics is if an unknown high-frequency resonance causes the magnitude to rise above 1. Conversely, if all unknown high-frequency phenomena are guaranteed to remain below a magnitude of 1, stability can be guaranteed. The likelihood of an unknown resonance in the plant \(G\) rising above 1 can be reduced if the nominal high-frequency loop gain \(L\) is lowered by the addition of extra poles in \(D_{c}(s)\). When the stability of a system with resonances is assured by tailoring the high-frequency magnitude never to exceed 1, we refer to this process as amplitude or gain stabilization. Of course, if the resonance characteristics are known exactly, a specially tailored compensation, such as one with a notch at the resonant frequency, can be used to change the phase at a specific frequency to avoid encirclements of \(-1\), thus stabilizing the system even though the amplitude does exceed magnitude 1. This method of stabilization is referred to as phase stabilization. A drawback to phase stabilization is that the resonance information is often not available with adequate precision, or varies with time; therefore, the method is more susceptible to errors in the plant model used in the design. Thus, we see sensitivity to plant uncertainty and sensor noise are both reduced by a sufficiently low loop gain at high frequency.

These two aspects of sensitivity, high- and low-frequency behavior, can be depicted graphically, as shown in Fig. 6.72. There is a minimum low-frequency gain allowable for acceptable steady-state and low-frequency error performance, and a maximum high-frequency gain allowable for acceptable noise performance and for a low probability of instabilities caused by plant-modeling errors. We define the low-frequency lower bound on the frequency response as \(W_{1}\) and the upper bound as \(W_{2}^{- 1}\), as shown in the figure. Between these two bounds the control engineer must achieve a gain crossover near the required bandwidth; as we have seen, the crossover must occur at a slope of \(-1\) or slightly steeper for good PM and hence damping.

For example, if a control system was required to follow a sinusoidal reference input with frequencies from 0 to \(\omega_{1}\) with errors no greater than \(1\%\), the function \(W_{1}\) would be 100 from \(\omega = 0\) to \(\omega_{1}\). Similar ideas enter into defining possible values for the \(W_{2}^{- 1}\) function which would constrain the open-loop gain to be below \(W_{2}^{- 1}\) for frequencies above \(\omega_{2}\). These ideas will be discussed further in the following subsections.

\(\Delta\) 6.7.7 Specifications in Terms of the Sensitivity Function

We have seen how the gain and phase margins give useful information about the relative stability of nominal systems and can be used to guide the design of lead and lag compensations. However, the GM and PM are only two numbers and have limitations as guides to the design of realistic control problems. We can express more complete design specifications in the frequency domain if we first give frequency descriptions for the external signals, such as the reference and disturbance, and consider the sensitivity function defined in Section 4.1. For example, we have so far described dynamic performance by the transient response to simple steps and ramps. A more realistic description of the actual complex input signals is to represent them as random processes with corresponding frequency power density spectra. A less sophisticated description, which is adequate for our purposes, is to assume the signals can be represented as a sum of sinusoids with frequencies in a specified range. For example, we can usually describe the frequency content of the reference input as a sum of sinusoids with relative amplitudes given by a magnitude function \(|R|\), such as that plotted in Fig. 6.73, which represents a signal with sinusoidal components having about the same amplitudes up to some value \(\omega_{1}\) and very small amplitudes for

Figure 6.72

Design criteria for low sensitivity

Figure 6.73

Plot of typical reference spectrum

Sensitivity function

Figure 6.74

Closed-loop block diagram

frequencies above that. With this assumption, the response tracking specification can be expressed by a statement such as "the magnitude of the system error is to be less than the bound \(e_{b}\) (a value such as 0.01) for any sinusoid of frequency \(\omega_{o}\) in the range \(0 \leq \omega_{o} \leq \omega_{1}\) with amplitude given by \(\left| R\left( j\omega_{o} \right) \right|\)." To express such a performance requirement in terms that can be used in design, we consider again the unity-feedback system drawn in Fig. 6.74. For this system, the error is given by

\[E(j\omega) = \frac{1}{1 + D_{c}G}R \triangleq \mathcal{S}(j\omega)R \]

where we have used the sensitivity function

\[\mathcal{S} \triangleq \frac{1}{1 + D_{c}G} \]

In addition to being the factor multiplying the system error, the sensitivity function is also the reciprocal of the distance of the Nyquist curve, \(D_{c}G\), from the critical point \(-1\). A large value for \(\mathcal{S}\) indicates a Nyquist plot that comes close to the point of instability. The frequency-based error specification based on Eq. (6.50) can be expressed as \(|E| = |\mathcal{S}||R| \leq e_{b}\). In order to normalize the problem without needing to

Figure 6.75

Plot of example performance function, \(W_{1}\)

define both the spectrum \(R\) and the error bound each time, we define the real function of frequency \(W_{1}(\omega) = |R|/e_{b}\) and the requirement can be written as

\[|\mathcal{S}|W_{1} \leq 1 \]

EXAMPLE 6.21 Performance Bound Function

A unity-feedback system is to have an error less than 0.005 for all unity-amplitude sinusoids below the frequency 100 Hertz. Draw the performance frequency function \(W_{1}(\omega)\) for this design.

Solution. The spectrum, from the problem description, is unity for \(0 \leq\) \(\omega \leq 200\pi rad/sec\). Because \(e_{b} = 0.005\), the required function is given by a rectangle of amplitude \(1/0.005 = 200\) over the given range. The function is plotted in Fig. 6.75.
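The bound can be written down directly in code. The sketch below (a hedged encoding of this example, not from the text) defines \(W_1 = |R|/e_b\) with \(|R| = 1\) over \(0 \leq \omega \leq 200\pi\ rad/sec\), and takes the spectrum to be zero above the band, matching the rectangle of Fig. 6.75.

```python
import math

# Sketch: the performance bound W1 = |R|/eb for this example.
eb = 0.005

def W1(w):
    R = 1.0 if 0 <= w <= 200*math.pi else 0.0   # |R| from the problem
    return R/eb

print(W1(100), W1(700))   # 200.0 inside the band, 0.0 outside
```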

The expression in Eq. (6.52) can be translated to the more familiar Bode plot coordinates and given as a requirement on loop gain by observing that over the frequency range when errors are small the loop gain is large. In that case \(|\mathcal{S}| \approx 1/\left| D_{c}G \right|\), and the requirement is approximately

\[\begin{matrix} \frac{W_{1}}{\left| D_{c}G \right|} \leq 1 \\ \left| D_{c}G \right| \geq W_{1} \end{matrix}\]

Stability robustness

Figure 6.76

Plot of typical plant uncertainty, \(W_{2}\)
This requirement can be seen as an extension of the steady-state error requirement from just \(\omega = 0\) to the range \(0 \leq \omega_{o} \leq \omega_{1}\).

In addition to the requirement on dynamic performance, the designer is usually required to design for stability robustness. By this we mean that, while the design is done for a nominal plant transfer function, the actual system is expected to be stable for an entire class of transfer functions that represents the range of changes that are expected to be faced as temperature, age, and other operational and environmental factors vary the plant dynamics from the nominal case. A realistic way to express this uncertainty is to describe the plant transfer function as having a multiplicative uncertainty:

\[G(j\omega) = G_{o}(j\omega)\left\lbrack 1 + W_{2}(\omega) \bigtriangleup (j\omega) \right\rbrack \]

In Eq. (6.54), the real function \(W_{2}\) is a magnitude function that expresses the size of changes as a function of frequency that the transfer function is expected to experience. In terms of \(G\) and \(G_{o}\), the expression is

\[W_{2} = \left| \frac{G - G_{o}}{G_{o}} \right|. \]

The shape of \(W_{2}\) is almost always very small for low frequencies (we know the model very well there) and increases substantially as we go to higher frequencies, where unmodeled system dynamics are common. A typical shape is sketched in Fig. 6.76. The complex function \(\Delta(j\omega)\) represents the uncertainty in phase and is restricted only by the constraint

\[0 \leq |\Delta| \leq 1 \]

Complementary sensitivity function
We assume the nominal design has been done and is stable, so that the Nyquist plot of \(D_{c}G_{o}\) satisfies the Nyquist stability criterion. In this case, the nominal characteristic equation \(1 + D_{c}G_{o} = 0\) is never satisfied for any real frequency. If the system is to have stability robustness, the characteristic equation using the uncertain plant as described by Eq. (6.54) must not go to zero for any real frequency for any value of \(\bigtriangleup\). The requirement can be written as

\[\begin{matrix} 1 + D_{c}G & \ \neq 0 \\ 1 + D_{c}G_{o}\left\lbrack 1 + W_{2} \bigtriangleup \right\rbrack & \ \neq 0 \\ \left( 1 + D_{c}G_{o} \right)\left( 1 + \mathcal{T}W_{2} \bigtriangleup \right) & \ \neq 0 \end{matrix}\]

where we have defined the complementary sensitivity function as

\[\mathcal{T}(j\omega) \triangleq D_{c}G_{o}/\left( 1 + D_{c}G_{o} \right) = 1 - \mathcal{S}. \]

Because the nominal system is stable, the first term in Eq. (6.57), \(\left( 1 + D_{c}G_{o} \right)\), is never zero. Thus, if Eq. (6.57) is not to be zero for any frequency and any \(\bigtriangleup\), then it is necessary and sufficient that

\[\left| \mathcal{T}W_{2} \bigtriangleup \right| < 1 \]

which reduces to

\[|\mathcal{T}|W_{2} < 1 \]

making use of Eq. (6.56). As with the performance specification, for single-input-single-output unity-feedback systems this requirement can be approximated by a more convenient form. Over the range of high frequencies where \(W_{2}\) is non-negligible because there is significant model uncertainty, \(D_{c}G_{o}\) is small. Therefore we can approximate \(\mathcal{T} \approx D_{c}G_{o}\), and the constraint reduces to

\[\begin{matrix} \left| D_{c}G_{o} \right|W_{2} < 1 \\ \ \left| D_{c}G_{o} \right| < \frac{1}{W_{2}}. \end{matrix}\]

The robustness issue is important to design and can affect the high-frequency open-loop frequency response, as discussed earlier. However, as also discussed earlier, it is important to limit the high-frequency magnitude in order to attenuate noise effects.
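Two algebraic facts used above are easy to spot-check numerically: the identity \(\mathcal{T} = 1 - \mathcal{S}\), and the high-frequency approximation \(\mathcal{T} \approx D_cG_o\) when the loop gain is small. The sketch below uses arbitrary sample values of the loop gain (assumptions, not from the text); the identity holds for any value.

```python
# Sketch: check T = 1 - S, and T ~ DcGo for small loop gain.
L = 0.8 - 1.3j              # sample value of Dc(jw)Go(jw), chosen arbitrarily
S = 1/(1 + L)               # sensitivity
T = L/(1 + L)               # complementary sensitivity
identity_err = abs(S + T - 1)

L_small = 0.01*1j           # small loop gain (high-frequency regime)
approx_err = abs(L_small/(1 + L_small) - L_small)
print(identity_err, approx_err)   # ~0, and ~1e-4 respectively
```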

EXAMPLE 6.22 Typical Plant Uncertainty

The uncertainty in a plant model is described by a function \(W_{2}\) that is zero until \(\omega = 3000\), increases linearly from there to a value of 100 at \(\omega = 10,000\), and remains at 100 for higher frequencies. Plot the constraint on \(D_{c}G_{o}\) to meet this requirement.

Solution. Where \(W_{2} = 0\), there is no constraint on the magnitude of the loop gain; above \(\omega = 3000\), the bound \(1/W_{2}\) on \(\left| D_{c}G_{o} \right|\) follows a hyperbola from \(\infty\) down to 0.01 at \(\omega = 10,000\) and remains at 0.01 for \(\omega > 10,000\). The bound is sketched in Fig. 6.77.
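The uncertainty profile and the resulting bound can be encoded directly. The sketch below (a hedged restatement of this example, not from the text) builds \(W_2\) and the constraint \(1/W_2\) on \(|D_cG_o|\).

```python
# Sketch: the example's uncertainty profile W2 and the bound 1/W2.
def W2(w):
    if w < 3000:
        return 0.0
    if w <= 10000:
        return 100.0*(w - 3000)/7000.0   # linear rise from 0 to 100
    return 100.0

def bound(w):
    # Upper bound on |DcGo| where W2 > 0; no constraint where W2 = 0.
    return float('inf') if W2(w) == 0 else 1.0/W2(w)

print(bound(1000), bound(10000), bound(20000))   # inf, 0.01, 0.01
```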

Figure 6.77

Plot of constraint on \(\left| D_{c}G_{o} \right|\left( = \left| W_{2}^{- 1} \right| \right)\)

In practice, the magnitude of the loop gain is plotted on log-log (Bode) coordinates, and the constraints of Eqs. (6.53) and (6.60) are included on the same plot. A typical sketch is drawn in Fig. 6.72. The designer is expected to construct a loop gain that will stay above \(W_{1}\) for frequencies below \(\omega_{1}\), cross over the magnitude-1 line \(\left( \left| D_{c}G \right| = 1 \right)\) in the range \(\omega_{1} \leq \omega \leq \omega_{2}\), and stay below \(1/W_{2}\) for frequencies above \(\omega_{2}\).

\(\Delta\) 6.7.8 Limitations on Design in Terms of the Sensitivity Function

One of the major contributions of Bode was to derive important limitations on transfer functions that set limits on achievable design specifications. For example, one would like to have the system error kept small for the widest possible range of frequencies, and yet have a system that is robustly stable for a very uncertain plant. In terms of the plot in Fig. 6.78, we want \(W_{1}\) and \(W_{2}\) to be very large in their respective frequency ranges, and for \(\omega_{1}\) to be pushed up close to \(\omega_{2}\). Thus the loop gain is expected to plunge with a large negative slope from being greater than \(W_{1}\) to being less than \(1/W_{2}\) in a very short span, while maintaining a good PM to assure stability and good dynamic performance. The Bode gain-phase formula given earlier shows that this is impossible with a linear controller, by showing that the minimum possible phase is determined by an integral depending on the slope of the magnitude curve. If the slope is constant for a substantial range around \(\omega_{o}\), then Eq. (6.34) can be approximated by

\[\left. \ \phi\left( \omega_{o} \right) \approx \frac{\pi}{2}\frac{dM}{du} \right|_{u = 0} \]

Figure 6.78

Tracking and stability robustness constraints on the Bode plot; an example of impossible constraints

EXAMPLE 6.23 Robustness Constraints

If \(W_{1} = W_{2} = 100\), and we want \(PM = 30^{\circ}\), what is the minimum ratio of \(\omega_{2}/\omega_{1}\)?

Solution. A PM of \(30^{\circ}\) corresponds to a crossover phase of \(-150^{\circ} = -\frac{5\pi}{6}\ rad\), so Eq. (6.61) requires an average slope of \(dM/du = \phi/(\pi/2) = -5/3 \approx -1.667\). Setting the slope between the two bounds equal to this value,

\[\frac{logW_{1} - log\frac{1}{W_{2}}}{log\omega_{1} - log\omega_{2}} = \frac{2 + 2}{log\frac{\omega_{1}}{\omega_{2}}} = - 1.667 \]

Thus, the \(\log\) of the ratio is \(log\omega_{1}/\omega_{2} = - 2.40\) and \(\omega_{2} = 251\omega_{1}\).
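The arithmetic of this example can be restated in a few lines (a sketch, not from the text): the required slope follows from the gain-phase formula, and the four-decade drop from \(W_1 = 100\) to \(1/W_2 = 0.01\) at that slope fixes the frequency ratio.

```python
import math

# Sketch: PM = 30 deg means crossover phase -150 deg, so the gain-phase
# formula phi ~ (pi/2) dM/du requires an average slope of -150/90 = -5/3.
slope = -150.0/90.0
decades = (math.log10(100) - math.log10(0.01)) / slope   # log10(w1/w2)
ratio = 10**(-decades)                                   # w2/w1
print(slope, decades, ratio)   # -1.667, -2.4, about 251
```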

It can also be shown that, for a loop gain that rolls off faster than \(1/\omega\) at high frequency and has \(n_{p}\) poles \(p_{i}\) in the RHP, the sensitivity function must satisfy the integral constraint of Eq. (6.62):

\[\int_{0}^{\infty}\mspace{2mu} ln|\mathcal{S}(j\omega)|d\omega = \pi\sum_{i = 1}^{n_{p}}\mspace{2mu} Re\left\{ p_{i} \right\} \]

If there are no RHP poles, then the integral is zero. This means that if we make the \(\log\) of the sensitivity function very negative over some frequency band to reduce errors in that band, then, of necessity, \(ln|\mathcal{S}|\) will be positive over another part of the band, and errors will be amplified there. This characteristic is sometimes referred to as the "water-bed effect." If there are unstable poles, the situation is worse, because the positive area, where sensitivity magnifies the error, must exceed the negative area, where the error is reduced by the feedback. If the system is minimum-phase, then it is, in principle, possible to keep the magnitude of the sensitivity small by spreading the sensitivity increase over all positive frequencies to infinity, but such a design requires an excessive bandwidth and is rarely practical. If a specific bandwidth is imposed, then the sensitivity function is constrained to take on a finite, possibly large, positive value at some point below the bandwidth. As implied by the definition of the vector margin (VM) in Section 6.4 (Fig. 6.38), a large \(\mathcal{S}_{\max}\) corresponds to a Nyquist plot that comes close to the \(-1\) critical point and a system having a small vector margin, because

\[VM = \frac{1}{\mathcal{S}_{\max}} \]
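As a numerical illustration (a sketch, not the book's computation), take the sample sensitivity function \(\mathcal{S}(s) = s(s+1)(s+10)/(s^{3} + 11s^{2} + 60s + 100)\) that appears in this chapter's Matlab excerpt, and scan \(|\mathcal{S}(j\omega)|\) for its peak:

```python
# Sketch: compute Smax and the vector margin VM = 1/Smax by scanning a
# sample sensitivity function over a log-spaced frequency grid.
def S(w):
    s = 1j*w
    return s*(s + 1)*(s + 10)/(s**3 + 11*s**2 + 60*s + 100)

smax, w = 0.0, 0.1
while w < 1000.0:                 # scan range chosen to bracket the peak
    smax = max(smax, abs(S(w)))
    w *= 1.001
vm = 1.0/smax
print(smax, vm)                   # about 1.366 and 0.732
```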

If the system is not minimum-phase, the situation is worse. An alternative to Eq. (6.62) holds if there is a nonminimum-phase zero of \(D_{c}G_{o}\), that is, a zero in the RHP. Suppose the zero is located at \(z_{o} = \sigma_{o} + j\omega_{o}\), where \(\sigma_{o} > 0\). Again, we assume there are \(n_{p}\) RHP poles at locations \(p_{i}\), with conjugate values \(\overline{p_{i}}\). Now, the condition can be expressed as a two-sided weighted integral

\[\int_{- \infty}^{\infty}\mspace{2mu} ln(|\mathcal{S}|)\frac{\sigma_{o}}{\sigma_{o}^{2} + \left( \omega - \omega_{o} \right)^{2}}d\omega = \pi\sum_{i = 1}^{n_{p}}\mspace{2mu} ln\left| \frac{\overline{p_{i}} + z_{o}}{p_{i} - z_{o}} \right| \]

In this case, we do not have the "roll-off" restriction, and there is no possibility of spreading the positive area over high frequencies, because the weighting function goes to zero with frequency. The important point is that the sensitivity trade-off must now be accomplished within the finite band of frequencies weighted by the RHP zero, so a nonminimum-phase zero places a practical upper limit on the achievable bandwidth.

Figure 6.79

Sensitivity function for Example 6.24

s = tf('s');
sysS = s*(s + 1)*(s + 10)/(s^3 + 11*s^2 + 60*s + 100);
[mag, ph, w] = bode(sysS);
loglog(w, squeeze(mag)), grid

The largest value of \(\mathcal{S}\) is given by \(M = max(mag)\) and is 1.366, from which the vector margin is \(VM = 1/1.366 = 0.732\).

\(\Delta\) 6.8 Time Delay

The Laplace transform of a pure time delay is \(G_{D}(s) = e^{- sT_{d}}\), which can be approximated by a rational function (Padé approximant), as shown in online Appendix W5.6.3. Although this same approximation could be used with frequency-response methods, an exact analysis of the delay is possible.

Figure 6.80

Phase lag due to pure time delay

Time-delay magnitude

Time-delay phase

The frequency response of the delay is given by the magnitude and phase of \(\left. \ e^{- sT_{d}} \right|_{s = j\omega}\). The magnitude is

\[\left| G_{D}(j\omega) \right| = \left| e^{- j\omega T_{d}} \right| = 1,\ \text{~}\text{for all}\text{~}\omega \]

This result is expected, because a time delay merely shifts the signal in time and has no effect on its magnitude. The phase is

\[\angle G_{D}(j\omega) = - \omega T_{d} \]

in radians, and it grows increasingly negative in proportion to the frequency. This, too, is expected, because a fixed time delay \(T_{d}\) becomes a larger fraction or multiple of a sine wave as the period drops, due to increasing frequency. A plot of \(\angle G_{D}(j\omega)\) is drawn in Fig. 6.80. Note the phase lag is greater than \(270^{\circ}\) for values of \(\omega T_{d}\) greater than about \(5rad\). This trend implies it would be virtually impossible to stabilize a system (or to achieve a positive PM) with a crossover frequency greater than \(\omega = 5/T_{d}\), and it would be difficult for frequencies greater than \(\omega \cong 3/T_{d}\). These characteristics essentially place a constraint on the achievable bandwidth of any system with a time delay. (See Problem 6.64 for an illustration of this constraint.)

The frequency domain concepts such as the Nyquist criterion apply directly to systems with pure time delay. This means that no approximations (Padé type or otherwise) are needed and the exact effect of time delay can be applied to a Bode plot, as shown in the following example.

EXAMPLE 6.25 Effect of Sampling on Stability

When implementing a control system with a digital computer to create compensation, the output of the plant is sampled periodically, used for computer calculations, then output as the control at the same sample rate. The effect of this is to create a delay that, on average, is half the sample period, \(T_{s}\). Determine the effect on the PM in Example 6.15 if
it were implemented with a digital controller with a sample period of \(T_{s} = 0.05\ sec\), and estimate what that would do to the step response overshoot. How slowly could you sample if it were necessary to limit the decrease in the PM to less than \(20^{\circ}\)?

Solution. A sample period of \(T_{s} = 0.05sec\) will inject a time delay of \(T_{s}/2 = 0.05/2 = 0.025 = T_{d}\) sec. From Eq. (6.67), we see the phase lag due to this sampling at Example 6.15's crossover frequency of \(5rad/sec\), where we measure the PM, is \(\angle G_{D} = - \omega T_{d} = - (5)(0.025) = - 0.125\) \(rad = - 7^{\circ}\). Therefore, the \(PM\) will decrease from \(53^{\circ}\) for the continuous implementation to approximately \(46^{\circ}\) for the digital implementation. Figure 6.37 shows that the overshoot, \(M_{p}\), will be degraded from \(\approx 16\%\) to \(\approx 22\%\). This is a very approximate analysis, but gives a rough idea of what to expect when implementing a controller via sampling and a digital computer.

In order to limit the phase lag to \(20^{\circ}\) at \(\omega = 5rad/sec\), we see from Eq.(6.67) that the maximum tolerable \(T_{d} = 20/(5*57.3) = 0.07sec\), so the slowest sampling acceptable would be \(T_{s} = 0.14sec\). Note, however, this large decrease in the PM would result in the overshoot increasing from \(\approx 20\%\) to \(\approx 40\%\).
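The arithmetic of this example can be restated compactly (a sketch, not from the text): the average delay is \(T_d = T_s/2\), the lag at crossover follows from \(\angle G_D = -\omega T_d\), and the constraint runs in reverse to find the slowest acceptable sample period.

```python
import math

# Sketch: sampling delay arithmetic for this example.
Ts = 0.05
Td = Ts/2                            # average delay from sampling
lag_deg = math.degrees(5*Td)         # phase lost at crossover, 5 rad/sec
pm = 53 - lag_deg                    # PM drops from 53 deg

# Largest delay keeping the loss under 20 deg, and the matching Ts:
Td_max = math.radians(20)/5
Ts_max = 2*Td_max
print(lag_deg, pm, Td_max, Ts_max)   # ~7.2 deg, ~46 deg, ~0.07 s, ~0.14 s
```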

The example illustrates that a time delay, whether introduced by digital sampling or by any other source, has a very severe effect on the achievable bandwidth. Evaluation of the effect using Eq. (6.67) or Fig. 6.80 is simple and straightforward, thus giving a quick analysis of the limitations imposed by any delay in the system.

6.8.1 Time Delay via the Nyquist Diagram

One can also evaluate the effect of a time delay using a Nyquist diagram, and this is shown in Appendix W6.8.1 available online at www.pearsonglobaleditions.com.

\(\Delta\) 6.9 Alternative Presentation of Data

Other ways to present frequency-response data have been developed to aid both in understanding design issues and in easing the designer's work load. Their use in easing the work load has largely been eliminated with the common use of computer-aided design; however, one technique that continues to be widely used in the design process is the Nichols chart. For those interested, we also present the inverse Nyquist method in online Appendix W6.9.2 available at www.pearsonglobaleditions.com.

231.0.1. Nichols Chart

A rectangular plot of \(log|G(j\omega)|\) versus \(\angle G(j\omega)\) can be drawn by simply transferring the information directly from the separate magnitude and phase portions of a Bode plot; one point on the new curve thus results from a given value of the frequency \(\omega\). This means the new curve is parameterized as a function of frequency. As with the Bode plots, the magnitude information is plotted on a logarithmic scale, while the phase information is plotted on a linear scale. This template was suggested by N. Nichols and is usually referred to as a Nichols chart. The idea of plotting the magnitude of \(G(j\omega)\) versus its phase is similar to the concept of plotting the real and imaginary parts of \(G(j\omega)\), which formed the basis for the Nyquist plots shown in Sections 6.3 and 6.4. However, it is difficult to capture all the pertinent characteristics of \(G(j\omega)\) on the linear scale of the Nyquist plot. The log scale for magnitude in the Nichols chart alleviates this difficulty, allowing this kind of presentation to be useful for design.

For any value of the complex transfer function \(G(j\omega)\), Section 6.6 showed there is a unique mapping to the unity-feedback closed-loop transfer function

\[\mathcal{T}(j\omega) = \frac{G(j\omega)}{1 + G(j\omega)} \]

or in polar form,

\[\mathcal{T}(j\omega) = M(\omega)e^{j\alpha(\omega)}, \]

where \(M(\omega)\) is the magnitude of the closed-loop transfer function and \(\alpha(\omega)\) is the phase of the closed-loop transfer function. Specifically, let us define \(M\) and \(N\) such that

\[\begin{matrix} M & \ = \left| \frac{G}{1 + G} \right|, \\ \alpha & \ = \tan^{- 1}(N) = \angle\frac{G}{1 + G}. \end{matrix}\]

It can be proven that the contours of constant closed-loop magnitude and phase are circles when \(G(j\omega)\) is presented in the linear Nyquist plot. These circles are referred to as the \(\mathbf{M}\) and \(\mathbf{N}\) circles, respectively.
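The closed-loop mapping that generates the M and N contours can be checked at a single point. The following Python sketch (the sample open-loop value is an arbitrary choice for illustration) evaluates \(\mathcal{T} = G/(1 + G)\) and shows that an open-loop point with magnitude 1 and phase \(- 120^{\circ}\) lies exactly on the \(M = 1\) contour:

```python
import cmath

def closed_loop_M_alpha(G):
    """Closed-loop magnitude M and phase alpha (rad) of T = G/(1 + G)
    for a single open-loop frequency-response point G(jw)."""
    T = G / (1.0 + G)
    return abs(T), cmath.phase(T)

# An open-loop point with magnitude 1 and phase -120 degrees:
G = cmath.rect(1.0, -120.0 * cmath.pi / 180.0)
M, alpha = closed_loop_M_alpha(G)
print(M)       # 1.0: the point lies on the M = 1 contour
print(alpha)   # -pi/3 rad: the alpha = -60 degree N contour
```

Sweeping \(G\) over the values where \(M\) or \(\alpha\) is held constant traces out exactly the contour lines printed on the chart.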

The Nichols chart also contains contours of constant closed-loop magnitude and phase based on these relationships, as shown in Fig. 6.81; however, they are no longer circles, because the Nichols charts are semilog plots of magnitude versus linear phase. A designer can therefore graphically determine the bandwidth of a closed-loop system from the plot of the open-loop data on a Nichols chart by noting where the open-loop curve crosses the 0.70 contour of the closed-loop magnitude and determining the frequency of the corresponding data point. Likewise, a designer can determine the resonant-peak amplitude \(M_{r}\) by noting the value of the magnitude of the highest closed-loop contour tangent to the curve. The frequency associated with the magnitude and phase at the point of tangency is sometimes referred to as the resonant frequency \(\omega_{r}\). Similarly, a designer can determine the GM by observing the value of the gain where the Nichols plot crosses the \(- 180^{\circ}\) line, and the PM by observing the phase where the plot crosses the amplitude 1 line. \(\ ^{15}\) Matlab provides for easy drawing of a Nichols chart via the nichols command.

Figure 6.81

Nichols chart

EXAMPLE 6.26

232. Nichols Chart for PID Example

Determine the a) bandwidth, b) resonant-peak magnitude, and c) PM of the compensated system whose frequency response is shown in Fig. 6.68.

Solution. The open-loop magnitude and phase information of the compensated design example seen in Fig. 6.68 is shown on a Nichols chart in Fig. 6.82. When comparing the two figures, it is important to divide the magnitudes in Fig. 6.68 by a factor of 20 in order to obtain \(\left| D_{c}(s)G(s) \right|\) rather than the normalized values used in Fig. 6.68. Because the curve crosses the closed-loop magnitude 0.70 contour at \(\omega = 0.8rad/sec\), we see that the bandwidth of this system is \(0.8rad/sec\). The PM is determined by the phase where the curve crosses the magnitude \(= 1\) line. Because the largest-magnitude contour touched by the curve is 1.20, we also see that \(M_{r} = 1.2\).

\(\ ^{15}\) James, H. M., N. B. Nichols, and R. S. Phillips (1947).

Figure 6.82

Nichols chart for determining bandwidth, \(M_{r}\), and PM for Example 6.26

EXAMPLE 6.27

For the system of Example 6.13, whose Nyquist plot is shown in Fig. 6.41, determine the PM and GM using the Nichols plot. Comment on which margin is the more critical.

Solution. Figure 6.83 shows a Nichols chart with frequency-response data from Fig. 6.42. Note the PM for the magnitude 1 crossover frequency is \(37^{\circ}\) and the GM is \(1.26( = 1/0.79)\). It is clear from this presentation of the data that the most critical portion of the curve is where it crosses the \(- 180^{\circ}\) line; hence, the GM is the most relevant stability margin in this example.

Figure 6.83

Nichols chart of the complex system in Examples 6.13 and 6.27

For complex systems for which the \(- 1\) encirclements need to be evaluated, the magnitude log scale of the Nichols chart enables us to examine a wider range of frequencies than a Nyquist plot does, as well as allowing us to read the gain and phase margins directly. Although Matlab will directly compute PM and GM, the algorithm may lead to suspicious results for very complex cases, and the analyst may want to verify the result using the Matlab nichols m-file so the actual encirclements can be examined and the bases for the PM and GM better understood. In some cases, the specifications for the desired margins are stated in terms of an "exclusion zone" around the \(- 1\) point on the Nichols chart (magnitude \(= 1\), phase \(= - 180^{\circ}\)). The zone is typically an ellipse or similar shape with the vertical and horizontal axes limits given. To satisfy the specification, the frequency-response data on the Nichols chart must not pierce any portion of the ellipse; thus, this sort of stability margin requirement is similar to the vector margin described in Section 6.7.8.

Historically, the Nichols chart was used to aid the design process when done without benefit of a computer. A change in gain, for example, can be evaluated by sliding the curve vertically on transparent paper over a standard Nichols chart as shown in Fig. 6.81. The GM, PM, and bandwidth were then easy to read off the chart, thus allowing evaluations of several values of gain with a minimal amount of effort. With access to computer-aided methods, however, we can now calculate the bandwidth and perform many repetitive evaluations of the gain or any other parameter with a few key strokes. Some modern design techniques, such as the Quantitative Feedback Theory ("QFT," Horowitz and Sidi, 1992), still heavily rely on the Nichols chart as the central tool to guide the feedback design.

232.0.1. The Inverse Nyquist Diagram

The inverse Nyquist diagram simplifies a determination of the stability margins and has been used in the past. It is described in more detail in Appendix W6.9.2 available online at www.pearsonglobaleditions.com.

232.1. Historical Perspective

As discussed in Chapter 5, engineers before the 1960s did not have access to computers to help in their analyses. Therefore, any method that allowed the determination of stability or response characteristics without requiring that the characteristic equation be factored was highly useful. The invention of the electronic feedback amplifier by H. S. Black in 1927 at Bell Telephone Laboratories provided extra incentive to develop methods for feedback control design, and the development of the frequency-response method was the first that enabled design iteration for this purpose.

The development of the feedback amplifier is briefly described in an interesting article based on a talk by Hendrik W. Bode (1960) reproduced in Bellman and Kalaba (1964). With the introduction of electronic amplifiers, long-distance telephoning became possible in the decades following World War I. However, as distances increased, so did the loss of electrical energy; in spite of using larger-diameter wire, increasing numbers of amplifiers were needed to replace the lost energy. Unfortunately, large numbers of amplifiers resulted in much distortion since the small nonlinearity of the vacuum tubes then used in electronic amplifiers was multiplied many times. To solve the problem of reducing distortion, Black proposed the feedback amplifier. As discussed in Chapter 4, the more we wish to reduce errors (or distortion), the
higher the feedback needs to be. The loop gain from actuator to plant to sensor to actuator must be made very large. But the designers found that too high a gain produced a squeal and the feedback loop became unstable. In this technology, the dynamics were so complex (with differential equations of order 50 being common) that Routh's criterion, the only way of solving for stability at the time, was not very helpful. So the communications engineers at Bell Telephone Laboratories, familiar with the concept of frequency response and the mathematics of complex variables, turned to complex analysis. In 1932, H. Nyquist published a paper describing how to determine stability from a graphical plot of the open-loop frequency response. Bode then developed his plotting methods in 1938, which made frequency responses easy to create without extensive calculations or help from a computer. From the plotting methods and Nyquist's stability theory, an extensive methodology of feedback amplifier design was developed by Bode (1945) that is still used extensively in the design of feedback controls. The reasons for using the method today are primarily to allow for a good design no matter what the unmodeled dynamics are, to expedite the design process even when carried out with a computer that is fully capable of solving the characteristic equation, and to provide a visual tool to examine the design. After developing the frequency-response design methods prior to World War II, Bode went on to help in the development of electronic fire-control devices during the war. The methods that he had developed for feedback amplifiers proved highly applicable to servomechanisms for that effort. Bode characterized this crossover of control system design methods as being a "sort of shotgun marriage."

233. SUMMARY

  • The frequency-response Bode plot is a graph of the transfer function magnitude in logarithmic scale and the phase in linear scale versus frequency in logarithmic scale. For a transfer function \(G(s)\),

\[\begin{matrix} M & \ = |G(j\omega)| = |G(s)|_{s = j\omega} \\ & \ = \sqrt{\{ Re\lbrack G(j\omega)\rbrack\}^{2} + \{ Im\lbrack G(j\omega)\rbrack\}^{2}} \\ \phi & \ = \tan^{- 1}\left\lbrack \frac{Im\lbrack G(j\omega)\rbrack}{Re\lbrack G(j\omega)\rbrack} \right\rbrack = \angle G(j\omega). \end{matrix}\]
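These magnitude and phase formulas can be exercised directly. A minimal Python sketch (outside the text's Matlab convention), using \(G(s) = 1/(s + 1)\) at \(\omega = 1\ rad/sec\) as an arbitrary test case:

```python
import math

def bode_point(G_jw):
    """Magnitude and phase (degrees) of a transfer-function value G(jw),
    computed from its real and imaginary parts as in the summary formulas."""
    M = math.hypot(G_jw.real, G_jw.imag)                 # sqrt(Re^2 + Im^2)
    phi = math.degrees(math.atan2(G_jw.imag, G_jw.real))  # atan(Im/Re)
    return M, phi

# G(s) = 1/(s + 1) evaluated at w = 1 rad/sec:
w = 1.0
G = 1.0 / (complex(0.0, w) + 1.0)
M, phi = bode_point(G)
print(M, phi)   # M = 0.707 (the -3 db point), phi = -45 degrees
```

Note the use of a two-argument arctangent so that the phase lands in the correct quadrant when \(Re\lbrack G\rbrack < 0\).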

  • For a transfer function in Bode form,

\[KG(\omega) = K_{0}(j\omega)^{n}\frac{\left( j\omega\tau_{1} + 1 \right)\left( j\omega\tau_{2} + 1 \right)\cdots}{\left( j\omega\tau_{a} + 1 \right)\left( j\omega\tau_{b} + 1 \right)\cdots} \]

the Bode frequency response can be easily plotted by hand using the rules described in Section 6.1.1.

  • Bode plots can be obtained using computer algorithms (bode in Matlab), but hand-plotting skills are still extremely helpful.

  • For a second-order system, the peak magnitude of the Bode plot is related to the damping by

\[|G(j\omega)| = \frac{1}{2\zeta}\ \text{~}\text{at}\text{~}\omega = \omega_{n} \]
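This relationship is exact at \(\omega = \omega_{n}\), as a quick numerical check confirms (a Python sketch; \(\omega_{n} = 1\) and the \(\zeta\) values are arbitrary test choices):

```python
import math

def G2_mag(w, zeta, wn=1.0):
    """|G(jw)| for the standard second-order system
    G(s) = wn^2 / (s^2 + 2*zeta*wn*s + wn^2)."""
    s = complex(0.0, w)
    return abs(wn**2 / (s*s + 2.0*zeta*wn*s + wn**2))

zeta = 0.1
print(G2_mag(1.0, zeta))       # exactly 1/(2*zeta) = 5.0 at w = wn
print(1.0 / (2.0 * zeta))
```

At \(\omega = \omega_{n}\) the real parts of the denominator cancel, leaving only \(2\zeta\omega_{n}^{2}j\); for lightly damped systems this value is essentially the resonant peak \(M_{r}\).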

  • A method of determining the stability of a closed-loop system based on the frequency response of the system's open-loop transfer function is the Nyquist stability criterion. Rules for plotting the Nyquist plot are described in Section 6.3. The number of RHP closed-loop roots is given by

\[Z = N + P \]

where

\[\begin{matrix} & N = \text{~}\text{number of clockwise encirclements of the}\text{~} - 1\text{~}\text{point}\text{~} \\ & P = \text{~}\text{number of open-loop poles in the RHP.}\text{~} \end{matrix}\]

For a stable closed-loop system, \(Z\) must be 0 , resulting in \(N = - P\).

  • The Nyquist plot may be obtained using computer algorithms (nyquist in Matlab).

  • The gain margin (GM) and phase margin (PM) can be determined directly by inspecting the open-loop Bode plot or the Nyquist plot. Also, use of Matlab's margin function determines the values directly.

  • For a standard second-order system, the PM is related to the closed-loop damping by Eq. (6.32),

\[\zeta \cong \frac{PM}{100} \]
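The quality of this rule of thumb can be checked against the exact PM of the standard second-order open loop \(G = \omega_{n}^{2}/\lbrack s(s + 2\zeta\omega_{n})\rbrack\). In the Python sketch below, the closed-form crossover expression follows from setting \(|G(j\omega)| = 1\) with \(\omega_{n} = 1\):

```python
import math

def pm_exact_deg(zeta):
    """Exact phase margin (degrees) of G = wn^2/[s(s + 2*zeta*wn)],
    whose unity-feedback closed loop is the standard second-order system."""
    # Crossover: |G(jwc)| = 1  =>  wc^2 = sqrt(1 + 4*zeta^4) - 2*zeta^2 (wn = 1)
    wc = math.sqrt(math.sqrt(1.0 + 4.0*zeta**4) - 2.0*zeta**2)
    # PM = 180 - 90 - atan(wc/(2*zeta)) = atan(2*zeta/wc)
    return math.degrees(math.atan2(2.0*zeta, wc))

for z in (0.1, 0.4, 0.7):
    print(z, pm_exact_deg(z), 100.0*z)   # PM tracks 100*zeta reasonably well
```

The printout shows the approximation is quite good for small \(\zeta\) and degrades gradually as \(\zeta\) approaches 0.7, consistent with Fig. 6.36.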

  • The bandwidth of the system is a measure of the speed of response. For control systems, it is defined as the frequency corresponding to a magnitude of \(0.707( - 3db)\) in the closed-loop magnitude Bode plot and is approximately given by the crossover frequency \(\omega_{c}\), which is the frequency at which the open-loop gain curve crosses magnitude 1.

  • The vector margin is a single-parameter stability margin based on the closest point of the Nyquist plot of the open-loop transfer function to the critical point \(- 1/K\).

  • For a stable minimum-phase system, Bode's gain-phase relationship uniquely relates the phase to the gain of the system and is approximated by Eq. (6.33),

\[\angle G(j\omega) \cong n \times 90^{\circ} \]

where \(n\) is the slope of \(|G(j\omega)|\) in units of decade of amplitude per decade of frequency. The relationship shows that, in most cases, stability is ensured if the gain plot crosses the magnitude 1 line with a slope of -1 .

  • Experimental frequency-response data of the open-loop system can be used directly for analysis and design of a closed-loop control system with no analytical model.

Figure 6.84

Typical unity feedback system

  • For the system shown in Fig. 6.84, the open-loop Bode plot is the frequency response of \(GD_{c}\), and the closed-loop frequency response is obtained from \(\mathcal{T}(s) = GD_{c}/\left( 1 + GD_{c} \right)\).

  • The frequency-response characteristics of several types of compensation have been described, and examples of design using these characteristics have been discussed. Design procedures were given for lead and lag compensators in Section 6.7. The examples in that section show the ease of selecting specific values of design variables, a result of using frequency-response methods. A summary was provided at the end of Section 6.7.5.

  • Lead compensation, given by Eq. (6.38),

\[D_{c}(s) = \frac{T_{D}s + 1}{\alpha T_{D}s + 1},\ \alpha < 1, \]

is a high-pass filter and approximates PD control. It is used whenever substantial improvement in damping of the system is required. It tends to increase the speed of response of a system for a fixed low-frequency gain.
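For a chosen \(\alpha\), the maximum phase lead occurs at \(\omega = 1/(T_{D}\sqrt{\alpha})\), where \(\sin\phi_{max} = (1 - \alpha)/(1 + \alpha)\). The Python sketch below (with \(\alpha = 0.1\) and \(T_{D} = 1\) as arbitrary test values) confirms that a direct phase evaluation agrees with the closed-form maximum:

```python
import cmath, math

def lead_phase_deg(w, TD, alpha):
    """Phase (degrees) of the lead compensator D(s) = (TD*s + 1)/(alpha*TD*s + 1)."""
    s = complex(0.0, w)
    return math.degrees(cmath.phase((TD*s + 1.0) / (alpha*TD*s + 1.0)))

alpha, TD = 0.1, 1.0
w_max = 1.0 / (TD * math.sqrt(alpha))     # frequency of maximum phase lead
phi_max = math.degrees(math.asin((1.0 - alpha) / (1.0 + alpha)))
print(lead_phase_deg(w_max, TD, alpha))   # about 54.9 degrees for alpha = 0.1
print(phi_max)                            # matches the closed-form maximum
```

In a design, \(T_{D}\) is then picked so this frequency of maximum lead lands at the intended crossover.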

  • Lag compensation, given by Eq. (6.47),

\[D_{c}(s) = \alpha\frac{T_{I}s + 1}{\alpha T_{I}s + 1},\ \alpha > 1 \]

is a low-pass filter and approximates PI control. It is usually used to increase the low-frequency gain of the system so as to improve steady-state response for fixed bandwidth. For a fixed low-frequency gain, it will decrease the speed of response of a system.

  • PID compensation can be viewed as a combination of lead and lag compensation.

  • Tracking-error reduction and disturbance rejection can be specified in terms of the low-frequency gain of the Bode plot. Sensor-noise rejection can be specified in terms of high-frequency attenuation of the Bode plot (see Fig. 6.72).

$\Delta\ $ • The Nichols plot is an alternate representation of the frequency response as a plot of gain versus phase and is parameterized as a function of frequency.

$\Delta\ $ • Time delay can be analyzed exactly in a Bode plot or a Nyquist plot.

234. REVIEW QUESTIONS

6.1 Why did Bode suggest plotting the magnitude of a frequency response on \(log - log\) coordinates?

6.2 Define a decibel.

6.3 What is the transfer-function magnitude if the gain is listed as \(14db\) ?

6.4 Define gain crossover.

6.5 Define phase crossover.

6.6 Define phase margin, PM.

6.7 Define gain margin, GM.

6.8 What Bode plot characteristic is the best indicator of the closed-loop step response overshoot?

6.9 What Bode plot characteristic is the best indicator of the closed-loop step response rise time?

6.10 What is the principal effect of a lead compensation on Bode plot performance measures?

6.11 What is the principal effect of a lag compensation on Bode plot performance measures?

6.12 How do you find the \(K_{v}\) of a Type 1 system from its Bode plot?

6.13 Why do we need to know beforehand the number of open-loop unstable poles in order to tell stability from the Nyquist plot?

6.14 What is the main advantage in control design of counting the encirclements of \(- 1/K\) of \(D_{c}(j\omega)G(j\omega)\) rather than encirclements of -1 of \(KD_{c}(j\omega)G(j\omega)\) ?

6.15 Define a conditionally stable feedback system. How can you identify one on a Bode plot?

$\bigtriangleup \ $ 6.16 A certain control system is required to follow sinusoids, which may be any frequency in the range \(0 \leq \omega \leq 450rad/sec\) and have amplitudes up to 5 units, with (sinusoidal) steady-state error to be never more than 0.01. Sketch (or describe) the corresponding performance function \(W_{1}(\omega)\).

235. PROBLEMS

236. Problems for Section 6.1: Frequency Response

6.1 (a) Show \(\alpha_{0}\) in Eq. (6.2), with \(A = U_{o}\) and \(\omega_{o} = \omega\), is

\[\alpha_{0} = \left. \ \left\lbrack G(s)\frac{U_{0}\omega}{s - j\omega} \right\rbrack \right|_{s = - j\omega} = - U_{0}G( - j\omega)\frac{1}{2j} \]

and

\[\alpha_{0}^{*} = \left. \ \left\lbrack G(s)\frac{U_{0}\omega}{s + j\omega} \right\rbrack \right|_{s = + j\omega} = U_{0}G(j\omega)\frac{1}{2j} \]

(b) By assuming the output can be written as

\[y(t) = \alpha_{0}e^{- j\omega t} + \alpha_{0}^{*}e^{j\omega t} \]

derive Eqs. (6.4)-(6.6).

6.2 (a) Calculate the magnitude and phase of

\[G(s) = \frac{1}{s + 7} \]

by hand for \(\omega = 1,2,7,10,20,50\), and \(100rad/sec\).
(b) Sketch the asymptotes for \(G(s)\) according to the Bode plot rules, and compare these with your computed results from part (a).

6.3 Sketch the asymptotes of the Bode plot magnitude and phase for each of the following open-loop transfer functions. After completing the hand sketches, verify your results using Matlab. Turn in your hand sketches and the Matlab results on the same scales.

(a) \(L(s) = \frac{6000}{s(s + 300)}\)

(b) \(L(s) = \frac{500}{s(0.2s + 1)(0.1s + 1)}\)

(c) \(L(s) = \frac{1}{s(5s + 1)(s + 40)}\)

(d) \(L(s) = \frac{5000}{(s + 7)(s + 18)^{3}}\)

(e) \(L(s) = \frac{10(s + 2)}{s(s + 20)(s + 200)}\)

(f) \(L(s) = \frac{2(s + 0.3)}{s(s + 0.1)(s + 0.5)^{2}}\)

(g) \(L(s) = \frac{(s + 17)(s + 13)}{s(s + 52)(s + 5)}\)

(h) \(L(s) = \frac{10s(s + 50)}{(s + 10)(s + 70)}\)

(i) \(L(s) = \frac{1000s}{(s + 2)(s + 60)(s + 500)}\)

6.4 Real poles and zeros. Sketch the asymptotes of the Bode plot magnitude and phase for each of the following open-loop transfer functions. After completing the hand sketches verify your results using Matlab. Turn in your hand sketches and the Matlab results on the same scales.

(a) \(L(s) = \frac{5}{s(s + 4)(s + 9)(s + 17)}\)

(b) \(L(s) = \frac{5(s + 12)}{s(s + 4)(s + 9)(s + 17)}\)

(c) \(L(s) = \frac{5(s + 7)(s + 12)}{s(s + 4)(s + 9)(s + 17)}\)

(d) \(L(s) = \frac{5(s + 7)(s + 1)}{s(s + 4)(s + 9)(s + 17)}\)

6.5 Complex poles and zeros. Sketch the asymptotes of the Bode plot magnitude and phase for each of the following open-loop transfer functions. After completing the hand sketches verify your results using Matlab. Turn in your hand sketches and the Matlab results on the same scales.

(a) \(L(s) = \frac{1}{s^{2} + 4s + 21}\)

(b) \(L(s) = \frac{1}{s\left( s^{2} + 2s + 9 \right)}\)

(c) \(L(s) = \frac{\left( s^{2} + 5s + 11 \right)}{s\left( s^{2} + 5s + 15 \right)}\)

(d) \(L(s) = \frac{\left( s^{2} + 1 \right)}{s\left( s^{2} + 6 \right)}\)

(e) \(L(s) = \frac{\left( s^{2} + 6 \right)}{s\left( s^{2} + 1 \right)}\)

6.6 Multiple poles at the origin. Sketch the asymptotes of the Bode plot magnitude and phase for each of the following open-loop transfer functions. After completing the hand sketches, verify your results using Matlab. Turn in your hand sketches and the Matlab results on the same scales.

(a) \(L(s) = \frac{1}{s^{2}(s + 0.7)}\)

(b) \(L(s) = \frac{100}{s^{3}(s + 80)}\)

(c) \(L(s) = \frac{1}{s^{4}(2s + 5)}\)

(d) \(L(s) = \frac{s + 7.5}{s^{2}(s + 75)}\)

(e) \(L(s) = \frac{1.5s + 1}{s^{3}(s + 0.1)}\)

(f) \(L(s) = \frac{(s + 9)^{2}}{s^{3}(s + 50)}\)

(g) \(L(s) = \frac{(s + 0.6)^{2}}{s^{3}(s + 1.3)^{2}}\)

6.7 Mixed real and complex poles. Sketch the asymptotes of the Bode plot magnitude and phase for each of the following open-loop transfer functions. After completing the hand sketches verify your results using Matlab. Turn in your hand sketches and the Matlab results on the same scales.

(a) \(L(s) = \frac{(s + 0.5)}{s(5s + 1)\left( s^{2} + 0.2s + 0.6 \right)}\)

(b) \(L(s) = \frac{(2s + 4)}{s^{2}(s + 8)\left( s^{2} + 5s + 27 \right)}\)

(c) \(L(s) = \frac{(s + 0.75)^{2}}{s^{2}(1.2s + 8.1)\left( s^{2} + 5s + 27 \right)}\)

(d) \(L(s) = \frac{(s + 50)\left( 2s^{2} + 5s + 4 \right)}{s^{2}(s + 5)\left( s^{2} + 60s + 120 \right)}\)

(e) \(L(s) = \frac{\left\lbrack (s + 2)^{2} + 2 \right\rbrack}{s^{2}\left( s^{2} + 7s + 5 \right)}\)

6.8 Right half-plane poles and zeros. Sketch the asymptotes of the Bode plot magnitude and phase for each of the following open-loop transfer functions. Make sure the phase asymptotes properly take the RHP singularity into account by sketching the complex plane to see how the \(\angle L(s)\) changes as \(s\) goes from 0 to \(+ j\infty\). After completing the hand sketches verify your results using Matlab. Turn in your hand sketches and the Matlab results on the same scales.

(a) \(L(s) = \frac{s + 4}{s + 12}\frac{1}{s^{2} - 9}\); (The model for a case of magnetic levitation with lead compensation.)

(b) \(L(s) = \frac{s + 4}{s(s + 6)}\frac{1}{s^{2} - 22}\); (The magnetic levitation system with integral control and lead compensation.)

(c) \(L(s) = \frac{11s - 7}{s^{2}}\)

(d) \(L(s) = \frac{s^{2} + 4s + 3}{s(s + 2.5)^{2}\left( s^{2} - 3s + 5 \right)}\)

(e) \(L(s) = \frac{(s + 7.5)}{s(s - 0.5)(s + 20)^{2}}\)

(f) \(L(s) = \frac{1}{(s - 9)\left\lbrack (s + 2)^{2} + 5 \right\rbrack}\)

Figure 6.85

Magnitude portion of Bode plot for Problem 6.9

6.9 A certain system is represented by the asymptotic Bode diagram shown in Fig. 6.85. Find and sketch the response of this system to a unit step input (assuming zero initial conditions).

6.10 Prove that a magnitude slope of \(- 2\) in a Bode plot corresponds to \(- 40db\) per decade, or \(- 12db\) per octave.

6.11 A second-order system with a damping ratio \(\zeta = 0.6\) and an additional zero is given by

\[G(s) = \frac{\left( \frac{s}{\alpha} + 1 \right)}{s^{2} + 1.2s + 1} \]

Use Matlab to compare the \(M_{p}\) from the step response of the system for \(\alpha = 0.01,0.1,1,10\), and 100 with the \(M_{r}\) from the frequency response of each case. Is there a correlation between \(M_{r}\) and \(M_{p}\)?

6.12 A second order system with \(\zeta = 0.6\) and an additional pole is given by

\[G(s) = \frac{2}{\left\lbrack \left( \frac{s}{p} \right) + 1 \right\rbrack\left( s^{2} + 1.2\sqrt{2}s + 2 \right)} \]

Draw Bode plots with \(p = 0.01,0.1,1,10\), and 100 . What conclusions can you draw about the effect of an extra pole on the bandwidth compared to the bandwidth for the second-order system with no extra pole?

6.13 For the closed-loop transfer function

\[T(s) = \frac{\omega_{n}^{2}}{s^{2} + 2\zeta\omega_{n}s + \omega_{n}^{2}}, \]

derive the following expression for the bandwidth \(\omega_{BW}\) of \(T(s)\) in terms of \(\omega_{n}\) and \(\zeta\) :

\[\omega_{BW} = \omega_{n}\sqrt{1 - 2\zeta^{2} + \sqrt{2 + 4\zeta^{4} - 4\zeta^{2}}} \]

Assuming \(\omega_{n} = 1\), plot \(\omega_{BW}\) for \(0 \leq \zeta \leq 1\).
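As a numerical check on the stated formula before plotting, the following Python sketch verifies that \(\left| T(j\omega_{BW}) \right| = 0.707\) and that \(\omega_{BW} = \omega_{n}\) when \(\zeta = 1/\sqrt{2}\):

```python
import math

def T_mag(w, zeta, wn=1.0):
    """|T(jw)| for T(s) = wn^2/(s^2 + 2*zeta*wn*s + wn^2)."""
    s = complex(0.0, w)
    return abs(wn**2 / (s*s + 2.0*zeta*wn*s + wn**2))

def bw_formula(zeta, wn=1.0):
    """The bandwidth expression to be derived in this problem."""
    return wn * math.sqrt(1.0 - 2.0*zeta**2
                          + math.sqrt(2.0 + 4.0*zeta**4 - 4.0*zeta**2))

zeta = 0.5
wbw = bw_formula(zeta)
print(T_mag(wbw, zeta))            # 0.707: the -3 db point, as required
print(bw_formula(math.sqrt(0.5)))  # 1.0: bandwidth equals wn at zeta = 0.707
```

A check like this does not replace the derivation, but it quickly flags algebra errors in the radical.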

6.14 Consider the system whose transfer function is

\[G(s) = \frac{A_{0}\omega_{0}s}{s^{2} + \frac{\omega_{0}}{Q}s + \omega_{0}^{2}} \]

This is a model of a tuned circuit with quality factor \(Q\).

(a) Compute the magnitude and phase of the transfer function analytically, and plot them for \(Q = 0.5,1,2\), and 5 as a function of the normalized frequency \(\omega/\omega_{0}\).

(b) Define the bandwidth as the distance between the frequencies on either side of \(\omega_{0}\) where the magnitude drops to \(3db\) below its value at \(\omega_{0}\), and show the bandwidth is given by

\[BW = \frac{1}{2\pi}\left( \frac{\omega_{0}}{Q} \right) \]

(c) What is the relation between \(Q\) and \(\zeta\) ?
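One way to spot-check parts (b) and (c) numerically (a Python sketch, not a full solution; \(Q = 5\) and \(\omega_{0} = 1\) are arbitrary test values) is to locate the two \(- 3db\) frequencies by search and compare their separation with \(\omega_{0}/Q\ rad/sec\), which is \(2\pi\) times the \(BW\) in Hz given above; matching the denominator against \(s^{2} + 2\zeta\omega_{0}s + \omega_{0}^{2}\) also gives \(\zeta = 1/(2Q)\):

```python
import math

def G_mag(w, Q, w0=1.0, A0=1.0):
    """|G(jw)| for the tuned circuit G(s) = A0*w0*s/(s^2 + (w0/Q)*s + w0^2)."""
    s = complex(0.0, w)
    return abs(A0*w0*s / (s*s + (w0/Q)*s + w0**2))

Q, w0 = 5.0, 1.0
peak = G_mag(w0, Q)                     # peak magnitude A0*Q occurs at w0
lo = hi = w0
while G_mag(lo, Q) > peak / math.sqrt(2.0):
    lo *= 0.999                         # walk down to the lower -3 db edge
while G_mag(hi, Q) > peak / math.sqrt(2.0):
    hi *= 1.001                         # walk up to the upper -3 db edge
print(hi - lo)                          # about w0/Q rad/sec (divide by 2*pi for Hz)
print(1.0 / (2.0 * Q))                  # zeta = 1/(2Q) from matching coefficients
```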

6.15 A DC voltmeter schematic is shown in Fig. 6.86. The pointer is damped so its maximum overshoot to a step input is \(10\%\).

(a) What is the undamped natural frequency of the system?

(b) What is the damped natural frequency of the system?

(c) Use Matlab to plot the frequency response, and determine what input frequency will produce the largest magnitude output.

(d) Suppose this meter is now used to measure a \(1 - V\) AC input with a frequency of \(2rad/sec\). What amplitude will the meter indicate after initial transients have died out? What is the phase lag of the output with respect to the input? Use a Bode plot analysis to answer these questions. Use the lsim command in Matlab to verify your answer in part (d).

237. Problems for Section 6.2: Neutral Stability

6.16 Determine the range of \(K\) for which the closed-loop systems (see Fig. 6.18) are stable for each of the cases below by making a Bode plot for \(K = 1\) and imagining the magnitude plot sliding up or down until instability results. Verify your answers by using a very rough sketch of a root-locus plot.

(a) \(KG(s) = \frac{K(s + 2)}{s + 20}\)

Figure 6.86

Voltmeter schematic

\[\begin{matrix} I & \ = 40 \times 10^{- 6}\text{ }kg \cdot m^{2} \\ k & \ = 4 \times 10^{- 6}\text{ }kg \cdot m^{2}/\sec^{2} \\ T & \ = \text{~}\text{input torque}\text{~} = K_{m}v \\ v & \ = \text{~}\text{input voltage}\text{~} \\ K_{m} & \ = 4 \times 10^{- 6}\text{ }N \cdot m/V \end{matrix}\]

(b) \(KG(s) = \frac{K}{(s + 10)(s + 1)^{2}}\)

(c) \(KG(s) = \frac{K(s + 10)(s + 1)}{(s + 100)(s + 5)^{3}}\)

6.17 Determine the range of \(K\) for which each of the listed systems is stable by making a Bode plot for \(K = 1\) and imagining the magnitude plot sliding up or down until instability results. Verify your answers by using Matlab with the marginal stability value of \(K\).

(a) \(KG(s) = \frac{K(s + 1)}{s(s + 10)}\)

(b) \(KG(s) = \frac{K(s + 1)}{s^{2}(s + 10)}\)

(c) \(KG(s) = \frac{K}{(s + 2)\left( s^{2} + 9 \right)}\)

(d) \(KG(s) = \frac{K(s + 1)^{2}}{s^{3}(s + 10)}\)

238. Problems for Section 6.3: The Nyquist Stability Criterion

6.18 (a) Sketch the Nyquist plot for an open-loop system with transfer function \(1/s^{2}\); that is, sketch

\[\left. \ \frac{1}{s^{2}} \right|_{s = C_{1}}, \]

where \(C_{1}\) is a contour enclosing the entire RHP, as shown in Fig. 6.17. (Hint: Assume \(C_{1}\) takes a small detour around the poles at \(s = 0\), as shown in Fig. 6.27.)

(b) Repeat part (a) for an open-loop system whose transfer function is \(G(s) = \frac{1}{s^{2} + \omega_{0}^{2}}\).

6.19 Sketch the Nyquist plot based on the Bode plots for each of the following systems, then compare your result with that obtained by using the Matlab command nyquist. Don't be concerned with the details of exactly where the curve goes, but do make sure it crosses the real axis at the right spot, has the correct number of \(- 1\) encirclements, and goes off to infinity in the correct direction.

(a) \(KG(s) = \frac{K(s + 2)}{s + 10}\)

(b) \(KG(s) = \frac{K}{(s + 10)(s + 2)^{2}}\)

(c) \(KG(s) = \frac{K(s + 10)(s + 1)}{(s + 100)(s + 2)^{3}}\)

(d) Using your plots, estimate the range of \(K\) for which each system is stable, and qualitatively verify your result by using a rough sketch of a root-locus plot.

6.20 Draw a Nyquist plot for

\[KG(s) = \frac{K(s + 1)}{s(s + 3)} \]

choosing the contour to be to the right of the singularity on the \(j\omega\)-axis. Next, using the Nyquist criterion, determine the range of \(K\) for which the system is stable. Then redo the Nyquist plot, this time choosing the contour to be to the left of the singularity on the imaginary axis. Again,

Figure 6.87

Control system for Problem 6.21 using the Nyquist criterion, check the range of \(K\) for which the system is stable. Are the answers the same? Should they be?

6.21 Draw the Nyquist plot for the system in Fig. 6.87. Using the Nyquist stability criterion, determine the range of \(K\) for which the system is stable. Consider both positive and negative values of \(K\).

6.22 (a) For \(\omega = 0.1\) to \(100rad/sec\), sketch the phase of the minimum-phase system

\[G(s) = \left. \ \frac{s + 1}{s + 10} \right|_{s = j\omega} \]

and the nonminimum-phase system

\[G(s) = - \left. \ \frac{s - 1}{s + 10} \right|_{s = j\omega} \]

noting that \(\angle(j\omega - 1)\) decreases with \(\omega\) rather than increasing.

(b) Does an RHP zero affect the relationship between the -1 encirclements on a polar plot and the number of unstable closed-loop roots in Eq. (6.28)?

(c) Sketch the phase of the following unstable system for \(\omega = 0.1\) to 100 \(rad/sec\) :

\[G(s) = \left. \ \frac{s + 1}{s - 10} \right|_{s = j\omega} \]

(d) Check the stability of the systems in (a) and (c) using the Nyquist criterion on \(KG(s)\). Determine the range of \(K\) for which the closedloop system is stable, and check your results qualitatively by using a rough root-locus sketch.

6.23 Nyquist plots and the classical plane curves: Determine the Nyquist plot, using Matlab, for the systems given below, with \(K = 1\), and verify that the beginning point and end point for the \(j\omega > 0\) portion have the correct magnitude and phase:

(a) The classical curve called Cayley's Sextic, discovered by Maclaurin in 1718:

\[KG(s) = K\frac{1}{(s + 1)^{3}} \]

(b) The classical curve called the Cissoid, meaning ivy-shaped:

\[KG(s) = K\frac{1}{s(s + 1)} \]

(c) The classical curve called the Folium of Kepler, studied by Kepler in 1609:

\[KG(s) = K\frac{1}{(s - 1)(s + 1)^{2}} \]

(d) The classical curve called the Folium (not Kepler's):

\[KG(s) = K\frac{1}{(s - 1)(s + 2)} \]

(e) The classical curve called the Nephroid, meaning kidney-shaped:

\[KG(s) = K\frac{2(s + 1)\left( s^{2} - 4s + 1 \right)}{(s - 1)^{3}} \]

(f) The classical curve called Nephroid of Freeth, named after the English mathematician T. J. Freeth:

\[KG(s) = K\frac{(s + 1)\left( s^{2} + 3 \right)}{4(s - 1)^{3}} \]

(g) A shifted Nephroid of Freeth:

\[KG(s) = K\frac{\left( s^{2} + 1 \right)}{(s - 1)^{3}} \]

239. Problems for Section 6.4: Stability Margins

6.24 The Nyquist plots for some actual control systems resemble the one shown in Fig. 6.88. What are the gain and phase margin(s) for the system of Fig. 6.88, given that \(\alpha = 0.4,\beta = 1.3\), and \(\phi = 40^{\circ}\). Describe what happens to the stability of the system as the gain goes from zero to a very large value. Sketch what the corresponding root locus must look like for such a system. Also, sketch what the corresponding Bode plots would look like for the system.

Figure 6.88

Nyquist plot for

Problem 6.24

6.25 The Bode plot for

\[G(s) = \frac{100\lbrack(s/10) + 1\rbrack}{s\lbrack(s/1) - 1\rbrack\lbrack(s/100) + 1\rbrack} \]

Figure 6.89

Bode plot for

Problem 6.25 is shown in Fig. 6.89.

(a) Why does the phase start at \(- 270^{\circ}\) at the low frequencies?

(b) Sketch the Nyquist plot for \(G(s)\).

(c) Is the closed-loop system for the Bode plot shown in Fig. 6.89 stable?

(d) Will the system be stable if the gain is lowered by a factor of 100 ? Make a rough sketch of a root locus for the system, and qualitatively confirm your answer.

6.26 Suppose in Fig. 6.90,

\[G(s) = \frac{25(s + 1)}{s(s + 2)\left( s^{2} + 2s + 16 \right)} \]

Use Matlab's margin to calculate the PM and GM for \(G(s)\) and, on the basis of the Bode plots, conclude which margin would provide more useful information to the control designer for this system.

Figure 6.90

Control system for Problem 6.26

Figure 6.91

Control system for

Problem 6.27
6.27 Consider the system given in Fig. 6.91.

(a) Use Matlab to obtain Bode plots for \(K = 1\), then use the plots to estimate the range of \(K\) for which the system will be stable.

(b) Verify the stable range of \(K\) by using margin to determine PM for selected values of \(K\).

(c) Use rlocus to determine the values of \(K\) at the stability boundaries.

(d) Sketch the Nyquist plot of the system, and use it to verify the number of unstable roots for the unstable ranges of \(K\).

(e) Using Routh's criterion, determine the ranges of \(K\) for closed-loop stability of this system.

6.28 Suppose in Fig. 6.90,

\[G(s) = \frac{3.2(s + 1)}{s(s + 2)\left( s^{2} + 0.2s + 16 \right)} \]

Use Matlab's margin to calculate the PM and GM for \(G(s)\), and comment on whether you think this system will have well-damped closed-loop roots.

6.29 For a given system, show that the ultimate period \(P_{u}\) and the corresponding ultimate gain \(K_{u}\) for the Ziegler-Nichols method can be found by using the following:

(a) Nyquist diagram

(b) Bode plot

(c) Root locus
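The Bode-plot route of part (b) amounts to reading \(K_u = 1/|G|\) and \(P_u = 2\pi/\omega_{180}\) at the phase-crossover frequency. For a concrete illustration (the problem says "for a given system," so the plant below is our own choice, \(G(s) = 1/(s+1)^3\)), a Python/numpy sketch:

```python
import numpy as np

# Illustrative plant (not specified in the problem): G(s) = 1/(s+1)^3.
# Ultimate gain K_u = 1/|G(j*w180)| at the phase crossover w180 (phase = -180 deg);
# ultimate period P_u = 2*pi/w180.  Analytically w180 = sqrt(3) and K_u = 8 here.
w = np.logspace(-2, 2, 400000)
G = 1.0 / (1j * w + 1) ** 3
phase = np.unwrap(np.angle(G))

i = np.argmin(np.abs(phase + np.pi))   # phase-crossover index
w180 = w[i]
Ku = 1.0 / np.abs(G[i])
Pu = 2 * np.pi / w180
```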

6.30 If a system has the open-loop transfer function

\[G(s) = \frac{\omega_{n}^{2}}{s\left( s + 2\zeta\omega_{n} \right)} \]

with unity feedback, then the closed-loop transfer function is given by

\[T(s) = \frac{\omega_{n}^{2}}{s^{2} + 2\zeta\omega_{n}s + \omega_{n}^{2}} \]

Verify the values of the PM shown in Fig. 6.36 for \(\zeta = 0.1,0.4\), and 0.7.
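The verification can be done in closed form: setting \(|G(j\omega_c)| = 1\) gives \(\omega_c = \omega_n\sqrt{\sqrt{1 + 4\zeta^4} - 2\zeta^2}\), and then \(PM = \tan^{-1}(2\zeta\omega_n/\omega_c)\), with \(\omega_n\) canceling out. A Python/numpy sketch of that computation:

```python
import numpy as np

# For G(s) = wn^2/(s(s + 2*zeta*wn)) with unity feedback, |G(j*wc)| = 1 gives
# wc = wn*sqrt(sqrt(1 + 4*zeta^4) - 2*zeta^2), and PM = atan(2*zeta*wn/wc).
def pm_deg(zeta):
    wc = np.sqrt(np.sqrt(1 + 4 * zeta**4) - 2 * zeta**2)  # crossover with wn = 1
    return np.degrees(np.arctan2(2 * zeta, wc))

pms = {z: pm_deg(z) for z in (0.1, 0.4, 0.7)}
# Roughly {0.1: 11.4, 0.4: 43.1, 0.7: 65.2} deg, consistent with PM ~= 100*zeta.
```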

6.31 Consider the unity feedback system with the open-loop transfer function

\[G(s) = \frac{K}{s\left( \frac{s}{0.4} + 1 \right)\left( \frac{s^{2}}{4} + \frac{s}{5} + 1 \right)} \]

(a) Use Matlab to draw the Bode plots for \(G(j\omega)\) assuming \(K = 1\).

(b) What gain \(K\) is required for a PM of \(50^{\circ}\)? What is the GM for this value of \(K\)?

(c) What is \(K_{v}\) when the gain \(K\) is set for \(PM = 50^{\circ}\)?

(d) Create a root locus with respect to \(K\), and indicate the roots for a PM of \(50^{\circ}\).

6.32 For the system depicted in Fig. 6.92(a), the transfer-function blocks are defined by

\[G(s) = \frac{1}{(s + 2)^{2}(s + 4)}\ \text{~}\text{and}\text{~}\ H(s) = \frac{1}{s + 1} \]

(a) Using rlocus and rlocfind, determine the value of \(K\) at the stability boundary.

(b) Using rlocus and rlocfind, determine the value of \(K\) that will produce roots with damping corresponding to \(\zeta = 0.707\).

(c) What is the GM of the system if the gain is set to the value determined in part (b)? Answer this question without using any frequency-response methods.

(d) Create the Bode plots for the system, and determine the GM that results for \(PM = 65^{\circ}\). What damping ratio would you expect for this PM?

(e) Sketch a root locus for the system shown in Fig. 6.92 (b). How does it differ from the one in part (a)?

(f) For the systems in Figs. 6.92 (a) and (b), how does the transfer function \(Y_{2}(s)/R(s)\) differ from \(Y_{1}(s)/R(s)\) ? Would you expect the step response to \(r(t)\) to be different for the two cases?

(a)

(b)

Figure 6.92

Block diagram for Problem 6.32: (a) unity feedback; (b) \(H(s)\) in feedback

6.33 For the system shown in Fig. 6.93, use Bode and root-locus plots to determine the gain and frequency at which instability occurs. What gain (or gains) gives a \(PM\) of \(20^{\circ}\) ? What is the \(GM\) when \(PM = 20^{\circ}\) ?

Figure 6.93

Control system for

Problem 6.33

Figure 6.94

Magnetic tape-drive speed control

Figure 6.95

Control system for

Problems 6.35, 6.69, and 6.70

Figure 6.96

Control system for

Problem 6.36
6.34 A magnetic tape-drive speed-control system is shown in Fig. 6.94. The speed sensor is slow enough that its dynamics must be included. The speed-measurement time constant is \(\tau_{m} = 0.5sec\); the reel time constant is \(\tau_{r} = J/b = 4sec\), where \(b =\) the output shaft damping constant \(= 1\text{ }N \cdot m \cdot sec\); and the motor time constant is \(\tau_{1} = 1sec\).

(a) Determine the gain \(K\) required to keep the steady-state speed error to less than \(7\%\) of the reference-speed setting.

(b) Determine the gain and phase margins of the system. Is this a good system design?

6.35 For the system in Fig. 6.95, determine the Nyquist plot and apply the Nyquist criterion

(a) to determine the range of values of \(K\) (positive and negative) for which the system will be stable, and

(b) to determine the number of roots in the RHP for those values of \(K\) for which the system is unstable. Check your answer by using a rough root-locus sketch.

6.36 For the system shown in Fig. 6.96, determine the Nyquist plot and apply the Nyquist criterion

(a) to determine the range of values of \(K\) (positive and negative) for which the system will be stable, and

Figure 6.97

Control system for

Problem 6.37

(b) to determine the number of roots in the RHP for those values of \(K\) for which the system is unstable. Check your answer by using a rough root-locus sketch.

6.37 For the system shown in Fig. 6.97, determine the Nyquist plot and apply the Nyquist criterion

(a) to determine the range of values of \(K\) (positive and negative) for which the system will be stable, and

(b) to determine the number of roots in the RHP for those values of \(K\) for which the system is unstable. Check your answer by using a rough root-locus sketch.

6.38 The Nyquist diagrams for two stable, open-loop systems are sketched in Fig. 6.98. The proposed operating gain is indicated as \(K_{0}\), and arrows indicate increasing frequency. In each case, give a rough estimate of the following quantities for the closed-loop (unity feedback) system:

(a) Phase margin;

(b) Damping ratio;

(c) Range of gain for stability (if any);

(d) System type (0, 1, or 2).

Figure 6.98

Nyquist plots for Problem 6.38

(a)

(b)

6.39 The steering dynamics of a ship are represented by the transfer function

\[\frac{V(s)}{\delta_{r}(s)} = G(s) = \frac{K\lbrack - (s/0.142) + 1\rbrack}{s(s/0.325 + 1)(s/0.0362 + 1)} \]

where \(V\) is the ship's lateral velocity in meters per second, and \(\delta_{r}\) is the rudder angle in radians.

Figure 6.99

Magnitude frequency response for Problem 6.41

(a) Use the Matlab command bode to plot the log magnitude and phase of \(G(j\omega)\) for \(K = 0.2\).

(b) On your plot, indicate the crossover frequency, PM, and GM.

(c) Is the ship-steering system stable with \(K = 0.2\)?

(d) What value of \(K\) would yield a \(PM\) of \(30^{\circ}\), and what would the crossover frequency be?

6.40 For the open-loop system

\[KG(s) = \frac{K(s + 0.1)}{s^{2}(s + 9)^{2}} \]

determine the value of \(K\) at the stability boundary and the values of \(K\) at the points where \(PM = 50^{\circ}\).

240. Problems for Section 6.5: Bode's Gain-Phase Relationship

6.41 The frequency response of a plant in a unity feedback configuration with no controller is sketched in Fig. 6.99. Assume the plant is open-loop stable and minimum phase, and that it includes an integrator.

(a) What is the velocity constant \(K_{v}\) for the system as drawn?

(b) What is the damping ratio of the complex poles?

(c) What is the PM of the system as shown? (Estimate to within \(\pm 10^{\circ}\).)

(d) Assume the reference input is corrupted by noise consisting of a sinusoidal signal with \(\omega = 20rad/sec\). Estimate the approximate factor by which the noise is attenuated at the output.

6.42 For the system

\[G(s) = \frac{100(s/a + 1)}{s(s + 1)(s/b + 1)} \]

where \(b = 10a\), find the approximate value of \(a\) that will yield the best PM by sketching only candidate values of the frequency-response magnitude.
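The sketching exercise can be cross-checked numerically by sweeping candidate values of \(a\) and computing the PM at each; a Python/numpy sketch (the sweep range and grid are arbitrary choices):

```python
import numpy as np

# Sweep candidate zero locations a (with b = 10a) and evaluate the PM of
# G(s) = 100(s/a + 1) / (s(s + 1)(s/b + 1)) at each, to locate the best a.
w = np.logspace(-1, 3, 200000)
s = 1j * w

def pm_for(a):
    G = 100 * (s / a + 1) / (s * (s + 1) * (s / (10 * a) + 1))
    i = np.argmin(np.abs(np.abs(G) - 1.0))            # gain-crossover index
    ph = np.degrees(np.unwrap(np.angle(G)))[i]
    return 180.0 + ph

cands = np.logspace(0, 1.5, 60)                       # a from 1 to ~31.6
pms = np.array([pm_for(a) for a in cands])
a_best = cands[np.argmax(pms)]
```

The best \(a\) places the crossover at the geometric mean of the zero at \(a\) and the pole at \(10a\), where the lead contribution is largest.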

241. Problem for Section 6.6: Closed-Loop Frequency Response

6.43 For the open-loop system

\[KG(s) = \frac{K(s + 0.1)}{s^{2}(s + 9)^{2}} \]

determine the value for \(K\) that will yield \(PM \geq 30^{\circ}\) and the maximum possible closed-loop bandwidth. Use Matlab to find the bandwidth.

242. Problems for Section 6.7: Compensation Design

6.44 For the lead compensator

\[D_{c}(s) = \frac{T_{D}s + 1}{\alpha T_{D}s + 1} \]

where \(\alpha < 1\),

(a) Show the phase of the lead compensator is given by

\[\phi = \tan^{- 1}\left( T_{D}\omega \right) - \tan^{- 1}\left( \alpha T_{D}\omega \right) \]

(b) Show the frequency where the phase is maximum is given by

\[\omega_{\max} = \frac{1}{T_{D}\sqrt{\alpha}} \]

and the maximum phase corresponds to

\[\sin\phi_{\max} = \frac{1 - \alpha}{1 + \alpha} \]

(c) Rewrite your expression for \(\omega_{\max}\) to show the maximum-phase frequency occurs at the geometric mean of the two corner frequencies on a logarithmic scale:

\[\log\omega_{\max} = \frac{1}{2}\left( \log\frac{1}{T_{D}} + \log\frac{1}{\alpha T_{D}} \right) \]

(d) To derive the same results in terms of the pole-zero locations, rewrite \(D_{c}(s)\) as

\[D_{c}(s) = \frac{s + z}{s + p} \]

then show that the phase is given by

\[\phi = \tan^{- 1}\left( \frac{\omega}{|z|} \right) - \tan^{- 1}\left( \frac{\omega}{|p|} \right) \]

such that

\[\omega_{\max} = \sqrt{|z||p|} \]

Hence, the frequency at which the phase is maximum is the square root of the product of the pole and zero locations.
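The results of parts (a) and (b) can be checked numerically by maximizing the phase over a frequency grid; a Python/numpy sketch with the illustrative values \(\alpha = 0.1\), \(T_D = 1\):

```python
import numpy as np

# Numeric check of the lead-compensator results with alpha = 0.1, T_D = 1:
# the phase peaks at w_max = 1/(T_D*sqrt(alpha)), and
# sin(phi_max) = (1 - alpha)/(1 + alpha).
alpha, TD = 0.1, 1.0
w = np.logspace(-2, 2, 200001)
phi = np.arctan(TD * w) - np.arctan(alpha * TD * w)   # phase in radians

w_max_numeric = w[np.argmax(phi)]
w_max_formula = 1.0 / (TD * np.sqrt(alpha))
phi_max_formula = np.arcsin((1 - alpha) / (1 + alpha))
```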

6.45 For the third-order servo system

\[G(s) = \frac{10,000}{s(s + 20)(s + 30)} \]

design a lead compensator so that \(PM \geq 60^{\circ}\) and \(\omega_{BW} \geq 10rad/sec\) using Bode plot sketches.

Figure 6.100

Control system for Problem 6.46

6.46 For the system shown in Fig. 6.100, suppose

\[G(s) = \frac{5}{s(s + 1)(s/5 + 1)} \]

Use Bode plot sketches to design a lead compensation \(D_{c}(s)\) with unity \(DC\) gain so that \(PM \geq 40^{\circ}\). Then verify and refine your design by using Matlab. What is the approximate bandwidth of the system?

6.47 Derive the transfer function from \(T_{d}\) to \(\theta\) for the system in Fig. 6.67. Then apply the Final Value Theorem (assuming \(T_{d} =\) constant) to determine whether \(\theta(\infty)\) is nonzero for the following two cases:

(a) When \(D_{c}(s)\) has no integral term: \(\lim_{s \rightarrow 0}\mspace{2mu} D_{c}(s) =\) constant;

(b) When \(D_{c}(s)\) has an integral term:

\[D_{c}(s) = \frac{D_{c}^{'}(s)}{s} \]

In this case, \(\lim_{s \rightarrow 0}\mspace{2mu} D_{c}^{'}(s) =\) constant.

6.48 The inverted pendulum has a transfer function given by Eq. (2.31), which is similar to

\[G(s) = \frac{1}{s^{2} - 1}\text{.}\text{~} \]

(a) Use Bode plot sketches to design a lead compensator to achieve a \(PM\) of \(30^{\circ}\). Then verify and refine your design by using Matlab.

(b) Sketch a root locus and correlate it with the Bode plot of the system.

(c) Could you obtain the frequency response of this system experimentally?

6.49 The open-loop transfer function of a unity feedback system is

\[G(s) = \frac{K}{s\left( \frac{s}{3} + 1 \right)\left( \frac{s}{40} + 1 \right)} \]

(a) Design a lag compensator for \(G(s)\) using Bode plot sketches so that the closed-loop system satisfies the following specifications:

(i) The steady-state error to a unit ramp reference input is less than 0.05.

(ii) \(PM \geq 40^{\circ}\).

(b) Verify and refine your design by using Matlab.

6.50 The open-loop transfer function of a unity feedback system is

\[G(s) = \frac{K}{(s + 10)\left( \frac{s^{2}}{5} + \frac{s}{6} + 1 \right)} \]

(a) Design a lead compensator for \(G(s)\) using Bode plot sketches so that the closed-loop system satisfies the following specifications:

(i) The steady-state error to a unit step reference input is less than 0.01.

(ii) For the dominant closed-loop poles the damping ratio \(\zeta \geq 0.4\).

(b) Verify your design with a direct computation of the damping of the dominant closed-loop poles.

6.51 A DC motor with negligible armature inductance is to be used in a position control system. Its open-loop transfer function is given by

\[G(s) = \frac{37}{s\left( \frac{s}{7} + 1 \right)} \]

(a) Design a compensator for the motor using Bode plot sketches so that the closed-loop system satisfies the following specifications:

(i) The steady-state error to a unit ramp input is less than \(1/150\).

(ii) The unit step response has an overshoot of less than \(20\%\).

(iii) The bandwidth of the compensated system is no less than that of the uncompensated system.

(b) Verify and/or refine your design, including a direct computation of the step-response overshoot.

6.52 The open-loop transfer function of a unity feedback system is

\[G(s) = \frac{K}{s\left( 1 + \frac{s}{0.5} \right)\left( 1 + \frac{s}{1.5} \right)} \]

(a) Sketch the system block diagram including input reference commands and sensor noise.

(b) Design a compensator for \(G(s)\) using Bode plot sketches so that the closed-loop system satisfies the following specifications:

(i) The steady-state error to a unit ramp input is less than 0.03.

(ii) \(PM \geq 45^{\circ}\).

(iii) The steady-state error for sinusoidal inputs with \(\omega < 0.02rad/sec\) is less than \(1/300\).

(iv) Noise components introduced with the sensor signal at frequencies greater than \(50rad/sec\) are to be attenuated at the output by at least a factor of 1000.

(c) Verify and/or refine your design including a computation of the closed-loop frequency response to verify (iv).

6.53 The transfer function for a quadrotor attitude control system between a pitch control input, \(T_{lon}\), and the pitch angle, \(\theta\), is

\[\frac{\theta(s)}{T_{\text{lon}\text{~}}(s)} = G_{1}(s) = \frac{1}{s^{2}(s + 2)} \]

Design a lead compensator, \(D_{c}(s)\), using frequency design so:

\[\omega_{n} \geq 1rad/sec, \qquad \zeta \geq 0.44 \]

Compare your design with that arrived at using root locus design in Example 5.12.

6.54 Consider a satellite attitude-control system with the transfer function

\[G(s) = \frac{0.05(s + 25)}{s^{2}\left( s^{2} + 0.1s + 4 \right)} \]

Amplitude-stabilize the system using lead compensation so that \(GM \geq\) \(2(6db)\), and \(PM \geq 45^{\circ}\), keeping the bandwidth as high as possible with a single lead.

6.55 In one mode of operation, the autopilot of a jet transport is used to control altitude. For the purpose of designing the altitude portion of the autopilot loop, only the long-period airplane dynamics are important. The linearized relationship between altitude and elevator angle for the long-period dynamics is

\[G(s) = \frac{h(s)}{\delta(s)} = \frac{20(s + 0.01)}{s\left( s^{2} + 0.01s + 0.0025 \right)}\frac{ft/sec}{\deg}. \]

The autopilot receives from the altimeter an electrical signal proportional to altitude. This signal is compared with a command signal (proportional to the altitude selected by the pilot), and the difference provides an error signal. The error signal is processed through compensation, and the result is used to command the elevator actuators. A block diagram of this system is shown in Fig. 6.101. You have been given the task of designing the compensation. Begin by considering a proportional control law \(D_{c}(s) = K\).

Figure 6.101

Control system for

Problem 6.55

(a) Use Matlab to draw a Bode plot of the open-loop system for \(D_{c}(s) =\) \(K = 1\).

(b) What value of \(K\) would provide a crossover frequency (i.e., where \(|G| = 1\)) of \(0.16rad/sec\)?

(c) For this value of \(K\), would the system be stable if the loop were closed?

(d) What is the PM for this value of \(K\)?

(e) Sketch the Nyquist plot of the system, and locate carefully any points where the phase angle is \(180^{\circ}\) or the magnitude is unity.

(f) Use Matlab to plot the root locus with respect to \(K\), and locate the roots for your value of \(K\) from part (b).
(g) What steady-state error would result if the command was a step change in altitude of \(1000ft\)?

For parts (h) and (i), assume a compensator of the form

\[D_{c}(s) = \frac{T_{D}s + 1}{\alpha T_{D}s + 1} \]

(h) Choose the parameters \(K,T_{D}\), and \(\alpha\) so the crossover frequency is \(0.16rad/sec\) and the \(PM\) is greater than \(50^{\circ}\). Verify your design by superimposing a Bode plot of \(D_{c}(s)G(s)/K\) on top of the Bode plot you obtained for part (a), and measure the PM directly.

(i) Use Matlab to plot the root locus with respect to \(K\) for the system, including the compensator you designed in part (h). Locate the roots for your value of \(K\) from part (h).

(j) Altitude autopilots also have a mode in which the rate of climb is sensed directly and commanded by the pilot.

(i) Sketch the block diagram for this mode.

(ii) Modify the \(G(s)\) stated above for the case where the variable to be controlled is the rate of altitude change.

(iii) Design \(D_{c}(s)\) so the system has the same crossover frequency as the altitude hold mode and the PM is greater than \(50^{\circ}\).

6.56 For a system with open-loop transfer function

\[G(s) = \frac{10}{s\lbrack(s/1.4) + 1\rbrack\lbrack(s/3) + 1\rbrack} \]

design a lag compensator with unity \(DC\) gain so that \(PM \geq 35^{\circ}\). What is the approximate bandwidth of this system?

6.57 For the ship-steering system in Problem 6.39,

(a) Design a compensator that meets the following specifications:

(i) Velocity constant \(K_{v} = 2\),

(ii) \(PM \geq 50^{\circ}\), and

(iii) Unconditional stability ( \(PM > 0\) for all \(\omega \leq \omega_{c}\), the crossover frequency).

(b) For your final design, draw a root locus with respect to \(K\), and indicate the location of the closed-loop poles.

6.58 Consider a unity-feedback system with

\[G(s) = \frac{1}{s(s/20 + 1)\left( s^{2}/100^{2} + 0.5s/100 + 1 \right)} \]

(a) A lead compensator is introduced with \(\alpha = 1/5\) and a zero at \(1/T =\) 20. How must the gain be changed to obtain crossover at \(\omega_{c} = 31.6\) \(rad/sec\), and what is the resulting value of \(K_{v}\) ?

(b) With the lead compensator in place, what is the required value of \(K\) for a lag compensator that will readjust the gain to a \(K_{v}\) value of 100?

(c) Place the pole of the lag compensator at \(3.16rad/sec\), and determine the zero location that will maintain the crossover frequency at
\(\omega_{c} = 31.6rad/sec\). Plot the compensated frequency response on the same graph.

(d) Determine the PM of the compensated design.

6.59 Golden Nugget Airlines had great success with their free bar near the tail of the airplane. (See Problem 5.39.) However, when they purchased a much larger airplane to handle the passenger demand, they discovered there was some flexibility in the fuselage that caused a lot of unpleasant yawing motion at the rear of the airplane when in turbulence, which caused the revelers to spill their drinks. The approximate transfer function for the rigid body roll/yaw motion, called the "Dutch roll" mode (see Section 10.3.1), is

\[\frac{r(s)}{\delta_{r}(s)} = \frac{8.75\left( 4s^{2} + 0.4s + 1 \right)}{(s/0.01 + 1)\left( s^{2} + 0.24s + 1 \right)} \]

where \(r\) is the airplane's yaw rate and \(\delta_{r}\) is the rudder angle. In performing a finite element analysis (FEA) of the fuselage structure and adding those dynamics to the Dutch roll motion, they found that the transfer function needed additional terms which reflected the fuselage lateral bending that occurred due to excitation from the rudder and turbulence. The revised transfer function is

\[\frac{r(s)}{\delta_{r}(s)} = \frac{8.75\left( 4s^{2} + 0.4s + 1 \right)}{(s/0.01 + 1)\left( s^{2} + 0.24s + 1 \right)} \cdot \frac{1}{\left( s^{2}/\omega_{b}^{2} + 2\zeta s/\omega_{b} + 1 \right)}, \]

where \(\omega_{b}\) is the frequency of the bending mode \(( = 10rad/sec)\) and \(\zeta\) is the bending mode damping ratio \(( = 0.02)\). Most swept-wing airplanes have a "yaw damper," which essentially feeds back yaw rate measured by a rate gyro to the rudder with a simple proportional control law. For the new Golden Nugget airplane, the proportional feedback gain \(K = 1\), where

\[\delta_{r}(s) = - Kr(s) \]

(a) Make a Bode plot of the open-loop system, determine the PM and GM for the nominal design, and plot the step response and Bode magnitude of the closed-loop system. What is the frequency of the lightly damped mode that is causing the difficulty?

(b) Investigate remedies to quiet down the oscillations, but maintain the same low-frequency gain in order not to affect the quality of the Dutch roll damping provided by the yaw rate feedback. Specifically, investigate each of the following, one at a time:

(i) Increasing the damping of the bending mode from \(\zeta = 0.02\) to \(\zeta = 0.04\) (this would require adding energy-absorbing material in the fuselage structure).

(ii) Increasing the frequency of the bending mode from \(\omega_{b} = 10\) \(rad/sec\) to \(\omega_{b} = 20rad/sec\) (would require stronger and heavier structural elements).

(iii) Adding a low-pass filter in the feedback - that is, replacing \(K\) in Eq. (6.75) with \(KD_{c}(s)\), where

\[D_{c}(s) = \frac{1}{s/\tau_{p} + 1} \]

(iv) Adding a notch filter as described in Section 5.4.3. Pick the frequency of the notch zero to be at \(\omega_{b}\), with a damping of \(\zeta = 0.04\), and pick the denominator poles to be \((s/100 + 1)^{2}\), keeping the \(DC\) gain of the filter \(= 1\).

(c) Investigate the sensitivity of the preceding two compensated designs (iii and iv) by determining the effect of a reduction in the bending mode frequency of \(- 10\%\). Specifically, reexamine the two designs by tabulating the GM, PM, closed-loop bending mode damping ratio, and resonant-peak amplitude, and qualitatively describe the differences in the step response.

(d) What do you recommend to Golden Nugget to help their customers quit spilling their drinks? (Telling them to get back in their seats is not an acceptable answer for this problem! Make the recommendation in terms of improvements to the yaw damper.)

\(\bigtriangleup \ \mathbf{6.60}\) Consider a system with the open-loop transfer function (loop gain)

\[G(s) = \frac{1}{s(s + 1)(s/10 + 1)} \]

(a) Create the Bode plot for the system, and find GM and PM.

(b) Compute the sensitivity function and plot its magnitude frequency response.

(c) Compute the vector margin (VM).
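Parts (b) and (c) are connected: the vector margin is the shortest distance from the Nyquist curve to the \(-1\) point, which equals \(1/\max|\mathcal{S}(j\omega)|\). A Python/numpy sketch of that computation (grid choices are arbitrary):

```python
import numpy as np

# Vector margin for G(s) = 1/(s(s+1)(s/10+1)): the shortest distance from the
# Nyquist curve to -1 equals 1/max|S(jw)|, with sensitivity S = 1/(1 + G).
w = np.logspace(-2, 3, 400000)
s = 1j * w
G = 1.0 / (s * (s + 1) * (s / 10 + 1))
S = 1.0 / (1.0 + G)

S_peak = np.max(np.abs(S))   # peak sensitivity magnitude
vm = 1.0 / S_peak            # vector margin
```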

\(\bigtriangleup \ \mathbf{6.61}\) Prove the sensitivity function \(\mathcal{S}(s)\) has magnitude greater than 1 inside a circle with a radius of 1 centered at the -1 point. What does this imply about the shape of the Nyquist plot if closed-loop control is to outperform open-loop control at all frequencies?

\(\bigtriangleup \ \mathbf{6.62}\) Consider the system in Fig. 6.100 with the plant transfer function

\[G(s) = \frac{10}{s(s/10 + 1)} \]

(a) We wish to design a compensator \(D_{c}(s)\) that satisfies the following design specifications:

(i) \(K_{v} = 100\)

(ii) \(PM \geq 45^{\circ}\),

(iii) Sinusoidal inputs of up to \(1rad/sec\) to be reproduced with \(\leq 2\%\) error, and

(iv) Sinusoidal inputs with a frequency of greater than \(100rad/sec\) to be attenuated at the output to \(\leq 5\%\) of their input value.

(b) Create the Bode plot of \(G(s)\), choosing the open-loop gain so that \(K_{v} = 100\).

(c) Show that a sufficient condition for meeting the specification on sinusoidal inputs is that the magnitude plot lies outside the shaded regions in Fig. 6.102. Recall that

\[\frac{Y}{R} = \frac{KG}{1 + KG}\ \text{~}\text{and}\text{~}\ \frac{E}{R} = \frac{1}{1 + KG} \]

Figure 6.102

Control system constraints for Problem 6.62

(d) Explain why introducing a lead network alone cannot meet the design specifications.

(e) Explain why a lag network alone cannot meet the design specifications.

(f) Develop a full design using a lead-lag compensator that meets all the design specifications without altering the previously chosen low-frequency open-loop gain.

6.63 The transfer function for a quadrotor drone between altitude control input, \(F_{\text{alt}\text{~}}\), and the altitude, \(h\), is

\[\frac{h(s)}{F_{\text{alt}\text{~}}(s)} = G_{h}(s) = \frac{1}{s^{2}(s + 10)} \]

(a) Based on the rotor arrangements discussed in Example 2.5, determine how to command the four rotors so a vertical force, \(F_{\text{alt}\text{~}}\), is commanded with no effect on the pitch, roll, or yaw angles.

(b) Design a lead compensator, \(D_{c}(s)\), with a lead ratio of 20 using frequency design so that \(\zeta \geq 0.6\), while achieving the maximum possible natural frequency, \(\omega_{n}\).

243. $\bigtriangleup \ $ Problems for Section 6.8: Time Delay

6.64 Assume the system

\[G(s) = \frac{e^{- T_{d}s}}{s + 10} \]

has a 0.2-sec time delay \(\left( T_{d} = 0.2sec \right)\). While maintaining a phase margin \(\geq 40^{\circ}\), find the maximum possible bandwidth by using the following:

(a) One lead-compensator section

\[D_{c}(s) = K\frac{s + a}{s + b} \]

where \(b/a = 100\).

(b) Two lead-compensator sections

\[D_{c}(s) = K\left( \frac{s + a}{s + b} \right)^{2} \]

where \(b/a = 10\).
(c) Comment on the statement in the text about the limitations on the bandwidth imposed by a delay.
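For part (c), the key fact is that a pure delay \(e^{-T_d s}\) has unit magnitude but phase \(-T_d\omega\) (in radians), so its phase lag grows without bound as frequency increases. A small Python/numpy sketch of the resulting ceiling on crossover frequency (the 140-deg budget below just restates the \(40^{\circ}\) PM requirement):

```python
import numpy as np

# A pure delay e^{-Td*s} contributes phase -Td*w radians with no magnitude
# change, steadily eroding the PM as bandwidth grows.  With Td = 0.2 sec and a
# 40-deg PM requirement, crossover can never exceed the frequency at which the
# delay ALONE contributes -140 deg, no matter how much lead is added.
Td = 0.2

def delay_lag_deg(w):
    return np.degrees(Td * w)   # phase lag of the delay, in degrees

w_ceiling = np.radians(140.0) / Td   # ~12.2 rad/sec hard bound on crossover
```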

6.65 Determine the range of \(K\) for which the following systems are stable:

(a) \(G(s) = K\frac{e^{- 4s}}{s}\)

(b) \(G(s) = K\frac{e^{- s}}{s(s + 2)}\)
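For part (a), the stability boundary can be found by hand: the phase of \(e^{-4s}/s\) is \(-90^{\circ} - 4\omega\) (with \(4\omega\) in radians), which first reaches \(-180^{\circ}\) at \(\omega_{180} = \pi/8\); since \(|G(j\omega)| = K/\omega\), neutral stability occurs at \(K = \omega_{180}\). A Python/numpy sketch confirming this numerically:

```python
import numpy as np

# Part (a): G(s) = K*e^{-4s}/s.  Phase = -pi/2 - 4*w radians, so the phase
# crossover is where 4*w = pi/2.  |G(jw)| = K/w there, hence the critical gain
# K_max equals the crossover frequency itself: stable range 0 < K < pi/8.
w = np.linspace(1e-4, 1.0, 200001)
phase = -np.pi / 2 - 4.0 * w                  # radians
w180 = w[np.argmin(np.abs(phase + np.pi))]    # numerically ~pi/8
K_max = w180
```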

6.66 Consider the heat exchanger of Example 2.18 with the open-loop transfer function

\[G(s) = \frac{e^{- 5s}}{(10s + 1)(60s + 1)} \]

(a) Design a lead compensator that yields \(PM \geq 45^{\circ}\) and the maximum possible closed-loop bandwidth.

(b) Design a PI compensator that yields \(PM \geq 45^{\circ}\) and the maximum possible closed-loop bandwidth.

Figure 6.103

Control system for Problem 6.67

244. Problems for Section 6.9: Alternative Presentations of Data

6.67 A feedback control system is shown in Fig. 6.103. The closed-loop system is specified to have an overshoot of less than \(30\%\) to a step input.

(a) Determine the corresponding PM specification in the frequency domain and the corresponding closed-loop resonant-peak value \(M_{r}\). (See Fig. 6.37.)

(b) From Bode plots of the system, determine the maximum value of \(K\) that satisfies the PM specification.

(c) Plot the data from the Bode plots [adjusted by the \(K\) obtained in part (b)] on a copy of the Nichols chart in Fig. 6.81, and determine the resonant-peak magnitude \(M_{r}\). Compare that with the approximate value obtained in part (a).

(d) Use the Nichols chart to determine the resonant-peak frequency \(\omega_{r}\) and the closed-loop bandwidth.
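Part (a) chains together the second-order rules of thumb from the text: overshoot \(M_p\) determines \(\zeta\), \(PM \approx 100\zeta\) (in degrees), and \(M_r = 1/(2\zeta\sqrt{1-\zeta^2})\). A Python/numpy sketch of that chain (all three relations are approximations, as Fig. 6.37 emphasizes):

```python
import numpy as np

# Second-order rules of thumb: overshoot -> zeta -> PM and resonant peak Mr.
Mp = 0.30                                                    # 30% overshoot spec
zeta = -np.log(Mp) / np.sqrt(np.pi**2 + np.log(Mp) ** 2)     # invert Mp formula
pm_approx = 100.0 * zeta                                     # degrees, rough rule
Mr = 1.0 / (2 * zeta * np.sqrt(1 - zeta**2))                 # resonant peak
```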

6.68 The Nichols plots of an uncompensated and a compensated system are shown in Fig. 6.104.

(a) What are the resonance peaks of each system?

(b) What are the PM and GM of each system?

(c) What are the bandwidths of each system?

(d) What type of compensation is used?

Figure 6.104

Nichols plots for Problem 6.68

6.69 Consider the system shown in Fig. 6.95.

(a) Construct an inverse Nyquist plot of \(\lbrack Y(j\omega)/E(j\omega)\rbrack^{- 1}\). (See Appendix W6.9.2 online at www.pearsonglobaleditions.com.)

(b) Show how the value of \(K\) for neutral stability can be read directly from the inverse Nyquist plot.

(c) For \(K = 4\), 2, and 1, determine the gain and phase margins.

(d) Construct a root-locus plot for the system, and identify corresponding points in the two plots. To what damping ratios \(\zeta\) do the GM and PM of part (c) correspond?

6.70 An unstable plant has the transfer function

\[\frac{Y(s)}{F(s)} = \frac{s + 1}{(s - 1)^{2}} \]

A simple control loop is to be closed around it, in the same manner as in the block diagram in Fig. 6.95.
(a) Construct an inverse Nyquist plot of \(Y/F\). (See Appendix W6.9.2.)

(b) Choose a value of \(K\) to provide a PM of \(45^{\circ}\). What is the corresponding GM?

(c) What can you infer from your plot about the stability of the system when \(K < 0\) ?

(d) Construct a root-locus plot for the system, and identify corresponding points in the two plots. In this case, to what value of \(\zeta\) does \(PM = 45^{\circ}\) correspond?

6.71 Consider the system shown in Fig. 6.105(a).

(a) Construct a Bode plot for the system.

(b) Use your Bode plot to sketch an inverse Nyquist plot. (See Appendix W6.9.2.)

(c) Consider closing a control loop around \(G(s)\), as shown in Fig. 6.105(b). Using the inverse Nyquist plot as a guide, read from your Bode plot the values of GM and PM when \(K = 0.7,1.0,1.4\), and 2. What value of \(K\) yields \(PM = 30^{\circ}\) ?

(d) Construct a root-locus plot, and label the same values of \(K\) on the locus. To what value of \(\zeta\) does each pair of PM/GM values correspond? Compare \(\zeta\) versus PM with the rough approximation in Fig. 6.36.

Figure 6.105

Control system for

Problem 6.71

(Block diagram for Fig. 6.105: input \(U\), plant \(G(s) = \frac{4}{s(s + 2)^{2}}\), output \(Y\).)

245. State-Space Design

246. A Perspective on State-Space Design

In addition to the transform techniques of root locus and frequency response, there is a third major method of designing feedback control systems: the state-space method. We will introduce the state-variable method of describing differential equations. In state-space design, the control engineer designs a dynamic compensation by working directly with the state-variable description of the system. Like the transform techniques, the aim of the state-space method is to find a compensation \(D_{c}(s)\) (such as that shown in Fig. 7.1) that satisfies the design specifications. Because the state-space method of describing the plant and computing the compensation is so different from the transform techniques, it may seem at first to be solving an entirely different problem. We selected the examples and analysis given toward the end of this chapter to help convince you that, indeed, state-space design results in a compensator with a transfer function \(D_{c}(s)\) that is equivalent to those \(D_{c}(s)\) compensators obtained with the other two methods.

Because it is particularly well suited to the use of computer techniques, state-space design is increasingly studied and used today by control engineers.

Figure 7.1

A control system design definition

247. Chapter Overview

This chapter begins by considering the purposes and advantages of using state-space design. We will discuss selection of state-variables and state-space models for various dynamic systems through several examples in Section 7.2. Models in state-variable form enhance our ability to apply the computational efficiency of computer-aided design tools such as Matlab. In Section 7.3, we will show that it is beneficial to look at the state-variable form in terms of an analog computer simulation model. In Section 7.4, we will review the development of state-variable equations from block diagrams. We then solve for the dynamic response, using state equations for both hand and computer analysis. Having covered these preliminary fundamentals, we next proceed to the major task of control system design via state-space. The steps of the design method are as follows:

  1. Select closed-loop pole locations (referred to as roots in previous chapters) and develop the control law for the closed-loop system that corresponds to satisfactory dynamic response (see Sections 7.5 and 7.6).

  2. Design an estimator (see Section 7.7).

  3. Combine the control law and the estimator (see Section 7.8).

  4. Introduce the reference input (see Sections 7.5.2 and 7.9).

After working through the central design steps, we will briefly explore the use of integral control in state-space (Section 7.10). The next three sections of this chapter consider briefly some additional concepts pertaining to the state-space method; because they are relatively advanced, they may be considered optional to some courses or readers. Finally, Section 7.15 provides some historical perspective for the material in this chapter.
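Step 1 of the design method can be previewed with a small numeric sketch. The double-integrator plant and the pole locations below are illustrative choices, not an example from the text; Ackermann's formula is the standard closed-form for single-input pole placement:

```python
import numpy as np

# Pole placement for an illustrative double-integrator plant x'' = u:
# A = [[0,1],[0,0]], B = [[0],[1]].  Place closed-loop poles at -1 +/- 1j,
# i.e., desired characteristic polynomial phi(s) = s^2 + 2s + 2, via
# Ackermann's formula: K = [0 1] * inv([B, A@B]) * phi(A).
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])

ctrb = np.hstack([B, A @ B])                 # controllability matrix
phiA = A @ A + 2 * A + 2 * np.eye(2)         # desired polynomial evaluated at A
K = np.array([[0.0, 1.0]]) @ np.linalg.inv(ctrb) @ phiA

closed_loop_poles = np.linalg.eigvals(A - B @ K)   # should be -1 +/- 1j
```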

247.1. Advantages of State-Space

The idea of state-space comes from the state-variable method of describing differential equations. In this method, the differential equations describing a dynamic system are organized as a set of first-order differential equations in the vector-valued state of the system, and the solution is visualized as a trajectory of this state vector in space. State-space control design is the technique in which the control engineer designs a dynamic compensation by working directly with the state-variable description of the system. We will see that the ordinary differential
equations (ODEs) of physical dynamic systems can be manipulated into state-variable form. In the field of mathematics, where ODEs are studied, the state-variable form is called the normal form for the equations. There are several good reasons for studying equations in this form, three of which are listed here:

  • To study more general models: The ODEs do not have to be linear or stationary. Thus, by studying the equations themselves, we can develop methods that are very general. Having them in state-variable form gives us a compact, standard form for study. Furthermore, the techniques of state-space analysis and design easily extend to systems with multiple inputs and/or multiple outputs. Of course, in this text, we study mainly linear time-invariant (LTI) models with single input and output (for the reasons given earlier).

  • To introduce the ideas of geometry into differential equations: In physics, the plane of position versus velocity of a particle or rigid body is called the phase plane, and the trajectory of the motion can be plotted as a curve in this plane. The state is a generalization of that idea to include more than two dimensions. While we cannot easily plot more than three dimensions, the concepts of distance, of orthogonal and parallel lines, and other concepts from geometry can be useful in visualizing the solution of an ODE as a path in state-space.

  • To connect internal and external descriptions: The state of a dynamic system often directly describes the distribution of internal energy in the system. For example, for electro-mechanical systems, it is common to select the following as state-variables: position (potential energy), velocity (kinetic energy), capacitor voltage (electric energy), inductor current (magnetic energy), and thermal systems temperature (thermal energy). The internal energy can always be computed from the state-variables. By a system of analysis to be described shortly, we can relate the state to the system inputs and outputs, and thus connect the internal variables to the external inputs and to the sensed outputs. In contrast, the transfer function relates only the input to the output and does not show the internal behavior. The state form keeps the latter information, which is sometimes important.

Use of the state-space approach has often been referred to as modern control design, and use of transfer-function-based methods, such as root locus and frequency response, referred to as classical control design. However, because the state-space method of description for ODEs has been in use for over 100 years and was introduced to control design in the late 1950s, it seems somewhat misleading to refer to it as modern. We prefer to refer to the two design approaches as the state-space methods and the transform methods.

Advantages of state-space design are especially apparent when the system to be controlled has more than one control input or more than
one sensed output. However, in this book, we shall examine the ideas of state-space design using the simpler Single-Input-Single-Output (SISO) systems. The design approach used for the systems described in state form is "divide and conquer." First, we design the control as if all of the state were measured and available for use in the control law. This provides the possibility of assigning arbitrary dynamics for the system. Having a satisfactory control law based on full-state feedback, we introduce the concept of an observer and construct estimates of the state based on the sensed output. We then show that these estimates can be used in place of the actual state-variables. Finally, we introduce the external reference-command inputs to complete the structure. Only at this point can we recognize that the resulting compensation has the same essential structure as that developed with transform methods.

Before we can begin the design using state descriptions, it is necessary to develop some analytical results and tools from matrix linear algebra for use throughout the chapter. We assume you are familiar with such elementary matrix concepts as the identity matrix, triangular and diagonal matrices, and the transpose of a matrix. We also assume that you have some familiarity with the mechanics of matrix algebra, including adding, multiplying, and inverting matrices. More advanced results will be developed in Section 7.4 in the context of the dynamic response of a linear system. All of the linear algebra results used in this chapter are repeated in Appendix WB available online at www.pearsonglobaleditions.com for your reference and review.

7.2 System Description in State-Space

The motion of any finite dynamic system can be expressed as a set of first-order ODEs; this is often referred to as the state-variable representation. For example, the use of Newton's law and the free-body diagram in Section 2.1 typically leads to second-order differential equations, that is, equations that contain the second derivative, such as \(\overset{¨}{x}\) in Eq. (2.3) or \(\overset{¨}{\theta}\) in Eq. (2.11). The latter equation can be expressed as

\[\begin{matrix} & {\overset{˙}{x}}_{1} = x_{2} \\ & {\overset{˙}{x}}_{2} = \frac{u}{I} \end{matrix}\]

where

\[\begin{matrix} u & \ = F_{c}d + M_{D} \\ x_{1} & \ = \theta \\ x_{2} & \ = \overset{˙}{\theta} \\ {\overset{˙}{x}}_{2} & \ = \overset{¨}{\theta} \end{matrix}\]

Standard form of linear differential equations
The output of this system is \(\theta\), the satellite attitude. These same equations can be represented in the state-variable form as the vector equation

\[\overset{˙}{\mathbf{x}} = \mathbf{Ax} + \mathbf{B}u \]

where the input is \(u\) and the output is

\[y = \mathbf{Cx} + Du\text{.}\text{~} \]

The column vector \(\mathbf{x}\) is called the state of the system and contains \(n\) elements for an \(n\) th-order system. For mechanical systems, the state vector elements usually consist of the positions and velocities of the separate bodies, as is the case for the example in Eqs. (7.1) and (7.2). The quantity \(\mathbf{A}\) is an \(n \times n\) system matrix, \(\mathbf{B}\) is an \(n \times 1\) input matrix, \(\mathbf{C}\) is a \(1 \times n\) row matrix referred to as the output matrix, and \(D\) is a scalar called the direct transmission term. To save space, we will sometimes refer to a state vector by its transpose,

\[\mathbf{x} = \begin{bmatrix} x_{1} & x_{2} & \ldots & x_{n} \end{bmatrix}^{T},\]

which is equivalent to

\[\mathbf{x} = \begin{bmatrix} x_{1} \\ x_{2} \\ \vdots \\ x_{n} \end{bmatrix}\]

The differential equation models of more complex systems, such as those developed in Chapter 2 on mechanical, electrical, and electromechanical systems, can be described by state-variables through selection of positions, velocities, capacitor voltages, and inductor currents as suitable state-variables.

In this chapter, we will consider control systems design using the state-variable form. For the case in which the relationships are nonlinear [such as the case in Eqs. (2.22) and (2.97)], the linear form cannot be used directly. One must linearize the equations as we did in Chapter 2 to fit the form (see also Chapter 9).

The state-variable method of specifying differential equations is used by computer-aided control systems design software packages (for example, Matlab). Therefore, in order to specify linear differential equations to the computer, you need to know the values of the matrices \(\mathbf{A}\), \(\mathbf{B}\), \(\mathbf{C}\), and the constant \(D\).

EXAMPLE 7.1

Determine the \(\mathbf{A},\mathbf{B},\mathbf{C},D\) matrices in the state-variable form for the satellite attitude control model in Example 2.3 with \(M_{D} = 0\).

Solution. Define the attitude and the angular velocity of the satellite as the state-variables, so \(\mathbf{x} \triangleq \begin{bmatrix} \theta & \omega \end{bmatrix}^{T}\). The single second-order equation (2.11) can then be written in an equivalent way as two first-order equations:

\[\begin{matrix} \overset{˙}{\theta} & \ = \omega \\ \overset{˙}{\omega} & \ = \frac{d}{I}F_{c} \end{matrix}\]

These equations are expressed, using Eq. (7.3), \(\overset{˙}{\mathbf{x}} = \mathbf{Ax} + \mathbf{B}u\), as

\[\begin{bmatrix} \overset{˙}{\theta} \\ \overset{˙}{\omega} \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}\begin{bmatrix} \theta \\ \omega \end{bmatrix} + \begin{bmatrix} 0 \\ d/I \end{bmatrix}F_{c}\]

The output of the system is the satellite attitude, \(y = \theta\). Using Eq. (7.4), \(y = \mathbf{Cx} + Du\), this relation is expressed as

\[y = \begin{bmatrix} 1 & 0 \end{bmatrix}\begin{bmatrix} \theta \\ \omega \end{bmatrix}\]

Therefore, the matrices for the state-variable form are

\[\mathbf{A} = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix},\ \mathbf{B} = \begin{bmatrix} 0 \\ d/I \end{bmatrix},\ \mathbf{C} = \begin{bmatrix} 1 & 0 \end{bmatrix},\ D = 0\]

and the input \(u \triangleq F_{c}\).

For this very simple example, the state-variable form is a more cumbersome way of writing the differential equation than the second-order version in Eq. (2.11). However, the method is not more cumbersome for most systems, and the advantages of having a standard form for use in computer-aided design have led to widespread use of the state-variable form.
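As a quick numerical check on the satellite model, the double-integrator dynamics can be simulated directly. The sketch below uses Python's scipy.signal as a stand-in for Matlab's ss and step commands; the values of \(d\) and \(I\) are illustrative assumptions, not values from the text. For a constant thrust (unit step scaled by \(d/I\)), the attitude should grow as \(\theta(t) = (d/I)\,t^{2}/2\).

```python
import numpy as np
from scipy.signal import StateSpace, step

d, I = 1.0, 5000.0            # hypothetical moment arm [m] and inertia [kg*m^2]
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [d/I]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

sys = StateSpace(A, B, C, D)
t, theta = step(sys, T=np.linspace(0.0, 10.0, 500))
# For a double integrator, the unit-step response is theta(t) = (d/I) t^2 / 2.
```

The final value theta[-1] matches the analytic parabola, confirming that the \((\mathbf{A},\mathbf{B},\mathbf{C},D)\) quadruple encodes the same dynamics as Eq. (2.11).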

The next example has more complexity and shows how to use Matlab to find the solution of linear differential equations.

Cruise Control Step Response

(a) Rewrite the equation of motion from Example 2.1 in state-variable form, where the output is the car position \(x\).

(b) Use Matlab to find the response of the velocity of the car for the case in which the input jumps from being \(u = 0\) at time \(t = 0\) to a constant \(u = 750\text{ }N\) thereafter. Assume the car mass \(m\) is \(1500\text{ }kg\), and \(b = 60\text{ }N \cdot sec/m\).

Solution.

(a) Equations of motion: First, we need to express the differential equation describing the plant, Eq. (2.3), as a set of simultaneous first-order equations. To do so, we define the position and the velocity of the car as the state-variables \(x\) and \(v\), so \(\mathbf{x} = \begin{bmatrix} x & v \end{bmatrix}^{T}\). The single second-order equation, Eq. (2.3), can then be rewritten as a set of two first-order equations:

\[\begin{matrix} \overset{˙}{x} & \ = v, \\ \overset{˙}{v} & \ = - \frac{b}{m}v + \frac{1}{m}u. \end{matrix}\]

Next, we use the standard form of Eq. (7.3), \(\overset{˙}{\mathbf{x}} = \mathbf{Ax} + \mathbf{B}u\), to express these equations:

\[\begin{bmatrix} \overset{˙}{x} \\ \overset{˙}{v} \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ 0 & - b/m \end{bmatrix}\begin{bmatrix} x \\ v \end{bmatrix} + \begin{bmatrix} 0 \\ 1/m \end{bmatrix}u\text{.}\text{~}\]

The output of the system is the car position \(y = x\), which is expressed in matrix form as

\[y = \begin{bmatrix} 1 & 0 \end{bmatrix}\begin{bmatrix} x \\ v \end{bmatrix},\]

or

\[y = \mathbf{Cx}. \]

So the state-variable-form matrices defining this example are

\[\mathbf{A} = \begin{bmatrix} 0 & 1 \\ 0 & - b/m \end{bmatrix},\ \mathbf{B} = \begin{bmatrix} 0 \\ 1/m \end{bmatrix},\ \mathbf{C} = \begin{bmatrix} 1 & 0 \end{bmatrix},\ D = 0.\]

(b) Time response: The equations of motion are those given in part (a), except that now the output is \(v\). Therefore, the output matrix is

\[\mathbf{C} = \begin{bmatrix} 0 & 1 \end{bmatrix}.\]

The coefficients required are \(b/m = 0.04\) and \(1/m = 6.67 \times 10^{- 4}\). The numerical values for the matrices defining the system are thus

\[\mathbf{A} = \begin{bmatrix} 0 & 1 \\ 0 & - 0.04 \end{bmatrix},\ \mathbf{B} = \begin{bmatrix} 0 \\ 6.67 \times 10^{- 4} \end{bmatrix},\ \mathbf{C} = \begin{bmatrix} 0 & 1 \end{bmatrix},\ D = 0.\]

The step function in Matlab computes the time response of a linear system to a unit-step input. Because the system is linear, the output for this case can be multiplied by the magnitude of the input step to derive a step response of any amplitude. Equivalently, the \(\mathbf{B}\) matrix can be multiplied by the magnitude of the unit step.

The statements

A = [0 1; 0 -0.04];
B = [0; 1/1500];
C = [0 1];
D = 0;
sys = ss(A, 750*B, C, D);  % 750*B gives u = 750 N
step(sys);                 % step gives the unit-step response

compute and plot the time response for a unit step with a 750-N magnitude. The step response is shown in Fig. 7.2.

Figure 7.2

Response of the car velocity to a step in \(u\)

Motion of a Hanging Crane in State-Variable Form

Based on Example 2.8, determine the state-space equations for the motion of the hanging crane shown in Fig. 2.21. Assume the friction term can be neglected.

Solution. In order to write the equations in state-variable form (that is, as a set of simultaneous first-order differential equations), we define the angular position and velocity of the hanging crane as the state elements (that is, \(\mathbf{x} = \begin{bmatrix} x_{1} & x_{2} \end{bmatrix}^{T} = \begin{bmatrix} \theta & \overset{˙}{\theta} \end{bmatrix}^{T}\)), take the force applied to the trolley as the input \(u\), and take the output as \(\theta = \begin{bmatrix} 1 & 0 \end{bmatrix}\mathbf{x}\). Hence, \({\overset{˙}{x}}_{1} = x_{2}\). Before \({\overset{˙}{x}}_{2}\) can be defined, Eq. (2.28) must be simplified so that the acceleration term for the trolley is eliminated; thus,

\[\left( m_{t} + m_{p} \right)\left( I + m_{p}l^{2} \right)\overset{¨}{\theta} + \left( m_{t} + m_{p} \right)m_{p}gl\theta = \left( m_{p}l \right)^{2}\overset{¨}{\theta} - \left( m_{p}l \right)u \]

Re-arranging it into the standard form and expressing it in terms of \(x_{1}\) and \({\overset{˙}{x}}_{2}\), we get

\[{\overset{˙}{x}}_{2} = - \frac{\left( m_{t} + m_{p} \right)m_{p}gl}{\left( m_{t} + m_{p} \right)\left( I + m_{p}l^{2} \right) - \left( m_{p}l \right)^{2}}x_{1} - \frac{m_{p}l}{\left( m_{t} + m_{p} \right)\left( I + m_{p}l^{2} \right) - \left( m_{p}l \right)^{2}}u \]

Therefore, the standard matrices that define the state equations are

\[\begin{matrix} & \mathbf{A} = \begin{bmatrix} 0 & 1 \\ - \frac{\left( m_{t} + m_{p} \right)m_{p}gl}{\left( m_{t} + m_{p} \right)\left( I + m_{p}l^{2} \right) - \left( m_{p}l \right)^{2}} & 0 \end{bmatrix}, \\ & \mathbf{B} = \begin{bmatrix} 0 \\ \frac{- m_{p}l}{\left( m_{t} + m_{p} \right)\left( I + m_{p}l^{2} \right) - \left( m_{p}l \right)^{2}} \end{bmatrix}, \\ & \mathbf{C} = \begin{bmatrix} 1 & 0 \end{bmatrix},D = 0. \end{matrix}\]
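A hedged numerical check of the crane model: with illustrative parameter values (the masses, rod length, and inertia below are assumptions, not values from the text; a point-mass bob gives \(I = 0\)), the eigenvalues of \(\mathbf{A}\) are purely imaginary, as expected for an undamped pendulum.

```python
import numpy as np

# Hypothetical crane parameters (illustrative only)
mt, mp = 1000.0, 100.0    # trolley and payload mass [kg]
l, I, g = 4.0, 0.0, 9.81  # rod length [m]; point-mass bob => I = 0

den = (mt + mp)*(I + mp*l**2) - (mp*l)**2
A = np.array([[0.0, 1.0],
              [-(mt + mp)*mp*g*l/den, 0.0]])
eigs = np.linalg.eigvals(A)
# With no friction, the poles are s = +/- j*omega, where
# omega = sqrt((mt + mp)*mp*g*l/den) is the pendulum frequency.
```

Any damping term would move these poles into the left half-plane; with friction neglected, the model predicts a sustained oscillation of the load.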

For the loudspeaker in Fig. 2.31 and the circuit driving it in Fig. 2.32, find the state-space equations relating the input voltage \(v_{a}\) to the output cone displacement \(x\). Assume the effective circuit resistance is \(R = 0.5\ \Omega\), the inductance is \(L = 0.15\ \text{mH}\), and keep the mass \(M\) and friction coefficient \(b\) as unknowns.

Solution. Recall the two coupled equations, (2.54) and (2.58), that constitute the dynamic model for the loudspeaker:

\[\begin{matrix} & M\overset{¨}{x} + b\overset{˙}{x} = 0.43i \\ & L\frac{di}{dt} + Ri = v_{a} - 0.43\overset{˙}{x} \end{matrix}\]

Defining the state vector as \(\mathbf{x} = \begin{bmatrix} x & \overset{˙}{x} & i \end{bmatrix}^{T}\) leads to the standard matrices

\[\begin{matrix} & \mathbf{A} = \begin{bmatrix} 0 & 1 & 0 \\ 0 & - b/M & 0.43/M \\ 0 & - 2867 & - 3333 \end{bmatrix},\ \mathbf{B} = \begin{bmatrix} 0 \\ 0 \\ 6667 \end{bmatrix}, \\ & \mathbf{C} = \begin{bmatrix} 1 & 0 & 0 \end{bmatrix},\ D = 0, \end{matrix}\]

where now the input \(u \triangleq v_{a}\).

Modeling a DC Motor in State-Variable Form

Find the state-space equations for the DC motor with the equivalent electric circuit shown in Fig. 2.34(a).

Solution. Recall the equations of motion [Eqs. (2.62) and (2.63)] from Chapter 2:

\[\begin{matrix} J_{m}{\overset{¨}{\theta}}_{m} + b{\overset{˙}{\theta}}_{m} & \ = K_{t}i_{a}, \\ L_{a}\frac{di_{a}}{dt} + R_{a}i_{a} & \ = v_{a} - K_{e}{\overset{˙}{\theta}}_{m}. \end{matrix}\]

A state vector for this third-order system is \(\mathbf{x} = \begin{bmatrix} \theta_{m} & {\overset{˙}{\theta}}_{m} & i_{a} \end{bmatrix}^{T}\), which leads to the standard matrices

\[\begin{matrix} & \mathbf{A} = \begin{bmatrix} 0 & 1 & 0 \\ 0 & - \frac{b}{J_{m}} & \frac{K_{t}}{J_{m}} \\ 0 & - \frac{K_{e}}{L_{a}} & - \frac{R_{a}}{L_{a}} \end{bmatrix},\ \mathbf{B} = \begin{bmatrix} 0 \\ 0 \\ \frac{1}{L_{a}} \end{bmatrix}, \\ & \mathbf{C} = \begin{bmatrix} 1 & 0 & 0 \end{bmatrix},\ D = 0, \end{matrix}\]

where the input \(u \triangleq v_{a}\).
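A numerical sanity check of the motor model can be sketched in Python, with scipy.signal.ss2tf standing in for Matlab's ss2tf; all parameter values below are illustrative assumptions, not values from the text. Because \(\theta_{m}\) appears only through its derivatives, the model contains a free integrator, so the denominator polynomial must have a root at \(s = 0\).

```python
import numpy as np
from scipy.signal import ss2tf

# Hypothetical motor parameters (illustrative only)
Jm, bf, Kt, Ke, La, Ra = 0.01, 0.001, 0.02, 0.02, 0.5, 1.0

A = np.array([[0.0, 1.0, 0.0],
              [0.0, -bf/Jm, Kt/Jm],
              [0.0, -Ke/La, -Ra/La]])
B = np.array([[0.0], [0.0], [1.0/La]])
C = np.array([[1.0, 0.0, 0.0]])
D = np.array([[0.0]])

num, den = ss2tf(A, B, C, D)
# den is monic, and its constant term vanishes: the free integrator
# contributed by the motor shaft angle theta_m.
```

The zero constant term in den confirms the pole at the origin that makes the motor a type 1 plant from voltage to shaft angle.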

The state-variable form can be applied to a system of any order. Example 7.6 illustrates the method for a fourth-order system.

EXAMPLE 7.6

\[\begin{matrix} & \mathbf{A} = \begin{bmatrix} 0 & 1 & 0 & 0 \\ - \frac{k}{I_{1}} & - \frac{b}{I_{1}} & \frac{k}{I_{1}} & \frac{b}{I_{1}} \\ 0 & 0 & 0 & 1 \\ \frac{k}{I_{2}} & \frac{b}{I_{2}} & - \frac{k}{I_{2}} & - \frac{b}{I_{2}} \end{bmatrix},\ \mathbf{B} = \begin{bmatrix} 0 \\ \frac{1}{I_{1}} \\ 0 \\ 0 \end{bmatrix}, \\ & \mathbf{C} = \begin{bmatrix} 0 & 0 & 1 & 0 \end{bmatrix},\ D = 0. \end{matrix}\]

Difficulty arises when the differential equation contains derivatives of the input \(u\). Techniques to handle this situation will be discussed in Section 7.4.

7.3 Block Diagrams and State-Space

Perhaps the most effective way of understanding the state-variable equations is via an analog computer block-diagram representation. The structure of the representation uses integrators as the central element, which are quite suitable for a first-order, state-variable representation of dynamic equations for a system. Even though analog computers are almost extinct, analog computer implementation is still a useful concept for state-variable design and in the circuit design of analog compensation.

The analog computer was a device composed of electric components designed to simulate ODEs. The basic dynamic component of the analog computer is an integrator, constructed from an operational amplifier with a capacitor feedback and a resistor feed-forward, as shown in Fig. 2.30. Because an integrator is a device whose input is the derivative of its output (as shown in Fig. 7.3), if, in an analog-computer simulation, we identify the outputs of the integrators as the state, we will then automatically have the equations in state-variable form. Conversely, if a system is described by state-variables, we can construct an analog-computer simulation of that system by taking one integrator for each state-variable and connecting its input according to the given equation for that state-variable, as expressed in the state-variable equations. The analog-computer diagram is a picture of the state equations.

Figure 7.3

An integrator

Figure 7.4

Components of an analog computer (a summer and a potentiometer with output \(e_{0} = ke_{1}\), \(0 \leq k \leq 1\))

The components of a typical analog computer used to accomplish these functions are shown in Fig. 7.4. Notice the operational amplifier has a sign change that gives it a negative gain.

Find a state-variable description and the transfer function of the third-order system shown in Fig. 7.5 whose differential equation is

\[\dddot{y} + 7.5\overset{¨}{y} + 13\overset{˙}{y} + 6.5y = 7u. \]

Figure 7.5

Block diagram for a third-order system

Figure 7.6

Block diagram of a system to solve \(\dddot{y} + 7.5\overset{¨}{y} + 13\overset{˙}{y} + 6.5y = 7u\), using only integrators as dynamic elements: (a) intermediate diagram; (b) final diagram

Solution. We solve for the highest derivative term in the ODE to obtain

\[\dddot{y} = - 7.5\overset{¨}{y} - 13\overset{˙}{y} - 6.5y + 7u. \]

Now we assume we have this highest derivative, and note that the lower-order terms can be obtained by integration, as shown in Fig. 7.6(a). Finally, we apply Eq. (7.8) to complete the realization shown in Fig. 7.6(b). To obtain the state description, we simply define the state-variables as the outputs of the integrators, \(x_{1} = \overset{¨}{y}\), \(x_{2} = \overset{˙}{y}\), \(x_{3} = y\), to obtain

\[\begin{matrix} & {\overset{˙}{x}}_{1} = - 7.5x_{1} - 13x_{2} - 6.5x_{3} + 7u, \\ & {\overset{˙}{x}}_{2} = x_{1}, \\ & {\overset{˙}{x}}_{3} = x_{2}, \end{matrix}\]

which provides the state-variable description

\[\mathbf{A} = \begin{bmatrix} - 7.5 & - 13 & - 6.5 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix},\ \mathbf{B} = \begin{bmatrix} 7 \\ 0 \\ 0 \end{bmatrix},\ \mathbf{C} = \begin{bmatrix} 0 & 0 & 1 \end{bmatrix},\ D = 0\]

The Matlab statement

[num, den] = ss2tf(A, B, C, D);

will yield the transfer function

\[\frac{Y(s)}{U(s)} = \frac{7}{s^{3} + 7.5s^{2} + 13s + 6.5} \]

If the transfer function were desired in factor form, it could be obtained by transforming either the ss or tf description. Therefore, either of the Matlab statements

[z, p, k] = ss2zp(A, B, C, D)

and

[z, p, k] = tf2zp(num, den)

would result in

\[z = \lbrack\ \rbrack,\quad p = \begin{bmatrix} - 5.26 & - 1.23 & - 1 \end{bmatrix}^{T},\quad k = 7.\]

This means the transfer function could also be written in factored form as

\[\frac{Y(s)}{U(s)} = G(s) = \frac{7}{(s + 5.26)(s + 1.23)(s + 1)} \]
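The same conversion and factoring can be sketched outside Matlab. The Python fragment below (scipy.signal.ss2tf and numpy.roots in place of the Matlab commands above) rebuilds the transfer function from the state matrices of this example and recovers the factored-form poles:

```python
import numpy as np
from scipy.signal import ss2tf

# State matrices from the third-order example above
A = np.array([[-7.5, -13.0, -6.5],
              [1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
B = np.array([[7.0], [0.0], [0.0]])
C = np.array([[0.0, 0.0, 1.0]])
D = np.array([[0.0]])

num, den = ss2tf(A, B, C, D)   # den = [1, 7.5, 13, 6.5], num ends in 7
poles = np.roots(den)          # approximately -5.27, -1.23, -1.00
```

Note that the numerator reduces to the constant 7, consistent with a system that has no finite zeros.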

7.4 Analysis of the State Equations

In the previous section, we introduced and illustrated the process of selecting a state and organizing the equations in state form. In this section, we review that process and describe how to analyze the dynamic response using the state description. In Section 7.4.1, we begin by relating the state description to block diagrams and the Laplace transform description, and we consider the fact that, for a given system, the choice of state is not unique. We show how to use this nonuniqueness to select among several canonical forms the one that will help solve the particular problem at hand; the control canonical form, for example, makes the feedback gains of the state easy to design. After studying the structure of state equations in Section 7.4.2, we consider the dynamic response and show how transfer-function poles and zeros are related to the matrices of the state descriptions. To illustrate the results with hand calculations, we offer a simple example that represents the model of a thermal system. For more realistic examples, a computer-aided control systems design software package such as Matlab is especially helpful; relevant Matlab commands will be described from time to time.

7.4.1 Block Diagrams and Canonical Forms

We begin with a thermal system that has a simple transfer function

\[G(s) = \frac{b(s)}{a(s)} = \frac{s + 2}{s^{2} + 7s + 12} = \frac{2}{s + 4} + \frac{- 1}{s + 3}. \]

The roots of the numerator polynomial \(b(s)\) are the zeros of the transfer function, and the roots of the denominator polynomial \(a(s)\) are the poles. Notice we have represented the transfer function in two forms, as a ratio of polynomials and as the result of a partial-fraction expansion. In order to develop a state description of this system (and this is a generally useful technique), we construct a block diagram that corresponds to the transfer function (and the differential equations) using only isolated integrators as the dynamic elements. We present several special forms which we call canonical forms. One such block diagram, structured in control canonical form, is illustrated in Fig. 7.7. The central feature of this structure is that each state-variable feeds back to the control input, \(u\), through the coefficients of the system matrix \(\mathbf{A}_{c}\).

Once we have drawn the block diagram in this form, we can identify the state description matrices simply by inspection; this is possible because when the output of an integrator is a state-variable, the input of that integrator is the derivative of that variable. For example, in Fig. 7.7, the equation for the first state-variable is

\[{\overset{˙}{x}}_{1} = - 7x_{1} - 12x_{2} + u\text{.}\text{~} \]

Continuing in this fashion, we get

\[\begin{matrix} {\overset{˙}{x}}_{2} & \ = x_{1} \\ y & \ = x_{1} + 2x_{2} \end{matrix}\]

Figure 7.7

A block diagram

representing Eq. (7.9)

in control canonical

form

Matlab tf2ss

Control canonical form

These three equations can then be rewritten in the matrix form

\[\begin{matrix} \overset{˙}{\mathbf{x}} & \ = \mathbf{A}_{c}\mathbf{x} + \mathbf{B}_{c}u, \\ y & \ = \mathbf{C}_{c}\mathbf{x}, \end{matrix}\]

where

\[\begin{matrix} & \mathbf{A}_{c} = \begin{bmatrix} - 7 & - 12 \\ 1 & 0 \end{bmatrix},\ \mathbf{B}_{c} = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \\ & \mathbf{C}_{c} = \begin{bmatrix} 1 & 2 \end{bmatrix},\ D_{c} = 0, \end{matrix}\]

and where the subscript \(c\) refers to control canonical form.

Two significant facts about this form are that the coefficients 1 and 2 of the numerator polynomial \(b(s)\) appear in the \(\mathbf{C}_{c}\) matrix, and (except for the leading term) the coefficients 7 and 12 of the denominator polynomial \(a(s)\) appear (with opposite signs) as the first row of the \(\mathbf{A}_{c}\) matrix. Armed with this knowledge, we can thus write down by inspection the state matrices in control canonical form for any system whose transfer function is known as a ratio of numerator and denominator polynomials. If \(b(s) = b_{1}s^{n - 1} + b_{2}s^{n - 2} + \cdots + b_{n}\) and \(a(s) = s^{n} + a_{1}s^{n - 1} + a_{2}s^{n - 2} + \cdots + a_{n}\), the Matlab steps are

num = b = [b1 b2 ... bn];

den = a = [1 a1 a2 ... an];

[Ac, Bc, Cc, Dc] = tf2ss(num, den).

We read tf2ss as "transfer function to state-space." The result will be

\[\begin{matrix} & \mathbf{A}_{c} = \begin{bmatrix} - a_{1} & - a_{2} & \cdots & \cdots & - a_{n} \\ 1 & 0 & \cdots & \cdots & 0 \\ 0 & 1 & 0 & \cdots & 0 \\ \vdots & & \ddots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 & 0 \end{bmatrix},\ \mathbf{B}_{c} = \begin{bmatrix} 1 \\ 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix}, \\ & \mathbf{C}_{c} = \begin{bmatrix} b_{1} & b_{2} & \cdots & \cdots & b_{n} \end{bmatrix},\ D_{c} = 0. \end{matrix}\]
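As an illustration, scipy.signal.tf2ss performs the same transfer-function-to-state-space conversion in Python and returns the control canonical form. Applied to the thermal-system transfer function \(G(s) = (s + 2)/(s^{2} + 7s + 12)\) used above:

```python
import numpy as np
from scipy.signal import tf2ss

# G(s) = (s + 2)/(s^2 + 7s + 12)
num = [1, 2]
den = [1, 7, 12]
Ac, Bc, Cc, Dc = tf2ss(num, den)
# Ac = [[-7, -12], [1, 0]], Bc = [[1], [0]], Cc = [[1, 2]], Dc = [[0]]
```

The denominator coefficients appear, negated, in the first row of Ac, and the numerator coefficients appear in Cc, exactly as described in the text.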

The block diagram of Fig. 7.7 and the corresponding matrices of Eq. (7.12) are not the only way to represent the transfer function \(G(s)\). A block diagram corresponding to the partial-fraction expansion of \(G(s)\)

Figure 7.8

Block diagram for Eq. (7.12) in modal canonical form

Modal form

is given in Fig. 7.8. Using the same technique as before, with the statevariables marked as shown in the figure, we can determine the matrices directly from the block diagram as being

\[\begin{matrix} \overset{˙}{\mathbf{z}} & \ = \mathbf{A}_{m}\mathbf{z} + \mathbf{B}_{m}u \\ y & \ = \mathbf{C}_{m}\mathbf{z} + D_{m}u \end{matrix}\]

where

\[\begin{matrix} \mathbf{A}_{m} = \begin{bmatrix} - 4 & 0 \\ 0 & - 3 \end{bmatrix}, & \mathbf{B}_{m} = \begin{bmatrix} 1 \\ 1 \end{bmatrix}, \\ \mathbf{C}_{m} = \begin{bmatrix} 2 & - 1 \end{bmatrix}, & D_{m} = 0, \end{matrix}\]

and the subscript \(m\) refers to modal canonical form. The name for this form derives from the fact that the poles of the system transfer function are sometimes called the normal modes of the system. The important fact about the matrices in this form is that the system poles (in this case -4 and -3 ) appear as the elements along the diagonal of the \(\mathbf{A}_{m}\) matrix, and the residues, the numerator terms in the partial-fraction expansion (in this example 2 and -1 ), appear in the \(\mathbf{C}_{m}\) matrix.

Expressing a system in modal canonical form can be complicated by two factors: (1) the elements of the matrices will be complex when the poles of the system are complex; and (2) the system matrix cannot be diagonal when the partial-fraction expansion has repeated poles. To solve the first problem, we express the complex poles of the partial-fraction expansion as conjugate pairs in second-order terms so that all the elements remain real. The corresponding \(\mathbf{A}_{m}\) matrix will then have \(2 \times 2\) blocks along the main diagonal representing the local coupling between the variables of the complex-pole set. To handle the second difficulty, we also couple the corresponding state-variables so the poles appear along the diagonal with off-diagonal terms indicating the coupling. A simple example of this latter case is the satellite system from Example 7.1, whose transfer function is \(G(s) = 1/s^{2}\). The system matrices for this transfer function in a modal form are

\[\mathbf{A}_{m} = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix},\ \mathbf{B}_{m} = \begin{bmatrix} 0 \\ 1 \end{bmatrix},\ \mathbf{C}_{m} = \begin{bmatrix} 1 & 0 \end{bmatrix},\ D_{m} = 0\]

EXAMPLE 7.8

Figure 7.9

Block diagram for a fourth-order system in modal canonical form with shading indicating portion in control canonical form

State Equations in Modal Canonical Form

A "quarter car model" [see Example 2.2] with one resonant mode has a transfer function given by

\[G(s) = \frac{5s + 10}{s^{2}\left( s^{2} + 5s + 10 \right)} = \frac{1}{s^{2}} - \frac{1}{s^{2} + 5s + 10} \]

Find state matrices in modal form describing this system.

Solution. The transfer function has been given in real partial-fraction form. To get state-description matrices, we draw a corresponding block diagram with integrators only, assign the state, and write down the corresponding matrices. This process is not unique, so there are several acceptable solutions to the problem as stated, but they will differ in only trivial ways. A block diagram with a satisfactory assignment of variables is given in Fig. 7.9.

Notice the second-order term to represent the complex poles has been realized in control canonical form. There are a number of other possibilities that can be used as alternatives for this part. This particular form allows us to write down the system matrices by inspection:

\[\begin{matrix} & \mathbf{A} = \begin{bmatrix} 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & - 5 & - 10 \\ 0 & 0 & 1 & 0 \end{bmatrix},\ \mathbf{B} = \begin{bmatrix} 1 \\ 0 \\ 1 \\ 0 \end{bmatrix}, \\ & \mathbf{C} = \begin{bmatrix} 0 & 1 & 0 & - 1 \end{bmatrix},\ D = 0. \end{matrix}\]
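A sketch of a numerical check (Python with scipy.signal standing in for Matlab): converting these modal-form matrices back to a transfer function should recover the quarter-car transfer function \((5s + 10)/\lbrack s^{2}(s^{2} + 5s + 10)\rbrack\) given above.

```python
import numpy as np
from scipy.signal import ss2tf

# Modal-form matrices from the quarter-car example
A = np.array([[0.0, 0.0, 0.0, 0.0],
              [1.0, 0.0, 0.0, 0.0],
              [0.0, 0.0, -5.0, -10.0],
              [0.0, 0.0, 1.0, 0.0]])
B = np.array([[1.0], [0.0], [1.0], [0.0]])
C = np.array([[0.0, 1.0, 0.0, -1.0]])
D = np.array([[0.0]])

num, den = ss2tf(A, B, C, D)
# den = [1, 5, 10, 0, 0]  ->  s^2 (s^2 + 5s + 10)
# num ends in [5, 10]     ->  5s + 10
```

The exact cancellation of the \(s^{4}\) terms in the numerator reflects the subtraction \(1/s^{2} - 1/(s^{2} + 5s + 10)\) realized by the \(\mathbf{C}\) matrix.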

Thus far, we have seen that we can obtain the state description from a transfer function in either control or modal form. Because these matrices represent the same dynamic system, we might ask what the relationship is between the matrices in the two forms (and their corresponding state-variables). More generally, suppose we have a set of state equations that describe some physical system in no particular form, and we are given a problem for which the control canonical form would

State description and output equation

Transformation of state

be helpful. (We will see such a problem in Section 7.5.) Is it possible to calculate the desired canonical form without obtaining the transfer function first? Answering these questions requires a look at the topic of state transformations.

Consider a system described by the state equations

\[\begin{matrix} \overset{˙}{\mathbf{x}} & \ = \mathbf{Ax} + \mathbf{B}u, \\ y & \ = \mathbf{Cx} + Du \end{matrix}\]

As we have seen, this is not a unique description of the dynamic system. We consider a change of state from \(\mathbf{x}\) to a new state \(\mathbf{z}\) that is a linear transformation of \(\mathbf{x}\). For a nonsingular transformation matrix \(\mathbf{T}\), we let

\[\mathbf{x} = \mathbf{Tz} \]

By substituting Eq. (7.19) into Eq. (7.18a), we have the dynamic equations in terms of the new state \(\mathbf{z}\) :

\[\begin{matrix} \overset{˙}{\mathbf{x}} & \ = \mathbf{T}\overset{˙}{\mathbf{z}} = \mathbf{ATz} + \mathbf{B}u \\ \overset{˙}{\mathbf{z}} & \ = \mathbf{T}^{- 1}\mathbf{ATz} + \mathbf{T}^{- 1}\mathbf{B}u \\ \overset{˙}{\mathbf{z}} & \ = \overline{\mathbf{A}}\mathbf{z} + \overline{\mathbf{B}}u \end{matrix}\]

In Eq. (7.20c),

\[\begin{matrix} \overline{\mathbf{A}} & \ = \mathbf{T}^{- 1}\mathbf{AT} \\ \overline{\mathbf{B}} & \ = \mathbf{T}^{- 1}\mathbf{B} \end{matrix}\]

We then substitute Eq. (7.19) into Eq. (7.18b) to get the output in terms of the new state \(\mathbf{z}\) :

\[\begin{matrix} y & \ = \mathbf{CTz} + Du \\ & \ = \overline{\mathbf{C}}\mathbf{z} + \bar{D}u \end{matrix}\]

Here

\[\overline{\mathbf{C}} = \mathbf{CT},\ \bar{D} = D \]

Given the general matrices \(\mathbf{A},\mathbf{B}\), and \(\mathbf{C}\) and scalar \(D\), we would like to find the transformation matrix \(\mathbf{T}\) such that \(\overline{\mathbf{A}},\overline{\mathbf{B}},\overline{\mathbf{C}}\), and \(\bar{D}\) are in a particular form, for example, control canonical form. To find such a \(\mathbf{T}\), we assume that \(\overline{\mathbf{A}},\overline{\mathbf{B}},\overline{\mathbf{C}}\), and \(\bar{D}\) are already in the required form, further assume the transformation \(\mathbf{T}\) has a general form, and match terms. Here we will work out the third-order case; how to extend the analysis to the general case should be clear from the development. It goes like this.

First, we rewrite Eq. (7.21a) as

\[\overline{\mathbf{A}}\mathbf{T}^{- 1} = \mathbf{T}^{- 1}\mathbf{A} \]

If \(\overline{\mathbf{A}}\) is in control canonical form, and we describe \(\mathbf{T}^{- 1}\) as a matrix with rows \(\mathbf{t}_{1},\mathbf{t}_{2}\), and \(\mathbf{t}_{3}\), then

\[\begin{bmatrix} - a_{1} & - a_{2} & - a_{3} \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}\begin{bmatrix} \mathbf{t}_{1} \\ \mathbf{t}_{2} \\ \mathbf{t}_{3} \end{bmatrix} = \begin{bmatrix} \mathbf{t}_{1}\mathbf{A} \\ \mathbf{t}_{2}\mathbf{A} \\ \mathbf{t}_{3}\mathbf{A} \end{bmatrix}\]

Working out the third and second rows gives the matrix equations

\[\begin{matrix} \mathbf{t}_{2} & \ = \mathbf{t}_{3}\mathbf{A} \\ \mathbf{t}_{1} & \ = \mathbf{t}_{2}\mathbf{A} = \mathbf{t}_{3}\mathbf{A}^{2} \end{matrix}\]

From Eq. (7.21b), assuming \(\overline{\mathbf{B}}\) is also in control canonical form, we have the relation

\[\mathbf{T}^{- 1}\mathbf{B} = \overline{\mathbf{B}} \]

or

\[\begin{bmatrix} \mathbf{t}_{1}\mathbf{B} \\ \mathbf{t}_{2}\mathbf{B} \\ \mathbf{t}_{3}\mathbf{B} \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}\]

Combining Eqs. (7.24) and (7.25), we get

\[\begin{matrix} & \mathbf{t}_{3}\mathbf{B} = 0 \\ & \mathbf{t}_{2}\mathbf{B} = \mathbf{t}_{3}\mathbf{AB} = 0 \\ & \mathbf{t}_{1}\mathbf{B} = \mathbf{t}_{3}\mathbf{A}^{2}\mathbf{B} = 1 \end{matrix}\]

These equations can, in turn, be written in matrix form as

\[\mathbf{t}_{3}\begin{bmatrix} \mathbf{B} & \mathbf{AB} & \mathbf{A}^{2}\mathbf{B} \end{bmatrix} = \begin{bmatrix} 0 & 0 & 1 \end{bmatrix},\]

or

\[\mathbf{t}_{3} = \begin{bmatrix} 0 & 0 & 1 \end{bmatrix}\mathcal{C}^{- 1},\]

Controllability matrix transformation to control canonical form

Controllable systems

where the controllability matrix \(\mathcal{C} = \begin{bmatrix} \mathbf{B} & \mathbf{AB} & \mathbf{A}^{2}\mathbf{B} \end{bmatrix}\). Having \(\mathbf{t}_{3}\), we can now go back to Eq. (7.24) and construct all the rows of \(\mathbf{T}^{- 1}\).

To sum up, the recipe for converting a general state description of dimension \(n\) to control canonical form is as follows:

  • From \(\mathbf{A}\) and \(\mathbf{B}\), form the controllability matrix

\[\mathcal{C} = \begin{bmatrix} \mathbf{B} & \mathbf{AB} & \cdots & \mathbf{A}^{n - 1}\mathbf{B} \end{bmatrix}.\]

  • Compute the last row of the inverse of the transformation matrix as

\[\mathbf{t}_{n} = \begin{bmatrix} 0 & 0 & \cdots & 1 \end{bmatrix}\mathcal{C}^{- 1}\]

  • Construct the entire transformation matrix as

\[\mathbf{T}^{- 1} = \begin{bmatrix} \mathbf{t}_{n}\mathbf{A}^{n - 1} \\ \mathbf{t}_{n}\mathbf{A}^{n - 2} \\ \vdots \\ \mathbf{t}_{n} \end{bmatrix}\]

  • Compute the new matrices from \(\mathbf{T}^{- 1}\), using Eqs. (7.21a), (7.21b), and (7.22).
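As an illustrative sketch (in Python/NumPy rather than the Matlab used elsewhere in the text), the recipe can be applied to the modal-form matrices of the thermal system, which should recover the control canonical form of Eq. (7.12):

```python
import numpy as np

def to_control_canonical(A, B):
    """Transform (A, B) to control canonical form via the controllability
    matrix, following the recipe above (valid when ctrb is nonsingular)."""
    n = A.shape[0]
    # Controllability matrix C = [B  AB ... A^(n-1)B].
    ctrb = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
    # Last row of T^-1: t_n = [0 ... 0 1] C^-1, found by solving t_n C = e_n.
    t_n = np.linalg.solve(ctrb.T, np.eye(n)[:, -1])
    # Stack t_n A^(n-1), ..., t_n A, t_n to form T^-1.
    T_inv = np.vstack([t_n @ np.linalg.matrix_power(A, n - 1 - k) for k in range(n)])
    T = np.linalg.inv(T_inv)
    return T_inv @ A @ T, T_inv @ B, T_inv

# Modal-form matrices of the thermal system (poles at -3 and -4).
Am = np.array([[-3.0, 0.0], [0.0, -4.0]])
Bm = np.array([[1.0], [1.0]])
Ac, Bc, T_inv = to_control_canonical(Am, Bm)
# Ac comes out as [[-7, -12], [1, 0]] and Bc as [[1], [0]].
```

The function name and the choice of example matrices are ours; the steps map one-to-one onto the four bullets above.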

When the controllability matrix \(\mathcal{C}\) is nonsingular, the corresponding \(\mathbf{A}\) and \(\mathbf{B}\) matrices are said to be controllable. This is a technical property that usually holds for physical systems and will be important when we consider feedback of the state in Section 7.5. We will also consider a few physical illustrations of loss of controllability at that time.

Because computing the transformation given by Eq. (7.29) is numerically difficult to do accurately, it is almost never done. The reason for developing this transformation in some detail is to show how such changes of state could be done in theory and to make the following important observation:

One can always transform a given state description to control canonical form if (and only if) the controllability matrix \(\mathcal{C}\) is nonsingular.

If we need to test for controllability in a real case with numbers, we use a numerically stable method that depends on converting the system matrices to "staircase" form rather than on trying to compute the controllability matrix. Problem 7.30 at the end of the chapter will call for consideration of this method.

An important question regarding controllability follows directly from our discussion so far: What is the effect of a state transformation on controllability? We can show the result by using Eqs. (7.27), (7.21a), and (7.21b). The controllability matrix of the system \((\mathbf{A},\mathbf{B})\) is

\[\mathcal{C}_{\mathbf{x}} = \begin{bmatrix} \mathbf{B} & \mathbf{AB} & \cdots & \mathbf{A}^{n - 1}\mathbf{B} \end{bmatrix}.\]

After the state transformation, the new description matrices are given by Eqs. (7.21a) and (7.21b), and the controllability matrix changes to

\[\begin{matrix} \mathcal{C}_{\mathbf{Z}} & \ = \begin{bmatrix} \overline{\mathbf{B}} & \overline{\mathbf{A}}\overline{\mathbf{B}} & \cdots & {\overline{\mathbf{A}}}^{n - 1}\overline{\mathbf{B}} \end{bmatrix} \\ & \ = \begin{bmatrix} \mathbf{T}^{- 1}\mathbf{B} & \mathbf{T}^{- 1}\mathbf{AT}\mathbf{T}^{- 1}\mathbf{B} & \cdots & \mathbf{T}^{- 1}\mathbf{A}^{n - 1}\mathbf{TT}^{- 1}\mathbf{B} \end{bmatrix} \\ & \ = \mathbf{T}^{- 1}\mathcal{C}_{\mathbf{x}}. \end{matrix}\]

Thus, we see that \(\mathcal{C}_{\mathbf{Z}}\) is nonsingular if and only if \(\mathcal{C}_{\mathbf{x}}\) is nonsingular, yielding the following observation:

A change of state by a nonsingular linear transformation does not change controllability.
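This invariance can be spot-checked numerically; the transformation matrix below is an arbitrary nonsingular choice for illustration (Python/NumPy sketch):

```python
import numpy as np

# Thermal system in control canonical form (Eq. 7.12).
A = np.array([[-7.0, -12.0], [1.0, 0.0]])
B = np.array([[1.0], [0.0]])
Cx = np.hstack([B, A @ B])              # controllability matrix of (A, B)

# An arbitrary nonsingular change of state x = Tz, chosen for illustration.
T = np.array([[1.0, 2.0], [3.0, 5.0]])  # det = -1, so T is nonsingular
Ti = np.linalg.inv(T)
Abar, Bbar = Ti @ A @ T, Ti @ B
Cz = np.hstack([Bbar, Abar @ Bbar])     # controllability matrix of (Abar, Bbar)
# Cz equals T^-1 Cx, so det(Cz) = det(T^-1) det(Cx): nonzero exactly
# when det(Cx) is nonzero.
```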

We return once again to the transfer function of Eq. (7.9), this time to represent it with the block diagram having the structure known as observer canonical form (Fig. 7.10). The corresponding matrices for this form are

\[\begin{matrix} \mathbf{A}_{o} = \begin{bmatrix} - 7 & 1 \\ - 12 & 0 \end{bmatrix}, & \mathbf{B}_{o} = \begin{bmatrix} 1 \\ 2 \end{bmatrix}, \\ \mathbf{C}_{o} = \begin{bmatrix} 1 & 0 \end{bmatrix}, & D_{o} = 0. \end{matrix}\]

The significant fact about this canonical form is that the output feeds back to each one of the state-variables through the coefficients of the system matrix \(\mathbf{A}_{o}\).

Figure 7.10

Observer canonical form for the second-order thermal system

Let us now consider what happens to the controllability of this system as the zero at -2 is varied. For this purpose, we replace the second element 2 of \(\mathbf{B}_{o}\) with the variable zero location \(- z_{o}\) and form the controllability matrix:

\[\begin{matrix} \mathcal{C}_{\mathbf{x}} & \ = \begin{bmatrix} \mathbf{B}_{o} & \mathbf{A}_{o}\mathbf{B}_{o} \end{bmatrix} \\ & \ = \begin{bmatrix} 1 & - 7 - z_{o} \\ - z_{o} & - 12 \end{bmatrix}. \end{matrix}\]

The determinant of this matrix is a function of \(z_{o}\) :

\[\begin{matrix} det\left( \mathcal{C}_{\mathbf{x}} \right) & \ = - 12 + z_{o}\left( - 7 - z_{o} \right) \\ & \ = - \left( z_{o}^{2} + 7z_{o} + 12 \right) \end{matrix}\]

This polynomial is zero for \(z_{o} = - 3\) or -4 , implying that controllability is lost for these values. What does this mean? In terms of the parameter \(z_{o}\), the transfer function is

\[G(s) = \frac{s - z_{o}}{(s + 3)(s + 4)} \]

If \(z_{o} = - 3\) or -4 , there is a pole-zero cancellation and the transfer function reduces from a second-order system to a first-order one. When \(z_{o} = - 3\), for example, the mode at -3 is decoupled from the input and control of this mode is lost.
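A numerical sketch of this loss of controllability (Python/NumPy; the function name is ours):

```python
import numpy as np

def ctrb_det(zo):
    """det of the controllability matrix for the observer-form realization
    with the zero at s = zo, i.e., Bo = [1, -zo]^T."""
    Ao = np.array([[-7.0, 1.0], [-12.0, 0.0]])
    Bo = np.array([[1.0], [-zo]])
    return np.linalg.det(np.hstack([Bo, Ao @ Bo]))

d_cancel = ctrb_det(-3.0)  # zero cancels the pole at -3: determinant is 0
d_ok = ctrb_det(-2.0)      # no cancellation: determinant is -2, nonsingular
```

The determinant traces out \(-(z_o^2 + 7z_o + 12)\) exactly as derived above, vanishing only at the pole locations \(-3\) and \(-4\).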

Notice we have taken the transfer function given by Eq. (7.9) and given it two realizations, one in control canonical form, and one in observer canonical form. The control form is always controllable for any value of the zero, while the observer form loses controllability if the zero cancels either of the poles. Thus, these two forms may represent the same transfer function, but it may not be possible to transform the state of one to the state of the other (in this case, from observer to control canonical form). Although a transformation of state cannot affect controllability, the particular state selected from a transfer function can:

Controllability is a function of the state of the system and cannot be decided from a transfer function.

Transformation to modal form

Eigenvectors

Eigenvalues

Matlab eig
To discuss controllability more at this point would take us too far afield. The closely related property of observability and the observer canonical form will be discussed in Section 7.7.1. A more detailed discussion of these properties of dynamic systems is given in Appendix WC available online www.pearsonglobaleditions.com, for those who would like to learn more.

We return now to the modal form for the equations, given by Eqs. (7.14a) and (7.14b) for the example transfer function. As mentioned before, it is not always possible to find a modal form for transfer functions that have repeated poles, so we assume our system has only distinct poles. Furthermore, we assume the general state equations given by Eqs. (7.18a) and (7.18b) apply. We want to find a transformation matrix \(\mathbf{T}\) defined by Eq. (7.19) such that the transformed Eqs. (7.21a) and (7.22) will be in modal form. In this case, we assume the \(\overline{\mathbf{A}}\) matrix is diagonal, and \(\mathbf{T}\) is composed of the columns \(\mathbf{t}_{1},\mathbf{t}_{2}\), and \(\mathbf{t}_{3}\). With this assumption, the state transformation Eq. (7.21a) becomes

\[\begin{matrix} \mathbf{T}\overline{\mathbf{A}} = \mathbf{AT} \\ \begin{bmatrix} \mathbf{t}_{1} & \mathbf{t}_{2} & \mathbf{t}_{3} \end{bmatrix}\begin{bmatrix} p_{1} & 0 & 0 \\ 0 & p_{2} & 0 \\ 0 & 0 & p_{3} \end{bmatrix} = \mathbf{A}\begin{bmatrix} \mathbf{t}_{1} & \mathbf{t}_{2} & \mathbf{t}_{3} \end{bmatrix}. \end{matrix}\]

Equation (7.34) is equivalent to the three vector-matrix equations

\[p_{i}\mathbf{t}_{i} = \mathbf{A}\mathbf{t}_{i},\ i = 1,2,3. \]

In matrix algebra, Eq. (7.35) is a famous equation, whose solution is known as the eigenvector/eigenvalue problem. Recall that \(\mathbf{t}_{i}\) is a vector, \(\mathbf{A}\) is a matrix, and \(p_{i}\) is a scalar. The vector \(\mathbf{t}_{i}\) is called an eigenvector of \(\mathbf{A}\), and \(p_{i}\) is called the corresponding eigenvalue. Because we saw earlier that the modal form is equivalent to a partial-fraction-expansion representation with the system poles along the diagonal of the state matrix, it should be clear that these eigenvalues are precisely the poles of our system. The transformation matrix that will convert the state description matrices to modal form has as its columns the eigenvectors of \(\mathbf{A}\), as shown in Eq. (7.34) for the third-order case. As it happens, there are robust, reliable computer algorithms to compute eigenvalues and the eigenvectors of quite large systems using the QR algorithm. In Matlab, the command \(p = eig(A)\) is the way to compute the poles if the system equations are in state form.
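A Python/NumPy counterpart of this computation for the thermal system of Eq. (7.12), where the eigenvalues come out as the poles \(-3\) and \(-4\):

```python
import numpy as np

# Counterpart of Matlab's [V, P] = eig(A) for the thermal system matrix.
Ac = np.array([[-7.0, -12.0], [1.0, 0.0]])
p, V = np.linalg.eig(Ac)       # eigenvalues (the poles) and eigenvectors

# With T = V (eigenvectors as columns), T^-1 Ac T is diagonal no matter
# how each eigenvector happens to be scaled.
Am = np.linalg.inv(V) @ Ac @ V
```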

Notice also that Eq. (7.35) is homogeneous in that, if \(\mathbf{t}_{i}\) is an eigenvector, so is \(\alpha\mathbf{t}_{i}\), for any scalar \(\alpha\). In most cases, the scale factor is selected so the length (square root of the sum of squares of the magnitudes of the elements) is unity. Matlab will perform this operation. Another option is to select the scale factors so that the input matrix \(\overline{\mathbf{B}}\) is composed of all 1's. The latter choice is suggested by a partial-fraction expansion with each part realized in control canonical form. If the system is real, then each element of \(\mathbf{A}\) is real, and if \(p = \sigma + j\omega\) is a pole, so is the conjugate, \(p^{*} = \sigma - j\omega\). For these eigenvalues, the eigenvectors

are also complex and conjugate. It is possible to compose the transformation matrix using the real and complex parts of the eigenvectors separately, so the modal form is real but has \(2 \times 2\) blocks for each pair of complex poles. Later, we will see the result of the Matlab function that does this, but first, let us look at the simple real-poles case.

EXAMPLE 7.9 Transformation of Thermal System from Control to Modal Form

Find the matrix to transform the control form matrices in Eq. (7.12) into the modal form of Eq. (7.14).

Solution. According to Eqs. (7.34) and (7.35), we need first to find the eigenvectors and eigenvalues of the \(\mathbf{A}_{c}\) matrix. We take the eigenvectors to be

\[\begin{bmatrix} t_{11} \\ t_{21} \end{bmatrix}\text{~}\text{and}\text{~}\begin{bmatrix} t_{12} \\ t_{22} \end{bmatrix}\]

The equations using the eigenvector on the left are

\[\begin{matrix} \begin{bmatrix} - 7 & - 12 \\ 1 & 0 \end{bmatrix}\begin{bmatrix} t_{11} \\ t_{21} \end{bmatrix} & \ = p\begin{bmatrix} t_{11} \\ t_{21} \end{bmatrix}, \\ - 7t_{11} - 12t_{21} & \ = pt_{11}, \\ t_{11} & \ = pt_{21}. \end{matrix}\]

Substituting Eq. (7.36c) into Eq. (7.36b) results in

\[\begin{matrix} - 7pt_{21} - 12t_{21} & \ = p^{2}t_{21}, \\ p^{2}t_{21} + 7pt_{21} + 12t_{21} & \ = 0, \\ p^{2} + 7p + 12 & \ = 0, \\ p & \ = - 3, - 4. \end{matrix}\]

We have found (again!) that the eigenvalues (poles) are -3 and -4 ; furthermore, Eq. (7.36c) tells us that the two eigenvectors are

\[\begin{bmatrix} - 4t_{21} \\ t_{21} \end{bmatrix}\text{~}\text{and}\text{~}\begin{bmatrix} - 3t_{22} \\ t_{22} \end{bmatrix}\]

where \(t_{21}\) and \(t_{22}\) are arbitrary nonzero scale factors. We want to select the two scale factors such that both elements of \(\mathbf{B}_{m}\) in Eq. (7.14a) are unity. The equation for \(\mathbf{B}_{m}\) in terms of \(\mathbf{B}_{c}\) is \(\mathbf{TB}_{m} = \mathbf{B}_{c}\), and its solution is \(t_{21} = - 1\) and \(t_{22} = 1\). Therefore, the transformation matrix and its inverse are

\[\mathbf{T} = \begin{bmatrix} 4 & - 3 \\ - 1 & 1 \end{bmatrix},\ \mathbf{T}^{- 1} = \begin{bmatrix} 1 & 3 \\ 1 & 4 \end{bmatrix}\]

Elementary matrix multiplication shows that, using \(\mathbf{T}\) as defined by Eq. (7.38), the matrices of Eqs. (7.12) and (7.14) are related as follows:

\[\begin{matrix} \mathbf{A}_{m} = \mathbf{T}^{- 1}\mathbf{A}_{c}\mathbf{T}, & \mathbf{B}_{m} = \mathbf{T}^{- 1}\mathbf{B}_{c}, \\ \mathbf{C}_{m} = \mathbf{C}_{c}\mathbf{T}, & D_{m} = D_{c}. \end{matrix}\]

These computations can be carried out by using the following Matlab statements:

T = [4 -3; -1 1];
Am = inv(T)*Ac*T;
Bm = inv(T)*Bc;
Cm = Cc*T;
Dm = Dc;
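An equivalent check in Python/NumPy (an illustrative translation, not part of the original text):

```python
import numpy as np

Ac = np.array([[-7.0, -12.0], [1.0, 0.0]])
Bc = np.array([[1.0], [0.0]])
Cc = np.array([[1.0, 2.0]])
T = np.array([[4.0, -3.0], [-1.0, 1.0]])
Ti = np.linalg.inv(T)          # equals [[1, 3], [1, 4]], as in Eq. (7.38)

Am = Ti @ Ac @ T               # diagonal, with -4 and -3 on the diagonal
Bm = Ti @ Bc                   # both elements scaled to 1, as intended
Cm = Cc @ T
```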

The next example has four state-variables and, in state-variable form, is too complicated for hand calculations. However, it is a good example for illustrating the use of computer software designed for the purpose. The model we will use is based on a physical system after amplitude and time scaling have been done.

EXAMPLE 7.10 Using Matlab to Find Poles and Zeros of the Piper Dakota Airplane

The state space representation of the transfer function between the elevator input and pitch attitude for the Piper Dakota, Eq. (5.78), is shown below. Find the eigenvalues of the system matrix. Also, compute the transformation of the equations of the airplane in their given form to the modal canonical form. The system matrices are

\[\begin{matrix} A = \begin{bmatrix} - 5.03 & - 40.21 & - 1.5 & - 2.4 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}, & B = \begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \end{bmatrix} \\ C = \begin{bmatrix} 0 & 160 & 512 & 280 \end{bmatrix}, & D = 0.0 \end{matrix}\]

Solution. To compute the eigenvalues by using Matlab, we write

\[P = eig(A)\text{,}\text{~} \]

which results in

\[P = \begin{bmatrix} - 2.5000 + 5.8095i \\ - 2.5000 - 5.8095i \\ - 0.0150 + 0.2445i \\ - 0.0150 - 0.2445i \end{bmatrix}\]

Notice that the system has all poles in the left half-plane (LHP).
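The same eigenvalue computation can be sketched in Python/NumPy:

```python
import numpy as np

# Poles of the Piper Dakota model as the eigenvalues of A.
A = np.array([[-5.03, -40.21, -1.5, -2.4],
              [1.0,     0.0,   0.0,  0.0],
              [0.0,     1.0,   0.0,  0.0],
              [0.0,     0.0,   1.0,  0.0]])
P = np.linalg.eig(A)[0]

lhp = bool(np.all(P.real < 0))   # True: every pole is in the LHP
```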

To transform to modal form, we use the Matlab function canon:

sysG = ss(A,B,C,D);
[sysGm,TI] = canon(sysG,'modal');
[Am,Bm,Cm,Dm] = ssdata(sysGm)

The result of this computation is

\[Am = A_{m} = \begin{bmatrix} - 2.5000 & 5.8090 & 0 & 0 \\ - 5.8090 & - 2.5000 & 0 & 0 \\ 0 & 0 & - 0.0150 & 0.2445 \\ 0 & 0 & - 0.2445 & - 0.0150 \end{bmatrix}\]

Notice the two complex poles appear in the \(2 \times 2\) blocks along the diagonal of \(\mathbf{A}_{m}\). The rest of the computations from canon are

\[\begin{matrix} Bm & \ = \mathbf{B}_{m} = \begin{bmatrix} 7.7760 \\ - 22.6800 \\ - 3.2010 \\ 0.3066 \end{bmatrix} \\ Cm & \ = \mathbf{C}_{m} = \begin{bmatrix} - 1.0020 & 0.1809 & - 2.8710 & 8.8120 \end{bmatrix} \\ Dm & \ = D_{m} = 0 \\ TI & \ = T^{- 1} = \begin{bmatrix} 7.7761 & - 112.0713 & - 2.9026 & - 6.7383 \\ - 22.6776 & - 102.5493 & - 4.4167 & - 6.1121 \\ - 3.2007 & - 15.9764 & - 127.8924 & 1.0776 \\ 0.3066 & 2.3199 & 16.1981 & 31.4852 \end{bmatrix} \end{matrix}\]

It happens that canon was written to compute the inverse of the transformation we are working with (as you can see from TI in the previous equation), so we need to invert our Matlab results. The inverse is computed from

\[T = inv(TI) \]

and results in

\[T = \mathbf{T} = \begin{bmatrix} 0.0307 & - 0.0336 & 0.0005 & 0.0000 \\ - 0.0068 & - 0.0024 & - 0.0000 & - 0.0019 \\ 0.0001 & 0.0011 & - 0.0078 & 0.0005 \\ 0.0002 & - 0.0001 & 0.0040 & 0.0316 \end{bmatrix}\]

The eigenvectors computed with \(\lbrack V,P\rbrack = eig(A)\) are

\[V = \begin{bmatrix} 0.9874 + 0.0000i & 0.9874 + 0.0000i & 0.0026 - 0.0140i & 0.0026 + 0.0140i \\ - 0.0617 - 0.1434i & - 0.0617 + 0.1434i & - 0.0577 - 0.0071i & - 0.0577 + 0.0071i \\ - 0.0170 + 0.0179i & - 0.0170 - 0.0179i & - 0.0145 + 0.2370i & - 0.0145 - 0.2370i \\ 0.0037 + 0.0013i & 0.0037 - 0.0013i & 0.9695 + 0.0000i & 0.9695 + 0.0000i \end{bmatrix}\]

Dynamic Response from the State Equations

Having considered the structure of the state-variable equations, we now turn to finding the dynamic response from the state description and to the relationships between the state description and our earlier discussion of the frequency response and poles and zeros in Chapter 6. Let us begin with the general state equations given by Eqs. (7.18a) and (7.18b), and consider the problem in the frequency domain. Taking the Laplace transform of

\[\overset{˙}{\mathbf{x}} = \mathbf{Ax} + \mathbf{B}u \]

we obtain

\[s\mathbf{X}(s) - \mathbf{x}(0) = \mathbf{AX}(s) + \mathbf{B}U(s) \]

which is now an algebraic equation. If we collect the terms involving \(\mathbf{X}(s)\) on the left side of Eq. (7.42), keeping in mind that in matrix multiplication order is very important, we find that \(\ ^{4}\)

\[(s\mathbf{I} - \mathbf{A})\mathbf{X}(s) = \mathbf{B}U(s) + \mathbf{x}(0). \]

If we premultiply both sides by the inverse of \((s\mathbf{I} - \mathbf{A})\), then

\[\mathbf{X}(s) = (s\mathbf{I} - \mathbf{A})^{- 1}\mathbf{B}U(s) + (s\mathbf{I} - \mathbf{A})^{- 1}\mathbf{x}(0). \]

The output of the system is

\[\begin{matrix} Y(s) & \ = \mathbf{CX}(s) + DU(s), \\ & \ = \mathbf{C}(s\mathbf{I} - \mathbf{A})^{- 1}\mathbf{B}U(s) + \mathbf{C}(s\mathbf{I} - \mathbf{A})^{- 1}\mathbf{x}(0) + DU(s) \end{matrix}\]

This equation expresses the output response to both an initial condition and an external forcing input. Collecting the terms involving \(U(s)\) and assuming zero initial conditions result in the transfer function of the system,

\[G(s) = \frac{Y(s)}{U(s)} = \mathbf{C}(s\mathbf{I} - \mathbf{A})^{- 1}\mathbf{B} + D \]

EXAMPLE 7.11 Thermal System Transfer Function from the State Description

Use Eq. (7.45) to find the transfer function of the thermal system described by Eqs. (7.12a) and (7.12b).

Solution. The state-variable description matrices of the system are

\[\begin{matrix} \mathbf{A} = \begin{bmatrix} - 7 & - 12 \\ 1 & 0 \end{bmatrix}, & \mathbf{B} = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \\ \mathbf{C} = \begin{bmatrix} 1 & 2 \end{bmatrix}, & D = 0. \end{matrix}\]

To compute the transfer function according to Eq. (7.45), we form

\[s\mathbf{I} - \mathbf{A} = \begin{bmatrix} s + 7 & 12 \\ - 1 & s \end{bmatrix}\]

and compute

\[(s\mathbf{I} - \mathbf{A})^{- 1} = \frac{\begin{bmatrix} s & - 12 \\ 1 & s + 7 \end{bmatrix}}{s(s + 7) + 12}\]

We then substitute Eq. (7.46) into Eq. (7.45) to get

\[\begin{matrix} G(s) & \ = \frac{\begin{bmatrix} 1 & 2 \end{bmatrix}\begin{bmatrix} s & - 12 \\ 1 & s + 7 \end{bmatrix}\begin{bmatrix} 1 \\ 0 \end{bmatrix}}{s(s + 7) + 12} \\ & \ = \frac{\begin{bmatrix} 1 & 2 \end{bmatrix}\begin{bmatrix} s \\ 1 \end{bmatrix}}{s(s + 7) + 12} \\ & \ = \frac{(s + 2)}{(s + 3)(s + 4)}. \end{matrix}\]

The results can also be found using the Matlab statements,

[num, den] = ss2tf(A,B,C,D)

and yield num \(= \begin{bmatrix} 0 & 1 & 2 \end{bmatrix}\) and den \(= \begin{bmatrix} 1 & 7 & 12 \end{bmatrix}\), which agrees with the hand calculations above.
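The hand calculation above can also be spot-checked numerically by evaluating Eq. (7.45) at a test frequency and comparing with the factored transfer function (Python/NumPy sketch):

```python
import numpy as np

A = np.array([[-7.0, -12.0], [1.0, 0.0]])
B = np.array([[1.0], [0.0]])
C = np.array([[1.0, 2.0]])
D = 0.0

def G_state(s):
    """G(s) = C (sI - A)^-1 B + D, evaluated numerically."""
    return (C @ np.linalg.solve(s * np.eye(2) - A, B)).item() + D

def G_poly(s):
    return (s + 2) / ((s + 3) * (s + 4))

s_test = 1.0 + 2.0j                     # arbitrary test frequency
err = abs(G_state(s_test) - G_poly(s_test))
```

Because both expressions are the same rational function, `err` is at machine-precision level for any \(s\) away from the poles.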

Because Eq. (7.45) expresses the transfer function in terms of the general state-space descriptor matrices \(\mathbf{A},\mathbf{B},\mathbf{C}\), and \(D\), we are able to express poles and zeros in terms of these matrices. We saw earlier that by transforming the state matrices to diagonal form, the poles appear as the eigenvalues on the main diagonal of the A matrix. We now take a systems theory point of view to look at the poles and zeros as they are involved in the transient response of a system.

As we saw in Chapter 3, a pole of the transfer function \(G(s)\) is a value of generalized frequency \(s\) such that, if \(s = p_{i}\), then the system can respond to an initial condition as \(K_{i}e^{p_{i}t}\), with no forcing function \(u\). In this context, \(p_{i}\) is called a natural frequency or natural mode of the system. If we take the state-space equations (7.18a and 7.18b) and set the forcing function \(u\) to zero, we have

\[\overset{˙}{\mathbf{x}} = \mathbf{Ax} \]

If we assume some (as yet unknown) initial condition

\[\mathbf{x}(0) = \mathbf{x}_{0} \]

and that the entire state motion behaves according to the same natural frequency, then the state can be written as \(\mathbf{x}(t) = e^{p_{i}t}\mathbf{x}_{0}\). It follows from Eq. (7.50) that

\[\overset{˙}{\mathbf{x}}(t) = p_{i}e^{p_{i}t}\mathbf{x}_{0} = \mathbf{Ax} = \mathbf{A}e^{p_{i}t}\mathbf{x}_{0} \]

or

\[\mathbf{A}\mathbf{x}_{0} = p_{i}\mathbf{x}_{0} \]

Transfer function poles from state equations
We can rewrite Eq. (7.53) as

\[\left( p_{i}\mathbf{I} - \mathbf{A} \right)\mathbf{x}_{0} = 0. \]

Equations (7.53) and (7.54) constitute the eigenvector/eigenvalue problem we saw in Eq. (7.35) with eigenvalues \(p_{i}\) and, in this case, eigenvectors \(\mathbf{x}_{0}\) of the matrix \(\mathbf{A}\). If we are just interested in the eigenvalues, we can use the fact that for a nonzero \(\mathbf{x}_{0}\), Eq. (7.54) has a solution if and only if

\[det\left( p_{i}\mathbf{I} - \mathbf{A} \right) = 0 \]

These equations show again that the poles of the transfer function are the eigenvalues of the system matrix A. The determinant equation (7.55) is a polynomial in the eigenvalues \(p_{i}\) known as the characteristic equation. In Example 7.9, we computed the eigenvalues and eigenvectors of a particular matrix in control canonical form. As an alternative computation for the poles of that system, we could solve the characteristic equation (7.55). For the system described by Eqs. (7.12a) and (7.12b), we can find the poles from Eq. (7.55) by solving

\[\begin{matrix} det(s\mathbf{I} - \mathbf{A}) & \ = 0, \\ det\begin{bmatrix} s + 7 & 12 \\ - 1 & s \end{bmatrix} & \ = 0, \\ s(s + 7) + 12 & \ = (s + 3)(s + 4) = 0. \end{matrix}\]

This confirms again that the poles of the system are the eigenvalues of A.
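A quick numerical confirmation (Python/NumPy; `np.poly` forms the characteristic polynomial of a matrix from its eigenvalues):

```python
import numpy as np

# np.poly(A) returns the coefficients of det(sI - A); np.roots then
# recovers the poles.
A = np.array([[-7.0, -12.0], [1.0, 0.0]])
coeffs = np.poly(A)        # s^2 + 7s + 12  ->  [1, 7, 12]
poles = np.roots(coeffs)   # -3 and -4
```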

We can also determine the transmission zeros of a system from the state-variable description matrices \(\mathbf{A},\mathbf{B},\mathbf{C}\), and \(D\) using a systems theory point of view. From this perspective, a zero is a value of generalized frequency \(s\) such that the system can have a nonzero input and state and yet have an output of zero. If the input is exponential at the zero frequency \(z_{i}\), given by

\[u(t) = u_{0}e^{z_{i}t}, \]

then the output is identically zero:

\[y(t) \equiv 0. \]

The state-space description of Eqs. (7.57) and (7.58) would be

\[u = u_{0}e^{z_{i}t},\ \mathbf{x}(t) = \mathbf{x}_{0}e^{z_{i}t},\ y(t) \equiv 0. \]

Thus

\[\overset{˙}{\mathbf{x}} = z_{i}e^{z_{i}t}\mathbf{x}_{0} = \mathbf{A}e^{z_{i}t}\mathbf{x}_{0} + \mathbf{B}u_{0}e^{z_{i}t} \]

or

\[\begin{bmatrix} z_{i}\mathbf{I} - \mathbf{A} & - \mathbf{B} \end{bmatrix}\begin{bmatrix} \mathbf{x}_{0} \\ u_{0} \end{bmatrix} = \mathbf{0}\]

and

\[y = \mathbf{Cx} + Du = \mathbf{C}e^{z_{i}t}\mathbf{x}_{0} + Du_{0}e^{z_{i}t} \equiv 0 \]

Transfer function zeros from state equations

Combining Eqs. (7.61) and (7.62), we get

\[\begin{bmatrix} z_{i}\mathbf{I} - \mathbf{A} & - \mathbf{B} \\ \mathbf{C} & D \end{bmatrix}\begin{bmatrix} \mathbf{x}_{0} \\ u_{0} \end{bmatrix} = \begin{bmatrix} \mathbf{0} \\ 0 \end{bmatrix}\]

From Eq. (7.63), we can conclude that a zero of the state-space system is a value of \(z_{i}\) where Eq. (7.63) has a nontrivial solution. With one input and one output, the matrix is square, and a solution to Eq. (7.63) is equivalent to a solution to

\[det\begin{bmatrix} z_{i}\mathbf{I} - \mathbf{A} & - \mathbf{B} \\ \mathbf{C} & D \end{bmatrix} = 0\]

EXAMPLE 7.12 Zeros for the Thermal System from a State Description

Compute the zero(s) of the thermal system described by Eq. (7.12).

Solution. We use Eq. (7.64) to compute the zeros:

\[\begin{matrix} & det\begin{bmatrix} s + 7 & 12 & - 1 \\ - 1 & s & 0 \\ 1 & 2 & 0 \end{bmatrix} = 0 \\ & \ - 2 - s = 0 \\ & s = - 2. \end{matrix}\]

Note this result agrees with the zero of the transfer function given by Eq. (7.9). The result can also be found using the following Matlab statements:

sysG = ss(Ac,Bc,Cc,Dc);
z = tzero(sysG)

and yields \(z = - 2.0\).
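Equation (7.64) can also be checked numerically by evaluating the determinant of the system matrix directly (Python/NumPy sketch):

```python
import numpy as np

A = np.array([[-7.0, -12.0], [1.0, 0.0]])
B = np.array([[1.0], [0.0]])
C = np.array([[1.0, 2.0]])
D = np.array([[0.0]])

def zero_det(s):
    """Determinant of the system matrix of Eq. (7.64) at s."""
    top = np.hstack([s * np.eye(2) - A, -B])
    bottom = np.hstack([C, D])
    return np.linalg.det(np.vstack([top, bottom]))

at_zero = zero_det(-2.0)    # vanishes at the transmission zero s = -2
elsewhere = zero_det(-1.0)  # nonzero away from the zero
```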

Equation (7.55) for the characteristic equation and Eq. (7.64) for the zeros polynomial can be combined to express the transfer function in a compact form from state-description matrices as

\[G(s) = \frac{det\begin{bmatrix} s\mathbf{I} - \mathbf{A} & - \mathbf{B} \\ \mathbf{C} & D \end{bmatrix}}{det(s\mathbf{I} - \mathbf{A})}\]

(See Appendix WB available online at www.pearsonglobaleditions.com for more details.) While Eq. (7.65) is a compact formula for theoretical studies, it is very sensitive to numerical errors. A numerically stable algorithm for computing the transfer function is described in Emami-Naeini and Van Dooren (1982). Given the transfer function, we can compute the frequency response as \(G(j\omega)\), and as discussed earlier, we can use Eqs. (7.54) and (7.63) to find the poles and zeros, upon which the transient response depends, as we saw in Chapter 3.

Matlab ss2tf

Matlab roots

Matlab tzero

EXAMPLE 7.13 Analysis of the State Equations of the Piper Dakota

Compute the poles, zeros, and the transfer function for the state space equations of the Piper Dakota given in Example 7.10.

Solution. There are two different ways to compute the answer to this problem. The most direct is to use the Matlab function ss2tf (state-space to transfer function), which will give the numerator and denominator polynomials directly. This function permits multiple inputs and outputs; the fifth argument of the function tells which input is to be used. We have only one input here, but must still provide the argument. The computation of the transfer function is

[Num, Den] = ss2tf(A,B,C,D,1)

which results in

\[\begin{matrix} \text{~}\text{Num}\text{~} & \ = \begin{bmatrix} 0 & 0 & 160 & 512 & 280 \end{bmatrix} \\ \text{~}\text{Den}\text{~} & \ = \begin{bmatrix} 1.00 & 5.03 & 40.21 & 1.50 & 2.40 \end{bmatrix} \end{matrix}\]

It is interesting to check to see whether the poles and zeros determined this way agree with those found by other means. To find the roots of a polynomial such as the one corresponding to Den, we use the Matlab function roots:

\[roots(\text{~}\text{Den}\text{~}) = \begin{bmatrix} - 2.5000 + 5.8095i \\ - 2.5000 - 5.8095i \\ - 0.0150 + 0.2445i \\ - 0.0150 - 0.2445i \end{bmatrix}\]

which yields the poles of the system. Checking with Example 7.10, we confirm that they agree.

How about the zeros? We can find these by finding the roots of the numerator polynomial. We compute the roots of the polynomial Num:

\[roots(Num) = \begin{bmatrix} - 2.5000 \\ - 0.7000 \end{bmatrix}\]

The zeros can be computed by the equivalent of Eq. (7.63) with the function tzero (transmission zeros).

sysG = ss(A,B,C,D);
[ZER] = tzero(sysG)

yields

\[ZER = \begin{bmatrix} - 2.5000 \\ - 0.7000 \end{bmatrix}\]

Estimator/observer

The control law and the estimator together form the compensation

Figure 7.11

Schematic diagram of state-space design elements
From these results we can write down, for example, the transfer function as

\[\begin{matrix} G(s) & \ = \ \frac{160s^{2} + 512s + 280}{s^{4} + 5.03s^{3} + 40.21s^{2} + 1.5s + 2.4} \\ & \ = \frac{160(s + 2.5)(s + 0.7)}{\left( s^{2} + 5s + 40 \right)\left( s^{2} + 0.03s + 0.06 \right)} \end{matrix}\]
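Both forms can be cross-checked numerically (Python/NumPy sketch): expanding the factored denominator should recover Den, and the roots of Num are the two zeros found above.

```python
import numpy as np

# Expand the two denominator quadratics and compare with Den; recover the
# zeros from the numerator coefficients.
den = np.polymul([1.0, 5.0, 40.0], [1.0, 0.03, 0.06])
zeros = np.roots([160.0, 512.0, 280.0])
```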

7.5 Control-Law Design for Full-State Feedback

One of the attractive features of the state-space design method is that it consists of a sequence of independent steps, as mentioned in the chapter overview. The first step, discussed in Section 7.5.1, is to determine the control law. The purpose of the control law is to allow us to assign a set of pole locations for the closed-loop system that will correspond to satisfactory dynamic response in terms of rise time and other measures of transient response. In Section 7.5.2, we will show how to introduce the reference input with full-state feedback, and in Section 7.6, we will describe the process of finding the poles for good design.

The second step - necessary if the full state is not available - is to design an estimator (sometimes called an observer), which computes an estimate of the entire state vector when provided with the measurements of the system indicated by Eq. (7.18b). We will examine estimator design in Section 7.7.

The third step consists of combining the control law and the estimator. Figure 7.11 shows how the control law and the estimator fit together and how the combination takes the place of what we have been previously referring to as compensation. At this stage, the control-law calculations are based on the estimated state rather than the actual state. In Section 7.8, we will show that this substitution is reasonable, and also that using the combined control law and estimator results in closed-loop pole locations that are the same as those determined when designing the control and estimator separately.

Compensation

Control law

Control characteristic equation

Figure 7.12

Assumed system for control-law design
The fourth and final step of state-space design is to introduce the reference input in such a way that the plant output will track external commands with acceptable rise-time, overshoot, and settling-time values. At this point in the design, all the closed-loop poles have been selected, and the designer is concerned with the zeros of the overall transfer function. Figure 7.11 shows the command input \(r\) introduced in the same relative position as was done with the transform design methods; however, in Section 7.9, we will show how to introduce the reference at another location, resulting in different zeros and (usually) superior control.

261.1.1. Finding the Control Law

The first step in the state-space design method, as mentioned earlier, is to find the control law as feedback of a linear combination of the state-variables, that is,

\[u = - \mathbf{Kx} = - \begin{bmatrix} K_{1} & K_{2} & \cdots & K_{n} \end{bmatrix}\begin{bmatrix} x_{1} \\ x_{2} \\ \vdots \\ x_{n} \end{bmatrix}.\]

We assume, for feedback purposes, that all the elements of the state vector are at our disposal, which is why we refer to this as "full-state" feedback. In practice, of course, this would usually be a ridiculous assumption; moreover, a well-trained control designer knows that other design methods do not require so many sensors. The assumption that all state-variables are available merely allows us to proceed with this first step.

Equation (7.67) tells us that the system has a constant matrix in the state-vector feedback path, as shown in Fig. 7.12. For an \(n\) th-order system, there will be \(n\) feedback gains, \(K_{1},\ldots,K_{n}\), and because there are \(n\) roots of the system, it is possible that there are enough degrees of freedom to select arbitrarily any desired root location by choosing the proper values of \(K_{i}\). This freedom contrasts sharply with root-locus design, in which we have only one parameter and the closed-loop poles are restricted to the locus.

Substituting the feedback law given by Eq. (7.67) into the system described by Eq. (7.18a) yields

\[\overset{˙}{\mathbf{x}} = \mathbf{Ax} - \mathbf{BKx} \]

The characteristic equation of this closed-loop system is

\[det\lbrack s\mathbf{I} - (\mathbf{A} - \mathbf{BK})\rbrack = 0 \]

When evaluated, this yields an \(n\) th-order polynomial in \(s\) containing the gains \(K_{1},\ldots,K_{n}\). The control-law design then consists of picking the gains \(\mathbf{K}\) so the roots of Eq. (7.69) are in desirable locations. Selecting desirable root locations is an inexact science that may require some iteration by the designer. Issues in their selection are considered in Examples 7.14 to 7.16 as well as in Section 7.6. For now, we assume the desired locations are known, say,

\[s = s_{1},s_{2},\ldots,s_{n} \]

Then the corresponding desired (control) characteristic equation is

\[\alpha_{c}(s) = \left( s - s_{1} \right)\left( s - s_{2} \right)\ldots\left( s - s_{n} \right) = 0 \]

Hence, the required elements of \(\mathbf{K}\) are obtained by matching coefficients in Eqs. (7.69) and (7.70). This forces the system's characteristic equation to be identical to the desired characteristic equation and the closed-loop poles to be placed at the desired locations.

262. EXAMPLE 7.14

263. Control Law for a Pendulum

Suppose you have a pendulum with frequency \(\omega_{0}\) and a state-space description given by

\[\begin{bmatrix} {\overset{˙}{x}}_{1} \\ {\overset{˙}{x}}_{2} \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ - \omega_{0}^{2} & 0 \end{bmatrix}\begin{bmatrix} x_{1} \\ x_{2} \end{bmatrix} + \begin{bmatrix} 0 \\ 1 \end{bmatrix}u\]

Find the control law that places the closed-loop poles of the system so they are both at \(- 2\omega_{0}\). In other words, you wish to double the natural frequency and increase the damping ratio \(\zeta\) from 0 to 1 .

Solution. From Eq. (7.70), we find that

\[\begin{matrix} \alpha_{c}(s) & \ = \left( s + 2\omega_{0} \right)^{2} \\ & \ = s^{2} + 4\omega_{0}s + 4\omega_{0}^{2} \end{matrix}\]

Equation (7.69) tells us that

\[\begin{matrix} & det\lbrack s\mathbf{I} - (\mathbf{A} - \mathbf{BK})\rbrack \\ & \ = det\left\{ \begin{bmatrix} s & 0 \\ 0 & s \end{bmatrix} - \left( \begin{bmatrix} 0 & 1 \\ - \omega_{0}^{2} & 0 \end{bmatrix} - \begin{bmatrix} 0 \\ 1 \end{bmatrix}\begin{bmatrix} K_{1} & K_{2} \end{bmatrix} \right) \right\}, \end{matrix}\]

or

\[s^{2} + K_{2}s + \omega_{0}^{2} + K_{1} = 0 \]

Equating the coefficients with like powers of \(s\) in Eqs. (7.72b) and (7.73) yields the system of equations

\[\begin{matrix} K_{2} & \ = 4\omega_{0} \\ \omega_{0}^{2} + K_{1} & \ = 4\omega_{0}^{2} \end{matrix}\]

and therefore,

\[\begin{matrix} & K_{1} = 3\omega_{0}^{2}, \\ & K_{2} = 4\omega_{0}. \end{matrix}\]

Figure 7.13

Impulse response of the undamped oscillator with full-state feedback; \(\omega_{0} = 1\)

Thus the control law in concise form is

\[\mathbf{K} = \begin{bmatrix} K_{1} & K_{2} \end{bmatrix} = \begin{bmatrix} 3\omega_{0}^{2} & 4\omega_{0} \end{bmatrix}\]

Figure 7.13 shows the response of the closed-loop system to the initial conditions \(x_{1} = 1.0,x_{2} = 0.0\), and \(\omega_{0} = 1\). It shows a very well damped response, as would be expected from having two roots at \(s = - 2\). The Matlab command impulse was used to generate the plot.
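The placement can also be verified numerically. The sketch below is mine, not the book's (the text uses Matlab); it forms \(\mathbf{A} - \mathbf{BK}\) in numpy for \(\omega_{0} = 1\) and confirms that both closed-loop eigenvalues sit at \(-2\omega_{0}\):

```python
import numpy as np

w0 = 1.0
A = np.array([[0.0, 1.0],
              [-w0**2, 0.0]])      # undamped pendulum, Eq. (7.71)
B = np.array([[0.0],
              [1.0]])
K = np.array([[3 * w0**2, 4 * w0]])  # gains from Example 7.14

# Closed-loop system matrix and its eigenvalues
eigs = np.linalg.eigvals(A - B @ K)  # expect a double root at -2*w0
```

The same check works for any \(\omega_{0} > 0\), since the gains scale with it.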

Calculating the gains using the technique illustrated in Example 7.14 becomes rather tedious when the order of the system is higher than 3. There are, however, special "canonical" forms of the state-variable equations for which the algebra for finding the gains is especially simple. One such canonical form that is useful in control law design is the control canonical form as discussed in Section 7.4.1. Consider the third-order system \(\ ^{5}\)

\[\dddot{y} + a_{1}\overset{¨}{y} + a_{2}\overset{˙}{y} + a_{3}y = b_{1}\overset{¨}{u} + b_{2}\overset{˙}{u} + b_{3}u\text{,}\text{~} \]

which corresponds to the transfer function

\[G(s) = \frac{Y(s)}{U(s)} = \frac{b_{1}s^{2} + b_{2}s + b_{3}}{s^{3} + a_{1}s^{2} + a_{2}s + a_{3}} = \frac{b(s)}{a(s)} \]

Suppose we introduce an auxiliary variable (referred to as the partial state) \(\xi\), which relates \(a(s)\) and \(b(s)\) as shown in Fig. 7.14(a). The transfer function from \(U\) to \(\xi\) is

\[\frac{\xi(s)}{U(s)} = \frac{1}{a(s)} \]

or

\[\dddot{\xi} + a_{1}\overset{¨}{\xi} + a_{2}\overset{˙}{\xi} + a_{3}\xi = u\text{.}\text{~} \]

Figure 7.14

Derivation of control canonical form
It is easy to draw a block diagram corresponding to Eq. (7.77) if we rearrange the equation as follows:

\[\dddot{\xi} = - a_{1}\overset{¨}{\xi} - a_{2}\overset{˙}{\xi} - a_{3}\xi + u \]

The summation is indicated in Fig. 7.14(b), where each \(\xi\) on the right-hand side is obtained by sequential integration of \(\dddot{\xi}\). To form the output, we go back to Fig. 7.14(a) and note that

\[Y(s) = b(s)\xi(s) \]

which means that

\[y = b_{1}\overset{¨}{\xi} + b_{2}\overset{˙}{\xi} + b_{3}\xi \]

We again pick off the outputs of the integrators, multiply them by \(\left\{ b_{i} \right\}\)'s, and form the right-hand side of Eq. (7.80) using a summer to yield the output as shown in Fig. 7.14(c). In this case, all the feedback loops return to the point of the application of the input, or "control" variable, and hence the form is referred to as the control canonical form as discussed in Section 7.4.1. Reduction of the structure by Mason's rule or by elementary block diagram operations verifies that this structure has the transfer function given by \(G(s)\).

(a)

(b)

(c)

Taking the state as the outputs of the three integrators numbered, by convention, from the left, namely,

\[x_{1} = \overset{¨}{\xi},\ x_{2} = \overset{˙}{\xi},\ x_{3} = \xi,\]

we obtain

\[\begin{matrix} & {\overset{˙}{x}}_{1} = \dddot{\xi} = - a_{1}x_{1} - a_{2}x_{2} - a_{3}x_{3} + u \\ & {\overset{˙}{x}}_{2} = x_{1} \\ & {\overset{˙}{x}}_{3} = x_{2} \end{matrix}\]

We may now write the matrices describing the control canonical form in general:

\[\begin{matrix} \mathbf{A}_{c} & \ = \begin{bmatrix} - a_{1} & - a_{2} & \cdots & \cdots & - a_{n} \\ 1 & 0 & \cdots & \cdots & 0 \\ 0 & 1 & 0 & \cdots & 0 \\ \vdots & & \ddots & & 0 \\ 0 & 0 & \cdots & 1 & 0 \end{bmatrix},\ \mathbf{B}_{c} = \begin{bmatrix} 1 \\ 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix}, \\ \mathbf{C}_{c} & \ = \begin{bmatrix} b_{1} & b_{2} & \cdots & \cdots & b_{n} \end{bmatrix},\ D_{c} = 0. \end{matrix}\]

The special structure of this system matrix is referred to as the upper companion form because the characteristic equation is \(a(s) = s^{n} +\) \(a_{1}s^{n - 1} + a_{2}s^{n - 2} + \cdots + a_{n}\) and the coefficients of this monic "companion" polynomial are the elements in the first row of \(\mathbf{A}_{c}\). If we now form the closed-loop system matrix \(\mathbf{A}_{c} - \mathbf{B}_{c}\mathbf{K}_{c}\), we find that

\[\mathbf{A}_{c} - \mathbf{B}_{c}\mathbf{K}_{c} = \begin{bmatrix} - a_{1} - K_{1} & - a_{2} - K_{2} & \cdots & \cdots & - a_{n} - K_{n} \\ 1 & 0 & \cdots & \cdots & 0 \\ 0 & 1 & 0 & \cdots & 0 \\ \vdots & & \ddots & & \vdots \\ 0 & 0 & \cdots & 1 & 0 \end{bmatrix}\]

By visually comparing Eqs. (7.83a) and (7.84), we see the closed-loop characteristic equation is

\[s^{n} + \left( a_{1} + K_{1} \right)s^{n - 1} + \left( a_{2} + K_{2} \right)s^{n - 2} + \cdots + \left( a_{n} + K_{n} \right) = 0 \]

Therefore, if the desired pole locations result in the characteristic equation given by

\[\alpha_{c}(s) = s^{n} + \alpha_{1}s^{n - 1} + \alpha_{2}s^{n - 2} + \cdots + \alpha_{n} = 0 \]

then the necessary feedback gains can be found by equating the coefficients in Eqs. (7.85) and (7.86):

\[K_{1} = - a_{1} + \alpha_{1},K_{2} = - a_{2} + \alpha_{2},\ldots,K_{n} = - a_{n} + \alpha_{n} \]
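Equation (7.87) makes the gain computation in control canonical form a one-line subtraction. The numpy sketch below (my own illustration, with made-up third-order coefficients: \(a(s)\) has roots at \(-1, -2, -3\) and the desired roots are \(-2, -2, -5\)) verifies the placement by checking the eigenvalues of \(\mathbf{A}_{c} - \mathbf{B}_{c}\mathbf{K}\):

```python
import numpy as np

a     = np.array([6.0, 11.0, 6.0])   # a(s) = s^3 + 6s^2 + 11s + 6, roots -1,-2,-3
alpha = np.array([9.0, 24.0, 20.0])  # alpha_c(s) = s^3 + 9s^2 + 24s + 20, roots -2,-2,-5

K = alpha - a                        # Eq. (7.87): K_i = alpha_i - a_i

# Build A_c, B_c (upper companion form, Eq. 7.83) and verify the placement
n = len(a)
Ac = np.vstack([-a, np.eye(n - 1, n)])   # first row -a_i, subdiagonal of ones
Bc = np.eye(n, 1)                        # [1; 0; 0]
eigs = np.sort(np.linalg.eigvals(Ac - Bc @ K.reshape(1, -1)).real)
```

The closed-loop first row is \(-a_i - K_i = -\alpha_i\), so the eigenvalues land at the desired roots by construction.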

We now have an algorithm for a design procedure: Given a system of order \(n\) described by an arbitrary \((\mathbf{A},\mathbf{B})\) and given a desired \(n\)th-order monic characteristic polynomial \(\alpha_{c}(s)\), we (1) transform \((\mathbf{A},\mathbf{B})\) to control canonical form \(\left( \mathbf{A}_{c},\mathbf{B}_{c} \right)\) by changing the state \(\mathbf{x} = \mathbf{Tz}\) and (2) solve for the control gains by inspection using Eq. (7.87) to give the control law \(u = - \mathbf{K}_{c}\mathbf{z}\). Because this gain is for the state in the control form, we must (3) transform the gain back to the original state to get \(\mathbf{K} = \mathbf{K}_{c}\mathbf{T}^{- 1}\).

Ackermann's formula for pole placement

An alternative to this transformation method is given by Ackermann's formula (1972), which organizes the three-step process of converting to \(\left( \mathbf{A}_{c},\mathbf{B}_{c} \right)\), solving for the gains, and converting back again into the very compact form

\[\mathbf{K} = \begin{bmatrix} 0 & \cdots & 0 & 1 \end{bmatrix}\mathcal{C}^{- 1}\alpha_{c}(\mathbf{A})\]

where

\[\mathcal{C} = \begin{bmatrix} \mathbf{B} & \mathbf{AB} & \mathbf{A}^{2}\mathbf{B} & \cdots & \mathbf{A}^{n - 1}\mathbf{B} \end{bmatrix},\]

where \(\mathcal{C}\) is the controllability matrix we saw in Section 7.4, \(n\) gives the order of the system and the number of state-variables, and \(\alpha_{c}(\mathbf{A})\) is a matrix defined as

\[\alpha_{c}(\mathbf{A}) = \mathbf{A}^{n} + \alpha_{1}\mathbf{A}^{n - 1} + \alpha_{2}\mathbf{A}^{n - 2} + \cdots + \alpha_{n}\mathbf{I}, \]

where the \(\alpha_{i}\) are the coefficients of the desired characteristic polynomial Eq. (7.86). Note Eq. (7.90) is a matrix equation. Refer to Appendix WD available online at www.pearsonglobaleditions.com for the derivation of Ackermann's formula.
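Equations (7.88)-(7.90) translate directly into a few lines of numpy. The sketch below is my own bare-bones stand-in for Matlab's acker (no error checking, and subject to the same numerical caveats); it is checked against the pendulum gains of Example 7.14:

```python
import numpy as np

def acker(A, B, poles):
    """Ackermann's formula, Eq. (7.88): K = [0 ... 0 1] C^-1 alpha_c(A)."""
    n = A.shape[0]
    # Controllability matrix, Eq. (7.89): [B, AB, ..., A^(n-1)B]
    ctrb = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
    # alpha_c(A), Eq. (7.90): desired polynomial evaluated at the matrix A
    coeffs = np.real(np.poly(poles))            # [1, alpha_1, ..., alpha_n]
    alpha_A = sum(c * np.linalg.matrix_power(A, n - k)
                  for k, c in enumerate(coeffs))
    last_row = np.zeros((1, n))
    last_row[0, -1] = 1.0
    return (last_row @ np.linalg.solve(ctrb, alpha_A)).ravel()

# Check against Example 7.14 with w0 = 1: expect K = [3, 4]
w0 = 1.0
A = np.array([[0.0, 1.0], [-w0**2, 0.0]])
B = np.array([[0.0], [1.0]])
K = acker(A, B, [-2 * w0, -2 * w0])
```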

264. Ackermann's Formula for Undamped Oscillator

(a) Use Ackermann's formula to solve for the gains for the undamped oscillator of Example 7.14. (b) Verify the calculations with Matlab for \(\omega_{0} = 1\).

265. Solution

(a) The desired characteristic equation is \(\alpha_{c}(s) = \left( s + 2\omega_{0} \right)^{2}\). Therefore, the desired characteristic polynomial coefficients,

\[\alpha_{1} = 4\omega_{0},\ \alpha_{2} = 4\omega_{0}^{2}, \]

are substituted into Eq. (7.90) and the result is

\[\begin{matrix} \alpha_{c}(\mathbf{A}) = & \begin{bmatrix} - \omega_{0}^{2} & 0 \\ 0 & - \omega_{0}^{2} \end{bmatrix} + 4\omega_{0}\begin{bmatrix} 0 & 1 \\ - \omega_{0}^{2} & 0 \end{bmatrix} \\ & \ + 4\omega_{0}^{2}\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \\ = & \begin{bmatrix} 3\omega_{0}^{2} & 4\omega_{0} \\ - 4\omega_{0}^{3} & 3\omega_{0}^{2} \end{bmatrix}. \end{matrix}\]

The controllability matrix is

\[\mathcal{C} = \begin{bmatrix} \mathbf{B} & \mathbf{AB} \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}\]

which yields

\[\mathcal{C}^{- 1} = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}\]

Finally, we substitute Eqs. (7.92) and (7.91a) into Eq. (7.88) to get

\[\mathbf{K} = \begin{bmatrix} K_{1} & K_{2} \end{bmatrix} = \begin{bmatrix} 3\omega_{0}^{2} & 4\omega_{0} \end{bmatrix},\]

which agrees with the coefficient-matching result of Example 7.14.

(b) The Matlab statements

wo = 1;
A = [0 1; -wo^2 0];
B = [0; 1];
pc = [-2*wo; -2*wo];
K = acker(A, B, pc)

yield \(\mathbf{K} = \begin{bmatrix} 3 & 4 \end{bmatrix}\), which agrees with the hand calculations above.

As was mentioned earlier, computation of the controllability matrix has very poor numerical accuracy, and this carries over to Ackermann's formula. Equation (7.88), implemented in Matlab with the function acker, can be used for the design of SISO systems with a small \(( \leq 10)\) number of state-variables. For more complex cases a more reliable formula is available, implemented in Matlab with place. A modest limitation on place is that, because it is based on assigning closed-loop eigenvectors, none of the desired closed-loop poles may be repeated; that is, the poles must be distinct, \(\ ^{6}\) a requirement that does not apply to acker.

\(\overline{\ ^{6}\text{~}\text{One may get around this restriction by moving the repeated poles by very small amounts}\text{~}}\) to make them distinct.

The fact that we can shift the poles of a system by state feedback to any desired location is a rather remarkable result. The development in this section reveals that this shift is possible if we can transform (A, B) to the control form \(\left( \mathbf{A}_{c},\mathbf{B}_{c} \right)\), which in turn is possible if the system is controllable. In rare instances, the system may be uncontrollable, in which case no possible control will yield arbitrary pole locations. Uncontrollable systems have certain modes, or subsystems, that are unaffected by the control. This usually means that parts of the system are physically disconnected from the input. For example, in modal canonical form for a system with distinct poles, one of the modal state-variables is not connected to the input if there is a zero entry in the \(\mathbf{B}_{m}\) matrix. A good physical understanding of the system being controlled would prevent any attempt to design a controller for an uncontrollable system. As we saw earlier, there are algebraic tests for controllability; however, no mathematical test can replace the control engineer's understanding of the physical system. Often the physical situation is such that every mode is controllable to some degree, and, while the mathematical tests indicate the system is controllable, certain modes are so weakly controllable that designs to control them are virtually useless.

Airplane control is a good example of weak controllability of certain modes. Pitch plane motion \(\mathbf{x}_{p}\) is primarily affected by the elevator \(\delta_{e}\) and weakly affected by rolling motion \(\mathbf{x}_{r}\). Rolling motion is essentially affected only by the ailerons \(\delta_{a}\). The state-space description of these relationships is

\[\begin{bmatrix} {\overset{˙}{\mathbf{x}}}_{p} \\ {\overset{˙}{\mathbf{x}}}_{r} \end{bmatrix} = \begin{bmatrix} \mathbf{A}_{p} & \varepsilon \\ 0 & \mathbf{A}_{r} \end{bmatrix}\begin{bmatrix} \mathbf{x}_{p} \\ \mathbf{x}_{r} \end{bmatrix} + \begin{bmatrix} \mathbf{B}_{p} & 0 \\ 0 & \mathbf{B}_{r} \end{bmatrix}\begin{bmatrix} \delta_{e} \\ \delta_{a} \end{bmatrix},\]

where the matrix of small numbers \(\varepsilon\) represents the weak coupling from rolling motion to pitching motion. A mathematical test of controllability for this system would conclude that pitch plane motion (and therefore altitude) is controllable by the ailerons as well as by the elevator! However, it is impractical to attempt to control an airplane's altitude by rolling the aircraft with the ailerons.
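The point can be made concrete with a hypothetical two-state version of this model: one pitch state, one roll state, a small coupling \(\varepsilon\), and the aileron as the only input (all numbers below are invented for illustration). The controllability matrix is nonsingular for any \(\varepsilon \neq 0\), so the algebraic test passes, but its condition number explodes as \(\varepsilon\) shrinks:

```python
import numpy as np

def ctrb_cond(eps):
    # One pitch state, one roll state; the aileron drives roll directly,
    # and roll couples into pitch only through the small term eps.
    A = np.array([[-1.0, eps],
                  [0.0, -2.0]])
    B = np.array([[0.0],
                  [1.0]])
    ctrb = np.hstack([B, A @ B])       # [B, AB]
    return np.linalg.cond(ctrb)

weak = ctrb_cond(1e-6)   # pitch is "controllable" but barely: huge condition number
strong = ctrb_cond(1.0)  # well-conditioned by comparison
```

A huge condition number means the gains needed to move the weakly coupled mode are enormous, which is the practical meaning of "virtually useless" above.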

Another example will illustrate some of the properties of pole placement by state feedback and the effects of loss of controllability on the process.

266. How Zero Location Can Affect the Control Law

A specific thermal system is described by Eq. (7.32a) in observer canonical form with a zero at \(s = z_{0}\). (a) Find the state feedback gains necessary for placing the poles of this system at the roots of \(s^{2} + 2\zeta\omega_{n}s +\) \(\omega_{n}^{2}\) (that is, at \(- \zeta\omega_{n} \pm j\omega_{n}\sqrt{1 - \zeta^{2}}\) ). (b) Repeat the computation with Matlab, using the parameter values \(z_{0} = 2,\zeta = 0.5\), and \(\omega_{n} = 2rad/sec\).

Solution

(a) The state description matrices are

\[\begin{matrix} \mathbf{A}_{o} = \begin{bmatrix} - 7 & 1 \\ - 12 & 0 \end{bmatrix}, & \mathbf{B}_{o} = \begin{bmatrix} 1 \\ - z_{0} \end{bmatrix}, \\ \mathbf{C}_{o} = \begin{bmatrix} 1 & 0 \end{bmatrix}, & D_{o} = 0. \end{matrix}\]

First, we substitute these matrices into Eq. (7.69) to get the closedloop characteristic equation in terms of the unknown gains and the zero position:

\[s^{2} + \left( 7 + K_{1} - z_{0}K_{2} \right)s + 12 - K_{2}\left( 7z_{0} + 12 \right) - K_{1}z_{0} = 0 \]

Next, we equate the coefficients of this equation to the coefficients of the desired characteristic equation to get

\[\begin{matrix} K_{1} - z_{0}K_{2} & \ = 2\zeta\omega_{n} - 7, \\ - z_{0}K_{1} - \left( 7z_{0} + 12 \right)K_{2} & \ = \omega_{n}^{2} - 12 \end{matrix}\]

The solutions to these equations are

\[\begin{matrix} & K_{1} = \frac{z_{0}\left( 14\zeta\omega_{n} - 37 - \omega_{n}^{2} \right) + 12\left( 2\zeta\omega_{n} - 7 \right)}{\left( z_{0} + 3 \right)\left( z_{0} + 4 \right)} \\ & K_{2} = \frac{z_{0}\left( 7 - 2\zeta\omega_{n} \right) + 12 - \omega_{n}^{2}}{\left( z_{0} + 3 \right)\left( z_{0} + 4 \right)} \end{matrix}\]

(b) The following Matlab statements can be used to find the solution:

Ao = [-7 1; -12 0];
zo = 2;
Bo = [1; -zo];
pc = roots([1 2 4]);
K = place(Ao, Bo, pc)

These statements yield \(K = \lbrack - 3.80\ \ 0.60\rbrack\), which agrees with the hand calculations. If the zero were close to one of the open-loop poles, say \(z_{0} = - 2.99\), then we find \(K = \lbrack 2052.5\ \ - 688.1\rbrack\).

Two important observations should be made from this example. The first is that the gains grow as the zero \(z_{0}\) approaches either \(-3\) or \(-4\), the values where this system loses controllability. In other words, as controllability is almost lost, the control gains become very large.

The system has to work harder and harder to achieve control as controllability slips away.
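This growth is easy to tabulate from the closed-form gains derived above. Evaluating them for the example's values \(\zeta = 0.5\) and \(\omega_{n} = 2\) at two sample zero locations (my own choices) reproduces the gains quoted in part (b):

```python
import numpy as np

def gains(z0, zeta=0.5, wn=2.0):
    """Closed-form K1, K2 from the example's solution for the thermal system."""
    den = (z0 + 3.0) * (z0 + 4.0)    # vanishes at z0 = -3 or -4 (lost controllability)
    K1 = (z0 * (14 * zeta * wn - 37 - wn**2) + 12 * (2 * zeta * wn - 7)) / den
    K2 = (z0 * (7 - 2 * zeta * wn) + 12 - wn**2) / den
    return K1, K2

K_far = gains(2.0)       # zero far from the poles: modest gains (-3.8, 0.6)
K_near = gains(-2.99)    # zero nearly cancels the pole at -3: huge gains
```

The denominator makes the mechanism explicit: the gains scale like \(1/(z_{0}+3)\) as the zero approaches the pole at \(-3\).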

Apart from controllability, any actuator has a limited dynamic range and saturation limits. Therefore, even though the poles of a controllable system can be placed at arbitrary locations, some locations may be quite undesirable because they would drive the actuators into saturation.

The second important observation illustrated by the example is that both \(K_{1}\) and \(K_{2}\) grow as the desired closed-loop bandwidth given by \(\omega_{n}\) is increased. From this, we can conclude that

To move the poles a long way requires large gains.

These observations lead us to a discussion of how we might go about selecting desired pole locations in general. Before we begin that topic, we will complete the design with full-state feedback by showing how the reference input might be applied to such a system and what the resulting response characteristics are.

266.0.1. Introducing the Reference Input with Full-State Feedback

Thus far, the control has been given by Eq. (7.67), or \(u = - \mathbf{Kx}\). In order to study the transient response of the pole-placement designs to input commands, it is necessary to introduce the reference input into the system. An obvious way to do this is to change the control to \(u = - \mathbf{Kx} + r\). However, the system will now almost surely have a nonzero steady-state error to a step input. The way to correct this problem is to compute the steady-state values of the state and the control input that will result in zero output error and then force them to take these values. If the desired final values of the state and the control input are \(\mathbf{x}_{ss}\) and \(u_{ss}\) respectively, then the new control formula should be

\[u = u_{ss} - \mathbf{K}\left( \mathbf{x} - \mathbf{x}_{ss} \right) \]

so that when \(\mathbf{x} = \mathbf{x}_{ss}\) (no error), \(u = u_{ss}\). To pick the correct final values, we must solve the equations so that the system will have zero steady-state error to any constant input. The system differential equations are the standard ones:

\[\begin{matrix} \overset{˙}{\mathbf{x}} & \ = \mathbf{Ax} + \mathbf{B}u, \\ y & \ = \mathbf{Cx} + Du \end{matrix}\]

In the constant steady-state, Eqs. (7.95a) and (7.95b) reduce to the pair

\[\begin{matrix} \mathbf{0} = \mathbf{A}\mathbf{x}_{ss} + \mathbf{B}u_{ss}, \\ y_{ss} = \mathbf{C}\mathbf{x}_{ss} + Du_{ss}. \end{matrix}\]

Gain calculation for reference input

Control equation with reference input
We want to solve for the values for which \(y_{ss} = r_{ss}\) for any value of \(r_{ss}\). To do this, we make \(\mathbf{x}_{ss} = \mathbf{N}_{\mathbf{x}}r_{ss}\) and \(u_{ss} = N_{u}r_{ss}\). With these substitutions, we can write Eqs. (7.96) as a matrix equation; the common factor of \(r_{ss}\) cancels out to give the equation for the gains:

\[\begin{bmatrix} \mathbf{A} & \mathbf{B} \\ \mathbf{C} & D \end{bmatrix}\begin{bmatrix} \mathbf{N}_{\mathbf{x}} \\ N_{u} \end{bmatrix} = \begin{bmatrix} \mathbf{0} \\ 1 \end{bmatrix}\]

This equation can be solved for \(\mathbf{N}_{\mathbf{x}}\) and \(N_{u}\) to get

\[\begin{bmatrix} \mathbf{N}_{\mathbf{x}} \\ N_{u} \end{bmatrix} = \begin{bmatrix} \mathbf{A} & \mathbf{B} \\ \mathbf{C} & D \end{bmatrix}^{- 1}\begin{bmatrix} \mathbf{0} \\ 1 \end{bmatrix}\]

With these values, we finally have the basis for introducing the reference input so as to get zero steady-state error to a step input:

\[\begin{matrix} u & \ = N_{u}r - \mathbf{K}\left( \mathbf{x} - \mathbf{N}_{\mathbf{x}}r \right) \\ & \ = - \mathbf{Kx} + \left( N_{u} + \mathbf{KN}_{\mathbf{x}} \right)r \end{matrix}\]

The coefficient of \(r\) in parentheses is a constant that can be computed beforehand. We give it the symbol \(\bar{N}\), so

\[u = - \mathbf{Kx} + \bar{N}r,\ \text{~}\text{where}\text{~}\ \bar{N} = N_{u} + \mathbf{K}\mathbf{N}_{\mathbf{x}}.\]

Figure 7.15

Block diagram for introducing the reference input with full-state feedback:

(a) with state and control gains; (b) with a single composite gain
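The gain computation of Eqs. (7.97) and (7.98) is a single linear solve. A numpy sketch (mine; the text's examples use Matlab's backslash operator), checked on the \(\omega_{0} = 1\) oscillator of Example 7.14 with \(\mathbf{C} = \begin{bmatrix} 1 & 0 \end{bmatrix}\):

```python
import numpy as np

def ref_input_gains(A, B, C, D=0.0):
    """Solve Eq. (7.97) for the reference-input gains Nx and Nu."""
    n = A.shape[0]
    M = np.block([[A, B],
                  [np.atleast_2d(C), np.atleast_2d(D)]])
    rhs = np.zeros(n + 1)
    rhs[-1] = 1.0                    # right-hand side [0; 1]
    sol = np.linalg.solve(M, rhs)
    return sol[:n], sol[n]           # Nx (vector), Nu (scalar)

# Oscillator of Example 7.14 with w0 = 1 and y = x1
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
Nx, Nu = ref_input_gains(A, B, C)    # expect Nx = [1, 0], Nu = 1
K = np.array([3.0, 4.0])             # gains from Example 7.14
Nbar = Nu + K @ Nx                   # composite gain of Eq. (7.99)
```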

Introducing the Reference Input

Compute the necessary gains for zero steady-state error to a step command at \(x_{1}\), and plot the resulting unit step response for the oscillator in Example 7.14 with \(\omega_{0} = 1\).

Solution. We substitute the matrices of Eq. (7.71) (with \(\omega_{0} = 1\) and \(\mathbf{C} = \begin{bmatrix} 1 & 0 \end{bmatrix}\) because \(\left. \ y = x_{1} \right)\) into Eq. (7.97) to get

\[\begin{bmatrix} 0 & 1 & 0 \\ - 1 & 0 & 1 \\ 1 & 0 & 0 \end{bmatrix}\begin{bmatrix} \mathbf{N}_{\mathbf{x}} \\ N_{u} \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}\]

The solution, computed with \(x = a \smallsetminus b\) in Matlab (where \(a\) and \(b\) are the left- and right-hand side matrices, respectively), is

\[\begin{matrix} & \mathbf{N}_{\mathbf{x}} = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \\ & N_{u} = 1, \end{matrix}\]

and, for the given control law, \(\mathbf{K} = \begin{bmatrix} 3\omega_{0}^{2} & 4\omega_{0} \end{bmatrix} = \begin{bmatrix} 3 & 4 \end{bmatrix}\),

\[\bar{N} = N_{u} + \mathbf{K}\mathbf{N}_{\mathbf{x}} = 4 \]

The corresponding step response (using the Matlab step command) is plotted in Fig. 7.16.

Note there are two equations for the control: Eqs. (7.98b) and (7.99). While these expressions are equivalent in theory, they differ in practical implementation in that Eq. (7.98b) is usually more robust to parameter errors than Eq. (7.99), particularly when the plant includes a pole at the origin and Type 1 behavior is possible. The difference is most clearly illustrated by the next example.

Figure 7.16

Step response of oscillator to a reference input

267. EXAMPLE 7.18

DC Motor

268. Reference Input to a Type 1 System: DC Motor

Compute the input gains necessary to introduce a reference input with zero steady-state error to a step for the DC motor of Example 5.1, which in state-variable form is described by the matrices:

\[\begin{matrix} \mathbf{A} = \begin{bmatrix} 0 & 1 \\ 0 & - 1 \end{bmatrix}, & \mathbf{B} = \begin{bmatrix} 0 \\ 1 \end{bmatrix}, \\ \mathbf{C} = \begin{bmatrix} 1 & 0 \end{bmatrix}, & D = 0. \end{matrix}\]

Assume the state feedback gain is \(\begin{bmatrix} K_{1} & K_{2} \end{bmatrix}\).

Solution. If we substitute the system matrices of this example into the equation for the input gains, Eq. (7.97), we find that the solution is

\[\begin{matrix} \mathbf{N}_{x} & \ = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \\ N_{u} & \ = 0, \\ \bar{N} & \ = K_{1}. \end{matrix}\]

With these values, the expression for the control using \(\mathbf{N}_{x}\) and \(N_{u}\) [Eq. (7.98b)] reduces to

\[u = - K_{1}\left( x_{1} - r \right) - K_{2}x_{2}, \]

while the one using \(\bar{N}\) [Eq. (7.99)] becomes

\[u = - K_{1}x_{1} - K_{2}x_{2} + K_{1}r. \]

The block diagrams for the systems using each of the control equations are given in Fig. 7.17. When using Eq. (7.99), as shown in Fig. 7.17(b), it is necessary to multiply the input by a gain \(K_{1}( = \bar{N})\) exactly equal to that used in the feedback. If these two gains do not match exactly, there

Figure 7.17

Alternative structures for introducing the reference input: (a) Eq. (7.98b); (b) Eq. (7.99)

(a)

(b)

will be a steady-state error. On the other hand, if we use Eq. (7.98b), as shown in Fig. 7.17(a), there is only one gain to be used on the difference between the reference input and the first state, and zero steady-state error will result even if this gain is slightly in error. The system of Fig. 7.17(a) is more robust than the system of Fig. 7.17(b).
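The robustness difference can be checked numerically by computing the closed-loop DC gain from \(r\) to \(y\) for each structure with a perturbed gain. The feedback gains and the 10% error below are my own test values, not from the text:

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [0.0, -1.0]])          # DC motor of Example 7.18
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
K1, K2 = 10.0, 4.0                   # assumed stabilizing gains (my choice)

def dc_gain(K1_fb, K2_fb, feedforward):
    """Steady-state y/r for u = -[K1_fb K2_fb]x + feedforward*r."""
    Kfb = np.array([[K1_fb, K2_fb]])
    # y_ss = C (B K - A)^{-1} B * feedforward * r
    return (C @ np.linalg.solve(B @ Kfb - A, B)).item() * feedforward

# Structure (b): feedforward Nbar = K1 is a separate gain; a mismatch
# shows up directly as a steady-state tracking error.
exact = dc_gain(K1, K2, K1)          # 1.0: zero steady-state error
wrong = dc_gain(K1, K2, 0.9 * K1)    # 0.9: a 10% error in Nbar -> 10% offset

# Structure (a): u = -K1*(x1 - r) - K2*x2; the same (possibly erroneous)
# gain multiplies both x1 and r, so the DC gain stays exactly 1.
robust = dc_gain(0.9 * K1, K2, 0.9 * K1)
```

In structure (a) the perturbation cancels because one physical gain serves both roles, which is exactly the robustness argument made above.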

With the reference input in place, the closed-loop system has input \(r\) and output \(y\). From the state description, we know the system poles are at the eigenvalues of the closed-loop system matrix, \(\mathbf{A} - \mathbf{BK}\). In order to compute the closed-loop transient response, it is necessary to know where the closed-loop zeros of the transfer function from \(r\) to \(y\) are. They are to be found by applying Eq. (7.64) to the closed-loop description, which we assume has no direct path from input \(u\) to output \(y\), so \(D = 0\). The zeros are values of \(s\) such that

\[det\begin{bmatrix} s\mathbf{I} - (\mathbf{A} - \mathbf{BK}) & - \bar{N}\mathbf{B} \\ \mathbf{C} & 0 \end{bmatrix} = 0\]

We can use two elementary facts about determinants to simplify Eq. (7.102). In the first place, if we divide the last column by \(\bar{N}\), which is a scalar, then the point where the determinant is zero remains unchanged. The determinant is also not changed if we multiply the last column by \(\mathbf{K}\) and add it to the first (block) column, with the result that the BK term is cancelled out. Thus, the matrix equation for the zeros reduces to

\[det\begin{bmatrix} s\mathbf{I} - \mathbf{A} & - \mathbf{B} \\ \mathbf{C} & 0 \end{bmatrix} = 0\]

Equation (7.103) is the same as Eq. (7.64) for the zeros of the plant before the feedback was applied. The important conclusion is that

When full-state feedback is used as in Eq. (7.98b) or (7.99), the zeros remain unchanged by the feedback.

268.1. Selection of Pole Locations for Good Design

The first step in the pole-placement design approach is to decide on the closed-loop pole locations. When selecting pole locations, it is always useful to keep in mind that the required control effort is related to how far the open-loop poles are moved by the feedback. Furthermore, when a zero is near a pole, the system may be nearly uncontrollable and, as we saw in Section 7.5, moving such poles requires large control gains and thus a large control effort. Therefore, a pole-placement philosophy that aims to fix only the undesirable aspects of the open-loop response and avoids either large increases in bandwidth or efforts to move poles that are near zeros will typically allow smaller gains, and thus smaller control actuators, than a philosophy that arbitrarily picks all the poles without regard to the original open-loop pole and zero locations.

Two methods of pole selection

In this section, we discuss two techniques to aid in the pole-selection process. The first approach — dominant second-order poles — deals with pole selection without explicit regard for their effect on control effort; however, the designer is able to temper the choices to take control effort into account. The second method (called optimal control, or symmetric root locus) does specifically address the issue of achieving a balance between good system response and control effort.

268.1.1. Dominant Second-Order Poles

The step response corresponding to the second-order transfer function with complex poles at radius \(\omega_{n}\) and damping ratio \(\zeta\) was discussed in Chapter 3. The rise time, overshoot, and settling time can be deduced directly from the pole locations. We can choose the closed-loop poles for a higher-order system as a desired pair of dominant second-order poles, and select the rest of the poles to have real parts corresponding to sufficiently damped modes, so the system will mimic a second-order response with reasonable control effort. We also must make sure that the zeros are far enough into the LHP to avoid having any appreciable effect on the second-order behavior. A system with several lightly damped high-frequency vibration modes plus two rigid-body low-frequency modes lends itself to this philosophy. Here we can pick the low-frequency modes to achieve desired values of \(\omega_{n}\) and \(\zeta\) and select the rest of the poles to increase the damping of the high-frequency modes, while holding their frequency constant in order to minimize control effort. To illustrate this design method, we obviously need a system of higher than second order; we will use the drone system described in Example 5.12.

Pole Placement as a Dominant Second-Order System

Design the feedback control for the drone system (see Example 5.12)

\[G(s) = \frac{1}{s^{2}(s + 2)} \]

by the dominant second-order poles method to have a rise time of \(1sec\) or less and an overshoot of less than \(5\%\).

Solution. We use the Matlab command ssdata(G) to find a state-space realization for \(G(s)\):

\[\begin{matrix} \mathbf{A} = \begin{bmatrix} - 2 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}, & \mathbf{B} = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} \\ \mathbf{C} = \begin{bmatrix} 0 & 0 & 1 \end{bmatrix}, & D = 0. \end{matrix}\]

From the plots of the second-order transients in Fig. 3.19, a damping ratio \(\zeta = 0.7\) will meet the overshoot requirement. We choose a natural frequency of \(2\sqrt{2} \approx 2.8\ rad/sec\), which places the two dominant poles at \(- 2 \pm j2\). There are three poles in all, so the other pole needs to be placed far to the left of the dominant pair; for our purposes, "far" means the transients due to the fast pole should be over well before the transients due to the dominant poles, and we assume a factor of higher than 4 in the respective undamped natural frequencies to be adequate. From these considerations, the desired poles are given by

\[pc = \lbrack - 2 + 2*j; - 2 - 2*j; - 12\rbrack \]

With these desired poles, we can use the function acker to find the control gains

\[\mathbf{K} = \begin{bmatrix} 14 & 56 & 96 \end{bmatrix}\]

These are found with the following Matlab statements:

A = [-2 0 0; 1 0 0; 0 1 0];
B = [1; 0; 0];
pc = [-2+2*j; -2-2*j; -12];
K = acker(A, B, pc)
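The same gains can be checked outside the book's Matlab workflow. The following Python/NumPy sketch (an illustration, not the text's code) implements Ackermann's formula directly for this third-order drone model; the result should match \(\mathbf{K} = \begin{bmatrix} 14 & 56 & 96 \end{bmatrix}\).

```python
import numpy as np

def acker(A, B, poles):
    """Ackermann's formula: gain K placing eig(A - B K) at the given poles."""
    n = A.shape[0]
    # Controllability matrix [B, AB, ..., A^(n-1) B]
    ctrb = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
    # Desired characteristic polynomial alpha_c(s), evaluated at A
    alpha = np.real(np.poly(poles))
    alpha_A = sum(c * np.linalg.matrix_power(A, n - k) for k, c in enumerate(alpha))
    # K = [0 ... 0 1] * ctrb^{-1} * alpha_c(A)
    e_last = np.zeros((1, n)); e_last[0, -1] = 1.0
    return e_last @ np.linalg.solve(ctrb, alpha_A)

A = np.array([[-2., 0, 0], [1, 0, 0], [0, 1, 0]])
B = np.array([[1.], [0], [0]])
pc = [-2 + 2j, -2 - 2j, -12]
K = acker(A, B, pc)
print(K.round(4))  # → [[14. 56. 96.]]
```

The desired characteristic polynomial here is \((s^{2} + 4s + 8)(s + 12) = s^{3} + 16s^{2} + 56s + 96\), so with the plant polynomial \(s^{3} + 2s^{2}\) the gains \(14\), \(56\), \(96\) follow.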

The step response and the corresponding plots for this and another design (to be discussed in Section 7.6.2) are given in Fig. 7.18 and Fig. 7.19. Notice the rise time is approximately \(0.8sec\), and the overshoot is about \(4\%\), as specified.

Because the design process is iterative, the poles we selected should be seen as only a first step, to be followed by further modifications to meet the specifications as accurately as necessary. For this example, we happened to select adequate pole locations on the first try.

Figure 7.18

Step responses of drone designs

Figure 7.19

Control efforts for drone designs

7.6.2 Symmetric Root Locus (SRL)

One of the most effective and widely used techniques of linear control system design is the optimal linear quadratic regulator (LQR). The simplified version of the LQR problem is to find the control such that the performance index

\[\mathcal{J} = \int_{0}^{\infty}\mspace{2mu}\left\lbrack \rho z^{2}(t) + u^{2}(t) \right\rbrack dt \]

is minimized for the system

\[\begin{matrix} \overset{˙}{\mathbf{x}} & \ = \mathbf{Ax} + \mathbf{B}u \\ z & \ = \mathbf{C}_{1}\mathbf{x} \end{matrix}\]

Symmetric root locus

where \(\rho\) in Eq. (7.106) is a weighting factor of the designer's choice. A remarkable fact is that the control law that minimizes \(\mathcal{J}\) is given by the linear state feedback

\[u = - \mathbf{Kx}. \]

Here the optimal value of \(\mathbf{K}\) is that which places the closed-loop poles at the stable roots (those in the LHP) of the symmetric root-locus (SRL) equation (Kailath, 1980)

\[1 + \rho G_{0}( - s)G_{0}(s) = 0 \]

where \(G_{0}\) is the open-loop transfer function from \(u\) to \(z\) :

\[G_{0}(s) = \frac{Z(s)}{U(s)} = \mathbf{C}_{1}(s\mathbf{I} - \mathbf{A})^{- 1}\mathbf{B} = \frac{N(s)}{D(s)} \]

Note that this is a root-locus problem as discussed in Chapter 5 with respect to the parameter \(\rho\), which weighs the relative cost of the tracking error \(z^{2}\) with respect to the control effort \(u^{2}\) in the performance index equation (7.106). Note also that \(s\) and \(- s\) affect Eq. (7.109) in an identical manner; therefore, for any root \(s_{0}\) of Eq. (7.109), there will also be a root at \(- s_{0}\). We call the resulting root locus an SRL, since the locus in the LHP will have a mirror image in the right half-plane (RHP); that is, they are symmetric with respect to the imaginary axis. We may thus choose the optimal closed-loop poles by first selecting the matrix \(\mathbf{C}_{1}\), which defines the tracking error and which the designer wishes to keep small, then choosing \(\rho\), which balances the importance of this tracking error against the control effort. Notice the output we select as tracking error does not need to be the plant sensor output. That is why we call the output in Eq. (7.107) \(z\) rather than \(y\).

Selecting a set of stable poles from the solution of Eq. (7.109) results in desired closed-loop poles, which we can then use in a pole-placement calculation such as Ackermann's formula [Eq. (7.88)] to obtain K. As with all root loci for real transfer functions \(G_{0}\), the locus is also symmetric with respect to the real axis; thus there is symmetry with respect to both the real and imaginary axes. We can write the SRL equation in the standard root-locus form

\[1 + \rho\frac{N( - s)N(s)}{D( - s)D(s)} = 0 \]

obtain the locus poles and zeros by reflecting the open-loop poles and zeros of the transfer function from \(U\) to \(Z\) across the imaginary axis (which doubles the number of poles and zeros), and then sketch the locus. Note the locus could be either a \(0^{\circ}\) or a \(180^{\circ}\) locus, depending on the sign of \(G_{0}( - s)G_{0}(s)\) in Eq. (7.109). A quick way to determine which type of locus to use (\(0^{\circ}\) or \(180^{\circ}\)) is to pick the one that has no part on the imaginary axis. The real-axis rule of root-locus plotting will reveal this right away. For the controllability assumptions we have made here, plus the assumption that all the system modes are present in the chosen output \(z\), the optimal closed-loop system is guaranteed to be stable; thus no part of the locus can be on the imaginary axis.
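This construction is easy to check numerically. As a Python/NumPy illustration (the chapter itself uses Matlab), the SRL equation for \(G_{0}(s) = 1/s^{2}\) is \(1 + \rho/s^{4} = 0\), i.e., \(s^{4} + \rho = 0\); rooting this polynomial exhibits the mirror symmetry, and for \(\rho = 4\) the stable pair sits at \(- 1 \pm j\), essentially the locations used in the satellite example that follows.

```python
import numpy as np

rho = 4.0
# SRL for G0(s) = 1/s^2:  1 + rho*G0(-s)G0(s) = 1 + rho/s^4 = 0  ->  s^4 + rho = 0
roots = np.roots([1, 0, 0, 0, rho])

# Symmetry: for every root s0 there is also a root at -s0
for s0 in roots:
    assert np.min(np.abs(roots + s0)) < 1e-9

# The stable (LHP) pair is what we keep for pole placement
stable = np.sort_complex(roots[roots.real < 0])
print(stable.round(3))  # → [-1.-1.j -1.+1.j]
```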

Figure 7.20

SRL for a first-order system

EXAMPLE 7.21

SRL Design for Satellite Attitude Control

Draw the SRL for the satellite system with \(z = y\).

Solution. The equations of motion are

\[\begin{matrix} & \overset{˙}{\mathbf{x}} = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}\mathbf{x} + \begin{bmatrix} 0 \\ 1 \end{bmatrix}u, \\ & y = \begin{bmatrix} 1 & 0 \end{bmatrix}\mathbf{x}. \end{matrix}\]

We then calculate, from Eqs. (7.115) and (7.116),

\[G_{0}(s) = \frac{1}{s^{2}} \]

The symmetric \(180^{\circ}\) loci are shown in Fig. 7.21. The Matlab statements to generate the SRL are

s = tf('s');
sysGG = 1/s^4;
rlocus(sysGG);

Figure 7.21

SRL for the satellite

Figure 7.22

Design trade-off curve for satellite plant

It is interesting to note the (stable) closed-loop poles have damping of \(\zeta = 0.707\). For a given value of \(\rho\), we would choose the two stable roots on the SRL, for example, \(s = - 1 \pm j1\) for \(\rho = 4.07\), and use them for pole placement and control-law design.

Choosing different values of \(\rho\) can provide us with pole locations that achieve varying balances between a fast response (small values of \(\int z^{2}dt\) ) and a low control effort (small values of \(\int u^{2}dt\) ). Figure 7.22 shows the design trade-off curve for the satellite (double-integrator) plant [see Eq. (7.15)] for various values of \(\rho\) ranging from 0.01 to 100. The curve has two asymptotes (dashed lines) corresponding to low (large \(\rho\) ) and high (small \(\rho\) ) penalty on the control usage. In practice, usually a value of \(\rho\) corresponding to a point close to the knee

Figure 7.23

Nyquist plot for LQR design

of the trade-off curve is chosen. This is because it provides a reasonable compromise between the use of control and the speed of response. For the satellite plant, the value of \(\rho = 1\) corresponds to the knee of the curve. In this case, the closed-loop poles have a damping ratio of \(\zeta = 0.707\) ! Figure 7.23 shows the associated Nyquist plot, which has a phase margin \(PM = 65^{\circ}\) and infinite gain margin. These excellent stability properties are a general feature of LQR designs. However, recall that this method assumes all the state variables are available (measured) for feedback, which is not the case in general. The state variables that are not measured may be estimated as shown in the next section, but the excellent LQR stability properties may not be attainable.

It is also possible to locate optimal pole locations for the design of an open-loop unstable system using the SRL and LQR method.

EXAMPLE 7.22

SRL Design for an Inverted Pendulum

Draw the SRL for the linearized equations of the simple inverted pendulum with \(\omega_{o} = 1\). Take the output, \(z\), to be the sum of twice the position plus the velocity (so as to weight or penalize both position and velocity).

Solution. The equations of motion are

\[\overset{˙}{\mathbf{x}} = \begin{bmatrix} 0 & 1 \\ \omega_{0}^{2} & 0 \end{bmatrix}\mathbf{x} + \begin{bmatrix} 0 \\ - 1 \end{bmatrix}u\]

For the specified output of \(2 \times\) position + velocity, we compute the output by

\[z = \begin{bmatrix} 2 & 1 \end{bmatrix}\mathbf{x}\]

We then calculate, from Eqs. (7.118) and (7.119),

\[G_{0}(s) = - \frac{s + 2}{s^{2} - \omega_{0}^{2}} \]

The symmetric \(0^{\circ}\) loci are shown in Fig. 7.24. The Matlab statements to generate the SRL are (for \(\omega_{o} = 1\) ),

Figure 7.24

SRL for the inverted pendulum

s = tf('s');
G = -(s+2)/(s^2 - 1);
G1 = -(-s+2)/(s^2 - 1);
sysGG = G*G1;
rlocus(sysGG);

For \(\rho = 1\), we find that the closed-loop poles are at \(- 1.36 \pm j0.606\), corresponding to \(\mathbf{K} = \begin{bmatrix} - 2.23 & - 2.73 \end{bmatrix}\). If we substitute the system matrices of this example into the equation for the input gains, Eq. (7.97), we find that the solution is

\[\begin{matrix} \mathbf{N}_{x} & \ = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \\ N_{u} & \ = 1, \\ \bar{N} & \ = - 1.23. \end{matrix}\]

With these values, the expression for the control using \(\mathbf{N}_{x}\) and \(N_{u}\) [Eq. (7.98b)] reduces to

\[u = - \mathbf{Kx} + \bar{N}r. \]

The corresponding step response for position is shown in Fig. 7.25.
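The pole locations quoted in this example can be verified by rooting the SRL polynomial directly. With \(N(s) = -(s + 2)\) and \(D(s) = s^{2} - 1\), the numerator of Eq. (7.110) for \(\rho = 1\) is \(D(-s)D(s) + \rho N(-s)N(s) = s^{4} - 3s^{2} + 5\). A Python/NumPy check (not the book's Matlab):

```python
import numpy as np

# D(-s)D(s) = (s^2 - 1)^2 = s^4 - 2s^2 + 1,  N(-s)N(s) = -(s^2 - 4);  rho = 1
rho = 1.0
p = np.polyadd(np.array([1, 0, -2, 0, 1], float),
               rho * np.array([-1, 0, 4], float))   # s^4 - 3s^2 + 5
roots = np.roots(p)
stable = roots[roots.real < 0]                      # keep the LHP pair
print(np.sort_complex(stable).round(3))  # → [-1.367-0.607j -1.367+0.607j]
```

These are the closed-loop pole locations \(- 1.36 \pm j0.606\) quoted above, to three figures.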

As a final example in this section, we consider again the drone system and introduce LQR design using the computer directly to solve for the optimal control law. From Eqs. (7.106) and (7.108), we know that the information needed to find the optimal control is given by the system matrices \(\mathbf{A}\) and \(\mathbf{B}\) and the output matrix \(\mathbf{C}_{1}\). Most computeraided software packages, including Matlab, use a more general form of Eq. (7.106):

\[\mathcal{J} = \int_{0}^{\infty}\mspace{2mu}\left( \mathbf{x}^{T}\mathbf{Qx} + \mathbf{u}^{T}\mathbf{Ru} \right)dt \]

Figure 7.25

Step response for the inverted pendulum

Matlab lqr

Bryson's rule

EXAMPLE 7.23

Equation (7.121) reduces to the simpler form of Eq. (7.106) if we take \(\mathbf{Q} = \rho\mathbf{C}_{1}^{T}\mathbf{C}_{1}\) and \(\mathbf{R} = 1\). The direct solution for the optimal control gain is the Matlab statement

\[K = lqr(A,B,Q,R)\text{.}\text{~} \]

One reasonable method to start the LQR design iteration is suggested by Bryson's rule (Bryson and Ho, 1969). In practice, an appropriate choice to obtain acceptable values of \(\mathbf{x}\) and \(\mathbf{u}\) is to initially choose diagonal matrices \(\mathbf{Q}\) and \(\mathbf{R}\) such that

\[\begin{matrix} & Q_{ii} = 1/\text{~}\text{maximum acceptable value of}\text{~}\left\lbrack x_{i}^{2} \right\rbrack \\ & R_{ii} = 1/\text{~}\text{maximum acceptable value of}\text{~}\left\lbrack u_{i}^{2} \right\rbrack \end{matrix}\]

The weighting matrices are then modified during subsequent iterations to achieve an acceptable trade-off between performance and control effort.
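Bryson's rule is easy to mechanize. The Python sketch below (the limits used are made-up illustrative numbers, not values from the text) builds the initial diagonal weights from the maximum acceptable excursions:

```python
import numpy as np

def bryson_weights(x_max, u_max):
    """Initial LQR weights per Bryson's rule: Q_ii = 1/x_max_i^2, R_ii = 1/u_max_i^2."""
    Q = np.diag(1.0 / np.square(x_max))
    R = np.diag(1.0 / np.square(u_max))
    return Q, R

# Hypothetical limits: 1.0 on x1, 0.5 on x2, 2.0 on the single control input
Q, R = bryson_weights(np.array([1.0, 0.5]), np.array([2.0]))
print(np.diag(Q), np.diag(R))  # → [1. 4.] [0.25]
```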

LQR Design for a Drone

(a) Find the optimal control for the drone of Example 7.19, using \(\theta\) as the output for the performance index. Let \(\rho = 100\). Compare the results with those of the dominant second-order design obtained earlier.

(b) Compare the LQR designs for \(\rho = 1,10,100\).

Solution

(a) All we need to do here is to substitute the matrices into Eq. (7.122), form the feedback system, and plot the response. The performance index matrix is the scalar \(\mathbf{R} = 1\); the most difficult part of the problem is finding the state-cost matrix \(\mathbf{Q}\). With the output-cost variable \(z = \theta\), the output matrix from Example 7.19 is

\[\mathbf{C} = \begin{bmatrix} 0 & 0 & 1 \end{bmatrix},\]

and, apart from the weighting factor \(\rho\), the required matrix is

\[\begin{matrix} \mathbf{Q} & \ = \mathbf{C}^{T}\mathbf{C} \\ & \ = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix}. \end{matrix}\]

The gain is given by Matlab, using the following statements:

A = [-2 0 0; 1 0 0; 0 1 0];
B = [1; 0; 0];
C = [0 0 1];
R = 1;
rho = 100;
Q = rho*C'*C;
K = lqr(A,B,Q,R)

The Matlab computed gain is

\[\mathbf{K} = \begin{bmatrix} 2.8728 & 9.8720 & 10.0000 \end{bmatrix}\]

The results of the step responses and the corresponding control efforts are plotted in Fig. 7.18 and Fig. 7.19 (using step) with the earlier responses for comparison. Obviously, there is a vast range of choice for the elements of \(\mathbf{Q}\) and \(\mathbf{R}\), so substantial experience is needed in order to use the LQR method effectively.
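The same gain can be reproduced without the Matlab lqr command by solving the continuous algebraic Riccati equation (CARE), since \(\mathbf{K} = \mathbf{R}^{-1}\mathbf{B}^{T}\mathbf{P}\), where \(\mathbf{P}\) solves the CARE. A Python/SciPy sketch:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[-2., 0, 0], [1, 0, 0], [0, 1, 0]])
B = np.array([[1.], [0], [0]])
C = np.array([[0., 0, 1]])
rho = 100.0
Q = rho * C.T @ C          # state cost rho * C1' C1
R = np.array([[1.]])       # control cost

P = solve_continuous_are(A, B, Q, R)   # solves A'P + PA - P B R^{-1} B'P + Q = 0
K = np.linalg.solve(R, B.T @ P)        # K = R^{-1} B' P
print(K.round(4))
```

This prints the gain \(\begin{bmatrix} 2.8728 & 9.8720 & 10.0000 \end{bmatrix}\) quoted above, to four decimals.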

(b) The LQR designs may be repeated as in part (a) with the same \(\mathbf{Q}\) and \(\mathbf{R}\), but with \(\rho = 1,10,100\). Figure 7.26 shows a comparison of \(\theta\) step and the corresponding control efforts for the three designs. As seen from the results, the smaller values of \(\rho\) correspond to higher cost on the control and slower response, whereas the larger values of \(\rho\) correspond to lower cost on the control and relatively fast response.

Limiting Behavior of LQR Regulator Poles

It is interesting to consider the limiting behavior of the optimal closed-loop poles as a function of the root-locus parameter (that is, \(\rho\)), although, in practice, neither limiting case would be used.

"Expensive control" case \((\rho \rightarrow 0)\) : Equation (7.106) primarily penalizes the use of control energy. If the control is expensive, the optimal control does not move any of the open-loop poles except for those that are in the RHP. The poles in the RHP are simply moved to their mirror images in the LHP. The optimal control does this to stabilize the system using minimum control effort, and makes no attempt to move any of the poles of the system that are already in the LHP. The closedloop pole locations are simply the starting points on the SRL in the

Figure 7.26

(a) Step responses of drone for LQR designs (b) Control efforts for drone designs

(a)

(b)

LHP. The optimal control does not speed up the response of the system in this case. For the satellite plant, the vertical dashed line in Fig. 7.22 corresponds to the "expensive control" case and illustrates that the very low control usage results in a very large error in \(z\).

"Cheap control" case \((\rho \rightarrow \infty)\) : In this case, control energy is no object and arbitrary control effort may be used by the optimal control law. The control law then moves some of the closed-loop pole locations right on top of the zeros in the LHP. The rest are moved to infinity along the SRL asymptotes. If the system is nonminimum phase, some of the closed-loop poles are moved to mirror images of these zeros in

the LHP, as shown in Example 7.22. The rest of the poles go to infinity and do so along a Butterworth filter pole pattern, as shown in Example 7.21. The optimal control law provides the fastest possible response time consistent with the LQR cost function. The feedback gain matrix \(\mathbf{K}\) becomes unbounded in this case. For the double-integrator plant, the horizontal dashed line in Fig. 7.22 corresponds to the "cheap control" case.

Robustness Properties of LQR Regulators

It has been proven (Anderson and Moore, 1990) that the Nyquist plot for LQR design avoids a circle of unity radius centered at the -1 point as shown in Fig. 7.23. This leads to extraordinary phase and gain margin properties. It can be shown (see Problem 7.33) that the return difference must satisfy

\[\left| 1 + \mathbf{K}(j\omega\mathbf{I} - \mathbf{A})^{- 1}\mathbf{B} \right| \geq 1. \]

Let us rewrite the loop gain as the sum of its real and imaginary parts:

\[L(j\omega) = \mathbf{K}(j\omega\mathbf{I} - \mathbf{A})^{- 1}\mathbf{B} = Re(L(j\omega)) + jIm(L(j\omega)) \]

Equation (7.124) implies that

\[\left\lbrack 1 + Re(L(j\omega)) \right\rbrack^{2} + \left\lbrack Im(L(j\omega)) \right\rbrack^{2} \geq 1 \]

which means the Nyquist plot must indeed avoid a circle centered at -1 with unit radius. This implies that \(\frac{1}{2} < GM < \infty\), which means that the "upward" gain margin is \(GM = \infty\) and the "downward" gain margin is \(GM = \frac{1}{2}\) (see also Problem 6.24 of Chapter 6). Hence, the LQR gain matrix, K, can be multiplied by a large scalar or reduced by half with guaranteed closed-loop system stability. The phase margin, PM, is at least \(\pm 60^{\circ}\). These margins are remarkable, and it is not realistic to assume they can be achieved in practice, because of the presence of modeling errors and lack of sensors!
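The return-difference bound of Eq. (7.124) can be spot-checked numerically for the drone LQR design of Example 7.23. The Python/SciPy sketch below (an illustration, not the text's code) recomputes \(\mathbf{K}\) from the Riccati equation and then samples \(\left| 1 + \mathbf{K}(j\omega\mathbf{I} - \mathbf{A})^{- 1}\mathbf{B} \right|\) over a frequency grid; the LQR property guarantees the minimum stays at or above 1.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Drone LQR design of Example 7.23 (rho = 100)
A = np.array([[-2., 0, 0], [1, 0, 0], [0, 1, 0]])
B = np.array([[1.], [0], [0]])
Q = 100.0 * np.outer([0, 0, 1.], [0, 0, 1.])
R = np.array([[1.]])
K = np.linalg.solve(R, B.T @ solve_continuous_are(A, B, Q, R))

# Sample the return difference |1 + L(jw)| over several decades of frequency
I = np.eye(3)
omegas = np.logspace(-3, 3, 2000)
rd = [abs(1 + (K @ np.linalg.solve(1j * w * I - A, B))[0, 0]) for w in omegas]
print(min(rd) >= 1 - 1e-6)  # the Kalman inequality |1 + L(jw)| >= 1
```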

7.6.3 Comments on the Methods

The two methods of pole selection described in Sections 7.6.1 and 7.6.2 are alternatives the designer can use for an initial design by pole placement. Note the first method (dominant second-order) suggests selecting closed-loop poles without regard to the effect on the control effort required to achieve that response. In some cases, therefore, the resulting control effort may be unrealistically high. The second method (SRL), on the other hand, selects poles that result in some balance between system errors and control effort. The designer can easily examine the relationship between shifts in that balance (by changing \(\rho\)) and system root locations, time response, and feedback gains. Whatever initial pole-selection method we use, some modification is almost always necessary to achieve the desired balance of bandwidth, overshoot, sensitivity, control effort, and other practical design requirements. Further insight
into pole selection will be gained from the examples that illustrate compensation in Section 7.8, and from the case studies in Chapter 10.

7.7 Estimator Design

The control law designed in Section 7.5 assumed all the state-variables are available for feedback. In most cases, not all the state-variables are measured. The cost of the required sensors may be prohibitive, or it may be physically impossible to measure all of the state-variables, as in, for example, a nuclear power plant. In this section, we demonstrate how to reconstruct all of the state-variables of a system from a few measurements. If the estimate of the state is denoted by \(\widehat{\mathbf{x}}\), it would be convenient if we could replace the true state in the control law given by Eq. (7.99) with the estimates, so the control becomes \(u = - \mathbf{K}\widehat{\mathbf{x}} + \bar{N}r\). This is indeed possible, as we shall see in Section 7.8, so construction of a state estimate is a key part of state-space control design.

7.7.1 Full-Order Estimators

One method of estimating the state is to construct a full-order model of the plant dynamics,

\[\overset{˙}{\widehat{\mathbf{x}}} = \mathbf{A}\widehat{\mathbf{x}} + \mathbf{B}u \]

where \(\widehat{\mathbf{x}}\) is the estimate of the actual state \(\mathbf{x}\). We know \(\mathbf{A},\mathbf{B}\), and \(u(t)\). Hence this estimator will be satisfactory if we can obtain the correct initial condition \(\mathbf{x}(0)\) and set \(\widehat{\mathbf{x}}(0)\) equal to it. Figure 7.27 depicts this open-loop estimator. However, it is precisely the lack of information about \(\mathbf{x}(0)\) that requires the construction of an estimator. Otherwise, the estimated state would track the true state exactly. Thus, if we made a poor estimate for the initial condition, the estimated state would have a continually growing error or an error that goes to zero too slowly to be of use. Furthermore, small errors in our knowledge of the system \((\mathbf{A},\mathbf{B})\) would also cause the estimate to diverge from the true state.

To study the dynamics of this estimator, we define the error in the estimate to be

\[\widetilde{\mathbf{x}} \triangleq \mathbf{x} - \widehat{\mathbf{x}} \]

Then, the dynamics of this error system are given by

\[\overset{˙}{\widetilde{\mathbf{x}}} = \overset{˙}{\mathbf{x}} - \overset{˙}{\widehat{\mathbf{x}}} = \mathbf{A}\widetilde{\mathbf{x}},\ \widetilde{\mathbf{x}}(0) = \mathbf{x}(0) - \widehat{\mathbf{x}}(0) \]

We have no ability to influence the rate at which the state estimate converges to the true state.

Figure 7.27

Block diagram for the open-loop estimator

Figure 7.28

Block diagram for the closed-loop estimator

Feed back the output error to correct the state estimate equation.

Estimate-error characteristic equation

We now invoke the golden rule: When in trouble, use feedback. Consider feeding back the difference between the measured and estimated outputs and correcting the model continuously with this error signal. The equation for this scheme, as shown in Fig. 7.28, is

\[\overset{˙}{\widehat{\mathbf{x}}} = \mathbf{A}\widehat{\mathbf{x}} + \mathbf{B}u + \mathbf{L}(y - \mathbf{C}\widehat{\mathbf{x}}). \]

Here, \(\mathbf{L}\) is a proportional gain defined as

\[\mathbf{L} = \left\lbrack l_{1},l_{2},\ldots,l_{n} \right\rbrack^{T}, \]

and is chosen to achieve satisfactory error characteristics. The dynamics of the error can be obtained by subtracting the estimate [see Eq. (7.130)] from the state [see Eq. (7.41)], to get the error equation

\[\overset{˙}{\widetilde{\mathbf{x}}} = (\mathbf{A} - \mathbf{LC})\widetilde{\mathbf{x}} \]

The characteristic equation of the error is now given by

\[det\lbrack s\mathbf{I} - (\mathbf{A} - \mathbf{LC})\rbrack = 0 \]

If we can choose \(\mathbf{L}\) so \(\mathbf{A} - \mathbf{LC}\) has stable and reasonably fast eigenvalues, \(\widetilde{\mathbf{x}}\) will decay to zero and remain there, independent of the known forcing function \(u(t)\) and its effect on the state \(\mathbf{x}(t)\), and irrespective of the initial condition \(\widetilde{\mathbf{x}}(0)\). This means \(\widehat{\mathbf{x}}(t)\) will converge to \(\mathbf{x}(t)\), regardless of the value of \(\widehat{\mathbf{x}}(0)\); furthermore, we can choose the dynamics of the error to be stable as well as much faster than the open-loop dynamics determined by \(\mathbf{A}\).
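This decay can be seen concretely by propagating the error equation \(\overset{˙}{\widetilde{\mathbf{x}}} = (\mathbf{A} - \mathbf{LC})\widetilde{\mathbf{x}}\) in closed form with the matrix exponential. The Python/SciPy sketch below uses the pendulum model and gain from the example that follows (\(\omega_{0} = 1\), \(\mathbf{L} = \begin{bmatrix} 20 & 99 \end{bmatrix}^{T}\)), for which both error poles sit at \(- 10\):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0., 1], [-1, 0]])     # pendulum with omega_0 = 1
C = np.array([[1., 0]])
L = np.array([[20.], [99.]])         # places both error poles at s = -10

M = A - L @ C                        # error-dynamics matrix
print(np.linalg.eigvals(M))          # both eigenvalues at (or numerically near) -10

x_err0 = np.array([1.0, 0.0])        # initial estimation error
x_err1 = expm(M * 1.0) @ x_err0      # error one second later
print(np.linalg.norm(x_err1) < 1e-2) # the error is essentially gone
```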

Note in obtaining Eq. (7.132), we have assumed that \(\mathbf{A},\mathbf{B}\), and \(\mathbf{C}\) are identical in the physical plant and in the computer implementation of the estimator. If we do not have an accurate model of the plant \((\mathbf{A},\mathbf{B},\mathbf{C})\), the dynamics of the error are no longer governed by Eq. (7.132). However, we can typically choose \(\mathbf{L}\) so the error system is still at least stable and the error remains acceptably small, even with (small) modeling errors and disturbing inputs. It is important to emphasize that the nature of the plant and the estimator are quite different. The plant is a physical system such as a chemical process or servomechanism, whereas the estimator is usually a digital processor computing the estimated state according to Eq. (7.130).

The selection of \(\mathbf{L}\) can be approached in exactly the same fashion as \(\mathbf{K}\) is selected in the control-law design. If we specify the desired location of the estimator error poles as

\[s_{i} = \beta_{1},\beta_{2},\ldots,\beta_{n} \]

then the desired estimator characteristic equation is

\[\alpha_{e}(s) \triangleq \left( s - \beta_{1} \right)\left( s - \beta_{2} \right)\cdots\left( s - \beta_{n} \right) \]

We can then solve for \(\mathbf{L}\) by comparing coefficients in Eqs. (7.133) and (7.134).

Design an estimator for the simple pendulum. Compute the estimator gain matrix that will place both the estimator error poles at \(- 10\omega_{0}\) (five times as fast as the controller poles selected in Example 7.14). Verify the result using Matlab for \(\omega_{0} = 1\). Evaluate the performance of the estimator.

Solution. The equations of motion are

\[\begin{matrix} & \overset{˙}{\mathbf{x}} = \begin{bmatrix} 0 & 1 \\ - \omega_{0}^{2} & 0 \end{bmatrix}\mathbf{x} + \begin{bmatrix} 0 \\ 1 \end{bmatrix}u, \\ & y = \begin{bmatrix} 1 & 0 \end{bmatrix}\mathbf{x}. \end{matrix}\]

We are asked to place the two estimator error poles at \(- 10\omega_{0}\). The corresponding characteristic equation is

\[\alpha_{e}(s) = \left( s + 10\omega_{0} \right)^{2} = s^{2} + 20\omega_{0}s + 100\omega_{0}^{2} \]

From Eq. (7.133), we get

\[det\lbrack s\mathbf{I} - (\mathbf{A} - \mathbf{LC})\rbrack = s^{2} + l_{1}s + l_{2} + \omega_{0}^{2} \]

Comparing the coefficients in Eqs. (7.136) and (7.137), we find that

\[\mathbf{L} = \begin{bmatrix} l_{1} \\ l_{2} \end{bmatrix} = \begin{bmatrix} 20\omega_{0} \\ 99\omega_{0}^{2} \end{bmatrix}\]

The result can also be found from Matlab. For example, for \(\omega_{0} = 1\), the following Matlab statements:

wo = 1;
A = [0 1; -wo^2 0];
C = [1 0];
pe = [-10*wo; -10*wo];
Lt = acker(A', C', pe);
L = Lt'

yield \(\mathbf{L} = \begin{bmatrix} 20 & 99 \end{bmatrix}^{T}\), which agrees with the preceding hand calculations.

Performance of the estimator can be tested by adding the actual state feedback to the plant and plotting the estimation errors. Note this is not the way the system will ultimately be built, but this approach provides a means of validating the estimator performance. Combining Eq. (7.68) of the plant with state feedback with Eq. (7.130) of the

Matlab commands impulse, initial

Figure 7.29

Estimator connected to the plant

estimator with output feedback results in the following overall system equations:

\[\begin{matrix} \begin{bmatrix} \overset{˙}{\mathbf{x}} \\ \overset{˙}{\widehat{\mathbf{x}}} \end{bmatrix} & \ = \begin{bmatrix} \mathbf{A} - \mathbf{BK} & \mathbf{0} \\ \mathbf{LC} - \mathbf{BK} & \mathbf{A} - \mathbf{LC} \end{bmatrix}\begin{bmatrix} \mathbf{x} \\ \widehat{\mathbf{x}} \end{bmatrix}, \\ y & \ = \begin{bmatrix} \mathbf{C} & \mathbf{0} \end{bmatrix}\begin{bmatrix} \mathbf{x} \\ \widehat{\mathbf{x}} \end{bmatrix}, \\ y - \widehat{y} & \ = \begin{bmatrix} \mathbf{C} & - \mathbf{C} \end{bmatrix}\begin{bmatrix} \mathbf{x} \\ \widehat{\mathbf{x}} \end{bmatrix}. \end{matrix}\]

A block diagram of the setup is drawn in Fig. 7.29.

The response of this closed-loop system with \(\omega_{0} = 1\) to an initial condition \(\mathbf{x}_{0} = \lbrack 1.0,0.0\rbrack^{T}\) and \({\widehat{\mathbf{x}}}_{0} = \lbrack 0,0\rbrack^{T}\) is shown in Fig. 7.30, where \(\mathbf{K}\) is obtained from Example 7.14 and \(\mathbf{L}\) comes from Eq. (7.138). The response may be obtained using the Matlab commands impulse or initial. Note the state estimates converge to the actual state-variables after an initial transient even though the initial value of \(\widehat{\mathbf{x}}\) had a large error. Also note the estimation error decays approximately five times faster than the decay of the state itself, as we designed it to do.

Figure 7.30

Initial-condition response of oscillator showing \(x\) and \(\widehat{x}\)

Observer Canonical Form

As was the case for control-law design, there is a canonical form for which the estimator gain design equations are particularly simple and the existence of a solution is obvious. We introduced this form in Section 7.4.1. The equations are in the observer canonical form and have the structure:

\[\begin{matrix} {\overset{˙}{\mathbf{x}}}_{o} & \ = \mathbf{A}_{o}\mathbf{x}_{o} + \mathbf{B}_{o}u \\ y & \ = \mathbf{C}_{o}\mathbf{x}_{o} \end{matrix}\]

where

\[\begin{matrix} \mathbf{A}_{o} & \ = \begin{bmatrix} - a_{1} & 1 & 0 & \cdots & 0 \\ - a_{2} & 0 & 1 & & \vdots \\ \vdots & \vdots & & \ddots & 0 \\ - a_{n - 1} & 0 & \cdots & 0 & 1 \\ - a_{n} & 0 & \cdots & 0 & 0 \end{bmatrix},\ \mathbf{B}_{o} = \begin{bmatrix} b_{1} \\ b_{2} \\ \vdots \\ b_{n} \end{bmatrix}, \\ \mathbf{C}_{o} & \ = \begin{bmatrix} 1 & 0 & 0 & \cdots & 0 \end{bmatrix},\ D_{o} = 0. \end{matrix}\]

Figure 7.31

Observer canonical form of a third-order system
A block diagram for the third-order case is shown in Fig. 7.31. In observer canonical form, all the feedback loops come from the output, or observed signal. Like the control canonical form, the observer canonical form is a "direct" form because the values of the significant elements in the matrices are obtained directly from the coefficients of the numerator and denominator polynomials of the corresponding transfer function \(G(s)\). The matrix \(\mathbf{A}_{o}\) is called a left companion matrix to the characteristic equation because the coefficients of the equation appear on the left side of the matrix.

One of the advantages of the observer canonical form is that the estimator gains can be obtained from it by inspection. The estimator error closed-loop matrix for the third-order case is

\[\mathbf{A}_{o} - \mathbf{LC}_{o} = \begin{bmatrix} - a_{1} - l_{1} & 1 & 0 \\ - a_{2} - l_{2} & 0 & 1 \\ - a_{3} - l_{3} & 0 & 0 \end{bmatrix}\]

which has the characteristic equation

\[s^{3} + \left( a_{1} + l_{1} \right)s^{2} + \left( a_{2} + l_{2} \right)s + \left( a_{3} + l_{3} \right) = 0 \]

Ackermann's estimator formula

and the estimator gain can be found by comparing the coefficients of Eq. (7.144) with \(\alpha_{e}(s)\) from Eq. (7.134).
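In observer canonical form, then, the gains follow by inspection: \(l_{i} = \alpha_{i} - a_{i}\), where the \(\alpha_{i}\) are the desired characteristic-polynomial coefficients. A Python/NumPy check on a hypothetical third-order case (plant polynomial \(s^{3} + 2s^{2}\), all desired error poles at \(- 4\); these numbers are illustrative, not from the text):

```python
import numpy as np

a = np.array([2., 0., 0.])            # plant: s^3 + 2s^2 + 0s + 0
alpha = np.poly([-4, -4, -4])[1:]     # desired: (s+4)^3 -> [12, 48, 64]
L = alpha - a                         # gains by inspection: l_i = alpha_i - a_i

# Verify against the error matrix A_o - L C_o of Eq. (7.143)
A_o = np.array([[-2., 1, 0], [0, 0, 1], [0, 0, 0]])
C_o = np.array([1., 0, 0])
char = np.poly(A_o - np.outer(L, C_o))
print(L, char.round(6))  # L = [10, 48, 64]; char poly coefficients [1, 12, 48, 64]
```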

In a development exactly parallel with the control-law case, we can find a transformation to take a given system to observer canonical form if and only if the system has a structural property that in this case we call observability. Roughly speaking, observability refers to our ability to deduce information about all the modes of the system by monitoring only the sensed outputs. Unobservability results when some mode or subsystem is disconnected physically from the output and therefore no longer appears in the measurements. For example, if only derivatives of certain state-variables are measured, and these state-variables do not affect the dynamics, a constant of integration is obscured. This situation occurs with a plant having the transfer function \(1/s^{2}\) if only velocity is measured, for then it is impossible to deduce the initial value of the position. On the other hand, for an oscillator, a velocity measurement is sufficient to estimate position because the acceleration, and consequently the velocity observed, are affected by position. The mathematical test for determining observability is that the observability matrix,

\[\mathcal{O} = \begin{bmatrix} \mathbf{C} \\ \mathbf{CA} \\ \vdots \\ \mathbf{CA}^{n - 1} \end{bmatrix}\]

must have independent columns. In the one output case we will study, \(\mathcal{O}\) is square, so the requirement is that \(\mathcal{O}\) be nonsingular or have a nonzero determinant. In general, we can find a transformation to observer canonical form if and only if the observability matrix is nonsingular. Note this is analogous to our earlier conclusions for transforming system matrices to control canonical form.
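The \(1/s^{2}\) and oscillator examples above are easy to confirm. The Python/NumPy sketch below (not the book's Matlab, which would use obsv) forms \(\mathcal{O}\) for each case and tests its rank:

```python
import numpy as np

def obsv(A, C):
    """Observability matrix [C; CA; ...; CA^(n-1)] for an n-state system."""
    n = A.shape[0]
    return np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])

A_dbl = np.array([[0., 1], [0, 0]])     # 1/s^2 plant: x = [position, velocity]
A_osc = np.array([[0., 1], [-1, 0]])    # oscillator with omega_0 = 1
C_vel = np.array([[0., 1]])             # velocity-only measurement

# 1/s^2 with velocity only: rank deficient, so position is unobservable
print(np.linalg.matrix_rank(obsv(A_dbl, C_vel)))  # → 1
# Oscillator with velocity only: full rank, so position can be estimated
print(np.linalg.matrix_rank(obsv(A_osc, C_vel)))  # → 2
```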

As with control-law design, we could find the transformation to observer form, compute the gains from the equivalent of Eq. (7.144), and transform back. An alternative method of computing \(\mathbf{L}\) is to use Ackermann's formula in estimator form, which is

\[\mathbf{L} = \alpha_{e}(\mathbf{A})\mathcal{O}^{- 1}\begin{bmatrix} 0 \\ 0 \\ \vdots \\ 1 \end{bmatrix}\]

where \(\mathcal{O}\) is the observability matrix given in Eq. (7.145).
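Equation (7.146) can be exercised directly on the pendulum estimator example above (\(\omega_{0} = 1\), both error poles at \(- 10\)); the result should reproduce \(\mathbf{L} = \begin{bmatrix} 20 & 99 \end{bmatrix}^{T}\) from Eq. (7.138). A Python/NumPy sketch:

```python
import numpy as np

def acker_estimator(A, C, poles):
    """Ackermann's formula in estimator form: L = alpha_e(A) O^{-1} [0 ... 0 1]'."""
    n = A.shape[0]
    O = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])
    alpha = np.real(np.poly(poles))
    alpha_A = sum(c * np.linalg.matrix_power(A, n - k) for k, c in enumerate(alpha))
    e_last = np.zeros((n, 1)); e_last[-1, 0] = 1.0
    return alpha_A @ np.linalg.solve(O, e_last)

A = np.array([[0., 1], [-1, 0]])   # pendulum with omega_0 = 1
C = np.array([[1., 0]])
L = acker_estimator(A, C, [-10, -10])
print(L.round(6))  # L = [20, 99]^T
```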

Duality

You may already have noticed from this discussion the considerable resemblance between estimation and control problems. In fact, the two problems are mathematically equivalent. This property is called duality. Table 7.1 shows the duality relationships between the estimation and control problems. For example, Ackermann's control formula

Matlab commands acker, place

TABLE 7.1

Duality
Control Estimation
$$\mathbf{A}$$ $$\mathbf{A}^{T}$$
$$\mathbf{B}$$ $$\mathbf{C}^{T}$$
$$\mathbf{C}$$ $$\mathbf{B}^{T}$$

[Eq. (7.88)] becomes the estimator formula Eq. (7.146) if we make the substitutions given in Table 7.1. We can demonstrate this directly using matrix algebra. The control problem is to select the row matrix \(\mathbf{K}\) for satisfactory placement of the poles of the system matrix \(\mathbf{A} - \mathbf{BK}\); the estimator problem is to select the column matrix \(\mathbf{L}\) for satisfactory placement of the poles of \(\mathbf{A} - \mathbf{LC}\). However, the poles of \(\mathbf{A} - \mathbf{LC}\) equal those of \((\mathbf{A} - \mathbf{LC})^{T} = \mathbf{A}^{T} - \mathbf{C}^{T}\mathbf{L}^{T}\), and in this form, the algebra of the design for \(\mathbf{L}^{T}\) is identical to that for \(\mathbf{K}\). Therefore, where we used Ackermann's formula or the place algorithm in the forms

```
K = acker(A, B, pc)
K = place(A, B, pc)
```

for the control problem, we use

```
Lt = acker(A', C', pe)
Lt = place(A', C', pe)
L = Lt'
```

where \(p_{e}\) is a vector containing the desired estimator error poles for the estimator problem.

Thus duality allows us to use the same design tools for estimator problems as for control problems with proper substitutions. The two canonical forms are also dual, as we can see by comparing the triples \(\left( \mathbf{A}_{c},\mathbf{B}_{c},\mathbf{C}_{c} \right)\) and \(\left( \mathbf{A}_{\circ},\mathbf{B}_{\circ},\mathbf{C}_{\circ} \right)\).
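The duality substitution can be exercised with SciPy's pole-placement routine standing in for Matlab's `place` (a sketch; the plant and pole values are illustrative, not from the text):

```python
import numpy as np
from scipy.signal import place_poles

# Pendulum-like plant: x1' = x2, x2' = -x1 + u, y = x1
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

# Control problem: place the poles of A - B K
K = place_poles(A, B, [-2.0 + 2.0j, -2.0 - 2.0j]).gain_matrix

# Estimator problem by duality: place the poles of A^T - C^T L^T, then L = (L^T)^T
Lt = place_poles(A.T, C.T, [-8.0 + 8.0j, -8.0 - 8.0j]).gain_matrix
L = Lt.T

print(np.linalg.eigvals(A - B @ K))   # approximately -2 +/- 2j
print(np.linalg.eigvals(A - L @ C))   # approximately -8 +/- 8j
```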

277.0.1. Reduced-Order Estimators

The estimator design method described in Section 7.7.1 reconstructs the entire state vector using measurements of some of the state-variables. If the sensors have no noise, a full-order estimator contains redundancies, and it seems reasonable to question the necessity for estimating state-variables that are measured directly. Can we reduce the complexity of the estimator using the state-variables that are measured directly and exactly? Yes. However, it is better to implement a full-order estimator if there is significant noise on the measurements because, in addition to estimating unmeasured state-variables, the estimator filters the measurements.

The reduced-order estimator reduces the order of the estimator by the number (1 in this text) of sensed outputs. To derive this estimator,
we start with the assumption that the output equals the first state as, for example, \(y = x_{a}\). If this is not so, a preliminary step is required. Transforming to observer form is possible but is overkill; any nonsingular transformation with \(\mathbf{C}\) as the first row will do. We now partition the state vector into two parts: \(x_{a}\), which is directly measured, and \(\mathbf{x}_{b}\), which represents the remaining state-variables that need to be estimated. If we partition the system matrices accordingly, the complete description of the system is given by

\[\begin{matrix} \begin{bmatrix} {\overset{˙}{x}}_{a} \\ {\overset{˙}{\mathbf{x}}}_{b} \end{bmatrix} & \ = \begin{bmatrix} A_{aa} & \mathbf{A}_{ab} \\ \mathbf{A}_{ba} & \mathbf{A}_{bb} \end{bmatrix}\begin{bmatrix} x_{a} \\ \mathbf{x}_{b} \end{bmatrix} + \begin{bmatrix} B_{a} \\ \mathbf{B}_{b} \end{bmatrix}u, \\ y & \ = \begin{bmatrix} 1 & \mathbf{0} \end{bmatrix}\begin{bmatrix} x_{a} \\ \mathbf{x}_{b} \end{bmatrix}. \end{matrix}\]

The dynamics of the unmeasured state-variables are given by

\[{\overset{˙}{\mathbf{x}}}_{b} = \mathbf{A}_{bb}\mathbf{x}_{b} + \underset{\text{known input}\text{~}}{\overset{\mathbf{A}_{ba}x_{a} + \mathbf{B}_{b}u}{︸}}, \]

where the right-most two terms are known and can be considered as an input into the \(\mathbf{x}_{b}\) dynamics. Because \(x_{a} = y\), the measured dynamics are given by the scalar equation

\[{\overset{˙}{x}}_{a} = \overset{˙}{y} = A_{aa}y + \mathbf{A}_{ab}\mathbf{x}_{b} + B_{a}u. \]

If we collect the known terms of Eq. (7.149) on one side, yielding

\[\underset{\text{known measurement}\text{~}}{\overset{\overset{˙}{y} - A_{aa}y - B_{a}u}{︸}} = \mathbf{A}_{ab}\mathbf{x}_{b}, \]

we obtain a relationship between known quantities on the left side, which we consider measurements, and unknown state-variables on the right. Therefore, Eqs. (7.148) and (7.150) have the same relationship to the state \(\mathbf{x}_{b}\) that the original equation [see Eq. (7.147b)] had to the entire state \(\mathbf{x}\). Following this line of reasoning, we can establish the following substitutions in the original estimator equations to obtain a (reduced-order) estimator of \(\mathbf{x}_{b}\):

\[\begin{matrix} \mathbf{x} & \ \leftarrow \mathbf{x}_{b}, \\ \mathbf{A} & \ \leftarrow \mathbf{A}_{bb}, \\ \mathbf{B}u & \ \leftarrow \mathbf{A}_{ba}y + \mathbf{B}_{b}u, \\ y & \ \leftarrow \overset{˙}{y} - A_{aa}y - B_{a}u, \\ \mathbf{C} & \ \leftarrow \mathbf{A}_{ab}. \end{matrix}\]

Therefore, the reduced-order estimator equations are obtained by substituting Eqs. (7.151) into the full-order estimator [see Eq. (7.130)]:

\[{\overset{˙}{\widehat{\mathbf{x}}}}_{b} = \mathbf{A}_{bb}{\widehat{\mathbf{x}}}_{b} + \underset{\text{input}\text{~}}{\overset{\mathbf{A}_{ba}y + \mathbf{B}_{b}u}{︸}} + \mathbf{L}\underset{\text{measurement}\text{~}}{\overset{\left( \overset{˙}{y} - A_{aa}y - B_{a}u - \mathbf{A}_{ab}{\widehat{\mathbf{x}}}_{b} \right)}{︸}}. \]

If we define the estimator error to be

\[{\widetilde{\mathbf{x}}}_{b} \triangleq \mathbf{x}_{b} - {\widehat{\mathbf{x}}}_{b} \]

Figure 7.32

Reduced-order estimator structure

then the dynamics of the error are given by subtracting Eq. (7.152) from Eq. (7.148) to get

\[{\overset{˙}{\widetilde{\mathbf{x}}}}_{b} = \left( \mathbf{A}_{bb} - \mathbf{L}\mathbf{A}_{ab} \right){\widetilde{\mathbf{x}}}_{b}, \]

and its characteristic equation is given by

\[det\left\lbrack s\mathbf{I} - \left( \mathbf{A}_{bb} - \mathbf{L}\mathbf{A}_{ab} \right) \right\rbrack = 0. \]

We design the dynamics of this estimator by selecting \(\mathbf{L}\) so that Eq. (7.155) matches a reduced-order \(\alpha_{e}(s)\). Now Eq. (7.152) can be rewritten as

\[{\overset{˙}{\widehat{\mathbf{x}}}}_{b} = \left( \mathbf{A}_{bb} - \mathbf{L}\mathbf{A}_{ab} \right){\widehat{\mathbf{x}}}_{b} + \left( \mathbf{A}_{ba} - \mathbf{L}A_{aa} \right)y + \left( \mathbf{B}_{b} - \mathbf{L}B_{a} \right)u + \mathbf{L}\overset{˙}{y} \]

The fact that we must form the derivative of the measurement in Eq. (7.156) appears to present a practical difficulty. It is known that differentiation amplifies noise, so if \(y\) is noisy, the use of \(\overset{˙}{y}\) is unacceptable. To get around this difficulty, we define the new controller state to be

\[\mathbf{x}_{c} \triangleq {\widehat{\mathbf{x}}}_{b} - \mathbf{L}y \]

In terms of this new state, the implementation of the reduced-order estimator is given by

\[{\overset{˙}{\mathbf{x}}}_{c} = \left( \mathbf{A}_{bb} - \mathbf{L}\mathbf{A}_{ab} \right){\widehat{\mathbf{x}}}_{b} + \left( \mathbf{A}_{ba} - \mathbf{L}A_{aa} \right)y + \left( \mathbf{B}_{b} - \mathbf{L}B_{a} \right)u \]

and \(\overset{˙}{y}\) no longer appears directly. A block-diagram representation of the reduced-order estimator is shown in Fig. 7.32.
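The gain \(\mathbf{L}\) that places the eigenvalues of \(\mathbf{A}_{bb} - \mathbf{L}\mathbf{A}_{ab}\) can be computed with the same duality trick, applied to the pair \((\mathbf{A}_{bb},\mathbf{A}_{ab})\). A Python sketch (the partition blocks and pole values here are illustrative):

```python
import numpy as np
from scipy.signal import place_poles

# Unmeasured-state blocks for a third-order plant with y = x_a (illustrative values)
Abb = np.array([[0.0, 1.0], [0.0, 0.0]])
Aab = np.array([[1.0, 0.0]])

# Desired reduced-order estimator poles (an assumed choice)
pe = [-4.24 + 4.24j, -4.24 - 4.24j]

# L places the eigenvalues of Abb - L*Aab, the dual of a control problem
L = place_poles(Abb.T, Aab.T, pe).gain_matrix.T
print(L.ravel())                          # roughly [8.48, 35.96]
print(np.linalg.eigvals(Abb - L @ Aab))   # the requested pole pair
```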

278. A Reduced-Order Estimator Design for the Pendulum

Design a reduced-order estimator for the pendulum that has the error pole at \(- 10\omega_{0}\).

Solution. We are given the system equations

\[\begin{matrix} \begin{bmatrix} {\overset{˙}{x}}_{1} \\ {\overset{˙}{x}}_{2} \end{bmatrix} & \ = \begin{bmatrix} 0 & 1 \\ - \omega_{0}^{2} & 0 \end{bmatrix}\begin{bmatrix} x_{1} \\ x_{2} \end{bmatrix} + \begin{bmatrix} 0 \\ 1 \end{bmatrix}u, \\ y & \ = \begin{bmatrix} 1 & 0 \end{bmatrix}\begin{bmatrix} x_{1} \\ x_{2} \end{bmatrix}. \end{matrix}\]

Figure 7.33

Initial-condition response of the reduced-order estimator

The partitioned matrices are

\[\begin{matrix} \begin{bmatrix} A_{aa} & \mathbf{A}_{ab} \\ \mathbf{A}_{ba} & \mathbf{A}_{bb} \end{bmatrix} & \ = \begin{bmatrix} 0 & 1 \\ - \omega_{0}^{2} & 0 \end{bmatrix}, \\ \begin{bmatrix} B_{a} \\ \mathbf{B}_{b} \end{bmatrix} & \ = \begin{bmatrix} 0 \\ 1 \end{bmatrix}. \end{matrix}\]

From Eq. (7.155), we find the characteristic equation in terms of \(L\) :

\[s - (0 - L) = 0. \]

We compare it with the desired equation,

\[\alpha_{e}(s) = s + 10\omega_{0} = 0, \]

which yields

\[L = 10\omega_{0} \]

The estimator equation, from Eq. (7.158), is

\[{\overset{˙}{x}}_{c} = - 10\omega_{0}{\widehat{x}}_{2} - \omega_{0}^{2}y + u \]

and the state estimate, from Eq. (7.157), is

\[{\widehat{x}}_{2} = x_{c} + 10\omega_{0}y. \]

We use the control law given in the earlier examples. The response of the estimator to a plant initial condition \(\mathbf{x}_{0} = \lbrack 1.0,0.0\rbrack^{T}\) and an estimator initial condition \(x_{c0} = 0\) is shown in Fig. 7.33 for \(\omega_{0} = 1\). The response may be obtained using the Matlab commands impulse or initial. Note the similarity of the initial-condition response to that of the full-order estimator plotted in Fig. 7.30.
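The convergence can be reproduced with a simple forward-Euler simulation of Eqs. (7.157) and (7.158) (a sketch with \(\omega_{0} = 1\) and \(u = 0\); the step size and horizon are arbitrary choices):

```python
import numpy as np

w0, dt, T = 1.0, 1e-3, 2.0          # step size and horizon are arbitrary choices
x1, x2 = 1.0, 0.0                   # true pendulum state, x0 = [1, 0]
xc = 0.0                            # estimator state, xc(0) = 0
u = 0.0

for _ in range(int(T / dt)):
    y = x1
    x2_hat = xc + 10 * w0 * y                      # Eq. (7.157)
    xc_dot = -10 * w0 * x2_hat - w0**2 * y + u     # Eq. (7.158)
    # Forward-Euler update of the plant and the estimator state
    x1, x2 = x1 + dt * x2, x2 + dt * (-w0**2 * x1 + u)
    xc += dt * xc_dot

x2_hat = xc + 10 * w0 * x1
print(abs(x2 - x2_hat))   # estimation error has decayed essentially to zero
```

The error decays at the designed rate \(e^{-10\omega_{0}t}\), so after two seconds it is far below the plant signal levels.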
The reduced-order estimator gain can also be found using Matlab:

```
Lt = acker(Abb', Aab', pe)
Lt = place(Abb', Aab', pe)
L = Lt'
```

The conditions for the existence of the reduced-order estimator are the same as for the full-order estimator, namely, observability of \(\left( \mathbf{A}_{bb},\mathbf{A}_{ab} \right)\), which can be shown to be the same as the observability of \((\mathbf{A},\mathbf{C})\).

278.0.1. Estimator Pole Selection

Design rules of thumb for selecting estimator poles

We can base our selection of estimator pole locations on the techniques discussed in Section 7.6 for the case of controller poles. As a rule of thumb, the estimator poles can be chosen to be faster than the controller poles by a factor of 2 to 6 . This ensures a faster decay of the estimator errors compared with the desired dynamics, thus causing the controller poles to dominate the total response. If sensor noise is large enough to be a major concern, we may choose the estimator poles to be slower than two times the controller poles, which would yield a system with lower bandwidth and more noise smoothing. However, we would expect the total system response in this case to be strongly influenced by the location of the estimator poles. If the estimator poles are slower than the controller poles, we would expect the system response to disturbances to be dominated by the dynamic characteristics of the estimator rather than by those selected by the control law.

In comparison with the selection of controller poles, estimator pole selection requires us to be concerned with a much different relationship than with control effort. As in the controller, there is a feedback term in the estimator that grows in magnitude as the requested speed of response increases. However, this feedback is in the form of an electronic signal or a digital word in a computer, so its growth causes no special difficulty. In the controller, increasing the speed of response increases the control effort; this implies the use of a larger actuator, which in turn increases its size, weight, and cost. The important consequence of increasing the speed of response of an estimator is that the bandwidth of the estimator becomes higher, thus causing more sensor noise to pass on to the control actuator. Of course, if \((\mathbf{A},\mathbf{C})\) are not observable, then no amount of estimator gain can produce a reasonable state estimate. Thus, as with controller design, the best estimator design is a balance between good transient response and low-enough bandwidth that sensor noise does not significantly impair actuator activity. Both dominant second-order and optimal control ideas can be used to meet the requirements.

There is a result for estimator gain design based on the SRL. In optimal estimation theory, the best choice for estimator gain is dependent on the ratio of sensor noise intensity \(v\) to process (disturbance) noise intensity [ \(w\) in Eq. (7.160)]. This is best understood by reexamining the estimator equation

\[\overset{˙}{\widehat{\mathbf{x}}} = \mathbf{A}\widehat{\mathbf{x}} + \mathbf{B}u + \mathbf{L}(y - \mathbf{C}\widehat{\mathbf{x}}) \]

to see how it interacts with the system when process noise \(w\) is present. The plant with process noise is described by

\[\overset{˙}{\mathbf{x}} = \mathbf{Ax} + \mathbf{B}u + \mathbf{B}_{1}w \]

and the measurement equation with sensor noise \(v\) is described by

\[y = \mathbf{Cx} + v. \]

The estimator error equation with these additional inputs is found directly by subtracting Eq. (7.159) from Eq. (7.160) and substituting Eq. (7.161) for \(y\) :

\[\overset{˙}{\widetilde{\mathbf{x}}} = (\mathbf{A} - \mathbf{LC})\widetilde{\mathbf{x}} + \mathbf{B}_{1}w - \mathbf{L}v. \]

In Eq. (7.162), the sensor noise is multiplied by \(\mathbf{L}\) and the process noise is not. If \(\mathbf{L}\) is very small, then the effect of sensor noise is removed, but the estimator's dynamic response will be "slow," so the error will not reject effects of \(w\) very well. The state of a low-gain estimator will not track uncertain plant inputs very well. These results can, with some success, also be applied to model errors in, for example, A or B. Such model errors will add terms to Eq. (7.162) and act like additional process noise. On the other hand, if \(\mathbf{L}\) is large, then the estimator response will be fast and the disturbance or process noise will be rejected, but the sensor noise, multiplied by \(\mathbf{L}\), results in large errors. Clearly, a balance between these two effects is required. It turns out that the optimal solution to this balance can be found under very reasonable assumptions by solving an SRL equation for the estimator that is very similar to the one for the optimal control formulation [see Eq. (7.109)]. The estimator SRL equation is

\[1 + qG_{e}( - s)G_{e}(s) = 0 \]

where \(q\) is the ratio of input disturbance noise intensity to sensor noise intensity and \(G_{e}\) is the transfer function from the process noise to the sensor output and is given by

\[G_{e}(s) = \mathbf{C}(s\mathbf{I} - \mathbf{A})^{- 1}\mathbf{B}_{1}. \]

Note from Eqs. (7.109) and (7.163) that \(G_{e}(s)\) is similar to \(G_{0}(s)\). However, a comparison of Eqs. (7.110) and (7.164) shows \(G_{e}(s)\) has the input matrix \(\mathbf{B}_{1}\) instead of \(\mathbf{B}\), and \(G_{0}\) is the transfer function from the control input \(u\) to cost output \(z\), and has output matrix \(\mathbf{C}_{1}\) instead of \(\mathbf{C}\).

The use of the estimator SRL [see Eq. (7.163)] is identical to the use of the controller SRL. A root locus with respect to \(q\) is generated, thus yielding sets of optimal estimator poles corresponding more or less to the ratio of process noise intensity to sensor noise intensity. The designer then picks the set of (stable) poles that seems best, considering all aspects of the problem. An important advantage of using the SRL technique is that after the process noise input matrix \(\mathbf{B}_{1}\) has been selected, the "arbitrariness" is reduced to one degree of freedom, the selection of \(q\), instead of the many degrees of freedom required to select the poles directly in a higher-order system.

A final comment concerns the reduced-order estimator. Because of the presence of a direct transmission term from \(y\) through \(\mathbf{L}\) to \({\widehat{\mathbf{x}}}_{b}\) (see Fig. 7.32), the reduced-order estimator has a much higher bandwidth from sensor to control when compared with the full-order estimator. Therefore, if sensor noise is a significant factor, the reduced-order estimator is less attractive because the potential saving in complexity is more than offset by the increased sensitivity to noise.

280. EXAMPLE 7.26: SRL Estimator Design for a Simple Pendulum

Figure 7.34

Symmetric root locus for the inverted pendulum estimator design

Draw the estimator SRL for the linearized equations of the simple inverted pendulum with \(\omega_{o} = 1\). Take the output to be a noisy measurement of position with noise intensity ratio \(q\).

Solution. We are given the system equations

\[\begin{matrix} \begin{bmatrix} {\overset{˙}{x}}_{1} \\ {\overset{˙}{x}}_{2} \end{bmatrix} & \ = \begin{bmatrix} 0 & 1 \\ - \omega_{0}^{2} & 0 \end{bmatrix}\begin{bmatrix} x_{1} \\ x_{2} \end{bmatrix} + \begin{bmatrix} 0 \\ 1 \end{bmatrix}w, \\ y & \ = \begin{bmatrix} 1 & 0 \end{bmatrix}\begin{bmatrix} x_{1} \\ x_{2} \end{bmatrix} + v. \end{matrix}\]

We then calculate from Eq. (7.164) that

\[G_{e}(s) = \frac{1}{s^{2} + \omega_{0}^{2}} \]

The symmetric \(180^{\circ}\) loci are shown in Fig. 7.34. The Matlab statements to generate the SRL are (for \(\omega_{o} = 1\) )

```
s = tf('s');
G = 1/(s^2 + 1);
sysGG = G*G;
rlocus(sysGG);
```

We would choose two stable roots for a given value of \(q\), for example, \(s = - 3 \pm j3.18\) for \(q = 365\), and use them for estimator pole placement.
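The same loci can be examined without Matlab by noting that, with \(G_{e} = 1/\left( s^{2} + \omega_{0}^{2} \right)\), Eq. (7.163) reduces to the polynomial \(\left( s^{2} + \omega_{0}^{2} \right)^{2} + q = 0\). A Python sketch (\(q = 365\) matches the roots quoted above):

```python
import numpy as np

w0, q = 1.0, 365.0
# 1 + q*Ge(-s)*Ge(s) = 0 with Ge = 1/(s^2 + w0^2) gives (s^2 + w0^2)^2 + q = 0
roots = np.roots([1.0, 0.0, 2.0 * w0**2, 0.0, w0**4 + q])
stable = [r for r in roots if r.real < 0]   # keep the LHP pair for the estimator
print(np.round(sorted(stable, key=lambda r: r.imag), 2))  # close to -3 +/- 3.18j
```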

Regulator

Poles of the combined control law and estimator

280.1. Compensator Design: Combined Control Law and Estimator

If we take the control-law design described in Section 7.5, combine it with the estimator design described in Section 7.7, and implement the control law by using the estimated state-variables, the design is complete for a regulator that is able to reject disturbances but has no external reference input to track. However, because the control law was designed for feedback of the actual (not the estimated) state, you may wonder what effect using \(\widehat{\mathbf{x}}\) in place of \(\mathbf{x}\) has on the system dynamics. In this section, we compute this effect. In doing so, we will compute the closed-loop characteristic equation and the open-loop compensator transfer function. We will use these results to compare the state-space designs with root-locus and frequency-response designs.

The plant equation with feedback is now

\[\overset{˙}{\mathbf{x}} = \mathbf{Ax} - \mathbf{BK}\widehat{\mathbf{x}} \]

which can be rewritten in terms of the state error \(\widetilde{\mathbf{x}}\) as

\[\overset{˙}{\mathbf{x}} = \mathbf{Ax} - \mathbf{BK}(\mathbf{x} - \widetilde{\mathbf{x}}). \]

The overall system dynamics in state form are obtained by combining Eq. (7.166) with the estimator error [see Eq. (7.132)] to get

\[\begin{bmatrix} \overset{˙}{\mathbf{x}} \\ \overset{˙}{\widetilde{\mathbf{x}}} \end{bmatrix} = \begin{bmatrix} \mathbf{A} - \mathbf{BK} & \mathbf{BK} \\ \mathbf{0} & \mathbf{A} - \mathbf{LC} \end{bmatrix}\begin{bmatrix} \mathbf{x} \\ \widetilde{\mathbf{x}} \end{bmatrix}.\]

The characteristic equation of this closed-loop system is

\[det\begin{bmatrix} s\mathbf{I} - \mathbf{A} + \mathbf{BK} & - \mathbf{BK} \\ \mathbf{0} & s\mathbf{I} - \mathbf{A} + \mathbf{LC} \end{bmatrix} = 0.\]

Because the matrix is block triangular (see Appendix WD available online at www.pearsonglobaleditions.com), we can rewrite Eq. (7.168) as

\[det(s\mathbf{I} - \mathbf{A} + \mathbf{BK}) \cdot det(s\mathbf{I} - \mathbf{A} + \mathbf{LC}) = \alpha_{c}(s)\alpha_{e}(s) = 0 \]

In other words, the set of poles of the combined system consists of the union of the control poles and the estimator poles. This means that the designs of the control law and the estimator can be carried out independently, yet when they are used together in this way, the poles remain unchanged. \(\ ^{7}\)
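The separation property is easy to verify numerically: build the block-triangular matrix of Eq. (7.167) and compare its eigenvalues with those of \(\mathbf{A} - \mathbf{BK}\) and \(\mathbf{A} - \mathbf{LC}\) separately. A sketch with illustrative numbers (the plant and gains below are assumptions, not from the text):

```python
import numpy as np

# Illustrative second-order plant with assumed stabilizing gains
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
K = np.array([[3.5, 3.0]])       # control gain (places poles at -1.5 +/- 1.5j)
L = np.array([[16.0], [63.0]])   # estimator gain (places poles at -7 and -9)

# Block-triangular closed-loop matrix of Eq. (7.167)
top = np.hstack([A - B @ K, B @ K])
bot = np.hstack([np.zeros((2, 2)), A - L @ C])
closed = np.vstack([top, bot])

ctrl = np.linalg.eigvals(A - B @ K)
est = np.linalg.eigvals(A - L @ C)
both = np.linalg.eigvals(closed)
# The closed-loop poles are the union of controller and estimator poles
print(np.sort_complex(both))
print(np.sort_complex(np.concatenate([ctrl, est])))
```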

To compare the state-variable method of design with the transform methods discussed in Chapters 5 and 6, we note from Fig. 7.35 that the blue shaded portion corresponds to a compensator. The state equation for this compensator is obtained by including the feedback law \(u = - \mathbf{K}\widehat{\mathbf{x}}\) (because it is part of the compensator) in the estimator Eq. (7.130) to get

Figure 7.35

Estimator and controller mechanization

Compensator transfer function

\[\begin{matrix} \overset{˙}{\widehat{\mathbf{x}}} & \ = (\mathbf{A} - \mathbf{BK} - \mathbf{LC})\widehat{\mathbf{x}} + \mathbf{L}y \\ u & \ = - \mathbf{K}\widehat{\mathbf{x}} \end{matrix}\]

Note Eq. (7.170a) has the same structure as Eq. (7.18a), which we repeat here:

\[\overset{˙}{\mathbf{x}} = \mathbf{Ax} + \mathbf{B}u \]

Because the characteristic equation of Eq. (7.18a) is

\[det(s\mathbf{I} - \mathbf{A}) = 0 \]

the characteristic equation of the compensator is found by comparing Eqs. (7.170a) and (7.171), and substituting the equivalent matrices into Eq. (7.172) to get

\[det(s\mathbf{I} - \mathbf{A} + \mathbf{BK} + \mathbf{LC}) = 0 \]

Note we never specified the roots of Eq. (7.173) nor used them in our discussion of the state-space design technique. (Note also the compensator is not guaranteed to be stable; the roots of Eq. (7.173) can be in the RHP.) The transfer function from \(y\) to \(u\) representing the dynamic compensator is obtained by inspecting Eq. (7.45) and substituting in the corresponding matrices from Eq. (7.173):

\[D_{c}(s) = \frac{U(s)}{Y(s)} = - \mathbf{K}(s\mathbf{I} - \mathbf{A} + \mathbf{BK} + \mathbf{LC})^{- 1}\mathbf{L} \]

The same development can be carried out for the reduced-order estimator. Here the control law is

\[u = - \begin{bmatrix} K_{a} & \mathbf{K}_{b} \end{bmatrix}\begin{bmatrix} x_{a} \\ {\widehat{\mathbf{x}}}_{b} \end{bmatrix} = - K_{a}y - \mathbf{K}_{b}{\widehat{\mathbf{x}}}_{b}\]

Substituting Eq. (7.175) into Eq. (7.158), and using Eq. (7.157) and some algebra, we obtain

\[\begin{matrix} {\overset{˙}{\mathbf{x}}}_{c} & \ = \mathbf{A}_{r}\mathbf{x}_{c} + \mathbf{B}_{r}y \\ u & \ = \mathbf{C}_{r}\mathbf{x}_{c} + D_{r}y \end{matrix}\]

where

\[\begin{matrix} & \mathbf{A}_{r} = \mathbf{A}_{bb} - \mathbf{L}\mathbf{A}_{ab} - \left( \mathbf{B}_{b} - \mathbf{L}B_{a} \right)\mathbf{K}_{b}, \\ & \mathbf{B}_{r} = \mathbf{A}_{r}\mathbf{L} + \mathbf{A}_{ba} - \mathbf{L}A_{aa} - \left( \mathbf{B}_{b} - \mathbf{L}B_{a} \right)K_{a}, \\ & \mathbf{C}_{r} = - \mathbf{K}_{b}, \\ & D_{r} = - K_{a} - \mathbf{K}_{b}\mathbf{L}. \end{matrix}\]

Reduced-order compensator transfer function

The dynamic compensator now has the transfer function

\[D_{cr}(s) = \frac{U(s)}{Y(s)} = \mathbf{C}_{r}\left( s\mathbf{I} - \mathbf{A}_{r} \right)^{- 1}\mathbf{B}_{r} + D_{r}. \]

When we compute \(D_{c}(s)\) or \(D_{cr}(s)\) for a specific case, we will find that they are very similar to the classical compensators given in Chapters 5 and 6, in spite of the fact that they are arrived at by entirely different means.

EXAMPLE 7.27: Full-Order Compensator Design for Satellite Attitude Control

Design a compensator using pole placement for the satellite plant with transfer function \(1/s^{2}\). Place the control poles at \(s = - 0.8 \pm 0.8j\) \(\left( \omega_{n} = 1.13rad/sec,\zeta = 0.7 \right)\) and place the estimator poles at \(\omega_{n} =\) \(8rad/sec,\zeta = 0.5\).

Solution. A state-variable description for the given transfer function \(G(s) = 1/s^{2}\) is

\[\begin{matrix} \overset{˙}{\mathbf{x}} & \ = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}\mathbf{x} + \begin{bmatrix} 0 \\ 1 \end{bmatrix}u, \\ y & \ = \begin{bmatrix} 1 & 0 \end{bmatrix}\mathbf{x}. \end{matrix}\]

If we place the control roots at $s = - 0.8 \pm 0.8j\left( \omega_{n} = 1.13rad/sec \right.\ $, \(\zeta = 0.7)\), then

\[\alpha_{c} = s^{2} + 1.6s + 1.28 \]

From \(K =\) place \((A,B,pc)\), the state feedback gain is found to be

\[\mathbf{K} = \begin{bmatrix} 1.28 & 1.6 \end{bmatrix}\]

If the estimator error roots are at \(\omega_{n} = 8rad/sec\) and \(\zeta = 0.5\), the desired estimator characteristic polynomial is

\[\alpha_{e}(s) = s^{2} + 8s + 64 = (s + 4 + 6.93j)(s + 4 - 6.93j), \]

and, from \(Lt =\) place \(\left( A^{'},C^{'},pe \right)\), the estimator feedback-gain matrix is found to be

\[\mathbf{L} = \begin{bmatrix} 8 \\ 64 \end{bmatrix}\]

The compensator transfer function given by Eq. (7.174) is

\[D_{c}(s) = - 112.7\frac{(s + 0.727)}{s^{2} + 9.6s + 78.1} \]
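This transfer function can be reproduced from the compensator state-space form of Eq. (7.170) using SciPy in place of the book's Matlab (a sketch using the gains computed above):

```python
import numpy as np
from scipy.signal import ss2tf

# Satellite plant 1/s^2 with the gains computed above
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
K = np.array([[1.28, 1.6]])
L = np.array([[8.0], [64.0]])

# Compensator, Eq. (7.170): xhat' = (A - BK - LC) xhat + L y,  u = -K xhat
Ac = A - B @ K - L @ C
num, den = ss2tf(Ac, L, -K, np.zeros((1, 1)))
print(num)  # approximately [0, -112.64, -81.92]  ->  -112.6 (s + 0.727)
print(den)  # approximately [1, 9.6, 78.08]       ->  s^2 + 9.6 s + 78.1
```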

Figure 7.36

Root locus for the combined controller and estimator, with process gain as the parameter

Identical results of state-space and frequency response design methods

which looks very much like a lead compensator in that it has a zero on the real axis to the right of its poles; however, rather than one real pole, Eq. (7.181) has two complex poles. The zero provides the derivative feedback with phase lead, and the two poles provide some smoothing of sensor noise.

The effect of the compensation on this system's closed-loop poles can be evaluated in exactly the same way we evaluated compensation in Chapters 5 and 6 using root-locus or frequency-response tools. The gain of 112.7 in Eq. (7.181) is a result of the pole selection inherent in Eqs. (7.179) and (7.180). If we replace this specific value of compensator gain with a variable gain \(K\), then the characteristic equation for the closed-loop system of plant plus compensator becomes

\[1 + K\frac{(s + 0.727)}{(s + 4.8 + 7.42j)(s + 4.8 - 7.42j)s^{2}} = 0. \]

The root-locus technique allows us to evaluate the roots of this equation with respect to \(K\), as drawn in Fig. 7.36. Note the locus goes through the roots selected for Eqs. (7.179) and (7.180), and, when \(K = 112.7\), the four roots of the closed-loop system are equal to those specified.

The frequency-response plots given in Fig. 7.37 show that the compensation designed using state-space accomplishes the same results that one would strive for using frequency-response design. Specifically, the uncompensated phase margin of \(0^{\circ}\) increases to \(53^{\circ}\) in the compensated case, and the gain \(K = 112.7\) produces a crossover frequency \(\omega_{c} = 1.5rad/sec\). Both these values are roughly consistent with the controller closed-loop roots, with \(\omega_{n} = 1.14\) and \(\zeta = 0.7\), as we would expect, because these slow controller poles are dominant in the system response over the fast estimator poles.

Figure 7.37

Frequency response for the open-loop system for the compensated and uncompensated systems

Now we consider a reduced-order estimator for the same system.

EXAMPLE 7.28: Reduced-Order Compensator Design for Satellite Attitude Control

Repeat the design for the \(1/s^{2}\) satellite plant, but use a reduced-order estimator. Place the one estimator pole at \(- 10rad/sec\).

Solution. From Eq. (7.155), we know that the estimator gain is

\[L = 10. \]

From Eqs. (7.176a, b) and with \(\mathbf{K} = \begin{bmatrix} 1.28 & 1.6 \end{bmatrix}\) from Example 7.27, the scalar compensator equations are

\[\begin{matrix} {\overset{˙}{x}}_{c} & \ = - 11.6x_{c} - 117.28y \\ u & \ = - 1.6x_{c} - 17.28y \end{matrix}\]

where from Eq. (7.157),

\[x_{c} = {\widehat{x}}_{2} - 10y. \]

The compensator has the transfer function calculated from Eq. (7.178) to be

\[D_{c} = - \frac{17.28(s + 0.74)}{s + 11.6} \]

and is shown in Fig. 7.38.
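The scalar compensator follows directly from Eqs. (7.177) and (7.178); a Python sketch with the numbers of this example:

```python
# Partition of the 1/s^2 plant: all blocks are scalars here
Aaa, Aab, Aba, Abb = 0.0, 1.0, 0.0, 0.0
Ba, Bb = 0.0, 1.0
Ka, Kb = 1.28, 1.6          # control gains from Example 7.27
L = 10.0                    # reduced-order estimator gain

# Eq. (7.177)
Ar = Abb - L * Aab - (Bb - L * Ba) * Kb
Br = Ar * L + Aba - L * Aaa - (Bb - L * Ba) * Ka
Cr = -Kb
Dr = -Ka - Kb * L

print(Ar, Br)   # -11.6  -117.28  (matches the scalar compensator equations above)
print(Cr, Dr)   # -1.6   -17.28
# Dcr(s) = Cr*Br/(s - Ar) + Dr = -17.28 (s + 0.74)/(s + 11.6)
```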

The reduced-order compensator here is precisely a lead network. This is a pleasant discovery, as again it shows that transform and state-variable techniques can result in exactly the same type of compensation. The root locus of Fig. 7.39 shows the closed-loop poles occur at the assigned locations. The frequency response of the compensated system seen in Fig. 7.40 shows a phase margin of about \(60^{\circ}\). As with the full-order estimator, analysis by other methods confirms the selected root locations.

Figure 7.38

Simplified block diagram of a reduced-order controller that is a lead network

Figure 7.39

Root locus of a reduced-order controller and \(1/s^{2}\) process, root locations at \(K = 17.28\) shown by the dots.

More subtle properties of the pole-placement method can be illustrated by a third-order system.

281. EXAMPLE 7.29: Full-Order Compensator Design for DC Servo

Use the state-space pole-placement method to design a compensator for the DC servo system with the transfer function

\[G(s) = \frac{10}{s(s + 2)(s + 8)} \]

Using a state description in observer canonical form, place the control poles at \(pc = \lbrack - 1.42; - 1.04 \pm 2.14j\rbrack\) and the full-order estimator poles at \(pe = \lbrack - 4.25; - 3.13 \pm 6.41j\rbrack\).

Figure 7.40

Frequency response for \(G(s) = 1/s^{2}\) with a reduced-order estimator

Figure 7.41

DC servo in observer canonical form

Solution. A block diagram of this system in observer canonical form is shown in Fig. 7.41. The corresponding state-space matrices are

\[\begin{matrix} & \mathbf{A} = \begin{bmatrix} - 10 & 1 & 0 \\ - 16 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix},\ \mathbf{B} = \begin{bmatrix} 0 \\ 0 \\ 10 \end{bmatrix}, \\ & \mathbf{C} = \begin{bmatrix} 1 & 0 & 0 \end{bmatrix},\ D = 0 \end{matrix}\]

The desired poles are

\[pc = \lbrack - 1.42; - 1.04 + 2.14*j; - 1.04 - 2.14*j\rbrack \]

We compute the state feedback gain with \(K = place(A,B,pc)\) to be

\[\mathbf{K} = \begin{bmatrix} - 46.4 & 5.76 & - 0.65 \end{bmatrix}\]

The estimator error poles are at

\[pe = \lbrack - 4.25; - 3.13 + 6.41*j; - 3.13 - 6.41*j\rbrack; \]

Figure 7.42

Root locus for DC servo pole assignment

Conditionally stable compensator

We compute the estimator gain with \(Lt = place\left( A^{'},C^{'},pe \right)\), \(L = Lt^{'}\), to be

\[\mathbf{L} = \begin{bmatrix} 0.5 \\ 61.4 \\ 216 \end{bmatrix}\]

The compensator transfer function, as given by substituting into Eq. (7.174), is

\[D_{c}(s) = - 190\frac{(s + 0.432)(s + 2.10)}{(s - 1.88)(s + 2.94 \pm 8.32j)} \]

Figure 7.42 shows the root locus of the system of compensator and plant in series, plotted with the compensator gain as the parameter. It verifies that the roots are in the desired locations specified when the gain \(K = 190\) in spite of the peculiar (unstable) compensation that has resulted. Even though this compensator has an unstable root at \(s =\) +1.88 , all system closed-loop poles (controller and estimator) are stable.
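The unstable compensator root can be confirmed by forming \(\mathbf{A} - \mathbf{BK} - \mathbf{LC}\) for this example and checking its eigenvalues (a sketch; small mismatches come from the rounded gains quoted above):

```python
import numpy as np

# DC servo in observer canonical form with the gains quoted above
A = np.array([[-10.0, 1.0, 0.0],
              [-16.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])
B = np.array([[0.0], [0.0], [10.0]])
C = np.array([[1.0, 0.0, 0.0]])
K = np.array([[-46.4, 5.76, -0.65]])
L = np.array([[0.5], [61.4], [216.0]])

poles = np.linalg.eigvals(A - B @ K - L @ C)
print(np.round(poles, 2))   # one real pole near +1.88, the stated RHP root
```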

An unstable compensator is typically not acceptable because of the difficulty in testing either the compensator by itself or the system in open loop during a bench checkout. In some cases, however, better control can be achieved with an unstable compensator; then its inconvenience in checkout may be worthwhile. \(\ ^{8}\)

Figure 7.42 shows that a direct consequence of the unstable compensator is that the system becomes unstable as the gain is reduced from its nominal value. Such a system is called conditionally stable, and should be avoided if possible. As we shall see in Chapter 9, actuator saturation in response to large signals has the effect of lowering the effective gain, and in a conditionally stable system, instability can result. Also, if the electronics are such that the control amplifier gain rises continuously from zero to the nominal value during startup, such a system would be initially unstable. These considerations lead us to consider alternative designs for this system.

\(\ ^{8}\) There are even systems that cannot be stabilized with a stable compensator.

A nonminimum-phase compensator

Figure 7.43

Root locus for DC servo reduced-order controller

282. EXAMPLE 7.30: Redesign of the DC Servo System with a Reduced-Order Estimator

Design a compensator for the DC servo system of Example 7.29 using the same control poles, but with a reduced-order estimator. Place the estimator poles at \(- 4.24 \pm 4.24j\), corresponding to \(\omega_{n} = 6rad/sec\) and \(\zeta = 0.707\).

Solution. The reduced-order estimator corresponds to poles at

\[\text{~}\text{pe}\text{~} = \lbrack - 4.24 + 4.24*j; - 4.24 - 4.24*j\rbrack. \]

After partitioning we have,

\[\begin{bmatrix} A_{aa} & \mathbf{A}_{ab} \\ \mathbf{A}_{ba} & \mathbf{A}_{bb} \end{bmatrix} = \begin{bmatrix} - 10 & 1 & 0 \\ - 16 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix},\ \begin{bmatrix} B_{a} \\ \mathbf{B}_{b} \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 10 \end{bmatrix}.\]

Solving for the estimator error characteristic polynomial,

\[det\left( s\mathbf{I} - \mathbf{A}_{bb} + \mathbf{L}\mathbf{A}_{ab} \right) = \alpha_{e}(s) \]

we find (using place) that

\[\mathbf{L} = \begin{bmatrix} 8.5 \\ 36 \end{bmatrix}\]

The compensator transfer function, given by Eq. (7.178), is computed to be

\[D_{cr}(s) = 20.93\frac{(s - 0.735)(s + 1.871)}{(s + 0.990 \pm 6.120j)} \]

The associated root locus for this system is shown in Fig. 7.43. Note this time, we have a stable but nonminimum-phase compensator and a zero-degree root locus. The RHP portion of the locus will not cause difficulties because the gain has to be selected to keep all closed-loop poles in the LHP.

As a next pass at the design for this system, we attempt a design with the SRL.

Figure 7.44

Symmetric root locus for DC servo system

284. EXAMPLE 7.31: Redesign of the DC Servo Compensator Using the SRL

Design a compensator for the DC servo system of Example 7.29 using pole placement based on the SRL. For the control law, let the cost output \(z\) be the same as the plant output; for the estimator design, assume the process noise enters at the same place as the system control signal. Select roots for a control bandwidth of about \(2.5rad/sec\), and choose the estimator roots for a bandwidth of about 2.5 times faster than the control bandwidth \((6.3rad/sec)\). Verify the design by plotting the step response and commenting. See Appendix W7.8 available online at www.pearsonglobaleditions.com for a discrete implementation of the solution.

Solution. Because the problem has specified that \(\mathbf{B}_{1} = \mathbf{B}\) and \(\mathbf{C}_{1} = \mathbf{C}\), the SRL is the same for the control as for the estimator, so we need to generate only one locus based on the plant transfer function. The SRL for the system is shown in Fig. 7.44. From the locus, we select \(- 2 \pm 1.56j\) and \(- 8.04\) as the desired control poles \((pc = \lbrack - 2 + 1.56*j; - 2 - 1.56*j; - 8.04\rbrack)\) and \(- 4 \pm 4.9j\) and \(- 9.169\) \((pe = \lbrack - 4 + 4.9*j; - 4 - 4.9*j; - 9.169\rbrack)\) as the desired estimator poles. The state feedback gain is \(K = place(A,B,pc)\), or

\[\mathbf{K} = \begin{bmatrix} - 0.285 & 0.219 & 0.204 \end{bmatrix}\]

and the estimator gain is \(Lt = place\left( A^{'},C^{'},pe \right),L = Lt^{'}\), or

\[\mathbf{L} = \begin{bmatrix} 7.17 \\ 97.4 \\ 367 \end{bmatrix}\]
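The feedback gain can be cross-checked without place by Ackermann's formula, \(\mathbf{K} = \lbrack 0\ 0\ 1\rbrack\mathcal{C}^{- 1}\alpha_{c}(\mathbf{A})\) with \(\mathcal{C} = \lbrack \mathbf{B}\ \mathbf{AB}\ \mathbf{A}^{2}\mathbf{B}\rbrack\). A pure-Python sketch using the matrices of this example (the pc entries are the rounded locus readings, so the result matches the quoted \(\mathbf{K}\) only to about three decimals):

```python
# Ackermann's formula for SISO pole placement (3-state case):
#   K = [0 0 1] * inv(Ctrb) * alpha_c(A),  Ctrb = [B, A*B, A^2*B]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def solve(M, b):
    """Gaussian elimination with partial pivoting: solve M x = b."""
    n = len(M)
    aug = [row[:] + [b[i]] for i, row in enumerate(M)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[piv] = aug[piv], aug[col]
        for r in range(col + 1, n):
            f = aug[r][col] / aug[col][col]
            for c in range(col, n + 1):
                aug[r][c] -= f * aug[col][c]
    x = [0.0] * n
    for r in reversed(range(n)):
        x[r] = (aug[r][n] - sum(aug[r][c] * x[c] for c in range(r + 1, n))) / aug[r][r]
    return x

# DC servo plant in observer canonical form (from this example):
A = [[-10.0, 1.0, 0.0], [-16.0, 0.0, 1.0], [0.0, 0.0, 0.0]]
B = [[0.0], [0.0], [10.0]]

# Desired polynomial from pc = [-2 +/- 1.56j, -8.04]:
# (s^2 + 4s + 6.4336)(s + 8.04) = s^3 + 12.04 s^2 + 38.5936 s + 51.726...
a2, a1, a0 = 12.04, 38.5936, 6.4336 * 8.04

# alpha_c(A) by Horner's rule: ((A + a2*I)A + a1*I)A + a0*I
I3 = [[float(i == j) for j in range(3)] for i in range(3)]
P = [[A[i][j] + a2 * I3[i][j] for j in range(3)] for i in range(3)]
P = matmul(P, A)
P = [[P[i][j] + a1 * I3[i][j] for j in range(3)] for i in range(3)]
P = matmul(P, A)
P = [[P[i][j] + a0 * I3[i][j] for j in range(3)] for i in range(3)]

AB = matmul(A, B)
A2B = matmul(A, AB)
Ctrb = [[B[i][0], AB[i][0], A2B[i][0]] for i in range(3)]
# Row vector q = [0 0 1] * inv(Ctrb)  <=>  Ctrb^T q = [0, 0, 1]:
CtrbT = [[Ctrb[j][i] for j in range(3)] for i in range(3)]
q = solve(CtrbT, [0.0, 0.0, 1.0])
K = [sum(q[i] * P[i][j] for i in range(3)) for j in range(3)]
print(K)  # approximately [-0.285, 0.219, 0.204]
```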

Figure 7.45

Root locus for pole assignment from the SRL

Notice the feedback gains are much smaller than before. The resulting compensator transfer function is computed from Eq. (7.174) to be

\[D_{c}(s) = - \frac{94.5(s + 7.98)(s + 2.52)}{(s + 4.28 \pm 6.42j)(s + 10.6)} \]

We now take this compensator, put it in series with the plant, and use the compensator gain as the parameter. The resulting ordinary root locus of the closed-loop system is shown in Fig. 7.45. When the root-locus gain equals the nominal gain of 94.5, the roots are at the closed-loop locations selected from the SRL, as they should be. The step response and the associated control effort are shown in Fig. 7.46.

Note the compensator is now stable and minimum phase. This improved design comes about in large part because the plant pole at \(s = - 8\) is virtually unchanged by either controller or estimator. It does not need to be changed for good performance; in fact, the only feature in need of repair in the original \(G(s)\) is the pole at \(s = 0\). Using the SRL technique, we essentially discovered that the best use of control effort is to shift the two low-frequency poles at \(s = 0\) and -2 and to leave the pole at \(s = - 8\) virtually unchanged. As a result, the control gains are much lower, and the compensator design is less radical. This example illustrates why LQR design is typically preferable to pole placement.

Armed with the knowledge gained from Example 7.31, let us go back, with a better selection of poles, to investigate the use of pole placement for this example. Initially we used the third-order locations, which produced three poles with a natural frequency of about \(2rad/sec\). This design moved the pole at \(s = - 8\) to \(s = - 1.4\), thus violating the principle that open-loop poles should not be moved unless they are a problem.

Figure 7.46

Step response and control effort: (a) step response, (b) control signal


Now let us try it again, this time using dominant second-order locations to shift the slow poles, and leaving the fast pole alone at \(s = - 8\).

EXAMPLE 7.32

DC Servo System Redesign with Modified Dominant Second-Order Pole Locations

Design a compensator for the DC servo system of Example 7.29 using pole placement with control poles given by

\[pc = \lbrack - 1.7 + j; - 1.7 - j; - 8\rbrack \]

and the estimator poles given by

\[pe = \lbrack - 7 + 3j; - 7 - 3j; - 8\rbrack \]

Solution. With these pole locations, we find that the required feedback gain is [using \(K =\) place \((A,B,pc)\rbrack\)

\[\mathbf{K} = \begin{bmatrix} - 0.218 & 0.109 & 0.14 \end{bmatrix}\]

which has a smaller magnitude than the case where the pole at \(s = - 8\) was moved.

We find the estimator gain to be [using \(\left. \ Lt = place\left( A^{'},C^{'},pe \right),L = Lt^{'} \right\rbrack\)

\[\mathbf{L} = \begin{bmatrix} 12 \\ 154 \\ 464 \end{bmatrix}\]

The compensator transfer function is computed from Eq. (7.174)

\[D_{c} = - \frac{79.13(s + 8)(s + 2.28)}{(s + 11)(s + 6.17 \pm 5.23j)} \]

which is stable and minimum phase. This example illustrates the value of judicious pole selection and of the SRL technique.

The poor pole selection inherent in the initial use of the poles results in higher control effort and produces an unstable compensator. Both of these undesirable features are eliminated using the SRL (or LQR), or by improved pole selection. But we really need to use SRL to guide the proper selection of poles. The bottom line is that \(SRL\) (or \(LQR\) ) is the method of choice!

As the preceding examples have shown, optimal design can be carried out via the SRL. However, it is more common in practice to skip that step and use LQR directly.

284.1. Introduction of the Reference Input with the Estimator

The controller obtained by combining the control law studied in Section 7.5 with the estimator discussed in Section 7.8 is essentially a regulator design. This means the characteristic equations of the control and the estimator are chosen for good disturbance rejection - that is, to give satisfactory transients to disturbances such as \(w(t)\). However, this design approach does not consider a reference input, nor does it provide for command following, which is evidenced by a good transient response of the combined system to command changes. In general, good disturbance rejection and good command following both need to be taken into account in designing a control system. Good command following is achieved by properly introducing the reference input into the system equations.

Let us repeat the plant and controller equations for the full-order estimator; the reduced-order case is the same in concept, differing only in detail:

Figure 7.47

Possible locations for introducing the command input:

(a) compensation in the feedback path;

(b) compensation in the feed-forward path


\[\begin{matrix} \text{Plant:} & \overset{˙}{\mathbf{x}} = \mathbf{Ax} + \mathbf{B}u, \\ & y = \mathbf{Cx}; \\ \text{Controller:} & \overset{˙}{\widehat{\mathbf{x}}} = (\mathbf{A} - \mathbf{BK} - \mathbf{LC})\widehat{\mathbf{x}} + \mathbf{L}y, \\ & u = - \mathbf{K}\widehat{\mathbf{x}}. \end{matrix}\]

Figure 7.47 shows two possibilities for introducing the command input \(r\) into the system. This figure illustrates the general issue of whether the compensation should be put in the feedback or feed-forward path. The response of the system to command inputs is different, depending on the configuration, because the zeros of the transfer functions are different. The closed-loop poles are identical, however, as can be easily verified by letting \(r = 0\) and noting the systems are then identical.

The difference in the responses of the two configurations can be seen quite easily. Consider the effect of a step input in \(r\). In Fig. 7.47(a), the step will excite the estimator in precisely the same way that it excites the plant; thus the estimator error will remain zero during and after the step. This means that the estimator dynamics are not excited by the command input, so the transfer function from \(r\) to \(y\) must have zeros at the estimator pole locations that cancel those poles. As a result, a step command will excite system behavior that is consistent with the control poles alone-that is, with the roots of \(det(s\mathbf{I} - \mathbf{A} + \mathbf{BK}) = 0\).

In Fig. 7.47(b), a step command in \(r\) enters directly only into the estimator, thus causing an estimation error that decays with the estimator dynamic characteristics in addition to the response corresponding to
the control poles. Therefore, a step command will excite system behavior consistent with both control roots and estimator roots- that is, the roots of

\[det(s\mathbf{I} - \mathbf{A} + \mathbf{BK}) \cdot det(s\mathbf{I} - \mathbf{A} + \mathbf{LC}) = 0 \]

For this reason, the configuration shown in Fig. 7.47(a) is typically the superior way to command the system, where \(\bar{N}\) is found using Eqs. (7.97)-(7.99).
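The factorization of the characteristic polynomial into control and estimator parts is easy to see in the scalar case, where the combined closed-loop matrix in \((\mathbf{x},\widetilde{\mathbf{x}})\) coordinates is block-triangular. A minimal sketch (the plant and gain values here are illustrative, not from the text):

```python
# Scalar sketch of the pole factorization: for a one-state plant the combined
# closed-loop matrix in (x, xtilde) coordinates is triangular, so its poles
# are exactly the control pole (a - b*k) and the estimator pole (a - l*c).
a, b, c = -3.0, 1.0, 1.0   # assumed plant: xdot = a*x + b*u, y = c*x
k, l = 2.0, 7.0            # assumed control and estimator gains
M = [[a - b*k, b*k],
     [0.0,     a - l*c]]   # states [x, xtilde]
# Upper-triangular => eigenvalues are the diagonal entries:
poles = [M[0][0], M[1][1]]
print(poles)  # [-5.0, -10.0], the control pole and the estimator pole
```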

In Section 7.9.1, we will show a general structure for introducing the reference input with three choices of parameters that implement either the feed-forward or the feedback case. We will analyze the three choices from the point of view of the system zeros and the implications the zeros have for the system transient response. Finally, in Section 7.9.2, we will show how to select the remaining parameter to eliminate constant errors.

284.1.1. General Structure for the Reference Input

Given a reference input \(r(t)\), the most general linear way to introduce \(r\) into the system equations is to add terms proportional to it in the controller equations. We can do this by adding \(\bar{N}r\) to Eq. (7.184b) and \(\mathbf{M}r\) to Eq. (7.184a). Note in this case, \(\bar{N}\) is a scalar and \(\mathbf{M}\) is an \(n \times 1\) vector. With these additions, the controller equations become

\[\begin{matrix} \overset{˙}{\widehat{\mathbf{x}}} & \ = (\mathbf{A} - \mathbf{BK} - \mathbf{LC})\widehat{\mathbf{x}} + \mathbf{L}y + \mathbf{M}r \\ u & \ = - \mathbf{K}\widehat{\mathbf{x}} + \bar{N}r \end{matrix}\]

The block diagram is shown in Fig. 7.48(a). The alternatives shown in Fig. 7.47 correspond to different choices of \(\mathbf{M}\) and \(\bar{N}\). Because \(r(t)\) is an external signal, it is clear that neither \(\mathbf{M}\) nor \(\bar{N}\) affects the characteristic equation of the combined controller-estimator system. In transfer-function terms, the selection of \(\mathbf{M}\) and \(\bar{N}\) will affect only the zeros of transmission from \(r\) to \(y\) and, as a consequence, can significantly affect the transient response but not the stability. How can we choose \(\mathbf{M}\) and \(\bar{N}\) to obtain satisfactory transient response? We should point out that we assigned the poles of the system by feedback gains \(\mathbf{K}\) and \(\mathbf{L}\), and we are now going to assign zeros by feed-forward gains \(\mathbf{M}\) and \(\bar{N}\).

There are three strategies for choosing \(\mathbf{M}\) and \(\bar{N}\) :

  1. Autonomous estimator: Select \(\mathbf{M}\) and \(\bar{N}\) so the state estimator error equation is independent of \(r\) [Fig. 7.48(b)].

  2. Tracking-error estimator: Select \(\mathbf{M}\) and \(\bar{N}\) so only the tracking error, \(e = (r - y)\), is used in the control [Fig. 7.48(c)].

  3. Zero-assignment estimator: Select \(\mathbf{M}\) and \(\bar{N}\) so \(n\) of the zeros of the overall transfer function are assigned at places of the designer's choice [Fig. 7.48(a)].

CASE 1. From the viewpoint of estimator performance, the first method is quite attractive and the most widely used of the alternatives.


Figure 7.48

Alternative ways to introduce the reference input: (a) general case-zero assignment; (b) standard case-estimator not excited, zeros \(= \alpha_{e}(s)\); (c) error-control case-classical compensation

If \(\widehat{\mathbf{x}}\) is to generate a good estimate of \(\mathbf{x}\), then surely \(\widetilde{\mathbf{x}}\) should be as free of external excitation as possible; that is, \(\widetilde{\mathbf{x}}\) should be uncontrollable from \(r\). The computation of \(\mathbf{M}\) and \(\bar{N}\) to bring this about is quite easy. The estimator error equation is found by subtracting Eq. (7.185a) from Eq. (7.183a), with the plant output [see Eq. (7.183b)] substituted into the estimator [see Eq. (7.184a)], and the control [see Eq. (7.184b)] substituted into the plant [see Eq. (7.183a)]:

\[\begin{matrix} \overset{˙}{\mathbf{x}} - \overset{˙}{\widehat{\mathbf{x}}} = & \mathbf{Ax} + \mathbf{B}( - \mathbf{K}\widehat{\mathbf{x}} + \bar{N}r) \\ & \ - \lbrack(\mathbf{A} - \mathbf{BK} - \mathbf{LC})\widehat{\mathbf{x}} + \mathbf{L}y + \mathbf{M}r\rbrack \\ \overset{˙}{\widetilde{\mathbf{x}}} = & \ (\mathbf{A} - \mathbf{LC})\widetilde{\mathbf{x}} + \mathbf{B}\bar{N}r - \mathbf{M}r. \end{matrix}\]

If \(r\) is not to appear in Eq. (7.186a), then we should choose

\[\mathbf{M} = \mathbf{B}\bar{N} \]

Because \(\bar{N}\) is a scalar, \(\mathbf{M}\) is fixed to within a constant factor. Note with this choice of \(\mathbf{M}\), we can write the controller equations as

\[\begin{matrix} u & \ = - \mathbf{K}\widehat{\mathbf{x}} + \bar{N}r \\ \overset{˙}{\widehat{\mathbf{x}}} & \ = (\mathbf{A} - \mathbf{LC})\widehat{\mathbf{x}} + \mathbf{B}u + \mathbf{L}y \end{matrix}\]

which matches the configuration in Fig. 7.48(b). The net effect of this choice is that the control is computed from the feedback gain and the reference input before it is applied, then the same control is input to both the plant and the estimator. In this form, if the plant control is subject to saturation (as shown by the inclusion of the saturation nonlinearity in Fig. 7.48(b) and discussed in Chapter 9), the same control limits can be applied in Eq. (7.188) to the control entering the equation for the estimate \(\widehat{\mathbf{x}}\), and the nonlinearity cancels out of the \(\widetilde{\mathbf{x}}\) equation. This behavior is essential for proper estimator performance. The block diagram corresponding to this technique is shown in Fig. 7.48(b). We will return to the selection of the gain factor on the reference input, \(\bar{N}\), in Section 7.9.2 after discussing the other two methods of selecting M.
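The cancellation of \(r\) from the estimator error equation can be checked numerically: with \(\mathbf{M} = \mathbf{B}\bar{N}\), a step in \(r\) excites plant and estimator identically, so if \(\widetilde{\mathbf{x}}(0) = \mathbf{0}\) the error stays at zero. A simulation sketch with an assumed second-order plant (all numerical values here are illustrative, not from the text):

```python
# Sketch: with M = B*Nbar, the estimator error is uncontrollable from r.
# Assumed plant: xdot = A x + B u, y = C x (forward Euler simulation).
A = [[0.0, 1.0], [0.0, -1.0]]
B = [0.0, 1.0]
C = [1.0, 0.0]
K = [8.0, 3.0]       # control gain (assumed)
L = [10.0, 25.0]     # estimator gain (assumed)
Nbar = 8.0
M = [B[0] * Nbar, B[1] * Nbar]   # the autonomous-estimator choice M = B*Nbar

def deriv_plant(x, u):
    return [A[0][0]*x[0] + A[0][1]*x[1] + B[0]*u,
            A[1][0]*x[0] + A[1][1]*x[1] + B[1]*u]

def deriv_est(xh, y, r):
    # xhat_dot = (A - B K - L C) xhat + L y + M r
    F = [[A[i][j] - B[i]*K[j] - L[i]*C[j] for j in range(2)] for i in range(2)]
    return [F[i][0]*xh[0] + F[i][1]*xh[1] + L[i]*y + M[i]*r for i in range(2)]

x, xh, dt, r = [0.0, 0.0], [0.0, 0.0], 0.001, 1.0
max_err = 0.0
for _ in range(5000):
    u = -(K[0]*xh[0] + K[1]*xh[1]) + Nbar*r
    y = C[0]*x[0] + C[1]*x[1]
    dx, dxh = deriv_plant(x, u), deriv_est(xh, y, r)
    x = [x[i] + dt*dx[i] for i in range(2)]
    xh = [xh[i] + dt*dxh[i] for i in range(2)]
    max_err = max(max_err, abs(x[0]-xh[0]), abs(x[1]-xh[1]))
print(max_err)  # essentially zero (round-off only): r never excites x - xhat
```

Replacing the line defining M with, say, `M = [0.0, 0.0]` makes the error visibly nonzero, since \(\mathbf{B}\bar{N}r - \mathbf{M}r\) then drives the error dynamics.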

CASE 2. The second approach suggested earlier is to use the tracking error. This solution is sometimes forced on the control designer when the sensor measures only the output error. For example, in many thermostats, the output is the difference between the temperature to be controlled and the setpoint temperature, and there is no absolute indication of the reference temperature available to the controller. Also, some radar tracking systems have a reading that is proportional to the pointing error, and this error signal alone must be used for feedback control. In these situations, we must select \(\mathbf{M}\) and \(\bar{N}\) so Eqs. (7.188) are driven by the error only. This requirement is satisfied if we select

\[\bar{N} = 0\ \text{~}\text{and}\text{~}\ \mathbf{M} = - \mathbf{L} \]

Then the estimator equation is

\[\overset{˙}{\widehat{\mathbf{x}}} = (\mathbf{A} - \mathbf{BK} - \mathbf{LC})\widehat{\mathbf{x}} + \mathbf{L}(y - r) \]

The compensator in this case, for low-order designs, is a standard lead compensator in the forward path. As we have seen in earlier chapters, this design can have a considerable amount of overshoot because of the zero of the compensator. This design corresponds exactly to the compensators designed by the transform methods given in Chapters 5 and 6.

CASE 3. The third method of selecting \(\mathbf{M}\) and \(\bar{N}\) is to choose the values so as to assign the system's zeros to arbitrary locations of the designer's choice. This method provides the designer with the maximum flexibility in satisfying transient-response and steady-state gain constraints. The other two methods are special cases of this third method. All three methods depend on the zeros. As we saw in Section 7.5.2, when there is no estimator and the reference input is added to the control, the closed-loop system zeros remain fixed as the zeros of the open-loop plant. We now examine what happens to the zeros when an estimator is present. To do so, we reconsider the controller of Eqs. (7.188). If there is a zero of transmission from \(r\) to \(u\), then there is necessarily a zero of transmission from \(r\) to \(y\), unless there is a pole at the same location as the zero. It is therefore sufficient to treat the controller alone to determine what effect the choices of \(\mathbf{M}\) and \(\bar{N}\) will have on the system zeros. The equations for a zero from \(r\) to \(u\) from Eqs. (7.188) are given by

\[det\begin{bmatrix} s\mathbf{I} - \mathbf{A} + \mathbf{BK} + \mathbf{LC} & - \mathbf{M} \\ - \mathbf{K} & \bar{N} \end{bmatrix} = 0\]

(We let \(y = 0\) because we care only about the effect of \(r\).) If we divide the last column by the (nonzero) scalar \(\bar{N}\) then add to the rest the product of \(\mathbf{K}\) times the last column, we find that the feed-forward zeros are at the values of \(s\) such that

\[det\begin{bmatrix} s\mathbf{I} - \mathbf{A} + \mathbf{BK} + \mathbf{LC} - \frac{\mathbf{M}}{\bar{N}}\mathbf{K} & - \frac{\mathbf{M}}{\bar{N}} \\ \mathbf{0} & 1 \end{bmatrix} = 0\]

or

\[det\left( s\mathbf{I} - \mathbf{A} + \mathbf{BK} + \mathbf{LC} - \frac{\mathbf{M}}{\bar{N}}\mathbf{K} \right) = \gamma(s) = 0 \]

Now Eq. (7.192) is exactly in the form of Eq. (7.133) for selecting \(\mathbf{L}\) to yield desired locations for the estimator poles. Arbitrary zero assignment is possible if the pair (A-BK-LC, \(\mathbf{K}\) ) is observable. Here we have to select \(\mathbf{M}/\bar{N}\) for a desired zero polynomial \(\gamma(s)\) in the transfer function from the reference input to the control. Thus, the selection of \(\mathbf{M}\) provides a substantial amount of freedom to influence the transient response. We can add an arbitrary \(n\) th-order polynomial to the transfer function from \(r\) to \(u\) and hence from \(r\) to \(y\); that is, we can assign \(n\) zeros in addition to all the poles that we assigned previously. If the roots of \(\gamma(s)\) are not canceled by the poles of the system, then they will be included in zeros of transmission from \(r\) to \(y\).

Two considerations can guide us in the choice of \(\mathbf{M}/\bar{N}\) - that is, in the location of the zeros. The first is dynamic response. We have seen in Chapter 3 that the zeros influence the transient response significantly, and the heuristic guidelines given there may suggest useful locations for the available zeros. The second consideration, which will connect state-space design to another result from transform techniques, is steady-state error or velocity-constant control. In Chapter 4, we derived the relationship between the steady-state accuracy of a Type 1 system and the closed-loop poles and zeros. If the system is Type 1, then the steady-state error to a step input will be zero and to a unit-ramp input will be

\[e_{\infty} = \frac{1}{K_{v}} \]

where \(K_{v}\) is the velocity constant. Furthermore, it can be shown that if the closed-loop poles are at \(\left\{ p_{i} \right\}\) and the closed-loop zeros are at \(\left\{ z_{i} \right\}\), then (for a Type 1 system) Truxal's formula gives

\[\frac{1}{K_{v}} = \sum_{}^{}\ \frac{1}{z_{i}} - \sum_{}^{}\ \frac{1}{p_{i}} \]

Equation (7.194) forms the basis for a partial selection of \(\gamma(s)\), and hence of \(\mathbf{M}\) and \(\bar{N}\). The choice is based on two observations:

  1. If \(\left| z_{i} - p_{i} \right| \ll 1\), then the effect of this pole-zero pair on the dynamic response will be small, because the pole is almost canceled by the zero, and in any transient the residue of the pole at \(p_{i}\) will be very small.

  2. Even though \(z_{i} - p_{i}\) is small, it is possible for \(1/z_{i} - 1/p_{i}\) to be substantial and thus to have a significant influence on \(K_{v}\) according to Eq. (7.194).

Application of these two guidelines to the selection of \(\gamma(s)\), and hence of \(\mathbf{M}\) and \(\bar{N}\), results in a lag-network design. We can illustrate this with an example.

285. EXAMPLE 7.33

Lag compensation by a state-space method

286. Servomechanism: Increasing the Velocity Constant through Zero Assignment

Consider the second-order servomechanism system described by

\[G(s) = \frac{1}{s(s + 1)} \]

and with state description

\[\begin{matrix} & {\overset{˙}{x}}_{1} = x_{2}, \\ & {\overset{˙}{x}}_{2} = - x_{2} + u \end{matrix}\]

Design a controller using pole placement so both poles are at \(s = - 2 \pm j2\) and the system has a velocity constant \(K_{v} = 10\sec^{- 1}\). Verify the design by plotting the step response and the control effort. See Appendix W7.9 available online at www.pearsonglobaleditions.com for a discrete implementation of the solution.

Solution. For this problem, the state feedback gain

\[\mathbf{K} = \begin{bmatrix} 8 & 3 \end{bmatrix}\]

results in the desired control poles. However, with this gain, \(K_{v} = 2\) \(\sec^{- 1}\), and we need \(K_{v} = 10\sec^{- 1}\). What effect will using estimators designed according to the three methods for \(\mathbf{M}\) and \(\bar{N}\) selection have on our design? Using the first strategy (the autonomous estimator), we find that the value of \(K_{v}\) does not change. If we use the second method (error control), we introduce a zero at a location unknown beforehand, and the effect on \(K_{v}\) will not be under direct design control. However, if we use the third option (zero placement) along with Truxal's formula [Eq. (7.194)], we can satisfy both the dynamic response and the steady-state requirements.
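Both claims about the baseline design are quick to verify by hand: the closed-loop characteristic polynomial is \(s^{2} + (1 + k_{2})s + k_{1}\), and Truxal's formula with no finite zeros gives \(K_{v}\). A pure-Python check:

```python
# Check that K = [8, 3] places the poles of 1/(s(s+1)) at -2 +/- 2j,
# and that Truxal's formula then gives Kv = 2.
import cmath

# xdot1 = x2, xdot2 = -x2 + u, u = -(k1 x1 + k2 x2):
#   closed-loop characteristic polynomial is s^2 + (1 + k2)s + k1
k1, k2 = 8.0, 3.0
b, c = 1.0 + k2, k1                                # s^2 + 4s + 8
disc = cmath.sqrt(b * b - 4 * c)
roots = [(-b + disc) / 2, (-b - disc) / 2]
print(roots)  # -2 +/- 2j

# Truxal's formula with no finite zeros: 1/Kv = -sum(1/p_i)
Kv = 1.0 / (-sum(1 / r for r in roots)).real
print(Kv)  # 2.0
```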

First, we must select the estimator pole \(p_{3}\) and the zero \(z_{3}\) to satisfy Eq. (7.194) for \(K_{v} = 10\sec^{- 1}\). We want to keep \(z_{3} - p_{3}\) small, so there is little effect on the dynamic response, and yet have \(1/z_{3} - 1/p_{3}\) be large enough to increase the value of \(K_{v}\). To do this, we arbitrarily set \(p_{3}\) small compared with the control dynamics. For example, we let

\[p_{3} = - 0.1\text{.}\text{~} \]

Notice this approach is opposite to the usual philosophy of estimation design, where fast response is the requirement. Now, using Eq. (7.194), we have

\[\frac{1}{K_{v}} = \frac{1}{z_{3}} - \frac{1}{p_{1}} - \frac{1}{p_{2}} - \frac{1}{p_{3}} \]

where \(p_{1} = - 2 + 2j,p_{2} = - 2 - 2j\), and \(p_{3} = - 0.1\). Solving for \(z_{3}\) such that \(K_{v} = 10\), we obtain

\[\frac{1}{K_{v}} = \frac{4}{8} + \frac{1}{0.1} + \frac{1}{z_{3}} = \frac{1}{10} \]

or

\[z_{3} = - \frac{1}{10.4} = - 0.096 \]

We thus design a reduced-order estimator to have a pole at -0.1 and choose \(\mathbf{M}/\bar{N}\) such that \(\gamma(s)\) has a zero at -0.096 . A block diagram of the resulting system is shown in Fig. 7.49(a). You can readily verify that this system has the overall transfer function

\[\frac{Y(s)}{R(s)} = \frac{8.32(s + 0.096)}{\left( s^{2} + 4s + 8 \right)(s + 0.1)} \]

for which \(K_{v} = 10\sec^{- 1}\), as specified.
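This value of \(K_{v}\) can be confirmed by substituting the exact zero \(z_{3} = - 1/10.4\) back into Truxal's formula, and the unity DC gain of the transfer function follows because \(8.32/10.4 = 0.8 = 8 \times 0.1\). A short pure-Python check:

```python
# Truxal's-formula check for this design, using the exact zero z3 = -1/10.4
# (the -0.096 in the text is this value rounded).
p = [-2 + 2j, -2 - 2j, -0.1]              # closed-loop poles of Y(s)/R(s)
z3 = -1 / 10.4                            # assigned zero
inv_Kv = 1 / z3 - sum(1 / pi for pi in p)  # Eq. (7.194)
Kv = (1 / inv_Kv).real
print(round(Kv, 6))  # 10.0

# DC gain of Y(s)/R(s) with the exact zero: 8.32*(1/10.4)/(8*0.1) = 1
dc = 8.32 * (-z3) / (8 * 0.1)
print(round(dc, 6))  # 1.0
```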

The compensation shown in Fig. 7.49(a) is nonclassical in the sense that it has two inputs ( \(e\) and \(y\) ) and one output. If we resolve the equations to provide pure error compensation by finding the transfer function from \(e\) to \(u\), which would give Eq. (7.195), we obtain the system shown in Fig. 7.49(b). This can be seen as follows: The relevant controller equations are

\[\begin{matrix} {\overset{˙}{x}}_{c} & \ = 0.8e - 3.1u \\ u & \ = 8.32e + 3.02y + x_{c} \end{matrix}\]

Figure 7.49

Servomechanism with

assigned zeros (a lag network): (a) the two-input compensator; (b) equivalent unity-feedback system


Figure 7.50

Root locus of lag-lead compensation

where \(x_{c}\) is the controller state. Taking the Laplace transform of these equations, eliminating \(X_{c}(s)\), and substituting for the output \(\lbrack Y(s) = G(s)U(s)\rbrack\), we find the compensator is described by

\[\frac{U(s)}{E(s)} = D_{c}(s) = \frac{(s + 1)(8.32s + 0.8)}{(s + 4.08)(s + 0.0196)} \]

This compensation is a classical lag-lead network. The root locus of the system in Fig. 7.49(b) is shown in Fig. 7.50. Note the pole-zero pattern near the origin that is characteristic of a lag network. The Bode plot in Fig. 7.51 shows the phase lag at low frequencies and the phase lead at high frequencies. The step response of the system is shown in Fig. 7.52(a) and shows the presence of a "tail" on the response due to the slow pole at -0.1. The associated control effort is shown in Fig. 7.52(b). Of course, the system is Type 1 and the system will have zero tracking error eventually.
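The algebraic elimination above is easy to verify numerically: solving the controller equations for \(U(s)/E(s)\) at a few complex frequencies and comparing against the factored form should agree up to the rounding of the quoted factors. A pure-Python sketch:

```python
# Check that the controller state equations reduce to the quoted lag-lead
# D_c(s). From  s*Xc = 0.8E - 3.1U,  U = 8.32E + 3.02Y + Xc,  and
# Y = U/(s(s+1)), eliminating Xc and Y gives U/E in closed form.

def Dc_from_equations(s):
    lhs = 1.0 - 3.02 / (s * (s + 1)) + 3.1 / s   # coefficient of U
    rhs = 8.32 + 0.8 / s                         # coefficient of E
    return rhs / lhs

def Dc_factored(s):
    return (s + 1) * (8.32 * s + 0.8) / ((s + 4.08) * (s + 0.0196))

worst = 0.0
for s in (1j, 0.5 + 0.5j, 2.0 + 0j, -1.0 + 3.0j):
    a, b = Dc_from_equations(s), Dc_factored(s)
    worst = max(worst, abs(a - b) / abs(b))
print(worst)  # small relative error, i.e. equal up to the rounded factors
```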

We now reconsider the first two methods for choosing \(\mathbf{M}\) and \(\bar{N}\), this time to examine their implications in terms of zeros. Under the first rule (for the autonomous estimator), we let \(\mathbf{M} = \mathbf{B}\bar{N}\). Substituting this into Eq. (7.192) yields, for the controller feed-forward zeros,

\[det(s\mathbf{I} - \mathbf{A} + \mathbf{LC}) = 0 \]

This is exactly the equation from which \(\mathbf{L}\) was selected to make the characteristic polynomial of the estimator equation equal to \(\alpha_{e}(s)\). Thus we have created \(n\) zeros in exactly the same locations as the \(n\) poles of the

Figure 7.51

Frequency response of lag-lead compensation


estimator. Because of this pole-zero cancellation (which causes "uncontrollability" of the estimator modes), the overall transfer function poles consist only of the state feedback controller poles.

The second rule (for a tracking-error estimator) selects \(\mathbf{M} = - \mathbf{L}\) and \(\bar{N} = 0\). If these are substituted into Eq. (7.191), then the feedforward zeros are given by

\[det\begin{bmatrix} s\mathbf{I} - \mathbf{A} + \mathbf{BK} + \mathbf{LC} & \mathbf{L} \\ - \mathbf{K} & 0 \end{bmatrix} = 0\]

If we postmultiply the last column by \(\mathbf{C}\) and subtract the result from the first \(n\) columns, then premultiply the last row by \(\mathbf{B}\) and add it to the first \(n\) rows, Eq. (7.197) reduces to

\[det\begin{bmatrix} s\mathbf{I} - \mathbf{A} & \mathbf{L} \\ - \mathbf{K} & 0 \end{bmatrix} = 0\]

If we compare Eq. (7.198) with the equations for the zeros of a system in a state description, Eq. (7.63), we see the added zeros are those obtained by replacing the input matrix with \(\mathbf{L}\) and the output with \(\mathbf{K}\). Thus, if we wish to use error control, we have to accept the presence of these compensator zeros that depend on the choice of \(\mathbf{K}\) and \(\mathbf{L}\) and over which we have no direct control. For low-order cases this results, as we said before, in a lead compensator as part of a unity feedback topology.

Figure 7.52

Response of the system with lag compensation:

(a) step response;

(b) control effort


Let us now summarize our findings on the effect of introducing the reference input. When the reference input signal is included in the controller, the overall transfer function of the closed-loop system is

\[\mathcal{T}(s) = \frac{Y(s)}{R(s)} = \frac{K_{s}\gamma(s)b(s)}{\alpha_{e}(s)\alpha_{c}(s)} \]

where \(K_{s}\) is the total system gain and \(\gamma(s)\) and \(b(s)\) are monic polynomials. The polynomial \(\alpha_{c}(s)\) results in a control gain \(\mathbf{K}\) such that \(det\lbrack s\mathbf{I} - \mathbf{A} + \mathbf{BK}\rbrack = \alpha_{c}(s)\). The polynomial \(\alpha_{e}(s)\) results in estimator gains \(\mathbf{L}\) such that \(det\lbrack s\mathbf{I} - \mathbf{A} + \mathbf{LC}\rbrack = \alpha_{e}(s)\). Because, as designers, we get to choose \(\alpha_{c}(s)\) and \(\alpha_{e}(s)\), we have complete freedom in assigning the poles of the closed-loop system. There are three ways to handle the
polynomial \(\gamma(s)\) : We can select it so \(\gamma(s) = \alpha_{e}(s)\) by using the implementation of Fig. 7.48(b), in which case \(\mathbf{M}/\bar{N}\) is given by Eq. (7.187); we may accept \(\gamma(s)\) as given by Eq. (7.198), so error control is used; or we may give \(\gamma(s)\) arbitrary coefficients by selecting \(\mathbf{M}/\bar{N}\) from Eq. (7.192). It is important to point out that the plant zeros represented by \(b(s)\) are not moved by this technique, and remain as part of the closed-loop transfer function unless \(\alpha_{c}\) or \(\alpha_{e}\) are selected to cancel some of these zeros.

286.0.1. Selecting the Gain

We now turn to the process of determining the gain \(\bar{N}\) for the three methods of selecting \(\mathbf{M}\). If we choose method 1, the control is given by Eq. (7.188a) and \({\widehat{\mathbf{x}}}_{ss} = \mathbf{x}_{ss}\). Therefore, we can use either \(\bar{N} = N_{u} + \mathbf{K}\mathbf{N}_{\mathbf{x}}\), as in Eq. (7.99), or \(u = N_{u}r - \mathbf{K}\left( \widehat{\mathbf{x}} - \mathbf{N}_{\mathbf{x}}r \right)\). This is the most common choice. If we use the second method, the result is trivial; recall that \(\bar{N} = 0\) for error control. If we use the third method, we pick \(\bar{N}\) such that the overall closed-loop DC gain is unity. \(\ ^{9}\)

The overall system equations then are

\[\begin{matrix} \begin{bmatrix} \overset{˙}{\mathbf{x}} \\ \widetilde{\mathbf{x}} \end{bmatrix} & \ = \begin{bmatrix} \mathbf{A} - \mathbf{BK} & \mathbf{BK} \\ \mathbf{0} & \mathbf{A} - \mathbf{LC} \end{bmatrix}\begin{bmatrix} \mathbf{x} \\ \widetilde{\mathbf{x}} \end{bmatrix} + \begin{bmatrix} \mathbf{B} \\ \mathbf{B} - \overline{\mathbf{M}} \end{bmatrix}\bar{N}r, \\ y & \ = \begin{bmatrix} \mathbf{C} & \mathbf{0} \end{bmatrix}\begin{bmatrix} \mathbf{x} \\ \widetilde{\mathbf{x}} \end{bmatrix} \end{matrix}\]

where \(\overline{\mathbf{M}}\) is the outcome of selecting zero locations with either Eq. (7.192) or Eq. (7.187). The closed-loop system has unity DC gain if

\[- \begin{bmatrix} \mathbf{C} & \mathbf{0} \end{bmatrix}\begin{bmatrix} \mathbf{A} - \mathbf{BK} & \mathbf{BK} \\ \mathbf{0} & \mathbf{A} - \mathbf{LC} \end{bmatrix}^{- 1}\begin{bmatrix} \mathbf{B} \\ \mathbf{B} - \overline{\mathbf{M}} \end{bmatrix}\bar{N} = 1.\]

If we solve Eq. (7.201) for \(\bar{N}\), we get \(\ ^{10}\)

\[\bar{N} = - \frac{1}{\mathbf{C}(\mathbf{A} - \mathbf{BK})^{- 1}\mathbf{B}\left\lbrack 1 - \mathbf{K}(\mathbf{A} - \mathbf{LC})^{- 1}(\mathbf{B} - \overline{\mathbf{M}}) \right\rbrack} \]
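In the scalar case, this formula is easy to exercise by hand. A sketch with assumed values (\(A = -3\), \(B = C = 1\), control pole at \(-5\), estimator pole at \(-10\); with the autonomous-estimator choice \(\overline{\mathbf{M}} = \mathbf{B}\), the bracketed term is 1):

```python
# Scalar sketch of Eq. (7.202) with assumed values (not from the text).
A, B, C = -3.0, 1.0, 1.0
K = 2.0            # places A - B*K = -5
L = 7.0            # places A - L*C = -10
Mbar = B           # autonomous estimator: B - Mbar = 0, so the bracket = 1
bracket = 1.0 - K * (1.0 / (A - L * C)) * (B - Mbar)
Nbar = -1.0 / (C * (1.0 / (A - B * K)) * B * bracket)
print(Nbar)  # 5.0

# Sanity check: unity closed-loop DC gain. In steady state xhat = x and
#   0 = A*x + B*u,  u = -K*x + Nbar*r  =>  y/r = C*B*Nbar/(B*K - A)
dc_gain = C * B * Nbar / (B * K - A)
print(dc_gain)  # 1.0
```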

The techniques in this section can be readily extended to reduced-order estimators.

\(\ ^{9}\) A reasonable alternative is to select \(\bar{N}\) such that, when \(r\) and \(y\) are both unchanging, the DC gain from \(r\) to \(u\) is the negative of the DC gain from \(y\) to \(u\). The consequences of this choice are that our controller can be structured as a combination of error control and generalized derivative control, and if the system is capable of Type 1 behavior, that capability will be realized.

\(\ ^{10}\) We have used the fact that

\[\begin{bmatrix} \mathbf{A} & \mathbf{C} \\ \mathbf{0} & \mathbf{B} \end{bmatrix}^{- 1} = \begin{bmatrix} \mathbf{A}^{- 1} & - \mathbf{A}^{- 1}\mathbf{CB}^{- 1} \\ \mathbf{0} & \mathbf{B}^{- 1} \end{bmatrix}\]

286.1. Integral Control and Robust Tracking

The choices of \(\bar{N}\) gain in Section 7.9 will result in zero steady-state error to a step command, but the result is not robust because any change in the plant parameters will cause the error to be nonzero. We need to use integral control to obtain robust tracking.

In the state-space design methods discussed so far, no mention has been made of integral control, and no design examples have produced a compensation containing an integral term. In Section 7.10.1, we will show how integral control can be introduced by a direct method of adding the integral of the system error to the dynamic equations. Integral control is a special case of tracking a signal that does not go to zero in the steady-state. We will introduce (in Section 7.10.2) a general method for robust tracking that will present the internal model principle, which solves an entire class of tracking problems and disturbance-rejection controls. Finally, in Section 7.10.4, we will show that if the system has an estimator and also needs to reject a disturbance of known structure, we can include a model of the disturbance in the estimator equations and use the computed estimate of the disturbance to cancel the effects of the real plant disturbance on the output.

7.10.1 Integral Control

We start with an ad hoc solution to integral control by augmenting the state vector with the integrator dynamics. For the system

\[\begin{matrix} \overset{˙}{\mathbf{x}} & \ = \mathbf{Ax} + \mathbf{B}u + \mathbf{B}_{1}w \\ y & \ = \mathbf{Cx} \end{matrix}\]

we can feed back the integral of the error,\(\ ^{11}\) \(e = y - r\), as well as the state of the plant, \(\mathbf{x}\), by augmenting the plant state with the extra (integral) state \(x_{I}\), which obeys the differential equation

\[{\overset{˙}{x}}_{I} = \mathbf{Cx} - r( = e)\text{.}\text{~} \]

Thus

\[x_{I} = \int_{}^{t}e(\tau)d\tau \]

The augmented state equations become

\[\begin{bmatrix} {\overset{˙}{x}}_{I} \\ \overset{˙}{\mathbf{x}} \end{bmatrix} = \begin{bmatrix} 0 & \mathbf{C} \\ \mathbf{0} & \mathbf{A} \end{bmatrix}\begin{bmatrix} x_{I} \\ \mathbf{x} \end{bmatrix} + \begin{bmatrix} 0 \\ \mathbf{B} \end{bmatrix}u - \begin{bmatrix} 1 \\ \mathbf{0} \end{bmatrix}r + \begin{bmatrix} 0 \\ \mathbf{B}_{1} \end{bmatrix}w\]

and the feedback law is

\[u = - \begin{bmatrix} K_{1} & \mathbf{K}_{0} \end{bmatrix}\begin{bmatrix} x_{I} \\ \mathbf{x} \end{bmatrix}\]

or simply

\[u = - \mathbf{K}\begin{bmatrix} x_{I} \\ \mathbf{x} \end{bmatrix}\]

Figure 7.53

Integral control structure

With this revised definition of the system, we can apply the design techniques from Section 7.5 in a similar fashion; they will result in the control structure shown in Fig. 7.53.

Integral Control of a Motor Speed System

Consider the motor speed system described by

\[\frac{Y(s)}{U(s)} = \frac{1}{s + 3} \]

that is, \(A = - 3,B = 1\), and \(C = 1\). Design the system to have integral control and two poles at \(s = - 5\). Design an estimator with pole at \(s = - 10\). The disturbance enters at the same place as the control. Evaluate the tracking and disturbance rejection responses.

Solution. The pole-placement requirement is equivalent to

\[pc = \lbrack - 5; - 5\rbrack. \]

The augmented system description, including the disturbance \(w\), is

\[\begin{bmatrix} {\overset{˙}{x}}_{I} \\ \overset{˙}{x} \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ 0 & - 3 \end{bmatrix}\begin{bmatrix} x_{I} \\ x \end{bmatrix} + \begin{bmatrix} 0 \\ 1 \end{bmatrix}(u + w) - \begin{bmatrix} 1 \\ 0 \end{bmatrix}r\]

Therefore, we can find \(\mathbf{K}\) from

\[det\left( s\mathbf{I} - \begin{bmatrix} 0 & 1 \\ 0 & - 3 \end{bmatrix} + \begin{bmatrix} 0 \\ 1 \end{bmatrix}\mathbf{K} \right) = s^{2} + 10s + 25\]

or

\[s^{2} + \left( 3 + K_{0} \right)s + K_{1} = s^{2} + 10s + 25 \]

Consequently,

\[\mathbf{K} = \begin{bmatrix} K_{1} & K_{0} \end{bmatrix} = \begin{bmatrix} 25 & 7 \end{bmatrix}.\]

We may verify this result using acker. The system is shown with feedbacks in Fig. 7.54, along with a disturbance input \(w\).

The estimator gain \(L = 7\) is obtained from

\[\alpha_{e}(s) = s + 10 = s + 3 + L. \]

The estimator equation is of the form

\[\begin{matrix} \overset{˙}{\widehat{x}} & \ = (A - LC)\widehat{x} + Bu + Ly \\ & \ = - 10\widehat{x} + u + 7y, \end{matrix}\]

and

\[u = - K_{0}\widehat{x} = - 7\widehat{x}. \]

Figure 7.54

Integral control example

The step response \(y_{1}\) due to a step reference input \(r\) and the output disturbance response \(y_{2}\) due to a step disturbance input \(w\) are shown in Fig. 7.55(a), and the associated control efforts \((u_{1}\) and \(u_{2})\) are shown in Fig. 7.55(b). As expected, the system is Type 1 and tracks the step reference input and rejects the step disturbance asymptotically.
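The gains in this example are easy to check numerically. The following illustrative Python/NumPy sketch (a stand-in for the Matlab acker verification mentioned above, not code from the text) rebuilds the augmented system and confirms that \(\mathbf{K} = [25\ 7]\) yields the characteristic polynomial \(s^2 + 10s + 25 = (s+5)^2\) and that \(L = 7\) places the estimator pole at \(s = -10\):

```python
import numpy as np

# Motor speed plant: A = -3, B = 1, C = 1
A, B, C = -3.0, 1.0, 1.0

# Augmented integral-control system, state [x_I, x]
Aa = np.array([[0.0, C],
               [0.0, A]])
Ba = np.array([[0.0],
               [B]])

K = np.array([[25.0, 7.0]])       # K = [K1, K0] found above
Acl = Aa - Ba @ K                 # closed-loop system matrix

# For a 2x2 system the characteristic polynomial is
# s^2 - trace(Acl) s + det(Acl); we expect s^2 + 10 s + 25
trace, det = np.trace(Acl), np.linalg.det(Acl)

# Estimator pole: A - L*C with L = 7 should sit at s = -10
L = 7.0
est_pole = A - L * C
```

Checking the trace and determinant avoids computing eigenvalues of the defective (double-pole) closed-loop matrix directly.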

\(\Delta\) 7.10.2 Robust Tracking Control: The Error-Space Approach

In Section 7.10.1, we introduced integral control in a direct way and selected the structure of the implementation so as to achieve integral action with respect to reference and disturbance inputs. We now present a more analytical approach to giving a control system the ability to track (with zero steady-state error) a nondecaying input and to reject (with zero steady-state error) a nondecaying disturbance such as a step, ramp, or sinusoidal input. The method is based on including the equations satisfied by these external signals as part of the problem formulation and solving the problem of control in an error space, so we are assured that the error approaches zero even if the output is following a nondecaying, or even a growing, command (such as a ramp signal) and even if some parameters change (the robustness property). The method is illustrated in detail for signals that satisfy differential equations of order 2, but the extension to more complex signals is not difficult.

Suppose we have the system state equations

\[\begin{matrix} \overset{˙}{\mathbf{x}} & \ = \mathbf{Ax} + \mathbf{B}u + \mathbf{B}_{1}w \\ y & \ = \mathbf{Cx} \end{matrix}\]

and a reference signal that is known to satisfy a specific differential equation. The initial conditions on the equation generating the input are unknown. For example, the input could be a ramp whose slope and initial value are unknown. Plant disturbances of the same class may also be present. We wish to design a controller for this system so the closed-loop system will have specified poles, and can also track input command

Figure 7.55

Transient response for motor speed system:

(a) step responses;

(b) control efforts


signals, and reject disturbances of the type described without steady-state error. We will develop the results only for second-order differential equations. We define the reference input to satisfy the relation

\[\overset{¨}{r} + \alpha_{1}\overset{˙}{r} + \alpha_{2}r = 0, \]

and the disturbance to satisfy exactly the same equation:

\[\overset{¨}{w} + \alpha_{1}\overset{˙}{w} + \alpha_{2}w = 0\text{.}\text{~} \]

The (tracking) error is defined as

\[e = y - r\text{.}\text{~} \]

The problem of tracking \(r\) and rejecting \(w\) can be seen as an exercise in designing a control law to provide regulation of the error, which is to say that the error \(e\) tends to zero as time gets large. The control must also be structurally stable, or robust, in the sense that regulation of \(e\) to zero in the steady-state occurs even in the presence of "small" perturbations of the original system parameters. Note that, in practice, we never have a perfect model of the plant, and the values of parameters are virtually always subject to some change, so robustness is always very important.

We know that the command input satisfies Eq. (7.206), and we would like to eliminate the reference from the equations in favor of the error. We begin by replacing \(r\) in Eq. (7.206) with the error of Eq. (7.208). When we do this, the reference cancels because of Eq. (7.206), and we have the formula for the error in terms of the state:

\[\begin{matrix} \overset{¨}{e} + \alpha_{1}\overset{˙}{e} + \alpha_{2}e & \ = \overset{¨}{y} + \alpha_{1}\overset{˙}{y} + \alpha_{2}y \\ & \ = \mathbf{C}\overset{¨}{\mathbf{x}} + \alpha_{1}\mathbf{C}\overset{˙}{\mathbf{x}} + \alpha_{2}\mathbf{Cx} \end{matrix}\]

We now replace the plant state vector with the error-space state, defined by

\[\xi \triangleq \overset{¨}{\mathbf{x}} + \alpha_{1}\overset{˙}{\mathbf{x}} + \alpha_{2}\mathbf{x} \]

Similarly, we replace the control with the control in error space, defined as

\[\mu \triangleq \overset{¨}{u} + \alpha_{1}\overset{˙}{u} + \alpha_{2}u \]

With these definitions, we can replace Eq. (7.209b) with

\[\overset{¨}{e} + \alpha_{1}\overset{˙}{e} + \alpha_{2}e = \mathbf{C}\xi \]

The state equation for \(\xi\) is given by \(\ ^{12}\)

\[\overset{˙}{\xi} = \dddot{\mathbf{x}} + \alpha_{1}\overset{¨}{\mathbf{x}} + \alpha_{2}\overset{˙}{\mathbf{x}} = \mathbf{A}\xi + \mathbf{B}\mu \]

Notice the disturbance, as well as the reference, cancels from Eq. (7.213). Equations (7.212) and (7.213) now describe the overall system in an error space. In standard state-variable form, the equations are

\[\overset{˙}{\mathbf{z}} = \mathbf{A}_{s}\mathbf{z} + \mathbf{B}_{s}\mu,\]

where \(\mathbf{z} = \begin{bmatrix} e & \overset{˙}{e} & \xi^{T} \end{bmatrix}^{T}\) and

\[\mathbf{A}_{s} = \begin{bmatrix} 0 & 1 & \mathbf{0} \\ - \alpha_{2} & - \alpha_{1} & \mathbf{C} \\ \mathbf{0} & \mathbf{0} & \mathbf{A} \end{bmatrix},\ \mathbf{B}_{s} = \begin{bmatrix} 0 \\ 0 \\ \mathbf{B} \end{bmatrix}\]
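This construction, and the controllability condition discussed next, can be checked numerically. The following illustrative Python/NumPy sketch (not from the text) builds \(\mathbf{A}_{s}\) and \(\mathbf{B}_{s}\) for an assumed plant \(G(s) = 1/s(s+1)\) with a sinusoidal reference (\(\alpha_1 = 0\), \(\alpha_2 = 1\)) and verifies that the error system is controllable:

```python
import numpy as np

# Assumed illustrative plant G(s) = 1/(s(s+1)) in state-space form
A = np.array([[0.0, 1.0],
              [0.0, -1.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])

alpha1, alpha2 = 0.0, 1.0      # reference satisfies r'' + r = 0 (a sinusoid)
n = A.shape[0]

# Error-space matrices, state z = [e, e_dot, xi]
As = np.block([[np.array([[0.0, 1.0]]),         np.zeros((1, n))],
               [np.array([[-alpha2, -alpha1]]), C],
               [np.zeros((n, 2)),               A]])
Bs = np.vstack([np.zeros((2, 1)), B])

# Controllability matrix of (As, Bs); full rank => arbitrary pole placement
ctrb = np.hstack([np.linalg.matrix_power(As, k) @ Bs for k in range(2 + n)])
rank = np.linalg.matrix_rank(ctrb)
```

Here the plant has no zeros at \(\pm j\) (the roots of \(\alpha_r(s) = s^2 + 1\)), so the error system is controllable and `rank` equals the full error-space dimension of 4.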

The error system \(\left( \mathbf{A}_{s},\mathbf{B}_{s} \right)\) can be given arbitrary dynamics by state feedback if it is controllable. If the plant \((\mathbf{A},\mathbf{B})\) is controllable and does not have a zero at any of the roots of the reference-signal characteristic equation

\[\alpha_{r}(s) = s^{2} + \alpha_{1}s + \alpha_{2}, \]

then the error system \(\left( \mathbf{A}_{s},\mathbf{B}_{s} \right)\) is controllable. \(\ ^{13}\) We assume these conditions hold; therefore, there exists a control law of the form

\[\mu = - \begin{bmatrix} K_{2} & K_{1} & \mathbf{K}_{0} \end{bmatrix}\begin{bmatrix} e \\ \overset{˙}{e} \\ \xi \end{bmatrix} = - \mathbf{Kz},\]

such that the error system has arbitrary dynamics by pole placement. We now need to express this control law in terms of the actual process state \(\mathbf{x}\) and the actual control. We combine Eqs. (7.216), (7.210), and (7.211) to get the control law in terms of \(u\) and \(\mathbf{x}\) (we write \(u^{(2)}\) to mean \(\frac{d^{2}u}{dt^{2}}\)):

\[\left( u + \mathbf{K}_{0}\mathbf{x} \right)^{(2)} + \sum_{i = 1}^{2}\mspace{2mu}\alpha_{i}\left( u + \mathbf{K}_{0}\mathbf{x} \right)^{(2 - i)} = - \sum_{i = 1}^{2}\mspace{2mu} K_{i}e^{(2 - i)} \]

The structure for implementing Eq. (7.217) is very simple for tracking constant inputs. In that case, the equation for the reference input is \(\overset{˙}{r} = 0\). In terms of \(u\) and \(\mathbf{x}\), the control law [Eq. (7.217)] reduces to

\[\overset{˙}{u} + \mathbf{K}_{0}\overset{˙}{\mathbf{x}} = - K_{1}e \]

Here we need only to integrate to reveal the control law and the action of integral control:

\[u = - K_{1}\int_{}^{t}e(\tau)d\tau - \mathbf{K}_{0}\mathbf{x} \]

A block diagram of the system, shown in Fig. 7.56, clearly shows the presence of a pure integrator in the controller. In this case, the only difference between the internal model method of Fig. 7.56 and the ad hoc method of Fig. 7.54 is the relative location of the integrator and the gain.

A more complex problem that clearly shows the power of the error-space approach to robust tracking is posed by requiring that a sinusoid be tracked with zero steady-state error. The problem arises, for instance, in the control of a mass-storage disk-head assembly.

EXAMPLE 7.35

Disk-Drive Servomechanism: Robust Control to Follow a Sinusoid

A simple normalized model of a computer disk-drive servomechanism is given by the equations

\[\begin{matrix} \mathbf{A} & = \begin{bmatrix} 0 & 1 \\ 0 & - 1 \end{bmatrix}, & \mathbf{B} = \begin{bmatrix} 0 \\ 1 \end{bmatrix}, & \\ \mathbf{B}_{1} = \begin{bmatrix} 0 \\ 1 \end{bmatrix}, & \mathbf{C} = \begin{bmatrix} 1 & 0 \end{bmatrix}, & & D = 0. \end{matrix}\]

Figure 7.56

Integral control using the internal model

Figure 7.57

Structure of the compensator for the servomechanism to track exactly the sinusoid of frequency \(\omega_{0}\)

Internal model principle

Because the data on the disk are not exactly on a centered circle, the servo must follow a sinusoid of radian frequency \(\omega_{0}\) determined by the spindle speed.

(a) Give the structure of a controller for this system that will follow the given reference input with zero steady-state error.

(b) Assume \(\omega_{0} = 1\) and the desired closed-loop poles are at \(- 1 \pm j\sqrt{3}\) and \(- \sqrt{3} \pm j1\).

(c) Demonstrate the tracking and disturbance rejection properties of the system using Matlab or Simulink.

Solution

(a) The reference input satisfies the differential equation \(\overset{¨}{r} = - \omega_{0}^{2}r\), so \(\alpha_{1} = 0\) and \(\alpha_{2} = \omega_{0}^{2}\). With these values, the error-state matrices, according to Eq. (7.215), are

\[\mathbf{A}_{s} = \begin{bmatrix} 0 & 1 & 0 & 0 \\ - \omega_{0}^{2} & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & - 1 \end{bmatrix},\ \mathbf{B}_{s} = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \end{bmatrix}\]

The characteristic equation of \(\mathbf{A}_{s} - \mathbf{B}_{s}\mathbf{K}\) is

\[s^{4} + \left( 1 + K_{02} \right)s^{3} + \left( \omega_{0}^{2} + K_{01} \right)s^{2} + \left\lbrack K_{1} + \omega_{0}^{2}\left( 1 + K_{02} \right) \right\rbrack s + \left( \omega_{0}^{2}K_{01} + K_{2} \right) = 0,\]

from which the gain may be selected by pole assignment. The compensator implementation from Eq. (7.217) has the structure shown in Fig. 7.57, which clearly shows the presence of the oscillator with frequency \(\omega_{0}\) (known as the internal model of the input generator) in the controller.\(\ ^{14}\)

(b) Now, assume \(\omega_{0} = 1\ rad/sec\) and the desired closed-loop poles are as given:

\[pc = \lbrack - 1 + j*\sqrt{3}; - 1 - j*\sqrt{3}; - \sqrt{3} + j; - \sqrt{3} - j\rbrack. \]

Then the feedback gain is

\[\mathbf{K} = \begin{bmatrix} K_{2} & K_{1} & \mathbf{K}_{0} \end{bmatrix} = \begin{bmatrix} 2.0718 & 16.3923 & 13.9282 & 4.4641 \end{bmatrix},\]
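A quick numerical check (an illustrative NumPy sketch, not from the text) confirms that these gains place the eigenvalues of \(\mathbf{A}_{s} - \mathbf{B}_{s}\mathbf{K}\) at the desired locations, up to the four-decimal rounding of the printed gains:

```python
import numpy as np

omega0 = 1.0
# Error-space matrices for the disk-drive example
As = np.array([[0.0,        1.0, 0.0,  0.0],
               [-omega0**2, 0.0, 1.0,  0.0],
               [0.0,        0.0, 0.0,  1.0],
               [0.0,        0.0, 0.0, -1.0]])
Bs = np.array([[0.0], [0.0], [0.0], [1.0]])

# Gains quoted above: K = [K2, K1, K01, K02]
K = np.array([[2.0718, 16.3923, 13.9282, 4.4641]])

poles = np.sort_complex(np.linalg.eigvals(As - Bs @ K))
desired = np.sort_complex(np.array([-1 + 1j*np.sqrt(3.0), -1 - 1j*np.sqrt(3.0),
                                    -np.sqrt(3.0) + 1j, -np.sqrt(3.0) - 1j]))
err = np.max(np.abs(poles - desired))   # small residual from rounded gains
```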

which results in the controller

\[\begin{matrix} {\overset{˙}{\mathbf{x}}}_{c} & \ = \mathbf{A}_{c}\mathbf{x}_{c} + \mathbf{B}_{c}e \\ u & \ = \mathbf{C}_{c}\mathbf{x}_{c} - \mathbf{K}_{0}\mathbf{x} \end{matrix}\]

with

\[\begin{matrix} \mathbf{A}_{c} & \ = \begin{bmatrix} 0 & 1 \\ - 1 & 0 \end{bmatrix}, \\ \mathbf{C}_{c} & \ = \begin{bmatrix} 1 & 0 \end{bmatrix}. \end{matrix}\]

The relevant Matlab statements are

% plant matrices
A = [0 1; 0 -1];
B = [0; 1];
C = [1 0];
D = [0];
% form error-space matrices
omega = 1;
As = [0 1 0 0; -omega^2 0 C; zeros(2,2) A];   % error-space matrix of Eq. (7.215)
Bs = [0; 0; B];
% desired closed-loop poles
j = sqrt(-1);
pc = [-1+sqrt(3)*j; -1-sqrt(3)*j; -sqrt(3)+j; -sqrt(3)-j];
K = place(As, Bs, pc);
% form controller matrices
K1 = K(:,1:2);
Ko = K(:,3:4);
Ac = [0 1; -omega*omega 0];
Bc = -[K(2); K(1)];
Cc = [1 0];
Dc = [0];

The controller frequency response is shown in Fig. 7.58 and shows a gain of infinity at the rotation frequency of \(\omega_{0} = 1\ rad/sec\). The frequency response from \(r\) to \(e\) [that is, the sensitivity function \(\mathcal{S}(s)\)] is shown in Fig. 7.59 and reveals a sharp notch at the rotation frequency \(\omega_{0} = 1\ rad/sec\). The same notch is also present in the frequency response of the transfer function from \(w\) to \(y\).

Figure 7.58

Controller frequency response for robust servomechanism
Figure 7.59

Sensitivity function frequency response for robust servomechanism

(c) Figure 7.60 shows the Simulink simulation diagram for the system. Although the simulations can also be done in Matlab, it is more instructive to use the interactive graphical environment of Simulink. Simulink also provides the capability to add nonlinearities (see Chapter 9) and carry out robustness studies efficiently. \(\ ^{15}\) The tracking properties of the system are shown in Fig. 7.61(a),


Figure 7.60

Simulink block diagram for robust servomechanism

Source: Reprinted with permission of The MathWorks, Inc.

showing the asymptotic tracking property of the system. The associated control effort and the tracking error signal are shown in Fig. 7.61(b) and (c), respectively. The disturbance rejection properties of the system are illustrated in Fig. 7.62(a), displaying asymptotic disturbance rejection of sinusoidal disturbance input. The associated control effort is shown in Fig. 7.62(b). The closed-loop frequency response [that is, the complementary transfer function \(\mathcal{T}(s)\) ] for the robust servomechanism is shown in Fig. 7.63. As seen from the figure, the frequency response from \(r\) to \(y\) is unity at \(\omega_{0} = 1rad/sec\) as expected.

The zeros of the system from \(r\) to \(e\) are located at \(\pm j, - 2.7321\) \(\pm j2.5425\). The robust tracking properties are due to the presence of the blocking zeros at \(\pm j\). The zeros from \(w\) to \(y\), both blocking zeros, are located at \(\pm j\). The robust disturbance rejection properties are due to the presence of these blocking zeros.

From the nature of the pole-placement problem, the state \(\mathbf{z}\) in Eq. (7.214) will tend toward zero for all perturbations in the system parameters as long as \(\mathbf{A}_{s} - \mathbf{B}_{s}\mathbf{K}\) remains stable. Notice the signals that are rejected are those that satisfy the equations with the values of \(\alpha_{i}\)

Figure 7.61

(a) Tracking properties for robust servomechanism;

(b) control effort;

(c) tracking error signal


actually implemented in the model of the external signals. The method assumes these are known and implemented exactly. If the implemented values are in error, then a steady-state error will result.

Now let us repeat the example of Section 7.10.1 for integral control.

582 Chapter 7 State-Space Design

Figure 7.62

(a) Disturbance rejection properties for robust servomechanism;

(b) control effort

Figure 7.63

Closed-loop frequency response for robust servomechanism

EXAMPLE 7.36

Integral Control Using the Error-Space Design

For the system

\[H(s) = \frac{1}{s + 3} \]

with the state-variable description

\[A = - 3,\ B = 1,\ C = 1, \]

construct a controller with poles at \(s = - 5\) to track an input that satisfies \(\overset{˙}{r} = 0\).

Solution. The error-space system is

\[\begin{bmatrix} \overset{˙}{e} \\ \overset{˙}{\xi} \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ 0 & - 3 \end{bmatrix}\begin{bmatrix} e \\ \xi \end{bmatrix} + \begin{bmatrix} 0 \\ 1 \end{bmatrix}\mu\]

with \(e = y - r,\xi = \overset{˙}{x}\), and \(\mu = \overset{˙}{u}\). If we take the desired characteristic equation to be

\[\alpha_{c}(s) = s^{2} + 10s + 25 \]

then the pole-placement equation for \(\mathbf{K}\) is

\[det\left\lbrack s\mathbf{I} - \mathbf{A}_{s} + \mathbf{B}_{s}\mathbf{K} \right\rbrack = \alpha_{c}(s). \]

In detail, Eq. (7.220) is

\[s^{2} + \left( 3 + K_{0} \right)s + K_{1} = s^{2} + 10s + 25 \]

which gives

\[\mathbf{K} = \begin{bmatrix} 25 & 7 \end{bmatrix} = \begin{bmatrix} K_{1} & K_{0} \end{bmatrix}\]

and the system is implemented as shown in Fig. 7.64. The transfer function from \(r\) to \(e\) for this system, the negative of the sensitivity function \(\mathcal{S}(s)\),

\[\frac{E(s)}{R(s)} = - \mathcal{S}(s) = - \frac{s(s + 10)}{s^{2} + 10s + 25},\]

shows a blocking zero at \(s = 0\), which prevents the constant input from affecting the error. The closed-loop transfer function - that is, the complementary sensitivity function - is

\[\frac{Y(s)}{R(s)} = \mathcal{T}(s) = 1 - \mathcal{S}(s) = \frac{25}{s^{2} + 10s + 25} \]
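The algebra relating the two transfer functions is quick to verify numerically. Since \(e = y - r\), we have \(E/R = \mathcal{T} - 1\), so the numerator of \(E/R\) must equal \(\mathcal{T}\)'s numerator minus the common denominator, and the zero at \(s = 0\) blocks constant inputs. An illustrative NumPy check:

```python
import numpy as np

den   = np.array([1.0, 10.0, 25.0])   # s^2 + 10 s + 25
E_num = np.array([-1.0, -10.0, 0.0])  # numerator of E/R = -s(s + 10)
T_num = np.array([0.0, 0.0, 25.0])    # numerator of Y/R

# E/R = T - 1, so E's numerator must equal T_num - den
residual = (T_num - den) - E_num

# Blocking zero at s = 0: a constant reference gives zero steady-state error,
# and the closed-loop DC gain is unity
E_at_0 = np.polyval(E_num, 0.0) / np.polyval(den, 0.0)
T_at_0 = np.polyval(T_num, 0.0) / np.polyval(den, 0.0)
```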

Figure 7.64

Example of internal model with feedforward

Figure 7.65

Internal model as integral control with feedforward

Figure 7.66

Step responses with integral control and feedforward

The structure of Fig. 7.65 permits us to add a feedforward of the reference input, which provides one extra degree of freedom in zero assignment. If we add a term proportional to \(r\) in Eq. (7.219), then

\[u = - K_{1}\int_{}^{t}e(\tau)d\tau - \mathbf{K}_{0}\mathbf{x} + Nr \]

This relationship has the effect of creating a zero at \(- K_{1}/N\). The location of this zero can be chosen to improve the transient response of the system. For actual implementation, we can rewrite Eq. (7.221) in terms of \(e\) to get

\[u = - K_{1}\int_{}^{t}e(\tau)d\tau - \mathbf{K}_{0}\mathbf{x} + N(y - e) \]

The block diagram for the system is shown in Fig. 7.65. For our example, the overall transfer function now becomes

\[\frac{Y(s)}{R(s)} = \frac{Ns + 25}{s^{2} + 10s + 25} \]

Notice the DC gain is unity for any value of \(N\) and that, through our choice of \(N\), we can place the zero at any real value to improve the dynamic response. A natural strategy for locating the zero is to have it cancel one of the system poles, in this case at \(s = - 5\). The step response of the system is shown in Fig. 7.66 for \(N = 5\), as well as for \(N = 0\).
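For \(N = 5\), the feedforward zero at \(-K_1/N = -5\) cancels one of the closed-loop poles exactly, as this short illustrative NumPy sketch confirms:

```python
import numpy as np

K1, N = 25.0, 5.0
num = np.array([N, K1])            # N s + 25
den = np.array([1.0, 10.0, 25.0])  # s^2 + 10 s + 25 = (s + 5)^2

zero  = np.roots(num)[0]           # the feedforward zero at -K1/N
poles = np.roots(den)              # double pole at s = -5

dc_gain = num[-1] / den[-1]        # unity DC gain for any N
```

With the cancellation, \(Y(s)/R(s)\) reduces to the first-order response \(5/(s + 5)\).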

\(\Delta\) 7.10.3 Model-Following Design

A related method to track a persistent reference input is called model-following design (see Fig. 7.67). This is an open-loop method that uses the feedforward of the state of the model to construct a specific control input. This control will force the plant output to asymptotically track the output of the desired model, which may or may not be persistent. As an example, the desired model can be the specified path that an aircraft is required to track accurately. LTI models with nonzero initial conditions can be used to generate such paths. Alternatively, an impulsive input can be used to establish the initial condition on the desired model (as done here). The technique can produce superior tracking properties to follow such desired paths. The method is described more fully in Bryson (1994), including the case of disturbance rejection, and was used to synthesize the landing flare logic for the Boeing 747 aircraft. Assume we have a plant described by the triple \((\mathbf{A},\mathbf{B},\mathbf{C})\), having state \(\mathbf{x}\) and output \(y\). Furthermore, assume a given model that produces the desired response of the plant, which is described by the triple \(\left( \mathbf{A}_{m},\mathbf{B}_{m},\mathbf{C}_{m} \right)\), with state \(\mathbf{z}\) and output \(y_{m}\). The idea is to use the states \(\mathbf{x}\) and \(\mathbf{z}\) to construct a control signal so the error \(y - y_{m}\) "quickly" approaches zero. In other words, we want the plant to follow the model with an error that goes to zero. As you will see in the ensuing development, we will tailor the control input such that the output of the plant is forced to follow the desired reference input. The control law uses the feedforward of the model state, \(\mathbf{z}\), and the feedback of the plant state \(\mathbf{x}\). The constant feedforward gain matrices \(\mathbf{M}\) and \(\mathbf{N}\) are obtained from the solution of a set of linear equations.
The feedback gain, \(\mathbf{K}\), is designed as usual to stabilize or speed up the plant dynamics. We now derive the model-following design from first principles, and illustrate the results with an example.

Figure 7.67

Block diagram for the model-following design

Consider the plant described by

\[\begin{matrix} \overset{˙}{\mathbf{x}} & \ = \mathbf{Ax} + \mathbf{B}u, \\ y & \ = \mathbf{Cx}, \end{matrix}\]

and the desired model given by

\[\begin{matrix} \overset{˙}{\mathbf{z}} & \ = \mathbf{A}_{m}\mathbf{z} + \mathbf{B}_{m}\delta(t), \\ y_{m} & \ = \mathbf{C}_{m}\mathbf{z}, \end{matrix}\]

where \(\mathbf{A}_{m}\) is \(n_{m} \times n_{m}\). In our case, the model is driven by the impulse, \(\delta(t)\), or essentially initial conditions only. Assume the dimensions of \(u\), \(y\), and \(y_{m}\) are the same. Let

\[\begin{matrix} \mathbf{x} & \ = \mathbf{Mz} + \delta\mathbf{x} \\ u & \ = \mathbf{Nz} + \delta u \\ y & \ = y_{m} + \delta y \end{matrix}\]

where \(\mathbf{M}\) and \(\mathbf{N}\) are constant matrices. We wish that \(\delta y \rightarrow 0\) rapidly so \(y \rightarrow y_{m}\). If we substitute Eqs. (7.227) and (7.228) in Eqs. (7.223) and (7.224), we obtain

\[\begin{matrix} \mathbf{M}\overset{˙}{\mathbf{z}} + \delta\overset{˙}{\mathbf{x}} & \ = \mathbf{A}(\mathbf{Mz} + \delta\mathbf{x}) + \mathbf{B}(\mathbf{Nz} + \delta u), \\ y & \ = y_{m} + \delta y = \mathbf{C}(\mathbf{Mz} + \delta\mathbf{x}), \end{matrix}\]

which we can rewrite as

\[\begin{matrix} & \delta\overset{˙}{\mathbf{x}} = \mathbf{A}\delta\mathbf{x} + \mathbf{B}\delta u + \left( \mathbf{AM} - \mathbf{M}\mathbf{A}_{m} + \mathbf{BN} \right)\mathbf{z} - \mathbf{MB}_{m}\delta(t), \\ & \delta y = \mathbf{C}\delta\mathbf{x} + \left( \mathbf{CM} - \mathbf{C}_{m} \right)\mathbf{z}. \end{matrix}\]

If we select the matrices \(\mathbf{M}\) and \(\mathbf{N}\) so the matrices multiplying the model state \(\mathbf{z}\) in Eqs. (7.232) and (7.233) vanish, we have the two ensuing matrix equations \(\ ^{16}\)

\[\begin{matrix} \mathbf{AM} - \mathbf{M}\mathbf{A}_{m} + \mathbf{BN} = \mathbf{0}, \\ \mathbf{CM} = \mathbf{C}_{m}. \end{matrix}\]

Eq. (7.234) is called a Sylvester equation. In Eqs. (7.234) and (7.235), there are \(n_{m}(n + 1)\) linear equations in the \(n_{m}(n + 1)\) unknown elements of the matrices \(\mathbf{M}\) and \(\mathbf{N}\). A necessary and sufficient condition for the existence of the solution to Eqs. (7.234) and (7.235) is that the transmission zeros of the plant do not coincide with the eigenvalues of the model \(\mathbf{A}_{m}\). Let the control law be

\[u = \mathbf{Nz} - \mathbf{K}(\mathbf{x} - \mathbf{Mz}) \]

where \(\mathbf{K}\) is designed in the usual way so that \(\mathbf{A} - \mathbf{BK}\) is stable and has satisfactory dynamics. We observe that

\[\delta u = u - \mathbf{Nz} = \mathbf{Nz} - \mathbf{K}(\mathbf{x} - \mathbf{Mz}) - \mathbf{Nz} = - \mathbf{K}\delta\mathbf{x} \]

With the given control law, Eq. (7.236), the plant equations become

\[\begin{matrix} \overset{˙}{\mathbf{x}} & \ = \mathbf{Ax} + \mathbf{B}(\mathbf{Nz} - \mathbf{K}(\mathbf{x} - \mathbf{Mz})) \\ & \ = (\mathbf{A} - \mathbf{BK})\mathbf{x} + \mathbf{B}(\mathbf{N} + \mathbf{KM})\mathbf{z} \end{matrix}\]

In the frequency domain, noting that \(\mathbf{Z}(s) = \left( s\mathbf{I} - \mathbf{A}_{m} \right)^{- 1}\mathbf{B}_{m}\), this can be written as

\[\mathbf{X}(s) = (s\mathbf{I} - \mathbf{A} + \mathbf{BK})^{- 1}\mathbf{B}(\mathbf{N} + \mathbf{KM})\left( s\mathbf{I} - \mathbf{A}_{m} \right)^{- 1}\mathbf{B}_{m}. \]

Now substituting for BN from Eq. (7.234) and adding and subtracting \(s\mathbf{M}\), this can be written as

\[\begin{matrix} & \mathbf{X}(s) = (s\mathbf{I} - \mathbf{A} + \mathbf{BK})^{- 1}\left\lbrack \mathbf{M}\mathbf{A}_{m} - \mathbf{AM} + \mathbf{BKM} \right\rbrack\left( s\mathbf{I} - \mathbf{A}_{m} \right)^{- 1}\mathbf{B}_{m}, \\ & \mathbf{X}(s) = (s\mathbf{I} - \mathbf{A} + \mathbf{BK})^{- 1}\left\lbrack (s\mathbf{I} - \mathbf{A} + \mathbf{BK})\mathbf{M} - \mathbf{M}\left( s\mathbf{I} - \mathbf{A}_{m} \right) \right\rbrack\left( s\mathbf{I} - \mathbf{A}_{m} \right)^{- 1}\mathbf{B}_{m} \end{matrix}\]

If we now multiply this out, the result is

\[\mathbf{X}(s) = \mathbf{M}\left( s\mathbf{I} - \mathbf{A}_{m} \right)^{- 1}\mathbf{B}_{m} - (s\mathbf{I} - \mathbf{A} + \mathbf{BK})^{- 1}\mathbf{M}\mathbf{B}_{m}. \]

The output, \(Y(s) = \mathbf{CX}(s)\), is thus

\[Y(s) = \mathbf{CM}\left( s\mathbf{I} - \mathbf{A}_{m} \right)^{- 1}\mathbf{B}_{m} - \mathbf{C}(s\mathbf{I} - \mathbf{A} + \mathbf{BK})^{- 1}\mathbf{M}\mathbf{B}_{m}. \]

Finally, as \(\mathbf{CM} = \mathbf{C}_{m}\), we have

\[Y(s) = \mathbf{C}_{m}\left( s\mathbf{I} - \mathbf{A}_{m} \right)^{- 1}\mathbf{B}_{m} - \mathbf{C}(s\mathbf{I} - \mathbf{A} + \mathbf{BK})^{- 1}\mathbf{M}\mathbf{B}_{m}, \]

and therefore, in the time domain,

\[y(t) = y_{m}(t) - \lbrack\text{~}\text{decaying transient term controlled by}\text{~}\mathbf{K}\rbrack, \]

which is what we set out to show.

Model-Following for Disk Drive

Assume the model to be followed is given by an oscillator, that is,

\[\begin{matrix} & \mathbf{A}_{m} = \begin{bmatrix} 0 & 1 \\ - 1 & 0 \end{bmatrix},\mathbf{B}_{m} = \begin{bmatrix} 0 \\ 1 \end{bmatrix}, \\ & \mathbf{C}_{m} = \begin{bmatrix} 1 & 0 \end{bmatrix}. \end{matrix}\]

The plant is the same as given in Example 7.35 and we wish to track the same sine wave signal. Assume the desired closed-loop poles are given by

\[p_{c} = \lbrack - 1 + j*\sqrt{3}; - 1 - j*\sqrt{3}\rbrack. \]

Solution. The feedback gain is

\[\mathbf{K} = \begin{bmatrix} 4 & 1 \end{bmatrix}\]

We solve Eqs. (7.234) and (7.235) for this case to obtain

\[\begin{matrix} \mathbf{M} & \ = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \\ \mathbf{N} & \ = \begin{bmatrix} - 1 & 1 \end{bmatrix}. \end{matrix}\]
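Because Eqs. (7.234) and (7.235) are linear in the entries of \(\mathbf{M}\) and \(\mathbf{N}\), they can be solved directly by stacking them with Kronecker products. The following illustrative NumPy sketch (not from the text) recovers the \(\mathbf{M}\) and \(\mathbf{N}\) above for this example:

```python
import numpy as np

# Plant and model from the example
A  = np.array([[0.0, 1.0], [0.0, -1.0]])
B  = np.array([[0.0], [1.0]])
C  = np.array([[1.0, 0.0]])
Am = np.array([[0.0, 1.0], [-1.0, 0.0]])
Cm = np.array([[1.0, 0.0]])

n, nm = A.shape[0], Am.shape[0]

# vec identities: vec(AM) = (I (x) A) vec(M), vec(M Am) = (Am^T (x) I) vec(M),
# vec(BN) = (I (x) B) vec(N)
top = np.hstack([np.kron(np.eye(nm), A) - np.kron(Am.T, np.eye(n)),
                 np.kron(np.eye(nm), B)])        # A M - M Am + B N = 0
bot = np.hstack([np.kron(np.eye(nm), C),
                 np.zeros((nm, nm))])            # C M = Cm
lhs = np.vstack([top, bot])
rhs = np.concatenate([np.zeros(n * nm), Cm.flatten(order='F')])

sol = np.linalg.solve(lhs, rhs)
M = sol[:n * nm].reshape((n, nm), order='F')
N = sol[n * nm:].reshape((1, nm), order='F')
```

The stacked system has the \(n_m(n+1)\) equations in \(n_m(n+1)\) unknowns noted earlier, and the solution reproduces \(\mathbf{M} = \mathbf{I}\), \(\mathbf{N} = [-1\ \ 1]\).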

Figure 7.68

Comparison of the tracking properties for the two designs: desired model \((r)\), model-following design \((y_{MF})\), and internal model design \((y_{IM}\), see Example 7.35) with the nominal plant model

Figure 7.69

Comparison of the tracking error signals for the two designs with the nominal plant model

The internal model design is the same as in Example 7.35. A comparison of the tracking errors for the internal model and model-following designs is shown in Figs. 7.68 and 7.69. Both techniques track the sinusoid exactly in an asymptotic fashion, but the model-following technique has a snappier response and the smaller maximum error, as seen from Fig. 7.69.

Figure 7.70

Comparison of the tracking errors of the two designs with the perturbed plant model

Now let us investigate the robustness of the two techniques with respect to plant perturbations. For comparison of robustness properties, both the model-following system and the internal model closed-loop systems were run but with the plant system matrix perturbed to be

\[\widetilde{\mathbf{A}} = \begin{bmatrix} 0 & 1 \\ 0 & - 1.1 \end{bmatrix}\]

The tracking errors for the two cases are plotted in Fig. 7.70. Notice in Fig. 7.70 that the model-following design has the smaller maximum error but, being non-robust, exhibits a persistent error, while the internal model design continues to track the sine wave exactly.

\(\Delta\) 7.10.4 The Extended Estimator

Our discussion of robust control so far has used a control based on full-state feedback. If the state is not available, then as in the regular case, the full-state feedback, \(\mathbf{Kx}\), can be replaced by the estimates, \(\mathbf{K}\widehat{\mathbf{x}}\), where the estimator is built as before. As a final look at ways to design control with external inputs, in this section, we develop a method for tracking a reference input and rejecting disturbances. The method is based on augmenting the estimator to include estimates of external signals in a way that permits us to cancel out their effects on the system error.

Suppose the plant is described by the equations

\[\begin{matrix} \overset{˙}{\mathbf{x}} & \ = \mathbf{Ax} + \mathbf{B}u + \mathbf{B}_{1}w, \\ y & \ = \mathbf{Cx}, \\ e & \ = \mathbf{Cx} - r. \end{matrix}\]


Figure 7.71

Block diagram of a system for tracking and disturbance rejection with extended estimator: (a) equivalent disturbance; (b) block diagram for design; (c) block diagram for implementation

Furthermore, assume both the reference \(r\) and the disturbance \(w\) are known to satisfy the equations \(\ ^{17}\)

\[\begin{matrix} \alpha_{w}(s)w = \alpha_{\rho}(s)w = 0 \\ \alpha_{r}(s)r = \alpha_{\rho}(s)r = 0 \end{matrix}\]

where

\[\alpha_{\rho}(s) = s^{2} + \alpha_{1}s + \alpha_{2} \]

corresponding to polynomials \(\alpha_{w}(s)\) and \(\alpha_{r}(s)\) in Fig. 7.71(a). In general, we would select the equivalent disturbance polynomial \(\alpha_{\rho}(s)\) in Fig. 7.71(b) to be the least common multiple of \(\alpha_{w}(s)\) and \(\alpha_{r}(s)\). The first step is to recognize that, as far as the steady-state response of the output is concerned, there is an input-equivalent signal \(\rho\) that satisfies the same equation as \(r\) and \(w\) and enters the system at the same place as the control signal, as shown in Fig. 7.71(b). As before, we must assume the plant does not have a zero at any of the roots of Eq. (7.247). For our purposes here, we can replace Eqs. (7.223) with

\[\begin{matrix} \overset{˙}{\mathbf{x}} & \ = \mathbf{Ax} + \mathbf{B}(u + \rho) \\ e & \ = \mathbf{Cx} \end{matrix}\]

If we can estimate this equivalent input, we can add to the control a term \(- \widehat{\rho}\) that will cancel out the effects of the real disturbance and reference and cause the output to track \(r\) in the steady-state. To do this, we combine Eqs. (7.223) and (7.247) into a state description to get

\[\begin{matrix} \overset{˙}{\mathbf{z}} & \ = \mathbf{A}_{s}\mathbf{z} + \mathbf{B}_{s}u \\ e & \ = \mathbf{C}_{s}\mathbf{z} \end{matrix}\]

where \(\mathbf{z} = \begin{bmatrix} \rho & \overset{˙}{\rho} & \mathbf{x}^{T} \end{bmatrix}^{T}\). The matrices are

\[\begin{matrix} \mathbf{A}_{s} & \ = \begin{bmatrix} 0 & 1 & \mathbf{0} \\ - \alpha_{2} & - \alpha_{1} & \mathbf{0} \\ \mathbf{B} & \mathbf{0} & \mathbf{A} \end{bmatrix},\ \mathbf{B}_{s} = \begin{bmatrix} 0 \\ 0 \\ \mathbf{B} \end{bmatrix}, \\ \mathbf{C}_{s} & \ = \begin{bmatrix} 0 & 0 & \mathbf{C} \end{bmatrix}. \end{matrix}\]

The system given by Eqs. (7.251) is not controllable, since we cannot influence \(\rho\) from \(u\). However, if the pair \((\mathbf{A},\mathbf{C})\) is observable and if the system \((\mathbf{A},\mathbf{B},\mathbf{C})\) does not have a zero that is also a root of Eq. (7.247), then the system of Eq. (7.251) will be observable, and we can construct an observer that will compute estimates of both the state of the plant and of \(\rho\). The estimator equations are standard, but the control is not:

\[\begin{matrix} \overset{˙}{\widehat{\mathbf{z}}} & \ = \mathbf{A}_{s}\widehat{\mathbf{z}} + \mathbf{B}_{s}u + \mathbf{L}\left( e - \mathbf{C}_{s}\widehat{\mathbf{z}} \right) \\ u & \ = - \mathbf{K}\widehat{\mathbf{x}} - \widehat{\rho} \end{matrix}\]

In terms of the original variables, the estimator equations are

\[\overset{˙}{\widehat{\mathbf{z}}} = \begin{bmatrix} \overset{˙}{\widehat{\rho}} \\ \overset{¨}{\widehat{\rho}} \\ \overset{˙}{\widehat{\mathbf{x}}} \end{bmatrix} = \begin{bmatrix} 0 & 1 & \mathbf{0} \\ - \alpha_{2} & - \alpha_{1} & \mathbf{0} \\ \mathbf{B} & \mathbf{0} & \mathbf{A} \end{bmatrix}\begin{bmatrix} \widehat{\rho} \\ \overset{˙}{\widehat{\rho}} \\ \widehat{\mathbf{x}} \end{bmatrix} + \begin{bmatrix} 0 \\ 0 \\ \mathbf{B} \end{bmatrix}u + \begin{bmatrix} l_{1} \\ l_{2} \\ \mathbf{L}_{3} \end{bmatrix}\lbrack e - \mathbf{C}\widehat{\mathbf{x}}\rbrack.\]
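The observability condition above can be checked concretely on the scalar motor-speed example treated later in this section (plant \(A = -3\), \(B = 1\), \(C = 1\), constant equivalent input, so \(\alpha_{\rho}(s) = s\)); a minimal sketch in plain Python, not from the text:

```python
# Extended state z = [rho, x]: rho_dot = 0, x_dot = B*rho + A*x.
# Observability of (As, Cs) via the 2x2 observability matrix [Cs; Cs*As].
As = [[0.0, 0.0],
      [1.0, -3.0]]
Cs = [0.0, 1.0]

row2 = [sum(Cs[i] * As[i][j] for i in range(2)) for j in range(2)]  # Cs*As
det_obs = Cs[0] * row2[1] - Cs[1] * row2[0]
print(det_obs != 0.0)  # True -> the extended pair is observable
```

A nonzero determinant of the stacked observability matrix confirms that both \(x\) and \(\rho\) can be estimated from the error signal.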

The overall block diagram of the system for design is shown in Fig. 7.71(b). If we write out the last equation for \(\widehat{\mathbf{x}}\) in Eq. (7.253) and substitute Eq. (7.252b), a simplification of sorts results because a term in \(\widehat{\rho}\) cancels out:

\[\begin{matrix} \overset{˙}{\widehat{\mathbf{x}}} & \ = \mathbf{B}\widehat{\rho} + \mathbf{A}\widehat{\mathbf{x}} + \mathbf{B}( - \mathbf{K}\widehat{\mathbf{x}} - \widehat{\rho}) + \mathbf{L}_{3}(e - \mathbf{C}\widehat{\mathbf{x}}) \\ & \ = \mathbf{A}\widehat{\mathbf{x}} + \mathbf{B}( - \mathbf{K}\widehat{\mathbf{x}}) + \mathbf{L}_{3}(e - \mathbf{C}\widehat{\mathbf{x}}) \\ & \ = \mathbf{A}\widehat{\mathbf{x}} + \mathbf{B}\bar{u} + \mathbf{L}_{3}(e - \mathbf{C}\widehat{\mathbf{x}}) \end{matrix}\]

With the estimator of Eq. (7.253) and the control of Eq. (7.252b), the state equation is

\[\overset{˙}{\mathbf{x}} = \mathbf{Ax} + \mathbf{B}( - \mathbf{K}\widehat{\mathbf{x}} - \widehat{\rho}) + \mathbf{B}\rho \]

In terms of the estimation errors, Eq. (7.254) can be rewritten as

\[\overset{˙}{\mathbf{x}} = (\mathbf{A} - \mathbf{BK})\mathbf{x} + \mathbf{BK}\widetilde{\mathbf{x}} + \mathbf{B}\widetilde{\rho} \]

Because we designed the estimator to be stable, the values of \(\widetilde{\rho}\) and \(\widetilde{\mathbf{x}}\) go to zero in the steady-state, and the final value of the state is not affected by the external input. The block diagram of the system for implementation is drawn in Fig. 7.71(c). A simple example will illustrate the steps in this process.

Steady-State Tracking and Disturbance Rejection of Motor Speed by Extended Estimator

Construct an estimator to control the state and cancel a constant bias at the output and track a constant reference in the motor speed system described by

\[\begin{matrix} \overset{˙}{x} & \ = - 3x + u, \\ y & \ = x + w, \\ \overset{˙}{w} & \ = 0, \\ \overset{˙}{r} & \ = 0. \end{matrix}\]

Place the control pole at \(s = - 5\), and the two extended estimator poles at \(s = - 15\).

Solution. To begin, we design the control law by ignoring the equivalent disturbance. We notice by inspection that a feedback gain of 2 will move the single pole from \(-3\) to the desired \(-5\); therefore, \(K = 2\). The system augmented with equivalent external input \(\rho\), which replaces the actual disturbance \(w\) and the reference \(r\), is given by

\[\begin{matrix} & \overset{˙}{\rho} = 0, \\ & \overset{˙}{x} = - 3x + u + \rho, \\ & e = x. \end{matrix}\]

The extended estimator equations are

\[\begin{matrix} & \overset{˙}{\widehat{\rho}} = l_{1}(e - \widehat{x}) \\ & \overset{˙}{\widehat{x}} = - 3\widehat{x} + u + \widehat{\rho} + l_{2}(e - \widehat{x}) \end{matrix}\]

The estimator error gain is found to be \(\mathbf{L} = \begin{bmatrix} 225 & 27 \end{bmatrix}^{T}\) from the characteristic equation

\[det\begin{bmatrix} s & l_{1} \\ - 1 & s + 3 + l_{2} \end{bmatrix} = s^{2} + 30s + 225.\]
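The gain follows from matching coefficients of the estimator characteristic polynomial against the desired \((s + 15)^{2}\); a quick check in Python (a sketch using the plant pole and desired estimator poles of this example):

```python
# Match s^2 + (3 + l2)s + l1 against the desired (s + 15)^2.
a = 3.0                  # plant pole magnitude (x_dot = -3x + u + rho)
p = 15.0                 # desired repeated estimator pole at s = -15
c1, c0 = 2 * p, p * p    # desired polynomial s^2 + c1*s + c0
l2 = c1 - a              # s coefficient:  3 + l2 = c1
l1 = c0                  # constant term:  l1 = c0
print([l1, l2])          # -> [225.0, 27.0]
```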

A block diagram of the system is given in Fig. 7.72(a), and the step responses to input at the command \(r\) (applied at \(t = 0sec\) ) and at the disturbance \(w\) (applied at \(t = 0.5sec\) ) are shown in Fig. 7.72(b).

\(\Delta 7.11\) Loop Transfer Recovery

The introduction of an estimator in a state feedback controller loop may adversely affect the stability robustness properties of the system [that is, the phase margin (PM) and gain margin (GM) properties may become arbitrarily poor, as shown by Doyle's famous example (Doyle, 1978)]. However, it is possible to modify the estimator design so as to try to "recover" the LQR stability robustness properties to some extent. This process, called LTR, is especially effective for minimum-phase systems. To achieve the recovery, some of the estimator poles are placed at (or near) the zeros of the plant and the remaining

(a)

(b)

Figure 7.72

Motor speed system with extended estimator: (a) block diagram; (b) command step response and disturbance step response

poles are moved (sufficiently far) into the LHP. The idea behind LTR is to redesign the estimator in such a way as to shape the loop gain properties to approximate those of LQR.

The use of LTR means that feedback controllers can be designed to achieve desired sensitivity \(\lbrack\mathcal{S}(s)\rbrack\) and complementary sensitivity functions \(\lbrack\mathcal{T}(s)\rbrack\) at critical (loop-breaking) points in the feedback system (for example, at either the input or output of the plant). Of course, there is a price to be paid for this improvement in stability robustness! The newly designed control system may have worse sensor noise sensitivity properties. Intuitively, one can think of making (some of) the estimator poles arbitrarily fast so the loop gain is approximately that of LQR. Alternatively, one can think of essentially "inverting" the plant transfer function so that all the LHP poles of the plant are cancelled by the dynamic compensator to achieve the desired loop shape. There are obvious trade-offs, and the designer needs to be careful to make the correct choice for the given problem, depending on the control system specifications.

LTR is a well-known technique now, and specific practical design procedures have been identified (Athans, 1986; Stein and Athans, 1987; Saberi et al., 1993). The same procedures may also be applied to nonminimum phase systems, but there is no guarantee on the extent of possible recovery. The LTR technique may be viewed as a systematic procedure to study design trade-offs for linear quadratic-based compensator design (Doyle and Stein, 1981). We will now formulate the LTR problem.

Consider the linear system

\[\begin{matrix} \overset{˙}{\mathbf{x}} & \ = \mathbf{Ax} + \mathbf{B}u + w, \\ y & \ = \mathbf{Cx} + v \end{matrix}\]

where \(w\) and \(v\) are uncorrelated zero-mean white Gaussian process and sensor noise with covariance matrices \(\mathbf{R}_{w} \geq 0\) and \(\mathbf{R}_{v} > 0\). The estimator design yields

\[\begin{matrix} & \overset{˙}{\widehat{\mathbf{x}}} = \mathbf{A}\widehat{\mathbf{x}} + \mathbf{B}u + \mathbf{L}(y - \widehat{y}), \\ & \widehat{y} = \mathbf{C}\widehat{\mathbf{x}} \end{matrix}\]

resulting in the usual dynamic compensator

\[D_{c}(s) = - \mathbf{K}(s\mathbf{I} - \mathbf{A} + \mathbf{BK} + \mathbf{LC})^{- 1}\mathbf{L}. \]

We will now treat the noise parameters, \(\mathbf{R}_{w}\) and \(\mathbf{R}_{v}\), as design "knobs" in the dynamic compensator design. Without loss of generality, let us choose \(\mathbf{R}_{w} = \Gamma^{T}\Gamma\) and \(\mathbf{R}_{v} = 1\). For LTR, assume \(\Gamma = q\mathbf{B}\), where \(q\) is a scalar design parameter. The estimator design is then based on the specific design parameters \(\mathbf{R}_{w}\) and \(\mathbf{R}_{v}\). It can be shown that, for a minimum-phase system, as \(q\) becomes large (Doyle and Stein, 1979),

\[\lim_{q \rightarrow \infty}\mspace{2mu} D_{c}(s)G(s) = \mathbf{K}(s\mathbf{I} - \mathbf{A})^{- 1}\mathbf{B}, \]

where the convergence is pointwise in \(s\) and the degree of recovery can be arbitrarily good. This design procedure in effect "inverts" the plant transfer function in the limit as \(q \rightarrow \infty\) :

\[\lim_{q \rightarrow \infty}\mspace{2mu} D_{c}(s) = \mathbf{K}(s\mathbf{I} - \mathbf{A})^{- 1}\mathbf{B}G^{- 1}(s) \]

This is precisely the reason that full-loop transfer recovery is not possible for a nonminimum-phase system. This limiting behavior may be explained using the symmetric root loci. As \(q \rightarrow \infty\), some of the estimator poles approach the zeros of

\[G_{e}(s) = \mathbf{C}(s\mathbf{I} - \mathbf{A})^{- 1}\mathbf{\Gamma}, \]

and the rest tend to infinity \(\ ^{18}\) [see Eqs. (7.163) and (7.164)]. In practice, the LTR design procedure can still be applied to a nonminimum-phase plant. The degree of recovery will depend on the specific locations of the nonminimum-phase zeros. Sufficient recovery should be possible at many frequencies if the RHP zeros are located outside the specified closed-loop bandwidth. Limits on achievable performance of feedback systems due to RHP zeros are discussed in Freudenberg and Looze (1985). We will next illustrate the LTR procedure by a simple example.

LTR Design for Satellite Attitude Control

Consider the satellite system with state-space description

\[\begin{matrix} \mathbf{A} = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}, & \mathbf{B} = \begin{bmatrix} 0 \\ 1 \end{bmatrix}, \\ \mathbf{C} = \begin{bmatrix} 1 & 0 \end{bmatrix}, & D = 0. \end{matrix}\]

(a) Design an LQR controller with \(\mathbf{Q} = \rho\mathbf{C}^{T}\mathbf{C}\) and \(R = 1,\rho = 1\), and determine the loop gain.

(b) Then design a compensator that recovers the LQR loop gain of part (a) using the LTR technique for \(q = 1,10,100\).

(c) Compare the different candidate designs in part (b) with respect to the actuator activity due to additive white Gaussian sensor noise.

Solution. Using lqr, the selected LQR weights result in the feedback gain \(\mathbf{K} = \begin{bmatrix} 1 & 1.414 \end{bmatrix}\). The loop transfer function is

\[\mathbf{K}(s\mathbf{I} - \mathbf{A})^{- 1}\mathbf{B} = \frac{1.414(s + 0.707)}{s^{2}} \]
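For this double integrator the LQR gain can be verified in closed form; a sketch in Python (stdlib only, illustrative) solving the \(2 \times 2\) algebraic Riccati equation by hand:

```python
import math

# P = [[p1, p2], [p2, p3]] solves A'P + PA - P B B' P + Q = 0 with
# A = [[0,1],[0,0]], B = [0,1]', Q = C'C = [[1,0],[0,0]], R = 1; K = B'P.
p2 = 1.0                  # (1,1) entry: 1 - p2^2 = 0
p3 = math.sqrt(2.0 * p2)  # (2,2) entry: 2*p2 - p3^2 = 0
p1 = p2 * p3              # (1,2) entry: p1 - p2*p3 = 0
K = [p2, p3]              # B'P picks out the second row, [p2, p3]
print([round(k, 3) for k in K])  # -> [1.0, 1.414]
```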

A magnitude frequency response plot of this LQR loop gain is shown in Fig. 7.73. For the estimator design using lqe, let \(\mathbf{\Gamma} = q\mathbf{B},\mathbf{R}_{w} = \Gamma^{T}\mathbf{\Gamma}\), \(\mathbf{R}_{v} = 1\), and choose \(q = 10\), resulting in the estimator gain

\[\mathbf{L} = \begin{bmatrix} 14.142 \\ 100 \end{bmatrix}\]
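For this particular plant and noise model, the filter Riccati equation also reduces to a closed form; a derivation sketch in Python (illustrative, assuming the \(\Gamma = q\mathbf{B}\), \(\mathbf{Q}_1 = \Gamma^{T}\Gamma\), \(\mathbf{R}_{v} = 1\) choices above):

```python
import math

# lqe(A, gam, C, Q1, rv) with gam = q*B and Q1 = gam'*gam injects an
# effective process-noise covariance q^4 * B*B'; solving the 2x2 filter
# Riccati equation by hand for the double integrator then gives
# L = [sqrt(2)*q, q^2]'.
q = 10.0
L = [math.sqrt(2.0) * q, q * q]
print([round(l, 3) for l in L])  # -> [14.142, 100.0]
```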

The compensator transfer function is

\[\begin{matrix} D_{c}(s) & \ = \mathbf{K}(s\mathbf{I} - \mathbf{A} + \mathbf{BK} + \mathbf{LC})^{- 1}\mathbf{L} \\ & \ = \frac{155.56(s + 0.6428)}{\left( s^{2} + 15.556s + 121 \right)} = \frac{155.56(s + 0.6428)}{(s + 7.77 + j7.77)(s + 7.77 - j7.77)}, \end{matrix}\]

and the loop transfer function is

\[D_{c}(s)G(s) = \frac{155.56(s + 0.6428)}{s^{2}(s + 7.77 + j7.77)(s + 7.77 - j7.77)} \]

Figure 7.73 shows the frequency response of the loop transfer function for several values of \(q\) \((q = 1,10,100)\), along with the ideal LQR loop transfer function frequency response. As seen from this figure, the loop gain tends to approach that of LQR as the value of \(q\) increases. As seen in Fig. 7.73, for \(q = 10\), the "recovered" gain margin is \(GM = 11.1\) (that is, \(20.9\) dB) and the \(PM = {55.06}^{\circ}\). Sample Matlab statements to carry out the preceding LTR design procedure are as follows:

A = [0 1; 0 0];
B = [0; 1];
C = [1 0];
D = [0];
sys0 = ss(A,B,C,D);
C1 = [1 0];
sys = ss(A,B,C1,D);
w = logspace(-1,3,1000);
rho = 1.0;
Q = rho*C'*C;
r = 1;
K = lqr(A,B,Q,r);
sys1 = ss(A,B,K,0);
[maggk1,phasgk1] = bode(sys1,w);
q = 10;
gam = q*B;
Q1 = gam'*gam;
rv = 1;
L = lqe(A,gam,C,Q1,rv);
aa = A - B*K - L*C;
bb = L;
cc = K;
dd = 0;
sysk = ss(aa,bb,cc,dd);
sysgk = series(sys0,sysk);
[maggk,phsgk,w] = bode(sysgk,w);
[gm,phm,wcg,wcp] = margin(maggk,phsgk,w);
loglog(w,[maggk1(:) maggk(:)]);
semilogx(w,[phasgk1(:) phsgk(:)]);

Figure 7.73

Frequency response plots for LTR design

To determine the effect of sensor noise, \(v\), on the actuator activity, we determine the transfer function from \(v\) to \(u\) as shown in Fig. 7.74. For the selected value of LTR design parameter, \(q = 10\), we have

Figure 7.74

Closed-loop system for LTR design

RMS value

Figure 7.75

Simulink block diagram for LTR design

Source: Franklin, Gene F. Feedback Control of Dynamic Systems, 8E, 2019, Pearson Education, Inc., New York, NY.

\[\begin{matrix} \frac{U(s)}{V(s)} & \ = H(s) = \frac{- D_{c}(s)}{1 + D_{c}(s)G(s)} \\ & \ = \frac{- 155.56s^{2}(s + 0.6428)}{s^{4} + 15.556s^{3} + 121s^{2} + 155.56s + 99.994} \end{matrix}\]

One reasonable measure of the effect of the sensor noise on the actuator activity is the root-mean-square (RMS) value of the control, \(u\), due to the additive noise, \(v\). The RMS value of the control may be computed as

\[\parallel u \parallel_{rms} = \left( \frac{1}{T_{0}}\int_{0}^{T_{0}}\mspace{2mu}\mspace{2mu} u(t)^{2}dt \right)^{1/2} \]

where \(T_{0}\) is the signal duration. Assuming white Gaussian noise \(v\), the RMS value of the control can also be determined analytically (Boyd and Barratt, 1991). The closed-loop Simulink diagram with band-limited white sensor noise excitation is shown in Fig. 7.75. The values of the RMS control were computed for different values of the LTR design parameter \(q\), using the Simulink simulations, and are tabulated in Table 7.2. The results suggest increased actuator activity, and hence increased vulnerability to actuator wear, as \(q\) is increased. Refer to the Matlab commands ltry and ltru for the LTR computations.
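The RMS definition above is straightforward to approximate from time samples; a sketch in Python (illustrative, not the book's Simulink computation):

```python
import math

def rms(samples):
    # Discrete version of ||u||_rms = sqrt((1/T0) * integral of u^2 dt),
    # for uniformly spaced samples over the signal duration.
    return math.sqrt(sum(u * u for u in samples) / len(samples))

# Sanity check: a sinusoid of amplitude A has RMS value A/sqrt(2).
n = 10000
sine = [2.0 * math.sin(2.0 * math.pi * k / n) for k in range(n)]
print(round(rms(sine), 4))  # -> 1.4142
```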

TABLE 7.2

Computed RMS Control for Various Values of LTR Tuning Parameter \(q\)

$$\mathbf{q}$$ $$\parallel u \parallel_{rms}$$
1 0.1454
10 2.8054
100 70.5216

General controller in polynomial form

Figure 7.76

Direct transfer-function formulation

\(\Delta\ 7.12\) Direct Design with Rational Transfer Functions

An alternative to the state-space methods discussed so far is to postulate a general-structure dynamic controller with two inputs \((r\) and \(y)\) and one output \((u)\) and to solve for the transfer function of the controller to give a specified overall \(r\)-to- \(y\) transfer function. A block diagram of the situation is shown in Fig. 7.76. We model the plant as the transfer function

\[\frac{Y(s)}{U(s)} = \frac{b(s)}{a(s)} \]

rather than by state equations. The controller is also modeled by its transfer function, in this case, a transfer function with two inputs and one output:

\[U(s) = - \frac{c_{y}(s)}{d(s)}Y(s) + \frac{c_{r}(s)}{d(s)}R(s) \]

Here \(d(s),c_{y}(s)\), and \(c_{r}(s)\) are polynomials. In order for the controller of Fig. 7.76 and Eq. (7.265) to be implemented, the orders of the numerator polynomials \(c_{y}(s)\) and \(c_{r}(s)\) must not be higher than the order of the denominator polynomial \(d(s)\).

To carry out the design, we require that the closed-loop transfer function defined by Eqs. (7.264) and (7.265) be matched to the desired transfer function

\[\frac{Y(s)}{R(s)} = \frac{c_{r}(s)b(s)}{\alpha_{c}(s)\alpha_{e}(s)} \]

Equation (7.266) tells us that the zeros of the plant must be zeros of the overall system. The only way to change this is to have factors of \(b(s)\) appear in either \(\alpha_{c}\) or \(\alpha_{e}\). We combine Eqs. (7.264) and (7.265) to get

\[a(s)Y(s) = b(s)\left\lbrack - \frac{c_{y}(s)}{d(s)}Y(s) + \frac{c_{r}(s)}{d(s)}R(s) \right\rbrack \]

Diophantine equation

Dimension of the controller

which can be rewritten as

\[\left\lbrack a(s)d(s) + b(s)c_{y}(s) \right\rbrack Y(s) = b(s)c_{r}(s)R(s) \]

Comparing Eq. (7.266) with Eq. (7.268), we immediately see that the design can be accomplished if we can solve the Diophantine equation

\[a(s)d(s) + b(s)c_{y}(s) = \alpha_{c}(s)\alpha_{e}(s) \]

for given arbitrary \(a,b,\alpha_{c}\), and \(\alpha_{e}\). Because each transfer function is a ratio of polynomials, we can assume \(a(s)\) and \(d(s)\) are monic polynomials; that is, the coefficient of the highest power of \(s\) in each polynomial is unity. The question is, how many equations and how many unknowns are there if we match coefficients of equal powers of \(s\) in Eq. (7.269)? If \(a(s)\) is of degree \(n\) (given) and \(d(s)\) is of degree \(m\) (to be selected), then a direct count yields \(2m + 1\) unknowns in \(d(s)\) and \(c_{y}(s)\), and \(n + m\) equations from the coefficients of powers of \(s\). Thus the requirement is that

\[2m + 1 \geq n + m \]

or

\[m \geq n - 1. \]

One possibility for a solution is to choose \(d(s)\) of degree \(n\) and \(c_{y}(s)\) of degree \(n - 1\). In that case, which corresponds to the state-space design for a full-order estimator, there are \(2n\) equations and \(2n\) unknowns with \(\alpha_{c}\alpha_{e}\) of degree \(2n\). The resulting equations will then have a solution for arbitrary \(\alpha_{c}\alpha_{e}\) if and only if \(a(s)\) and \(b(s)\) have no common factors. \(\ ^{19}\)

EXAMPLE 7.40

Pole Placement for Polynomial Transfer Functions

Using the polynomial method, design a controller of order \(n\) for the third-order plant in Example 7.29. Note that if the polynomials \(\alpha_{c}(s)\) and \(\alpha_{e}(s)\) from Example 7.29 are multiplied together, the result is the desired closed-loop characteristic equation:

\(\alpha_{c}(s)\alpha_{e}(s) = s^{6} + 14s^{5} + 122.75s^{4} + 585.2s^{3} + 1505.64s^{2} + 2476.8s + 1728\).

Solution. Using Eq. (7.269) with \(b(s) = 10\), we find that

\(\left( d_{0}s^{3} + d_{1}s^{2} + d_{2}s + d_{3} \right)\left( s^{3} + 10s^{2} + 16s \right) + 10\left( c_{0}s^{2} + c_{1}s + c_{2} \right) \equiv \alpha_{c}(s)\alpha_{e}(s)\).

We have expanded the polynomial \(d(s)\) with coefficients \(d_{i}\) and the polynomial \(c_{y}(s)\) with coefficients \(c_{i}\).

Now we equate the coefficients of the like powers of \(s\) in Eq. (7.271) to find that the parameters must satisfy \(\ ^{20}\)

\[\begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 10 & 1 & 0 & 0 & 0 & 0 & 0 \\ 16 & 10 & 1 & 0 & 0 & 0 & 0 \\ 0 & 16 & 10 & 1 & 0 & 0 & 0 \\ 0 & 0 & 16 & 10 & 10 & 0 & 0 \\ 0 & 0 & 0 & 16 & 0 & 10 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 10 \end{bmatrix}\begin{bmatrix} d_{0} \\ d_{1} \\ d_{2} \\ d_{3} \\ c_{0} \\ c_{1} \\ c_{2} \end{bmatrix} = \begin{bmatrix} 1 \\ 14 \\ 122.75 \\ 585.2 \\ 1505.64 \\ 2476.8 \\ 1728 \end{bmatrix}.\]

The solution to Eq. (7.272) is

\[\begin{matrix} d_{0} = 1, & c_{0} = 190.1, \\ d_{1} = 4, & c_{1} = 481.8, \\ d_{2} = 66.75, & c_{2} = 172.8, \\ d_{3} = - 146.3. & \end{matrix}\]

[The solution can be found using \(x = a \smallsetminus b\) command in Matlab, where \(a\) is the Sylvester matrix, and \(b\) is the right-hand side in Eq. (7.272).] Hence the controller transfer function is

\[\frac{c_{y}(s)}{d(s)} = \frac{190.1s^{2} + 481.8s + 172.8}{s^{3} + 4s^{2} + 66.75s - 146.3} \]

Note the coefficients of Eq. (7.273) are the same as those of the controller \(D_{c}(s)\) (which we obtained using the state-variable techniques), once the factors in \(D_{c}(s)\) are multiplied out.
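The Sylvester system of Eq. (7.272) can be solved directly; a sketch in Python (stdlib Gaussian elimination standing in for Matlab's \(x = a \smallsetminus b\)):

```python
# Sylvester matrix and right-hand side from Eq. (7.272).
A = [
    [1, 0, 0, 0, 0, 0, 0],
    [10, 1, 0, 0, 0, 0, 0],
    [16, 10, 1, 0, 0, 0, 0],
    [0, 16, 10, 1, 0, 0, 0],
    [0, 0, 16, 10, 10, 0, 0],
    [0, 0, 0, 16, 0, 10, 0],
    [0, 0, 0, 0, 0, 0, 10],
]
b = [1, 14, 122.75, 585.2, 1505.64, 2476.8, 1728]

def solve(A, b):
    # Gaussian elimination with partial pivoting, then back substitution.
    n = len(b)
    M = [row[:] + [rhs] for row, rhs in zip(A, b)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][j] * x[j] for j in range(k + 1, n))) / M[k][k]
    return x

d0, d1, d2, d3, c0, c1, c2 = solve(A, b)
print([round(v, 2) for v in [d0, d1, d2, d3, c0, c1, c2]])
# -> [1.0, 4.0, 66.75, -146.3, 190.06, 481.76, 172.8]
```

The unrounded values \(c_{0} = 190.064\) and \(c_{1} = 481.76\) match the rounded coefficients quoted in the example.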

The reduced-order compensator can also be derived using a polynomial solution.

EXAMPLE 7.41

Reduced-Order Design for a Polynomial Transfer Function Model

Design a reduced-order controller for the third-order system in Example 7.29. The desired characteristic equation is

\[\alpha_{c}(s)\alpha_{e}(s) = s^{5} + 12s^{4} + 74s^{3} + 207s^{2} + 378s + 288. \]

Solution. The equations needed to solve this problem are the same as those used to obtain Eq. (7.271), except that we take both \(d(s)\) and \(c_{y}(s)\) to be of degree \(n - 1\). We need to solve

\[\left( d_{0}s^{2} + d_{1}s + d_{2} \right)\left( s^{3} + 10s^{2} + 16s \right) + 10\left( c_{0}s^{2} + c_{1}s + c_{2} \right) \equiv \alpha_{c}(s)\alpha_{e}(s) \]

Equating coefficients of like powers of \(s\) in Eq. (7.274), we obtain

\[\begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0 \\ 10 & 1 & 0 & 0 & 0 & 0 \\ 16 & 10 & 1 & 0 & 0 & 0 \\ 0 & 16 & 10 & 10 & 0 & 0 \\ 0 & 0 & 16 & 0 & 10 & 0 \\ 0 & 0 & 0 & 0 & 0 & 10 \end{bmatrix}\begin{bmatrix} d_{0} \\ d_{1} \\ d_{2} \\ c_{0} \\ c_{1} \\ c_{2} \end{bmatrix} = \begin{bmatrix} 1 \\ 12 \\ 74 \\ 207 \\ 378 \\ 288 \end{bmatrix}\]

The solution is (again using the \(x = a \smallsetminus b\) command in Matlab)

\[\begin{matrix} d_{0} = 1, & c_{0} = - 20.8, \\ d_{1} = 2.0, & c_{1} = - 23.6, \\ d_{2} = 38, & c_{2} = 28.8, \end{matrix}\]

and the resulting controller is

\[\frac{c_{y}(s)}{d(s)} = \frac{- 20.8s^{2} - 23.6s + 28.8}{s^{2} + 2.0s + 38} \]

Again, Eq. (7.276) is exactly the same as \(D_{cr}(s)\) derived using the state-variable techniques in Example 7.30, once the polynomials of \(D_{cr}(s)\) are multiplied out and minor numerical differences are considered.

Notice the reference input polynomial \(c_{r}(s)\) does not enter into the analysis of Examples 7.40 and 7.41. We can select \(c_{r}(s)\) so it will assign zeros in the transfer function from \(R(s)\) to \(Y(s)\). This is the same role played by \(\gamma(s)\) in Section 7.9. One choice is to select \(c_{r}(s)\) to cancel \(\alpha_{e}(s)\) so the overall transfer function is

\[\frac{Y(s)}{R(s)} = \frac{K_{s}b(s)}{\alpha_{c}(s)} \]

This corresponds to the first and most common choice of \(\mathbf{M}\) and \(\bar{N}\) for introducing the reference input described in Section 7.9.

Adding integral control to the polynomial solution

It is also possible to introduce integral control and, indeed, internal-model-based robust tracking control into the polynomial design method. What is required is that we have error control, and that the controller has poles at the internal model locations. To get error control with the structure of Fig. 7.76, we need only let \(c_{r} = c_{y}\). To get the desired poles into the controller, we need to require that a specific factor be part of \(d(s)\). For integral control, the most common case, this is almost trivial. The polynomial \(d(s)\) will have a root at zero if we set the last term, \(d_{m}\), to zero. The resulting equations can be solved if \(m = n\). For a more general internal model, we define \(d(s)\) to be the product of a reduced-degree polynomial and a specified polynomial such as Eq. (7.247), and match coefficients in the Diophantine equation as before. The process is straightforward but tedious. Again we caution that, while the polynomial design method can be effective, the numerical problems of this method are often much worse than those associated with methods based on state equations. For higher-order systems, as well as systems with multiple inputs and outputs, the state-space methods are preferable.

\(\Delta\ 7.13\) Design for Systems with Pure Time Delay

In any linear system consisting of lumped elements, the response of the system appears immediately after an excitation of the system. In some feedback systems - for example, process control systems, whether controlled by a human operator in the loop or by computer - there is a pure time delay (also called transportation lag) in the system. As a result of the distributed nature of these systems, the response remains identically zero until after a delay of \(\lambda\) seconds. A typical step response is shown in Fig. 7.77(a). The transfer function of a pure transportation lag is \(e^{- \lambda s}\). We can represent an overall transfer function of a SISO system with time delay as

\[G_{I}(s) = G(s)e^{- \lambda s} \]

where \(G(s)\) has no pure time delay. Because \(G_{I}(s)\) does not have a finite state description, standard use of state-variable methods is impossible. However, Smith (1958) showed how to construct a feedback structure that effectively takes the delay outside the loop and allows a feedback design based on \(G(s)\) alone, which can be done with standard methods. The result of this method is a design having closed-loop transfer function with delay \(\lambda\) but otherwise showing the same response as the closed-loop design based on no delay. To see how the method works, let us consider the feedback structure shown in Fig. 7.77(b). The overall transfer function is

\[\frac{Y(s)}{R(s)} = \mathcal{T}(s) = \frac{D_{c}^{'}(s)G(s)e^{- \lambda s}}{1 + D_{c}^{'}(s)G(s)e^{- \lambda s}} \]

Smith suggested that we solve for \(D_{c}^{'}(s)\) by setting up a dummy overall transfer function in which the controller transfer function \(D_{c}(s)\) is in a loop with \(G(s)\) with no loop delay but with an overall delay of \(\lambda\) :

\[\frac{Y(s)}{R(s)} = \mathcal{T}(s) = \frac{D_{c}(s)G(s)}{1 + D_{c}(s)G(s)}e^{- \lambda s} \]

Equating the two expressions for \(\mathcal{T}(s)\) and solving for \(D_{c}^{'}(s)\) yields

\[D_{c}^{'}(s) = \frac{D_{c}(s)}{1 + D_{c}(s)\left\lbrack G(s) - G(s)e^{- \lambda s} \right\rbrack}. \]

If the plant transfer function and the delay are known, \(D_{c}^{'}(s)\) can be realized with real components by means of the block diagram shown in Fig. 7.77(c). With this knowledge, we can design the compensator \(D_{c}(s)\) in the usual way, based on Eq. (7.279), as if there were no delay, then implement it as shown in Fig. 7.77(c). The resulting closed-loop system would exhibit the behavior of a finite closed-loop system except for the time delay \(\lambda\). This design approach is particularly suitable when the pure delay, \(\lambda\), is significant compared with the process time constant, for example, in pulp and paper process applications.

(a)

(b)

(c)

Figure 7.77

A Smith regulator for systems with pure time delay

Notice that, conceptually, the Smith compensator is feeding back a simulated plant output to cancel the true plant output and then adding in a simulated plant output without the delay. It can be demonstrated that \(D_{c}^{'}(s)\) in Fig. 7.77(c) is equivalent to an ordinary regulator in line with a compensator that provides significant phase lead. To implement such compensators in analog systems, it is usually necessary to approximate the delay required in \(D_{c}^{'}(s)\) by a Padé approximant; with digital compensators the delay can be implemented exactly (see Chapter 8). It is also a fact that the compensator \(D_{c}^{'}(s)\) is a strong function of \(G(s)\), and a small error in the model of the plant used in the controller could lead to large errors in the closed loop, perhaps even to instability. This design is very sensitive both to uncertainties in the plant parameters and to uncertainty in the time delay. If \(D_{c}(s)\) is implemented as a PI controller, then one could detune (that is, reduce the gain) to try to ensure stability and reasonable performance. For automatic tuning of the Smith regulator and an application to Stanford's quiet hydraulic precision lathe fluid temperature control, refer to Huang and DeBra (2000).
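The Smith structure is easy to sanity-check in simulation; a discrete-time sketch in Python (all numbers illustrative, not the heat-exchanger values of the next example), with a PI controller \(D_{c}\) designed on the delay-free model:

```python
from collections import deque

dt, tau, lam = 0.01, 10.0, 5.0   # plant G(s) = 1/(tau*s+1) with delay lam
kp, ki = 2.0, 0.2                # PI gains chosen for the delay-free loop
nd = int(lam / dt)               # delay in samples

y = ym = ymd = 0.0               # plant output, model output, delayed model output
integ = 0.0
buf_u = deque([0.0] * nd)        # delay line on the plant input
buf_m = deque([0.0] * nd)        # delay line on the model output
r = 1.0                          # unit-step reference
for _ in range(int(100.0 / dt)):
    # Smith feedback: the delayed plant output is augmented by the model's
    # undelayed prediction, y + (ym - ymd), so Dc sees a delay-free loop.
    e = r - (y + ym - ymd)
    integ += e * dt
    u = kp * e + ki * integ
    ym += dt * (-ym + u) / tau   # delay-free internal model
    buf_m.append(ym)
    ymd = buf_m.popleft()
    buf_u.append(u)              # true plant sees u delayed by lam
    y += dt * (-y + buf_u.popleft()) / tau

print(round(y, 2))               # settles at the reference despite the delay
```

Because the model matches the plant exactly here, \(y - \widehat{y}_{md}\) cancels and the controller effectively regulates the delay-free model; plant-model mismatch is what degrades this scheme in practice, as noted above.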

EXAMPLE 7.42

Heat Exchanger: Design with Pure Time Delay

Figure 7.78

A heat exchanger

Figure 7.78 shows the heat exchanger from Example 2.18. The temperature of the product is controlled by controlling the flow rate of steam in the exchanger jacket. The temperature sensor is several meters downstream from the steam control valve, which introduces a transportation lag into the model. A suitable model is given by

\[G(s) = \frac{e^{- 5s}}{(10s + 1)(60s + 1)} \]

Design a controller for the heat exchanger using the Smith compensator and pole placement. The control poles are to be at

\[p_{c} = - 0.05 \pm 0.087j \]

and the estimator poles are to be at three times the control poles' natural frequency:

\[p_{e} = - 0.15 \pm 0.26j \]

Simulate the response of the system with Simulink.

Solution. A suitable set of state-space equations is

\[\begin{matrix} \overset{˙}{\mathbf{x}}(t) & \ = \begin{bmatrix} - 0.017 & 0.017 \\ 0 & - 0.1 \end{bmatrix}\mathbf{x}(t) + \begin{bmatrix} 0 \\ 0.1 \end{bmatrix}u(t - 5), \\ y & \ = \begin{bmatrix} 1 & 0 \end{bmatrix}\mathbf{x}, \\ \lambda & \ = 5 \end{matrix}\]

For the specified control pole locations, and for the moment ignoring the time delay, we find that the state feedback gain is

\[\mathbf{K} = \begin{bmatrix} 5.2 & - 0.17 \end{bmatrix}.\]

Figure 7.79

Closed-loop Simulink diagram for a heat exchanger

Source: Reprinted with permission of The MathWorks, Inc.

For the given estimator poles, the estimator gain matrix for a full-order estimator is

\[\mathbf{L} = \begin{bmatrix} 0.18 \\ 4.2 \end{bmatrix}\]

The resulting controller transfer function is

\[D_{c}(s) = \frac{U(s)}{Y(s)} = \frac{- 0.25(s + 1.8)}{s + 0.14 \pm 0.27j} \]

If we choose to adjust for unity closed-loop DC gain, then

\[\bar{N} = 1.2055\text{.}\text{~} \]

The Simulink diagram for the system is shown in Fig. 7.79. The open-loop and closed-loop step responses of the system and the control effort are shown in Figs. 7.80 and 7.81, and the root locus of the system (without the delay) is shown in Fig. 7.82. Note the time delay of \(5sec\) in Figs. 7.80 and 7.81 is quite small compared with the response of the system, and is barely noticeable in this case.

Solution of State Equations

It is possible to write down the solution to the state equations using the matrix exponential. See Appendix W7.13.1 available online at www.pearsonglobaleditions.com.

Figure 7.80

Step response for a heat exchanger

Figure 7.81

Control effort for a heat exchanger

Figure 7.82

Root locus for a heat exchanger

Historical Perspective

The state-variable approach to solving differential equations in engineering problems was advocated by R. E. Kalman while attending MIT. This was revolutionary and ruffled some feathers, as it went against the grain. The well-established academics, Kalman's teachers, were well versed in the frequency-domain techniques and staunch supporters of them. Beginning in the late 1950s and early 1960s, Kalman wrote a series of seminal papers introducing the ideas of state variables, controllability, observability, the Linear Quadratic (LQ) regulator, and the Kalman filter. Gunkel and Franklin (1963) and Joseph and Tou (1961) independently showed the separation theorem, which made possible the Linear Quadratic Gaussian (LQG) problem, nowadays referred to as the \(H_{2}\) formulation. The separation theorem is a special case of the certainty-equivalence theorem of Simon (1956). The solutions to both LQ and LQG problems can be expressed in an elegant fashion in terms of the solutions to Riccati equations. D. G. Luenberger, who was taking a course with Kalman at Stanford University, derived the observer and reduced-order observer over a weekend after hearing Kalman suggest the problem in a lecture. Kalman, Bryson, Athans, and others contributed to the field of optimal control theory that was widely employed in aerospace problems, including the Apollo program. The book by Zadeh and Desoer published in 1962 was also influential in promoting the state-space method. In the 1970s, the robustness of LQ and LQG methods was studied, resulting in the celebrated and influential paper of Doyle and Stein in 1981. One of the most significant contributions of Doyle and Safonov was to extend the idea of frequency-domain gain to multi-input multi-output systems using the singular value decomposition. Others contributing to this research included G. Zames, who introduced the \(H_{\infty}\) methods that were found to be extensions of the \(H_{2}\) methods.
The resulting design techniques are known as \(H_{\infty}\) and \(\mu\)-synthesis procedures. During the 1980s, reliable numerical methods were developed for dealing with state-variable designs and computer-aided software for control design was developed. The invention of Matlab by Cleve Moler and its wide distribution by The MathWorks has had a huge impact not only in the control design field but on all interactive scientific computations.

In the mid-1970s, polynomial and matrix fraction descriptions (MFDs) of systems attracted much attention culminating in the celebrated Q-parametrization which characterizes the set of all stabilizing controllers for a feedback system.

While the state-variable methods were gaining momentum particularly in the United States, research groups in Europe especially in England led by Rosenbrock, MacFarlane, Munro, and others extended the classical techniques to multi-input multi-output systems. Hence root locus and frequency domain methods such as the (inverse) Nyquist techniques could be used for multi-input multi-output systems. Eventually in the 1980s, there was a realization that the power of both frequency
domain and state-variable methods should be combined for an eclectic control design method employing the best of both approaches.

In the 1970s and 1980s, there was a lot of research on discrete event systems, adaptive control and system identification techniques. In the 1990s, research on control of intelligent autonomous systems and hybrid systems began. Since the turn of the century, research has focused on networked control, cyber-physical systems, control of driverless cars, use of machine learning and convex optimization techniques for control, as well as continued research efforts in control of nonlinear, time-delay, and stochastic systems.

We saw in Chapter 7 that, in contrast to the frequency-response methods of Bode and Nyquist, the state-variable method not only deals with the input and output variables of the system but also with the internal physical variables. The state-variable methods can be used to study linear and nonlinear, as well as time-varying, systems. Furthermore, the state-variable method handles multi-input multi-output problems and high-order systems with equal ease. From a computational perspective, the state-variable methods are far superior to the frequency-domain techniques that require polynomial manipulations.

303. SUMMARY

  • To every transfer function that has no more zeros than poles, there corresponds a differential equation in state-space form.

  • State-space descriptions can be in several canonical forms. Among these are control, observer, and modal canonical forms.

  • Open-loop poles and zeros can be computed from the state description matrices \((\mathbf{A},\mathbf{B},\mathbf{C},D)\) :

\[\text{~}\text{Poles:}\text{~}\begin{matrix} & p = eig(\mathbf{A}),\ det(p\mathbf{I} - \mathbf{A}) = 0, \\ & \text{~}\text{Zeros:}\text{~}det\begin{bmatrix} z\mathbf{I} - \mathbf{A} & - \mathbf{B} \\ \mathbf{C} & D \end{bmatrix} = 0. \end{matrix}\]

  • For any controllable system of order \(n\), there exists a state feedback control law that will place the closed-loop poles at the roots of an arbitrary control characteristic equation of order \(n\).

  • The reference input can be introduced so as to result in zero steady-state error to a step command. This property is not expected to be robust to parameter changes.

  • Good closed-loop pole locations depend on the desired transient response, the robustness to parameter changes, and a balance between dynamic performance and control effort.

  • Closed-loop pole locations can be selected to result in a dominant second-order response, or to minimize a quadratic performance measure.

  • For any observable system of order \(n\), an estimator (or observer) can be constructed with only sensor inputs and a state that
    estimates the plant state. The \(n\) poles of the estimator error system can be placed arbitrarily.

  • Every transfer function can be represented by a minimal realization, that is, a state-space model that is both controllable and observable.

  • A single-input single-output system is completely controllable if and only if the input excites all the natural frequencies of the system, that is, there is no cancellation of the poles in the transfer function.

  • The control law and the estimator can be combined into a controller such that the poles of the closed-loop system are the union of the control-law-only poles and the estimator-only poles.

  • With the estimator-based controller, the reference input can be introduced in such a way as to permit \(n\) arbitrary zeros to be assigned. The most common choice is to assign the zeros to cancel the estimator poles, thus not exciting an estimator error.

  • Integral control can be introduced to obtain robust steady-state tracking of a step by augmenting the plant state. The design is also robust with respect to rejecting constant disturbances.

  • General robust control can be realized by combining the equations of the plant and the reference model into an error space and designing a control law for the extended system. Implementation of the robust design demonstrates the internal model principle. An estimator of the plant state can be added while retaining the robustness properties.

  • The model-following technique can produce superior tracking properties but suffers from robustness problems.

  • The estimator can be extended to include estimates of the equivalent control disturbance, and so result in robust tracking and disturbance rejection.

  • Pole-placement designs, including integral control, can be computed using the polynomials of the plant transfer function in place of the state descriptions. Designs using polynomials frequently have problems with numerical accuracy.

  • Controllers for plants that include a pure time delay can be designed as if there were no delay, then a controller can be implemented for the plant with the delay. The design can be expected to be sensitive to parameter changes, especially to uncertainty in the delay time.

  • Table 7.3 gives the important equations discussed in this chapter. The triangles indicate equations taken from optional sections in the text.

  • Determining a model from experimental data, or verifying an analytically based model by experiment, is an important step in system design by state-space analysis, a step that is not necessarily needed for compensator design via frequency-response methods.

Control canonical form

\[\begin{matrix} \mathbf{A}_{c} & \ = \begin{bmatrix} - a_{1} & - a_{2} & \cdots & \cdots & - a_{n} \\ 1 & 0 & \cdots & \cdots & 0 \\ 0 & 1 & 0 & \cdots & 0 \\ \vdots & & \ddots & 0 & \vdots \\ 0 & 0 & \cdots & 1 & 0 \end{bmatrix},\ \mathbf{B}_{c} = \begin{bmatrix} 1 \\ 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix}, \\ \mathbf{C}_{c} & \ = \begin{bmatrix} & b_{1} & b_{2} & \cdots & \cdots & b_{n} \end{bmatrix},\ D_{c} = 0. \end{matrix}\]
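
To make the pattern concrete, here is a minimal sketch in Python (the text's own examples use Matlab) that assembles \((\mathbf{A}_{c},\mathbf{B}_{c},\mathbf{C}_{c})\) from the transfer-function coefficients; the coefficients are illustrative, corresponding to \(G(s) = (7s + 1)/(s^{2} + 3s + 2)\), and are not taken from any particular problem:

```python
# Build control-canonical-form matrices from b(s)/a(s), where
# a(s) = s^n + a1 s^{n-1} + ... + an and b(s) = b1 s^{n-1} + ... + bn.
a = [3.0, 2.0]   # a1, a2 of the (monic) denominator
b = [7.0, 1.0]   # b1, b2 of the numerator
n = len(a)

# First row of Ac holds the negated denominator coefficients; the
# remaining rows are a shifted identity (the companion-matrix form).
Ac = [[-ai for ai in a]]
for i in range(n - 1):
    Ac.append([float(j == i) for j in range(n)])
Bc = [1.0] + [0.0] * (n - 1)
Cc = b[:]
Dc = 0.0

print(Ac, Bc, Cc)
```

The Matlab function tf2ss (used in Problem 7.4) performs the equivalent construction.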

State description

\[\overset{˙}{\mathbf{x}} = \mathbf{Ax} + \mathbf{B}u \]

Output equation

\[y = \mathbf{Cx} + Du \]

Transformation of state

\[\overline{\mathbf{A}} = \mathbf{T}^{- 1}\mathbf{AT} \]

\[\overline{\mathbf{B}} = \mathbf{T}^{- 1}\mathbf{B} \]

\(y = \mathbf{CTz} + Du = \overline{\mathbf{C}}\mathbf{z} + \bar{D}u\),

where \(\overline{\mathbf{C}} = \mathbf{CT},\bar{D} = D\)

Controllability matrix

\[\mathcal{C} = \begin{bmatrix} \mathbf{B} & \mathbf{AB} & \cdots & \mathbf{A}^{n - 1}\mathbf{B} \end{bmatrix}\]
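
As a small numerical sketch (in Python rather than the text's Matlab, with illustrative matrices), the controllability matrix for a second-order single-input system is \(\mathcal{C} = \lbrack \mathbf{B}\ \ \mathbf{AB} \rbrack\), and the pair \((\mathbf{A},\mathbf{B})\) is controllable exactly when \(\mathcal{C}\) has full rank, which for \(n = 2\) means a nonzero determinant:

```python
# Illustrative 2x2 example (not from a specific problem in the text).
A = [[0.0, 1.0], [-2.0, -3.0]]
B = [0.0, 1.0]

# AB, then the controllability matrix C = [B  AB] column by column.
AB = [A[0][0]*B[0] + A[0][1]*B[1],
      A[1][0]*B[0] + A[1][1]*B[1]]
ctrb = [[B[0], AB[0]], [B[1], AB[1]]]

# Full rank <=> nonzero determinant for this n = 2 case.
det_ctrb = ctrb[0][0]*ctrb[1][1] - ctrb[0][1]*ctrb[1][0]
print(det_ctrb != 0)   # True: the pair (A, B) is controllable
```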

Transfer function from state equations

\[G(s) = \frac{Y(s)}{U(s)} = \mathbf{C}(s\mathbf{I} - \mathbf{A})^{- 1}\mathbf{B} + D \]
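
This formula can be evaluated directly at any complex frequency. A minimal Python sketch (illustrative matrices whose transfer function is known to be \(1/(s^{2} + 3s + 2)\), so the two computations can be cross-checked):

```python
# State-space data for the assumed example G(s) = 1/(s^2 + 3s + 2).
A = [[0.0, 1.0], [-2.0, -3.0]]
B = [0.0, 1.0]
C = [1.0, 0.0]
D = 0.0

def G(s):
    # Form (sI - A) and invert it explicitly (2x2 case).
    m = [[s - A[0][0], -A[0][1]], [-A[1][0], s - A[1][1]]]
    det = m[0][0]*m[1][1] - m[0][1]*m[1][0]
    inv = [[m[1][1]/det, -m[0][1]/det], [-m[1][0]/det, m[0][0]/det]]
    # x = (sI - A)^{-1} B, then G(s) = C x + D.
    x = [inv[0][0]*B[0] + inv[0][1]*B[1], inv[1][0]*B[0] + inv[1][1]*B[1]]
    return C[0]*x[0] + C[1]*x[1] + D

s0 = 1.0
print(G(s0), 1.0/(s0**2 + 3*s0 + 2))   # the two values agree
```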

Transfer-function poles

\[det\left( p_{i}\mathbf{I} - \mathbf{A} \right) = 0 \]

Transfer-function zeros

\[\alpha_{z}(s) = det\begin{bmatrix} z_{i}\mathbf{I} - \mathbf{A} & - \mathbf{B} \\ \mathbf{C} & D \end{bmatrix} = 0 \]
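
Both determinant conditions above can be checked numerically. The Python sketch below (illustrative matrices for \(G(s) = 1/(s^{2} + 3s + 2)\), not drawn from a problem in the text) computes the poles from the characteristic polynomial of \(\mathbf{A}\) and evaluates the zero determinant, which for this system turns out to be a nonzero constant, confirming there are no finite zeros:

```python
import cmath

A = [[0.0, 1.0], [-2.0, -3.0]]
B = [0.0, 1.0]
C = [1.0, 0.0]
D = 0.0

# Poles: roots of det(pI - A) = p^2 - tr(A) p + det(A) for a 2x2 A.
tr = A[0][0] + A[1][1]
det_A = A[0][0]*A[1][1] - A[0][1]*A[1][0]
disc = cmath.sqrt(tr*tr - 4*det_A)
poles = [(tr + disc)/2, (tr - disc)/2]

# Zeros: values z where det([[zI - A, -B], [C, D]]) = 0 (3x3 here).
def zero_det(z):
    M = [[z - A[0][0], -A[0][1], -B[0]],
         [-A[1][0], z - A[1][1], -B[1]],
         [C[0], C[1], D]]
    return (M[0][0]*(M[1][1]*M[2][2] - M[1][2]*M[2][1])
            - M[0][1]*(M[1][0]*M[2][2] - M[1][2]*M[2][0])
            + M[0][2]*(M[1][0]*M[2][1] - M[1][1]*M[2][0]))

print(sorted(p.real for p in poles))   # poles at -2 and -1
print(zero_det(0.0), zero_det(5.0))    # constant => no finite zeros
```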

Control characteristic equation

\[det\lbrack s\mathbf{I} - (\mathbf{A} - \mathbf{BK})\rbrack = 0 \]

Ackermann's control formula for pole placement

\[\mathbf{K} = \begin{bmatrix} 0 & \cdots & 0 & 1 \end{bmatrix}\mathcal{C}^{- 1}\alpha_{c}(\mathbf{A})\]
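
A minimal sketch of Ackermann's formula in Python (the text's examples use Matlab, e.g. acker). The single-input 2×2 system below is illustrative, not from a specific problem; both closed-loop poles are placed at \(s = -2\), so \(\alpha_{c}(s) = s^{2} + 4s + 4\):

```python
def matmul(X, Y):
    n, m, p = len(X), len(Y), len(Y[0])
    return [[sum(X[i][k]*Y[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

A = [[0.0, 1.0], [-2.0, -3.0]]
B = [[0.0], [1.0]]

# Controllability matrix C = [B  AB] and its inverse (2x2 case).
AB = matmul(A, B)
ctrb = [[B[0][0], AB[0][0]], [B[1][0], AB[1][0]]]
d = ctrb[0][0]*ctrb[1][1] - ctrb[0][1]*ctrb[1][0]
ctrb_inv = [[ctrb[1][1]/d, -ctrb[0][1]/d], [-ctrb[1][0]/d, ctrb[0][0]/d]]

# alpha_c(A) = A^2 + 4A + 4I: desired characteristic polynomial in A.
A2 = matmul(A, A)
alpha_c = [[A2[i][j] + 4*A[i][j] + 4*(i == j) for j in range(2)]
           for i in range(2)]

# K = [0 1] C^{-1} alpha_c(A): the last row of C^{-1} alpha_c(A).
K = matmul([ctrb_inv[1]], alpha_c)[0]
print(K)   # feedback gain

# Check: A - BK should have trace -4 and determinant 4 (poles -2, -2).
Acl = [[A[i][j] - B[i][0]*K[j] for j in range(2)] for i in range(2)]
print(Acl[0][0] + Acl[1][1], Acl[0][0]*Acl[1][1] - Acl[0][1]*Acl[1][0])
```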

Reference input gains

\[\begin{bmatrix} \mathbf{A} & \mathbf{B} \\ \mathbf{C} & D \end{bmatrix}\begin{bmatrix} \mathbf{N}_{\mathbf{X}} \\ N_{\mathbf{u}} \end{bmatrix} = \begin{bmatrix} \mathbf{0} \\ 1 \end{bmatrix}\]

Control equation with reference input

\[u = N_{u}r - \mathbf{K}\left( \mathbf{x} - \mathbf{N}_{\mathbf{x}}r \right) \]

\[= - \mathbf{Kx} + \left( N_{u} + \mathbf{KN}_{\mathbf{x}} \right)r \]

\[= - \mathbf{Kx} + \bar{N}r \]
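
The reference-input gains \(\mathbf{N}_{\mathbf{x}}\) and \(N_{u}\) come from solving the linear system above. A small Python sketch (illustrative 2-state, single-input, single-output data, not from a problem in the text) solves it by Gauss-Jordan elimination:

```python
# Hypothetical example data.
A = [[0.0, 1.0], [-2.0, -3.0]]
B = [0.0, 1.0]
C = [1.0, 0.0]
D = 0.0

# Augmented system [A B | 0; C D | 1] for the unknowns [Nx; Nu].
M = [[A[0][0], A[0][1], B[0], 0.0],
     [A[1][0], A[1][1], B[1], 0.0],
     [C[0],    C[1],    D,    1.0]]

# Gauss-Jordan elimination with partial pivoting.
n = 3
for col in range(n):
    piv = max(range(col, n), key=lambda r: abs(M[r][col]))
    M[col], M[piv] = M[piv], M[col]
    for r in range(n):
        if r != col:
            f = M[r][col] / M[col][col]
            M[r] = [a - f*b for a, b in zip(M[r], M[col])]
sol = [M[i][3] / M[i][i] for i in range(n)]
Nx, Nu = sol[:2], sol[2]
print(Nx, Nu)   # state feedforward Nx and scalar input gain Nu
```

With these gains, \(u = N_{u}r - \mathbf{K}(\mathbf{x} - \mathbf{N}_{\mathbf{x}}r)\) yields zero steady-state error to a step reference, as stated in the summary.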

Symmetric root locus

\[1 + \rho G_{0}( - s)G_{0}(s) = 0 \]

Estimator error characteristic equation

\[\alpha_{e}(s) = det\lbrack s\mathbf{I} - (\mathbf{A} - \mathbf{LC})\rbrack = 0 \]

Observer canonical form

\({\overset{˙}{\mathbf{x}}}_{\circ} = \mathbf{A}_{\circ}\mathbf{x}_{\circ} + \mathbf{B}_{\circ}u\),

\(y = \mathbf{C}_{\circ}\mathbf{x}_{\circ} + D_{\circ}u\),

where

\[\begin{matrix} & \mathbf{A}_{o} = \begin{bmatrix} - a_{1} & 1 & 0 & 0 & \ldots & 0 \\ - a_{2} & 0 & 1 & 0 & \ldots & \vdots \\ \vdots & \vdots & \ddots & & & 1 \\ - a_{n} & 0 & & 0 & & 0 \end{bmatrix},\ \mathbf{B}_{o} = \begin{bmatrix} b_{1} \\ b_{2} \\ \vdots \\ b_{n} \end{bmatrix}, \\ & \mathbf{C}_{o} = \begin{bmatrix} 1 & 0 & 0 & \ldots & 0 \end{bmatrix}, \end{matrix}\]

[TABLE]

304. REVIEW QUESTIONS

The following questions are based on a system in state-variable form with matrices \(\mathbf{A},\mathbf{B},\mathbf{C},D\), input \(u\), output \(y\), and state \(\mathbf{x}\).

7.1 Why is it convenient to write dynamic equations in state-variable form?

7.2 Give an expression for the transfer function of this system.

7.3 Give two expressions for the poles of the transfer function of the system.

7.4 Give an expression for the zeros of the system transfer function.

7.5 Under what condition will the state of the system be controllable?

7.6 Under what conditions will the system be observable from the output \(y\) ?

7.7 Give an expression for the closed-loop poles if state feedback of the form \(u = - \mathbf{Kx}\) is used.

7.8 Under what conditions can the feedback matrix \(\mathbf{K}\) be selected so the roots of \(\alpha_{c}(s)\) are arbitrary?

7.9 What is the advantage of using the LQR or SRL in designing the feedback matrix \(\mathbf{K}\) ?

7.10 What is the main reason for using an estimator in feedback control?

7.11 If the estimator gain \(\mathbf{L}\) is used, give an expression for the closed-loop poles due to the estimator.

7.12 Under what conditions can the estimator gain \(\mathbf{L}\) be selected so the roots of \(\alpha_{e}(s) = 0\) are arbitrary?

7.13 If the reference input is arranged so the input to the estimator is identical to the input to the process, what will the overall closed-loop transfer function be?

7.14 If the reference input is introduced in such a way as to permit the zeros to be assigned as the roots of \(\gamma(s)\), what will the overall closed-loop transfer function be?

7.15 What are the three standard techniques for introducing integral control in the state feedback design method?

305. PROBLEMS

306. Problems for Section 7.3: Block Diagrams and State-Space

Figure 7.83

Circuit for Problem 7.1

7.1 Write the dynamic equations describing the circuit in Fig. 7.83. Write the equations as a second-order differential equation in \(y(t)\). Assuming a zero input, solve the differential equation for \(y(t)\) using Laplace transform methods for the parameter values and initial conditions shown in the figure. Verify your answer using the initial command in Matlab.

7.2 A schematic for the satellite and scientific probe for the Gravity Probe-B (GP-B) experiment that was launched April 30, 2004 is sketched in Fig. 7.84. Assume that the mass of the spacecraft plus helium tank, \(m_{1}\), is \(1500\text{ }kg\) and the mass of the probe, \(m_{2}\), is \(800\text{ }kg\). A rotor will float inside the probe and will be forced to follow the probe with a capacitive forcing mechanism. The spring constant of the coupling, \(k\), is \(2.8 \times 10^{6}\). The viscous damping \(b\) is \(5.0 \times 10^{3}\).

(a) Write the equations of motion for the system consisting of masses \(m_{1}\) and \(m_{2}\) using the inertia position variables, \(y_{1}\) and \(y_{2}\).

(b) The actual disturbance \(u\) is a micrometeorite, and the resulting motion is very small. Therefore, re-write your equations with the scaled variables \(z_{1} = 10^{6}y_{1},z_{2} = 10^{6}y_{2}\), and \(v = 1000u\).

Figure 7.84

Schematic diagram of the GP-B satellite and probe

(c) Put the equations in state-variable form using the state \(\mathbf{x} = \begin{bmatrix} z_{1} & {\overset{˙}{z}}_{1} & z_{2} & {\overset{˙}{z}}_{2} \end{bmatrix}^{T}\), the output \(y = z_{2}\), and the input an impulse, \(u = 10^{- 3}\delta(t)\ N \cdot sec\), on mass \(m_{1}\).

(d) Using the numerical values, enter the equations of motion into Matlab in the form

\[\begin{matrix} \overset{˙}{\mathbf{x}} & \ = \mathbf{Ax} + \mathbf{B}v \\ y & \ = \mathbf{Cx} + Dv \end{matrix}\]

and define the Matlab system: sysGPB \(= ss(A,B,C,D)\). Plot the response of \(y\) caused by the impulse with the Matlab command impulse(sysGPB). This is the signal the rotor must follow.

(e) Use the Matlab commands \(p = eig(A)\) to find the poles (or roots) of the system, and \(z = tzero(A,B,C,D)\) to find the zeros of the system.

307. Problems for Section 7.4: Analysis of the State Equations

7.3 Give the state description matrices in control-canonical form for the following transfer functions:

(a) \(G(s) = \frac{1}{7s + 1}\).

(b) \(G(s) = \frac{6(s/3 + 1)}{(s/10 + 1)}\).

(c) \(G(s) = \frac{7s + 1}{s^{2} + 3s + 2}\).

(d) \(G(s) = \frac{s + 7}{s\left( s^{2} + 2s + 2 \right)}\).

(e) \(G(s) = \frac{(s + 7)\left( s^{2} + s + 25 \right)}{s^{2}(s + 2)\left( s^{2} + s + 36 \right)}\).

7.4 Use the Matlab function tf2ss to obtain the state matrices called for in Problem 7.3.

7.5 Give the state description matrices in modal canonical form for the transfer functions of Problem 7.3. Make sure that all entries in the state matrices are real valued by keeping any pairs of complex conjugate poles together, and realize them as a separate subblock in control canonical form.

7.6 A certain system with state \(\mathbf{x}\) is described by the state matrices,

\[\begin{matrix} & \mathbf{A} = \begin{bmatrix} - 1.5 & 1 \\ - 1.5 & 0 \end{bmatrix},\ \mathbf{B} = \begin{bmatrix} 1 \\ 5 \end{bmatrix}, \\ & \mathbf{C} = \begin{bmatrix} 1 & 0 \end{bmatrix},\ D = 0. \end{matrix}\]

Find the transformation \(\mathbf{T}\) so that if \(\mathbf{x} = \mathbf{Tz}\), the state matrices describing the dynamics of \(\mathbf{z}\) are in control canonical form. Compute the matrices \(\overline{\mathbf{A}},\overline{\mathbf{B}},\overline{\mathbf{C}}\), and \(\bar{D}\).

7.7 Show that the transfer function is not changed by a linear transformation of state.

7.8 Use block-diagram reduction or Mason's rule to find the transfer function for the system in observer canonical form depicted by Fig. 7.31.

7.9 Suppose we are given a system with state matrices \(\mathbf{A},\mathbf{B},\mathbf{C}(D = 0\) in this case). Find the transformation \(\mathbf{T}\) so, under Eqs. (7.21) and (7.22), the new state description matrices will be in observer canonical form.

7.10 Use the transformation matrix in Eq. (7.38) to explicitly multiply out the equations at the end of Example 7.9.

7.11 Find the state transformation that takes the observer canonical form of Eq. (7.32) to the modal canonical form.

7.12 (a) Find the transformation \(\mathbf{T}\) that will keep the description of the airplane system of Example 7.10 in modal canonical form but will convert each element of the input matrix \(\mathbf{B}_{m}\) to unity.

(b) Use Matlab to verify that your transformation does the job.

7.13 (a) Find the state transformation that will keep the description of the airplane system of Example 7.10 in modal canonical form but will cause the poles to be displayed in \(\mathbf{A}_{m}\) in order of increasing magnitude.

(b) Use Matlab to verify your result in part (a), and give the complete new set of state matrices as \(\overline{\mathbf{A}},\overline{\mathbf{B}},\overline{\mathbf{C}}\), and \(\bar{D}\).

7.14 Find the characteristic equation for the modal-form matrix \(\mathbf{A}_{m}\) of Eq. (7.14a) using Eq. (7.55).

7.15 Given the system

\[\overset{˙}{\mathbf{x}} = \begin{bmatrix} - 3 & 2 \\ - 5 & - 1 \end{bmatrix}\mathbf{x} + \begin{bmatrix} 0 \\ 1 \end{bmatrix}u\]

with zero initial conditions, find the steady-state value of \(x\) for a step input \(u\).

7.16 Consider the system shown in Fig. 7.85:

(a) Find the transfer function from \(U\) to \(Y\).

(b) Write state equations for the system using the state-variables indicated.

Figure 7.85

Block diagram for

Problem 7.16

Figure 7.86

Block diagram for Problem 7.17
7.17 Using the indicated state-variables, write the state equations for each of the systems shown in Fig. 7.86. Find the transfer function for each system using both block-diagram manipulation and matrix algebra [as in Eq. (7.45)].

(a)

(b)

7.18 For each of the listed transfer functions, write the state equations in both control and observer canonical form. In each case, draw a block diagram and give the appropriate expressions for \(\mathbf{A},\mathbf{B}\), and \(\mathbf{C}\).

(a) \(G(s) = \frac{\left( s^{2} - 2s + 7 \right)(s + 1)}{s^{5} + 2s^{4} + 7s^{3} + 5s^{2} + 10s + 1}\) (voltage equalization circuit for a solar photovoltaic array using a high-order low-ripple power converter)

(b) \(G(s) = \frac{s^{2} - 6}{s^{2}\left( s^{2} - 2 \right)}\) (control of an inverted pendulum by a force on the cart)

7.19 Consider the transfer function

\[G(s) = \frac{s + 4}{s^{2} + 3s + 2} \]

(a) By re-writing Eq. (7.283) in the form,

\[G(s) = \frac{1}{s + 2}\left( \frac{s + 4}{s + 1} \right) \]

find a series realization of \(G(s)\) as a cascade of two first-order systems.

(b) Using a partial-fraction expansion of \(G(s)\), find a parallel realization of \(G(s)\).

(c) Realize \(G(s)\) in control canonical form.

7.20 Show that the impulse response of the system \((\mathbf{A},\mathbf{B},\mathbf{C},D)\) is given by

\[h(t) = \mathbf{C}e^{\mathbf{A}t}\mathbf{B} + D\delta(t) \]

where \(e^{\mathbf{A}t}\) is the matrix exponential defined by

\[e^{\mathbf{A}t} = \left( \mathbf{I} + \mathbf{A}t + \frac{\mathbf{A}^{2}t^{2}}{2!} + \cdots \right) = \sum_{k = 0}^{\infty}\mspace{2mu}\frac{\mathbf{A}^{k}t^{k}}{k!} \]
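
The series above can be evaluated numerically by truncation. As a minimal Python sketch (illustrative only, separate from the proof the problem asks for), take the nilpotent matrix \(\mathbf{A} = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}\) of a double integrator, for which the series terminates and \(e^{\mathbf{A}t} = \begin{bmatrix} 1 & t \\ 0 & 1 \end{bmatrix}\) exactly:

```python
# Evaluate e^{At} by truncating the series I + At + (At)^2/2! + ...
def expm_series(A, t, terms=20):
    n = len(A)
    result = [[float(i == j) for j in range(n)] for i in range(n)]  # I
    term = [row[:] for row in result]
    for k in range(1, terms):
        # term <- term * (A t) / k, the next series term.
        term = [[sum(term[i][m]*A[m][j]*t for m in range(n)) / k
                 for j in range(n)] for i in range(n)]
        result = [[result[i][j] + term[i][j] for j in range(n)]
                  for i in range(n)]
    return result

# Double integrator: the series terminates after the linear term.
print(expm_series([[0.0, 1.0], [0.0, 0.0]], 2.5))
```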

308. Problems for Section 7.5: Control Law Design for Full-State Feedback

7.21 Consider the plant described by,

\[\begin{matrix} & \overset{˙}{\mathbf{x}} = \begin{bmatrix} 0 & 1 \\ 2 & - 9 \end{bmatrix}\mathbf{x} + \begin{bmatrix} 1 \\ 8 \end{bmatrix}u, \\ & y = \begin{bmatrix} 2 & 4 \end{bmatrix}\mathbf{x}. \end{matrix}\]

(a) Draw a block diagram for the plant with one integrator for each state variable.

(b) Find the transfer function using matrix algebra.

(c) Find the closed-loop characteristic equation if the feedback is

(i) \(u = - \begin{bmatrix} K_{1} & K_{2} \end{bmatrix}\mathbf{x}\);

(ii) \(u = - Ky\).

7.22 For the system

\[\begin{matrix} \overset{˙}{\mathbf{x}} & \ = \begin{bmatrix} 0 & 1 \\ - 7.2 & - 9.3 \end{bmatrix}\mathbf{x} + \begin{bmatrix} 0 \\ 1 \end{bmatrix}u, \\ y & \ = \begin{bmatrix} 1 & 0 \end{bmatrix}\mathbf{x}, \end{matrix}\]

design a state feedback controller that satisfies the following specifications:

(a) Closed-loop poles having a damping coefficient \(\zeta = 0.707\).

(b) Step-response peak time is under \(0.5sec\).

Verify your design with Matlab.

7.23 (a) Design a state feedback controller for the following system so that the closed-loop step response has an overshoot of less than \(18\%\) and a \(1\%\) settling time under \(0.3sec\) :

\[\begin{matrix} \overset{˙}{\mathbf{x}} & \ = \begin{bmatrix} 0 & 1 \\ 0 & - 7.5 \end{bmatrix}\mathbf{x} + \begin{bmatrix} 0 \\ 1 \end{bmatrix}u, \\ y & \ = \begin{bmatrix} 1 & 0 \end{bmatrix}\mathbf{x}. \end{matrix}\]

(b) Use the step command in Matlab to verify that your design meets the specifications. If it does not, modify your feedback gains accordingly.

7.24 Consider the system

\[\overset{˙}{\mathbf{x}} = \begin{bmatrix} - 1 & - 2 & - 2 \\ 0 & - 1 & 1 \\ 1 & 0 & - 1 \end{bmatrix}\mathbf{x} + \begin{bmatrix} 2 \\ 0 \\ 1 \end{bmatrix}u\]

(a) Design a state feedback controller for the system so the closed-loop step response has an overshoot of less than \(5\%\) and a \(1\%\) settling time under \(4.6sec\).

(b) Use the step command in Matlab to verify that your design meets the specifications. If it does not, modify your feedback gains accordingly.

7.25 Consider the system in Fig. 7.87.

Figure 7.87

System for Problem 7.25

\[U\ \longrightarrow\ \frac{s}{s^{2} + 7}\ \longrightarrow\ Y \]

(a) Write a set of equations that describes this system in control canonical form.

(b) Design a control law of the form,

\[u = - \begin{bmatrix} K_{1} & K_{2} \end{bmatrix}\begin{bmatrix} x_{1} \\ x_{2} \end{bmatrix}\]

which will place the closed-loop poles at \(s = - 2.5 \pm j2.5\).

7.26 Output Controllability. In many situations, a control engineer may be interested in controlling the output \(y\) rather than the state \(\mathbf{x}\). A system is said to be output controllable if at any time you are able to transfer the output from zero to any desired output \(y^{*}\) in a finite time using an appropriate control signal \(u^{*}\). Derive necessary and sufficient conditions for a continuous system (A, B, C) to be output controllable. Are output and state controllability related? If so, how?

7.27 Consider the system

\[\overset{˙}{\mathbf{x}} = \begin{bmatrix} 0 & 4 & 0 & 0 \\ - 1 & - 4 & 0 & 0 \\ 5 & 7 & 1 & 15 \\ 0 & 0 & 3 & - 3 \end{bmatrix}\mathbf{x} + \begin{bmatrix} 0 \\ 0 \\ 1 \\ 0 \end{bmatrix}u\]

(a) Find the eigenvalues of this system. (Hint: Note the block-triangular structure of A.)

(b) Find the controllable and uncontrollable modes of this system.

(c) For each of the uncontrollable modes, find a vector \(\mathbf{v}\) such that

\[\mathbf{v}^{T}\mathbf{B} = 0,\ \mathbf{v}^{T}\mathbf{A} = \lambda\mathbf{v}^{T} \]

(d) Show there are an infinite number of feedback gains \(\mathbf{K}\) that will relocate the modes of the system to \(- 5, - 3, - 2\), and -2 .

(e) Find the unique matrix \(\mathbf{K}\) that achieves these pole locations and prevents initial conditions on the uncontrollable part of the system from ever affecting the controllable part.

7.28 Two pendulums, coupled by a spring, are to be controlled by two equal and opposite forces \(u\), which are applied to the pendulum bobs as shown in Fig. 7.88. The equations of motion are

Figure 7.88

Coupled pendulums for Problem 7.28

\[\begin{matrix} & ml^{2}{\overset{¨}{\theta}}_{1} = - ka^{2}\left( \theta_{1} - \theta_{2} \right) - mgl\theta_{1} - lu, \\ & ml^{2}{\overset{¨}{\theta}}_{2} = - ka^{2}\left( \theta_{2} - \theta_{1} \right) - mgl\theta_{2} + lu. \end{matrix}\]

(a) Show the system is uncontrollable. Can you associate a physical meaning with the controllable and uncontrollable modes?

(b) Is there any way that the system can be made controllable?

7.29 The state-space model for a certain application has been given to us with the following state description matrices:

\[\begin{matrix} & \mathbf{A} = \begin{bmatrix} 0.174 & 0 & 0 & 0 & 0 \\ 0.157 & 0.645 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \end{bmatrix},\ \mathbf{B} = \begin{bmatrix} - 0.207 \\ - 0.005 \\ 0 \\ 0 \\ 0 \end{bmatrix} \\ & \mathbf{C} = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 \end{bmatrix}. \end{matrix}\]

(a) Draw a block diagram of the realization with an integrator for each state-variable.

(b) A student has computed \(det\mathcal{C} = 2.3 \times 10^{- 7}\) and claims that the system is uncontrollable. Is the student right or wrong? Why?

(c) Is the realization observable?

7.30 Staircase Algorithm (Van Dooren et al., 1978): Any realization (A, B, C) can be transformed by an orthogonal similarity transformation to \((\overline{\mathbf{A}},\overline{\mathbf{B}},\overline{\mathbf{C}})\), where \(\overline{\mathbf{A}}\) is an upper Hessenberg matrix (having one nonzero diagonal above the main diagonal) given by

\[\overline{\mathbf{A}} = \mathbf{T}^{T}\mathbf{AT} = \begin{bmatrix} * & \alpha_{1} & \mathbf{0} & 0 \\ * & * & \ddots & 0 \\ * & * & \ddots & \alpha_{n - 1} \\ * & * & \cdots & * \end{bmatrix},\ \overline{\mathbf{B}} = \mathbf{T}^{T}\mathbf{B} = \begin{bmatrix} 0 \\ \vdots \\ 0 \\ g_{1} \end{bmatrix}\]

where \(g_{1} \neq 0\), and

\[\overline{\mathbf{C}} = \mathbf{CT} = \begin{bmatrix} {\bar{c}}_{1} & {\bar{c}}_{2} & \cdots & {\bar{c}}_{n} \end{bmatrix},\ \mathbf{T}^{- 1} = \mathbf{T}^{T}.\]

Orthogonal transformations correspond to a rotation of the vectors (represented by the matrix columns) being transformed with no change in length.

(a) Prove that if \(\alpha_{i} = 0\) and \(\alpha_{i + 1},\cdots,\alpha_{n - 1} \neq 0\) for some \(i\), then the controllable and uncontrollable modes of the system can be identified after this transformation has been done.

(b) How would you use this technique to identify the observable and unobservable modes of (A, B, C)?

(c) What advantage does this approach for determining the controllable and uncontrollable modes have over transforming the system to any other form?

(d) How can we use this approach to determine a basis for the controllable and uncontrollable subspaces, as in Problem 7.44?

This algorithm can also be used to design a numerically stable algorithm for pole placement [see Miminis and Paige (1982)]. The name of the algorithm comes from the multi-input version in which the \(\alpha_{i}\) are the blocks that make \(\overline{\mathbf{A}}\) resemble a staircase. Refer to the ctrbf and obsvf commands in Matlab.

309. Problems for Section 7.6: Selection of Pole Locations for Good Design

7.31 The normalized equations of motion for an inverted pendulum at angle \(\theta\) on a cart are

\[\overset{¨}{\theta} = \theta + u,\ \overset{¨}{x} = - \beta\theta - u, \]

where \(x\) is the cart position, and the control input \(u\) is a force acting on the cart.

(a) With the state defined as \(\mathbf{x} = \begin{bmatrix} \theta & \overset{˙}{\theta} & x & \overset{˙}{x} \end{bmatrix}^{T}\) find the feedback gain \(\mathbf{K}\) that places the closed-loop poles at \(s = - 1, - 1, - 1 \pm 1j\). For parts (b) through (d), assume that \(\beta = 0.5\).

(b) Use the SRL to select poles with a bandwidth as close as possible to those of part (a), and find the control law that will place the closedloop poles at the points you selected.

(c) Compare the responses of the closed-loop systems in parts (a) and (b) to an initial condition of \(\theta = 10^{\circ}\). You may wish to use the initial command in Matlab.

(d) Compute \(\mathbf{N}_{\mathbf{X}}\) and \(N_{u}\) for zero steady-state error to a constant command input on the cart position, and compare the step responses of each of the two closed-loop systems.

7.32 An asymptotically stable Type I system with input \(r\) and output \(y\) is described by the closed-loop system matrices (A, B, C, \(D = 0\) ). Suppose the input is given by the ramp \(r = at\), for \(t > 0\). Show the velocity error coefficient is given by

\[K_{v} = \left\lbrack \mathbf{CA}^{- 2}\mathbf{B} \right\rbrack^{- 1} \]

7.33 Prove the Nyquist plot for LQR design avoids a circle of radius one centered at the -1 point, as shown in Fig. 7.89. Show this implies that

Figure 7.89

Nyquist plot for an optimal regulator

\(\frac{1}{2} < GM < \infty\); that is, the "upward" gain margin is \(GM = \infty\) and the "downward" gain margin is \(GM = \frac{1}{2}\), and the phase margin is at least \(PM = \pm 60^{\circ}\). Hence the LQR gain matrix, \(\mathbf{K}\), can be multiplied by a large scalar or reduced by half with guaranteed closed-loop system stability.

310. Problems for Section 7.7: Estimator Design

7.34 Consider the system

\[\mathbf{A} = \begin{bmatrix} - 5 & 3 \\ 1 & 0 \end{bmatrix},\mathbf{B} = \begin{bmatrix} 1 \\ 0 \end{bmatrix},\mathbf{C} = \begin{bmatrix} 3 & 4 \end{bmatrix}\]

and assume that you are using feedback of the form \(u = - \mathbf{Kx} + r\), where \(r\) is a reference input signal.

(a) Show that \((\mathbf{A},\mathbf{C})\) is observable.

(b) Show that there exists a \(\mathbf{K}\) such that \((\mathbf{A} - \mathbf{BK},\mathbf{C})\) is unobservable.

(c) Compute a \(\mathbf{K}\) of the form \(\mathbf{K} = \left\lbrack 1,K_{2} \right\rbrack\) that will make the system unobservable as in part (b); that is, find \(K_{2}\) so that the closed-loop system is not observable.

(d) Compare the open-loop transfer function with the transfer function of the closed-loop system of part (c). What is the unobservability due to?

7.35 Consider a system with the transfer function,

\[G(s) = \frac{s + 15}{s^{2} - 15}\text{.}\text{~} \]

(a) Find \(\left( \mathbf{A}_{o},\mathbf{B}_{o},\mathbf{C}_{o} \right)\) for this system in observer canonical form.

(b) Check whether this system is observable.

(c) Is \(\left( \mathbf{A}_{o},\mathbf{B}_{o} \right)\) controllable?

(d) Compute \(\mathbf{K}\) so that the closed-loop poles are assigned to \(s = - 10 \pm\) \(j10\).

(e) Design a full-order estimator with estimator-error poles at \(s = - 15 \pm\) j15.

(f) Prove that if \(u = - \mathbf{Kx} + r\) there is a feedback gain \(\mathbf{K}\) that makes the closed-loop system unobservable. Design \(\mathbf{K}\) so that the closed-loop system has no zero and there is only one pole at \(s = - 5\).

7.36 Explain how the controllability, observability, and stability properties of a linear system are related.

7.37 Consider the electric circuit shown in Fig. 7.90.

(a) Write the internal (state) equations for the circuit. The input \(u\) is a voltage source, and the output \(y\) is a voltage. Let \(x_{1} = i_{L}\) and \(x_{2} = v_{c}\).

(b) What condition(s) on \(R,L\), and \(C\) will guarantee that the system is controllable?

(c) What condition(s) on \(R,L\), and \(C\) will guarantee that the system is observable?

7.38 The block diagram of a feedback system is shown in Fig. 7.91. The system state is

\[\mathbf{x} = \begin{bmatrix} \mathbf{x}_{p} \\ \mathbf{x}_{f} \end{bmatrix}\]

Figure 7.90

Electric circuit for

Problem 7.37

and the dimensions of the matrices are as follows:

\[\begin{matrix} \mathbf{A} = n \times n, & \mathbf{L} = n \times 1, \\ \mathbf{B} = n \times 1, & \mathbf{x} = 2n \times 1, \\ \mathbf{C} = 1 \times n, & r = 1 \times 1, \\ \mathbf{K} = 1 \times n, & y = 1 \times 1. \end{matrix}\]

(a) Write state equations for the system.

(b) Let \(\mathbf{x} = \mathbf{Tz}\), where

\[\mathbf{T} = \begin{bmatrix} \mathbf{I} & \mathbf{0} \\ \mathbf{I} & \mathbf{I} \end{bmatrix}\]

Show the system is not controllable.

(c) Find the transfer function of the system from \(r\) to \(y\).

Figure 7.91

Block diagram for Problem 7.38

7.39 This problem is intended to give you more insight into controllability and observability. Consider the circuit in Fig. 7.92, with an input current source \(u(t)\) and an output voltage \(y(t)\). Note that usually \(R_{1}\) and \(R_{2}\) represent the respective internal resistances of \(L\) and \(C\) while \(R\) can be the load.

(a) Using the capacitor voltage and inductor current as state variables, write state and output equations for the system.

(b) Find the conditions relating \(R_{1},R_{2},R,C\), and \(L\) that render the system uncontrollable. Find a similar set of conditions that result in an unobservable system.

Figure 7.92

Electric circuit for

Problem 7.39

(c) Interpret the conditions found in part (b) in terms of the time constants of the system.

(d) If \(R_{1} = 2\Omega,R_{2} = 3\Omega\), and \(C = 0.01F\), find the value of \(L\) for the conditions derived in part (b) (that is, when the system is uncontrollable or unobservable). Find the transfer function of the system, and show that there is a pole-zero cancellation for this system.

7.40 The linearized dynamic equations of motion for a satellite are

\[\begin{matrix} \overset{˙}{\mathbf{x}} & \ = \mathbf{Ax} + \mathbf{Bu} \\ \mathbf{y} & \ = \mathbf{Cx} \end{matrix}\]

where

\[\begin{matrix} & \mathbf{A} = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 3\omega^{2} & 0 & 0 & 2\omega \\ 0 & 0 & 0 & 1 \\ 0 & - 2\omega & 0 & 0 \end{bmatrix},\ \mathbf{B} = \begin{bmatrix} 0 & 0 \\ 1 & 0 \\ 0 & 0 \\ 0 & 1 \end{bmatrix},\ \mathbf{C} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}, \\ & \mathbf{u} = \begin{bmatrix} u_{1} \\ u_{2} \end{bmatrix},\ \mathbf{y} = \begin{bmatrix} y_{1} \\ y_{2} \end{bmatrix}. \end{matrix}\]

The inputs \(u_{1}\) and \(u_{2}\) are the radial and tangential thrusts, the state-variables \(x_{1}\) and \(x_{3}\) are the radial and angular deviations from the reference (circular) orbit, and the outputs \(y_{1}\) and \(y_{2}\) are the radial and angular measurements, respectively.

(a) Show the system is controllable using both control inputs.

(b) Show the system is controllable using only a single input. Which one is it?

(c) Show the system is observable using both measurements.

(d) Show the system is observable using only one measurement. Which one is it?
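Parts (a) and (b) reduce to rank checks on the controllability matrix \(\begin{bmatrix}\mathbf{B} & \mathbf{AB} & \mathbf{A}^{2}\mathbf{B} & \mathbf{A}^{3}\mathbf{B}\end{bmatrix}\). As an informal check (not part of the problem statement), the sketch below sets \(\omega = 1\) as an assumed scaling and computes the ranks exactly in pure Python:

```python
from fractions import Fraction

def mat_vec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def rank(M):
    # Gaussian elimination over exact rationals
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(r + 1, len(M)):
            f = M[i][c] / M[r][c]
            M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

# Satellite dynamics with omega normalized to 1 (assumed scaling)
A = [[0, 1, 0, 0],
     [3, 0, 0, 2],
     [0, 0, 0, 1],
     [0, -2, 0, 0]]
b1 = [0, 1, 0, 0]   # column of B for the radial thrust u1
b2 = [0, 0, 0, 1]   # column of B for the tangential thrust u2

def ctrb_rank(b):
    cols, v = [], b
    for _ in range(4):
        cols.append(v)
        v = mat_vec(A, v)
    # cols are the columns [b, Ab, A^2 b, A^3 b]; transpose into rows
    return rank([list(row) for row in zip(*cols)])

print(ctrb_rank(b1))  # 3: radial thrust alone does not give controllability
print(ctrb_rank(b2))  # 4: tangential thrust alone does
```

The same rank routine applied to the observability matrix (rows \(\mathbf{C}\), \(\mathbf{CA}\), ...) answers parts (c) and (d) in the same way.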

7.41 Consider the system in Fig. 7.93.

(a) Write the state-variable equations for the system, using \(\begin{bmatrix} \theta_{1} & \theta_{2} & {\overset{˙}{\theta}}_{1} & {\overset{˙}{\theta}}_{2} \end{bmatrix}^{T}\) as the state vector and \(F\) as the single input.

(b) Show all the state-variables are observable using measurements of \(\theta_{1}\) alone.

Figure 7.93

Coupled pendulums for Problem 7.41

(c) Show the characteristic polynomial for the system is the product of the polynomials for two oscillators. Do so by first writing a new set of system equations involving the state-variables

\[\begin{bmatrix} y_{1} \\ y_{2} \\ {\overset{˙}{y}}_{1} \\ {\overset{˙}{y}}_{2} \end{bmatrix} = \begin{bmatrix} \theta_{1} + \theta_{2} \\ \theta_{1} - \theta_{2} \\ {\overset{˙}{\theta}}_{1} + {\overset{˙}{\theta}}_{2} \\ {\overset{˙}{\theta}}_{1} - {\overset{˙}{\theta}}_{2} \end{bmatrix}\text{.}\text{~}\]

Hint: If \(\mathbf{A}\) and \(\mathbf{D}\) are invertible matrices, then

\[\begin{bmatrix} \mathbf{A} & \mathbf{0} \\ \mathbf{0} & \mathbf{D} \end{bmatrix}^{- 1} = \begin{bmatrix} \mathbf{A}^{- 1} & \mathbf{0} \\ \mathbf{0} & \mathbf{D}^{- 1} \end{bmatrix}\]

(d) Deduce the fact that the spring mode is controllable with \(F\), but the pendulum mode is not.

7.42 A certain fifth-order system is found to have a characteristic equation with roots at \(0, - 1, - 2\), and \(- 1 \pm 1j\). A decomposition into controllable and uncontrollable parts discloses that the controllable part has a characteristic equation with roots 0 and \(- 1 \pm 1j\). A decomposition into observable and nonobservable parts discloses that the observable modes are at \(0, - 1\), and -2 .

(a) Where are the zeros of \(b(s) = \mathbf{Cadj}(s\mathbf{I} - \mathbf{A})\mathbf{B}\) for this system?

(b) What are the poles of the reduced-order transfer function that includes only controllable and observable modes?

7.43 Consider the systems shown in Fig. 7.94, employing series, parallel, and feedback configurations.

(a) Suppose we have controllable-observable realizations for each subsystem:

\[\begin{matrix} {\overset{˙}{\mathbf{x}}}_{i} & \ = \mathbf{A}_{i}\mathbf{x}_{i} + \mathbf{B}_{i}\mathbf{u}_{i} \\ \mathbf{y}_{i} & \ = \mathbf{C}_{i}\mathbf{x}_{i},\ \text{~}\text{where}\text{~}i = 1,2 \end{matrix}\]

Give a set of state equations for the combined systems in Fig. 7.94.

\[\overset{u = u_{1}}{\longrightarrow}G_{1}(s) = \frac{N_{1}(s)}{D_{1}(s)}\overset{y_{1} = u_{2}}{\longrightarrow}G_{2}(s) = \frac{N_{2}(s)}{D_{2}(s)}\longrightarrow y\]

Figure 7.94

Block diagrams for Problem 7.43: (a) series; (b) parallel; (c) feedback

(b) For each case, determine what condition(s) on the roots of the polynomials \(N_{i}\) and \(D_{i}\) is necessary for each system to be controllable and observable. Give a brief reason for your answer in terms of pole-zero cancellations.

7.44 Consider the system \(\overset{¨}{y} + 3\overset{˙}{y} + 2y = \overset{˙}{u} + u\).

(a) Find the state matrices \(\mathbf{A}_{c},\mathbf{B}_{c}\), and \(\mathbf{C}_{c}\) in control canonical form that correspond to the given differential equation.

(b) Sketch the eigenvectors of \(\mathbf{A}_{c}\) in the \(\left( x_{1},x_{2} \right)\) plane, and draw vectors that correspond to the completely observable \(\left( \mathbf{x}_{0} \right)\) and the completely unobservable \(\left( \mathbf{x}_{\overline{0}} \right)\) state-variables.

(c) Express \(\mathbf{x}_{0}\) and \(\mathbf{x}_{\overline{0}}\) in terms of the observability matrix \(\mathcal{O}\).

(d) Give the state matrices in observer canonical form and repeat parts (b) and (c) in terms of controllability instead of observability.
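A quick way to anticipate part (b) is to note the transfer function \((s+1)/\left((s+1)(s+2)\right)\) has a pole-zero cancellation at \(s = -1\). The sketch below takes the control canonical matrices as an assumed answer to part (a) and checks, in plain Python, that the observability matrix is singular and that the \(s = -1\) eigenvector is invisible at the output:

```python
# Control canonical form for y'' + 3y' + 2y = u' + u, i.e.
# G(s) = (s + 1) / ((s + 1)(s + 2))  (assumed answer to part (a))
Ac = [[-3, -2],
      [ 1,  0]]
Cc = [1, 1]

# observability matrix O = [C; C*A]
CA = [Cc[0] * Ac[0][0] + Cc[1] * Ac[1][0],
      Cc[0] * Ac[0][1] + Cc[1] * Ac[1][1]]
O = [Cc, CA]
detO = O[0][0] * O[1][1] - O[0][1] * O[1][0]
print(O, detO)   # singular: detO = 0, so one mode is unobservable

# eigenvector of Ac for the cancelled mode at s = -1
v = [1, -1]
Av = [Ac[0][0] * v[0] + Ac[0][1] * v[1],
      Ac[1][0] * v[0] + Ac[1][1] * v[1]]
print(Av)                          # equals (-1)*v, so v is the lambda = -1 eigenvector
print(Cc[0] * v[0] + Cc[1] * v[1]) # 0: this direction never appears in y
```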

7.45 The dynamic equations of motion for a station-keeping satellite (such as a weather satellite) are

\[\overset{¨}{x} - 2\omega\overset{˙}{y} - 3\omega^{2}x = 0,\ \overset{¨}{y} + 2\omega\overset{˙}{x} = u, \]

where

\[\begin{matrix} & x = \text{~}\text{radial perturbation}\text{~} \\ & y = \text{~}\text{longitudinal position perturbation,}\text{~} \\ & u = \text{~}\text{engine thrust in the}\text{~}y\text{-direction}\text{~} \end{matrix}\]

as depicted in Fig. 7.95. If the orbit is synchronous with the earth's rotation, then \(\omega = 2\pi/(3600 \times 24)rad/sec\).

Figure 7.95

Diagram of a station-keeping satellite in orbit for Problem 7.45

(a) Is the state \(\mathbf{x} = \begin{bmatrix} x & \overset{˙}{x} & y & \overset{˙}{y} \end{bmatrix}^{T}\) observable?

(b) Choose \(\mathbf{x} = \begin{bmatrix} x & \overset{˙}{x} & y & \overset{˙}{y} \end{bmatrix}^{T}\) as the state vector and \(y\) as the measurement, and design a full-order observer with poles placed at \(s = - 2\omega, - 3\omega\), and \(- 3\omega \pm 3\omega j\).

7.46 The linearized equations of motion of the simple pendulum in Fig. 7.96 are

\[\overset{¨}{\theta} + \omega^{2}\theta = u\text{.}\text{~} \]

Figure 7.96

Pendulum diagram for

Problem 7.46

(a) Write the equations of motion in state-space form.

(b) Design an estimator (observer) that reconstructs the state of the pendulum given measurements of \(\overset{˙}{\theta}\). Assume \(\omega = 5rad/sec\), and pick the estimator roots to be at \(s = - 10 \pm 10j\).

(c) Write the transfer function of the estimator between the measured value of \(\overset{˙}{\theta}\) and the estimated value of \(\theta\).

(d) Design a controller (that is, determine the state feedback gain \(\mathbf{K}\) ) so the roots of the closed-loop characteristic equation are at \(s = - 4 \pm 4j\).
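For this second-order pendulum, both the control gain of part (d) and the estimator gain of part (b) follow from matching characteristic-polynomial coefficients. The sketch below carries out that matching numerically; the closed-form polynomial expressions in the comments are hand-derived and should be checked against your own derivation:

```python
# Pendulum: theta'' + 25*theta = u  (omega = 5), state x = [theta, theta'].
# Hand-derived: char. poly of A - B*K is s^2 + k2*s + (25 + k1), and with
# y = theta' the char. poly of A - L*C is s^2 + l2*s + 25*(1 - l1).

def poly_from_roots(r1, r2):
    # (s - r1)(s - r2) = s^2 + a1 s + a2; real coefficients assumed
    a1 = -(r1 + r2)
    a2 = r1 * r2
    return a1.real, a2.real

# (d) controller poles at -4 +/- 4j
a1, a2 = poly_from_roots(complex(-4, 4), complex(-4, -4))
k2, k1 = a1, a2 - 25
print(k1, k2)   # 7.0 8.0

# (b) estimator poles at -10 +/- 10j
b1, b2 = poly_from_roots(complex(-10, 10), complex(-10, -10))
l2, l1 = b1, 1 - b2 / 25
print(l1, l2)   # -7.0 20.0
```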

7.47 An LCL Butterworth low-pass filter is described by the following state equations:

\[\begin{bmatrix} {\overset{˙}{x}}_{1} \\ {\overset{˙}{x}}_{2} \\ {\overset{˙}{x}}_{3} \end{bmatrix} = \begin{bmatrix} 0 & - 1/L & 0 \\ 1/C & 0 & - 1/C \\ 0 & 1/L & - R/L \end{bmatrix}\begin{bmatrix} x_{1} \\ x_{2} \\ x_{3} \end{bmatrix} + \begin{bmatrix} 1/L \\ 0 \\ 0 \end{bmatrix}u\]

where \(x_{1}\) and \(x_{2}\) are the inductor currents, \(x_{3}\) is the capacitor voltage, and \(u\) is the input voltage. Consider \(L = 0.1H,C = 0.01\text{ }F\) and \(R = 10\Omega\).

Design a reduced-order estimator with \(y = x_{1}\) as the known measurement, and place the observer error poles at -200 and -200 . Be sure to provide all the relevant estimator equations.

311. Problems for Section 7.8: Compensator Design: Combined

Control Law and Estimator

7.48 A certain process has the transfer function \(G(s) = \frac{4.5}{s(s - 4.5)}\).

(a) Find \(\mathbf{A},\mathbf{B}\), and \(\mathbf{C}\) for this system in observer canonical form.

(b) If \(u = - \mathbf{Kx}\), compute \(\mathbf{K}\) so that the closed-loop control poles are located at \(s = - 1.8 \pm 2j\).

(c) Compute \(\mathbf{L}\) so that the estimator-error poles are located at \(s = - 15 \pm\) \(15j\).

(d) Give the transfer function of the resulting controller (for example, using Eq. (7.174)).

(e) What are the gain and phase margins of the controller and the given open-loop system?

7.49 The linearized longitudinal motion of a helicopter near hover (see Fig.

7.97) can be modeled by the normalized third-order system

\[\begin{bmatrix} \overset{˙}{q} \\ \overset{˙}{\theta} \\ \overset{˙}{u} \end{bmatrix} = \begin{bmatrix} - 0.4 & 0 & - 0.01 \\ 1 & 0 & 0 \\ - 1.4 & 9.8 & - 0.02 \end{bmatrix}\begin{bmatrix} q \\ \theta \\ u \end{bmatrix} + \begin{bmatrix} 6.3 \\ 0 \\ 9.8 \end{bmatrix}\delta\]

Figure 7.97

Helicopter for

Problem 7.49
Fuselage

reference

Suppose our sensor measures the horizontal velocity \(u\) as the output; that is, \(y = u\).
(a) Find the open-loop pole locations.

(b) Is the system controllable?

(c) Find the feedback gain that places the poles of the system at \(s =\) \(- 1 \pm 1j\) and \(s = - 2\).

(d) Design a full-order estimator for the system, and place the estimator poles at -8 and \(- 4 \pm 4\sqrt{3}j\).

(e) Design a reduced-order estimator with both poles at -4 . What are the advantages and disadvantages of the reduced-order estimator compared with the full-order case?

(f) Compute the compensator transfer function using the control gain and the full-order estimator designed in part (d), and plot its frequency response using Matlab. Draw a Bode plot for the closed-loop design, and indicate the corresponding gain and phase margins.

(g) Repeat part (f) with the reduced-order estimator.

(h) Draw the SRL and select roots for a control law that will give a control bandwidth matching the design of part (c), and select roots for a full-order estimator that will result in an estimator error bandwidth comparable to the design of part (d). Draw the corresponding Bode plot and compare the pole placement and SRL designs with respect to bandwidth, stability margins, step response, and control effort for a unit-step rotor-angle input. Use Matlab for the computations.

7.50 Suppose a DC drive motor with motor current is connected to the wheels of a cart in order to control the movement of an inverted pendulum mounted on the cart. The linearized and normalized equations of motion corresponding to this system can be put in the form

\[\begin{matrix} \overset{¨}{\theta} & \ = \theta + v + u, \\ \overset{˙}{v} & \ = \theta - v - u, \end{matrix}\]

where

\[\begin{matrix} & \theta = \text{~}\text{angle of the pendulum,}\text{~} \\ & v = \text{~}\text{velocity of the cart.}\text{~} \end{matrix}\]

(a) We wish to control \(\theta\) by feedback to \(u\) of the form,

\[u = - K_{1}\theta - K_{2}\overset{˙}{\theta} - K_{3}v \]

Find the feedback gains so that the resulting closed-loop poles are located at \(- 1.2, - 1.2 \pm j\sqrt{2}\).

(b) Assume \(\theta\) and \(v\) are measured. Construct an estimator for \(\theta\) and \(\overset{˙}{\theta}\) of the form,

\[\overset{˙}{\widehat{\mathbf{x}}} = \mathbf{A}\widehat{\mathbf{x}} + \mathbf{L}(y - \widehat{\mathbf{y}}), \]

where \(\mathbf{x} = \begin{bmatrix} \theta & \overset{˙}{\theta} \end{bmatrix}^{T}\) and \(y = \theta\). Treat both \(v\) and \(u\) as known. Select \(\mathbf{L}\) so that the estimator poles are at \(-3.5\) and \(-3.5\).

(c) Based on the above designs, find the transfer function of the controller, and draw the Bode plot of the closed-loop system, indicating the corresponding gain and phase margins.

7.51 Consider the control of

\[G(s) = \frac{Y(s)}{U(s)} = \frac{7.5}{s(s + 3.5)} \]

(a) Let \(y = x_{1}\) and \({\overset{˙}{x}}_{1} = x_{2}\), and write state equations for the system.

(b) Find \(K_{1}\) and \(K_{2}\) so that \(u = - K_{1}x_{1} - K_{2}x_{2}\) yields closed-loop poles with a natural frequency \(\omega_{n} = 5\) and a damping ratio \(\zeta = 0.7\).

(c) Design a state estimator for the system that yields estimator error poles with \(\omega_{n1} = 20\) and \(\zeta_{1} = 0.7\).

(d) What is the transfer function of the controller obtained by combining parts (a) through (c)?

(e) Sketch the root locus of the resulting closed-loop system as plant gain (nominally 7.5) is varied.
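Parts (b) and (c) again reduce to coefficient matching on second-order characteristic polynomials. A sketch using exact rational arithmetic (the polynomial expressions in the comments are hand-derived and should be re-checked):

```python
from fractions import Fraction as F

# G(s) = 7.5 / (s (s + 3.5)); with x = [y, y'],
# A = [[0, 1], [0, -3.5]], B = [0, 7.5].
# (b) char. poly of A - B*K is s^2 + (3.5 + 7.5 k2) s + 7.5 k1 (hand-derived);
#     target: omega_n = 5, zeta = 0.7  ->  s^2 + 7 s + 25
k1 = F(25) / F(15, 2)
k2 = (F(7) - F(7, 2)) / F(15, 2)
print(k1, k2)     # 10/3 7/15

# (c) with y = x1, char. poly of A - L*C is
#     s^2 + (l1 + 3.5) s + (3.5 l1 + l2) (hand-derived);
#     target: omega_n1 = 20, zeta_1 = 0.7  ->  s^2 + 28 s + 400
l1 = F(28) - F(7, 2)
l2 = F(400) - F(7, 2) * l1
print(l1, l2)     # 49/2 1257/4
```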

7.52 Unstable equations of motion of the form,

\[\overset{¨}{x} = x + u\text{,}\text{~} \]

arise in situations where the motion of an upside-down pendulum (such as a rocket) must be controlled.

(a) Let \(u = - Kx\) (position feedback alone), and sketch the root locus with respect to the scalar gain \(K\).

(b) Consider a compensator of the form,

\[U(s) = K\frac{(s + a)}{s + b}X(s) \]

Select \(a\) and \(b\) so that the system will display a rise time of about \(1.8sec\) and no more than \(15\%\) overshoot. Sketch the root locus with respect to \(K\).

(c) Sketch the Bode plot (both magnitude and phase) of the uncompensated plant.

(d) Sketch the Bode plot of the compensated design, and estimate the phase margin and the bandwidth.

(e) Design state feedback so that the closed-loop poles are the same locations as those of the design in part (b).

(f) Design an estimator for \(x\) and \(\overset{˙}{x}\) using the measurement \(x = y\), and select the observer gain \(\mathbf{L}\) so that the equation for \(\widetilde{x}\) has characteristic roots with the same damping ratio as chosen for the design in part (e), but with a natural frequency twice that of part (e).

(g) Draw a Bode plot for the closed-loop system, and compare the resulting bandwidth and stability margins with those obtained using the design of part (b). If the ones with the estimator are worse, re-design by selecting another value of \(\omega_{n}\) for the estimator. Comment on your designs based on the step responses of each design.

7.53 A simplified model for the control of a flexible robotic arm is shown in Fig. 7.98, where

\[\begin{matrix} k/M & \ = 900rad/\sec^{2} \\ y & \ = \text{~}\text{output, the mass position,}\text{~} \\ u & \ = \text{~}\text{input, the position of the end of the spring.}\text{~} \end{matrix}\]

Figure 7.98

Simple robotic arm for Problem 7.53

(a) Write the equations of motion in state-space form.

(b) Design an estimator with roots at \(s = - 100 \pm 100j\).

(c) Could both state-variables of the system be estimated if only a measurement of \(\overset{˙}{y}\) was available?

(d) Design a full-state feedback controller with roots at \(s = - 20 \pm 20j\).

(e) Would it be reasonable to design a control law for the system with roots at \(s = - 200 \pm 200j\)? State your reasons.

(f) Write equations for the compensator, including a command input for \(y\). Draw a Bode plot for the closed-loop system and give the gain and phase margins for the design.

7.54 The linearized differential equations governing the fluid-flow dynamics for the two cascaded tanks in Fig. 7.99 are

\[\begin{matrix} & \delta{\overset{˙}{h}}_{1} + \sigma\delta h_{1} = \delta u, \\ & \delta{\overset{˙}{h}}_{2} + \sigma\delta h_{2} = \sigma\delta h_{1}, \end{matrix}\]

where

\(\delta h_{1} =\) deviation of depth in tank 1 from the nominal level,

\(\delta h_{2} =\) deviation of depth in tank 2 from the nominal level, and

\(\delta u =\) deviation in fluid inflow rate to tank 1 (control).

(a) Level Controller for Two Cascaded Tanks: Using state feedback of the form

\[\delta u = - K_{1}\delta h_{1} - K_{2}\delta h_{2}, \]

choose values of \(K_{1}\) and \(K_{2}\) that will place the closed-loop eigenvalues at

\[s = - 2\sigma(1 \pm j) \]

Figure 7.99

Coupled tanks for Problem 7.54

Figure 7.100

View of ship from above for Problem 7.55

(b) Level Estimator for Two Cascaded Tanks: Suppose only the deviation in the level of tank 2 is measured (that is, \(y = \delta h_{2}\)). Using this measurement, design an estimator that will give continuous, smooth estimates of the deviation in levels of tank 1 and tank 2, with estimator error poles at \(- 8\sigma(1 \pm j)\).

(c) Estimator/Controller for Two Cascaded Tanks: Sketch a block diagram (showing individual integrators) of the closed-loop system obtained by combining the estimator of part (b) with the controller of part (a).

(d) Using Matlab, compute and plot the response at \(y\) to an initial offset in \(\delta h_{1}\). Assume \(\sigma = 1\) for the plot.
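For part (a), writing the state equations with \(\mathbf{x} = \begin{bmatrix}\delta h_{1} & \delta h_{2}\end{bmatrix}^{T}\) gives \(\mathbf{A} = \begin{bmatrix} - \sigma & 0 \\ \sigma & - \sigma \end{bmatrix}\) and \(\mathbf{B} = \begin{bmatrix}1 \\ 0\end{bmatrix}\), and the gains follow by coefficient matching. A sketch with \(\sigma = 1\) assumed:

```python
# Cascaded tanks with sigma = 1 (assumed): A = [[-1, 0], [1, -1]], B = [1, 0].
# A - B*K = [[-1 - K1, -K2], [1, -1]] has char. poly (hand-derived)
#   s^2 + (2 + K1) s + (1 + K1 + K2);
# the target is (s + 2 - 2j)(s + 2 + 2j) = s^2 + 4 s + 8.
K1 = 4 - 2          # match the s coefficient
K2 = 8 - 1 - K1     # match the constant term
print(K1, K2)       # 2 5

# sanity check: each desired pole is a root of the achieved polynomial
for s in (complex(-2, 2), complex(-2, -2)):
    residual = s**2 + (2 + K1) * s + (1 + K1 + K2)
    print(abs(residual))   # 0.0
```

For general \(\sigma\), the same matching gives \(K_{1} = 2\sigma\) and \(K_{2} = 5\sigma\).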

7.55 The lateral motions of a ship that is \(100\text{ }m\) long, moving at a constant velocity of \(10\text{ }m/sec\), are described by

\[\begin{bmatrix} \overset{˙}{\beta} \\ \overset{˙}{r} \\ \overset{˙}{\psi} \end{bmatrix} = \begin{bmatrix} - 0.0895 & - 0.286 & 0 \\ - 0.0439 & - 0.272 & 0 \\ 0 & 1 & 0 \end{bmatrix}\begin{bmatrix} \beta \\ r \\ \psi \end{bmatrix} + \begin{bmatrix} 0.0145 \\ - 0.0122 \\ 0 \end{bmatrix}\delta,\]

where

\[\begin{matrix} \beta & \ = \text{~}\text{side slip angle (deg),}\text{~} \\ \psi & \ = \text{~}\text{heading angle (deg),}\text{~} \\ \delta & \ = \text{~}\text{rudder angle (deg), and}\text{~} \\ r & \ = \text{~}\text{yaw rate (see Fig.}\text{~}7.100\text{).} \end{matrix}\]

(a) Determine the transfer function from \(\delta\) to \(\psi\) and the characteristic roots of the uncontrolled ship.

(b) Using complete state feedback of the form

\[\delta = - K_{1}\beta - K_{2}r - K_{3}\left( \psi - \psi_{d} \right), \]

where \(\psi_{d}\) is the desired heading, determine values of \(K_{1},K_{2}\), and \(K_{3}\) that will place the closed-loop roots at \(s = - 0.2, - 0.2 \pm 0.2j\).

(c) Design a state estimator based on the measurement of \(\psi\) (obtained from a gyrocompass, for example). Place the roots of the estimator error equation at \(s = - 0.8\) and \(- 0.8 \pm 0.8j\).

(d) Give the state equations and transfer function for the compensator \(D_{C}(s)\) in Fig. 7.101, and plot its frequency response.

(e) Draw the Bode plot for the closed-loop system, and compute the corresponding gain and phase margins.

(f) Compute the feed-forward gains for a reference input, and plot the step response of the system to a change in heading of \(5^{\circ}\).

Figure 7.101

Ship control block diagram for Problem 7.55

312. Problem for Section 7.9: Introduction of the Reference Input with the Estimator

\(\bigtriangleup \ 7.56\) As mentioned in footnote 9 in Section 7.9.2, a reasonable approach for selecting the feed-forward gain in Eq. (7.202) is to choose \(\bar{N}\) such that when \(r\) and \(y\) are both unchanging, the DC gain from \(r\) to \(u\) is the negative of the DC gain from \(y\) to \(u\). Derive a formula for \(\bar{N}\) based on this selection rule. Show that if the plant is Type 1, this choice is the same as that given by Eq. (7.202).

313. Problems for Section 7.10: Integral Control and Robust Tracking

7.57 Assume the linearized and time-scaled equation of motion for the ball-bearing levitation device is \(\overset{¨}{x} - x = u + w\). Here \(w\) is a constant bias due to the power amplifier. Introduce integral error control, and select three control gains \(\mathbf{K} = \begin{bmatrix} K_{1} & K_{2} & K_{3} \end{bmatrix}\) so the closed-loop poles are at \(-1\) and \(- 1 \pm j\) and the steady-state error to \(w\) and to a (step) position command will be zero. Let \(y = x\) and the reference input \(r \triangleq y_{\text{ref}}\) be a constant. Draw a block diagram of your design showing the locations of the feedback gains \(K_{i}\). Assume both \(\overset{˙}{x}\) and \(x\) can be measured. Plot the response of the closed-loop system to a step command input and the response to a step change in the bias input. Verify that the system is Type 1. Use Matlab (Simulink) software to simulate the system responses.

7.58 Consider a system with state matrices

\[\mathbf{A} = \begin{bmatrix} - 2 & 1 \\ 0 & - 3 \end{bmatrix},\ \mathbf{B} = \begin{bmatrix} 1 \\ 1 \end{bmatrix},\ \mathbf{C} = \begin{bmatrix} 1 & 3 \end{bmatrix}\]

(a) Use feedback of the form \(u(t) = - \mathbf{Kx}(t) + \bar{N}r(t)\), where \(\bar{N}\) is a nonzero scalar, to move the poles to \(- 3 \pm 3j\).

(b) Choose \(\bar{N}\) so if \(r\) is a constant, the system has zero steady-state error; that is, \(y(\infty) = r\).

(c) Show if \(\mathbf{A}\) changes to \(\mathbf{A} + \delta\mathbf{A}\), where \(\delta\mathbf{A}\) is an arbitrary \(2 \times 2\) matrix, then your choice of \(\bar{N}\) in part(b) will no longer make \(y(\infty) = r\). Therefore, the system is not robust under changes to the system parameters in \(\mathbf{A}\).

(d) The system steady-state error performance can be made robust by augmenting the system with an integrator and using unity feedback, that is, by setting \({\overset{˙}{x}}_{I} = r - y\), where \(x_{I}\) is the state of the integrator. To see this, first use state feedback of the form \(u = - \mathbf{Kx} - K_{1}x_{I}\) so the poles of the augmented system are at \(-3\), \(- 2 \pm j\sqrt{3}\).

(e) Show the resulting system will yield \(y(\infty) = r\) no matter how the matrices \(\mathbf{A}\) and \(\mathbf{B}\) are changed, as long as the closed-loop system remains stable.

(f) For part (d), use Matlab (Simulink) software to plot the time response of the system to a constant input. Draw Bode plots of the controller, as well as the sensitivity function \((\mathcal{S})\) and the complementary sensitivity function \((\mathcal{T})\).
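One way to check parts (a) and (b) is exact rational arithmetic: with the pole-placement gains found by matching trace and determinant, \(\bar{N} = -1/\left( \mathbf{C}(\mathbf{A} - \mathbf{BK})^{-1}\mathbf{B} \right)\) makes the DC gain from \(r\) to \(y\) equal to one. A sketch (the gain values in the comments are hand-derived and should be re-checked):

```python
from fractions import Fraction as F

A = [[F(-2), F(1)], [F(0), F(-3)]]
B = [F(1), F(1)]
C = [F(1), F(3)]

# (a) target char. poly (s + 3 - 3j)(s + 3 + 3j) = s^2 + 6 s + 18.
# Matching trace(A - B K) = -5 - k1 - k2 = -6 and
# det(A - B K) = 6 + 4 k1 + 2 k2 = 18 gives k1 = 5, k2 = -4 (hand-derived).
k1, k2 = F(5), F(-4)
Acl = [[A[0][0] - B[0] * k1, A[0][1] - B[0] * k2],
       [A[1][0] - B[1] * k1, A[1][1] - B[1] * k2]]

# (b) DC gain from r to y is -C (A - B K)^{-1} B * Nbar; set it equal to 1.
det = Acl[0][0] * Acl[1][1] - Acl[0][1] * Acl[1][0]
inv = [[ Acl[1][1] / det, -Acl[0][1] / det],
       [-Acl[1][0] / det,  Acl[0][0] / det]]
invB = [inv[0][0] * B[0] + inv[0][1] * B[1],
        inv[1][0] * B[0] + inv[1][1] * B[1]]
Nbar = -1 / (C[0] * invB[0] + C[1] * invB[1])
print(Nbar)   # 9/5
```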

$\bigtriangleup \ $ 7.59 Consider a servomechanism for following the data track on a computer disk memory system. Because of various unavoidable mechanical imperfections, the data track is not exactly a centered circle, and thus the radial servo must follow a sinusoidal input of radian frequency \(\omega_{0}\) (the spin rate of the disk). The state matrices for a linearized model of such a system are

\[\mathbf{A} = \begin{bmatrix} 0 & 1 \\ 0 & - 1 \end{bmatrix},\ \mathbf{B} = \begin{bmatrix} 0 \\ 1 \end{bmatrix},\ \mathbf{C} = \begin{bmatrix} 1 & 3 \end{bmatrix}\]

The sinusoidal reference input satisfies \(\overset{¨}{r} = - \omega_{0}^{2}r\).

(a) Let \(\omega_{0} = 1\), and place the poles of the error system for an internal model design at

\[\alpha_{c}(s) = (s + 2 \pm j2)(s + 1 \pm j1) \]

and the pole of the reduced-order estimator at

\[\alpha_{e}(s) = (s + 6) \]

(b) Draw a block diagram of the system, and clearly show the presence of the oscillator with frequency \(\omega_{0}\) (the internal model) in the controller. Also verify the presence of the blocking zeros at \(\pm j\omega_{0}\).

(c) Use Matlab (Simulink) software to plot the time response of the system to a sinusoidal input at frequency \(\omega_{0} = 1\).

(d) Draw a Bode plot to show how this system will respond to sinusoidal inputs at frequencies different from but near \(\omega_{0}\).
\(\bigtriangleup \ 7.60\) Compute the controller transfer function [from \(Y(s)\) to \(U(s)\) ] in Example 7.38. What is the prominent feature of the controller that allows tracking and disturbance rejection?

$\bigtriangleup \ $ 7.61 Consider the pendulum problem with control torque \(T_{c}\) and disturbance torque \(T_{d}\) :

\[\overset{¨}{\theta} + 4\theta = T_{c} + T_{d}\text{.}\text{~} \]

(Here \(g/l = 4\).) Assume there is a potentiometer at the pin that measures the output angle \(\theta\), but with a constant unknown bias \(b\). Thus the measurement equation is \(y = \theta + b\).

(a) Take the "augmented" state vector to be

\[\begin{bmatrix} \theta \\ \overset{˙}{\theta} \\ w \end{bmatrix}\]

where \(w\) is the input-equivalent bias. Write the system equations in state-space form. Give values for the matrices \(\mathbf{A},\mathbf{B}\), and \(\mathbf{C}\).

(b) Using state-variable methods, show the characteristic equation of the model is \(s\left( s^{2} + 4 \right) = 0\).

(c) Show \(w\) is observable if we assume \(y = \theta\), and write the estimator equations for

\[\begin{bmatrix} \widehat{\theta} \\ \widehat{\overset{˙}{\theta}} \\ \widehat{w} \end{bmatrix}\]

Pick estimator gains \(\begin{bmatrix} \ell_{1} & \ell_{2} & \ell_{3} \end{bmatrix}^{T}\) to place all the roots of the estimator error characteristic equation at \(-10\).
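As an informal check on part (c): for the augmented state \(\begin{bmatrix}\theta & \overset{˙}{\theta} & w\end{bmatrix}^{T}\), the characteristic polynomial of \(\mathbf{A} - \mathbf{LC}\) works out (by hand) to \(s^{3} + \ell_{1}s^{2} + (4 + \ell_{2})s + \ell_{3}\), so placing all three roots at \(-10\) is direct coefficient matching:

```python
# Augmented pendulum with input-equivalent bias w:
#   A = [[0, 1, 0], [-4, 0, 1], [0, 0, 0]],  C = [1, 0, 0].
# Hand-derived: char. poly of A - L*C is s^3 + l1 s^2 + (4 + l2) s + l3;
# target (s + 10)^3 = s^3 + 30 s^2 + 300 s + 1000.
l1, l2, l3 = 30, 300 - 4, 1000
print(l1, l2, l3)   # 30 296 1000

# sanity check: s = -10 must be a root of the achieved polynomial
s = -10
print(s**3 + l1 * s**2 + (4 + l2) * s + l3)   # 0
```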

(d) Using full-state feedback of the estimated (controllable) state-variables, derive a control law to place the closed-loop poles at \(- 2 \pm j2\).

(e) Draw a block diagram of the complete closed-loop system (estimator, plant, and controller) using integrator blocks.

(f) Introduce the estimated bias into the control so as to yield zero steady-state error to the output bias \(b\). Demonstrate the performance of your design by plotting the response of the system to a step change in \(b\); that is, \(b\) changes from 0 to some constant value.

314. Problems for Section 7.10.3: Model-following Design

$\bigtriangleup \ $ 7.62 Consider the servomechanism problem where we wish to track a ramp reference signal. The plant and the desired model equations are

\[\begin{matrix} \overset{˙}{\mathbf{x}} & \ = \begin{bmatrix} 0 & 1 \\ 0 & - 1 \end{bmatrix}\mathbf{x} + \begin{bmatrix} 0 \\ 1 \end{bmatrix}u, \\ y & \ = \begin{bmatrix} 1 & 0 \end{bmatrix}\mathbf{x}, \end{matrix}\]

\[\begin{matrix} {\overset{˙}{\mathbf{x}}}_{m} & \ = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}\mathbf{x}_{m}, \\ y_{m} & \ = \begin{bmatrix} 1 & 0 \end{bmatrix}\mathbf{x}_{m} \end{matrix}\]

Design a model-following control law and demonstrate its tracking performance. Place the closed-loop poles at \(s = - 2 \pm j2\).

\(\bigtriangleup \ \mathbf{7.63}\) Implicit Model-following: Suppose we wish the closed-loop system to behave like a desired model, called the implicit model

\[\overset{˙}{\mathbf{z}} = \mathbf{A}_{m}\mathbf{z} \]

We may minimize a modified LQR performance index

\[\mathcal{J} = \int_{0}^{\infty}\mspace{2mu}\left\{ \left( \overset{˙}{y} - \mathbf{A}_{m}y \right)^{T}\mathbf{Q}_{1}\left( \overset{˙}{y} - \mathbf{A}_{m}y \right) + u^{T}\mathbf{R}u \right\} dt \]

Show this performance index is equivalent to the standard one with the addition of a cross-weighting term between the control and the state of the form

\[\mathcal{J} = \int_{0}^{\infty}\mspace{2mu}\left\{ \mathbf{x}^{T}\widehat{\mathbf{Q}}\mathbf{x} + 2u^{T}\widehat{\mathbf{S}}\mathbf{x} + u^{T}\widehat{\mathbf{R}}u \right\} dt \]

where

\[\begin{matrix} & \widehat{\mathbf{Q}} = \left( \mathbf{CA} - \mathbf{A}_{m}\mathbf{C} \right)^{T}\mathbf{Q}_{1}\left( \mathbf{CA} - \mathbf{A}_{m}\mathbf{C} \right) \\ & \widehat{\mathbf{S}} = \mathbf{B}^{T}\mathbf{C}^{T}\mathbf{Q}_{1}\left( \mathbf{CA} - \mathbf{A}_{m}\mathbf{C} \right) \\ & \widehat{\mathbf{R}} = \mathbf{R} + \mathbf{B}^{T}\mathbf{C}^{T}\mathbf{Q}_{1}\mathbf{CB} \end{matrix}\]

\(\bigtriangleup \ \mathbf{7.64}\) Explicit Model-Following: Suppose in the LQR problem, we wish the closed-loop system to behave as close as possible to a system of the form

\[\overset{˙}{\mathbf{z}} = \mathbf{A}_{m}\mathbf{z} \]

which represents the model of desirable dynamics. We may choose a performance index of the form

\[\mathcal{J} = \int_{0}^{\infty}\mspace{2mu}\left\{ (y - \mathbf{z})^{T}\mathbf{Q}_{1}(y - \mathbf{z}) + u^{T}\mathbf{R}u \right\} dt \]

(a) Show this performance index can be converted to the standard one by augmenting the states of the plant and the model: choose the augmented state vector \(\xi = \begin{bmatrix} \mathbf{x}^{T} & \mathbf{z}^{T} \end{bmatrix}^{T}\), and write down the system equations to show that

\[\mathcal{J} = \int_{0}^{\infty}\mspace{2mu}\left\{ \xi^{T}\mathbf{Q}\xi + u^{T}\mathbf{R}u \right\} dt \]

where

\[\mathbf{Q} = \begin{bmatrix} \mathbf{C}^{T}\mathbf{Q}_{1}\mathbf{C} & - \mathbf{C}^{T}\mathbf{Q}_{1} \\ - \mathbf{Q}_{1}\mathbf{C} & \mathbf{Q}_{1} \end{bmatrix}\]

(b) Which state variables of the system are uncontrollable? Is this result surprising?

(c) The optimal control is of the form

\[u = - \mathbf{K}_{1}\mathbf{x} - \mathbf{K}_{2}\mathbf{z} \]

which means that the model's equations must be implemented as part of the control law. Suppose we now drive the model as follows

\[\overset{˙}{\mathbf{z}} = \mathbf{A}_{m}\mathbf{z} + \mathbf{B}_{p}u_{p} \]

where \(u_{p}\) may be the pilot input in an aircraft system. Show that

\[\frac{Y(s)}{U_{p}(s)} = \underbrace{- \mathbf{C}\left( s\mathbf{I} - \mathbf{A} + \mathbf{B}\mathbf{K}_{1} \right)^{- 1}\mathbf{B}}_{\text{Closed-loop dynamics}}\ \underbrace{\mathbf{K}_{2}\left( s\mathbf{I} - \mathbf{A}_{m} \right)^{- 1}\mathbf{B}_{p}}_{\text{Feedforward dynamics}}. \]

This indicates that the feedforward dynamics may be used to improve the transient response of the system.

(d) What are the transmission zeros of the overall system?

(e) What is a possible disadvantage of this scheme compared to the standard LQR, that is, with no explicit model?

315. Problem for Section 7.13: Design for Systems with Pure Time Delay

\(\bigtriangleup \ \mathbf{7.65}\) Consider the system with the transfer function \(e^{- Ts}G(s)\), where

\[G(s) = \frac{1}{s(s + 1)(s + 2)} \]

The Smith compensator for this system is given by

\[D_{c}^{'}(s) = \frac{D_{c}(s)}{1 + \left( 1 - e^{- sT} \right)G(s)D_{c}(s)} \]

Plot the frequency response of the compensator for \(T = 5\) and \(D_{c}(s) = 1\), and draw a Bode plot that shows the gain and phase margins of the system. \(\ ^{21}\)
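Before plotting the full frequency response, it is worth sanity-checking the low-frequency limit: since \(G(s)\) has a pole at the origin, \(\left( 1 - e^{- sT} \right)G(s) \rightarrow T/2\) as \(s \rightarrow 0\), so \(\left| D_{c}^{'}(j\omega) \right| \rightarrow 1/(1 + T/2)\). A short numerical check in plain Python (not a full Bode plot):

```python
import cmath

T = 5.0

def G(s):
    return 1 / (s * (s + 1) * (s + 2))

def Dc_prime(s, Dc=1.0):
    # Smith compensator: D'_c(s) = D_c / (1 + (1 - e^{-sT}) G(s) D_c)
    return Dc / (1 + (1 - cmath.exp(-s * T)) * G(s) * Dc)

# Low-frequency limit: (1 - e^{-sT}) G(s) -> T/2 = 2.5 as s -> 0,
# so |D'_c| -> 1 / 3.5, about 0.2857
print(abs(Dc_prime(1e-6j)))
```

Sweeping `s = 1j * w` over a grid of frequencies with the same function gives the data for the requested frequency-response plot.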

316. Digital Control

317. A Perspective on Digital Control

Most of the controllers we have studied so far were described by the Laplace transform or differential equations, which, strictly speaking, are assumed to be built using analog electronics, such as that in Fig. 5.31. However, most control systems today use digital computers (usually microprocessors or microcontrollers) to implement the controllers. The intent of this chapter is to show how to implement a control system in a digital computer. The implementation leads to an average delay of half the sample period, and to a phenomenon called aliasing, both of which need to be addressed in the controller design.

Analog electronics can integrate and differentiate signals. In order for a digital computer to accomplish these tasks, the differential equations describing compensation must be approximated by reducing them to algebraic equations involving addition, division, and multiplication. This chapter expands on various ways to make these approximations. The resulting design can then be tuned up, if needed, using direct digital analysis. In some cases, it will pay to perform the design directly in the discrete-time domain.
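As a minimal illustration of such a difference equation (with illustrative values of gain and sample period, not taken from the text), a forward-Euler approximation of an integral control law \(\dot{u} = K_{I}e\) becomes a recursive algebraic update:

```python
# Forward-Euler difference equation for an integral control law u' = Ki * e:
#   u(k+1) = u(k) + T * Ki * e(k)
# Ki and T below are illustrative values, not from the text.
Ki, T = 2.0, 0.01
u = 0.0
e = 1.0                      # constant unit error
for k in range(100):         # simulate 1 second of samples
    u = u + T * Ki * e
print(round(u, 6))           # 2.0 -- the discrete integrator ramps at slope Ki*e
```

Only additions and multiplications appear in the loop, which is exactly what a digital computer can execute each sample period.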

You should be able to design, analyze, and implement a digital control system from the material in this chapter. However, our treatment here is a limited version of a complex subject covered in more detail in Digital Control of Dynamic Systems by Franklin et al. (1998).

318. Chapter Overview

In Section 8.1, we will describe the basic structure of digital control systems and introduce the issues that arise due to the sampling.

A digital implementation based on a discrete approximation of a continuous control law can be evaluated via Simulink to determine the degradation with respect to the continuous-time case. However, to fully understand the effect of sampling, it is useful to learn about discrete linear analysis tools. This requires an understanding of the \(z\)-transform, which we will discuss in Section 8.2. In Section 8.3 we will build on this understanding to provide a foundation for design using various discrete equivalents. Generally speaking, the discrete equivalents work well if the sampling rate is sufficiently fast. In Sections 8.4 and 8.5, we will discuss hardware characteristics and sample rate issues, both of which need to be addressed in order to implement a digital controller.

Discrete analysis also allows us to analytically determine the performance of the approximate discrete equivalent design without resorting to a numerical simulation, such as Simulink, as we do in the early examples. This analysis can then serve as a guide to tune up the designs, which will be described in Section 8.6. It is also possible to perform a direct digital design (also called discrete design), which provides an exact design method that is independent of whether the sample rate is fast or not. Direct digital design will be described in Section 8.7.

318.1. Digitization

Figure 8.1(a) shows the topology of the typical continuous system that we have been considering in previous chapters. The computation of the error signal \(e\) and the dynamic compensation \(D_{c}(s)\) can all be accomplished in a digital computer as shown in Fig. 8.1(b). The fundamental differences between the two implementations are that the digital system operates on samples of the sensed plant output rather than on the continuous signal, and that the continuous control provided by \(D_{c}(s)\), including any differentiation and integration, must be generated at discrete instants in time and approximated using numerical methods called difference equations. These equations are recursive algebraic calculations because computers are not capable of performing dynamic functions directly.

Figure 8.1

Block diagrams for a basic control system: (a) continuous system; (b) with a digital computer

Walking through the process in more detail, the analog output of the plant sensor is sampled and converted to a digital number in the analog-to-digital (A/D) converter. This device samples a physical variable, most commonly an electrical voltage, and converts the samples of the analog signal into a digital binary number that usually consists of 10 to 16 bits. Conversion from the continuous analog signal \(y(t)\) to the discrete digital samples, \(y(kT)\), occurs repeatedly at instants of time spaced \(T\) apart, where \(T\) is the sample period and \(1/T\) is the sample rate. If \(T\) is in seconds, \(1/T\) is the sample rate in hertz, denoted by \(f_{s}\). The sampled signal is \(y(kT)\), where \(k\) can take on any integer value. It is often written simply as \(y(k)\). We call this type of variable a discrete signal to distinguish it from a continuous signal such as \(y(t)\), which changes continuously in time. A system having both discrete and continuous signals is called a sampled-data system.

We make the assumption in this book that the sample period is fixed. In practice, digital control systems sometimes have varying sample periods and/or different periods in different feedback paths. Usually, the computer logic includes a clock that supplies a pulse, or interrupt, every \(T\) seconds, and the A/D converter sends a number to the computer each time the interrupt arrives. An alternative implementation, often referred to as free-running, is to access the A/D converter after each cycle of code execution has been completed. In the former case, the sample period is precisely fixed; in the latter case, the sample period is fixed essentially by the length of the code, provided that no logic branches are present, which could vary the amount of code executed. There also may be a sampler and an A/D converter for the input command \(r(t)\), which produces the discrete \(r(kT)\), from which the sensed output \(y(kT)\) will be subtracted to arrive at the discrete error signal \(e(kT)\).

The continuous compensation \(D_{c}(s)\) is approximated by difference equations, which are the discrete version of differential equations and can be made to duplicate the dynamic behavior of \(D_{c}(s)\) accurately if the sample rate is fast enough. The result of the difference equations is a discrete control signal \(u(kT)\) at each sample instant. This signal is converted to a continuous signal \(u(t)\) by the digital-to-analog (D/A) converter and the hold: the D/A converter changes the digital binary number to an analog voltage, and a zero-order hold maintains that same voltage throughout the sample period. The resulting control signal \(u(t)\) is then applied to the actuator in precisely the same manner as in the continuous implementation.

Zero-order hold (ZOH)

There are two basic techniques for finding the difference equations for the digital controller. One technique, called the discrete equivalent, consists of designing a continuous compensation \(D_{c}(s)\) using the methods described in the previous chapters, then approximating that \(D_{c}(s)\) using one of the methods to be described in Section 8.3. The other technique is discrete design, to be described in Section 8.7. Here, the difference equations are found directly without designing \(D_{c}(s)\) first.

Sample rate selection

The sample rate required depends on the closed-loop bandwidth of the system. Generally, sample rates should be at least 20 times the bandwidth \(\omega_{BW}\) in order to assure that the digital controller will match the performance of the continuous controller. Slower sample rates can be used if some adjustments are made in the digital controller or some performance degradation is acceptable. Use of the discrete design method allows for a much slower sample rate if that is desirable to minimize hardware costs; however, the best performance of a digital controller is obtained when the sample rate is greater than 25 times the bandwidth.

Figure 8.2

The delay due to the hold operation

It is worth noting that the single most important impact of implementing a control system digitally is the delay associated with the hold. Because each value of \(u(kT)\) in Fig. 8.1(b) is held constant until the next value is available from the computer, the continuous value of \(u(t)\) consists of steps (see Fig. 8.2) that, on average, are delayed from a fit to \(u(kT)\) by \(T/2\), as shown in the figure. If we simply incorporate this \(T/2\) delay into a continuous analysis of the system, an excellent prediction of the effects of sampling results for sample rates much slower than 20 times the bandwidth. We will discuss this further in Section 8.3.5.

Figure 8.3

A continuous, sampled version of signal \(f\)

318.2. Dynamic Analysis of Discrete Systems

The \(z\)-transform is the mathematical tool for the analysis of linear discrete systems. It plays the same role for discrete-time systems that the Laplace transform does for continuous-time systems. This section will give a short description of the \(z\)-transform, describe its use in analyzing discrete systems, and show how it relates to the Laplace transform.

318.3. z-Transform

In the analysis of continuous-time systems, we use the Laplace transform, which is defined by

\[\mathcal{L}\{ f(t)\} = F(s) = \int_{0}^{\infty}\mspace{2mu} f(t)e^{- st}dt \]

which leads directly to the important property that (with zero initial conditions)

\[\mathcal{L}\{\overset{˙}{f}(t)\} = sF(s) \]

Equation (8.1) enables us easily to find the transfer function of a linear continuous-time system, given the differential equation description of that system.

For discrete systems a similar procedure is available. The \(z\) transform is defined by

\[\mathcal{Z}\{ f(k)\} = F(z) = \sum_{k = 0}^{\infty}\mspace{2mu} f(k)z^{- k} \]

where \(f(k)\) is the sampled version of \(f(t)\), as shown in Fig. 8.3, and \(k = 0,1,2,3,\ldots\) refers to discrete sample times \(t_{0},t_{1},t_{2},t_{3},\ldots\) This leads directly to a property analogous to Eq. (8.1), specifically, that

\[\mathcal{Z}\{ f(k - 1)\} = z^{- 1}F(z) \]

where \(z^{- 1}\) represents one sample delay. This relation allows us to easily find the transfer function of a discrete system, given the difference equations of that system. For example, the general second-order difference equation

\[y(k) = - a_{1}y(k - 1) - a_{2}y(k - 2) + b_{0}u(k) + b_{1}u(k - 1) + b_{2}u(k - 2) \]

can be converted from this form to the \(z\)-transform of the variables \(y(k)\), \(u(k),\ldots\) by invoking Eq. (8.3) once or twice to arrive at

\[Y(z) = \left( - a_{1}z^{- 1} - a_{2}z^{- 2} \right)Y(z) + \left( b_{0} + b_{1}z^{- 1} + b_{2}z^{- 2} \right)U(z) \]

Equation (8.4) then results in the discrete transfer function

\[\frac{Y(z)}{U(z)} = \frac{b_{0} + b_{1}z^{- 1} + b_{2}z^{- 2}}{1 + a_{1}z^{- 1} + a_{2}z^{- 2}} \]
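As a quick check of the correspondence between Eq. (8.4) and the difference equation, the recursion can be evaluated directly for a unit-pulse input. A minimal Python sketch, with illustrative coefficient values (not taken from the text):

```python
# Sketch: evaluating the general second-order difference equation
#   y(k) = -a1*y(k-1) - a2*y(k-2) + b0*u(k) + b1*u(k-1) + b2*u(k-2)
# for a given input sequence. Coefficient values are illustrative only.

def simulate(a, b, u):
    """a = [a1, a2], b = [b0, b1, b2]; returns y(0), ..., y(len(u)-1)."""
    y = []
    for k in range(len(u)):
        yk = b[0] * u[k]
        if k >= 1:
            yk += -a[0] * y[k - 1] + b[1] * u[k - 1]
        if k >= 2:
            yk += -a[1] * y[k - 2] + b[2] * u[k - 2]
        y.append(yk)
    return y

# Unit-pulse input: u(0) = 1, u(k) = 0 otherwise
u = [1.0] + [0.0] * 9
# With a1 = -0.5, a2 = 0.06: y(k) = 0.5 y(k-1) - 0.06 y(k-2) + u(k)
y = simulate([-0.5, 0.06], [1.0, 0.0, 0.0], u)
```

The resulting sequence is the system's pulse response, which is exactly what the long-division inversion of Eq. (8.5) would produce term by term.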

318.4. z-Transform Inversion

Table 8.1 relates simple discrete-time functions to their \(z\)-transforms and gives the Laplace transforms for the same time functions.

Given a general \(z\)-transform, we could expand it into a sum of elementary terms using partial-fraction expansion (see Appendix A.1.2) and find the resulting time series from the table. These procedures are exactly the same as those used for continuous-time systems. However, as with the continuous case, most designers would use a numerical evaluation of the discrete equations to obtain a time history rather than inverting the \(z\)-transform.

A \(z\)-transform inversion technique that has no continuous counterpart is called long division. Given the \(z\)-transform

\[Y(z) = \frac{N(z)}{D_{d}(z)} \]

we simply divide the denominator into the numerator using long division. The result is a series (perhaps with an infinite number of terms) in \(z^{- 1}\), from which the time series can be found using Eq. (8.2).

For example, a first-order discrete system described by the difference equation

\[y(k) = \alpha y(k - 1) + u(k) \]

yields the discrete transfer function

\[\frac{Y(z)}{U(z)} = \frac{1}{1 - \alpha z^{- 1}} \]

For a unit-pulse input defined by

\[\begin{matrix} & u(0) = 1 \\ & u(k) = 0,\ k \neq 0 \end{matrix}\]

the \(z\)-transform is then

\[U(z) = 1, \]

so

\[Y(z) = \frac{1}{1 - \alpha z^{- 1}} \]

Table 8.1

| No. | $$F(s)$$ | $$f(kT)$$ | $$F(z)$$ |
| --- | --- | --- | --- |
| 1 | — | $$1,\ k = 0;\ 0,\ k \neq 0$$ | $$1$$ |
| 2 | — | $$1,\ k = k_{o};\ 0,\ k \neq k_{o}$$ | $$z^{- k_{o}}$$ |
| 3 | $$\frac{1}{s}$$ | $$1(kT)$$ | $$\frac{z}{z - 1}$$ |
| 4 | $$\frac{1}{s^{2}}$$ | $$kT$$ | $$\frac{Tz}{(z - 1)^{2}}$$ |
| 5 | $$\frac{1}{s^{3}}$$ | $$\frac{1}{2!}(kT)^{2}$$ | $$\frac{T^{2}}{2}\left\lbrack \frac{z(z + 1)}{(z - 1)^{3}} \right\rbrack$$ |
| 6 | $$\frac{1}{s^{4}}$$ | $$\frac{1}{3!}(kT)^{3}$$ | $$\frac{T^{3}}{6}\left\lbrack \frac{z\left( z^{2} + 4z + 1 \right)}{(z - 1)^{4}} \right\rbrack$$ |
| 7 | $$\frac{1}{s^{m}}$$ | $$\lim_{a \rightarrow 0}\frac{( - 1)^{m - 1}}{(m - 1)!}\left( \frac{\partial^{m - 1}}{\partial a^{m - 1}}e^{- akT} \right)$$ | $$\lim_{a \rightarrow 0}\frac{( - 1)^{m - 1}}{(m - 1)!}\left( \frac{\partial^{m - 1}}{\partial a^{m - 1}}\frac{z}{z - e^{- aT}} \right)$$ |
| 8 | $$\frac{1}{s + a}$$ | $$e^{- akT}$$ | $$\frac{z}{z - e^{- aT}}$$ |
| 9 | $$\frac{1}{(s + a)^{2}}$$ | $$kTe^{- akT}$$ | $$\frac{Tze^{- aT}}{\left( z - e^{- aT} \right)^{2}}$$ |
| 10 | $$\frac{1}{(s + a)^{3}}$$ | $$\frac{1}{2}(kT)^{2}e^{- akT}$$ | $$\frac{T^{2}}{2}e^{- aT}\frac{z\left( z + e^{- aT} \right)}{\left( z - e^{- aT} \right)^{3}}$$ |
| 11 | $$\frac{1}{(s + a)^{m}}$$ | $$\frac{( - 1)^{m - 1}}{(m - 1)!}\left( \frac{\partial^{m - 1}}{\partial a^{m - 1}}e^{- akT} \right)$$ | $$\frac{( - 1)^{m - 1}}{(m - 1)!}\left( \frac{\partial^{m - 1}}{\partial a^{m - 1}}\frac{z}{z - e^{- aT}} \right)$$ |
| 12 | $$\frac{a}{s(s + a)}$$ | $$1 - e^{- akT}$$ | $$\frac{z\left( 1 - e^{- aT} \right)}{(z - 1)\left( z - e^{- aT} \right)}$$ |
| 13 | $$\frac{a}{s^{2}(s + a)}$$ | $$\frac{1}{a}\left( akT - 1 + e^{- akT} \right)$$ | $$\frac{z\left\lbrack \left( aT - 1 + e^{- aT} \right)z + \left( 1 - e^{- aT} - aTe^{- aT} \right) \right\rbrack}{a(z - 1)^{2}\left( z - e^{- aT} \right)}$$ |
| 14 | $$\frac{b - a}{(s + a)(s + b)}$$ | $$e^{- akT} - e^{- bkT}$$ | $$\frac{\left( e^{- aT} - e^{- bT} \right)z}{\left( z - e^{- aT} \right)\left( z - e^{- bT} \right)}$$ |
| 15 | $$\frac{s}{(s + a)^{2}}$$ | $$(1 - akT)e^{- akT}$$ | $$\frac{z\left\lbrack z - e^{- aT}(1 + aT) \right\rbrack}{\left( z - e^{- aT} \right)^{2}}$$ |
| 16 | $$\frac{a^{2}}{s(s + a)^{2}}$$ | $$1 - e^{- akT}(1 + akT)$$ | $$\frac{z\left\lbrack z\left( 1 - e^{- aT} - aTe^{- aT} \right) + e^{- 2aT} - e^{- aT} + aTe^{- aT} \right\rbrack}{(z - 1)\left( z - e^{- aT} \right)^{2}}$$ |
| 17 | $$\frac{(b - a)s}{(s + a)(s + b)}$$ | $$be^{- bkT} - ae^{- akT}$$ | $$\frac{z\left\lbrack z(b - a) - \left( be^{- aT} - ae^{- bT} \right) \right\rbrack}{\left( z - e^{- aT} \right)\left( z - e^{- bT} \right)}$$ |
| 18 | $$\frac{a}{s^{2} + a^{2}}$$ | $$\sin akT$$ | $$\frac{z\sin aT}{z^{2} - (2\cos aT)z + 1}$$ |
| 19 | $$\frac{s}{s^{2} + a^{2}}$$ | $$\cos akT$$ | $$\frac{z(z - \cos aT)}{z^{2} - (2\cos aT)z + 1}$$ |
| 20 | $$\frac{s + a}{(s + a)^{2} + b^{2}}$$ | $$e^{- akT}\cos bkT$$ | $$\frac{z\left( z - e^{- aT}\cos bT \right)}{z^{2} - 2e^{- aT}(\cos bT)z + e^{- 2aT}}$$ |
| 21 | $$\frac{b}{(s + a)^{2} + b^{2}}$$ | $$e^{- akT}\sin bkT$$ | $$\frac{ze^{- aT}\sin bT}{z^{2} - 2e^{- aT}(\cos bT)z + e^{- 2aT}}$$ |
| 22 | $$\frac{a^{2} + b^{2}}{s\left\lbrack (s + a)^{2} + b^{2} \right\rbrack}$$ | $$1 - e^{- akT}\left( \cos bkT + \frac{a}{b}\sin bkT \right)$$ | $$\frac{z(Az + B)}{(z - 1)\left\lbrack z^{2} - 2e^{- aT}(\cos bT)z + e^{- 2aT} \right\rbrack}$$ |

$$A = 1 - e^{- aT}\cos bT - \frac{a}{b}e^{- aT}\sin bT$$

$$B = e^{- 2aT} + \frac{a}{b}e^{- aT}\sin bT - e^{- aT}\cos bT$$

\(F(s)\) is the Laplace transform of \(f(t)\), and \(F(z)\) is the \(z\)-transform of \(f(kT)\).

Note: \(f(t) = 0\) for \(t < 0\).

Therefore, to find the time series, we divide the numerator of Eq. (8.8) by its denominator using long division:

\[\begin{array}{r@{\;}l}
 & 1 + \alpha z^{- 1} + \alpha^{2}z^{- 2} + \alpha^{3}z^{- 3} + \cdots \\
1 - \alpha z^{- 1}\ \big) & 1 \\
 & \underline{1 - \alpha z^{- 1}} \\
 & \phantom{1 - {}}\alpha z^{- 1} + 0 \\
 & \phantom{1 - {}}\underline{\alpha z^{- 1} - \alpha^{2}z^{- 2}} \\
 & \phantom{1 - \alpha z^{- 1} + {}}\alpha^{2}z^{- 2} + 0 \\
 & \phantom{1 - \alpha z^{- 1} + {}}\underline{\alpha^{2}z^{- 2} - \alpha^{3}z^{- 3}} \\
 & \phantom{1 - \alpha z^{- 1} + \alpha^{2}z^{- 2} + {}}\alpha^{3}z^{- 3}
\end{array}\]

This yields the infinite series

\[Y(z) = 1 + \alpha z^{- 1} + \alpha^{2}z^{- 2} + \alpha^{3}z^{- 3} + \cdots. \]

From Eqs. (8.9) and (8.2), we see the sampled time history of \(y\) is

\[\begin{matrix} & y(0) = 1, \\ & y(1) = \alpha, \\ & y(2) = \alpha^{2}, \\ & \ \vdots \\ & y(k) = \alpha^{k}, \end{matrix}\]

which also could have been easily calculated for this simple example by directly evaluating Eq. (8.6).
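The long-division result is easy to check numerically. The sketch below (Python, with an illustrative value of \(\alpha\)) evaluates the difference equation of Eq. (8.6) directly and also carries out the polynomial long division, confirming both give \(y(k) = \alpha^{k}\):

```python
# Sketch: the unit-pulse response of y(k) = alpha*y(k-1) + u(k) matches
# the coefficients produced by long division of 1/(1 - alpha z^-1).
# alpha here is an illustrative value, not from the text.
alpha = 0.8
N = 8

# Direct evaluation of the difference equation for a unit-pulse input
u = [1.0] + [0.0] * (N - 1)
y = []
for k in range(N):
    y.append(alpha * (y[k - 1] if k else 0.0) + u[k])

# Long division: divide numerator [1] by denominator [1, -alpha]
rem = [1.0] + [0.0] * (N - 1)   # running remainder coefficients
series = []
for k in range(N):
    c = rem[k]                  # leading denominator coefficient is 1
    series.append(c)
    if k + 1 < N:
        rem[k + 1] -= c * (-alpha)
```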

318.4.1. Relationship Between \(s\) and \(z\)

For continuous-time systems, we saw in Chapter 3 that certain behaviors result from different pole locations in the s-plane: oscillatory behavior for poles near the imaginary axis, exponential decay for poles on the negative real axis, and unstable behavior for poles with a positive real part. A similar kind of association would also be useful to know when designing discrete systems. Consider the continuous signal

\[f(t) = e^{- at},\ t > 0 \]

which has the Laplace transform

\[F(s) = \frac{1}{s + a} \]

and corresponds to a pole at \(s = - a\). The \(z\)-transform of \(f(kT)\) is

\[F(z) = \mathcal{Z}\left\{ e^{- akT} \right\} \]

Relationship between \(z\)-plane and \(s\)-plane characteristics

From Table 8.1, we can see that Eq. (8.10) is equivalent to

\[F(z) = \frac{z}{z - e^{- aT}} \]

which corresponds to a pole at \(z = e^{- aT}\). This means that a pole at \(s = - a\) in the s-plane corresponds to a pole at \(z = e^{- aT}\) in the discrete domain. This is true in general, which is shown in more detail in Franklin et al. (1998). The important result is:

The equivalent characteristics in the \(z\)-plane are related to those in the \(s\)-plane by the expression

\[z = e^{sT} \]

where \(T\) is the sample period.

Table 8.1 also includes the Laplace transforms, which demonstrates the \(z = e^{sT}\) relationship for the roots of the denominators of the table entries for \(F(s)\) and \(F(z)\).

Figure 8.4 shows the mapping of lines of constant damping \(\zeta\) and natural frequency \(\omega_{n}\) from the \(s\)-plane to the upper half of the \(z\)-plane, using Eq. (8.11). The mapping also has several other important features (see Problem 8.4):

Figure 8.4

Natural frequency (solid color) and damping loci (light color) in the z-plane; the portion below the \(Re(z)\)-axis (not shown) is the mirror image of the upper half shown

  1. The stability boundary ( \(s = 0 \pm j\omega\) in the \(s\)-plane) becomes the unit circle \(|z| = 1\) in the \(z\)-plane; inside the unit circle is stable, outside is unstable.

  2. The small vicinity around \(z = + 1\) in the \(z\)-plane is essentially identical to the vicinity around the origin, \(s = 0\), in the \(s\)-plane.

  3. The \(z\)-plane locations give response information normalized to the sample rate rather than to time as in the s-plane.

  4. The negative real \(z\)-axis always represents a frequency of \(\omega_{s}/2\), where \(\omega_{s} = 2\pi/T =\) sample rate in radians per second when \(T\) is in seconds.

  5. Vertical lines in the left half of the \(s\)-plane (the constant real part of \(s\) or time constant) map into circles within the unit circle of the \(z\)-plane.

  6. Horizontal lines in the s-plane (the constant imaginary part of \(s\) or frequency) map into radial lines in the \(z\)-plane.

  7. Frequencies greater than \(\omega_{s}/2\), called the Nyquist frequency \(\ ^{1}\), appear in the \(z\)-plane on top of corresponding lower frequencies because of the circular character of the trigonometric functions imbedded in Eq. (8.11). This overlap is called aliasing or folding. As a result it is necessary to sample at least twice as fast as a signal's highest frequency component in order to represent that signal with the samples. (We will discuss aliasing in greater detail in Section 8.4.3.)
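The mapping \(z = e^{sT}\) and the aliasing property in item 7 can be verified numerically; a minimal Python sketch, with an arbitrary sample period and pole location:

```python
import cmath
import math

T = 0.1                        # sample period (s), illustrative value
ws = 2 * math.pi / T           # sample rate (rad/s)

def s_to_z(s):
    """Map an s-plane location to the z-plane via Eq. (8.11)."""
    return cmath.exp(s * T)

# A stable s-plane pole maps inside the unit circle
s1 = complex(-2.0, 15.0)       # 15 rad/s is below the Nyquist frequency ws/2
z1 = s_to_z(s1)

# Aliasing: shifting the frequency by the full sample rate gives the same z
z2 = s_to_z(s1 + complex(0.0, ws))

# Recovering s from z (principal branch of the log) works only below Nyquist
s_back = cmath.log(z1) / T
```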

To provide insight into the correspondence between \(z\)-plane locations and the resulting time sequence, Fig. 8.5 sketches time responses that would result from poles at the indicated locations. This figure is the discrete counterpart of Fig. 3.16.

318.4.2. Final Value Theorem

The Final Value Theorem for continuous-time systems, discussed in Section 3.1.6, states that

\[\lim_{t \rightarrow \infty}\mspace{2mu} x(t) = x_{ss} = \lim_{s \rightarrow 0}\mspace{2mu} sX(s) \]

as long as all the poles of \(sX(s)\) are in the left half-plane (LHP). It is often used to find steady-state system errors and/or steady-state gains of portions of a control system. We can obtain a similar relationship for discrete systems by noting a constant continuous steady-state response is denoted by \(X(s) = A/s\) and leads to the multiplication by \(s\) in Eq. (8.12). Therefore, because the constant steady-state response for discrete systems is

\[X(z) = \frac{A}{1 - z^{- 1}}, \]

\(\ ^{1}\) Nyquist frequency \(= \omega_{s}/2\)

Figure 8.5

Time sequences associated with points in the \(z\)-plane

Final Value Theorem for discrete systems

the discrete Final Value Theorem is

\[\lim_{k \rightarrow \infty}\mspace{2mu} x(k) = x_{ss} = \lim_{z \rightarrow 1}\mspace{2mu}\left( 1 - z^{- 1} \right)X(z) \]

if all the poles of \(\left( 1 - z^{- 1} \right)X(z)\) are inside the unit circle.

For example, to find the DC gain of the transfer function

\[G(z) = \frac{X(z)}{U(z)} = \frac{0.58(1 + z)}{z + 0.16} \]

we let \(u(k) = 1\) for \(k \geq 0\), so

\[U(z) = \frac{1}{1 - z^{- 1}} \]

and

\[X(z) = \frac{0.58(1 + z)}{\left( 1 - z^{- 1} \right)(z + 0.16)} \]

\(DC\) gain

Applying the Final Value Theorem yields

\[x_{ss} = \lim_{z \rightarrow 1}\mspace{2mu}\left\lbrack \frac{0.58(1 + z)}{z + 0.16} \right\rbrack = 1 \]

so the DC gain of \(G(z)\) is unity. To find the DC gain of any stable transfer function, we simply substitute \(z = 1\) and compute the resulting gain. Because the DC gain of a system should not change whether represented continuously or discretely, this calculation is an excellent aid to check that an equivalent discrete controller matches a continuous controller. It is also a good check on the calculations associated with determining the discrete model of a system.

Stages in design using discrete equivalents

Figure 8.6

Comparison of (a) digital and (b) continuous implementation
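The DC-gain check can also be done numerically. The sketch below substitutes \(z = 1\) and, as a cross-check of the discrete Final Value Theorem, simulates the step response of the difference equation corresponding to \(G(z)\) until it settles:

```python
# Sketch: DC gain of G(z) = 0.58(1+z)/(z+0.16), computed two ways.

def G(z):
    return 0.58 * (1 + z) / (z + 0.16)

dc_gain = G(1.0)               # substitute z = 1

# G(z) corresponds to the difference equation
#   y(k) = -0.16 y(k-1) + 0.58 u(k) + 0.58 u(k-1)
# Simulate its unit-step response; the steady state is the DC gain.
y_prev, u_prev = 0.0, 0.0
for k in range(200):
    u = 1.0
    y = -0.16 * y_prev + 0.58 * u + 0.58 * u_prev
    y_prev, u_prev = y, u

x_ss = y_prev                  # settled step-response value
```

Both computations give unity, as found above by the Final Value Theorem.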

318.5. Design Using Discrete Equivalents

Design by discrete equivalent, sometimes called emulation, proceeds through the following stages:

  1. Design a continuous compensation, as described in Chapters 1 through 7.

  2. Find the discrete equivalent that, when implemented with the system described by Fig. 8.1(b), best approximates the continuous compensation.

  3. Use discrete analysis, simulation, or experimentation to verify the design.

Assume we are given a continuous compensation \(D_{c}(s)\), as shown in Fig. 8.1(a). We wish to find a set of difference equations or \(D_{d}(z)\) for the digital implementation of that compensation in Fig. 8.1(b). First, we rephrase the problem as one of finding the best \(D_{d}(z)\) in the digital implementation shown in Fig. 8.6(a) to match the continuous system represented by \(D_{c}(s)\) in Fig. 8.6(b). In this section, we examine and compare four methods for solving this problem.

It is important to remember, as stated earlier, that these methods are approximations; there is no exact solution for all possible inputs because \(D_{c}(s)\) responds to the complete time history of \(e(t)\), whereas \(D_{d}(z)\) has access to only the samples \(e(kT)\). In a sense, the various digitization techniques simply make different assumptions about what happens to \(e(t)\) between the sample points.

318.5.1. Tustin's Method

Tustin's method is a digitization technique that approaches the problem as one of numerical integration. Suppose


\[\frac{U(s)}{E(s)} = D_{c}(s) = \frac{1}{s} \]

which is integration. Therefore,

\[u(kT) = \int_{0}^{kT - T}\mspace{2mu} e(t)dt + \int_{kT - T}^{kT}\mspace{2mu} e(t)dt \]

which can be rewritten as

\[u(kT) = u(kT - T) + \text{~}\text{area under}\text{~}e(t)\text{~}\text{over last period,}\text{~}T \]

where \(T\) is the sample period.

For Tustin's method, the task at each step is to use trapezoidal integration, that is, to approximate \(e(t)\) by a straight line between the two samples (see Fig. 8.7). Writing \(u(kT)\) as \(u(k)\) and \(u(kT - T)\) as \(u(k - 1)\) for short, we convert Eq. (8.15) to

\[u(k) = u(k - 1) + \frac{T}{2}\lbrack e(k - 1) + e(k)\rbrack \]

or, taking the \(z\)-transform,

\[\frac{U(z)}{E(z)} = \frac{T}{2}\left( \frac{1 + z^{- 1}}{1 - z^{- 1}} \right) = \frac{1}{\frac{2}{T}\left( \frac{1 - z^{- 1}}{1 + z^{- 1}} \right)} \]

For \(D_{c}(s) = a/(s + a)\), applying the same integration approximation yields

\[D_{d}(z) = \frac{a}{\frac{2}{T}\left( \frac{1 - z^{- 1}}{1 + z^{- 1}} \right) + a}\text{.}\text{~} \]

In fact, substituting

\[s = \frac{2}{T}\left( \frac{1 - z^{- 1}}{1 + z^{- 1}} \right) \]

for every occurrence of \(s\) in any \(D_{c}(s)\) yields a \(D_{d}(z)\) based on the trapezoidal integration formula. This is called Tustin's method or the bilinear approximation. Finding Tustin's approximation by hand for even a simple transfer function requires fairly extensive algebraic manipulation. The c2d function of Matlab expedites the process, as shown in the next example.

Tustin's method or bilinear approximation

Figure 8.7

Trapezoidal integration in Tustin's method
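For a first-order compensation, the substitution of Eq. (8.18) can be carried out symbolically once and reused. A minimal Python sketch (a hand-rolled check on the algebra for first-order transfer functions only, not a replacement for c2d):

```python
# Sketch of the bilinear (Tustin) substitution s = (2/T)(1 - z^-1)/(1 + z^-1)
# applied to a first-order compensator
#   Dc(s) = (c1 s + c0)/(d1 s + d0),
# giving Dd(z) = (n0 + n1 z^-1)/(1 + m1 z^-1).

def tustin_first_order(c1, c0, d1, d0, T):
    w = 2.0 / T
    n0, n1 = c1 * w + c0, c0 - c1 * w     # numerator in powers of z^-1
    m0, m1 = d1 * w + d0, d0 - d1 * w     # denominator in powers of z^-1
    # Normalize so the denominator reads 1 + (m1/m0) z^-1
    return n0 / m0, n1 / m0, m1 / m0

# Dc(s) = 10(s/2 + 1)/(s/10 + 1) = (5s + 10)/(0.1s + 1), T = 0.025 s
n0, n1, m1 = tustin_first_order(5.0, 10.0, 0.1, 1.0, 0.025)
```

With these values the function reproduces the coefficients 45.56, −43.33, and −0.7778 that appear in the example that follows.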

EXAMPLE 8.1

Digital Controller for Example 6.15 Using Tustin's Approximation

Determine the difference equations to implement the compensation from Example 6.15,

\[D_{c}(s) = 10\frac{s/2 + 1}{s/10 + 1} \]

at a sample rate of 25 times bandwidth using Tustin's approximation. Compare the performance against the continuous system done in Example 6.15.

Solution. The bandwidth, \(\omega_{BW}\), for Example 6.15 is approximately \(10rad/sec\), as can be deduced by observing that the crossover frequency \(\left( \omega_{c} \right)\) is approximately \(5rad/sec\) and noting the relationship between \(\omega_{c}\) and \(\omega_{BW}\) in Fig. 6.50. Therefore, the sample frequency should be

\[\omega_{S} = 25 \times \omega_{BW} = (25)(10) = 250rad/sec. \]

Normally, when a frequency is indicated with the units of cycles per second, or \(Hz\), it is given the symbol \(f\), so with this convention, we have

\[f_{s} = \omega_{s}/(2\pi) \simeq 40\text{ }Hz \]

and the sample period is then

\[T = 1/f_{s} = 1/40 = 0.025sec \]

The discrete compensation is computed by the Matlab statements

```matlab
s = tf('s');
sysDc = 10*(s/2 + 1)/(s/10 + 1);
T = 0.025;
sysDd = c2d(sysDc, T, 'tustin')
```

which produces

\[D_{d}(z) = \frac{45.56 - 43.33z^{- 1}}{1 - 0.7778z^{- 1}} \]

We can then write the difference equation by inspecting Eq. (8.19) to get

\[u(k) = 0.7778u(k - 1) + 45.56e(k) - 43.33e(k - 1), \]

or,

\[u(k) = 0.7778u(k - 1) + 45.56\lbrack e(k) - 0.9510e(k - 1)\rbrack \]

Equation (8.20) computes the new value of the control, \(u(k)\), given the past value of the control, \(u(k - 1)\), and the new and past values of the error signal, \(e(k)\) and \(e(k - 1)\).

Figure 8.8

Simulink block diagram for transient response of lead-compensation designs with discrete and analog implementations

Figure 8.9

Comparison between the digital (using Tustin's) and the continuous controller step response with a sample rate 25 times the bandwidth: (a) position; (b) control signal

In principle, the difference equation is evaluated initially with \(k = 0\), then \(k = 1,2,3,\ldots\) However, there is usually no requirement that values for all times be saved in memory. Therefore, the computer only needs to have variables defined for the current and past values. The instructions to the computer to implement the feedback loop in Fig. 8.1(b) with the difference equation from Eq. (8.20) would call for a continual looping through the following code:

READ A/D: \(y,r\)

\(e = r - y\)

\(u = 0.7778u_{p} + 45.56\left\lbrack e - 0.9510e_{p} \right\rbrack\)

OUTPUT D/A: \(u\)

\(u_{p} = u\) (where \(u_{p}\) will be the past value for the next loop through)

\(e_{p} = e\)

go back to READ when \(T\) sec have elapsed since last READ.
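The loop above translates almost line for line into code. A minimal Python sketch in which the A/D and D/A operations are replaced by function arguments and a return value (the hardware calls and the real-time wait are omitted):

```python
# Sketch: the difference-equation controller of Eq. (8.20) as a function
# that keeps its own past values, mirroring the READ/OUTPUT loop above.

def make_controller():
    up, ep = 0.0, 0.0              # past control and past error
    def step(r, y):
        nonlocal up, ep
        e = r - y                  # error from sampled command and output
        u = 0.7778 * up + 45.56 * (e - 0.9510 * ep)
        up, ep = u, e              # save past values for the next sample
        return u
    return step

controller = make_controller()
# First sample with unit command and zero sensed output: e = 1
u0 = controller(1.0, 0.0)
```

In a real implementation, `step` would be called once per interrupt, with \(r\) and \(y\) read from the A/D converter and the returned \(u\) written to the D/A converter.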

To evaluate this discrete controller, we use Simulink to compare the two implementations. Figure 8.8 shows the block diagram for the comparison, and the results of the step responses are shown in Fig. 8.9.


Note that sampling at 25 times the bandwidth causes the digital implementation to match the continuous one quite well. Generally speaking, if you want to match a continuous system with a digital approximation of the continuous compensation, a conservative approach is to sample at approximately 25 times the bandwidth or faster.

318.5.2. Zero-Order Hold (ZOH) Method

Tustin's method essentially assumed that the input to the controller varied linearly between the past sample and the current one, as shown in Fig. 8.7. Another possible assumption is that the input to the controller remains constant throughout the sample period. In other words, for purposes of this design approximation, we assume that the \(D_{c}(s)\) in Fig. 8.1(a) is preceded by a ZOH whose function is to accept the value of \(e\) at sample time \(k\) and hold that value constant until \(k + 1\). This is not the actual case; rather, the only ZOH in the system is the one preceding the plant, \(G(s)\), as shown in Fig. 8.1(b). With this assumption, there is an exact discrete equivalent for this system because the ZOH precisely describes what happens between samples of \(e\), and the output \(u\) depends only on the input at the sample times, \(e(k)\).

For a controller described by \(D_{c}(s)\) preceded by a ZOH, a single input sample \(e(k)\) is equivalent to a positive step of height \(e(k)\) at sample time \(k\) followed by a negative step of the same height one cycle later; in other words, one input sample produces a square pulse of height \(e(k)\) that lasts for one sample period. Because a constant positive step applied at time \(k\) has the transform \(E(s) = e(k)/s\), the sampled response to it has the \(z\)-transform

\[\mathcal{Z}\left\{ \frac{D_{c}(s)}{s} \right\} \]

where \(\mathcal{Z}\{ F(s)\}\) is the \(z\)-transform of the sampled time series whose Laplace transform is the expression for \(F(s)\), that is, it is given on the same line in Table 8.1. Furthermore, the response to the constant negative step, one cycle delayed, would be

\[- z^{- 1}\mathcal{Z}\left\{ \frac{D_{c}(s)}{s} \right\} \]

Therefore, the discrete transfer function for the square pulse is

\[D_{d}(z) = \left( 1 - z^{- 1} \right)\mathcal{Z}\left\{ \frac{D_{c}(s)}{s} \right\} \]

For a more complete derivation, see Chapter 4 in Franklin et al. (1998). Equation (8.23) provides us with a discrete approximation to \(D_{c}(s)\) and determines the difference equations to be used in Fig. 8.1(b).

319. EXAMPLE 8.2

320. Digital Controller for Example 6.15 Using the \(ZOH\) Approximation

Again, determine the difference equations to implement the compensation from Example 6.15,

\[D_{c}(s) = 10\frac{s/2 + 1}{s/10 + 1} \]

at a sample rate of 25 times the bandwidth using the \(ZOH\) approximation. Compare the performance against the continuous system done in Example 6.15 and with the results of Example 8.1.

Solution. The bandwidth is the same as the previous example, so the sample period is unchanged

\[T = 0.025sec. \]

The discrete compensation is computed by the same Matlab statements, but this time we use the ZOH version of c2d

```matlab
s = tf('s');
sysDc = 10*(s/2 + 1)/(s/10 + 1);
T = 0.025;
sysDd = c2d(sysDc, T, 'zoh')
```

which produces

\[D_{d}(z) = \frac{\left( 50 - 47.79z^{- 1} \right)}{1 - 0.7788z^{- 1}} \]

We can then write the difference equation by inspecting Eq. (8.24) to get

\[u(k) = 0.7788u(k - 1) + 50e(k) - 47.79e(k - 1), \]

or,

\[u(k) = 0.7788u(k - 1) + 50\lbrack e(k) - 0.9558e(k - 1)\rbrack \]
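The coefficients in Eq. (8.24) can also be confirmed by carrying out Eq. (8.23) by hand: writing \(D_{c}(s) = 50(s + 2)/(s + 10)\), expanding \(D_{c}(s)/s\) in partial fractions, and using entries 3 and 8 of Table 8.1. A minimal Python sketch of the arithmetic:

```python
import math

# Sketch: ZOH equivalent of Dc(s) = 10(s/2+1)/(s/10+1) = 50(s+2)/(s+10).
# Partial fractions: Dc(s)/s = 10/s + 40/(s+10).
# Applying Eq. (8.23) with Table 8.1 entries 3 and 8:
#   Dd(z) = (1 - z^-1)[10 z/(z-1) + 40 z/(z - e^{-10T})]
#         = (50 - (10 e^{-10T} + 40) z^-1) / (1 - e^{-10T} z^-1)

T = 0.025
p = math.exp(-10 * T)          # discrete pole, e^{-0.25}
b0 = 50.0                      # leading numerator coefficient
b1 = 10.0 * p + 40.0           # numerator z^-1 coefficient
```

Evaluating gives \(p = 0.7788\) and \(b_1 = 47.79\), matching Eq. (8.24).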

Note the similarity between Eq. (8.25) and Eq. (8.20). There are very small differences in the zero and pole locations and the overall gain. The difference equations to be implemented in the digital controller are:

READ A/D: \(y,r\)

\(e = r - y\)

\(u = 0.7788u_{p} + 50\left\lbrack e - 0.9558e_{p} \right\rbrack\)

OUTPUT D/A: \(u\)

\(u_{p} = u\) (where \(u_{p}\) will be the past value for the next loop through)

\(e_{p} = e\)

go back to READ when \(T\) sec have elapsed since last READ.

Use of Simulink to compare the two implementations, in a manner similar to that used for Example 8.1, yields the step responses shown in Fig. 8.10.

Figure 8.10

Comparison between the digital (using the ZOH approximation) and the continuous controller step response with a sample rate 25 times the bandwidth: (a) position; (b) control

Note, again, that sampling at 25 times the bandwidth causes the digital implementation to match the continuous one quite well, although for this case, use of the Tustin approximation matched slightly better than the ZOH approximation. Historically, the advantage of the ZOH method was that it involved simpler algebraic manipulations; however, with the availability of control software such as Matlab, that advantage has diminished. A comparison of all the methods is contained in Section 8.3.5.

320.0.1. Matched Pole-Zero (MPZ) Method

Another digitization method, called the matched pole-zero (MPZ) method, is found by extrapolating from the relationship between the \(s\) - and \(z\)-planes stated in Eq. (8.11). If we take the \(z\)-transform of a sampled function \(x(k)\), the poles of \(X(z)\) are related to the poles of \(X(s)\) according to the relation \(z = e^{sT}\). The MPZ technique applies the relation \(z = e^{sT}\) to the poles and zeros of a transfer function, even though, strictly speaking, this relation applies neither to transfer functions nor even to the zeros of a time sequence. Like all transfer-function digitization methods, the MPZ method is an approximation; here the approximation is motivated partly by the fact that \(z = e^{sT}\) is the correct \(s\) to \(z\) transformation for the poles of the transform of a time sequence and partly by the minimal amount of algebra required to determine the digitized transfer function by hand, in the event that one wanted to check the computer calculations.

Because physical systems often have more poles than zeros, it is useful to arbitrarily add zeros at \(z = - 1\), resulting in a \(1 + z^{- 1}\) term in \(D_{d}(z)\). This causes an averaging of the current and past input values,
as in Tustin's method. We select the low-frequency gain of \(D_{d}(z)\) so it equals that of \(D_{c}(s)\).

321. MPZ Method Summary

  1. Map poles and zeros according to the relation \(z = e^{sT}\).

  2. If the numerator is of lower order than the denominator, add powers of \((z + 1)\) to the numerator until numerator and denominator are of equal order.

  3. Set the DC or low-frequency gain of \(D_{d}(z)\) equal to that of \(D_{c}(s)\).

For example, the MPZ approximation of

\[D_{c}(s) = K_{c}\frac{s + a}{s + b} \]

is

\[D_{d}(z) = K_{d}\frac{z - e^{- aT}}{z - e^{- bT}} \]

where \(K_{d}\) is found by causing the DC gain of \(D_{d}(z)\) to equal the DC gain of \(D_{c}(s)\) using the continuous and discrete versions of the Final Value Theorem. The result is

\[K_{c}\frac{a}{b} = K_{d}\frac{1 - e^{- aT}}{1 - e^{- bT}} \]

or

\[K_{d} = K_{c}\frac{a}{b}\left( \frac{1 - e^{- bT}}{1 - e^{- aT}} \right) \]
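The pole–zero mapping and gain match above can be sketched in a few lines of code (a hedged sketch in Python rather than the Matlab mentioned earlier; the function name `mpz_lead` is illustrative, not from the text):

```python
import math

def mpz_lead(Kc, a, b, T):
    """MPZ digitization of Dc(s) = Kc*(s + a)/(s + b).

    Maps the zero at s = -a and the pole at s = -b through z = e^{sT},
    then chooses Kd so the DC gain of Dd(z) = Kd*(z - z0)/(z - p0)
    equals the DC gain Kc*a/b of Dc(s).
    """
    z0 = math.exp(-a * T)  # mapped zero
    p0 = math.exp(-b * T)  # mapped pole
    # Match DC gains: Kc*a/b = Kd*(1 - z0)/(1 - p0)
    Kd = Kc * (a / b) * (1 - p0) / (1 - z0)
    return Kd, z0, p0
```

For the lead compensation used in the example later in this section, `mpz_lead(0.81, 0.2, 2.0, 1.0)` reproduces the coefficients of its \(D_{d}(z)\) to within rounding.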

For a \(D_{c}(s)\) with a higher-order denominator, Step 2 in the method calls for adding the \((z + 1)\) term. For example,

\[D_{c}(s) = K_{c}\frac{s + a}{s(s + b)} \Rightarrow D_{d}(z) = K_{d}\frac{(z + 1)\left( z - e^{- aT} \right)}{(z - 1)\left( z - e^{- bT} \right)} \]

however, because the DC gains of these transfer functions are infinite, it is necessary to match the low frequency gains instead. This can be accomplished by deleting the pure integral terms, that is, the poles at \(s = 0\) and \(z = 1\), and proceeding as before to match the DC gains of the remaining transfer functions for the two cases. Doing this, we find that

\[K_{d} = K_{c}\frac{a}{2b}\left( \frac{1 - e^{- bT}}{1 - e^{- aT}} \right) \]

In the digitization methods described so far, the same power of \(z\) appears in the numerator and denominator of \(D_{d}(z)\). This implies that the difference equation output at time \(k\) will require a sample of the input at time \(k\). For example, the \(D_{d}(z)\) in Eq. (8.27) can be written

\[\frac{U(z)}{E(z)} = D_{d}(z) = K_{d}\frac{1 - \alpha z^{- 1}}{1 - \beta z^{- 1}} \]

where \(\alpha = e^{- aT}\) and \(\beta = e^{- bT}\). By inspection, we can see that Eq. (8.31) results in the difference equation

\[u(k) = \beta u(k - 1) + K_{d}\lbrack e(k) - \alpha e(k - 1)\rbrack \]
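This recursion is implemented by storing one past input and one past output (a minimal sketch with zero initial conditions assumed; `make_controller` is an illustrative name, not from the text):

```python
def make_controller(Kd, alpha, beta):
    """Return a stateful step function realizing
        u(k) = beta*u(k-1) + Kd*(e(k) - alpha*e(k-1)),
    the difference equation for Dd(z) = Kd*(1 - alpha/z)/(1 - beta/z).
    """
    u_prev = 0.0  # u(k-1), zero initial condition assumed
    e_prev = 0.0  # e(k-1)

    def step(e):
        nonlocal u_prev, e_prev
        u = beta * u_prev + Kd * (e - alpha * e_prev)
        u_prev, e_prev = u, e
        return u

    return step
```

Note that, as the text observes, computing `step(e)` at sample \(k\) requires the input sample \(e(k)\) at that same instant.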

As an example of the MPZ method, consider a very simplified model of the space station attitude control dynamics, with the plant transfer function

\[G(s) = \frac{1}{s^{2}}\text{.}\text{~} \]

Design a digital controller to have a closed-loop natural frequency \(\omega_{n} \cong 0.3\ \text{rad/sec}\) and a damping ratio \(\zeta = 0.7\).

Solution. The first step is to find the proper \(D_{c}(s)\) for the system defined in Fig. 8.11. After some trial and error, we find that the specifications can be met by the lead compensation

\[D_{c}(s) = 0.81\frac{s + 0.2}{s + 2}\text{.}\text{~} \]

The root locus in Fig. 8.12 verifies the appropriateness of using Eq. (8.33).

To digitize this \(D_{c}(s)\), we first need to select a sample rate. For a system with \(\omega_{n} = 0.3\ \text{rad/sec}\), the bandwidth will also be about \(0.3\ \text{rad/sec}\). To obtain a sense of the effect, let's try a sample rate slightly slower than in the previous examples, approximately 20 times \(\omega_{n}\). Thus

\[\omega_{s} = 0.3 \times 20 = 6\ \text{rad/sec} \]

A sample rate of \(6\ \text{rad/sec}\) is about \(1\ \text{Hz}\); therefore, the sample period should be \(T = 1\ \text{sec}\). The MPZ digitization of Eq. (8.33), given by Eqs. (8.27) and (8.28), yields

\[\begin{matrix} D_{d}(z) & \ = 0.389\frac{z - 0.82}{z - 0.135} \\ & \ = \frac{0.389 - 0.319z^{- 1}}{1 - 0.135z^{- 1}} \end{matrix}\]
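The coefficients above are easy to check numerically (Python as a stand-in for the Matlab computation mentioned earlier in the chapter; exact exponentials give 0.386 and 0.316, and the text's 0.389 and 0.319 follow from using the rounded values 0.82 and 0.135 in the gain match):

```python
import math

# MPZ digitization of Dc(s) = 0.81*(s + 0.2)/(s + 2) with T = 1 sec
T = 1.0
z0 = math.exp(-0.2 * T)  # zero of Dd(z): e^{-0.2}, approx. 0.82
p0 = math.exp(-2.0 * T)  # pole of Dd(z): e^{-2},  approx. 0.135
# Gain match from Eq. (8.28): Kd = Kc*(a/b)*(1 - e^{-bT})/(1 - e^{-aT})
Kd = 0.81 * (0.2 / 2.0) * (1 - p0) / (1 - z0)
b1 = Kd * z0             # coefficient of z^{-1} in the numerator
print(round(Kd, 3), round(b1, 3), round(p0, 3))
```

Running this prints `0.386 0.316 0.135`, matching the displayed \(D_{d}(z)\) to within the text's rounding.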

Figure 8.11