
LECTURE NOTES
ON

ADVANCED CONTROL SYSTEMS

UNIT - I

INTRODUCTION

The classical control theory and methods (such as root locus) that we have been using in class to date are based on a simple input-output description of the plant, usually expressed as a transfer function. These methods do not use any knowledge of the interior structure of the plant, limit us to single-input single-output (SISO) systems, and, as we have seen, allow only limited control of the closed-loop behavior when feedback control is used. Modern control theory solves many of these limitations by using a much richer description of the plant dynamics. The so-called state-space description provides the dynamics as a set of coupled first-order differential equations in a set of internal variables known as state variables, together with a set of algebraic equations that combine the state variables into physical output variables.

In a state-space system, the internal state of the system is explicitly accounted for by an equation known as the state equation. The system output is given in terms of a combination of the current system state and the current system input, through the output equation. These two equations form a system of equations known collectively as the state-space equations. The state space is the vector space that consists of all the possible internal states of the system. For a system to be modeled using the state-space method, the system must be "lumped": we must be able to find a finite-dimensional state vector which fully characterizes all internal states of the system. This text mostly considers linear state-space systems, where the state and output equations satisfy the superposition principle and the state space is linear. However, the state-space approach is equally valid for nonlinear systems, although some specific methods are not applicable to them. Central to state-space notation is the idea of a state. A state of a system is the current value of the internal elements of the system, which change separately from (but not independently of) the output of the system. In essence, the state of a system is an explicit account of the values of the internal system components. Here are some examples:

Consider an electric circuit with both an input and an output terminal. This circuit may contain any number of
inductors and capacitors. The state variables may represent the magnetic and electric fields of the inductors
and capacitors, respectively.

Consider a spring-mass-dashpot system. The state variables may represent the compression of the spring, or the velocity at the dashpot.

Consider a chemical reaction where certain reagents are poured into a mixing container, and the output is the
amount of the chemical product produced over time. The state variables may represent the amounts of un-
reacted chemicals in the container, or other properties such as the quantity of thermal energy in the container
(that can serve to facilitate the reaction).

When modeling a system using a state-space equation, we first need to define three vectors:

Input variables
A SISO (Single Input Single Output) system will only have a single input value, but a MIMO system may have multiple inputs. We need to define all the inputs to the system and arrange them into a vector.
Output variables
This is the system output value, and in the case of MIMO systems, we may have several. Output variables
should be independent of one another, and only dependent on a linear combination of the input vector and the
state vector.
State Variables
The state variables represent values from inside the system that can change over time. In an electric circuit, for instance, the node voltages or the mesh currents can be state variables. In a mechanical system, the forces applied by springs, gravity, and dashpots can be state variables.

We denote the input variables with u, the output variables with y, and the state variables with x. In essence, we have the following relationship:

y = f(x, u)

Where f(x, u) is our system. Also, the state variables can change with respect to the current state and the system input:

x' = g(x, u)

Where x' is the rate of change of the state variables. We will define f(u, x) and g(u, x) below.

The state equations and the output equations of systems can be expressed in terms of matrices A, B, C, and D. Because the form of these equations is always the same, we can use an ordered quadruplet to denote a system. We can use the shorthand (A, B, C, D) to denote a complete state-space representation. Also, because the state equation is very important for our later analysis, we can write the ordered pair (A, B) to refer to the state equation:

x' = Ax + Bu
y = Cx + Du
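
To make the quadruplet concrete, here is a minimal sketch (assuming Python with NumPy and SciPy available; the mass-spring-damper values are illustrative, not from these notes) that builds (A, B, C, D) and simulates a step response:

```python
import numpy as np
from scipy import signal

# Mass-spring-damper: m*x'' + c*x' + k*x = u (illustrative values)
m, c, k = 1.0, 0.5, 2.0

# State vector x = [position, velocity]; output y = position
A = np.array([[0.0, 1.0],
              [-k / m, -c / m]])
B = np.array([[0.0],
              [1.0 / m]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

# The ordered quadruplet (A, B, C, D) fully defines the LTI system
sys = signal.StateSpace(A, B, C, D)

# Step response of x' = Ax + Bu with u(t) = 1
t, y = signal.step(sys)
print(f"steady-state output ~ {y[-1]:.3f} (expect u/k = {1.0 / k:.3f})")
```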

Obtaining the State-Space Equations

The beauty of state equations is that they can be used to transparently describe systems that are both continuous and discrete in nature. Some texts will differentiate notation between discrete and continuous cases, but this text will not make such a distinction. Instead we will opt to use the generic coefficient matrices A, B, C and D for both continuous and discrete systems. Occasionally this book may employ the subscript C to denote a continuous-time version of a matrix, and the subscript D to denote the discrete-time version of the same matrix. Other texts may use the letters F, H, and G for continuous systems and a different set of symbols for discrete systems. However, if we keep track of our time-domain system, we don't need to worry about such notations.

IMPORTANCE
The state space model of a continuous-time dynamic system can be derived either from the system model
given in the time domain by a differential equation or from its transfer function representation. Both cases
will be considered in this section. Four state-space forms, namely the phase variable form (controller form), the observer form, the modal form, and the Jordan form, which are often used in modern control theory and
practice, are presented.

APPLICATIONS

The analysis and design of the following systems can be carried out using the state-space method:

1. linear systems

2. non-linear systems

3. time-varying systems

4. multiple-input, multiple-output (MIMO) systems

State-space methods of feedback control system design and design optimization for time-invariant and time-varying deterministic, continuous systems: pole positioning, observability, controllability, modal control, observer design, the theory of optimal processes and Pontryagin's maximum principle, the linear quadratic optimal regulator problem, Lyapunov functions and stability theorems, linear optimal open-loop control; introduction to the calculus of variations. Intended for engineers with a variety of backgrounds. Examples will be drawn from mechanical, electrical and chemical engineering applications. MATLAB is used extensively during the course for analysis, design and simulation. Topics: transfer functions and state-space representations - solution of linear differential equations, linearization - canonical systems, modes, modal signal-flow diagrams - observability and controllability, rank tests - stability - state feedback control; accommodating reference inputs - linear observer design - separation principle.
UNIT - II

INTRODUCTION

The importance of linear multivariable control systems is evidenced by the papers published in recent years. Despite the extensive literature, certain fundamental matters are not well understood. This is confirmed by numerous inaccurate stability analyses, erroneous statements about the existence of stable control, and overly severe constraints on compensator characteristics. The basic difficulty has been a failure to account properly for all dynamic modes of system response. This failure is attributable to a limitation of the transfer-function matrix: it fully describes a linear system if and only if the system is controllable and observable. The concepts of controllability and observability were introduced by Kalman and have been employed primarily in the study of optimal control. Here, the primary objective is to determine the controllability and observability of composite systems which are formed by the interconnection of several multivariable subsystems. To avoid the limitations of the transfer-function matrix, the beginning sections deal with multivariable systems as described by a set of n first-order, constant-coefficient differential equations. Later, the extension to systems described by transfer-function matrices is made. Throughout, the emphasis is on the fundamental aspects of describing multivariable control systems. Detailed design procedures are not treated.

In the context of this course, the main objective of using state-space equations to model systems is the design of suitable compensation schemes to control these systems. Typically, the control signal u(t) is a function of several measurable state variables. Thus, a state-variable controller that operates on the measurable information is developed. State-variable controller design typically comprises three steps:

1. Assume that all the state variables are measurable and use them to design a full state feedback control law. In practice, only certain states or combinations of them can be measured and provided as system outputs.

2. Construct an observer to estimate the states that are not directly sensed and available as outputs. Reduced-order observers take advantage of the fact that certain states are already available as outputs and do not need to be estimated.

3. Appropriately connect the observer to the full-state feedback control law to obtain a state-variable controller, or compensator.

Definitions and notation:

Controllability and the controllability matrix. Definition: A control system is said to be (completely) controllable if, for all initial times t0 and all initial states x(t0), there exists some input function u(t) that drives the state vector x(t) to any final state at some finite time T > t0.

CONTROLLABILITY TEST: Given a system defined by the linear state equation below, the controllability matrix Pc is defined as shown, and it can be proved that the system is controllable if and only if Pc has full rank n. For the general multiple-input (m) case, A is an n x n matrix and B is n x m. Then Pc consists of n matrix blocks [B, AB, A^2B, ..., A^(n-1)B], each with dimension n x m, stacked side by side. Thus Pc has dimension n x nm, having more columns than rows. For the single-input case, B consists of a single column, yielding a square n x n controllability matrix Pc. Therefore, a single-input linear system is controllable if and only if the associated controllability matrix Pc is nonsingular.

x' = Ax + Bu

rank[Pc] = n

|Pc| ≠ 0 (single-input case)

Controllability matrix: Pc = [B  AB  A^2B  ...  A^(n-1)B]
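
A quick numerical version of this rank test (a sketch assuming Python with NumPy; the matrices are illustrative, not from the notes):

```python
import numpy as np

def ctrb(A, B):
    """Controllability matrix Pc = [B, AB, ..., A^(n-1)B]."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])

Pc = ctrb(A, B)
print("rank(Pc) =", np.linalg.matrix_rank(Pc), "of n =", A.shape[0])
# rank(Pc) == n  =>  the pair (A, B) is controllable
```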


Observability

In the state-space description of linear time-invariant systems, the state vector x(t) is an internal quantity that is influenced by the input u(t) and affects the output y(t). In practice, the dimension of x(t) is greater than the number of input or output signals.

The question now is whether or not the initial state can be uniquely determined from measurements of the input and output signals of the linear state-space equation over a finite time interval. If so, the initial state and the input signal can be injected into the state-equation solution formula to reconstruct the entire state trajectory.

Definition: Given an LTI system described by its state equations, the state x(t0) is said to be observable if, given any input u(t), there exists a finite time T > t0 such that knowledge of the input signal u(t) and the output signal y(t) for t0 ≤ t ≤ T, together with the matrices A, B, C and D, is sufficient to determine x(t0). If every state of the system is observable, the system is said to be (completely) observable.

OBSERVABILITY TEST: Given a system defined by its linear state equation below, the observability matrix Po is defined as shown, and it can be proved that the system is observable if and only if Po has full rank n. For the general multiple-output (p) case, A is an n x n matrix and C is p x n. Then Po consists of n matrix blocks [C; CA; CA^2; ...; CA^(n-1)], each with dimension p x n, stacked one on top of another. Thus Po has dimension np x n, having more rows than columns. For the single-output case, C consists of a single row, yielding a square n x n observability matrix Po. Therefore, a single-output linear system is observable if and only if the associated observability matrix Po is nonsingular (|Po| ≠ 0).

x' = Ax + Bu
y = Cx

rank[Po] = n

Observability matrix: Po = [C  CA  CA^2  ...  CA^(n-1)]^T
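
The corresponding numerical check (same assumptions as the controllability sketch above; matrices illustrative):

```python
import numpy as np

def obsv(A, C):
    """Observability matrix Po = [C; CA; ...; CA^(n-1)]."""
    n = A.shape[0]
    blocks = [C]
    for _ in range(n - 1):
        blocks.append(blocks[-1] @ A)
    return np.vstack(blocks)

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
C = np.array([[1.0, 0.0]])

Po = obsv(A, C)
print("rank(Po) =", np.linalg.matrix_rank(Po), "of n =", A.shape[0])
# rank(Po) == n  =>  the pair (A, C) is observable
```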



IMPORTANCE

The concepts of controllability and observability play a vital role in the design of control systems in state space. They govern the existence of a complete solution to the control system design problem; the solution to this problem may not exist if the system considered is not controllable.

It is important to note that all physical systems are controllable and observable; however, the mathematical models of these systems may not possess the property of controllability or observability. It is therefore necessary to know the conditions under which a system is controllable and observable, so that the designer can seek another state model which is controllable and observable.

The controllability test is necessary to determine the usefulness of a state variable. If the state variables are controllable, then by controlling (i.e., varying) the state variables the desired outputs of the system are achieved.

The observability test is necessary to determine whether the state variables are measurable or not. If the state variables are measurable, then the state of the system can be determined by practical measurements of the state variables.

APPLICATIONS

Controllability is an important property of a control system, and the controllability property plays a crucial
role in many control problems, such as stabilization of unstable systems by feedback, or optimal control.

Controllability and observability are dual aspects of the same problem.

Roughly, the concept of controllability denotes the ability to move a system around in its entire configuration
space using only certain admissible manipulations. The exact definition varies slightly within the framework
or the type of models applied.

UNIT - III

INTRODUCTION
A control system which contains at least one nonlinear element is called a nonlinear control system. The nonlinear factor discussed in this chapter refers to an element whose static characteristic between input and output does not satisfy a linear relationship. The subject of nonlinear control systems includes all mathematical relationships except the linear one. Thus there is no unanimous and universal design method for nonlinear systems. In this chapter, we mainly introduce phase plane analysis and describing function analysis.

To a certain extent, all control systems are nonlinear. For example, a transistor amplifier will be in the saturation region if the input exceeds the operating range. An electric motor always has a friction moment and a load moment on the output shaft, so it has a dead zone, which means the motor won't work until the input exceeds a trigger voltage; and when the output voltage reaches the saturation point, the motor will go into saturation due to the nonlinearity of the magnetic materials, which limits the maximum speed of revolution. There always exist gaps (backlash) in transmission mechanisms due to errors in manufacturing and assembly, and a switch or a relay may lead to a voltage jump. In practical control systems, nonlinear factors exist widely. If the operating range of a control system is small, and the involved nonlinearities can be ignored under some conditions, then the nonlinear system can be approximated by a linear system.

If the system has an essentially nonlinear component, or if the input signal is so strong that some components exceed their linear operating range, then inaccurate or false results may be obtained if we still analyze the system by linear methods. Nonlinear control systems are those control systems where nonlinearity plays a significant role, either in the controlled process (plant) or in the controller itself. Nonlinear plants arise naturally in numerous engineering and natural systems, including mechanical and biological systems, aerospace and automotive control, industrial process control, and many others. Nonlinear control theory is concerned with the analysis and design of nonlinear control systems. It is closely related to nonlinear systems theory in general, which provides its basic analysis tools. The superposition theorem is not valid in nonlinear systems, so the analysis and design methods for linear systems discussed in the previous chapters are not applicable; we must search for new methods for the analysis of nonlinear systems.

The common nonlinearities in control systems. In control systems there are various kinds of nonlinearities. The typical nonlinearities are presented as follows:

I. Saturation nonlinearity

II. Dead zone nonlinearity

III. Relay nonlinearity

IV. Backlash nonlinearity


IMPORTANCE

In practical control systems, nonlinear factors exist widely. If the operating range of a control system is small, and the involved nonlinearities can be ignored under some conditions, then the nonlinear system can be approximated by a linearized system. If the system has an essentially nonlinear component, or if the input signal is so strong that some components exceed their linear operating range, then inaccurate or false results may be obtained if we still analyze the system by linear methods.

The analysis and design methods for nonlinear systems:

Phase plane method

Describing function method

Computer-based and intelligent methods


APPLICATIONS
Optimal Control: Here the control objective is to minimize a pre-determined cost function. The basic solution tools are dynamic programming and variational methods (the calculus of variations and Pontryagin's maximum principle). The available solutions for nonlinear problems are mostly numeric.

Model Predictive Control: An approximation approach to optimal control, where the control objective is optimized on-line over a finite time horizon. Due to its computational feasibility this method has recently found wide applicability, mainly in industrial process control.

Adaptive Control: A general approach to handling uncertainty and possible time variation of the controlled system model. Here the controller parameters are tuned on-line as part of the controller operation, using various estimation and learning techniques.

Neural Network Control: A particular class of adaptive control systems, where the controller is in the form of an artificial neural network.

Fuzzy Logic Control: Here the controller implements an (often heuristic) set of logical (or discrete) rules for synthesizing the control signal based on the observed outputs. Fuzzification and defuzzification procedures are used to obtain a smooth control law from discrete rules.
UNIT - IV

INTRODUCTION

Problems of nonlinear systems are hard to solve due to the complexity and particularity of the systems. Up to now, no universal method has been devised for nonlinear system analysis. Although some powerful methods have been presented for the analysis and design of nonlinear systems, they all have limitations in their application. Among them, phase plane analysis and the describing function method are widely used in engineering.

Phase plane analysis is a graphical method for studying second-order nonlinear ordinary differential equations. The motion patterns of the system are represented by trajectories in the phase plane. Thus we can study the stability and time response of the equilibrium state graphically.

The describing function method is an approximate technique, also known as the harmonic linearization method. The method can be used to study the stability and oscillation of a class of nonlinear control systems. It can also reflect the relationship between the basic characteristics of a self-oscillation (such as magnitude and frequency) and the system coefficients (such as amplification coefficient and time constant), and give a possible prediction for the system design.
One of the powerful tools to analyze and design a nonlinear system is to solve the nonlinear differential equation by computer and then analyze the system in the form of a numerical solution. With the development of computer techniques, computer simulation has become an essential method for nonlinear system analysis.

Phase plane analysis

Phase plane analysis is one of the most important techniques for studying the behavior of nonlinear systems, since there is usually no analytical solution for a nonlinear system. Phase plane analysis is a graphical method for studying first-order and second-order linear or nonlinear systems; it was first introduced by H. Poincaré in 1885.

Phase plane: The phase plane method is applied to autonomous 2nd-order systems described as follows:

x1' = f1(x1, x2)    (1)
x2' = f2(x1, x2)

The system response x(t) = (x1(t), x2(t)) to an initial condition x0 = (x10, x20) is a mapping from R to R^2. The x1-x2 plane is called the state plane or phase plane. The locus in the x1-x2 plane of the solution x(t) for all t ≥ 0 is a curve named a trajectory or orbit that passes through the point x0. The family of phase plane trajectories corresponding to various initial conditions is called the phase portrait of the system.
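
Phase portraits are easy to generate numerically. The sketch below (assuming Python with SciPy and Matplotlib; the damped-pendulum dynamics are illustrative, not from the notes) integrates the system from several initial conditions and plots the resulting trajectories:

```python
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

# Illustrative autonomous 2nd-order system: damped pendulum-like dynamics
def f(t, x):
    x1, x2 = x
    return [x2, -np.sin(x1) - 0.3 * x2]

# One trajectory per initial condition; together they form the phase portrait
for x10, x20 in [(2.0, 0.0), (-2.5, 1.0), (0.5, -2.0)]:
    sol = solve_ivp(f, (0.0, 20.0), [x10, x20], max_step=0.05)
    plt.plot(sol.y[0], sol.y[1])

plt.xlabel("x1")
plt.ylabel("x2")
plt.title("Phase portrait: trajectories from several initial conditions")
plt.show()
```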

Phase plane trajectories can be constructed by several methods, such as the isoclines method and the delta method.

Isoclines method

An isocline is a curve through points at which the parent function's slope will always be the same, regardless of initial conditions. The word comes from the Greek words isos, meaning "same," and klisis, meaning "slope."

It is often used as a graphical method of solving ordinary differential equations. In an equation of the form y' = f(x, y), the isoclines are lines in the (x, y) plane obtained by setting f(x, y) equal to a constant. This gives a series of lines (for different constants) along which the solution curves have the same gradient. By calculating this gradient for each isocline, the slope field can be visualized, making it relatively easy to sketch approximate solution curves, as in Fig. 1.

In population dynamics, an isocline refers to the set of population sizes at which the rate of change, or partial derivative, for one population in a pair of interacting populations is zero.

Fig. 1: Isoclines (blue), slope field (black), and some solution curves (red) of y' = xy.
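
For the example of Fig. 1, the isoclines can be computed directly: setting f(x, y) = xy = c gives the hyperbolas y = c/x. A plotting sketch (assuming Python with NumPy and Matplotlib; constants chosen for illustration):

```python
import numpy as np
import matplotlib.pyplot as plt

# Isoclines of y' = f(x, y) = x*y: the curve x*y = c has constant slope c
x = np.linspace(-3.0, 3.0, 400)
for c in [-2.0, -1.0, 1.0, 2.0]:
    plt.plot(x, c / x, "b", lw=0.8)  # isocline x*y = c

# Slope field: short arrows with slope x*y on a coarse grid
X, Y = np.meshgrid(np.linspace(-3, 3, 15), np.linspace(-3, 3, 15))
plt.quiver(X, Y, np.ones_like(X), X * Y, angles="xy", color="k", width=0.002)

plt.ylim(-3, 3)
plt.title("Isoclines and slope field of y' = x*y")
plt.show()
```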

IMPORTANCE

Since the system is second-order, the solution trajectories can be represented by curves in the plane, which provides easy visualization of the system's qualitative behavior. Without solving the nonlinear equations analytically, one can study the behavior of the nonlinear system from various initial conditions. The method is not restricted to small or smooth nonlinearities and applies equally well to strong and hard nonlinearities. There are many practical systems which can be approximated by second-order systems, to which phase plane analysis can be applied.

The phase plane method is only used for analyzing or designing 1st-order or 2nd-order nonlinear systems. Analyzing nonlinear systems by the phase plane method is more comprehensive compared with the describing function method. The phase plane method is also used to analyze the stability of some intelligent control systems, such as fuzzy control systems.

The procedure is: provide motion trajectories corresponding to various initial conditions, then examine the qualitative features of the trajectories, and finally obtain information regarding the stability and other motion patterns of the system.

APPLICATIONS

There are many practical systems which can be approximated by second-order systems, to which phase plane analysis applies. The phase plane method is also used to analyze the stability of some intelligent control systems, such as fuzzy control systems, Liénard systems related to positive solutions of Schrödinger equations, and autonomous systems of ordinary differential equations (ODEs) in one or two dimensions. Phase plane analysis is one of the most important techniques for studying the behavior of dynamic systems, especially in the nonlinear case, where general methods for computing analytical solutions do not exist.
UNIT - V

INTRODUCTION
Lyapunov's second (or direct) method provides tools for studying (asymptotic) stability properties of an equilibrium point of a dynamical system (or system of differential equations). The intuitive picture is that of a scalar output function, often thought of as a generalized energy, that is bounded below and decreasing along solutions. If this function has only a single local minimum, and it is strictly decreasing along all non-equilibrium solutions, then one expects that all solutions tend to the equilibrium where the output function has its minimum. This is indeed correct. In the sequel we state and prove theorems, including some that relax the requirement of strictly or globally decreasing, and also discuss converse theorems that guarantee the existence of such functions.

Much of the power of the method comes from its simplicity: one does not need to know any solutions. Knowing only the differential (or difference) equation, one can easily establish whether such an output function is decreasing along solutions. However, while it is easy to see that for every asymptotically stable system there exist many, even smooth, such Lyapunov functions, in many cases it is almost impossible to get one's hands on one. They are easy to construct for, e.g., linear systems, and many strategies are available for special classes, but in general it is a true art to come up with explicit formulas for good candidate Lyapunov functions.

Consider a dynamical system which satisfies

x' = f(x, t),  x(t0) = x0    (4.31)

Stability in the sense of Lyapunov

The equilibrium point x = 0 of (4.31) is stable (in the sense of Lyapunov) at t = t0 if for any ε > 0 there exists a δ(t0, ε) > 0 such that

||x(t0)|| < δ  ⟹  ||x(t)|| < ε,  for all t ≥ t0.

In particular, stability does not require that trajectories starting close to the origin tend to the origin asymptotically. Also, stability is defined at a time instant t0. Uniform stability is a concept which guarantees that the equilibrium point is not losing stability. For a uniformly stable equilibrium point x, we insist that δ in Definition 4.1 not be a function of t0, so that the bound holds for all t0. Asymptotic stability is made precise in the following definition.

Asymptotic stability

An equilibrium point x = 0 of (4.31) is asymptotically stable at t = t0 if

1. x = 0 is stable, and

2. x = 0 is locally attractive; i.e., there exists δ(t0) such that

||x(t0)|| < δ  ⟹  lim_{t→∞} x(t) = 0.    (4.33)

As in the previous definition, asymptotic stability is defined at t0. Uniform asymptotic stability requires:

1. x = 0 is uniformly stable, and


2. x = 0 is uniformly locally attractive; i.e., there exists δ independent of t0 for which equation (4.33) holds. Further, it is required that the convergence in equation (4.33) be uniform.
Finally, we say that an equilibrium point is unstable if it is not stable. This is less of a tautology than it sounds, and the reader should be sure he or she can negate the definition of stability in the sense of Lyapunov to get a definition of instability. In robotics, we are almost always interested in uniformly asymptotically stable equilibria. If we wish to move a robot to a point, we would like to actually converge to that point, not merely remain nearby. Figure 4.7 illustrates the difference between stability in the sense of Lyapunov and asymptotic stability.

The definitions above are local definitions; they describe the behavior of a system near an equilibrium point. We say an equilibrium point x is globally stable if it is stable for all initial conditions x0 ∈ R^n. Global stability is very desirable, but in many applications it can be difficult to achieve. We will concentrate on local stability theorems and indicate where it is possible to extend the results to the global case. Notions of uniformity are only important for time-varying systems. Thus, for time-invariant systems, stability implies uniform stability and asymptotic stability implies uniform asymptotic stability. It is important to note that the definitions of asymptotic stability do not quantify the rate of convergence. There is a strong form of stability which demands an exponential rate of convergence.

Exponential stability, rate of convergence

The equilibrium point x = 0 is an exponentially stable equilibrium point of (4.31) if there exist constants m, α > 0 and ε > 0 such that

||x(t)|| ≤ m e^{−α(t−t0)} ||x(t0)||    (4.34)

for all ||x(t0)|| ≤ ε and t ≥ t0. The largest constant α which may be utilized in (4.34) is called the rate of convergence.

Exponential stability is a strong form of stability; in particular, it implies uniform asymptotic stability. Exponential convergence is important in applications because it can be shown to be robust to perturbations and is essential for the consideration of more advanced control algorithms.

The direct method of Lyapunov

Lyapunov's direct method (also called the second method of Lyapunov) allows us to determine the stability of a system without explicitly integrating the differential equation (4.31). The method is a generalization of the idea that if there is some measure of energy in a system, then we can study the rate of change of the energy of the system to ascertain stability. To make this precise, we need to define exactly what one means by a measure of energy. Let B_ε be a ball of size ε around the origin, B_ε = {x ∈ R^n : ||x|| < ε}.

Locally positive definite functions (lpdf)

A continuous function V : R^n × R+ → R is a locally positive definite function if for some ε > 0 and some continuous, strictly increasing function α : R+ → R,

V(0, t) = 0  and  V(x, t) ≥ α(||x||)  for all x ∈ B_ε, t ≥ 0.

A locally positive definite function is locally like an energy function. Functions which are globally like energy functions are called positive definite functions:

Positive definite functions (pdf)

A continuous function V : R^n × R+ → R is a positive definite function if it satisfies the conditions of Definition 4.4 and, additionally, α(p) → ∞ as p → ∞.

To bound the energy function from above, we define decrescence as follows.

Decrescent functions

A continuous function V : R^n × R+ → R is decrescent if for some ε > 0 and some continuous, strictly increasing function β : R+ → R,

V(x, t) ≤ β(||x||)  for all x ∈ B_ε, t ≥ 0.

Using these definitions, the following theorem allows us to determine stability for a system by studying an appropriate energy function. Roughly, this theorem states that when V(x, t) is a locally positive definite function and V'(x, t) ≤ 0, then we can conclude stability of the equilibrium point. The time derivative of V is taken along the trajectories of the system:

V'|_{x'=f(x,t)} = ∂V/∂t + (∂V/∂x) f

Basic theorem of Lyapunov

Let V(x, t) be a non-negative function with derivative V' along the trajectories of the system.

1. If V(x, t) is locally positive definite and V'(x, t) ≤ 0 locally in x and for all t, then the origin of the system is locally stable (in the sense of Lyapunov).

2. If V(x, t) is locally positive definite and decrescent, and V'(x, t) ≤ 0 locally in x and for all t, then the origin of the system is uniformly locally stable (in the sense of Lyapunov).

3. If V(x, t) is locally positive definite and decrescent, and −V'(x, t) is locally positive definite, then the origin of the system is uniformly locally asymptotically stable.

4. If V(x, t) is positive definite and decrescent, and −V'(x, t) is positive definite, then the origin of the system is globally uniformly asymptotically stable.

Construction for a Linear System

Consider the linear system x' = Ax and take

V(x) = x^T P x,  where P is symmetric, positive definite.

Differentiating along trajectories,

V'(x) = ∇V · Ax = 2 x^T P A x = x^T (P A + A^T P) x.

Choosing Q > 0 and requiring

P A + A^T P = −Q

makes V'(x) = −x^T Q x < 0. Also an explicit representation of the solution exists:

P = ∫_0^∞ e^{A^T t} Q e^{A t} dt
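
This construction can be checked numerically. A sketch assuming Python with SciPy (A and Q are illustrative; scipy.linalg.solve_continuous_lyapunov solves a X + X a^H = q, so we pass a = A^T and q = −Q):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])  # Hurwitz: eigenvalues -1, -2
Q = np.eye(2)                  # any symmetric positive definite choice

# Solve A^T P + P A = -Q
P = solve_continuous_lyapunov(A.T, -Q)

# P should be symmetric positive definite, confirming stability
print("P =\n", P)
print("eigenvalues of P:", np.linalg.eigvalsh(P))
print("residual:", np.abs(A.T @ P + P @ A + Q).max())
```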

IMPORTANCE
Various types of stability may be discussed for the solutions of differential equations describing dynamical systems. The most important type is that concerning the stability of solutions near a point of equilibrium. This may be discussed by the theory of Lyapunov. In simple terms, if the solutions that start out near an equilibrium point x_e stay near x_e forever, then x_e is Lyapunov stable. More strongly, if x_e is Lyapunov stable and all solutions that start out near x_e converge to x_e, then x_e is asymptotically stable. The notion of exponential stability guarantees a minimal rate of decay, i.e., an estimate of how quickly the solutions converge. The idea of Lyapunov stability can be extended to infinite-dimensional manifolds, where it is known as structural stability, which concerns the behavior of different but "nearby" solutions to differential equations. Input-to-state stability (ISS) applies Lyapunov notions to systems with inputs.

APPLICATIONS

Lyapunov functions are useful in assessing the stability of systems; in particular the method can be used for exploring non-linear systems and time-varying systems, to determine both stability and asymptotic stability.

Some common applications of Lyapunov functions are in the areas of:

Assessing the importance of non-linear terms in stability and instability.
Estimating the domain of attraction of an equilibrium point.
Designing control laws that guarantee global asymptotic stability.
Rate control for communication networks. Proposed optimization framework: a larger system problem is decomposed into a user problem and a network problem. Proposed rate control algorithm: a system of differential equations in the rate/capacity allocated to each user. With the correct choice of Lyapunov function, it is shown that the resulting equilibrium point is asymptotically stable and that the Lyapunov function closely approximates the network's optimization problem.
UNIT - VI
INTRODUCTION

In control theory, a state observer is a system that provides an estimate of the internal state of a given real system, from measurements of the input and output of the real system. It is typically computer-implemented, and provides the basis of many practical applications. Knowing the system state is necessary to solve many control theory problems; for example, stabilizing a system using state feedback. In most practical cases, the physical state of the system cannot be determined by direct observation. Instead, indirect effects of the internal state are observed by way of the system outputs. A simple example is that of vehicles in a tunnel: the rates and velocities at which vehicles enter and leave the tunnel can be observed directly, but the exact state inside the tunnel can only be estimated. If a system is observable, it is possible to fully reconstruct the system state from its output measurements using a state observer.

The problem with state feedback control is that every element of the state vector is used in the feedback path and, clearly, many states in realistic systems are not easily measurable. In many cases, only a few states are readily available, for physical or economic reasons. Without the full state vector, the above development is not possible. One way around this dilemma is to use an estimate of the unmeasurable states obtained from a mathematical simulation of the system. With this approach, we need to implement a state estimation routine, or state observer, into the overall system model, being sure to account for the fact that some states are measurable and may be used to improve the computed estimate. In the following development, we assume a SISO LTI system. This means that there is a single manipulated variable and a single measurable quantity. This assumption is not necessary in general, but the equations and notation become more complicated in the general case. Thus, in the following development the only measurable quantity is the desired output, y(t), and this will be used within the state observer to help improve the state estimation process. Here we use the notation x̂ to represent the estimate of the state x at any time t. Consider the state observer pictured in Fig. 7.5 (note that the variable xc refers to x̂, etc.). This observer uses u(t) and y(t) as input quantities and it outputs an estimate of the state vector versus time. From the diagram, we have

x̂' = A x̂ + B u + L (y − C x̂)    (7.12)

where L is a vector of unknown gains that is determined based on the desired transient response characteristics for this subsystem. This quantity is known as the state observer gain matrix. For a SISO system, L is a column vector of length N.
This design problem is similar to that described above for standard state feedback control. Here the observer gain matrix L is chosen such that the eigenvalues of the state estimator are stable and fast compared to the dynamics of the closed-loop system. The eigenvalues of the state observer are given by the roots of |sI − (A − LC)| = 0.

When the state observer is incorporated within the system with state feedback control, we have the block diagram given in Fig. 7.6. For this system, the input to the plant is given by

u = −K x̂    (7.14)

Note that this expression is very similar to eqn. (7.7) for the state feedback case without a state observer. The only difference here is that x is replaced by its estimate x̂.

Now, if the linearized model of the plant and the state observer are both characterized by the same state-space matrices (A, B, C), then for the plant one has

x' = A x + B u

and substituting in the control rule from eqn. (7.14) gives the final expression for the plant dynamics

x' = A x − B K x̂    (7.16)

For the state observer, we can write a similar relationship by substituting eqn. (7.14) into eqn. (7.12), giving an equation for the observer dynamics

x̂' = A x̂ − B K x̂ + L C (x − x̂)    (7.17)

Now defining an error vector e = x − x̂, and subtracting eqn. (7.17) from eqn. (7.16), gives a mathematical representation for the error dynamics

e' = (A − L C) e    (7.18)

This expression represents an unforced stable system if the eigenvalues of the state matrix (A − LC) have negative real parts. In this case, e(t) approaches zero for large t, which implies that x̂ → x. If the dynamics of e are also quite fast compared to the dynamics of x, then x̂ becomes a good estimate of the state at any time t.

Since the time-domain behavior of the error vector is determined by the eigenvalues of (A − LC), the N elements of the observer gain matrix L can be varied to give the desired transient response time for the error dynamics. In fact, if the system is completely state observable, then the N elements of the gain matrix can be specified to give any desired location for the N eigenvalues of (A − LC).

Complete State Observability

A system is said to be completely state observable if every state x(t0) can be determined from the observation of y(t) over a finite time interval t0 ≤ t ≤ t1. To develop a test for observability, consider the SISO LTI system defined in eqn. (7.9). The time-domain solution for this system can be written as

x(t) = e^{At} x(0) + ∫_0^t e^{A(t−τ)} B u(τ) dτ    (7.19)

If we let u(t) = 0, for convenience (since for known u(t), the second term in eqn. (7.19) is known precisely), then this expression reduces to

y(t) = C e^{At} x(0)    (7.20)

Recall here that e^{At} is known and y(t) can be measured. Therefore, the statement of observability concerns the determination of x(0) from the observation of y(t) over some period of time.

Consider the following manipulations of eqn. (7.20). First, let's multiply both sides by the transpose of the known coefficient matrix (assuming real elements of the matrix), or

(C e^{At})^T y(t) = (C e^{At})^T C e^{At} x(0)

Now, rewriting the transposed matrix as

(C e^{At})^T = e^{A^T t} C^T

gives the more manageable expression

e^{A^T t} C^T y(t) = e^{A^T t} C^T C e^{At} x(0)

Integrating both sides of this relationship over the observation time gives

∫_0^{t1} e^{A^T t} C^T y(t) dt = W x(0)    (7.21)

where

W = ∫_0^{t1} e^{A^T t} C^T C e^{At} dt

Finally, solving eqn. (7.21) for x(0) gives

x(0) = W^{−1} ∫_0^{t1} e^{A^T t} C^T y(t) dt

If W is nonsingular, then x(0) can be uniquely determined from observation of y(t), and the system is said to be completely state observable.

To put the observability test into final form, we again use Sylvester's interpolation formula given in eqn. (7.10). Using this representation for the matrix exponential, the nonsingularity of W reduces to a rank condition on a constant matrix. Let's define

Q = [C; CA; CA^2; ...; CA^(N−1)]

The matrix Q is referred to as the observability matrix for a SISO system (assuming real matrices). If the rank of Q is N, then the coefficient matrix W in eqn. (7.21) will be nonsingular and the system is completely state observable (see the text by Ogata for justification of this last argument).

As a simple example to illustrate the ramifications of this result, consider a SISO system defined by a 2x2 upper-triangular state matrix

A = [[a11, a12], [0, a22]],  with a12 ≠ 0.

Let's identify two cases: one whose output yA(t) is the first state, x1(t), and another whose output is the second state, yB(t) = x2(t). For these two cases we have

Case A: C = [1 0], so Q = [[1, 0], [a11, a12]], which has rank 2.

Case B: C = [0 1], so Q = [[0, 1], [0, a22]], which has rank 1.

Notice from the state matrix A that x1 is a function of x2. Therefore, observation of the first state, y = x1, gives information about both states, and Case A is completely state observable. However, the state matrix also indicates that the second state, x2, is independent of x1. Therefore observation of y = x2 cannot give information about x1, and Case B is not completely state observable. Checking the rank of the observability matrix simply gives a formal methodology for evaluating the observability condition.

Determining the Observer Gains

The procedure for finding the elements of the observer gain matrix is based on the pole placement method discussed previously. In this case, however, one specifies the pole locations for the error dynamics of the state estimator. The selection here is somewhat arbitrary, but the overall dynamics should be relatively fast compared to the plant dynamics. If the system is completely state observable, the specification of the N eigenvalues of (A − LC) should allow a unique determination of the N elements of the observer gain matrix L.

The procedure can be summarized as follows:

1. Check that the rank of the observability matrix is N.


2. Specify the desired pole locations p1, p2, ..., pN for the error vector (the poles should be further into the left-hand side of the complex plane than the dominant poles associated with the plant dynamics).

3. With the desired poles given, develop the desired characteristic equation, (s − p1)(s − p2)...(s − pN) = 0.

4. Finally, develop the characteristic equation for the state error vector, which is given by |sI − (A − LC)| = 0, and equate the coefficients of like powers of s with those of the desired characteristic equation. This gives N equations for the N unknown elements of L.

A sample problem showing this procedure for a low-order system is given in Example 7.2. This problem is based on the same system used in Example 7.1. For the present case, the observer dynamics are chosen to be three times faster than the plant dynamics. This example gives a good illustration of the hand calculations required in the design of a state observer. The procedure is very similar to the steps required for finding the state feedback gain matrix.

As was the case for determining the state feedback gains, MATLAB also has an automated procedure for computing the elements of the observer gain matrix. In fact, MATLAB's place command is used again for this purpose. To see this, first recall that the eigenvalues of a matrix and its transpose are identical. Therefore, we have

eig(A − LC) = eig((A − LC)^T) = eig(A^T − C^T L^T)

which has the same form as the state matrix for the feedback gain design problem, (A − BK). Thus, the same function can be used to find the state feedback gains and the state observer gains. For the feedback gain problem, one passes the matrices (A, B) into the place function, and for the observer gain design problem, one passes the matrices (A^T, C^T) into place. With the notation used here, the gain matrix returned by place is really the transpose of the observer gain matrix L, and it must be transposed before use with the standard MATLAB state-space matrices.
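
The same duality works outside MATLAB. A sketch using SciPy's place_poles in place of MATLAB's place (matrices and pole choices are illustrative, not from the notes):

```python
import numpy as np
from scipy.signal import place_poles

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
C = np.array([[1.0, 0.0]])

# Observer poles chosen faster than the plant poles (-1, -2)
observer_poles = [-6.0, -7.0]

# Duality: place the poles of A^T - C^T L^T, then transpose to get L
res = place_poles(A.T, C.T, observer_poles)
L = res.gain_matrix.T

print("L =", L.ravel())
print("eig(A - L C):", np.linalg.eigvals(A - L @ C))
```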

IMPORTANCE

In certain systems the state variables may not be available for measurement and feedback. In such cases we need to estimate the unmeasurable state variables from the knowledge of input and output; hence a state observer is employed which estimates the state variables from the input and output of the system. The estimated state variables can be used for feedback to design the system by pole placement. In the conventional approach to the design of a single-input, single-output control system, a controller or compensator is designed such that the dominant closed-loop poles have a desired damping ratio and undamped natural frequency. In the compensated system the output alone is used as feedback to achieve the desired performance. In state-space design any inner parameter or variable of a system can be used for feedback. If the system variables are used for feedback, then the system can be optimized to satisfy a desired performance index.

In control system design by the pole placement or pole assignment technique, the state variables are used for feedback to achieve the desired closed-loop poles. The advantage of this approach is that the closed-loop poles may be placed at any desired locations by means of state feedback through an appropriate state feedback gain matrix K. The necessary and sufficient condition to be satisfied by the system for arbitrary pole placement is that the system be completely state controllable.

APPLICATIONS

constraint monitoring;
data logging and trending;
condition and performance monitoring;
feedback control;
fault detection in linear time-varying systems;
robust tracking control and state estimation of mechanical systems.
UNIT - VII

INTRODUCTION

There are basically two approaches to the design of control systems. In one approach we select the configuration of the overall system by introducing compensators and then choose the parameters of the compensators to meet the given specifications on performance. In the other approach, for a given plant, we find an overall system that meets the specifications and then compute the necessary compensators. Design methods based on the first approach work in the transform domain, relying heavily on Laplace transforms and z-transforms. The designer is given a set of specifications in the time domain or frequency domain; peak overshoot, settling time, gain margin, phase margin, and steady-state error are among the most commonly used.

These design specifications are selected because of their convenient graphical interpretation with respect to the root locus or frequency plots. Compensators are selected that give, as closely as possible, the desired system performance. In general, it may not be possible to satisfy all the desired specifications; then, through a trial-and-error procedure, an acceptable system performance is achieved. There are generally many designs that can give this acceptable performance, i.e., the solution is not unique. This trial-and-error design procedure works satisfactorily for single-input, single-output systems. The gap between the classical procedure and its application to multi-input, multi-output systems has been discussed recently.

In contrast, the classical design specifications can be recast as an optimization problem, which involves minimizing a function (called the objective function) of several variables, possibly subject to restrictions on the values of the variables defined by a set of constraint functions. Most functions in the Library are concerned with function minimization only, since the problem of maximizing a given objective function F(x) is equivalent to minimizing −F(x). Some functions allow you to specify whether you are solving a minimization or maximization problem, carrying out the required transformation of the objective function in the latter case. In general, functions in this chapter find a local minimum of a function f, that is, a point x* such that f(x) ≥ f(x*) for all x near x*.

Types of Optimization Problems

The solution of optimization problems by a single, all-purpose method is cumbersome and inefficient. Optimization problems are therefore classified into particular categories, where each category is defined by the properties of the objective and constraint functions, as illustrated by some examples below.

Properties of Objective Function             Properties of Constraints

Nonlinear                                    Nonlinear

Sums of squares of nonlinear functions       Sparse linear

Quadratic                                    Linear

Sums of squares of linear functions          Bounds

Linear                                       None
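
As a small illustration of one category from the table, a nonlinear objective with bound constraints (a sketch assuming Python with SciPy; the Rosenbrock function and bounds are illustrative):

```python
import numpy as np
from scipy.optimize import minimize

# Nonlinear objective: Rosenbrock function (illustrative)
def f(x):
    return (1.0 - x[0])**2 + 100.0 * (x[1] - x[0]**2)**2

# Category: nonlinear objective subject to bounds on the variables
res = minimize(f, x0=[-1.0, 2.0], bounds=[(-2.0, 2.0), (-1.0, 3.0)])
print("minimizer:", res.x, "objective:", res.fun)
```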
IMPORTANCE

In some sense, all engineering design is optimization: choosing design parameters to improve some objective. Much of data analysis is also optimization: extracting some model parameters from data while minimizing some error measure (e.g., fitting). Most business decisions are optimization: varying some decision parameters to maximize profit (e.g., investment portfolios, supply chains, etc.). For instance, a specific problem category involves the minimization of a nonlinear objective function subject to bounds on the variables. In the following sections we define the particular categories of problems that can be solved by functions contained in this chapter. Not every category is given special treatment in the current version of the Library; however, the long-term objective is to provide a comprehensive set of functions to solve problems in all such categories.

This chapter covers minimization of a functional of a single function, constrained minimization, the minimum principle, control-variable inequality constraints, control- and state-variable inequality constraints, and the Euler-Lagrange equation.

APPLICATIONS

Application of Lagrange multipliers to compute equilibrium reaction forces; minimum surface area of revolution; free boundary conditions. Control theory emerged to address extremum problems in science, engineering, and decision-making. These problems specialize the available degrees of freedom by the so-called controls; these are constrained functions that can be optimally chosen. Optimal design theory addresses the space-dependent analog of control problems. Minimax problems address optimization in a conflict situation or in an undetermined environment. A special branch of the theory uses minimization principles to create effective methods, such as the finite element method, for computing solutions.
UNIT - VIII

INTRODUCTION

Optimal control deals with the problem of finding a control law for a given system such that a certain optimality criterion is achieved. A control problem includes a cost function that is a function of the state and control variables. An optimal control is a set of differential equations describing the paths of the control variables that minimize the cost function. The optimal control can be derived using Pontryagin's maximum principle (a necessary condition, also known as Pontryagin's minimum principle or simply Pontryagin's principle), or by solving the Hamilton-Jacobi-Bellman equation (a sufficient condition).

The trial-and-error uncertainties are eliminated in the parameter optimization method. The point of departure in the parameter optimization procedure is that the performance specifications consist of a single performance index. The integral-square-error performance index is very common, but other performance indices can be used as well. For a fixed system configuration, the parameters that minimize the performance index are selected.

We begin with a simple example. Consider a car traveling on a straight line through a hilly road. The question is, how should the driver press the accelerator pedal in order to minimize the total traveling time? Clearly in this example, the term control law refers specifically to the way in which the driver presses the accelerator and shifts the gears. The system consists of both the car and the road, and the optimality criterion is the minimization of the total traveling time. Control problems usually include ancillary constraints. For example, the amount of available fuel might be limited, the accelerator pedal cannot be pushed through the floor of the car, there are speed limits, etc.

A proper cost functional is a mathematical expression giving the traveling time as a function of the speed, geometrical considerations, and initial conditions of the system. It is often the case that the constraints are interchangeable with the cost functional. Another optimal control problem is to find the way to drive the car so as to minimize its fuel consumption, given that it must complete a given course in a time not exceeding some amount. Yet another control problem is to minimize the total monetary cost of completing the trip, given assumed monetary prices for time and fuel.

A more abstract framework goes as follows. Minimize the continuous-time cost functional

J = Φ(x(tf), tf) + ∫_{t0}^{tf} L(x(t), u(t), t) dt

subject to the first-order dynamic constraints

x'(t) = f(x(t), u(t), t),

the algebraic path constraints

g(x(t), u(t), t) ≤ 0,

and the boundary conditions

x(t0) = x0,

where x is the state, u is the control, t is the independent variable (generally speaking, time), t0 is the initial time, and tf is the terminal time. The terms Φ and L are called the endpoint cost and the Lagrangian, respectively. Furthermore, it is noted that the path constraints are in general inequality constraints and thus may not be active (i.e., equal to zero) at the optimal solution. It is also noted that the optimal control problem as stated above may have multiple solutions (i.e., the solution may not be unique). Thus, it is most often the case that any solution to the optimal control problem is locally minimizing.

This chapter includes the minimum energy, minimum time, and minimum fuel problems; the state regulator, output regulator, and tracking problems; and continuous-time linear regulator problems. In control theory, the minimum energy control is the control that will bring a linear time-invariant system to a desired state with a minimum expenditure of energy. Let the linear time-invariant (LTI) system be

x'(t) = A x(t) + B u(t)

with initial state x(t0) = x0. One seeks an input u(t) so that the system will be in the state x1 at time t1, and for any other input ū(t), which also drives the system from x0 to x1 at time t1, the energy expenditure would be larger:

∫_{t0}^{t1} ū(t)^T ū(t) dt ≥ ∫_{t0}^{t1} u(t)^T u(t) dt.

To choose this input, first compute the controllability Gramian

Wc(t1) = ∫_{t0}^{t1} e^{A(t1−τ)} B B^T e^{A^T(t1−τ)} dτ.

Assuming Wc(t1) is nonsingular (which holds if and only if the system is controllable), the minimum energy control is then

u(t) = B^T e^{A^T(t1−t)} Wc(t1)^{−1} [x1 − e^{A(t1−t0)} x0].

Substituting this input into the solution formula for x(t) confirms that it drives the system from x0 to x1 at time t1.
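
A numerical sketch of this construction (assuming Python with SciPy; the double-integrator plant and the transfer task are illustrative):

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import quad_vec

# Illustrative LTI system and transfer task x0 -> x1 over [0, t1]
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])  # double integrator
B = np.array([[0.0],
              [1.0]])
x0 = np.array([0.0, 0.0])
x1 = np.array([1.0, 0.0])
t1 = 1.0

# Controllability Gramian Wc = integral of e^{A(t1-tau)} B B^T e^{A^T(t1-tau)}
Wc, _ = quad_vec(
    lambda tau: expm(A * (t1 - tau)) @ B @ B.T @ expm(A.T * (t1 - tau)),
    0.0, t1)

# Minimum-energy input u(t) = B^T e^{A^T(t1-t)} Wc^{-1} (x1 - e^{A t1} x0)
eta = np.linalg.solve(Wc, x1 - expm(A * t1) @ x0)
u = lambda t: (B.T @ expm(A.T * (t1 - t)) @ eta).item()

print("Wc =\n", Wc)
print("u(0) =", u(0.0), " u(t1) =", u(t1))  # analytically 6 - 12t here
```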

IMPORTANCE

Optimal control deals with the problem of finding a control law for a given system such that a certain optimality criterion is achieved. A control problem includes a cost function that is a function of the state and control variables. An optimal control is a set of differential equations describing the paths of the control variables that minimize the cost function. The optimal control can be derived using Pontryagin's maximum principle (a necessary condition, also known as Pontryagin's minimum principle or simply Pontryagin's principle [2]), or by solving the Hamilton-Jacobi-Bellman equation (a sufficient condition).

APPLICATIONS

Optimal oil extraction and exploration with state delay

Biomedical applications: optimal protocols in cancer treatment and immunology

Vintage control problems

Delayed control problems with free final time

Optimal control problems with state-dependent delays

Verifiable sufficient conditions

Aerospace orbit transfer problems
