There are two types of stability: inherent and synthetic stability.
Inherent stability is a property of the basic airframe with either fixed or free controls, i.e. controls-fixed stability or controls-free stability.
Synthetic stability is provided by an automatic flight control system (AFCS) and vanishes if the control system fails. Such automatic control systems are capable of stabilizing an inherently unstable airplane, or of simply improving its stability with a stability augmentation system (SAS). The question of how much to rely on such systems to make an airplane flyable entails a trade-off among weight, cost, reliability and safety. If the SAS works most of the time, and if the airplane can be controlled and landed after the SAS has failed, even with diminished handling qualities, then poor inherent stability may be acceptable. Current aviation technology shows an increasing acceptance of SAS in all cases of airplane stability.
If an aeroplane is to remain in steady uniform flight, the resultant force as well as the resultant moment about the centre of gravity (cg) must both be equal to zero. Any airplane satisfying this requirement is said to be in a state of equilibrium. Static stability concerns the initial tendency of the airplane after a disturbance, and can be visualized by a ball (or any object) resting on a surface.
Statically stable: if the forces and moments on the body caused by a disturbance tend initially to return the body towards its equilibrium position, the body (the ball) is statically stable.
Statically unstable: if the forces and moments are such that the body continues to move away from its equilibrium position after being disturbed, it is statically unstable.
Neutrally stable: if the body, once disturbed, experiences no restoring moments and simply remains in equilibrium in its new position, i.e. if the ball is displaced from its initial equilibrium point to another position and remains there, it is neutrally stable.
STATE TRANSITION BY CLASSICAL TECHNIQUE
The state transition matrix is

φ(t) = e^(At)     (1)

where e^(At) is the matrix exponential. Substituting the above equation into the homogeneous state equation ẋ(t) = Ax(t) shows that it provides a solution, i.e.

x(t) = φ(t)x(0) = e^(At)x(0)     (2)
ẋ(t) = A e^(At)x(0)     (3)
      = A x(t)     (4)

Since the state transition matrix satisfies the homogeneous state equation, it represents the free response of the system. (Read about the properties of the state transition matrix.)
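As a quick numerical sketch (the 2×2 matrix A below is a hypothetical example, not one from these notes), the state transition matrix e^(At) can be computed with SciPy's matrix exponential, and two of its properties, φ(0) = I and φ(t₁ + t₂) = φ(t₁)φ(t₂), can be checked directly:

```python
import numpy as np
from scipy.linalg import expm  # matrix exponential

# Hypothetical stable system x' = A x, eigenvalues -1 and -2
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
x0 = np.array([1.0, 0.0])

def phi(t):
    """State transition matrix phi(t) = e^(A t)."""
    return expm(A * t)

# Free response of the system: x(t) = phi(t) x(0)
x_at_1 = phi(1.0) @ x0
```

For this particular A, the closed-form free response is x₁(t) = 2e^(−t) − e^(−2t), which the numerical result reproduces.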
How to derive the transfer function of a single-input–single-output system from the state-space equations. Consider the system whose transfer function is given by

Y(s)/U(s) = G(s)     (5)

This system may be represented in state space by

ẋ = Ax + Bu     (6)
y = Cx + Du     (7)

where x is the state vector, u is the input and y is the output. The Laplace transforms of equations (6) and (7) are given by

sX(s) − x(0) = AX(s) + BU(s)     (8)
Y(s) = CX(s) + DU(s)     (9)

Since the transfer function was previously defined as the ratio of the Laplace transform of the output to the Laplace transform of the input when the initial conditions are zero, we assume that x(0) in equation (8) is zero. Then we have

(sI − A)X(s) = BU(s)     (10)
X(s) = (sI − A)⁻¹BU(s)     (11)
Y(s) = [C(sI − A)⁻¹B + D]U(s)     (12)

so the transfer function is G(s) = C(sI − A)⁻¹B + D.
MATHEMATICAL MODELING
Steps in modeling
Example
Consider a mass M, on a frictionless surface connected to a rigid wall by a spring with stiffness
K. Obtain the mathematical model for the system.
[Figure: a mass M (kg) on a frictionless surface, attached to a rigid wall by a spring of stiffness K (N/m); the displacement of the mass is y(t).]
Solution:
1. Choose a sign convention for the position variable y(t). NB: the sign conventions for velocity and acceleration are the same as that for displacement.
2. Use fundamental physical principles to model the system (Newton's laws).
3. Draw free-body diagrams of the system. In this example, the spring force is the only force acting on the mass.

[Free-body diagram: the spring force Ky(t) (N) acts on the mass M (kg), opposing the displacement y(t).]

4. The spring exerts a force proportional to, and in opposition to, the movement of the mass.
5. From Newton's second law,

M ÿ(t) = −K y(t),  i.e.  M ÿ(t) + K y(t) = 0

6. System characteristics: the system has no input, since no external force acts on the mass; in the differential equation this is indicated by the zero on the right-hand side. The system also has no damping.
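The model M ÿ + K y = 0 can be checked numerically. The sketch below (with hypothetical values for M, K and the initial displacement) integrates the equation with semi-implicit Euler and compares the result with the analytic solution y(t) = y₀ cos(ωₙ t), where ωₙ = √(K/M) is the natural frequency:

```python
import numpy as np

def simulate_mass_spring(M, K, y0, v0, dt=1e-4, t_end=2.0):
    """Integrate M*y'' + K*y = 0 with semi-implicit Euler."""
    n = int(t_end / dt)
    y, v = y0, v0
    ys = np.empty(n + 1)
    ys[0] = y0
    for i in range(1, n + 1):
        a = -(K / M) * y      # acceleration from the spring force only
        v += a * dt
        y += v * dt
        ys[i] = y
    return ys

# Hypothetical values: M = 2 kg, K = 8 N/m, so wn = 2 rad/s
M, K, y0 = 2.0, 8.0, 0.1
ys = simulate_mass_spring(M, K, y0, 0.0)
t = np.linspace(0.0, 2.0, len(ys))
analytic = y0 * np.cos(np.sqrt(K / M) * t)
```

The undamped response neither grows nor decays: the mass oscillates at ωₙ indefinitely, which is what the zero right-hand side and absence of damping imply.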
A modern complex system may have many inputs and many outputs, and these may be interrelated in a complicated manner. To analyze such a system, it is important to reduce the complexity of the mathematical expressions, as well as to resort to computers for most of the tedious computations necessary in the analysis. The state-space approach to system analysis is best suited to this purpose.
While conventional control theory is based on the input–output relationship, or transfer function, modern control theory is based on the description of the system equations in terms of n first-order differential equations, which may be combined into a first-order vector-matrix differential equation. The use of vector-matrix notation greatly simplifies the mathematical representation of systems of equations: an increase in the number of state variables, inputs, or outputs does not increase the complexity of the equations.
The state-space approach to control system analysis and design is a time-domain method.
Definitions:
State-space equations: three types of variables are involved in the modeling of a dynamic system: input variables, output variables and state variables.
State vector: if n variables are needed to completely describe the behaviour of a given system, these n variables can be regarded as the n components of a vector x; such a vector is called a state vector. It determines uniquely the system state x(t) for any time t ≥ t₀, once the state at t = t₀ is given and the input u(t) for t ≥ t₀ is specified.
State space: the n-dimensional space whose coordinate axes consist of the x₁-axis, x₂-axis, …, xₙ-axis is called the state space. Any state can be represented by a point in the state space.
Once a physical system has been reduced to a set of differential equations, the equations can be rewritten in a convenient matrix form:

ẋ = Ax + Bu

The output of the system is expressed in terms of the state and the control input as follows:

y = Cx + Du

where
A is the state matrix
B is the input matrix
C is the output matrix
D is the direct transmission matrix
x is the state vector
u is the input vector
y is the output vector
Example 1
For example, suppose that the physical system being modeled can be described by an nth-order differential equation:

dⁿc/dtⁿ + a₁ d^(n−1)c/dt^(n−1) + ⋯ + a_(n−1) dc/dt + a_n c = r(t)     (1)

The variables c(t) and r(t) are the output and input variables, respectively. The above differential equation can be reduced to a set of first-order differential equations by defining the state variables as follows:

x₁ = c,  x₂ = dc/dt,  …,  x_n = d^(n−1)c/dt^(n−1)     (2)

so that

ẋ₁ = x₂
ẋ₂ = x₃
⋮
ẋ_n = −a_n x₁ − a_(n−1) x₂ − ⋯ − a₁ x_n + r(t)

The last equation is obtained by solving for the highest-order derivative in the original differential equation. Rewriting the equations in state-vector form, ẋ = Ax + Br(t), yields

A = [   0        1        0     ⋯     0
        0        0        1     ⋯     0
        ⋮                              ⋮
      −a_n   −a_(n−1)  −a_(n−2) ⋯   −a₁ ],    B = [0  0  ⋯  1]ᵀ

The output equation is y = Cx, where

C = [1  0  ⋯  0]
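The reduction above can be automated; the helper below is a sketch (its name and interface are assumptions, not from the notes) that builds the companion-form A, B and C for given coefficients a₁, …, aₙ:

```python
import numpy as np

def companion_form(a):
    """Build companion-form A, B, C for
    c^(n) + a1*c^(n-1) + ... + an*c = r, with a = [a1, ..., an]."""
    n = len(a)
    A = np.zeros((n, n))
    A[:-1, 1:] = np.eye(n - 1)         # x1' = x2, x2' = x3, ...
    A[-1, :] = -np.array(a[::-1])      # xn' = -an*x1 - ... - a1*xn + r
    B = np.zeros((n, 1))
    B[-1, 0] = 1.0                     # input enters the last state equation
    C = np.zeros((1, n))
    C[0, 0] = 1.0                      # output y = c = x1
    return A, B, C

# Hypothetical second-order example: c'' + 3c' + 2c = r
A, B, C = companion_form([3.0, 2.0])
```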
Example 2
(4)
This system is of second order, which means that it involves two integrators.
Step 1
(5)
(6)
Then we obtain
Or
(7)
(8)
(9)
(11)
Equation (10) is the state space equation and equation (11) is the output equation for the system.
Where
The state transition matrix φ(t) is defined as a matrix that satisfies the linear homogeneous state equation, i.e.

x(t) = φ(t)x(0)     (14)

and

ẋ(t) = A x(t)     (15)

Taking the Laplace transform of equation (15) yields

sX(s) − x(0) = A X(s)     (16)

or

X(s) = (sI − A)⁻¹ x(0)     (17)

The state transition matrix is obtained by the inverse Laplace transform of equation (17):

φ(t) = L⁻¹[(sI − A)⁻¹]     (18)

Once the state transition matrix has been found, the solution to the non-homogeneous equation

ẋ(t) = A x(t) + B u(t)     (19)

can be determined as follows. Taking the Laplace transform,

sX(s) − x(0) = A X(s) + B U(s)     (20)

Solving for X(s),

X(s) = (sI − A)⁻¹ x(0) + (sI − A)⁻¹ B U(s)     (21)

or, in the time domain,

x(t) = φ(t)x(0) + ∫₀ᵗ φ(t − τ) B u(τ) dτ
Find the state transition matrix of the system described by the following equations:
Where
The state transition matrix is determined by taking the inverse Laplace transform of Φ(s):

∴ φ(t) = L⁻¹[Φ(s)]
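This procedure can be sketched symbolically in SymPy (the matrix A below is an assumed example, not the system from the notes): form Φ(s) = (sI − A)⁻¹ and invert it entry by entry:

```python
import sympy as sp

t = sp.symbols('t', positive=True)
s = sp.symbols('s')

# Assumed example system matrix
A = sp.Matrix([[0, 1],
               [-2, -3]])

Phi_s = (s * sp.eye(2) - A).inv()          # Phi(s) = (sI - A)^(-1)

# phi(t) = L^(-1)[Phi(s)], taken entry by entry
phi_t = Phi_s.applyfunc(lambda e: sp.inverse_laplace_transform(e, s, t))
```

For this A, the (1,1) entry of φ(t) equals 2e^(−t) − e^(−2t), matching the partial-fraction expansion of (s + 3)/((s + 1)(s + 2)).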
- Deriving the transfer function of a single-input–single-output system from the state-space equations. Consider the system whose transfer function is given by

Y(s)/U(s) = G(s)     (20)

This system may be represented in state space by

ẋ = Ax + Bu     (21)
y = Cx + Du     (22)

where x is the state vector, u is the input and y is the output. The Laplace transforms of equations (21) and (22) are given by

sX(s) − x(0) = AX(s) + BU(s)     (23)
Y(s) = CX(s) + DU(s)     (24)

Since the transfer function was previously defined as the ratio of the Laplace transform of the output to the Laplace transform of the input when the initial conditions are zero, we assume that x(0) in equation (23) is zero. Then we have

sX(s) − AX(s) = BU(s), or (sI − A)X(s) = BU(s)     (25)

By premultiplying (sI − A)⁻¹ to both sides of this last equation we obtain

X(s) = (sI − A)⁻¹BU(s)     (26)

Substituting equation (26) into equation (24) gives

Y(s) = [C(sI − A)⁻¹B + D]U(s)     (27)
G(s) = C(sI − A)⁻¹B + D     (28)

NB: The right-hand side of equation (28) involves (sI − A)⁻¹. Hence G(s) can be written as

G(s) = Q(s)/|sI − A|

where Q(s) is a polynomial in s; thus |sI − A| = 0 gives the characteristic equation of the system.
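Numerically, G(s) = C(sI − A)⁻¹B + D can be computed with SciPy's ss2tf; the matrices below are assumed for illustration and are not taken from the notes:

```python
import numpy as np
from scipy.signal import ss2tf

# Assumed example system in companion form
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

# num/den are polynomial coefficients in descending powers of s
num, den = ss2tf(A, B, C, D)
# den is the characteristic polynomial |sI - A| = s^2 + 3s + 2,
# so G(s) = 1 / (s^2 + 3s + 2) for this system
```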
Example
Obtain the transfer function for the system from the state space equation. Where,
Since G(s) = C(sI − A)⁻¹B + D, we have
Find the transfer function of the mass/spring/damper state space model
Solution:
Assignment
A state space model for the longitudinal motion of a helicopter near hover is
Determine the transfer function matrix and the characteristic equation, and state whether the helicopter is stable.
The two concepts that play an important role in modern control theory are the concepts of
controllability and observability.
Controllability is concerned with whether the states of the dynamic system can be affected by the control input. A system is said to be controllable if there exists a control that transfers any initial state to any final state in some finite time. If one or more of the states are unaffected by the control, the system is not controllable. A mathematical definition of controllability for a linear dynamical system can be expressed as follows. If the dynamic system can be described by the state equation

ẋ = Ax + Bu

where x and u are the state and control vectors of order n and m respectively, then the necessary and sufficient condition for the system to be completely controllable is that the rank of the matrix P is equal to the number of states n. The matrix P is constructed from the A and B matrices in the following way:

P = [B  AB  A²B  ⋯  Aⁿ⁻¹B]

The rank of a matrix is the order of the largest square submatrix with a non-zero determinant. This mathematical test was developed by Kalman in the 1960s.
Solution
Matrix A = and B=
AB=
∴ P = [B  AB], which is singular.
Hence the system is not completely state controllable, since the determinant of P is zero and its inverse does not exist.
Example 2
is controllable.
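Kalman's rank test is easy to mechanize. The helper below is a sketch (its name and the example pair A, B are assumptions, not from the notes) that builds P = [B AB ⋯ Aⁿ⁻¹B] and checks its rank:

```python
import numpy as np

def controllability_matrix(A, B):
    """P = [B, AB, A^2 B, ..., A^(n-1) B];
    the system is controllable iff rank(P) = n."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])   # next block column: A (A^k B)
    return np.hstack(blocks)

# Assumed example pair (companion form, hence controllable)
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
P = controllability_matrix(A, B)
rank = np.linalg.matrix_rank(P)         # rank 2 = n, so controllable
```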
OBSERVABILITY
Observability deals with whether the states of the system can be identified from the output of the system. A system is said to be completely observable if every state can be determined from measurement of the output over a finite time interval. If one or more states cannot be identified from the output of the system, then the system is not observable. A mathematical test for the observability of an nth-order dynamic system governed by the equations

ẋ = Ax + Bu
y = Cx

is given as follows: the necessary and sufficient condition for the system to be completely observable is that the observability matrix, defined as

Q = [ C
      CA
      CA²
      ⋮
      CAⁿ⁻¹ ],

is of rank n.
Try controllability.
Solution
Matrix A= B= C=
=
Observability Matrix
The rank of the observability matrix is 2; hence the system is observable. (Recall that the rank of a matrix A is the order of the largest square array contained in A which has a non-zero determinant.)
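Analogously to the controllability test, the observability matrix can be built and its rank checked; the helper and example matrices below are assumptions for illustration, not the system from the notes:

```python
import numpy as np

def observability_matrix(A, C):
    """Stack [C; CA; CA^2; ...; CA^(n-1)];
    the system is observable iff this matrix has rank n."""
    n = A.shape[0]
    rows = [C]
    for _ in range(n - 1):
        rows.append(rows[-1] @ A)    # next block row: (C A^k) A
    return np.vstack(rows)

# Assumed example (output measures the first state only)
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
C = np.array([[1.0, 0.0]])
Q = observability_matrix(A, C)
rank = np.linalg.matrix_rank(Q)      # rank 2 = n, so observable
```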