
Control of Nonlinear Dynamic Systems: Theory and Applications

J. K. Hedrick and A. Girard © 2005

8 Feedback Linearization

Key points
• Feedback linearization = ways of transforming original system models into
equivalent models of a simpler form.
• Completely different from conventional (Jacobian) linearization, because
feedback linearization is achieved by exact state transformation and feedback,
rather than by linear approximations of the dynamics.
• Input-Output, Input-State
• Internal dynamics, zero dynamics, linearized zero dynamics
• Jacobi’s identity, the theorem of Frobenius
• MIMO feedback linearization is also possible.

Feedback linearization is an approach to nonlinear control design that has attracted a great
deal of research in recent years. The central idea is to algebraically transform nonlinear
system dynamics into (fully or partly) linear ones, so that linear control techniques can be
applied.

This differs entirely from conventional (Jacobian) linearization, because feedback
linearization is achieved by exact state transformation and feedback, rather than by linear
approximations of the dynamics.


The basic idea of simplifying the form of a system by choosing a different state
representation is not completely unfamiliar; rather it is similar to the choice of reference
frames or coordinate systems in mechanics.

Feedback linearization = ways of transforming original system models into equivalent
models of a simpler form.

Applications: helicopters, high-performance aircraft, industrial robots, biomedical
devices, vehicle control.
Warning: there are a number of shortcomings and limitations associated with the
feedback linearization approach. These problems are very much topics of current
research.

References: Sastry, Slotine and Li, Isidori, Nijmeijer and van der Schaft

Terminology

Feedback Linearization

A “catch-all” term which refers to control techniques where the input is used to linearize
all or part of the system’s differential equations.

Input/Output Linearization

A control technique where the output y of the dynamic system is differentiated until the
physical input u appears, in the r-th derivative of y. Then u is chosen to yield a transfer
function from the "synthetic input" v to the output y which is:

Y(s)/V(s) = 1/s^r

If r, the relative degree, is less than n, the order of the system, then there will be internal
dynamics. If r = n, then I/O and I/S linearizations are the same.

Input/State Linearization

A control technique where some new output ynew = hnew(x) is chosen so that with respect
to ynew, the relative degree of the system is n. Then the design procedure using this new
output ynew is the same as for I/O linearization.


Input/Output Linearization | I/O and I/S coincide                        | Input/State Linearization
---------------------------|---------------------------------------------|--------------------------
Natural output, r < n      | the natural output y has relative degree n  | choose a new output ynew with relative degree n

SISO Systems

Consider a SISO nonlinear system:

ẋ = f(x) + g(x)u
y = h(x)

Here, u and y are scalars.

ẏ = (∂h/∂x)·ẋ = L_f h + L_g(h)·u = L_f^1 h

If L_g h = 0, we keep taking derivatives of y until the input u appears. If u never
appears, then u does not affect the output! (Big difficulties ahead.)

ÿ = L_f^2 h + L_g(L_f^1 h)·u = L_f^2 h.  If L_g(L_f^1 h) = 0, we keep going.

We end up with the following set of equalities:

y = h(x) = L_f^0 h
ẏ = L_f^1 h + L_g(h)·u = L_f^1 h           with L_g h = 0
ÿ = L_f^2 h + L_g(L_f^1 h)·u = L_f^2 h     with L_g(L_f^1 h) = 0
...
y^(r) = L_f^r h + L_g(L_f^(r-1) h)·u = v   with L_g(L_f^(r-1) h) ≠ 0

The letter r designates the relative degree of y = h(x) iff:

L_g(L_f^(r-1)(h)) ≠ 0


That is, r is the smallest integer for which the coefficient of u is non-zero over the space
where we want to control the system.

Let’s set:

α(x) = L_f^r(h)
β(x) = L_g(L_f^(r-1)(h))

Then y^(r) = L_f^r h + L_g(L_f^(r-1) h)·u = α(x) + β(x)·u ≡ v, where β(x) ≠ 0.

v is called the synthetic input or synthetic control: y^(r) = v.

[Block diagram: v → 1/s → 1/s → ... → 1/s → y, a chain of r integrators]

We have an r-integrator linear system, of the form: Y(s)/V(s) = 1/s^r.

We can now design a controller for this system, using any linear controller design
method. We have v = α + βu . The controller that is implemented is obtained through:

u = (1/β(x))·[−α(x) + v]

Any linear method can be used to design v. For example,

v = −Σ_{k=0}^{r−1} c_k L_f^k(h) = −c0·y − c1·ẏ − c2·ÿ − ...

⇒ y^(r) + c_{r−1}·y^(r−1) + ... + c0·y = 0
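The whole procedure up to this point (differentiate, find r, form α and β, close the loop) can be sketched symbolically. The following is a minimal illustration assuming sympy is available; the pendulum-like plant (ẋ1 = x2, ẋ2 = −sin x1 + u, y = x1) is my own example, not one from these notes:

```python
import sympy as sp

# Hypothetical SISO plant: x1' = x2, x2' = -sin(x1) + u, y = x1.
x1, x2, v = sp.symbols('x1 x2 v')
x = sp.Matrix([x1, x2])
f = sp.Matrix([x2, -sp.sin(x1)])
g = sp.Matrix([0, 1])
h = x1

def lie(vec, phi):
    """Lie derivative L_vec(phi) = (d phi / dx) . vec."""
    return sp.simplify((sp.Matrix([phi]).jacobian(x) * vec)[0])

# Differentiate y until the input coefficient Lg(Lf^{r-1} h) is nonzero.
phi, r = h, 1
while lie(g, phi) == 0:
    phi, r = lie(f, phi), r + 1

alpha, beta = lie(f, phi), lie(g, phi)   # y^(r) = alpha + beta*u
u = sp.simplify((-alpha + v) / beta)     # feedback-linearizing control
```

For this plant the loop gives r = 2, α = −sin x1, β = 1, so u = sin x1 + v, and any linear law for v then shapes the double integrator.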

Problems with this approach:

1. It requires a perfect model, with perfect derivatives (one can anticipate robustness
problems).
2. If the goal is y → y_d(t), then v = −c0·(y − y_d) − ... − c_{r−1}·(y^(r−1) − y_d^(r−1)).
If x ∈ ℝ^20 and r = 2, there are 18 states for which we don't know what is
happening. That is, if r < n, we have internal dynamics.

Note: There is an ad-hoc approach to the robustness problem, by setting:


v = −Σ_{k=0}^{r−1} c_k L_f^k(h) + K_c·[(y_d − y) + (1/τ)∫_0^t (y_d − y)dτ]

Here the first term in the expression is the standard feedback linearization term, and the
second term is tuned online for robustness.

Internal Dynamics

Assume r<n ⇒ there are some internal dynamics

z ≡ [z1, z2, ..., zr]^T where

z1 ≡ y = L_f^0 h
z2 = ẏ = L_f^1 h
...
zr = y^(r−1) = L_f^(r−1) h

So we can write:

z& = Az + Bv

where A and B are in controllable canonical form, that is:

ż = Az + Bv
y = [1 0 ... 0 0]·z

with

A = [0 1 0 ... 0;
     0 0 1 ... 0;
     ... ... ...;
     0 0 ... 0 1;
     0 0 ... 0 0]     B = [0; 0; ...; 0; 1]

We define:


x = [z; ξ] where z is r×1 and ξ is (n−r)×1 (z ∈ ℝ^r, ξ ∈ ℝ^(n−r)).

The normal forms theorem tells us that there exists a ξ such that:

ξ̇ = ψ(z, ξ)

Note that the internal dynamics are not a function of u.

So we have:

ż = Az + Bv
ξ̇ = ψ(z, ξ)

The ξ equation represents “internal dynamics”; these are not observable because z does
not depend on ξ at all ⇒ “internal”, and hard to analyze!

We want to analyze the internal dynamics, but they are difficult to analyze in general.
Oftentimes, to make our lives easier, we analyze the so-called "zero dynamics":

ξ̇ = ψ(0, ξ)

and in most cases we even look at the “linearized zero dynamics”.

J = ∂ψ/∂ξ evaluated at ξ = 0, and we look at the eigenvalues of J.

If these eigenvalues are well behaved, the nonlinear internal dynamics may be well behaved
locally. If they are not well behaved, the control may not be acceptable!

For linear systems:

ẋ = Ax + Bu
y = Cx

We have: H(s) = Y(s)/U(s) = C·(sI − A)^(−1)·B

The eigenvalues of the zero dynamics are the zeroes of H(s). Therefore if the zeroes of
H(s) are non-minimum phase (in the right-half plane) then the zero dynamics are
unstable.
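As a quick check of this statement, one can compute the zeros of H(s) symbolically. The numbers below are my own illustrative choice of (A, B, C), assuming sympy:

```python
import sympy as sp

# Hypothetical linear system: H(s) = C(sI - A)^{-1}B = (s - 1)/(s^2 + 3s + 2).
s = sp.symbols('s')
A = sp.Matrix([[0, 1], [-2, -3]])
B = sp.Matrix([0, 1])
C = sp.Matrix([[-1, 1]])

H = sp.cancel((C * (s * sp.eye(2) - A).inv() * B)[0])
num, den = sp.fraction(H)
zeros = sp.solve(num, s)     # right-half-plane zero -> non-minimum phase
poles = sp.solve(den, s)
```

Here the single zero sits at s = 1, in the right-half plane, so the zero dynamics of this example are unstable and the system is non-minimum phase, while the poles (−1, −2) are stable.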


By analogy, for nonlinear systems: if ξ̇ = ψ(0, ξ) is unstable, then the system:

ẋ = f(x) + g(x)u
y = h(x)

is called a non-minimum phase nonlinear system.

Input/Output Linearization

ẋ = f(x) + g(x)u
y = h(x)
o Procedure

a) Differentiate y until u appears in one of the equations for the derivatives of y:

ẏ
ÿ
...
y^(r) = α(x) + β(x)·u

After r steps, u appears.

b) Choose u to give y(r)=v, where v is the synthetic input

u = (1/β(x))·[−α(x) + v]

c) Then the system has the form: Y(s)/V(s) = 1/s^r

Design a linear control law for this r-integrator linear system.

d) Check internal dynamics.

o Example

Oral exam question


Design an I/O linearizing controller so that y → 0 for the plant:

ẋ1 = x2 + x1^3 + u
ẋ2 = −u
y = h(x) = x1

Follow steps:

a) ẏ = ẋ1 = x2 + x1^3 + u;  u appears ⇒ r = 1

b) Choose u so that ẏ = v = x2 + x1^3 + u

⇒ u = −x2 − x1^3 + v

In our case, α(x) = x1^3 + x2 and β(x) = 1.

c) Choose a control law for the r-integrator system, for example proportional control

Goal: to send y to zero exponentially

⇒ v = −K_p·(y − y_des) = −K_p·y since y_des = 0

d) Check internal dynamics:

Closed loop system:

ẋ1 = v = −K_p·x1
ẋ2 = −u = −(−x1^3 − x2 + v) = x1^3 + x2 − v = x1^3 + x2 + K_p·x1

If x1 → 0 as desired, x2 is governed by ẋ2 = x2


⇒ Unstable internal dynamics!
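A short Euler simulation (my own step size and gain) makes the instability visible: y = x1 converges, while the internal state x2 diverges:

```python
# Euler simulation of the closed loop above; Kp, dt, and the initial
# conditions are my own illustrative choices.
Kp, dt = 2.0, 1e-3
x1_, x2_ = 0.5, 0.1
for _ in range(int(5.0 / dt)):                 # simulate 5 seconds
    v = -Kp * x1_                              # synthetic input
    u = -x2_ - x1_**3 + v                      # I/O linearizing control
    x1_, x2_ = x1_ + dt * (x2_ + x1_**3 + u), x2_ + dt * (-u)
# x1 decays like e^(-Kp*t); x2 grows roughly like e^t
```

After 5 seconds x1 is essentially zero while x2 has grown by two orders of magnitude, exactly the unstable internal dynamics predicted above.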

There are two possible approaches when faced with this problem:

• Try to redefine the output: y = h(x1, x2)
• Try to linearize the entire state space ⇒ Input/State Linearization


Input/State Linearization (SISO Systems)

ẋ = f(x) + g(x)u

Question: does there exist a transformation φ(x) such that the transformed system is
linear?

Define the transformed states:

z ≡ [z1, z2, ..., zn]^T

I want to find φ(x) such that ż = Az + Bv where v ∈ ℝ, with:

• v = v(x, u) the synthetic control
• the system in Brunovsky (controllable) form

0 1 0 ... 0 0
0 0 1 0 ... 0
  
A = ... ... ... ... ... and B = ...
   
0 0 ... 0 1 0
 0 0 ... 0 0   1 

A is nxn and B is nx1.

We want a 1 to 1 correspondence between z and x such that:

z ≡ [z1, z2, ..., zn]^T  ⇔  x ≡ [x1, x2, ..., xn]^T

Question: does there exist an output y=z1(x) such that y has relative degree n?


ż1 = L_f^1(z1) + L_g(z1)·u = L_f^1(z1)   with L_g(z1) = 0

Let z2 ≡ L_f^1(z1), and require L_g(L_f^1(z1)) = 0. Then:

ż1 = z2
ż2 = z3
...
żn = L_f^n(z1) + L_g(L_f^(n−1)(z1))·u ≡ v

⇒ does there exist a scalar z1(x) such that:

L_g(L_f^k(z1)) = 0 for k = 0, ..., n−2
and L_g(L_f^(n−1)(z1)) ≠ 0 ?

z ≡ [z1, z2, ..., zn]^T = [L_f^0(z1), L_f^1(z1), ..., L_f^(n−1)(z1)]^T

⇒ is there a test?

Since ẋ = f(x) + g(x)u, the test should depend on f and g.

Jacobi’s identity

Carl Gustav Jacob Jacobi


Born: 10 Dec 1804 in Potsdam, Prussia (now Germany)


Died: 18 Feb 1851 in Berlin, Germany

Famous for his work on:

• Orbits and gravitation
• Elliptic functions
• Matrices and determinants

Jacobi’s Identity

A convenient relationship (see Slotine and Li) is called "Jacobi's identity":

L_{ad_f g}(h) = L_f(L_g(h)) − L_g(L_f(h))

Remember:

ad_f^i g ≡ [f, ad_f^(i−1) g] and ad_f g = [f, g]

This identity allows us to keep the conditions first order in z1.

⇒ Trod through messy algebra:

• For k = 0:

L_g(L_f^0(z1)) = 0 ⇒ L_g(z1) = 0

(∂z1/∂x1)·g1 + (∂z1/∂x2)·g2 + ... + (∂z1/∂xn)·gn = 0   (first order in z1)

• For k = 1:

L_g(L_f^1(z1)) = 0

L_g[(∂z1/∂x1)·f1 + (∂z1/∂x2)·f2 + ... + (∂z1/∂xn)·fn] = 0

∇[(∂z1/∂x1)·f1 + (∂z1/∂x2)·f2 + ... + (∂z1/∂xn)·fn]·g = 0


⇒ terms like ∂²z1/∂x1² appear ⇒ 2nd order in z1 (gradient of a gradient)

Things get messy, but by repeated use of Jacobi’s identity (see Slotine and Li), we have:

L_g(L_f^k(z1)) = 0 for k ∈ [0, n−2]  ⇔  L_{ad_f^k g}(z1) = 0 for k ∈ [0, n−2]   (*)

The two conditions above are equivalent. Evaluating the second half:

L_{ad_f^k g}(z1) = 0  ⇔  ∇z1·[g, ad_f g, ..., ad_f^(n−2) g] = 0

This leads to conditions of the type:

∇z1·g = 0 ⇒ (∂z1/∂x1)·g1 + (∂z1/∂x2)·g2 + ... + (∂z1/∂xn)·gn = 0

∇z1·ad_f g = 0 ⇒ (∂z1/∂x1)·(...) + (∂z1/∂x2)·(...) + ... + (∂z1/∂xn)·(...) = 0
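Jacobi's identity is easy to verify symbolically for a concrete case. The vector fields below reuse f and g from the running example of these notes; the output h is my own arbitrary choice, and sympy is assumed:

```python
import sympy as sp

# Verify L_{ad_f g}(h) = L_f(L_g h) - L_g(L_f h) on a concrete example.
x1, x2 = sp.symbols('x1 x2')
x = sp.Matrix([x1, x2])
f = sp.Matrix([x2 + x1**3, 0])
g = sp.Matrix([1, -1])
h = sp.sin(x1) * x2          # arbitrary test function

L = lambda vec, phi: (sp.Matrix([phi]).jacobian(x) * vec)[0]
ad_fg = g.jacobian(x) * f - f.jacobian(x) * g    # Lie bracket [f, g]

lhs = L(ad_fg, h)
rhs = L(f, L(g, h)) - L(g, L(f, h))
assert sp.simplify(lhs - rhs) == 0
```

Both sides reduce to (1 − 3·x1²)·x2·cos(x1), so the identity holds for this case, which is what lets the conditions above stay first order in z1.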

The Theorem of Frobenius

Ferdinand Georg Frobenius:

Born: 26 Oct 1849 in Berlin-Charlottenburg, Prussia (now Germany)


Died: 3 Aug 1917 in Berlin, Germany

Famous for his work on:

• Group theory
• Fundamental theorem of algebra
• Matrices and determinants

Theorem of Frobenius:

A solution to the set of partial differential equations Lad k g ( z1 ) = 0 for k ∈ [0, n − 2] exists
f

if and only if:

a) [g , ad f ]
g ,..., ad nf −1 g has rank n

b) [g , ad f ]
g ,..., ad nf − 2 g is involutive

Definition of "involutive":

A linearly independent set of vector fields (f1, ..., fm) is involutive if:

[f_i, f_j] = Σ_{k=1}^m α_{ijk}(x)·f_k(x)   ∀(i, j)

i.e. when you take Lie brackets you don't generate new vectors.

Note: this is VERY hard to do in practice.

Reference: George Myers at NASA Ames, in the context of helicopter control.

Example: (same as above)

ẋ1 = x2 + x1^3 + u
ẋ2 = −u

Question: does there exist a scalar z1(x1, x2) such that the relative degree is 2?

f = [x2 + x1^3; 0]    g = [1; −1]

This will be true if:

a) (g, [f, g]) has rank 2


b) g is involutive (any Lie bracket of a single vector field with itself is zero → OK)

Setting stuff up to look at (a):

Note: [f, g] = (∂g/∂x)·f − (∂f/∂x)·g = 0 − [3x1^2 1; 0 0]·[1; −1] = [−3x1^2 + 1; 0]

(g, [f, g]) = [1  −3x1^2 + 1; −1  0]

det(g, [f, g]) = 1 − 3x1^2, so x1 = ±√3/3 looks dangerous.
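The rank condition (a) can be checked symbolically; this sketch, assuming sympy, reproduces the bracket and locates the singular points:

```python
import sympy as sp

# Frobenius rank test for f = [x2 + x1^3, 0], g = [1, -1].
x1, x2 = sp.symbols('x1 x2')
x = sp.Matrix([x1, x2])
f = sp.Matrix([x2 + x1**3, 0])
g = sp.Matrix([1, -1])

ad_fg = g.jacobian(x) * f - f.jacobian(x) * g   # [f, g] = [-3*x1**2 + 1, 0]
M = sp.Matrix.hstack(g, ad_fg)                  # columns g and ad_f g
d = sp.simplify(M.det())                        # 1 - 3*x1**2
singular = sp.solve(d, x1)                      # where the rank drops to 1
```

The determinant vanishes exactly at x1 = ±√3/3, confirming the "dangerous" points noted above; everywhere else the pair (g, [f, g]) has rank 2.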

Question: how do we find z1?

We get a list of conditions:

• ∇z1·g = 0 ⇒ [∂z1/∂x1  ∂z1/∂x2]·[1; −1] = 0
  ⇒ ∂z1/∂x1 = ∂z1/∂x2 ⇒ z1 = x1 + x2 is one solution

• ∇z1·ad_f g = 0 (automatically)

So let’s trod through and check:

z1 = x1 + x2

ż1 = ẋ1 + ẋ2 = x2 + x1^3 = z2   (good that u doesn't appear, or we'd have r = 1!)

z̈1 = ż2 = ẋ2 + 3x1^2·ẋ1 = 3x1^2·(x2 + x1^3) + (3x1^2 − 1)·u   (u appears! (good))

Question: if you want y=x1 like in the original problem:

Define z̈1 = v, z1 = x1 + x2.

Hope the problem stays far away from x1 = ±√3/3.

Let v = −c1·ż1 − c2·(z1 − z1d)


⇒ z̈1 + c1·ż1 + c2·z1 = c2·z1d

⇒ z1 → z1d

Question: How to pick z1d?

z1 = x1 + x2

z1d = x1d + x2d

We want x2d ≡ −x1d^3, so that ż1 = z2 = x2 + x1^3 = 0 at the desired equilibrium.


[Figure: the desired operating point (x1d, x2d) lies on the curve x2 = −x1^3 in the (x1, x2) plane.]
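A numerical sketch of the resulting controller (my own gains, initial conditions, and z1d = −6, for which the equilibrium is x1 = 2, x2 = −8):

```python
# Euler simulation of the I/S linearizing controller above; gains, step
# size, and initial conditions are my own illustrative choices.
c1, c2, z1d, dt = 2.0, 1.0, -6.0, 1e-3
x1_, x2_ = 1.8, -6.0
for _ in range(int(15.0 / dt)):
    z1, z2 = x1_ + x2_, x2_ + x1_**3               # transformed states
    v = -c1 * z2 - c2 * (z1 - z1d)
    u = (v - 3 * x1_**2 * z2) / (3 * x1_**2 - 1)   # valid while x1 != ±sqrt(3)/3
    x1_, x2_ = x1_ + dt * (x2_ + x1_**3 + u), x2_ + dt * (-u)
```

Along this trajectory x1 stays near 2, well away from the singular points ±√3/3, and (x1, x2) settles at (2, −8), where z1 = −6 and z2 = 0.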

Feedback Linearization for MIMO Nonlinear Systems

Consider a "square" system (where the number of inputs equals the number of outputs, m):

ẋ = f(x) + Σ_{i=1}^m g_i·u_i
y = [h1, ..., hm]^T

ẏ_k = L_f(h_k) + Σ_{i=1}^m L_{g_i}(h_k)·u_i

Let r_k be the relative degree of output y_k, i.e. the smallest integer such that, for
some i:

L_{g_i}(L_f^(r_k − 1)(h_k)) ≠ 0

Let J(x) be an mxm matrix such that:

J(x) ≡ [L_{g_1}(L_f^(r_1−1)(h_1))  ...  L_{g_m}(L_f^(r_1−1)(h_1));
        ...                        ...  ...;
        L_{g_1}(L_f^(r_m−1)(h_m))  ...  L_{g_m}(L_f^(r_m−1)(h_m))]

J(x) is called the invertibility or decoupling matrix.

We will assume that J(x) is non-singular.

Let:

y^r ≡ [d^(r_1)y_1/dt^(r_1), ..., d^(r_m)y_m/dt^(r_m)]^T   where y^r is an m×1 vector

l(x) = [L_f^(r_1)(h_1), ..., L_f^(r_m)(h_m)]^T

Then we have:

y^r ≡ l(x) + J(x)·u ≡ v, where v is the synthetic input (v is m×1).

We obtain a decoupled set of equations:

d^(r_1)y_1/dt^(r_1) = v_1
...
d^(r_m)y_m/dt^(r_m) = v_m

so y ⇔ v

Design v any way you want to using linear techniques…


u = J^(−1)·(v − l)
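Numerically, the decoupling law is best implemented as a linear solve rather than an explicit inverse. A minimal sketch with placeholder numbers (not from a particular plant):

```python
import numpy as np

# MIMO decoupling control u = J^{-1}(v - l) at one state; J, l, v here
# are placeholder values for illustration.
def decoupling_control(J, l, v):
    """Solve J u = v - l; prefer solve() over forming the inverse."""
    return np.linalg.solve(J, v - l)

J = np.array([[1.0, 0.2], [0.1, 2.0]])
l = np.array([0.5, -1.0])
v = np.array([0.0, 1.0])
u = decoupling_control(J, l, v)
assert np.allclose(J @ u + l, v)    # y^r = l + J u equals the synthetic input v
```

In practice one should also monitor the conditioning of J(x) along the trajectory, since the law is only valid where J is non-singular.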

Problems:

• Need confidence in the model
• Internal dynamics

Internal Dynamics

The linearized subsystem has dimension equal to the total relative degree:

r_T = Σ_{k=1}^m r_k

⇒ we have internal dynamics of order n − r_T.

z_1^i = h_i
ż_1^i = z_2^i
...
ż_{r_i}^i = L_f^(r_i)(h_i) + Σ_{k=1}^m L_{g_k}(L_f^(r_i −1)(h_i))·u_k ≡ v_i

The superscript notation denotes which output we are considering. We have:

x ⇒ [z_1^1, z_2^1, ..., z_{r_1}^1, z_1^2, ...]^T ⇒ x = [z_T; ξ_T]

where z_T is r_T×1 and ξ_T is (n − r_T)×1.

The representation for x may not be unique!

Can we get a ξ that is not directly a function of the controls (as in the SISO case)? NO!


ξ̇ = ψ(ξ, z) + P(ξ, z)·u
ż = Az + Bv

and y^r ≡ l(x) + J(x)·u ≡ v

Internal dynamics ⇒ what is u?

⇒ design v, then solve for u using u = J^(−1)·(v − l)

The zero dynamics are defined by z = 0:

y^r ≡ 0 ⇒ u* = −J^(−1)·l(x)

The output remains identically zero if we apply the control u* (at all times).

Thus the zero dynamics are given by:

ξ̇ = ψ(ξ, 0) − P(ξ, 0)·J^(−1)(ξ, 0)·l(ξ, 0)

Dynamic Extension - Example

References: Slotine and Li; Hauser, PhD Dissertation, UCB, 1989, from which this example
is taken.

[Figure: a simple vehicle in the (x1, x2) plane, with yaw angle ψ and front-wheel
velocity u1 in the direction the wheels are pointing.]


Basically, ψ is the yaw angle of the vehicle, and x1 and x2 are the Cartesian locations of
the wheels. u1 is the velocity of the front wheels, in the direction that they are pointing,
and u2 is the steering velocity.

We define our state vector to be:

x = [x1, x2, ψ]^T

Our dynamics are:

ẋ1 = cos(ψ)·u1
ẋ2 = sin(ψ)·u1
ψ̇ = u2

We determined in a previous lecture that the system is controllable (f = 0).

y1 ≡ x1 and y 2 ≡ x 2 are defined as outputs.

[ẏ1; ẏ2] = [cos ψ  0; sin ψ  0]·[u1; u2]

J(x) = [cos ψ  0; sin ψ  0] is clearly singular (has rank 1).

Let u1 ≡ x3, u̇1 ≡ ẋ3 = u3, where u3 is the acceleration of the axle

⇒ the state has been extended.

ẋ1 = cos(ψ)·x3
ẋ2 = sin(ψ)·x3
ẋ3 = u̇1 = u3
ψ̇ = u2


f = [x3·cos ψ; x3·sin ψ; 0; 0]   g1 = [0; 0; 1; 0]   g2 = [0; 0; 0; 1]

where ẋ = f + g1·u3 + g2·u2 in the extended state space.

Take y1 ≡ x1 and y 2 ≡ x 2 .

[ÿ1; ÿ2] = [cos ψ  −u1·sin ψ; sin ψ  u1·cos ψ]·[u3; u2]

and the new J(x) matrix: J(x) = [cos ψ  −u1·sin ψ; sin ψ  u1·cos ψ] is non-singular for
u1 ≠ 0 (as long as the axle is moving).

How does one go about designing a controller for this example?

[v1; v2] = [cos ψ  −u1·sin ψ; sin ψ  u1·cos ψ]·[u3; u2] = J(x)·[u3; u2]

ÿ1 = v1
ÿ2 = v2

Given y1d(t), y2d(t):

Let:

v1 = −c1·ẏ1 − c2·(y1 − y1d)  ⇒  ÿ1 + c1·ẏ1 + c2·y1 = c2·y1d
v2 = −c3·ẏ2 − c4·(y2 − y2d)  ⇒  ÿ2 + c3·ẏ2 + c4·y2 = c4·y2d

To obtain the control, u:

[u3; u2] = J^(−1)(x)·[v1; v2]

and u̇1 = u3 ⇒ we have a dynamic feedback controller (the controller has dynamics, not
just gains, in it).
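A short Euler simulation of this dynamic controller (my own gains, targets, and initial conditions; the initial u1 is nonzero so J stays invertible along the run):

```python
import numpy as np

# Euler simulation of the dynamic-extension controller; gains, targets,
# and initial conditions are my own illustrative choices.
c1 = c3 = 2.0
c2 = c4 = 1.0
dt, y1d, y2d = 1e-3, 2.0, 1.0
x1_, x2_, psi, u1 = 0.0, 0.0, 0.3, 1.0     # u1 = x3 is the controller state
for _ in range(int(3.0 / dt)):
    y1dot, y2dot = np.cos(psi) * u1, np.sin(psi) * u1
    v = np.array([-c1 * y1dot - c2 * (x1_ - y1d),
                  -c3 * y2dot - c4 * (x2_ - y2d)])
    J = np.array([[np.cos(psi), -u1 * np.sin(psi)],
                  [np.sin(psi),  u1 * np.cos(psi)]])   # singular if u1 = 0
    u3, u2 = np.linalg.solve(J, v)
    x1_ += dt * y1dot
    x2_ += dt * y2dot
    u1 += dt * u3                          # dynamic part: du1/dt = u3
    psi += dt * u2
```

Over this horizon (x1, x2) closes most of the gap to (2, 1) with decoupled second-order error dynamics; note that as the vehicle comes to rest u1 → 0 and J approaches singularity, a known limitation of this scheme.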


Pictures for SISO cases:

o Picture of I/O system (r = 1)

o In general terms

ẋ = f(x) + g(x)u
y = h(x)

nth order
r = relative degree < n

a) Differentiate:

ẏ = ḣ(x) = (∂h/∂x)·(dx/dt) = (∂h/∂x)·(f(x) + g(x)u) = L_f h + L_g h·u, and L_g h = 0 if r > 1


⇒ ẏ = L_f h

ÿ = ∂ẏ/∂t = ∂[L_f h]/∂t = (∂L_f h/∂x)·ẋ = (∂L_f h/∂x)·[f(x) + g(x)u] = L_f^2 h + L_g L_f h·u

and L_g L_f h·u = 0 if r > 2

⇒ ÿ = L_f^2 h

...

y^(r−1) = L_f^(r−1) h

y^(r) = (∂L_f^(r−1) h/∂x)·[f(x) + g(x)u] = L_f^r h + L_g L_f^(r−1) h·u, where L_g L_f^(r−1) h ≠ 0

b) Choose u in terms of v

y^(r) = L_f^r h + L_g L_f^(r−1) h·u = v

Let u = (1/(L_g L_f^(r−1) h))·(−L_f^r h + v)

For now, to simplify the pictures, let L_g L_f^(r−1) h = 1.

c) Choose control law

d) Check internal dynamics


Feedback Linearization and State Transformation

Input/Output Linearization | I/O and I/S coincide                        | Input/State Linearization
---------------------------|---------------------------------------------|--------------------------
Natural output, r < n      | the natural output y has relative degree n  | choose a new output ynew with relative degree n

ẋ = f(x) + g(x)u
y = h(x)

We have an nth order system where y is the natural output, with relative degree r.

Previously, we skimmed over the state transformation interpretation of feedback
linearization.

Why do we transform the states?

The differential equations governing the new states have some convenient properties.

Example:

Consider a linear system ẋ = Ax + Bu

 1  0 
The points in 2-space are usually expressed in the natural basis:   1   .
 0   
So when we write “x”, we mean a point in 2-space that is gotten to from the origin by
doing:


[1 0; 0 1]·[x1; x2]

where (x1, x2) are the coordinates of a point in ℝ² in the natural basis.

To diagonalize the system, we do a change of coordinates, so we express points like:

1 0 
0 1 x = [t1 t 2 ]x'
 

where t1 and t2 are the eigenvectors of A and x’ represents the coordinates in the new
basis.

⇒ Tx = T ' x' ⇒ x = T ' x'

So we get a nice equation in the new coordinates:

x& ' = Λx'+ B ' u

where Λis diagonal.
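This change of basis can be sketched numerically (the A and B below are my own illustrative choice):

```python
import numpy as np

# Diagonalize A with its eigenvector matrix T', so x = T' x' gives
# x'' = Lambda x' + B' u in the new coordinates.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])

lam, T = np.linalg.eig(A)             # columns of T are the eigenvectors t1, t2
Lam = np.linalg.inv(T) @ A @ T        # diagonal (up to round-off)
Bp = np.linalg.inv(T) @ B             # input matrix in the new coordinates
assert np.allclose(Lam, np.diag(lam))
```

For this A the eigenvalues are −1 and −2, so the transformed system is two decoupled first-order modes, which is exactly the "convenient property" a good state transformation buys us.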

For I/O linearization, we do the same kind of thing:

We seek some nonlinear transformation so that the new state, x’, is governed by
differential equations such that the first r-1 states are a string of integrators (derivatives of
each other), and the differential equation for the rth state has the form:

ẋ'_r = (nonlinear function of x) + u

and n-r internal dynamics states will be decoupled from u (this is a matter of
convenience).


So we have: x’=T(x) where T is nonlinear.

x' = T(x) = [T1(x), T2(x), ..., Tn(x)]^T

Let’s enforce the above properties:

ẋ' = [ż1; ...; ż_{r−1}; żr; ξ̇1; ...; ξ̇_{n−r}]
   = [z2; ...; zr; nonlin. fn(x) + u; Φ1(z, ξ); ...; Φ_{n−r}(z, ξ)]

We know how to choose T1(x) through Tr(x). They are just y, ẏ, ÿ, ... etc.

How do we choose the Tr+1(x) through Tn(x)?

These transformations need to be chosen so that:

1. The transformation T(x) is a diffeomorphism:

o One-to-one transformation [figure: a one-to-one map is OK; a many-to-one map is not]
o T(x) is continuous
o T^(−1)(x') is continuous


o Also ∂T/∂x and ∂T^(−1)/∂x' must exist and be continuous

2. The ξ states should have no direct dependence on the input u.

Example (from HW)

x' = [z; ξ] = [T1(x); T2(x)]

We know that T1(x1,x2) = y = x2.

What about T2(x1,x2)?

Choose T2(x1,x2) to satisfy the above conditions. Let's start with condition 2: u does not
appear in the equation for ξ̇.

ξ̇ = fn(z, ξ)

ξ̇ = [∂T2/∂x1  ∂T2/∂x2]·ẋ
   = [∂T2/∂x1  ∂T2/∂x2]·( [a·sin x2; −x2] + [0; 1]·u )

We are only concerned with the second term. To eliminate the dependence on u, we must
have:

[∂T2/∂x1  ∂T2/∂x2]·[0; 1] = 0

⇒ ∂T2(x1, x2)/∂x2 = 0

⇒ T2(x1, x2) = T2(x1)

(T2 should not depend on x2).

An obvious answer is : T2(x1,x2) = x1. Then, we would have:

x' = [z; ξ] = [T1(x); T2(x)] = [x2; x1]


Is this a diffeomorphism? Obviously yes.

Note that T2(x1) = x1^3 works also.

What about T2(x1) = x1^2? (NO – it violates the one-to-one part of the conditions for a
proper diffeomorphism.)
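The Jacobian test for a (local) diffeomorphism can be automated; a sketch assuming sympy:

```python
import sympy as sp

# Check candidate transformations via the determinant of their Jacobian.
x1, x2 = sp.symbols('x1 x2', real=True)

T_good = sp.Matrix([x2, x1])          # det = -1 everywhere: diffeomorphism
T_bad = sp.Matrix([x2, x1**2])        # det = -2*x1: singular at x1 = 0,
                                      # and x1 -> x1**2 is not one-to-one
J_good = T_good.jacobian(sp.Matrix([x1, x2])).det()
J_bad = T_bad.jacobian(sp.Matrix([x1, x2])).det()
```

A determinant that never vanishes (as for T_good) is consistent with a global diffeomorphism, while the vanishing determinant of T_bad at x1 = 0 flags exactly the failure of one-to-oneness discussed above.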
