
BANG-BANG CONTROL OF SERVO-HYDRAULIC TESTING MACHINES
USING LEARNING TECHNIQUES
Juan Gerardo Castillo Alva
Marco Antonio Meggiolaro
Jaime Tupiassú Pinho de Castro
Pontifical Catholic University of Rio de Janeiro

Timothy Hamilton Topper
University of Waterloo, Canada
INTRODUCTION

• the machinery used in materials fatigue testing is based on servo-hydraulic systems

• it is used to provide useful information about a material's life in service by applying load cycles

• the load application may be repeated millions of times, at typical frequencies of up to 100 Hz for metals

• to achieve these relatively high frequencies in a typical fatigue test, it is necessary to …
LEARNING CONTROL

The learning process can be seen as a problem of estimation or successive approximation of unknown quantities or unknown functions (King-Sun, 1970). In this case, the unknown quantities that are estimated, or learned, by the controller are the parameters of the control laws.

[Figure: block diagram of the learning controller, with a memory storing the values UIJ(k+1)]

LEARNING CONTROL
Methodology

• the methodology aims to keep the servo-valve working at its extreme operation limits, i.e. in the fully open position, in one direction or the other, most of the time

• due to the system dynamics, the reversion points must be defined before the peaks and valleys of the desired forces or stresses

• the instant of reversion is a parameter that depends on several factors, such as the load amplitude and mean, and it is also influenced by dead zones caused in some cases by slack in the test specimen fixture
LEARNING CONTROL

• use the information stored in memory as values UIJ to control the system by feed-forward

• the errors measured at every cycle are used to update the UIJ values through a Learning Law

• this law is applied only after each reversal k of the controlled movement
LEARNING CONTROL
Learning Tables

• the table contains the Uij values

• the stored values are dimensionless, between 0 and 1

• they determine the reversion point xr where the servo-valve will be reversed:

U_IJ = (xr − x') / (xd − x')

xd  desired peak (or valley)
xr  reversion point
x'  last measured valley (or peak)

                     Columns (gamma)
Rows (minimum)   -25      -15      -5       5        15       25
-25              0.9810   0.9602   0.8795   0.8016   0.8712   0.9475
-15              0.9688   0.9415   Uij      0.8245   0.9005   0.9516
-5               0.9520   0.9230   0.8456   0.8429   0.9406   0.9712
15               0.9256   0.8910   0.7415   0.9038   0.9668   0.9856
25               0.9086   0.8723   0.6879   0.9312   0.9765   0.9901
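To make the relation concrete, here is a minimal Python sketch (function and variable names are mine, not from the slides) that recovers the reversion point xr implied by a stored U_IJ value:

```python
def reversion_point(x_d, x_prime, u_ij):
    """Reversion point xr implied by U_IJ = (xr - x') / (xd - x').

    x_d     : desired peak (or valley)
    x_prime : last measured valley (or peak)
    u_ij    : dimensionless table value in (0, 1)
    """
    return x_prime + u_ij * (x_d - x_prime)
```

Since 0 < U_IJ < 1, the reversion point always lies strictly between the last extremum x' and the desired one xd, i.e. the valve is reversed before the target is reached.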
LEARNING CONTROL
Reading of the UIJ value

• Uij → element of the table associated with row i and column j

• UIJ = Uij for a load cycle with minimum value min = mini and range gama = gamaj

• if mini < min < mini+1 and gamaj < gama < gamaj+1, then UIJ is obtained by bilinear interpolation:

U_IJ := U_i,j · (1−α)·(1−β) + U_i+1,j · α·(1−β) + U_i,j+1 · (1−α)·β + U_i+1,j+1 · α·β

α = (min − mini) / (mini+1 − mini),  0 < α < 1
β = (gama − gamaj) / (gamaj+1 − gamaj),  0 < β < 1

                 Columns (gamma)
Rows (minimum)                     gamaj      gamaj+1
          0.8595   0.8364   0.8153     0.9314     0.9650
mini      0.8143   0.7923   Ui,j       Ui,j+1     0.9736
mini+1    0.7640   0.7289   Ui+1,j     Ui+1,j+1   0.9812
          0.7128   0.6935   0.9216     0.9715     0.9878
          0.6550   0.6320   0.9418     0.9835     0.9934
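The bilinear read can be sketched in Python as follows (a minimal sketch; the grid layout and all names are my assumptions, not the authors' code):

```python
import bisect

def read_uij(table, mins, gammas, min_val, gama_val):
    """Read U_IJ from the learning table by bilinear interpolation.

    table  : table[i][j] = Uij at minimum mins[i] and range gammas[j]
    mins   : sorted row grid of minimum-load values
    gammas : sorted column grid of load-range values
    """
    # locate cell (i, j) so that mins[i] <= min_val <= mins[i+1], clamped
    i = min(max(bisect.bisect_right(mins, min_val) - 1, 0), len(mins) - 2)
    j = min(max(bisect.bisect_right(gammas, gama_val) - 1, 0), len(gammas) - 2)
    alpha = (min_val - mins[i]) / (mins[i + 1] - mins[i])
    beta = (gama_val - gammas[j]) / (gammas[j + 1] - gammas[j])
    return (table[i][j] * (1 - alpha) * (1 - beta)
            + table[i + 1][j] * alpha * (1 - beta)
            + table[i][j + 1] * (1 - alpha) * beta
            + table[i + 1][j + 1] * alpha * beta)
```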
LEARNING CONTROL
Error used in the Learning Law

error = (xd − x) / (xd − x')

xd  the desired peak (or valley)
x   the reached peak (or valley)
x'  the last measured valley (or peak)

Note:

• if x and xd are peaks, x' will have been a valley, then xd − x' > 0:
  undershoot → x < xd → error > 0
  overshoot → x > xd → error < 0

• if x and xd are valleys, x' will have been a peak, then xd − x' < 0:
  undershoot → x > xd → error > 0
  overshoot → x < xd → error < 0
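A one-line Python sketch of this error (names assumed), showing that the sign convention holds for both peaks and valleys:

```python
def learning_error(x_d, x, x_prime):
    """error = (xd - x)/(xd - x'): > 0 on undershoot, < 0 on overshoot,
    regardless of whether the target xd is a peak or a valley."""
    return (x_d - x) / (x_d - x_prime)
```

For example, a peak xd = 10 reached at x = 9 after a valley x' = 0 gives an error of +0.1 (undershoot); a valley xd = −10 reached at x = −9 after a peak x' = 0 also gives +0.1.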


LEARNING CONTROL
Learning Law

Since −1 < error < 1, the following law is proposed:

U_IJ := U_IJ · (1 + error)

Another law could be:

U_IJ := U_IJ · (1 + Klearning · error)

where Klearning is an adjustable gain proportional to the learning rate (but very high values can cause instability, e.g. if (1 + Klearning · error) < 0).

Moreover, one can choose e.g. Klearning = 1 when error > 0 (undershoot) and Klearning = 2 when error < 0 (overshoot), to avoid overshoots, which are undesirable in fatigue tests since they can generate overload effects.
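The asymmetric-gain variant can be sketched as follows (the gain values 1 and 2 come from the slide; the function name and default-argument packaging are my own):

```python
def apply_learning_law(u_ij, error, k_under=1.0, k_over=2.0):
    """U_IJ := U_IJ * (1 + Klearning * error), with a larger gain on
    overshoots (error < 0) so they are corrected more aggressively."""
    k = k_under if error > 0 else k_over
    return u_ij * (1.0 + k * error)
```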
LEARNING CONTROL
Updating the values in the Learning Table

U_i,j     := U_i,j     · [1 + (1−α)·(1−β) · error]
U_i,j+1   := U_i,j+1   · [1 + (1−α)·β · error]
U_i+1,j   := U_i+1,j   · [1 + α·(1−β) · error]
U_i+1,j+1 := U_i+1,j+1 · [1 + α·β · error]

α = (min − mini) / (mini+1 − mini),  0 < α < 1
β = (gama − gamaj) / (gamaj+1 − gamaj),  0 < β < 1
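In Python the four-neighbor update might look like this (a sketch; an in-place list-of-lists table and the function name are my assumptions):

```python
def update_table(table, i, j, alpha, beta, error):
    """Spread the measured error over the four table entries surrounding
    the operating point, using the same bilinear weights as the read step."""
    table[i][j]         *= 1.0 + (1 - alpha) * (1 - beta) * error
    table[i][j + 1]     *= 1.0 + (1 - alpha) * beta * error
    table[i + 1][j]     *= 1.0 + alpha * (1 - beta) * error
    table[i + 1][j + 1] *= 1.0 + alpha * beta * error
```

This way the entry closest to the actual (min, gama) operating point receives the largest share of the correction.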
LEARNING CONTROL
Learning Algorithm
• read a loading peak or valley
• calculate the range and minimum value of the load with respect to the
previous value
• read UIJ from the table, interpolated from Uij using the range and
minimum
• calculate the reversion point
• change the direction of the servo-valve at the calculated instant of
reversion
• measure the reached peak or valley
• calculate the error
• apply the learning law to UIJ
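The steps above can be sketched as one half-cycle of a control loop. Everything here is hypothetical glue: the `io` object and the `lookup`/`update` callables stand in for the real hardware interface and table code, which the slides do not show.

```python
def learning_half_cycle(lookup, update, x_prime, x_desired, io):
    """One valley-to-peak (or peak-to-valley) half-cycle of the algorithm.

    lookup(mini, gama) -> interpolated U_IJ; update(mini, gama, error)
    applies the learning law; io.reverse_valve(xr) / io.read_extremum()
    are hypothetical hardware hooks.
    """
    gama = abs(x_desired - x_prime)               # load range of this half-cycle
    mini = min(x_desired, x_prime)                # minimum value of the load
    u_ij = lookup(mini, gama)                     # read U_IJ from the table
    x_r = x_prime + u_ij * (x_desired - x_prime)  # reversion point
    io.reverse_valve(x_r)                         # flip the servo-valve at xr
    x = io.read_extremum()                        # measure reached peak/valley
    error = (x_desired - x) / (x_desired - x_prime)
    update(mini, gama, error)                     # learning law on the table
    return x, error
```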
LEARNING CONTROL
Simulations

[Figure: response of the learning control — force (kN, 0 to 10) vs. time (s, 0 to 0.14), showing output values, desired values and reversion points]
LEARNING CONTROL
Simulations

[Figure: response of the learning control — force (kN, −10 to 10) vs. time (s, 0 to 0.2), showing output values, desired values and reversion points]
LEARNING CONTROL
Simulations

[Figure: response of the learning control — force (kN, −10 to 10) vs. time (s, 0 to 0.2), with a zoomed detail of the response near a peak (force ≈ 18 to 20.5 kN, t ≈ 0.873 to 0.879 s)]
LEARNING CONTROL
Simulations

[Figure: response of the learning control under variable-amplitude loading — force (kN, −10 to 30) vs. time (s, 0 to 0.8), showing output values, desired values and reversion points]
LEARNING CONTROL
Simulations

Response of the learning control after previous learning (with a pre-filled table)

[Figure: two plots of output x vs. time (s) — left: −5 to 25 over 0 to 0.12 s; right: −60 to 80 over 0 to 0.35 s]
EXPERIMENTAL SYSTEM

[Photos of the experimental system, including an NI 9237 module]
RESULTS
Performance of the fatigue test machine with the learning control

[Chart: achievable test frequency (Hz, 0 to 100) vs. force (kN, 0 to 70) for three controllers:
• INSTRON control with overdrive (current > 40 mA)
• INSTRON control without overdrive (current up to 40 mA)
• learning control (without overdrive, current = 20 mA, CompactRIO)]
Thanks!
