ARTICLE INFO

Article history:
Received 13 January 2012
Received in revised form 6 November 2013
Accepted 6 November 2013
Available online 9 December 2013

Keywords:
Ant colony optimization
Fault diagnosis
Industrial systems
Particle swarm optimization
Robust diagnosis
Sensitive diagnosis

ABSTRACT

This paper proposes an approach for Fault Diagnosis and Isolation (FDI) on industrial systems via fault estimation. FDI is presented as an optimization problem, which is solved with the Particle Swarm Optimization (PSO) and Ant Colony Optimization (ACO) algorithms. A study of the influence of some parameters of PSO and ACO on the desirable characteristics of FDI, i.e. robustness and sensitivity, is also presented. As a consequence, the Particle Swarm Optimization with Memory (PSO-M) algorithm, a new variant of PSO, was developed. PSO-M has the objective of reducing the number of iterations/generations that PSO needs to execute in order to provide a diagnosis of reasonable quality. The proposed approach is tested using simulated data from a DC motor benchmark. The results and their analysis indicate the suitability of both the approach and the PSO-M algorithm.

© 2013 Elsevier Ltd. All rights reserved.
1. Introduction
A fault is an unpermitted deviation of at least one characteristic
property or parameter of a system from the acceptable, usual or
standard operating condition (Simani et al., 2002).
Faults can cause economic losses as well as damage to human capital and the environment. There is an increasing interest in the development of new methods for fault detection and isolation (FDI), also known as fault diagnosis, driven by demands for reliability, safety and efficiency (Isermann, 2005).
FDI methods are responsible for detecting, isolating and establishing the causes of the faults affecting the system. They should also guarantee the fast detection of incipient faults (sensitivity to faults) and the rejection of false alarms attributable to disturbances or spurious signals (robustness).
FDI methods are broken down into three general groups: the process history based methods (Venkatasubramanian et al., 2002c), those based on qualitative models (Venkatasubramanian et al., 2002b), and the quantitative model based methods, also known as analytical methods (Venkatasubramanian et al., 2002a).
The quantitative model based methods make use of an analytical or computational model of the system. The great variety of proposed model based methods comes down to a few basic concepts, such as the parity space, the observer approach and parameter estimation (Isermann, 2005).
0952-1976/$ - see front matter © 2013 Elsevier Ltd. All rights reserved.
http://dx.doi.org/10.1016/j.engappai.2013.11.007
The most used model is the linear time invariant (LTI) model, which has two representations: the transfer function (or transfer matrix) and the state space representation. The latter is also valid for non-linear models.
Let us express the input/output behavior of SISO (single input, single output) processes by means of ordinary linear differential equations, written in regression form as

y(t) = \psi^T(t)\,\theta \qquad (3)

where

\theta^T = [a_1 \;\cdots\; a_n \;\; b_0 \;\cdots\; b_m]

and

\psi^T(t) = [-y^{(1)}(t) \;\cdots\; -y^{(n)}(t) \;\; u^{(1)}(t) \;\cdots\; u^{(m)}(t)]

In the frequency domain, the process is described by the transfer function

G_p(s) = \frac{y(s)}{u(s)} = \frac{B(s)}{A(s)} = \frac{b_0 + b_1 s + \cdots + b_m s^m}{1 + a_1 s + \cdots + a_n s^n}

When faults affect the system, the output becomes

y(s) = G_{yu}(s)u(s) + G_{yf_u}(s)f_u(s) + G_{yf_y}(s)f_y(s) + G_{yf_p}(s)f_p(s)

where G_{yf_u}(s), G_{yf_p}(s) and G_{yf_y}(s) are the transfer functions that represent the faults f_u, f_p and f_y, respectively (Ding, 2008).
The proposed approach considers the estimation of the faulty parameters vector f = [f_u \; f_y \; f_p] instead of \theta. Therefore, it requires a model that directly represents the effect of the faults in the actuator, process and sensors of the system. This kind of model is widely used in other model based FDI methods, such as those based on observers or on parity spaces (Frank, 1990; Isermann, 2005; Ding, 2008).
The estimation of f allows diagnosing the system in a direct way: from the minimization of the sum of the squares of the output errors. The optimization problem is described as follows:

\min_{\hat{f}} \; F(\hat{f}) = \sum_{t=1}^{I} \left[ y(t, f) - \hat{y}(t, \hat{f}) \right]^2 \qquad (6)

s.t.

f_{min} \le \hat{f} \le f_{max} \qquad (7)
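The minimization of F(f̂) above only requires the ability to simulate the model for a candidate fault vector. As a minimal sketch (the `simulate` routine is a hypothetical placeholder for the faulty model of the system, not part of the paper):

```python
def objective(f_hat, y_measured, simulate):
    """Sum of squared output errors F(f_hat), cf. Eq. (6).

    f_hat      -- candidate fault vector [fu, fy, fp]
    y_measured -- measured output samples y(t), t = 1..I
    simulate   -- routine returning the model output y_hat(t) for f_hat
    """
    y_hat = simulate(f_hat)
    return sum((y - yh) ** 2 for y, yh in zip(y_measured, y_hat))


def clip_to_bounds(f_hat, f_min, f_max):
    """Enforce the box constraint f_min <= f_hat <= f_max, cf. Eq. (7)."""
    return [min(max(v, lo), hi) for v, lo, hi in zip(f_hat, f_min, f_max)]
```

Any population-based optimizer can then rank candidate fault vectors by calling `objective` and keep them feasible with `clip_to_bounds`.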
At iteration l, each particle z of the swarm updates its velocity and position according to

V_z(l+1) = \omega V_z(l) + c_1 r_1 (X_{pbest,z} - X_z(l)) + c_2 r_2 (X_{gbest} - X_z(l)) \qquad (10)

X_z(l+1) = X_z(l) + V_z(l+1) \qquad (11)

where r_1 and r_2 are random numbers uniformly distributed in [0, 1]. In the inertia weight variant, \omega decreases linearly along the iterations,

\omega = \omega_{max} - \frac{\omega_{max} - \omega_{min}}{Itr_{max}} \, l

In the constriction factor variant, the velocity update is multiplied by the factor

\chi = \frac{2}{\left| 2 - \varphi - \sqrt{\varphi^2 - 4\varphi} \right|} \qquad (12)

with \varphi = c_1 + c_2 > 4.
The literature recommends setting \chi = 0.729 with c_1 = c_2 = 2.05. This is equivalent to using the inertia weight variant with \omega = 0.729 through the entire procedure and establishing c_1 = c_2 = 1.49 (Eberhart and Shi, 2001).
There are different topologies for PSO. In this work, the Gbest topology is used: all the particles are connected to each other and form a single neighborhood (Kameyama, 2009).
A pseudo-code for PSO is given in Fig. 1.
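Since Fig. 1 is not reproduced here, the following sketch illustrates a Gbest-topology PSO with the linearly decreasing inertia weight described above; the sphere function in the usage example stands in for F(f̂), and all parameter values are only illustrative defaults:

```python
import random


def pso(objective, bounds, n_particles=30, iters=100,
        c1=2.0, c2=2.0, w_max=0.9, w_min=0.4):
    """Gbest-topology PSO with linearly decreasing inertia weight."""
    dim = len(bounds)
    X = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]
    pbest_val = [objective(x) for x in X]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]

    for l in range(iters):
        w = w_max - (w_max - w_min) * l / iters      # inertia weight schedule
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (pbest[i][d] - X[i][d])
                           + c2 * r2 * (gbest[d] - X[i][d]))
                X[i][d] += V[i][d]
                lo, hi = bounds[d]
                X[i][d] = min(max(X[i][d], lo), hi)  # keep the particle feasible
            val = objective(X[i])
            if val < pbest_val[i]:                   # update personal best
                pbest[i], pbest_val[i] = X[i][:], val
                if val < gbest_val:                  # update global best
                    gbest, gbest_val = X[i][:], val
    return gbest, gbest_val
```

For instance, `pso(lambda x: sum(v * v for v in x), [(-1.0, 1.0)] * 3)` drives the objective value close to zero.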
In the ACO algorithm, Eqs. (13)–(17) define the pheromone update for each of the k discrete values of every variable m = 1, 2, \ldots, k: the pheromone associated with each discrete value is reduced by evaporation, and additional pheromone is deposited on the values that belong to the best solution x_{best} found so far. Each ant then selects the discrete value \hat{n} of a variable by roulette wheel, i.e. \hat{n} is the value whose cumulative probability p_c satisfies p_{c,\hat{n}-1} < q_{rand} \le p_{c,\hat{n}} (Eq. (18)). Finally, Eqs. (19) and (20) identify, for each variable m = 1, 2, \ldots, k, the discrete value closest to the global best solution, \hat{m} : |x_{gbest,n} - x_{m,n}| = \min_n |x_{gbest,n} - x_{m,n}|.
6. Benchmark DC motor
This section describes the main characteristics of the DC motor control system DR300 (Ding, 2008). This system has been widely used for studying and testing new FDI methods due to its similarity with high speed industrial control systems (Ding, 2008).
The system is formed by a permanent magnet DC motor, which is coupled to a DC generator. The main function of this generator is to simulate the effect of a fault that results when a load torque is applied to the axis of the motor. The speed is measured by a tachometer that feeds the signal to a PI (proportional-integral) speed controller. Fig. 4 shows the block diagram of the DC motor control system AMIRA DR300.
The voltage U_T (Volts) is proportional to the rotational speed of the motor's axis W (rad/s). U_T is compared with U_ref (Volts) in order to use the error for computing the control signal U_C (Volts) of the PI speed controller. The AMIRA DR300 system also includes an internal control loop for the armature current I_A. The controller computes the motor armature voltage U_A (Volts) as a function of the reference that is obtained by means of the gain K_1 (Volts/Amp) and the output I_A (Amp).
6.1. Mathematical model
For this study, the internal current loop, which is the process to be controlled, is treated as a single block. The block diagram of the closed loop is formed by the process and the PI controller. The parameters of the laboratory DC motor DR300 are reported in Table 3.1 of Ding (2008).
This analysis considers that the system can be affected by three
additive faults fu, fp and fy. fu represents a fault in the actuator and
it is modeled as a deviation of the control signal; fp represents a
fault in the process itself due to a load torque, which is applied to
the axis of the motor, and fy represents a fault in the measurement
of the motor speed.
The dynamics of the control system in open loop is described in the frequency domain by

U_T(s) = G_{yu}(s)\left(U_C(s) + f_u(s)\right) + G_{yf_p}(s) f_p(s) + f_y(s) \qquad (21)

G_{yu}(s) = \frac{8.75}{(1 + 1.225s)(1 + 0.03s)(1 + 0.005s)} \qquad (22)

G_{yf_p}(s) = \frac{31.07}{s(1 + 0.005s)} \qquad (23)

and the PI speed controller is given by

G_c(s) = \frac{U_C(s)}{E(s)} = 1.6 + \frac{1.96}{s} \qquad (24)

E(s) = U_{ref}(s) - U_T(s) \qquad (25)
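As an illustration of Eqs. (21)–(25), the closed loop can be simulated by realizing G_yu(s) as three first-order lags in series and discretizing with the explicit Euler method; the step size, simulation horizon and the way the faults are injected are choices of this sketch, not of the paper:

```python
def simulate_dc_motor(u_ref=1.0, fu=0.0, fy=0.0, t_end=10.0, dt=1e-3):
    """Euler simulation of the PI speed loop around G_yu(s).

    G_yu(s) = 8.75 / ((1 + 1.225 s)(1 + 0.03 s)(1 + 0.005 s)) is realized as
    three first-order lags in series; fu enters as a deviation of the control
    signal and fy as an additive fault on the measured speed voltage.
    Returns the sampled tachometer voltage U_T.
    """
    taus = (1.225, 0.03, 0.005)
    x = [0.0, 0.0, 0.0]              # states of the three lags
    integ = 0.0                      # integrator of the PI controller
    out = []
    for _ in range(int(t_end / dt)):
        u_t = x[2] + fy              # measured speed voltage, sensor fault added
        e = u_ref - u_t              # error signal, Eq. (25)
        integ += 1.96 * e * dt       # integral action of Gc(s), Eq. (24)
        u_c = 1.6 * e + integ        # PI control signal
        u = 8.75 * (u_c + fu)        # actuator fault as control-signal deviation
        x[0] += dt * (u - x[0]) / taus[0]      # first lag, tau = 1.225 s
        x[1] += dt * (x[0] - x[1]) / taus[1]   # second lag, tau = 0.03 s
        x[2] += dt * (x[1] - x[2]) / taus[2]   # third lag, tau = 0.005 s
        out.append(u_t)
    return out
```

Because of the integral action, U_T tracks U_ref in the fault-free case, while an actuator fault visibly changes the transient response.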
Table 1
Faulty situations for the first and second parts of the numerical experiments.

Case    fu      fy      fp
1       0.87    0.12    0.53
2       0.27    0.96    0
3       0.63    0       0.29
4       0       0.47    0.86
Table 2
Faulty situations for the third part of the numerical experiments.

Case    fu      fy      fp
5       0.08    0.09    0.2
6       0.15    0       0
7       0       -0.1    0
8       0       0       0.12
Table 3
Variants of PSO.

Alg     c1     c2     Z     ω_min   ω_max
PSOB    2      2      30    n/a     n/a
PSOI1   2      2      30    0.4     0.9
PSOI2   3.5    0.5    30    0.4     0.9
PSOI3   2      2      30    0.4     0.9

In the PSOI variants, the inertia weight decreases along the iterations as \omega = \omega_{max} - ((\omega_{max} - \omega_{min})/Itr_{max})\,l, where l is the current iteration.
Table 4
Variants of ACO.

Alg       k      q0     Ants
ACO1:1    63     0.15   30
ACO1:2    63     0.55   30
ACO1:3    63     0.85   30
ACO2:1    127    0.15   30
ACO2:2    127    0.55   30
ACO2:3    127    0.85   30

where k is the number of discrete values considered for each variable.
For the present study, it was considered that the faults are time invariant and subject to the following restrictions:

f_u, f_y \in \mathbb{R} : -1\,\mathrm{V} \le f_u, f_y \le 1\,\mathrm{V} \qquad (26)

f_p \in \mathbb{R} : 0\,\mathrm{Nm} \le f_p \le 1\,\mathrm{Nm} \qquad (27)

7. Experimental methodology
Fig. 5. Comparison between the performance of PSOB and PSOI1 when diagnosing the faulty situations in Table 1, up to 2% level noise.
as well as multiple and incipient faults, see Case 5 from Table 2. All the measurements are corrupted with a noise level of up to 8%.
Different values for some parameters of PSO and ACO were considered, and the general, robust and sensitive performance of the variants of PSO and ACO was analyzed. For each faulty situation, 30 runs of each algorithm were made. The abbreviation F̂(f) is used for the mean value of the objective function, and Eval for the arithmetic average of the minimum number of objective function evaluations needed until the final value of the objective function was reached. The analysis of the computational effort of the algorithms was based on the number of evaluations of the objective function.
Based on this study, the best sets of parameters for ACO and PSO, respectively, were selected. For that selection the Sign test was used, which is an easy way to compare the overall performance of two algorithms (Derrac et al., 2011).
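The Sign test used for this selection reduces to counting, over all paired comparisons, how often one algorithm outperforms the other; a minimal sketch (lower values taken as better, as for F̂(f)):

```python
def sign_test_counts(results_a, results_b):
    """Count paired wins of algorithm A over B and vice versa (ties discarded)."""
    wins_a = sum(1 for a, b in zip(results_a, results_b) if a < b)
    wins_b = sum(1 for a, b in zip(results_a, results_b) if b < a)
    return wins_a, wins_b
```

The resulting counts are then compared against the critical value of the binomial distribution, as described by Derrac et al. (2011).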
A comparison between the best variants of ACO and PSO was made using the Wilcoxon signed ranks test. This is a simple and safe nonparametric test for pairwise statistical comparisons (Derrac et al., 2011). The statistic T was computed and compared with the value of the Wilcoxon distribution for Num degrees of freedom (critical value of W), where Num is the number of cases for which the performance of the algorithms is compared (Derrac et al., 2011). The Wilcoxon signed ranks test was also used for comparing the best variant of PSO against PSO-M.
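The statistic T can be computed directly from the paired results; the following sketch implements the signed-ranks computation, using average ranks for tied absolute differences:

```python
def wilcoxon_T(a, b):
    """Wilcoxon signed ranks statistic T = min{R+, R-} for paired samples."""
    diffs = [x - y for x, y in zip(a, b) if x != y]   # zero differences discarded
    ordered = sorted(diffs, key=abs)                  # rank by absolute difference
    rank_of = [0.0] * len(ordered)
    i = 0
    while i < len(ordered):
        j = i
        while j < len(ordered) and abs(ordered[j]) == abs(ordered[i]):
            j += 1                                    # group of tied |d| values
        avg = (i + 1 + j) / 2.0                       # average of ranks i+1 .. j
        for t in range(i, j):
            rank_of[t] = avg
        i = j
    r_plus = sum(r for d, r in zip(ordered, rank_of) if d > 0)
    r_minus = sum(r for d, r in zip(ordered, rank_of) if d < 0)
    return min(r_plus, r_minus)
```

T is then compared with the critical value of W for Num degrees of freedom, exactly as described above.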
8. Experiments
8.1. Implementation of PSO
Two variants of PSO were considered: the basic (canonical) PSO, PSOB, and PSO with inertia weight, PSOI, see the algorithm in Fig. 1. Table 3 shows the values of the parameters of the algorithm in each variant. These variants permit analyzing the influence of the parameters ω, c1 and c2 on the diagnosis. These parameters have a great influence on the quality of the solution and the convergence (Becceneri et al., 2006). The selected values for c1 and c2 follow the recommendations from Kennedy (1998), Carlisle and Dozier (2001) and Kameyama (2009).
In all the experiments, Z = 30 particles were used, following the idea of taking Z ≈ 10·dim(D). The values of the coefficients c1, c2 and ω permit establishing the balance between the intensification and diversification of the search. In Table 3, the notation l represents the current iteration number, yielding a reduction of the inertia weight along the iterative procedure.
8.2. Implementation of ACO
The variants of ACO were based on different values for the parameters q0 and k. The parameter q0 permits establishing the level of randomness in the selection of the discrete value of each variable.
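The role of q0 can be sketched with the pseudo-random-proportional rule of the ant colony system: with probability q0 the ant greedily takes the discrete value with the most pheromone, and otherwise it chooses by roulette wheel over the pheromone distribution. The function below is a simplified illustration under these assumptions, not the exact rule of the paper:

```python
import random


def select_value(pheromone, q0):
    """Select the index of a discrete value for one variable.

    pheromone -- pheromone amounts tau_j for the k discrete values
    q0        -- exploitation probability (higher q0 -> less randomness)
    """
    if random.random() < q0:
        # exploitation: take the value with the highest pheromone
        return max(range(len(pheromone)), key=lambda j: pheromone[j])
    # exploration: roulette wheel on the cumulative pheromone distribution
    q_rand = random.random() * sum(pheromone)
    cum = 0.0
    for j, tau in enumerate(pheromone):
        cum += tau
        if q_rand <= cum:
            return j
    return len(pheromone) - 1    # guard against floating point round-off
```

With a high q0 (e.g. 0.85) the search concentrates on the best-reinforced values, while a low q0 (e.g. 0.15) keeps the selection closer to the roulette wheel, which is consistent with the greater variability observed below for the ACO variants with low q0.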
9. Results
9.1. Results of the diagnosis with PSO
9.1.1. General performance
In this part, the variants PSOB and PSOI1 were applied to solve the optimization problem given in Eq. (6).
Fig. 5 shows that both variants PSOB and PSOI1 detect the fault, but PSOI1 is more precise, although it presents a greater number of objective function evaluations.
In order to determine the causes of this behavior, Fig. 6 represents the best value of the objective function versus the iterations for PSOB and PSOI1, respectively. The figures show that the greater number of evaluations of the objective function in the variant PSOI1 is a consequence of its capability for obtaining better estimations. This is related to the fact that the algorithm decreases the parameter ω as a function of the number of iterations, allowing more intensification around the better solutions.
Fig. 6. Best value of F̂(f) obtained by PSOB and PSOI at each iteration for the Case 3 in Table 1, up to 2% level noise. (a) PSOB and (b) PSOI.
Fig. 7. Comparison between the performance of PSOI1, PSOI2 and PSOI3 when diagnosing the faulty situations in Table 1, up to 8% level noise.
Fig. 8. Comparison between the performance of PSOI1, PSOI2 and PSOI3 when diagnosing the faulty situations in Table 2, up to 8% level noise.
Fig. 9. Comparison between the behavior of the search space by the variant PSOI1 and PSOI3. (a) PSOI1 and (b) PSOI3.
This part analyzes the influence of the parameters of PSO in the quality of the diagnosis; see the description of the experiments in Section 7.
Fig. 8 shows that the worst estimations were exhibited by PSOI3. The performance of PSOI1 and PSOI2 was quite similar regarding the quality of the estimations. This indicates that a higher diversification of the search space is important for a sensitive diagnosis.
In order to analyze the effect of diversification on sensitivity, the search of PSOI1 and PSOI3 was compared, see Fig. 9.
Fig. 9 shows that PSOI1 performs a greater diversification than PSOI3. Based on the better estimations of PSOI1 over PSOI3, and based on Fig. 9, it is possible to conclude that sensitivity is improved by a greater diversification of the search space. On the other hand, sensitivity does not necessarily imply a higher computational cost.
Taking all these results into account, it was also concluded that the better variants for obtaining a sensitive diagnosis are PSOI1 and PSOI2. Based on the Sign test, the variant PSOI1 was selected as the best variant with a significance level α = 0.05, see the results of the test in Table 5.
ACO1:1 shows the best performance for Case 1. For Case 2, ACO2:1 is more efficient than ACO1:1; for Cases 3 and 4 both variants are similar.
In Fig. 11 a comparison between ACO1:1 and ACO2:1 is shown. This time the figures show the evolution of the best value of the objective function obtained at each iteration. Taking into account these results, the conclusion is that the greater search space of ACO2:1, with a low value for the parameter q0 = 0.15, produces greater variations in the value of the objective function than ACO1:1.
Table 5
Sign test results: PSOI1 versus PSOI2.

Comparison        Criterion   PSOI1 wins   PSOI1 lost   Num   α
PSOI1 vs PSOI2    F̂(f)        –            –            –     0.05
Fig. 10. Comparison between the performance of ACO1:1 and ACO2:1 when diagnosing the faulty situations in Table 1, up to 2% level noise.
Fig. 11. Comparison between the best value of F̂(f) obtained by ACO1:1 and ACO2:1, until each iteration, when diagnosing the Case 3 in Table 1, up to 2% level noise. (a) ACO1:1 and (b) ACO2:1.
Fig. 12. Comparison between the performance obtained by six variants of ACO from Table 4 for the faulty situations from Table 1, up to 8% level noise.
Table 6
Sign test results: ACO1:1 versus ACO2:1.

Comparison         Criterion   ACO1:1 wins   ACO1:1 lost   Num   α
ACO1:1 vs ACO2:1   F̂(f)        10            12            28    0.05
Fig. 13. Comparison between the performance obtained by six variants of ACO from Table 4 when diagnosing the faulty situations in Table 2, up to 8% level noise.
Fig. 14. Comparison between the performance obtained by ACO1:1 and PSOI1 for the faulty situations from Tables 1 and 2, up to 8% level noise.
Table 7
Wilcoxon signed ranks test results: PSOI1 versus ACO1:1.

Comparison        Criterion   R     α
PSOI1 vs ACO1:1   F̂(f)        36    0.01
ACO1:1 vs PSOI1   Eval        36    0.01

Here R is the rank sum favoring the first algorithm, and the statistic T = min{R+, R−} is compared with the critical value of W.

Table 8
Results of the comparison between PSOI1, ACO1:1 and PSO-M, up to 8% level noise.
Case (fu, fy, fp)       Variant   fu        fy        fp        F̂(f)     Eval
1 (0.87, 0.12, 0.53)    ACO1:1    0.8508    0.0794    0.5429    1.4702    510
                        PSOI1     0.8778    0.0985    0.5402    0.7018    1491
                        PSO-M     0.8635    0.1496    0.5526    0.5966    1037
2 (0.27, 0.96, 0)       ACO1:1    0.2349    0.7556    0.0857    9.5828    591
                        PSOI1     0.2496    0.9944    0.01734   9.1491    2244
                        PSO-M     0.2995    0.9587    0.0015    9.0020    1083
3 (0.63, 0, 0.29)       ACO1:1    0.6159    0.0889    0.3270    2.4367    549
                        PSOI1     0.6428    0.0275    0.3040    2.1905    1734
                        PSO-M     0.6412    0.0859    0.3293    2.1881    991
4 (0, 0.47, 0.86)       ACO1:1    0.0063    0.4889    0.8667    1.0940    483
                        PSOI1     0.0010    0.4576    0.8545    0.3261    1761
                        PSO-M     0.0087    0.4614    0.8570    0.3548    947
5 (0.08, 0.09, 0.2)     ACO1:1    0.1397    0.0444    0.1746    3.3207    660
                        PSOI1     0.0775    0.0951    0.2023    2.8132    1716
                        PSO-M     0.0720    0.1126    0.2113    2.8339    1051
6 (0.15, 0, 0)          ACO1:1    0.1714    0.0349    0.0222    4.9194    576
                        PSOI1     0.1633    0.0175    0.0088    4.3135    1941
                        PSO-M     0.1483    0.0247    0.0109    4.3073    963
7 (0, -0.1, 0)          ACO1:1    0.0032    -0.2127   0.0508    4.3432    606
                        PSOI1     0.0103    -0.0884   0.0043    3.7786    1473
                        PSO-M     0.0039    -0.0547   0.0194    3.6969    1077
8 (0, 0, 0.12)          ACO1:1    0.0190    0.0063    0.1175    3.6654    633
                        PSOI1     0.0028    0.0135    0.1261    3.1399    1830
                        PSO-M     0.0118    0.0315    0.1358    3.1353    1109

10. Comparison with parity space and diagnostic observers

For comparison purposes, the parity space and the diagnostic observer approaches (Ding, 2008) were also applied to the benchmark. The parity space residual is built from the input, output and fault vectors stacked over a data window of length s, according to the equations:

u_s(k) = [u(k-s) \; u(k-s+1) \; \cdots \; u(k)]^T \qquad (29)

f_s(k) = [f(k-s) \; f(k-s+1) \; \cdots \; f(k)]^T \qquad (30)

y_s(k) = H_{o,s}\,x(k-s) + H_{u,s}\,u_s(k) + H_{f,s}\,f_s(k) \qquad (31)

with

H_{u,s} = \begin{bmatrix} 0 & & & \\ CB & 0 & & \\ \vdots & \ddots & \ddots & \\ CA^{s-1}B & \cdots & CB & 0 \end{bmatrix} \qquad (32)

H_{f,s} = \begin{bmatrix} F_f & & & \\ CE_f & F_f & & \\ \vdots & \ddots & \ddots & \\ CA^{s-1}E_f & \cdots & CE_f & F_f \end{bmatrix} \qquad (33)

H_{o,s} = \begin{bmatrix} C \\ CA \\ \vdots \\ CA^{s} \end{bmatrix} \qquad (34)

The residual is generated as

r_s(k) = v_s \left( y_s(k) - H_{u,s} u_s(k) \right) \qquad (35)

which vanishes in the fault-free case,

r_s(k) = 0 \qquad (36)

provided that the parity vector v_s satisfies

v_s : v_s H_{o,s} = 0 \qquad (37)

while, when faults affect the system,

r_s(k) = v_s H_{f,s} f_s(k) \ne 0 \qquad (38)

The diagnostic observer is given by

\dot{\hat{x}} = (A - HC)\hat{x} + Bu + Hy \qquad (39)

\hat{y} = C\hat{x} \qquad (40)

and the estimation error e = x - \hat{x} obeys

\dot{e} = (A - HC)e + (E_f - HF_f)f, \qquad r = Ce + F_f f \qquad (41)
Table 9
Wilcoxon signed ranks test results: PSO-M versus PSOI1.

Comparison       Criterion   R     α
PSO-M vs PSOI1   F̂(f)        27    0.1
PSO-M vs PSOI1   Eval        36    0.01
Fig. 15. Comparison between the performance obtained by ACO1:1, PSOI1 and PSO-M for the faulty situations in Tables 1 and 2, up to 8% level noise.
Fig. 16. Residual obtained with parity space and diagnostic observers; no faults affecting the system and no noise affecting the output. (a) Parity space and (b) diagnostic
observers.
In the presence of noise, and without faults affecting the system, both approaches provide false alarms. This fact is related to the lack of robustness. In order to increase robustness, some thresholds could be established. Fig. 17(c) shows the result of the estimation of fu in this case, which is close to zero but not exactly zero. In this case, thresholds are necessary; they are represented with red lines and indicate when the estimation values are within 0.1% of the maximum values allowed for this fault.
The residual and the result of the fault estimation approach when the system is affected by fu = 0.9 at t = 50 s are shown in Fig. 18. A zoom of the residual obtained by the diagnostic observer is shown in Fig. 19. The residual exceeded the threshold values at this time in both approaches, indicating that the system is under a fault fu. Both approaches detected the fault. In addition, our proposal also allowed obtaining fu = 0.8940 as an estimate of its magnitude, see Fig. 18(c).
Fig. 17. Residual obtained with parity space and diagnostic observers; no faults affecting the system and up to 8% level noise. (a) Parity space, (b) diagnostic observers
and (c) fault estimation.
Fig. 18. Residual obtained with parity space and diagnostic observers, actuator fault affecting the system fu 0.9 and up to 8% level noise. (a) Parity space, (b) diagnostic
observers, and (c) fault estimation.
Fig. 19. Zoom of the residual obtained by the diagnostic observer when the actuator fault fu = 0.9 affects the system.
For two of the faulty situations considered, the PSO-M algorithm took 906 model simulations (174 s) and 1081 model simulations (305 s), respectively. On the other hand, diagnostic observers took around 16 s for computing the residual. Considering that most industrial processes have large time constants, the processing time required by this proposal does not make its use impracticable.
11. Conclusions
This study indicates that the application of metaheuristics, in particular PSO and ACO, constitutes a promising methodology for fault diagnosis problems based on direct fault estimation.
Fig. 20. Residual obtained with parity space and diagnostic observers, actuator fault affecting the system fu 0.08 and up to 8% level noise. (a) Parity space, (b) diagnostic
observers, and (c) fault estimation.
Acknowledgments
The authors acknowledge the support provided by FAPERJ, Fundação de Amparo à Pesquisa do Estado do Rio de Janeiro, CNPq, Conselho Nacional de Desenvolvimento Científico e Tecnológico, and CAPES, Coordenação de Aperfeiçoamento de Pessoal de Nível Superior.
References
Angeline, P.J., 1998. Evolutionary optimization versus particle swarm optimization: philosophy and performance differences. In: Evolutionary Programming VII: Proceedings of the Seventh Annual Conference on Evolutionary Programming (EP98), Lecture Notes in Computer Science, vol. 1447. Springer-Verlag, pp. 601–611.
Becceneri, J., Stephany, S., Campos-Velho, H.F., Silva-Neto, A.J., 2006. Solution of the inverse problem of radiative properties estimation with PSO technique. In: Inverse Problems in Engineering Symposium (IPES), Iowa State University, USA.
Becceneri, J.C., Zinober, A., 2001. Extraction of energy in a nuclear reactor. In: XXXIII Simpósio Brasileiro de Pesquisa Operacional, Campos do Jordão, SP, Brazil.
Beielstein, T., Parsopoulos, K.E., Vrahatis, M.N., 2002. Tuning PSO Parameters Through Sensitivity Analysis. Technical Report, Reihe Computational Intelligence CI 124/02, Collaborative Research Center (SFB 531), Department of Computer Science, University of Dortmund.
Camps-Echevarría, L., Llanes-Santiago, O., Silva-Neto, A.J., 2010. An approach for fault diagnosis based on bio-inspired strategies. In: Nature Inspired Cooperative Strategies for Optimization (NICSO 2010), Studies in Computational Intelligence, pp. 53–63.
Carlisle, A., Dozier, G., 2001. An off-the-shelf PSO. In: Proceedings of the Particle Swarm Optimization Workshop, pp. 1–6.
Chen, J., Patton, R.J., 1999. Robust Model-Based Fault Diagnosis for Dynamic Systems. Kluwer Academic Publishers, Dordrecht.
Chow, E.Y., Willsky, A., 1984. Analytical redundancy and the design of robust failure detection systems. IEEE Trans. Autom. Control 29, 603–614.
Clerc, M., Kennedy, J., 2002. The particle swarm – explosion, stability, and convergence in a multidimensional complex space. IEEE Trans. Evol. Comput. 6, 58–73.
Derrac, J., García, S., Molina, D., Herrera, F., 2011. A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms. Swarm Evol. Comput. 1, 3–18.
Ding, S.X., 2008. Model-Based Fault Diagnosis Techniques. Springer.
Dorigo, M., Blum, C., 2005. Ant colony optimization theory: a survey. Theor. Comput. Sci. 344, 243–278.
Dorigo, M., Caro, G.D., 1992. The Ant Colony Optimization Meta-Heuristic (Ph.D. thesis). Université Libre de Bruxelles.
Duarte, C., Quiroga, J., 2010. Algoritmo PSO para identificación de parámetros en un motor DC. Rev. Fac. Ing. Univ. Antioquia 55, 116–124.
Eberhart, R.C., Shi, Y.H., 2001. Comparing inertia weights and constriction factors in particle swarm optimization. In: Proceedings of the IEEE Congress on Evolutionary Computation, pp. 84–88.
Fliess, M., Join, C., Mounier, H., 2004. An introduction to nonlinear fault diagnosis with an application to a congested internet router. In: Advances in Communication and Control Networks, Lecture Notes in Control and Information Science. Springer.
Frank, P.M., 1990. Fault diagnosis in dynamic systems using analytical and knowledge-based redundancy – a survey and some new results. Automatica 26, 459–474.
Frank, P.M., 1996. Analytical and qualitative model-based fault diagnosis – a survey and some new results. Eur. J. Control 2, 6–28.
Höfling, T., 1993. Detection of parameter variations by continuous-time parity equations. In: 12th IFAC World Congress, pp. 511–516.
Höfling, T., Isermann, R., 1996. Fault detection based on adaptive parity equations and single-parameter tracking. Control Eng. Pract. 4, 1361–1369.
Isermann, R., 1984. Process fault detection based on modelling and estimation methods – a survey. Automatica 20, 387–404.
Isermann, R., 2005. Model based fault detection and diagnosis. Status and applications. Annu. Rev. Control 29, 71–85.
Kameyama, K., 2009. Particle swarm optimization – a survey. IEICE Trans. Inf. Syst. E92-D, 1354–1361.
Kennedy, J., 1997. The particle swarm: social adaptation of knowledge. In: IEEE International Conference on Evolutionary Computation. IEEE, pp. 303–308.
Kennedy, J., 1998. The behavior of particles. In: Evolutionary Programming VII. Springer, pp. 581–590.
Kennedy, J., Eberhart, R., 1995. Particle swarm optimization. In: IEEE International Conference on Neural Networks. IEEE, Perth, Australia, pp. 1942–1948.
Li, Z., Dahhou, B., 2008. A new fault isolation and identification method for nonlinear dynamic systems: application to a fermentation process. Appl. Math. Model. 32, 2806–2830.
Liang, J.J., Qin, A.K., Suganthan, P.N., Baskar, S., 2006. Comprehensive learning particle swarm optimizer for global optimization of multimodal functions. IEEE Trans. Evol. Comput. 10, 281–295.
Liu, L., Liu, W., Cartes, D.A., 2008. Particle swarm optimization-based parameter identification applied to permanent magnet synchronous motors. Eng. Appl. Artif. Intell. 21, 1092–1100.
Liu, Q.L.W., 2009. The study of fault diagnosis based on particle swarm optimization algorithm. Comput. Inf. Sci. 2, 87–91.
Metenidin, M.F., Witczak, M., Korbicz, J., 2011. A novel genetic programming approach to nonlinear system modelling: application to the DAMADICS benchmark problem. Eng. Appl. Artif. Intell. 24, 958–967.
Narasimhana, S., Rengaswamya, P.V.R., 2008. New nonlinear residual feedback observer for fault diagnosis in nonlinear systems. Automatica 44, 2222–2229.
Odgaard, P.F., Matajib, B., 2008. Observer-based fault detection and moisture estimating in coal mills. Control Eng. Pract. 16, 909–921.
Patton, R.J., Frank, P.M., Clark, R.N., 2000. Issues of Fault Diagnosis for Dynamic Systems. Springer, London.
Poli, R., 2007. An Analysis of Publications on Particle Swarm Optimisation Applications. Department of Computer Science, University of Essex.
Samanta, B., Nataraj, C., 2009. Use of particle swarm optimization for machinery fault detection. Eng. Appl. Artif. Intell. 22, 308–316.
Shelokar, P., Siarry, P., Jayaraman, V., Kulkarni, B., 2007. Particle swarm and ant colony algorithms hybridized for improved continuous optimization. Appl. Math. Comput. 188, 129–142.
Silva-Neto, A.J., Becceneri, J.C., 2009. Bioinspired computational intelligence techniques – applications in inverse radiative transfer problems. Notes in Applied Mathematics. SBMAC, São Carlos.
Simani, S., Fantuzzi, C., Patton, R.J., 2002. Model-Based Fault Diagnosis in Dynamic Systems Using Identification Techniques. Springer.
Simani, S., Patton, R.J., 2008. Fault diagnosis of an industrial gas turbine prototype using a system identification approach. Control Eng. Pract. 16, 769–786.
Socha, K., Dorigo, M., 2008. Ant colony optimization for continuous domains. Eur. J. Oper. Res. 185, 1155–1173.
Souto, R.P., Stephany, S., Becceneri, J.C., Campos Velho, H.F., Silva Neto, A.J., 2005. On the use of the ant colony system for radiative properties estimation. In: 5th International Conference on Inverse Problems in Engineering: Theory and Practice (V ICIPE). Leeds University Press, Leeds, England, pp. 1–10.
Venkatasubramanian, V., Rengaswamy, R., Yin, K., Kavuri, S.N., 2002a. A review of process fault detection and diagnosis. Part 1: quantitative model-based methods. Comput. Chem. Eng. 27, 293–311.
Venkatasubramanian, V., Rengaswamy, R., Yin, K., Kavuri, S.N., 2002b. A review of process fault detection and diagnosis. Part 2: qualitative models and search strategies. Comput. Chem. Eng. 27, 313–326.
Venkatasubramanian, V., Rengaswamy, R., Yin, K., Kavuri, S.N., 2002c. A review of process fault detection and diagnosis. Part 3: process history based methods. Comput. Chem. Eng. 27, 327–346.
Wang, L., Niu, Q., Fei, M., 2008. A novel quantum ant colony optimization algorithm and its application to fault diagnosis. Trans. Inst. Meas. Control 30, 313–329.
Witczak, M., 2007. Modelling and Estimation Strategies for Fault Diagnosis of Non-Linear Systems: From Analytical to Soft Computing Approaches. Springer.
Yang, E., Xiang, H., Zhang, D.G.Z., 2007. A comparative study of genetic algorithm parameters for the inverse problem-based fault diagnosis of liquid rocket propulsion systems. Int. J. Autom. Comput. 4, 255–261.