Research Article
A Novel Fractional-Order PID Controller for
Integrated Pressurized Water Reactor Based on
Wavelet Kernel Neural Network Algorithm
Copyright 2014 Yu-xin Zhao et al. This is an open access article distributed under the Creative Commons Attribution License,
which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
This paper presents a novel wavelet kernel neural network (WKNN) with a wavelet kernel function. It is applicable to online learning with adaptive parameters and is applied to parameter tuning of a fractional-order PID (FOPID) controller, which can handle the time delay problem of a complex control system. Combining the wavelet function and the kernel function, the wavelet kernel function is adopted, and its availability for the neural network is validated. Compared to the conventional wavelet neural network, the most innovative character of the WKNN is its rapid convergence and high precision in the parameter updating process. Furthermore, the integrated pressurized water reactor (IPWR) system is established by RELAP5, and a novel control strategy combining the WKNN and fuzzy logic rules is proposed for shortening the controlling time and utilizing the experiential knowledge sufficiently. Finally, experiment results verify that the proposed control strategy and controller have practicability and reliability in an actual complicated system.
As a mature controller with strong availability, PID controllers have been applied in industrial processes for sophisticated, large-scale dynamic systems. The primary reason is their relatively simple structure, which can be easily comprehended and implemented [13-15]. In recent years, considerable interest in time delay has arisen around the fractional-order PID controller, and many researchers have presented beneficial methods to design stabilizing fractional-order PID (FOPID) controllers for varying time delay systems [16-19]. This paper adopts a neural network (NN) to tune the FOPID online to control the nuclear reactor power, which keeps the coolant average temperature constant under load changing. NN is an effective method for PID parameter tuning, and the wavelet neural network (WNN) improves the efficiency of NN by transforming the nonlinear excitation function into a wavelet function. However, WNN still follows the gradient descent method to adjust the weights, which inevitably causes the problems of deficient generalization performance, ill-posedness, and overfitting. Motivated by the drawbacks above, the algorithm core, LMS, should be exploited to improve the WNN.

Both LMS and KLMS are derived from the mean squared error cost function [20-22]. The LMS algorithm is famous for being easy to comprehend and implement, but it cannot be utilized in the nonlinear domain. With the wide spread of kernel theory, the novel LMS algorithm with the kernel trick has drawn considerable attention in learning machines, including neural networks [23-26]. Recently, kernel functions (such as the Gaussian kernel) have mostly been applied in the support vector machine (SVM), which projects linearly nonseparable data into another space in which they become separable. The RBF network is a sort of NN which frequently uses the Gaussian kernel. But this kernel cannot approach the target appropriately when it encounters more complex signals in the calculation process. Some literature indicates that the Gaussian kernel in RBF networks and SVM cannot ideally generate a set of complete bases in the subspace by translation [27]. Some potential wavelet functions can be transformed into kernel functions, which are competitive in producing a set of complete bases in the \(L^2\) (square integrable) space by stretching and translating. Meanwhile, the RBF network needs to identify the model of the object and obtain the Jacobian information when it tunes the network weights, and it is hard to gain the Jacobian information of a complex industrial system. But the conventional WNN can tune the PID controller without such information and is more appropriate as the parent body for improvement. Thus, this paper proposes a novel neural network based on the wavelet kernel function and combines it with fuzzy logic rules to achieve the online tuning of the FOPID. Experiment results validate that the control strategy and controller design improve the accuracy and speed, which could control the nuclear reactor power operation and enhance the overall efficiency.

The remaining content is composed of six sections. Section 2 provides a short introduction to the LMS algorithm and the kernel LMS algorithm. Section 3 presents the related theory of the wavelet kernel function and proposes the wavelet kernel neural network method, including the discussion of adaptive parameters. The model of the nuclear reactor is structured by the RELAP5 program in Section 4. Then the control strategy and the fractional-order PID controller are designed with fuzzy logic and the wavelet kernel neural network in Section 5. In Section 6, experiments illustrate the effectiveness of the wavelet kernel neural network and the fractional-order PID controller. As some controllers face the problem of losing efficacy in practice, Section 6 also simulates and analyzes the control process on the basis of the reactor model. Finally, the conclusion of this paper is given in Section 7.

2. The Description of Kernel LMS Algorithm

In this section, the kernel LMS algorithm is introduced as a short review for the wavelet kernel neural network. Kernel LMS is an approach to performing the LMS algorithm in the kernel feature space. In the LMS algorithm, the hypothesis space is composed of all the linear operators on the input space (mappings into the reals) [28], which is embedded in a Hilbert space, and the linear operator represents the standard inner product. The LMS algorithm minimizes the total instantaneous error of the system output:

\[ J(i) = \frac{1}{2}\, e^{2}(i), \tag{1} \]

where \(e(i) = \mathrm{norm}(i) - \mathbf{x}^{T}(i)\,\mathbf{w}(i)\), \(\mathrm{norm}(i)\) is the desired value, \(\mathbf{w}(i)\) is the weight vector, \(\mathbf{x}(i)\) is the input vector, and \(i\) is the instant. Using the straightforward stochastic gradient descent update rule, the weight vector \(\mathbf{w}\) is approximated based on the input vector \(\mathbf{x}\):

\[ \mathbf{w}(i+1) = \mathbf{w}(i) + \eta\, e(i)\, \mathbf{x}(i), \tag{2} \]

where \(\eta\) is the step-size. By training step by step, the weight \(\mathbf{w}\) is updated as a linear combination of the previous and present input vectors.

This LMS algorithm learns linear patterns satisfactorily, while it cannot achieve the same sound effect in nonlinear patterns. Motivated by this drawback, Pokharel and other scholars extended the LMS algorithm to an RKHS by the kernel trick [28, 29]. Kernel methods map the input into a high-dimensional space (HDS) [30], and any kernel function based on Mercer's theorem has the following mapping [31]:

\[ \kappa(\mathbf{x}, \mathbf{x}') = \left\langle \varphi(\mathbf{x}), \varphi(\mathbf{x}') \right\rangle = \varphi^{T}(\mathbf{x})\, \varphi(\mathbf{x}'). \tag{3} \]

When the input in (2) consists of \(\{\varphi(\mathbf{x}(1)), \varphi(\mathbf{x}(2)), \ldots, \varphi(\mathbf{x}(i))\}\) rather than \(\{\mathbf{x}(1), \mathbf{x}(2), \ldots, \mathbf{x}(i)\}\), where \(\varphi(\mathbf{x}(i))\) is the infinite feature vector transformed from the input vector \(\mathbf{x}(i)\), the LMS algorithm is no longer applicable, and the total instantaneous error of the system output is as follows:

\[ J(i) = \frac{1}{2} \left( \mathrm{norm}(i) - \varphi^{T}(\mathbf{x}(i))\, \mathbf{w}(i) \right)^{2}, \tag{4} \]

where \(\varphi(\mathbf{x}(i))\) is the transformed input vector, \(\mathrm{norm}(i)\) is the desired value, and \(\mathbf{w}(i)\) is the weight vector in the RKHS.

Then, building the hypothesis that \(\mathbf{w}(0) = 0\) for convenience, (2) can be converted to

\[ \mathbf{w}(i) = \eta \sum_{j=0}^{i-1} e(j)\, \varphi(\mathbf{x}(j)), \tag{5} \]
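As a rough illustration of the contrast between the linear update (2) and the kernelized form (5), the following Python sketch runs plain LMS and KLMS on a nonlinear target. The Gaussian kernel, the step-sizes, and the data sizes are illustrative stand-ins, not the paper's wavelet kernel or settings.

```python
import numpy as np

def lms(X, d, eta=0.1):
    """Plain LMS, equation (2): w(i+1) = w(i) + eta * e(i) * x(i)."""
    w = np.zeros(X.shape[1])
    errors = []
    for x, target in zip(X, d):
        e = target - x @ w          # e(i) = norm(i) - x(i)^T w(i)
        w += eta * e * x
        errors.append(e)
    return w, errors

def klms(X, d, kernel, eta=0.5):
    """KLMS: the weight lives in the RKHS, so the prediction is the
    kernel expansion z(i) = eta * sum_j e(j) * kappa(x(j), x(i))."""
    centers, errs = [], []
    for x, target in zip(X, d):
        z = eta * sum(e * kernel(c, x) for c, e in zip(centers, errs))
        e = target - z
        centers.append(x)           # every input becomes a center
        errs.append(e)              # with its a-priori error as coefficient
    return centers, errs

def gauss(a, b, sigma=0.35):
    # Illustrative Mercer kernel (the paper uses a wavelet kernel instead).
    return np.exp(-np.sum((a - b) ** 2) / (2 * sigma ** 2))

# A nonlinear target: LMS can only fit its best linear approximation,
# while KLMS keeps shrinking the residual.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 1))
d = np.sin(3 * X[:, 0])
_, lin_err = lms(X, d)
_, ker_err = klms(X, d, gauss)
print(np.mean(np.abs(lin_err[-50:])), np.mean(np.abs(ker_err[-50:])))
```

The late-stage KLMS errors come out well below the LMS errors, mirroring the nonlinear-domain limitation of LMS discussed above.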
Mathematical Problems in Engineering 3
and the final output in the step-\(i\) updating of the learning algorithm is

\[ z(i) = \eta \sum_{j=0}^{i-1} e(j) \left\langle \varphi(\mathbf{x}(j)), \varphi(\mathbf{x}(i)) \right\rangle = \eta\, \mathbf{e}^{T}(i-1)\, \mathbf{k}_{i}, \tag{6} \]

and \(\mathbf{k}_{i} = \left[ \kappa(\mathbf{x}(1), \mathbf{x}(i)), \kappa(\mathbf{x}(2), \mathbf{x}(i)), \ldots, \kappa(\mathbf{x}(i-1), \mathbf{x}(i)) \right]^{T}\).

From (4)-(6), the KLMS algorithm remedies the defect of the LMS algorithm, namely its relatively small-dimensional input space, and improves the well-posedness of learning in the infinite-dimensional space with finite training data [28]. As an application of adaptive filter theory, the neural network can take advantage of the KLMS algorithm, which endows the learning of the neural network with more efficiency and reliability.

The conception above is the foundation of wavelet analysis and theory. Note that the wavelet kernel function for the neural network is a special kernel function that comes from wavelet theory; it takes the horizontal (translation-invariant) form \(\kappa(\mathbf{x}, \mathbf{x}') = \kappa(\mathbf{x} - \mathbf{x}')\) and satisfies the condition of Mercer's theorem [34], which can be summarized as follows.

Lemma 1. Let \(\kappa(\mathbf{x}, \mathbf{x}')\) be a continuous, symmetric, bounded kernel. Then \(\kappa(\mathbf{x}, \mathbf{x}')\) is a valid kernel function for the wavelet neural network if and only if, for any nonzero function \(g\) with \(\int g^{2}(\mathbf{x})\, d\mathbf{x} < \infty\), the following condition is satisfied:

\[ \iint \kappa(\mathbf{x}, \mathbf{x}')\, g(\mathbf{x})\, g(\mathbf{x}')\, d\mathbf{x}\, d\mathbf{x}' \geq 0. \tag{10} \]

The Morlet kernel function is adopted as the wavelet kernel of the proposed network. As for the hidden layer, the signal of a single neuron is provided by the input layer, and the induced local field \(v_{j}(i)\) and the output of the hidden layer \(x_{j}(i)\) can be displayed as

\[ v_{j}(i) = \sum_{l=1}^{m} x_{l}(i)\, w_{lj}(i) = \mathbf{x}^{T}(i)\, \mathbf{w}_{j}(i), \qquad x_{j}(i) = \psi\left( v_{j}(i) \right). \tag{15} \]

With the cost function

\[ J(i) = \frac{1}{2}\, e^{2}(i), \tag{14} \]

the weights between the input layer and the hidden layer are adjusted along the gradient through the chain-rule factor \(e(i) \sum_{j} \psi'(v_{j}(i))\, w_{j}(i)\) (equation (20)), where \(e(i) = \mathrm{norm}(i) - z(i)\).
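The hidden-layer signals in (15) and a translation-invariant wavelet kernel can be sketched in Python as follows. The Morlet mother wavelet \(\psi(t) = \cos(1.75t)\, e^{-t^{2}/2}\) follows the form used for wavelet support vector machines [36]; the dilation factor, network sizes, and weights below are illustrative assumptions, not the paper's tuned values.

```python
import numpy as np

def morlet(t):
    """Morlet mother wavelet psi(t) = cos(1.75 t) * exp(-t^2 / 2) [36]."""
    return np.cos(1.75 * t) * np.exp(-t ** 2 / 2.0)

def wavelet_kernel(x, x_prime, a=1.0):
    """Horizontal (translation-invariant) product kernel
    kappa(x, x') = prod_l psi((x_l - x'_l) / a), with dilation factor a."""
    return np.prod(morlet((x - x_prime) / a))

def hidden_layer(x, W):
    """Equation (15): v_j(i) = x(i)^T w_j(i), then x_j(i) = psi(v_j(i))."""
    v = x @ W                 # induced local fields, one per hidden neuron
    return morlet(v)          # wavelet excitation of the hidden layer

x = np.array([0.2, -0.5, 0.1])
W = np.ones((3, 4)) * 0.3     # 3 inputs, 4 hidden neurons (toy weights)
print(wavelet_kernel(x, np.zeros(3)), hidden_layer(x, W))
```

Because the Morlet wavelet is an even function, the kernel is symmetric in its two arguments, as Lemma 1 requires of a valid kernel.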
and the output of the hidden neuron can be determined as

\[ x_{\mathrm{out}} = \psi\left( \left\langle \mathbf{w}(i), \mathbf{x}(i) \right\rangle \right) = \psi\left( \sum_{j=0}^{i-1} \eta(j)\, e(j)\, \kappa(\mathbf{x}(j), \mathbf{x}(i)) \right) = \psi\left( \eta(i)\, \mathbf{e}^{T}(i-1)\, \mathbf{k}_{i} \right), \tag{21} \]

where the corresponding gradient involves \(\psi'\left( \eta(i)\, \mathbf{e}^{T}(i-1)\, \mathbf{k}_{i} \right)\) and \(\mathbf{w}(i-1)\).

3.3. The Updating of Adaptive Variable Step. According to the algorithm presented above (see Section 2), the adaptive variable step can be regarded as a significant coefficient with the ability to influence the adaptivity and effectivity of the algorithm. The stability and convergence of the algorithm are influenced by the step-size parameter, and the choice of this parameter reflects a compromise between the dual requirements of rapid convergence and small misadjustment. A large prediction error causes the step-size to increase to provide faster tracking, while a small prediction error results in a decrease in the step-size to yield smaller misadjustment [37]. Therefore, this paper improves the NLMS algorithm derived from the LMS algorithm, which is propitious to multidimensional nonlinear structures and avoids the limitation of the linear structure.

NLMS focuses on the main problem of the LMS algorithm, its constant step-size, and designs an adaptive step-size to avoid instability and speed up convergence:

\[ \mathbf{w}(i+1) = \mathbf{w}(i) + \frac{1}{\left\langle \mathbf{x}(i), \mathbf{x}(i) \right\rangle}\, e(i)\, \mathbf{x}(i), \tag{22} \]

where \(1/\left\langle \mathbf{x}(i), \mathbf{x}(i) \right\rangle\) is regarded as the variable step-size compared with the LMS algorithm. But the NLMS has little ability to perform in the nonlinear space. By the kernel trick, the kernel function can not only improve the wavelet neural network but also update the step-size of the neural network self-adaptively. As (17) reveals, z is a consequence of the nonlinear function \(\psi(\cdot)\). Since \(\psi(\cdot)\) is continuously differentiable, this function can be matched by innumerable linear functions following the definition of the differential calculus.

Theorem 2. The weight updating satisfies the NLMS condition in the linear integrable space \(L^{1}\) as follows:

\[ w(i+1) = w(i) + \eta(i)\, e(i)\, u(i), \tag{23} \]

where \(\eta(i) = 1/\left\langle u(i), u(i) \right\rangle\) is the adaptive variable step-size. Then the weights are expanded to the dimension matrix \(\mathbf{w}(i)\) and adjusted as

\[ \mathbf{w}(i+1) = \mathbf{w}(i) + \boldsymbol{\eta}(i)\, \psi'\!\left( \mathbf{w}^{T}(i)\, \mathbf{u}(i) \right) e(i)\, \mathbf{u}(i), \tag{24} \]

where the nonlinear function \(\psi(\cdot)\) is continuously differentiable everywhere, \(\boldsymbol{\eta}(i)\) has \(n\) elements, and each step-size \(\eta_{j}(i)\) \((j = 1, 2, \ldots, n)\) is given as

\[ \eta_{j}(i) = \frac{1}{\psi'\!\left( \mathbf{w}_{j}^{T}(i)\, \mathbf{u}(i) \right) \left\langle \mathbf{u}(i), \mathbf{u}(i) \right\rangle}. \tag{25} \]

Proof. Hypothesize that \(\varepsilon\) is small enough and that a linear function \(f(x) = a(i)\,x + b(i)\) exists such that \(|f(x) - \psi(x)| < \varepsilon\) in a closed interval; therefore, regard \(\psi(x) \approx f(x)\) and \(\psi'(x) \approx a(i)\) as satisfied. Consider the equations

\[ e(a(i)) = \mathrm{norm}(i) - \psi\left( \mathbf{w}^{T}(i)\, \mathbf{u}(i) \right) \approx \mathrm{norm}(i) - a(i)\, \mathbf{w}^{T}(i)\, \mathbf{u}(i) - b(i), \qquad J(a(i)) = \frac{1}{2}\, e^{2}(a(i)), \tag{26} \]

where \(A(i)\) is the diagonal matrix of the slopes \(a_{j}(i)\). Minimizing \(J(a(i))\) with \(\partial J(a(i)) / \partial \eta = 0\), the \(\eta_{j}(i)\) \((j = 1, 2, \ldots, n)\) can be inferred as follows:

\[ A(i) = \begin{bmatrix} a_{1}(i) & & 0 \\ & \ddots & \\ 0 & & a_{n}(i) \end{bmatrix}, \tag{27} \]

\[ \eta(i) = \frac{1}{a^{2}(i) \left\langle \mathbf{u}(i), \mathbf{u}(i) \right\rangle}. \tag{28} \]

Equations (27) and (28) reveal that \(\eta(i)\) is independent of the intercept \(b(i)\) of the linear function and relates only to the input and the slope \(a(i)\). Motivated by the slight error of \(\psi(x) \approx f(x)\), the \(a^{2}(i)\) in (28) is replaced by \(\psi'\left( \mathbf{w}^{T}(i)\, \mathbf{u}(i) \right)\), which improves the accuracy and completes the proof.

On the basis of the discussion above, (22) can be represented as a special case of Theorem 2 when the slope of the function \(f(x)\) is 1. Theorem 2 expands the NLMS from the linear space to the multidimensional nonlinear space and broadens the application of NLMS. Compared with the variable step-size in (22), (27) and (28) can be defined as the adaptive variable step-size of NLMS in the extended space.

When \(\left\langle \mathbf{u}(i), \mathbf{u}(i) \right\rangle\) is too small, the problem caused by numerical calculation has to be considered. To overcome this difficulty, (28) is altered to

\[ \eta(i) = \frac{1}{a^{2}(i) \left\langle \mathbf{u}(i), \mathbf{u}(i) \right\rangle + \nu^{2}}, \tag{29} \]

where \(\nu^{2} > 0\) is set to a very small value. The weight updating in (20) is more complicated than that between the output layer and the hidden layer. Because the updating in (20) is based on the output layer, and the factor \(e(i) \sum_{j} \psi'(v_{j}(i))\, w_{j}(i)\) relates every neuron of the output layer to the updating process between the \(j\)th neuron of the hidden layer and the previous layer, every neuron of the output layer contributes to \(\eta(i)\). Therefore, this paper adopts the same step-size in \(\boldsymbol{\eta}(i)\)
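The normalized step-size of (22) and its regularized variant (29) can be sketched in Python as follows. The data, gains, and the unit slope \(a(i) = 1\) (the special case noted after Theorem 2) are illustrative assumptions.

```python
import numpy as np

def nlms(X, d, nu2=1e-4, slope=1.0):
    """NLMS sketch: w(i+1) = w(i) + eta(i) * e(i) * u(i), with the
    regularized step-size eta(i) = 1 / (a^2 <u,u> + nu^2), equation (29).
    nu2 > 0 guards against a vanishing input norm <u(i), u(i)>."""
    w = np.zeros(X.shape[1])
    for u, target in zip(X, d):
        e = target - u @ w
        eta = 1.0 / (slope ** 2 * (u @ u) + nu2)   # equation (29)
        w += eta * e * u
    return w

# Noiseless linear data: NLMS converges to the true weights regardless
# of the input power, which a fixed-step LMS would have to be tuned for.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
w_true = np.array([0.5, -1.0, 2.0])
d = X @ w_true
print(nlms(X, d))        # converges toward w_true
```

The same normalization idea carries over to (24)-(25), where the local slope of the excitation function replaces the constant `slope` used here.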
based on an averaged value, and the elements \(\eta_{j}(i)\) of \(\boldsymbol{\eta}(i)\) can be given as follows:

\[ \eta_{j}(i) = \frac{1}{n} \sum_{k=1}^{n} \eta_{k}(i). \tag{30} \]

3.4. The Modulation of Wavelet Kernel Width. The goal of minimizing the mean square error can be achieved by gradient search. If the kernel-space cost function is derivable, the wavelet kernel of the neural network can also be deduced by the gradient algorithm. In consideration of the conventional wavelet neural network, the gradient search method can be utilized for updating the wavelet kernel weight due to the derivability of the wavelet kernel function. Besides the synaptic weights among the various layers of the network, there is another coefficient, the dilation factor \(a\) in (11), which also influences the mean square error. With respect to the cost function (14), the dilation factor is corrected by

\[ a(i+1) = a(i) - \eta_{a}\, \nabla_{a}\!\left( e^{2}(i) \right), \tag{31} \]

where \(\eta_{a}\) is the constant step-size and \(\nabla_{a}\) is the partial derivative, that is, the gradient function. According to the chain rule of calculus, this gradient is expressed as

\[ \nabla_{a}(i) = -e(i)\, \frac{\partial z(i)}{\partial a(i)}, \qquad \frac{\partial z(i)}{\partial a(i)} = \eta \sum_{j=0}^{i-1} e(j)\, \kappa_{a}\left( \mathbf{x}(j), \mathbf{x}(i) \right), \tag{32} \]

where \(\kappa_{a} = \partial \kappa / \partial a\) and \(z(i) = \eta \sum_{j=0}^{i-1} e(j)\, \kappa(\mathbf{x}(j), \mathbf{x}(i))\). Then \(\nabla_{a}(i)\) can be represented as

\[ \nabla_{a}(i) = -\eta\, e(i) \sum_{j=0}^{i-1} e(j)\, \kappa_{a}\left( \mathbf{x}(j), \mathbf{x}(i) \right). \tag{33} \]

Allowing for (19) and (33), the updating formulation of the dilation factor \(a(i)\) is

\[ a(i+1) = a(i) - \eta_{a}\, \eta\, e(i) \sum_{j=0}^{i-1} e(j)\, \kappa_{a}\left( \mathbf{x}(j), \mathbf{x}(i) \right). \tag{34} \]

The WKNN process is presented as Algorithm 1.

4. The Model Design of IPWR

In order to reduce the volume of the pressure vessel, the integrated reactor generally uses a compact once-through steam generator (OTSG). Since there is no separator in the steam generator, the superheated steam produced by the OTSG can improve the thermal efficiency of the secondary loop systems. Compared with the natural circulation U-tube steam generator, the OTSG is designed with a simpler structure but less water volume in the heat transfer tubes. Since it is difficult to measure the water level in the OTSG, especially during variable load operation, the OTSG needs a high-efficiency control system to ensure safe operation. The primary coolant average temperature is an important parameter to ensure steam generator heat exchange. The control scheme with a constant coolant average temperature can achieve accurate control of the reactor power and avoid the generation of large coolant volume changes in the loop.

In the integrated pressurized water reactor (IPWR) system, the essential equations of the thermal-hydraulic model are listed as follows.

(1) Mass continuity equations:

\[ \frac{\partial}{\partial t}\left( \alpha_{g} \rho_{g} \right) + \frac{1}{A} \frac{\partial}{\partial x}\left( \alpha_{g} \rho_{g} v_{g} A \right) = \Gamma_{g}, \qquad \frac{\partial}{\partial t}\left( \alpha_{f} \rho_{f} \right) + \frac{1}{A} \frac{\partial}{\partial x}\left( \alpha_{f} \rho_{f} v_{f} A \right) = \Gamma_{f}. \tag{35} \]
(2) Momentum conservation equations:

\[ \alpha_{g}\rho_{g}A \frac{\partial v_{g}}{\partial t} + \frac{1}{2}\,\alpha_{g}\rho_{g}A \frac{\partial v_{g}^{2}}{\partial x} = -\alpha_{g}A \frac{\partial P}{\partial x} + \alpha_{g}\rho_{g}B_{x}A - \left( \alpha_{g}\rho_{g}A \right) \mathrm{FWG}\left( v_{g} \right) + \Gamma_{g}A\left( v_{gI} - v_{g} \right) - \left( \alpha_{g}\rho_{g}A \right) \mathrm{FIG}\left( v_{g} - v_{f} \right) - C\,\alpha_{g}\alpha_{f}\rho_{m}A \left[ \frac{\partial \left( v_{g} - v_{f} \right)}{\partial t} + v_{f}\,\frac{\partial v_{g}}{\partial x} - v_{g}\,\frac{\partial v_{f}}{\partial x} \right], \]

\[ \alpha_{f}\rho_{f}A \frac{\partial v_{f}}{\partial t} + \frac{1}{2}\,\alpha_{f}\rho_{f}A \frac{\partial v_{f}^{2}}{\partial x} = -\alpha_{f}A \frac{\partial P}{\partial x} + \alpha_{f}\rho_{f}B_{x}A - \left( \alpha_{f}\rho_{f}A \right) \mathrm{FWF}\left( v_{f} \right) - \Gamma_{g}A\left( v_{fI} - v_{f} \right) - \left( \alpha_{f}\rho_{f}A \right) \mathrm{FIF}\left( v_{f} - v_{g} \right) - C\,\alpha_{f}\alpha_{g}\rho_{m}A \left[ \frac{\partial \left( v_{f} - v_{g} \right)}{\partial t} + v_{g}\,\frac{\partial v_{f}}{\partial x} - v_{f}\,\frac{\partial v_{g}}{\partial x} \right]. \tag{36} \]

(3) Energy conservation equations:

\[ \frac{\partial}{\partial t}\left( \alpha_{g}\rho_{g}U_{g} \right) + \frac{1}{A}\frac{\partial}{\partial x}\left( \alpha_{g}\rho_{g}U_{g}v_{g}A \right) = -P\,\frac{\partial \alpha_{g}}{\partial t} - \frac{P}{A}\frac{\partial}{\partial x}\left( \alpha_{g}v_{g}A \right) + Q_{wg} + Q_{ig} + \Gamma_{ig}h_{g}^{*} + \Gamma_{w}h_{g}' + \mathrm{DISS}_{g}, \tag{37} \]

\[ \frac{\partial}{\partial t}\left( \alpha_{f}\rho_{f}U_{f} \right) + \frac{1}{A}\frac{\partial}{\partial x}\left( \alpha_{f}\rho_{f}U_{f}v_{f}A \right) = -P\,\frac{\partial \alpha_{f}}{\partial t} - \frac{P}{A}\frac{\partial}{\partial x}\left( \alpha_{f}v_{f}A \right) + Q_{wf} + Q_{if} + \mathrm{DISS}_{f}. \]

(4) Noncondensables in the gas phase:

\[ \frac{\partial}{\partial t}\left( \alpha_{g}\rho_{g}X_{n} \right) + \frac{1}{A}\frac{\partial}{\partial x}\left( \alpha_{g}\rho_{g}X_{n}v_{g}A \right) = 0. \tag{38} \]

(5) Boron concentration in the liquid field:

\[ \frac{\partial \rho_{b}}{\partial t} + \frac{1}{A}\frac{\partial \left( \rho_{b}v_{f}A \right)}{\partial x} = 0. \tag{39} \]

(6) The point-reactor kinetics equations are

\[ \frac{dn}{dt} = \frac{\rho - \beta}{\Lambda}\, n + \sum_{i=1}^{6} \lambda_{i}\, C_{i} + S, \qquad \frac{dC_{i}}{dt} = \frac{\beta_{i}}{\Lambda}\, n - \lambda_{i}\, C_{i}. \tag{40} \]

The RELAP5 thermal-hydraulic model solves eight field equations for eight primary dependent variables. The primary dependent variables are the pressure \(P\), the phasic specific internal energies \(U_{g}, U_{f}\), the vapor volume fraction (void fraction) \(\alpha_{g}\), the phasic velocities \(v_{g}, v_{f}\), the noncondensable quality \(X_{n}\), and the boron density \(\rho_{b}\). The independent variables are time \(t\) and distance \(x\). The secondary dependent variables used in the equations are the phasic densities \(\rho_{g}, \rho_{f}\), the phasic temperatures \(T_{g}, T_{f}\), the saturation temperature \(T_{s}\), and the noncondensable mass fraction in the noncondensable gas phase [38]. The point-reactor kinetics model in RELAP5 computes both the immediate fission power and the power from decay of fission fragments.

5. The Control Strategy and FOPID Controller Design

A new control law is demanded in nuclear devices [39]. When the nuclear power plant operates stably under different conditions, the variation of the main parameters on the primary and secondary sides is indeterminate. The operation strategy keeps the coolant average temperature \(T_{\mathrm{av}}\) constant with load changing. The WKNN fuzzy fractional-order PID system achieves coolant average temperature invariability by controlling the reactor power system (Figure 4).

The five parameters, including three coefficients and two exponents, are acquired in two forms, fuzzy logic and the wavelet kernel neural network. The idea of this combination is born from the fact that the power level of a nuclear reactor tends to alter dramatically in a short while and then maintain slight variation within a certain period. Due to the boundedness of the WKNN excitation function, the output range of the FOPID coefficients could not be wide enough. The initial parameters of the FOPID are set by the WKNN and fuzzy logic. Moreover, the exponents of the FOPID have no need to vary with small changes. As a consequence, the fuzzy logic rule is executed to tune the five parameters of the FOPID when the reactor power changes remarkably, and the WKNN keeps updating and tuning the coefficients based on the fuzzy logic all the while. The process is illustrated in Figure 5.

Based on theoretical analysis and design experience, a fuzzy enhanced fractional-order PID controller is adopted to achieve the reactor power changing with the steam flow and to maintain the coolant average temperature \(T_{\mathrm{av}}\) constant. The power control system regards the average temperature deviation and the steam flow as the main control signal and the assistant control signal, respectively. Since the thermal output of the reactor is proportional to the steam output of the steam generator, the consideration of steam flow variation can calculate the power need immediately, which impels the reactor power to follow the secondary-loop load changing and improves the mobility of the equipment. When the load alters, the reactor power can be determined by relation (41), which expresses the required relative power \(n/n_{0}\) as one plus correction terms driven by the temperature deviation signal and the steam flow signal.
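The five-parameter FOPID law \(u = K_{p} e + K_{i} D^{-\lambda} e + K_{d} D^{\mu} e\) can be approximated numerically with the Grünwald-Letnikov definition of the fractional operator. The sketch below is a generic discretization; the gains and orders are placeholders, not the tuned values produced by the WKNN or the fuzzy rules in this paper.

```python
import numpy as np

def gl_weights(alpha, n):
    """Grunwald-Letnikov binomial weights c_j = (-1)^j * C(alpha, j),
    computed by the standard recursion c_j = c_{j-1} * (1 - (alpha+1)/j)."""
    c = np.empty(n)
    c[0] = 1.0
    for j in range(1, n):
        c[j] = c[j - 1] * (1.0 - (alpha + 1.0) / j)
    return c

def frac_deriv(e_hist, alpha, h):
    """D^alpha of the error at the latest sample (alpha < 0: fractional
    integral), using the whole stored history with step h."""
    c = gl_weights(alpha, len(e_hist))
    return h ** (-alpha) * np.dot(c, e_hist[::-1])

def fopid(e_hist, h, Kp=1.0, Ki=0.5, Kd=0.2, lam=0.9, mu=0.8):
    """FOPID control action u = Kp*e + Ki*D^(-lam) e + Kd*D^(mu) e."""
    return (Kp * e_hist[-1]
            + Ki * frac_deriv(e_hist, -lam, h)   # fractional integral
            + Kd * frac_deriv(e_hist, mu, h))    # fractional derivative

h = 0.01
e_hist = np.ones(100)                 # constant unit error over 1 s
print(fopid(e_hist, h))
```

A quick sanity check: with \(\lambda = 1\) the fractional integral of a constant unit error over one second reduces to the ordinary integral (1.0), and with \(\mu = 1\) the derivative of a constant reduces to zero.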
[Figure 4: The structure of the WKNN fuzzy FOPID power control system: the temperature comparer, power-require counter, signal processor, actuator, nuclear reactor, and OTSG form the control loop; the fuzzy logic rules supply KPpF, KPiF, KPdF and the WKNN (with expertise improvement) supplies KPpW, KPiW, KPdW to the FOPID.]

[Figure 5: The switching between the fuzzy logic rules (executed when the error or its derivative exceeds its threshold) and the WKNN-based tuning of the FOPID parameters.]
regression quality of the WKNN is better than that of the WNN, and the initial condition brings no benefit to the WKNN. From the argument above, it indicates preliminarily that the WKNN algorithm reduces the amplitude of error variation and has better performance than the WNN. However, it remains to be verified whether the WKNN is more appropriate for FOPID controller tuning in time delay systems. Then, in this paper, the algorithm proposed is designed to examine the effectiveness of tuning the FOPID controller in time delay control. To illustrate clearly, a known dynamic time delay control system is proposed as follows:

\[ G(s) = \frac{4\, e^{-3s}}{5s + 1}, \tag{46} \]

and then the performance of the unit step response is analyzed. The control process is displayed in Figures 10 and 11.
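The unit step response of the delayed first-order plant (46) can be simulated directly. The forward-Euler sketch below with a delay buffer is an illustrative reconstruction of the open-loop plant only, not the paper's RELAP5-based experiment or the closed-loop controller comparison.

```python
import numpy as np

def step_response(K=4.0, tau=5.0, delay=3.0, h=0.01, t_end=30.0):
    """Open-loop unit step response of G(s) = K e^{-delay*s} / (tau*s + 1),
    integrated by forward Euler: tau * dy/dt + y = K * u(t - delay)."""
    n = int(round(t_end / h))
    d = int(round(delay / h))          # dead time as a sample count
    u = np.ones(n)                     # unit step input
    y = np.zeros(n)
    for i in range(1, n):
        u_delayed = u[i - 1 - d] if i - 1 - d >= 0 else 0.0
        y[i] = y[i - 1] + h * (K * u_delayed - y[i - 1]) / tau
    return y

y = step_response()
print(y[300], y[-1])   # output stays at 0 until t = 3 s, then rises toward 4
```

The 3 s of pure dead time is what makes the fixed-gain PID struggle here and motivates the adaptive FOPID tuning examined in Figures 10 and 11.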
[Figure 6: The comparison of output (norm, WKNN, WNN) over 0-30 s.]

[Figure 8: The comparison of output (norm, WKNN, WNN) over 0-600 s.]

[Error-comparison plots for norm, WKNN, and WNN accompany Figures 6 and 8.]

[Figure 10: The comparison of controller output over 0-200 s. Figure 11: The comparison of controller error over 0-200 s.]

[Transient panels over 100-600 s: (a) feedwater and steam mass flow rate (kg/s); (b) reactor power; coolant temperature (K); pressure (MPa).]
Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors are grateful for the funding by grants from the National Natural Science Foundation of China (Project nos. 51379049 and 51109045), the Fundamental Research Funds for the Central Universities (Project nos. HEUCFX41302 and HEUCF110419), the Scientific Research Foundation for Returned Overseas Chinese Scholars, Heilongjiang Province (no. LC2013C21), and the Young College Academic Backbone of Heilongjiang Province (no. 1254G018) in support of the present research. They also thank the anonymous reviewers for improving the quality of this paper.

References

[1] M. V. D. Oliveira and J. C. S. D. Almeida, "Application of artificial intelligence techniques in modeling and control of a nuclear power plant pressurizer system," Progress in Nuclear Energy, vol. 63, pp. 71-85, 2013.
[2] C. Liu, J. Peng, F. Zhao, and C. Li, "Design and optimization of fuzzy-PID controller for the nuclear reactor power control," Nuclear Engineering and Design, vol. 239, no. 11, pp. 2311-2316, 2009.
[3] W. Zhang, Y. Sun, and X. Xu, "Two degree-of-freedom Smith predictor for processes with time delay," Automatica, vol. 34, no. 10, pp. 1279-1282, 1998.
[4] S. Niculescu and A. M. Annaswamy, "An adaptive Smith-controller for time-delay systems with relative degree 2," Systems and Control Letters, vol. 49, no. 5, pp. 347-358, 2003.
[5] W. D. Zhang, Y. X. Sun, and X. M. Xu, "Dahlin controller design for multivariable time-delay systems," Acta Automatica Sinica, vol. 24, no. 1, pp. 64-72, 1998.
[6] L. Wu, Z. Feng, and J. Lam, "Stability and synchronization of discrete-time neural networks with switching parameters and time-varying delays," IEEE Transactions on Neural Networks and Learning Systems, vol. 24, no. 12, pp. 1957-1972, 2013.
[7] L. Wu, Z. Feng, and W. X. Zheng, "Exponential stability analysis for delayed neural networks with switching parameters: average dwell time approach," IEEE Transactions on Neural Networks, vol. 21, no. 9, pp. 1396-1407, 2010.
[8] D. Yang and H. Zhang, "Robust networked control for uncertain fuzzy systems with time-delay," Acta Automatica Sinica, vol. 33, no. 7, pp. 726-730, 2007.
[9] Y. Shi and B. Yu, "Robust mixed H2/H-infinity control of networked control systems with random time delays in both forward and backward communication links," Automatica, vol. 47, no. 4, pp. 754-760, 2011.
[10] L. Wu and W. X. Zheng, "Passivity-based sliding mode control of uncertain singular time-delay systems," Automatica, vol. 45, no. 9, pp. 2120-2127, 2009.
[11] L. Wu, X. Su, and P. Shi, "Sliding mode control with bounded L2 gain performance of Markovian jump singular time-delay systems," Automatica, vol. 48, no. 8, pp. 1929-1933, 2012.
[12] Z. Qin, S. Zhong, and J. Sun, "Sliding mode control experiments of uncertain dynamical systems with time delay," Communications in Nonlinear Science and Numerical Simulation, vol. 18, no. 12, pp. 3558-3566, 2013.
[13] L. M. Eriksson and M. Johansson, "PID controller tuning rules for varying time-delay systems," in Proceedings of the American Control Conference, pp. 619-625, IEEE, July 2007.
[14] I. Pan, S. Das, and A. Gupta, "Tuning of an optimal fuzzy PID controller with stochastic algorithms for networked control systems with random time delay," ISA Transactions, vol. 50, no. 1, pp. 28-36, 2011.
[15] H. Tran, Z. Guan, X. Dang, X. Cheng, and F. Yuan, "A normalized PID controller in networked control systems with varying time delays," ISA Transactions, vol. 52, no. 5, pp. 592-599, 2013.
[16] C. Hwang and Y. Cheng, "A numerical algorithm for stability testing of fractional delay systems," Automatica, vol. 42, no. 5, pp. 825-831, 2006.
[17] S. E. Hamamci, "An algorithm for stabilization of fractional-order time delay systems using fractional-order PID controllers," IEEE Transactions on Automatic Control, vol. 52, no. 10, pp. 1964-1968, 2007.
[18] S. Das, I. Pan, S. Das, and A. Gupta, "A novel fractional order fuzzy PID controller and its optimal time domain tuning based on integral performance indices," Engineering Applications of Artificial Intelligence, vol. 25, no. 2, pp. 430-442, 2012.
[19] H. Ozbay, C. Bonnet, and A. R. Fioravanti, "PID controller design for fractional-order systems with time delays," Systems and Control Letters, vol. 61, no. 1, pp. 18-23, 2012.
[20] S. Haykin, Adaptive Filter Theory, Prentice-Hall, Upper Saddle River, NJ, USA, 4th edition, 2002.
[21] J. Kivinen, A. J. Smola, and R. Williamson, "Online learning with kernels," IEEE Transactions on Signal Processing, vol. 52, no. 8, pp. 2165-2176, 2004.
[22] W. Liu, J. C. Principe, and S. Haykin, Kernel Adaptive Filtering: A Comprehensive Introduction, John Wiley & Sons, 1st edition, 2010.
[23] W. D. Parreira, J. C. M. Bermudez, and J. Tourneret, "Stochastic behavior analysis of the Gaussian kernel least-mean-square algorithm," IEEE Transactions on Signal Processing, vol. 60, no. 5, pp. 2208-2222, 2012.
[24] H. Fan and Q. Song, "A linear recurrent kernel online learning algorithm with sparse updates," Neural Networks, vol. 50, pp. 142-153, 2014.
[25] W. Gao, J. Chen, C. Richard, J. Huang, and R. Flamary, "Kernel LMS algorithm with forward-backward splitting for dictionary learning," in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '13), pp. 5735-5739, Vancouver, BC, Canada, 2013.
[26] J. Chen, F. Ma, and J. Chen, "A new scheme to learn a kernel in regularization networks," Neurocomputing, vol. 74, no. 12-13, pp. 2222-2227, 2011.
[27] F. Wu and Y. Zhao, "Novel reduced support vector machine on Morlet wavelet kernel function," Control and Decision, vol. 21, no. 8, pp. 848-856, 2006.
[28] W. Liu, P. P. Pokharel, and J. C. Principe, "The kernel least-mean-square algorithm," IEEE Transactions on Signal Processing, vol. 56, no. 2, pp. 543-554, 2008.
[29] P. P. Pokharel, W. Liu, and J. C. Principe, "Kernel LMS," in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '07), pp. III-1421-III-1424, April 2007.
[30] Z. Khandan and H. S. Yazdi, "A novel neuron in kernel domain," ISRN Signal Processing, vol. 2013, Article ID 748914, 11 pages, 2013.
[31] N. Aronszajn, "Theory of reproducing kernels," Transactions of the American Mathematical Society, vol. 68, pp. 337-404, 1950.
[32] Q. Zhang, "Using wavelet network in nonparametric estimation," IEEE Transactions on Neural Networks, vol. 8, no. 2, pp. 227-236, 1997.
[33] Q. Zhang and A. Benveniste, "Wavelet networks," IEEE Transactions on Neural Networks, vol. 3, no. 6, pp. 889-898, 1992.
[34] S. Haykin, Neural Networks and Learning Machines, Prentice Hall, Upper Saddle River, NJ, USA, 3rd edition, 2008.
[35] B. Scholkopf, C. J. C. Burges, and A. J. Smola, Advances in Kernel Methods: Support Vector Learning, MIT Press, 1999.
[36] L. Zhang, W. Zhou, and L. Jiao, "Wavelet support vector machine," IEEE Transactions on Systems, Man and Cybernetics, Part B: Cybernetics, vol. 34, pp. 34-39, 2004.
[37] R. H. Kwong and E. W. Johnston, "A variable step size LMS algorithm," IEEE Transactions on Signal Processing, vol. 40, no. 7, pp. 1633-1642, 1992.
[38] S. M. Sloan, R. R. Schultz, and G. E. Wilson, RELAP5/MOD3 Code Manual, United States Nuclear Regulatory Commission, Idaho National Engineering Laboratory, 1998.
[39] G. Xia, J. Su, and W. Zhang, "Multivariable integrated model predictive control of nuclear power plant," in Proceedings of the IEEE 2nd International Conference on Future Generation Communication and Networking Symposia (FGCNS '08), vol. 4, pp. 8-11, 2008.
[40] S. Das, I. Pan, and S. Das, "Fractional order fuzzy control of nuclear reactor power with thermal-hydraulic effects in the presence of random network induced delay and sensor noise having long range dependence," Energy Conversion and Management, vol. 68, pp. 200-218, 2013.