
Hindawi Publishing Corporation

Mathematical Problems in Engineering


Volume 2014, Article ID 312541, 13 pages
http://dx.doi.org/10.1155/2014/312541

Research Article
A Novel Fractional-Order PID Controller for
Integrated Pressurized Water Reactor Based on
Wavelet Kernel Neural Network Algorithm

Yu-xin Zhao,1 Xue Du,1 and Geng-lei Xia2


1 College of Automation, Harbin Engineering University, Harbin, Heilongjiang 150001, China
2 National Defense Key Subject Laboratory of Nuclear Safety and Simulation Technology, Harbin Engineering University, Harbin, Heilongjiang 150001, China

Correspondence should be addressed to Xue Du; duxue2012cc@163.com

Received 9 June 2014; Accepted 15 July 2014; Published 12 August 2014

Academic Editor: Ligang Wu

Copyright 2014 Yu-xin Zhao et al. This is an open access article distributed under the Creative Commons Attribution License,
which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

This paper presents a novel wavelet kernel neural network (WKNN) built on a wavelet kernel function. It is applicable to online learning with adaptive parameters and is applied to parameter tuning of a fractional-order PID (FOPID) controller, which can handle the time delay problem of complex control systems. Combining the wavelet function and the kernel function, the wavelet kernel function is adopted and its validity for neural networks is verified. Compared with the conventional wavelet neural network, the most innovative character of the WKNN is its rapid convergence and high precision in the parameter updating process. Furthermore, the integrated pressurized water reactor (IPWR) system is established by RELAP5, and a novel control strategy combining the WKNN and fuzzy logic rules is proposed to shorten the control time and to utilize experiential knowledge sufficiently. Finally, experiment results verify that the proposed control strategy and controller are practicable and reliable in an actual complicated system.

1. Introduction

With the tightening supplies of energy and the deterioration of environmental pollution around the globe, the reliability, cleanliness, and security of nuclear technology help it gain increasing traction. The majority of marine nuclear power plants adopt an integrated design. Since vessels require strong mobility and their operational condition changes dramatically, the nuclear reactor and system devices need to accommodate themselves to radical load changes. The nuclear reactor power control dominates the whole system of the nuclear power plant and decides the security and reliability of the global device. Hence, the operation control strategy and controller design should accord with its particularities, which is convenient for power controlling and adjusting and ensures steady operation. The reactor is a time delay system that is similar to other industrial systems. The time delay effect influences the system performance, which possibly brings oscillation and even destroys the stability [1, 2]. Thus, it is practically significant to research the time delay control of nuclear reactors.

In numerous problems encountered in modern control, the time delay issue is of central importance and also a difficult point to elaborate. There is an increasing interest in the time delay problem of industrial systems from scientists and engineers. Classical control theory utilizes plenty of Smith control, Dahlin control, and so on [3-5]. During recent years, some novel control algorithms have also obtained achievements for time delay systems [6, 7]; for instance, Yang and Zhang [8] studied a robust networked control method for T-S fuzzy systems with uncertainty and time delay. Besides, Shi and Yu proposed the $H_2/H_\infty$ control for the two-mode-dependent robust control synthesis of networked control systems with random delay [9]. Moreover, a class of uncertain singular time delay systems is controlled by sliding mode [10, 11], and an optimal sliding surface is designed for sliding mode control in time delay uncertain systems [12]. The industrial system, especially the large-scale complex industrial system, possesses the feature of process delay. The control of these systems depends on both the current state and the past state, which makes the control issue difficult.

As a mature controller with strong availability, PID controllers have been applied in industrial processes for sophisticated dynamic systems of large scale. The primary reason is their relatively simple structure, which can be easily comprehended and implemented [13-15]. In recent years, considerable interest in time delay has been aroused in the fractional-order PID controller, and many researchers have presented beneficial methods to design stabilizing fractional-order PID (FOPID) controllers for varying time delay systems [16-19]. This paper adopts a neural network (NN) to tune the FOPID online to control the nuclear reactor power, which keeps the coolant average temperature constant under load changes. The NN is an effective method for PID parameter tuning, and the wavelet neural network (WNN) improves the efficiency of the NN by transforming the nonlinear excitation function to the wavelet function. However, the WNN still follows the gradient descent method to adjust weights, which inevitably causes problems of deficient generalization performance, ill-posedness, and overfitting. Motivated by this drawback, one should take advantage of the algorithm core, LMS, to improve the WNN. Both LMS and KLMS are derived from the mean squared error cost function [20-22]. The LMS algorithm is famous for being easy to comprehend and implement but has no ability to be utilized in the nonlinear domain. With the wide spread of kernel theory, the novel LMS algorithm with the kernel trick has drawn considerable attention in learning machines including neural networks [23-26]. Recently, kernel functions (such as the Gaussian kernel) are applied mostly in support vector machines (SVM), which project linearly nonseparable data into another space so that they become separable in the new space. The RBF network is a sort of NN which uses the Gaussian kernel frequently. But this kernel cannot approach the target appropriately when it encounters more complex signals in the calculation process. Some literature indicates that the Gaussian kernel in the RBF network and the SVM cannot ideally generate a set of complete bases in the subspace by translation [27]. Some potential wavelet functions can be transformed into kernel functions, which are competitive to produce a set of complete bases in the $L^2(R)$ space (square integrable space) by stretching and translating. Meanwhile, the RBF network needs to identify the model of the object and obtain the Jacobian information when it tunes the network weights, and it is hard to gain the Jacobian information of a complex industrial system. But the conventional WNN can tune the PID controller without such information and is more appropriate to be the parent body for improvement. Thus, this paper proposes a novel neural network based on the wavelet kernel function and combines it with fuzzy logic rules to achieve the online tuning of the FOPID. Experiment results validate that the control strategy and controller design improve the accuracy and speed, which can control the nuclear reactor power operation and enhance the overall efficiency.

The remaining content is composed of six sections. Section 2 provides a short introduction to the LMS algorithm and the kernel LMS algorithm. Section 3 presents the related theory of the wavelet kernel function and proposes the method of the wavelet kernel neural network, including the discussion of the adaptive parameters. The model of the nuclear reactor is structured by the RELAP5 program in Section 4. Then the control strategy and the fractional-order PID controller are designed with fuzzy logic and the wavelet kernel neural network in Section 5. In Section 6, experiments illustrate the effectiveness of the wavelet kernel neural network and the fractional-order PID controller. As some controllers are faced with the problem of losing efficacy in practice, Section 6 also emulates and analyzes the control process on the basis of the reactor model. Finally, the conclusion of this paper is described in Section 7.

2. The Description of the Kernel LMS Algorithm

In this section, the kernel LMS algorithm is introduced as a short review for the wavelet kernel neural network. Kernel LMS is an approach to perform the LMS algorithm in the kernel feature space. In the LMS algorithm, the hypothesis space $H_1$ is composed of all the linear operators on $U$ (mapping $f: U \to R$) [28], which is embedded in a Hilbert space, and the linear operator represents the standard inner product. The LMS algorithm minimizes the total instantaneous error of the system output:

$$J(n) = \frac{1}{2} e^2(n), \qquad (1)$$

where $e(n) = \mathrm{norm}(n) - \mathbf{x}^T(n)\mathbf{w}(n)$, $\mathrm{norm}$ is the desired value, $\mathbf{w}(n)$ is the weight vector, $\mathbf{x}(n)$ is the input vector, and $n$ is the instant. Using the straightforward stochastic gradient descent update rule, the weight vector $\mathbf{w}$ is approximated based on the input vector $\mathbf{x}$:

$$\mathbf{w}(n+1) = \mathbf{w}(n) + \eta\, e(n)\,\mathbf{x}(n), \qquad (2)$$

where $\eta$ is the step-size. By training step by step, the weight $\mathbf{w}$ is updated as a linear combination of the previous and present input vectors.

This LMS algorithm learns linear patterns satisfactorily, while it cannot achieve the same sound effect on nonlinear patterns. Motivated by this drawback, Pokharel and other scholars extended the LMS algorithm to an RKHS by the kernel trick [28, 29]. Kernel methods map the input into a high dimensional space (HDS) [30], and any kernel function based on Mercer's theorem has the following mapping [31]:

$$\kappa(\mathbf{x}, \mathbf{x}') = \langle \varphi(\mathbf{x}), \varphi(\mathbf{x}') \rangle = \varphi(\mathbf{x})^T \varphi(\mathbf{x}'). \qquad (3)$$

When the input in (2) consists of $\{\varphi(\mathbf{x}(1)), \varphi(\mathbf{x}(2)), \ldots, \varphi(\mathbf{x}(n))\}$ but not $\{\mathbf{x}(1), \mathbf{x}(2), \ldots, \mathbf{x}(n)\}$, where $\varphi(\mathbf{x}(n))$ is the infinite feature vector transformed from the input vector $\mathbf{x}(n)$, the LMS algorithm is no longer applicable, and the total instantaneous error of the system output is as follows:

$$J = \frac{1}{2}\left(\mathrm{norm}(n) - \varphi(\mathbf{x}(n))^T \mathbf{w}(n)\right)^2, \qquad (4)$$

where $\varphi(\mathbf{x}(n))$ is the matrix of input vectors, $\mathrm{norm}$ is the desired value vector, and $\mathbf{w}(n)$ is the weight vector in the RKHS.

Then build the hypothesis that $\mathbf{w}(0) = 0$ for convenience, and (2) can be converted to

$$\mathbf{w}(n) = \eta \sum_{i=0}^{n-1} e(i)\,\varphi(\mathbf{x}(i)), \qquad (5)$$

and the final output in the step-$n$ updating of the learning algorithm is

$$\mathbf{z}(n) = \eta \sum_{i=0}^{n-1} e(i)\left\langle \varphi(\mathbf{x}(i)), \varphi(\mathbf{x}(n))\right\rangle = \eta\,\mathbf{e}(n-1)\,\mathbf{k}, \qquad (6)$$

with $\mathbf{k} = [\kappa(\mathbf{x}(1), \mathbf{x}(n)), \kappa(\mathbf{x}(2), \mathbf{x}(n)), \ldots, \kappa(\mathbf{x}(n-1), \mathbf{x}(n))]$.

From (4)-(6), the KLMS algorithm remedies the defect of the LMS algorithm, namely its relatively small dimensional input space, and improves well-posedness by learning in the infinite dimensional space with finite training data [28]. As an application of adaptive filter theory, the neural network can take advantage of the KLMS algorithm, which endows the learning of the neural network with more efficiency and reliability.
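To make the recursion in (5)-(6) concrete, the following is a minimal NumPy sketch of a generic kernel LMS filter in the spirit of [28]; it is an illustration only, not the authors' code, and the Gaussian kernel, the step size, and all variable names are assumptions chosen for the example.

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    """Example Mercer kernel; any admissible kernel could be used here."""
    return np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2))

class KLMS:
    """Kernel LMS: w(n) is never formed explicitly; the output is the
    expansion z(n) = eta * sum_i e(i) * kappa(x(i), x(n)) from (5)-(6)."""
    def __init__(self, eta=0.5, kernel=gaussian_kernel):
        self.eta = eta
        self.kernel = kernel
        self.centers = []   # stored inputs x(i)
        self.errors = []    # stored prediction errors e(i)

    def predict(self, x):
        return self.eta * sum(e * self.kernel(c, x)
                              for c, e in zip(self.centers, self.errors))

    def update(self, x, d):
        e = d - self.predict(x)      # e(n) = norm(n) - z(n)
        self.centers.append(np.asarray(x, dtype=float))
        self.errors.append(e)
        return e

# toy usage: learn a nonlinear mapping online
rng = np.random.default_rng(0)
model = KLMS()
for _ in range(200):
    x = rng.uniform(-1, 1, size=2)
    d = np.sin(3 * x[0]) + x[1] ** 2     # desired ("norm") output
    model.update(x, d)
print(model.predict(np.array([0.3, -0.2])))
```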

3. Wavelet Kernel Neural Network

Following the principle above, this paper acquires the wavelet kernel function by Mercer's theorem and proposes a more competent method to update the parameters on the basis of the conventional WNN. With the kernel LMS above, the wavelet kernel neural network is proposed in this section on the basis of the wavelet kernel function and the conventional wavelet neural network. First, the wavelet kernel function is constructed, which is an admissible neural network kernel. Second, the wavelet kernel neural network is proposed, including the neuron explanation and the weight updating. Then the modulations of the adaptive variable step-size and the wavelet kernel width are discussed in Sections 3.3 and 3.4.

3.1. The Wavelet Kernel Function. Wavelet analysis is executed to express or approximate functions or signals by a family of functions generated by dilations and translations of the mother wavelet $\psi$, $\psi(x) \in L^2(R)$. Hypothesize that $L^1(R)$ is the linear integrable space and $L^2(R)$ is the square integrable space. When the wavelet function $\psi(x) \in L^1(R) \cap L^2(R)$ (with $\hat{\psi}(0) = 0$), the wavelet function group can be defined as

$$\psi_{a,b}(x) = |a|^{-1/2}\,\psi\!\left(\frac{x-b}{a}\right), \qquad (7)$$

where $x, a, b \in R$, $a$ is a dilation factor, and $b$ is a translation factor. And $\psi(x)$ satisfies the condition [32, 33]

$$C_\psi = \int_R \frac{|\hat{\psi}(\omega)|^2}{|\omega|}\, d\omega < \infty, \qquad (8)$$

where $\hat{\psi}(\omega)$ is the Fourier transform of $\psi(x)$. For a common multidimensional wavelet function, if the wavelet function of one dimension is $\psi(x)$ satisfying condition (8), the product of one-dimensional (1D) wavelet functions can be described by tensor theory [29, 31]:

$$\psi_d(\mathbf{x}) = \prod_{i=1}^{d} \psi(x_i), \qquad (9)$$

where $\mathbf{x} = (x_1, \ldots, x_d) \in R^d$.

The conception above is the foundation of wavelet analysis and theory. Note that the wavelet kernel function for the neural network is a special kernel function and comes from wavelet theory; it should be the horizontal (translation-invariant) function $\kappa(\mathbf{x}, \mathbf{x}') = \kappa(\mathbf{x} - \mathbf{x}')$ and satisfy the condition of Mercer's theorem [34], which can be summarized as follows.

Lemma 1. Let $\kappa(\mathbf{x}, \mathbf{x}')$ be a continuous symmetric kernel, with $\mathbf{x}$ bounded and likewise $\mathbf{x}'$. Then $\kappa(\mathbf{x}, \mathbf{x}')$ is a valid kernel function for the wavelet neural network if and only if, for any nonzero function $g(\xi)$ with $\int g^2(\xi)\, d\xi < \infty$, the following condition is satisfied:

$$\iint \kappa(\mathbf{x}, \mathbf{x}')\, g(\mathbf{x})\, g(\mathbf{x}')\, d\mathbf{x}\, d\mathbf{x}' \ge 0. \qquad (10)$$

With regard to the horizontal floating function, there is an analogous theory for the support vector machine (SVM) [35]. Allowing for the affiliation between the SVM and the NN [34], an approved horizontal floating kernel function of the SVM is effective in the NN. When $\psi(x)$ is a mother wavelet, $a$ is a dilation factor, and $b$ is a translation factor, $a, b \in R$, the horizontal floating translation-invariant wavelet kernels can be expressed as [36]

$$\kappa(\mathbf{x}, \mathbf{x}') = \prod_{i=1}^{d} \psi\!\left(\frac{x_i - x_i'}{a}\right). \qquad (11)$$

Since the number of wavelet kernel functions which satisfy the horizontal floating condition [37] is not large up to now, the neural network in this paper adopts an existing wavelet kernel function, the Morlet wavelet function:

$$\psi(x) = \cos(\omega_0 x)\exp\!\left(-\frac{x^2}{2}\right). \qquad (12)$$

Then the Morlet wavelet kernel function is defined as

$$\kappa(\mathbf{x}, \mathbf{x}') = \prod_{i=1}^{d} \cos\!\left(\omega_0 \frac{x_i - x_i'}{a}\right)\exp\!\left(-\frac{(x_i - x_i')^2}{2a^2}\right). \qquad (13)$$

Figure 1 displays the Morlet wavelet kernel function in the four-dimensional space. As a continuously differentiable function, the derived function of the Morlet kernel is illustrated in Figure 2.
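As a concrete illustration of (12) and (13), the short sketch below evaluates the Morlet mother wavelet and the corresponding translation-invariant wavelet kernel for two input vectors; the values $\omega_0 = 5$ and $a = 0.8$ are illustrative choices, not parameters prescribed by the paper.

```python
import numpy as np

def morlet_wavelet(x, omega0=5.0):
    """Mother wavelet of (12): psi(x) = cos(omega0*x) * exp(-x^2/2)."""
    return np.cos(omega0 * x) * np.exp(-0.5 * x ** 2)

def morlet_kernel(x, x_prime, a=1.0, omega0=5.0):
    """Translation-invariant wavelet kernel of (13):
    product over dimensions of psi((x_i - x'_i)/a)."""
    u = (np.asarray(x, dtype=float) - np.asarray(x_prime, dtype=float)) / a
    return np.prod(morlet_wavelet(u, omega0))

# example: kernel value between two four-dimensional inputs
x1 = np.array([0.2, -0.1, 0.4, 0.0])
x2 = np.array([0.1,  0.3, 0.2, -0.2])
print(morlet_kernel(x1, x2, a=0.8))
```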

Figure 1: Morlet kernel function.

Figure 2: Morlet kernel derived function.

3.2. The Wavelet Kernel-LMS Neural Network. The neural network on the basis of the wavelet kernel function focuses on the performance of a multilayer perceptron trained with the backpropagation algorithm, by adding the KLMS theory to the wavelet neural network. This paper utilizes the fully connected multilayer perceptron (MLP) structure and chooses three layers for ease of understanding. The network consists of the input layer $I$, the hidden layer $J$, and the output layer $K$. The activation functions in the hidden layer and the output layer are the wavelet kernel function $\kappa_a(\cdot)$ and the sigmoid function $f(\cdot)$, respectively. The weights between the input layer and the hidden layer are denoted $\mathbf{w}_{IJ}$, and those between the hidden layer and the output layer are denoted $\mathbf{w}_{KJ}$.

In the WKNN, hypothesize that the input set is $\{\mathbf{x}_I(1), \mathbf{x}_I(2), \ldots, \mathbf{x}_I(n)\}$ and that the practical output set and the norm (desired) output set are $\{\mathbf{z}_K(1), \mathbf{z}_K(2), \ldots, \mathbf{z}_K(n)\}$ and $\{\mathrm{norm}(1), \mathrm{norm}(2), \ldots, \mathrm{norm}(n)\}$, respectively; then the cost function is defined as $J(n)$:

$$J(n) = \frac{1}{2}\sum e^2(n), \qquad (14)$$

where $e(n) = \mathrm{norm}(n) - \mathbf{z}_K(n)$.

As for the hidden layer $J$, the signal of a single neuron is provided by the input layer, and the induced local field $\mathbf{v}_J$ and the output of the hidden layer $\mathbf{x}_J$ can be displayed as

$$\mathbf{v}_J(n) = \sum_{i=1}^{I} w_{IJ}^{i}(n)\, x_I^{i}(n) = \mathbf{x}_I(n)^T \mathbf{w}_{IJ}(n), \qquad \mathbf{x}_J(n) = \kappa_a(\mathbf{v}_J(n)), \qquad (15)$$

where $w_{IJ}^{i}(n)$ are the weights, $x_I^{i}(n)$ is the input of the input layer neurons, $\mathbf{w}_{IJ}(n)$ and $\mathbf{x}_I(n)$ are the matrix or vector constituted by $w_{IJ}^{i}(n)$ and $x_I^{i}(n)$, $I$ represents the number of neurons in the input layer, and $\kappa_a(\cdot)$ is the wavelet kernel function in the $J$ layer.

As for the output layer $K$, the signal of a single neuron is provided by the hidden layer, and the induced local field $\mathbf{v}_K$ and the output of the output layer $\mathbf{z}_K$ can be displayed as

$$\mathbf{v}_K(n) = \sum_{j=1}^{J} w_{KJ}^{j}(n)\, x_J^{j}(n) = \mathbf{x}_J(n)^T \mathbf{w}_{KJ}(n), \qquad (16)$$

$$\mathbf{z}_K = f(\mathbf{v}_K(n)) = f\!\left(\kappa_a(\mathbf{v}_J(n))^T \mathbf{w}_{KJ}(n)\right), \qquad (17)$$

where $w_{KJ}^{j}(n)$ are the weights, $x_J^{j}(n)$ is the input of the hidden layer neurons, $\mathbf{w}_{KJ}(n)$ and $\mathbf{x}_J(n)$ are the matrix or vector constituted by $w_{KJ}^{j}(n)$ and $x_J^{j}(n)$, $J$ represents the number of neurons in the hidden layer, and $f(\cdot)$ is the sigmoid function in the $K$ layer.

Similar to the WNN algorithm, this paper performs a gradient search to solve for the optimum weights. For convenience, $\mathbf{w}_{KJ}(0)$ is chosen to be zero. Then, based on (17), the weight matrix $\mathbf{w}_{KJ}(n)$ between the output layer and the hidden layer is given as follows:

$$\mathbf{w}_{KJ}(n) = \sum_{i=0}^{n-1} \eta_K(i)\, \kappa_a(\mathbf{v}_J(i))\, e(i)\, f'\!\left(\kappa_a(\mathbf{v}_J(i))^T \mathbf{w}_{KJ}(i)\right), \qquad (18)$$

where $\eta_K(i)$ is the adaptive variable step-size; the method to find an appropriate step-size will be illustrated in the following subsection. By the kernel trick and (17), $\mathbf{z}_K(n)$ can be determined as

$$\mathbf{z}_K(n) = f\!\left(\sum_{i=0}^{n-1} \eta_K(i)\, \beta(i)\, \kappa\!\left(\mathbf{v}_J(i), \mathbf{v}_J(n)\right)\right) = f\!\left(\boldsymbol{\eta}_K(n)\, \boldsymbol{\beta}(n-1)\, \mathbf{k}_J\right), \qquad (19)$$

where $\beta(i) = e(i)\, f'\!\left(\boldsymbol{\eta}_K(i-1)\, \boldsymbol{\beta}(i-1)\, \mathbf{k}_J\right)$.

Then the weight matrix $\mathbf{w}_{IJ}(n)$ between the hidden layer and the input layer is obtained on the basis of (18) as follows:

$$\mathbf{w}_{IJ}(n) = \sum_{i=0}^{n-1} \eta_J(i)\, \mathbf{x}_I(i)\, \kappa_a'\!\left(\mathbf{x}_I(i)^T \mathbf{w}_{IJ}(i)\right) e(i)\, f'\!\left(\kappa_a(\mathbf{v}_J(i))^T \mathbf{w}_{KJ}(i)\right)\mathbf{w}_{KJ}(i), \qquad (20)$$

and the output of the hidden neuron can be determined as

$$\mathbf{x}_J = \kappa_a\!\left(\mathbf{w}_{IJ}(n), \mathbf{x}_I(n)\right) = \kappa_a\!\left(\sum_{i=0}^{n-1} \eta_J(i)\, \alpha(i)\, \kappa\!\left(\mathbf{x}_I(i), \mathbf{x}_I(n)\right)\right) = \kappa_a\!\left(\boldsymbol{\eta}_J(n)\, \boldsymbol{\alpha}(n-1)\, \mathbf{k}_I\right), \qquad (21)$$

where $\alpha(i) = \kappa_a'\!\left(\boldsymbol{\eta}_J(i)\, \boldsymbol{\alpha}(i-1)\, \mathbf{k}_I\right)\beta(i-1)\,\mathbf{w}_{KJ}(i-1)$.
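The sketch below illustrates the three-layer structure of (15)-(17): a hidden layer with Morlet wavelet activation and a sigmoid output layer, trained here with plain gradient steps and fixed step sizes for brevity. It is only a structural illustration under those assumptions; the paper's full WKNN additionally uses the kernel expansions of (18)-(21) and the adaptive step sizes of Section 3.3.

```python
import numpy as np

def morlet(v, omega0=5.0):
    return np.cos(omega0 * v) * np.exp(-0.5 * v ** 2)

def morlet_deriv(v, omega0=5.0):
    # d/dv [cos(w0*v) * exp(-v^2/2)]
    return (-omega0 * np.sin(omega0 * v) - v * np.cos(omega0 * v)) * np.exp(-0.5 * v ** 2)

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

class WKNNSketch:
    """Three-layer network of Section 3.2: Morlet wavelet activation in the
    hidden layer, sigmoid activation in the output layer."""
    def __init__(self, n_in, n_hidden, n_out, rng=np.random.default_rng(0)):
        self.w_IJ = 0.1 * rng.standard_normal((n_in, n_hidden))
        self.w_KJ = 0.1 * rng.standard_normal((n_hidden, n_out))

    def forward(self, x_I):
        v_J = x_I @ self.w_IJ            # induced local field, cf. (15)
        x_J = morlet(v_J)                # hidden output via wavelet activation
        v_K = x_J @ self.w_KJ            # induced local field, cf. (16)
        return sigmoid(v_K), v_J, x_J

    def train_step(self, x_I, norm, eta_K=0.1, eta_J=0.1):
        z_K, v_J, x_J = self.forward(x_I)
        e = norm - z_K                                    # e(n) = norm(n) - z_K(n)
        delta_K = e * z_K * (1.0 - z_K)                   # sigmoid derivative
        self.w_KJ += eta_K * np.outer(x_J, delta_K)       # output-layer update, cf. (18)
        delta_J = (self.w_KJ @ delta_K) * morlet_deriv(v_J)
        self.w_IJ += eta_J * np.outer(x_I, delta_J)       # hidden-layer update, cf. (20)
        return e

# toy usage: one online training step with placeholder data
net = WKNNSketch(n_in=4, n_hidden=6, n_out=3)
x = np.array([0.5, 0.2, 0.3, 1.0])       # four input features (placeholders)
target = np.array([0.6, 0.4, 0.5])       # three desired outputs (placeholders)
print(net.train_step(x, target))
```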

3.3. The Updating of the Adaptive Variable Step. According to the algorithm presented above (see Section 2), the adaptive variable step can be regarded as a significant coefficient with the ability to influence the adaptivity and effectiveness of the algorithm. The stability and convergence of the algorithm are influenced by the step-size parameter, and the choice of this parameter reflects a compromise between the dual requirements of rapid convergence and small misadjustment. A large prediction error causes the step-size to increase to provide faster tracking, while a small prediction error results in a decrease of the step-size to yield smaller misadjustment [37]. Therefore, this paper improves the NLMS algorithm derived from the LMS algorithm, which is propitious to multidimensional nonlinear structures and avoids the limitation of the linear structure.

NLMS focuses on the main problem of the LMS algorithm, its constant step-size, and designs an adaptive step-size to avoid instability and to speed up convergence:

$$\mathbf{w}(n+1) = \mathbf{w}(n) + \frac{1}{\langle \mathbf{u}(n), \mathbf{u}(n)\rangle}\, e(n)\, \mathbf{u}(n), \qquad (22)$$

where $1/\langle \mathbf{u}(n), \mathbf{u}(n)\rangle = \eta(n)$ is regarded as the variable step-size compared with the LMS algorithm. But the NLMS has little ability to perform in the nonlinear space. By the kernel trick, the kernel function can not only improve the wavelet neural network but also update the step-size of the neural network self-adaptively. As (17) reveals, $\mathbf{z}_K$ is a consequence of the nonlinear function $f(\cdot)$. Since $f(\cdot)$ is continuously differentiable, this function can be matched by innumerable linear functions following the definition of differential calculus.

Theorem 2. The weight updating satisfies the NLMS condition in the linear integrable space $L^1(R)$ as follows:

$$\mathbf{w}(n+1) = \mathbf{w}(n) + \eta(n)\, e(n)\, \mathbf{u}(n), \qquad (23)$$

where $\eta(n) = 1/\langle \mathbf{u}(n), \mathbf{u}(n)\rangle$ is the adaptive variable step-size. Then the weights are expanded to the $K$-dimensional case $\mathbf{w}_K(n)$ and adjusted as

$$\mathbf{w}_K(n+1) = \mathbf{w}_K(n) + \boldsymbol{\eta}_K(n)\, f'\!\left(\mathbf{w}_K(n)^T \mathbf{u}_K(n)\right) e(n)\, \mathbf{u}_K(n), \qquad (24)$$

where the nonlinear function $f(\cdot)$ is continuously differentiable everywhere, $\boldsymbol{\eta}_K(n)$ has $K$ elements, and each step-size $\eta_K^{k}(n)$ $(k = 1, 2, \ldots, K)$ is given as

$$\eta_K^{k}(n) = \frac{1}{f'\!\left(\mathbf{w}_K(n)^T \mathbf{u}_K(n)\right)\left\langle \mathbf{u}_K(n), \mathbf{u}_K(n)\right\rangle}. \qquad (25)$$

Proof. Hypothesize that $\Delta$ is small enough and that a linear function $\vartheta(n) = \Lambda(n)\,\mathbf{u}_K(n) + b(n)$ exists with $|f(n) - \vartheta(n)| < \Delta$ in the closed interval $[m, \bar{m}]$; therefore, regard $f(n) \approx \vartheta(n)$ and $f'(n) \approx \Lambda(n)$ as satisfied.

Consider the equations

$$\varepsilon\!\left(\mathbf{w}_K(n)\right) = \mathrm{norm}(n) - f\!\left(\mathbf{w}_K(n)^T \mathbf{u}_K(n)\right) \approx \mathrm{norm}(n) - \Lambda(n)\left(\mathbf{w}_K(n)^T \mathbf{u}_K(n)\right) - b(n),$$
$$J\!\left(\mathbf{w}_K(n)\right) = \frac{1}{2}\,\varepsilon^2\!\left(\mathbf{w}_K(n)\right), \qquad (26)$$

where $\Lambda(n) = \mathrm{diag}\left[\lambda_1(n), \ldots, \lambda_K(n)\right]$. Minimize the function $J(\mathbf{w}_K(n))$ by setting $\partial J(\mathbf{w}_K(n))/\partial \mathbf{w}_K = 0$; then $\eta_k(n)$ $(k = 1, 2, \ldots, K)$ can be inferred as follows:

$$\boldsymbol{\eta}_K(n) = \mathrm{diag}\left[\eta_1(n), \ldots, \eta_K(n)\right], \qquad (27)$$

$$\eta_k(n) = \frac{1}{\lambda_k^2(n)\left\langle \mathbf{u}_K(n), \mathbf{u}_K(n)\right\rangle}. \qquad (28)$$

Equations (27) and (28) reveal that $\boldsymbol{\eta}_K(n)$ is independent of the intercept $b(n)$ of the linear function and relates only to the input and the slope $\Lambda(n)$. Motivated by the slight error introduced by $f(n) \approx \vartheta(n)$, the $\lambda_k^2(n)$ in (28) is replaced by $f'\!\left(\mathbf{w}_K(n)^T \mathbf{u}_K(n)\right)$, which improves the accuracy and completes the proof.

On the basis of the discussion above, (22) can be represented as a special case of Theorem 2 when the slope of the function $\vartheta(n)$ is 1. Theorem 2 expands the NLMS from the linear space to the multidimensional nonlinear space and broadens the application of the NLMS. Compared with the variable step-size in (22), (27) and (28) can be defined as the adaptive variable step-size of the NLMS in the extended space. When $\langle \mathbf{u}_K(n), \mathbf{u}_K(n)\rangle$ is too small, the problem caused by numerical calculation has to be considered. To overcome this difficulty, (28) is altered to

$$\eta_k(n) = \frac{1}{\lambda_k^2(n)\left\langle \mathbf{u}_K(n), \mathbf{u}_K(n)\right\rangle + \nu^2}, \qquad (29)$$

where $\nu^2 > 0$ is set to a rather small value. The weight updating in (20) is more complicated than that between the output layer and the hidden layer. Because the updating in (20) is based on the output layer, and the factor $e(i)\, f'\!\left(\kappa_a(\mathbf{v}_J(i))^T \mathbf{w}_{KJ}(i)\right)\mathbf{w}_{KJ}(i)$ relates every neuron of the output layer to the updating process between the $j$th neuron of the hidden layer and the previous layer, every neuron of the output layer contributes to $\boldsymbol{\eta}_J(n)$. Therefore, this paper adopts the same step-size in $\boldsymbol{\eta}_J(n)$ based on $\boldsymbol{\eta}_K(n)$, and the elements $\eta_J^{j}(n)$ of $\boldsymbol{\eta}_J(n)$ are given as follows:

$$\eta_J^{j}(n) = \frac{1}{K}\sum_{k=1}^{K} \eta_K^{k}(n). \qquad (30)$$
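A minimal sketch of the regularized step size in (28)-(29) follows; here u stands for the input vector $\mathbf{u}_K(n)$, slope stands for $\lambda_k(n)$ (or for $f'(\mathbf{w}_K^T\mathbf{u}_K)$ after the substitution made at the end of the proof of Theorem 2), and the value of $\nu^2$ is an illustrative assumption.

```python
import numpy as np

def adaptive_step(u, slope, nu2=1e-6):
    """eta_k(n) = 1 / (slope^2 * <u, u> + nu^2), cf. (28)-(29)."""
    return 1.0 / (slope ** 2 * float(np.dot(u, u)) + nu2)

# example
u = np.array([0.4, -0.2, 0.1])
print(adaptive_step(u, slope=0.25))
```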

Input: input signal x_I; ideal (norm) output norm
Output: observed output z_K(n); weights w(n)
Initialize: the number of layers in the WKNN structure;
            the number of neurons in every layer n_1, n_2, ...;
            the activation function of every layer f_1(.), f_2(.), ...;
            the loop count N
Begin:
    Initialize the weights w(0)
    Repeat
        Compute the wavelet kernel neural network output z_K(n)
        Compute the output layer step-size eta_K(n)
        Compute the hidden layer step-size eta_J(n)
        Update the synaptic weights w_KJ
        Update the synaptic weights w_IJ
        Modulate the wavelet kernel width a(n)
    Until n > N
End

Algorithm 1: Algorithm process.

3.4. The Modulation of the Wavelet Kernel Width. The goal of minimizing the mean square error can be achieved by a gradient search. If the kernel space structure cost function is derivable, the wavelet kernel of the neural network can also be deduced by a gradient algorithm. As in the conventional wavelet neural network, the gradient search method can be utilized for updating the wavelet kernel weight due to the derivability of the wavelet kernel function. Besides the synaptic weights among the various layers of the network, there is another coefficient, the dilation factor $a$ in (11), which also influences the mean square error. With respect to the cost function (14), the dilation factor $a$ is corrected by

$$a(n+1) = a(n) - \mu\, \nabla_a\!\left(e^2(n)\right), \qquad (31)$$

where $\mu$ is the constant step-size and $\nabla_a(\cdot)$ is the partial derivative, that is, the gradient function. According to the chain rule of calculus, this gradient is expressed as

$$\nabla_a(n) = -e(n)\,\frac{\partial \mathbf{z}_K}{\partial a(n)} = -e(n)\,\frac{\partial \mathbf{z}_K}{\partial \theta(n)}\,\frac{\partial \theta(n)}{\partial a(n)},
\qquad
\frac{\partial \theta(n)}{\partial a(n)} = -\frac{\theta(n)}{a(n)}, \qquad (32)$$

where $\theta = \mathbf{v}/a$ and $\mathbf{z}_K = f\!\left(\sum_{i=0}^{n-1}\eta(i)\,\beta(i)\,\kappa(\theta(i), \theta(n))\right)$. Then $\nabla_a(n)$ can be represented as

$$\nabla_a(n) = \frac{1}{a(n)}\, e(n) \sum_{i=0}^{n-1}\eta(i)\,\beta(i)\,\theta(n)\,\frac{\partial \kappa\!\left(\theta(i), \theta(n)\right)}{\partial \theta(n)}. \qquad (33)$$

Allowing for (19) and (33), the updating formulation of the dilation factor $a(n)$ is

$$a(n+1) = a(n) - \frac{\mu}{a(n)}\, e(n) \sum_{i=0}^{n-1}\eta(i)\,\beta(i)\,\theta(n)\,\frac{\partial \kappa\!\left(\theta(i), \theta(n)\right)}{\partial \theta(n)}, \qquad (34)$$

where $\theta(n) = \mathbf{v}(n)/a(n)$.

The WKNN process is summarized in Algorithm 1.
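The dilation update in (31) can be sketched generically as a gradient step on the squared error. To keep the sketch short, the analytic chain-rule gradient of (32)-(34) is replaced here by a central finite difference, so this is only an assumed stand-in showing where the kernel-width modulation of Algorithm 1 fits, not the exact update used in the paper.

```python
def update_dilation(a, squared_error_fn, mu=0.01, h=1e-5):
    """One gradient step a(n+1) = a(n) - mu * d(e^2)/da, cf. (31).
    squared_error_fn(a) must return e^2(n) for the current sample when the
    wavelet kernel width is set to 'a'; its gradient is approximated by a
    central difference instead of the analytic expression (32)-(34)."""
    grad = (squared_error_fn(a + h) - squared_error_fn(a - h)) / (2.0 * h)
    return a - mu * grad

# example with a toy error surface
print(update_dilation(1.0, lambda a: (a - 0.7) ** 2, mu=0.1))
```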

4. The Model Design of the IPWR

In order to reduce the volume of the pressure vessel, the integrated reactor generally uses a compact once-through steam generator (OTSG). Since there is no separator in the steam generator, the superheated steam produced by the OTSG can improve the thermal efficiency of the secondary loop systems. Compared with the natural circulation U-tube steam generator, the OTSG is designed with a simpler structure but less water volume in the heat transfer tubes. Since it is difficult to measure the water level in the OTSG, especially during variable load operation, the OTSG needs a high-efficiency control system to ensure safe operation. The primary coolant average temperature is an important parameter for ensuring steam generator heat exchange. A control scheme with a constant coolant average temperature can achieve accurate control of the reactor power and avoid the generation of large coolant volume changes in the loop.

In the integrated pressurized water reactor (IPWR) system, the essential equations of the thermal-hydraulic model are listed as follows.

(1) Mass continuity equations:

$$\frac{\partial}{\partial t}\left(\alpha_g \rho_g\right) + \frac{1}{A}\frac{\partial}{\partial x}\left(\alpha_g \rho_g v_g A\right) = \Gamma_g,
\qquad
\frac{\partial}{\partial t}\left(\alpha_f \rho_f\right) + \frac{1}{A}\frac{\partial}{\partial x}\left(\alpha_f \rho_f v_f A\right) = \Gamma_f. \qquad (35)$$

(2) Momentum conservation equations:

$$\alpha_g \rho_g A \frac{\partial v_g}{\partial t} + \frac{1}{2}\alpha_g \rho_g A \frac{\partial v_g^2}{\partial x}
= -\alpha_g A \frac{\partial P}{\partial x} + \alpha_g \rho_g B_x A - \left(\alpha_g \rho_g A\right)\mathrm{FWG}\left(v_g\right)
+ \Gamma_g A \left(v_{gI} - v_g\right) - \left(\alpha_g \rho_g A\right)\mathrm{FIG}\left(v_g - v_f\right)
- C \alpha_g \alpha_f \rho_m A\left[\frac{\partial\left(v_g - v_f\right)}{\partial t} + v_f \frac{\partial v_g}{\partial x} - v_g \frac{\partial v_f}{\partial x}\right],$$

$$\alpha_f \rho_f A \frac{\partial v_f}{\partial t} + \frac{1}{2}\alpha_f \rho_f A \frac{\partial v_f^2}{\partial x}
= -\alpha_f A \frac{\partial P}{\partial x} + \alpha_f \rho_f B_x A - \left(\alpha_f \rho_f A\right)\mathrm{FWF}\left(v_f\right)
- \Gamma_g A \left(v_{fI} - v_f\right) - \left(\alpha_f \rho_f A\right)\mathrm{FIF}\left(v_f - v_g\right)
- C \alpha_f \alpha_g \rho_m A\left[\frac{\partial\left(v_f - v_g\right)}{\partial t} + v_g \frac{\partial v_f}{\partial x} - v_f \frac{\partial v_g}{\partial x}\right]. \qquad (36)$$

(3) Energy conservation equations:

$$\frac{\partial}{\partial t}\left(\alpha_g \rho_g U_g\right) + \frac{1}{A}\frac{\partial}{\partial x}\left(\alpha_g \rho_g U_g v_g A\right)
= -P\frac{\partial \alpha_g}{\partial t} - \frac{P}{A}\frac{\partial}{\partial x}\left(\alpha_g v_g A\right)
+ Q_{wg} + Q_{ig} + \Gamma_{ig} h_g^{*} + \Gamma_w h_g' + \mathrm{DISS}_g,$$

$$\frac{\partial}{\partial t}\left(\alpha_f \rho_f U_f\right) + \frac{1}{A}\frac{\partial}{\partial x}\left(\alpha_f \rho_f U_f v_f A\right)
= -P\frac{\partial \alpha_f}{\partial t} - \frac{P}{A}\frac{\partial}{\partial x}\left(\alpha_f v_f A\right)
+ Q_{wf} + Q_{if} - \Gamma_{ig} h_f^{*} - \Gamma_w h_f' + \mathrm{DISS}_f. \qquad (37)$$

(4) Noncondensables in the gas phase:

$$\frac{\partial}{\partial t}\left(\alpha_g \rho_g X_n\right) + \frac{1}{A}\frac{\partial}{\partial x}\left(\alpha_g \rho_g X_n v_g A\right) = 0. \qquad (38)$$

(5) Boron concentration in the liquid field:

$$\frac{\partial \rho_b}{\partial t} + \frac{1}{A}\frac{\partial\left(\rho_b v_f A\right)}{\partial x} = 0. \qquad (39)$$

(6) The point-reactor kinetics equations:

$$\frac{dn}{dt} = \frac{\rho - \beta}{\Lambda}\, n + \sum_{i=1}^{6}\lambda_i C_i + S,
\qquad
\frac{dC_i}{dt} = \frac{\beta_i}{\Lambda}\, n - \lambda_i C_i. \qquad (40)$$

The RELAP5 thermal-hydraulic model solves eight field equations for eight primary dependent variables. The primary dependent variables are the pressure ($P$), the phasic specific internal energies ($U_g$, $U_f$), the vapor volume fraction (void fraction) ($\alpha_g$), the phasic velocities ($v_g$, $v_f$), the noncondensable quality ($X_n$), and the boron density ($\rho_b$). The independent variables are time ($t$) and distance ($x$). The secondary dependent variables used in the equations are the phasic densities ($\rho_g$, $\rho_f$), the phasic temperatures ($T_g$, $T_f$), the saturation temperature ($T_s$), and the noncondensable mass fraction in the noncondensable gas phase ($X_{ni}$) [38]. The point-reactor kinetics model in RELAP5 computes both the immediate fission power and the power from the decay of fission fragments.
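For illustration, a minimal explicit-Euler integrator of the six-group point-reactor kinetics equations in (40) is sketched below; the delayed-neutron data and the generation time are generic textbook values for a thermal reactor, not the IPWR data used with RELAP5 in the paper.

```python
import numpy as np

# Generic six-group delayed-neutron constants (assumed, illustrative only)
beta_i = np.array([0.000215, 0.001424, 0.001274, 0.002568, 0.000748, 0.000273])
lam_i  = np.array([0.0124, 0.0305, 0.111, 0.301, 1.14, 3.01])   # 1/s
beta   = beta_i.sum()
Lambda = 2.0e-5   # prompt neutron generation time in s (assumed)

def point_kinetics_step(n, C, rho, dt=1e-3):
    """One Euler step of dn/dt = ((rho - beta)/Lambda) n + sum(lam_i C_i) and
    dC_i/dt = (beta_i/Lambda) n - lam_i C_i, cf. (40) with S = 0."""
    dn = ((rho - beta) / Lambda) * n + np.dot(lam_i, C)
    dC = (beta_i / Lambda) * n - lam_i * C
    return n + dt * dn, C + dt * dC

# start from equilibrium at n = 1 and apply a small reactivity step
n, C = 1.0, beta_i / (lam_i * Lambda)
for _ in range(1000):
    n, C = point_kinetics_step(n, C, rho=0.0005)
print(n)
```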

5. The Control Strategy and FOPID Controller Design

A new control law is demanded in nuclear devices [39]. When the nuclear power plant operates stably under different conditions, the variation of the main parameters on the primary and secondary sides is indeterminate. The operation strategy keeps the coolant average temperature $T_{av}$ constant under load changes. The WKNN fuzzy fractional-order PID system achieves coolant average temperature invariability by controlling the reactor power system (Figure 4).

The five parameters, including three coefficients and two exponents, are acquired in two forms: fuzzy logic and the wavelet kernel neural network. The idea of this combination is born from the fact that the power level of a nuclear reactor tends to alter dramatically in a short while and then maintain slight variation within a certain period. Due to the boundedness of the WKNN excitation function, the output range of the FOPID coefficients cannot be wide enough, so the initial parameters of the FOPID are set by the WKNN and fuzzy logic. Moreover, the exponents of the FOPID have no need to vary with small changes. As a consequence, the fuzzy logic rule is executed when the reactor power changes remarkably, to tune the five parameters of the FOPID, while the WKNN keeps updating and tuning the coefficients on the basis of the fuzzy logic all the while. The process is illustrated in Figure 5.

Based on theoretical analysis and design experience, the fuzzy enhanced fractional-order PID controller is adopted to achieve the reactor power changing with the steam flow and to maintain the coolant average temperature $T_{av}$ constant. The power control system regards the average temperature deviation and the steam flow as the main control signal and the assistant control signal, respectively. Since the thermal output of the reactor is proportional to the steam output of the steam generator, the consideration of the steam flow variation can calculate the power need immediately, which impels the reactor power to follow the secondary loop load change and improves the mobility of the equipment. When the load alters, the required reactor power can be determined by the following equation:

$$n_0 = 1 + K_1\,\Delta G_s(n) + K_I\sum_{i=0}^{n}\Delta T_e(i) + K_D\sum_{i=0}^{n}\left[\Delta T_e(i) - \Delta T_e(i-1)\right], \qquad (41)$$

where $K_I = K_P\left(1 - \frac{1+\delta}{\gamma}\right)^{-1}$, $K_D = K_P\left(1 - \frac{1-\delta}{\gamma}\right)^{-1}$, $\Delta t$ is the time step, $K_1$ is a conversion coefficient, $K_P$, $\delta$, and $\gamma$ are design parameters, $G_s$ is the new steam flow of the steam generator, and $\Delta T_e$ is the coolant average temperature deviation. Consider

$$\Delta T_e = T_{av}(0) - T_{av}, \qquad (42)$$

where $T_{av}(0)$ is the norm value and $T_{av}$ is the measured temperature:

$$T_{av} = \frac{T_i + T_o}{2}, \qquad (43)$$

where $T_i$ and $T_o$ are the coolant temperatures of the core inlet and outlet, respectively. The signal from the signal processor is

$$e = n_0 - n. \qquad (44)$$

Following the fuzzy logic principle, the fuzzy FOPID controller considers the error $e$ and its rate of change $de/dt$ as incoming signals. It adjusts $K_{Pp}$, $K_{Pi}$, $K_{Pd}$, $\lambda$, and $\mu$ online and produces them with the fractional factors satisfying the condition $0 < \lambda, \mu < 2$. The coordination principle is to establish the relationship among $e$, $de/dt$, and the controller parameters. The fuzzy subset of each fuzzy quantity is chosen as {NB, NM, NS, ZO, PS, PM, PB}, and the membership functions adopt normal distribution functions. A larger $K_{Pp}$ and smaller $K_{Pi}$ and $K_{Pd}$ obtain a more rapid system response and avoid the differential overflow caused by the initial saltus step when $e$ is large; then smaller $K_{Pp}$ and $K_{Pi}$ are adopted to prevent exaggerated overshooting. When $e$ is medium, $K_{Pp}$ should be reduced properly; meanwhile, $K_{Pp}$, $K_{Pi}$, and $K_{Pd}$ should be moderate to decrease the system overshooting and ensure a fast response. If $e$ is small, larger $K_{Pp}$, $K_{Pi}$, and $K_{Pd}$ have the ability to fade the steady-state error; at the same time, moderate $K_{Pi}$ and $K_{Pd}$ keep the output away from vibration around the norm value and guarantee the anti-interference performance of the system. For more information, one can refer to the literature [40].

The WKNN proposed in Section 3 has been adopted to improve the performance of the FOPID parameter tuning, and a three-layer structure has been applied. The WKNN inputs and outputs can be displayed as follows:

$$x_1(n) = \mathrm{rin}(n), \quad x_2(n) = \mathrm{yout}(n), \quad x_3(n) = \mathrm{rin}(n) - \mathrm{yout}(n), \quad x_4(n) = 1,$$
$$z_1(n) = K_{Pp}(n), \quad z_2(n) = K_{Pi}(n), \quad z_3(n) = K_{Pd}(n). \qquad (45)$$

Owing to the boundedness of the sigmoid function, the WKNN output parameters for the enhanced PID are maintained in a range of $[-1, 1]$. To solve this problem, the expertise improvement unit is added between the WKNN and the power require counter in Figure 4, which corrects the order of magnitude of the parameters. This control strategy overcomes the limitation problem of the NN generated by the activation function and takes advantage of the a priori knowledge of researchers in the reactor power domain, which endows the algorithm with reliability and simplifies the fuzzy logic rule setting to reduce the control complexity.
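To make the FOPID law concrete, the sketch below implements u = K_Pp e + K_Pi D^(-lambda) e + K_Pd D^(mu) e with a truncated Grünwald-Letnikov discretization; this is one common approximation chosen for illustration, not necessarily the discretization used by the authors, and the gains, orders, sampling step, and memory length are placeholders that the fuzzy rules and the WKNN would tune online.

```python
import numpy as np

def gl_weights(alpha, n_terms):
    """Grunwald-Letnikov weights w_j = (-1)^j * C(alpha, j),
    via the recursion w_0 = 1, w_j = w_{j-1} * (1 - (alpha + 1)/j)."""
    w = np.empty(n_terms)
    w[0] = 1.0
    for j in range(1, n_terms):
        w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)
    return w

class FOPID:
    """u = Kp*e + Ki*D^(-lam) e + Kd*D^(mu) e with 0 < lam, mu < 2,
    using a truncated Grunwald-Letnikov approximation of step h."""
    def __init__(self, Kp, Ki, Kd, lam, mu, h=0.1, memory=500):
        self.Kp, self.Ki, self.Kd, self.lam, self.mu, self.h = Kp, Ki, Kd, lam, mu, h
        self.w_int = gl_weights(-lam, memory)   # fractional-integral weights
        self.w_der = gl_weights(mu, memory)     # fractional-derivative weights
        self.e_hist = np.zeros(memory)          # e(t), e(t-h), ...

    def step(self, e):
        self.e_hist = np.roll(self.e_hist, 1)
        self.e_hist[0] = e
        frac_int = self.h ** self.lam * np.dot(self.w_int, self.e_hist)
        frac_der = self.h ** (-self.mu) * np.dot(self.w_der, self.e_hist)
        return self.Kp * e + self.Ki * frac_int + self.Kd * frac_der

# placeholder gains and orders; in the paper they are tuned online
ctrl = FOPID(Kp=1.2, Ki=0.4, Kd=0.3, lam=0.9, mu=0.8)
print(ctrl.step(0.5))
```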
Figure 3: The structure of WKNN (input layer I, hidden layer J, output layer K).

6. Simulation Results and Discussion

In this section, simulations of the WKNN and of the WKNN fractional-order PID controller have been carried out, aiming to examine the performance and assess the accuracy of the algorithm proposed in the previous sections, and the performance of the WKNN algorithm is compared experimentally with that of the WNN algorithm. To display the results clearly, the simulations are divided into two parts. One is realized as numerical examples in a Matlab program, which includes the WKNN algorithm and its fractional-order PID application in a time delay system. The other is written in the FORTRAN language and connected with RELAP5, which implements the reactor power control.

6.1. Numerical Examples. To demonstrate the performance of the WKNN algorithm, two simulations are conducted in comparison with the WNN algorithm. The structure of the neural network can be consulted in Figure 3, and the initial weights are generated randomly. The first experiment shows the behavior of the WKNN and the WNN in a stationary environment in which the norm output is 0.6 (Figures 6 and 7). The second one operates in a time-varying process given by $0.3\sin(0.02t) + 0.5$ (Figures 8 and 9).

Figure 4: Control system of nuclear reactor power.

Figure 5: The control strategy of FOPID.

Figures 6-9 adopt the WKNN and the WNN to calculate the regression process in the constant and time-varying environments, respectively, and both methods bring a relatively large system error in the initial period. In Figures 6 and 7, the constant norm output and the straight regression process emphasize the faster convergence of the algorithms. In Figures 8 and 9, the time variation makes the regression process more difficult than in the previous experiment. Firstly, there is overshoot from 0 s to 50 s in the WNN, which makes the error of the WNN decrease slowly; when the error of the WNN starts to reduce, that of the WKNN has already approached zero. Secondly, comparing the two error curves, it cannot be ignored that the WKNN curve has sharp points while the WNN curve is relatively smooth. Thirdly, although the initial parameters are random, the regression process is periodical. The figures imply that the convergence rate and regression quality of the WKNN are better than those of the WNN and that the initial condition brings no benefit to the WKNN. From the argument above, it is indicated preliminarily that the WKNN algorithm reduces the amplitude of the error variation and has better performance than the WNN. However, it remains to be verified whether or not the WKNN is more appropriate for FOPID controller tuning in a time delay system. Thus, the proposed algorithm is further examined for its effectiveness in tuning the FOPID controller in time delay control. To illustrate clearly, a known dynamic time delay control system is proposed as follows:

$$G(s) = \frac{4 e^{-3s}}{5s + 1}, \qquad (46)$$

and then the performance of the unit step response is analyzed. The control process is displayed in Figures 10 and 11.
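The closed-loop test can be emulated along the following lines; the plant parameters (gain 4, time constant 5 s, dead time 3 s) are my reading of (46), the FOPID is discretized with the same Grünwald-Letnikov scheme as in the sketch of Section 5, and the fixed gains are placeholders, so this only illustrates the structure of the unit-step experiment, not the reported results.

```python
import numpy as np

# Unit-step test of an assumed plant G(s) = 4*exp(-3s)/(5s+1) under a
# fixed-parameter FOPID (placeholder gains, no online tuning).
dt, t_end = 0.05, 100.0
K_gain, T_const, L_delay = 4.0, 5.0, 3.0
Kp, Ki, Kd, lam, mu = 0.3, 0.05, 0.1, 0.9, 0.5

def gl_weights(alpha, n):
    # Grunwald-Letnikov weights (-1)^j * C(alpha, j)
    w = np.empty(n)
    w[0] = 1.0
    for j in range(1, n):
        w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)
    return w

memory = int(t_end / dt)                     # keep the whole error history
w_int, w_der = gl_weights(-lam, memory), gl_weights(mu, memory)
e_hist = np.zeros(memory)
u_buf = np.zeros(int(L_delay / dt) + 1)      # transport-delay line for u(t - L)
y, setpoint = 0.0, 1.0

for _ in range(int(t_end / dt)):
    e = setpoint - y
    e_hist = np.roll(e_hist, 1)
    e_hist[0] = e
    u = (Kp * e
         + Ki * dt ** lam * np.dot(w_int, e_hist)      # ~ D^(-lam) e
         + Kd * dt ** (-mu) * np.dot(w_der, e_hist))   # ~ D^(mu) e
    u_buf = np.roll(u_buf, 1)
    u_buf[0] = u
    y += dt / T_const * (K_gain * u_buf[-1] - y)       # Euler step of T*y' + y = K*u(t-L)

print(y)   # final value of the unit-step response
```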

Figure 6: The comparison of output.

Figure 7: The comparison of error.

Figure 8: The comparison of output.

Figure 9: The comparison of error.



Figure 10: The comparison of controller output.

Figure 11: The comparison of controller error.

From the information in Figures 10 and 11, when the time delay system utilizes FOPID controllers of the same order, the overshooting of the WNN-FOPID controller and the oscillation frequency and amplitude of its output are all obviously higher than those of the WKNN-FOPID controller. The control curves and error curves of the different algorithms confirm that the whole tuning performance of the WKNN is better than that of the WNN. To validate the practicability of the algorithm and its independence from the model, the next subsection applies the strategy of Section 5 to control a more complex nuclear reactor model.

6.2. RELAP5 Operation Example. This paper establishes the integrated pressurized water reactor (IPWR) model by the RELAP5 transient analysis program, and the control strategy in Figure 5 is applied to control the reactor power. Owing to the boundedness of the sigmoid function, the WKNN output parameters for the enhanced PID remain in a range of $[-1, 1]$; to solve this problem, the expertise improvement unit is added between the WKNN and the power require counter in Figure 4, which corrects the order of magnitude of the parameters. Through the analysis of the load shedding limiting condition of the nuclear power unit, the reliability and availability of the proposed control method have been demonstrated. In the process of load shedding, the secondary feedwater reduces from 100% FP to 60% FP in 10 s and then reduces to 20% FP. The operating characteristics are displayed in Figure 12.

Figure 12(a) displays the variation of the once-through steam generator (OTSG) secondary feedwater flow and steam flow. Due to the small water capacity in the OTSG, the steam flow decreases rapidly when the water flow decreases. Then the decrease in steam flow leads to a reduction of the heat removed by the feedwater, and the primary coolant average temperature increases. The reactor power control system acts according to the deviation of the average temperature and introduces negative reactivity to bring the power down and maintain the temperature constant. The variation of the outlet and inlet temperatures is shown in Figure 12(c). The primary coolant flow stays the same, and the core temperature difference is reduced to guarantee heat removal. In the process of rapid load variation, the change of the coolant temperature causes the motion of the coolant loading and then leads to the variation of the pressurizer pressure (Figure 12(d)). The pressure maximum approaches 15.62 MPa; however, it drops rapidly with the power decrease and volume compression. Then the pressure comes to a stable state when the coolant temperature stabilizes.

7. Conclusions

The goal of this paper is to present a novel wavelet kernel neural network. Before the WKNN was proposed, the wavelet kernel function was confirmed to be valid in a neural network. The WKNN takes advantage of KLMS and NLMS, which endow the algorithm with reliability, stability, and good convergence. On the basis of the WKNN, a novel control strategy for FOPID controller design has been proposed. It overcomes the boundary problem of the activation functions in the network and utilizes the experiential knowledge of researchers sufficiently. Finally, by establishing the model of the IPWR, the method above is applied in a simulation of load shedding, and the simulation results validate that the proposed method has practicability and reliability in an actual complicated system.

Figure 12: Operating characteristics: (a) feedwater flow and steam flow (mass flow rate, kg/s); (b) reactor power; (c) OTSG outlet temperature, OTSG inlet temperature, and primary average temperature (K); (d) primary coolant pressure (MPa).

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors are grateful for the funding by grants from the National Natural Science Foundation of China (Project nos. 51379049 and 51109045), the Fundamental Research Funds for the Central Universities (Project nos. HEUCFX41302 and HEUCF110419), the Scientific Research Foundation for the Returned Overseas Chinese Scholars, Heilongjiang Province (no. LC2013C21), and the Young College Academic Backbone of Heilongjiang Province (no. 1254G018) in support of the present research. They also thank the anonymous reviewers for improving the quality of this paper.

References

[1] M. V. D. Oliveira and J. C. S. D. Almeida, "Application of artificial intelligence techniques in modeling and control of a nuclear power plant pressurizer system," Progress in Nuclear Energy, vol. 63, pp. 71-85, 2013.
[2] C. Liu, J. Peng, F. Zhao, and C. Li, "Design and optimization of fuzzy-PID controller for the nuclear reactor power control," Nuclear Engineering and Design, vol. 239, no. 11, pp. 2311-2316, 2009.
[3] W. Zhang, Y. Sun, and X. Xu, "Two degree-of-freedom Smith predictor for processes with time delay," Automatica, vol. 34, no. 10, pp. 1279-1282, 1998.

[4] S. Niculescu and A. M. Annaswamy, "An adaptive Smith-controller for time-delay systems with relative degree 2," Systems and Control Letters, vol. 49, no. 5, pp. 347-358, 2003.
[5] W. D. Zhang, Y. X. Sun, and X. M. Xu, "Dahlin controller design for multivariable time-delay systems," Acta Automatica Sinica, vol. 24, no. 1, pp. 64-72, 1998.
[6] L. Wu, Z. Feng, and J. Lam, "Stability and synchronization of discrete-time neural networks with switching parameters and time-varying delays," IEEE Transactions on Neural Networks and Learning Systems, vol. 24, no. 12, pp. 1957-1972, 2013.
[7] L. Wu, Z. Feng, and W. X. Zheng, "Exponential stability analysis for delayed neural networks with switching parameters: average dwell time approach," IEEE Transactions on Neural Networks, vol. 21, no. 9, pp. 1396-1407, 2010.
[8] D. Yang and H. Zhang, "Robust networked control for uncertain fuzzy systems with time-delay," Acta Automatica Sinica, vol. 33, no. 7, pp. 726-730, 2007.
[9] Y. Shi and B. Yu, "Robust mixed H2/H-infinity control of networked control systems with random time delays in both forward and backward communication links," Automatica, vol. 47, no. 4, pp. 754-760, 2011.
[10] L. Wu and W. X. Zheng, "Passivity-based sliding mode control of uncertain singular time-delay systems," Automatica, vol. 45, no. 9, pp. 2120-2127, 2009.
[11] L. Wu, X. Su, and P. Shi, "Sliding mode control with bounded L2 gain performance of Markovian jump singular time-delay systems," Automatica, vol. 48, no. 8, pp. 1929-1933, 2012.
[12] Z. Qin, S. Zhong, and J. Sun, "Sliding mode control experiments of uncertain dynamical systems with time delay," Communications in Nonlinear Science and Numerical Simulation, vol. 18, no. 12, pp. 3558-3566, 2013.
[13] L. M. Eriksson and M. Johansson, "PID controller tuning rules for varying time-delay systems," in American Control Conference, pp. 619-625, IEEE, July 2007.
[14] I. Pan, S. Das, and A. Gupta, "Tuning of an optimal fuzzy PID controller with stochastic algorithms for networked control systems with random time delay," ISA Transactions, vol. 50, no. 1, pp. 28-36, 2011.
[15] H. Tran, Z. Guan, X. Dang, X. Cheng, and F. Yuan, "A normalized PID controller in networked control systems with varying time delays," ISA Transactions, vol. 52, no. 5, pp. 592-599, 2013.
[16] C. Hwang and Y. Cheng, "A numerical algorithm for stability testing of fractional delay systems," Automatica, vol. 42, no. 5, pp. 825-831, 2006.
[17] S. E. Hamamci, "An algorithm for stabilization of fractional-order time delay systems using fractional-order PID controllers," IEEE Transactions on Automatic Control, vol. 52, no. 10, pp. 1964-1968, 2007.
[18] S. Das, I. Pan, S. Das, and A. Gupta, "A novel fractional order fuzzy PID controller and its optimal time domain tuning based on integral performance indices," Engineering Applications of Artificial Intelligence, vol. 25, no. 2, pp. 430-442, 2012.
[19] H. Özbay, C. Bonnet, and A. R. Fioravanti, "PID controller design for fractional-order systems with time delays," Systems and Control Letters, vol. 61, no. 1, pp. 18-23, 2012.
[20] S. Haykin, Adaptive Filter Theory, Prentice-Hall, Upper Saddle River, NJ, USA, 4th edition, 2002.
[21] J. Kivinen, A. J. Smola, and R. Williamson, "Online learning with kernels," IEEE Transactions on Signal Processing, vol. 52, no. 8, pp. 2165-2176, 2004.
[22] W. Liu, J. C. Principe, and S. Haykin, Kernel Adaptive Filtering: A Comprehensive Introduction, John Wiley & Sons, 1st edition, 2010.
[23] W. D. Parreira, J. C. M. Bermudez, and J. Tourneret, "Stochastic behavior analysis of the Gaussian kernel least-mean-square algorithm," IEEE Transactions on Signal Processing, vol. 60, no. 5, pp. 2208-2222, 2012.
[24] H. Fan and Q. Song, "A linear recurrent kernel online learning algorithm with sparse updates," Neural Networks, vol. 50, pp. 142-153, 2014.
[25] W. Gao, J. Chen, C. Richard, J. Huang, and R. Flamary, "Kernel LMS algorithm with forward-backward splitting for dictionary learning," in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '13), pp. 5735-5739, Vancouver, BC, Canada, 2013.
[26] J. Chen, F. Ma, and J. Chen, "A new scheme to learn a kernel in regularization networks," Neurocomputing, vol. 74, no. 12-13, pp. 2222-2227, 2011.
[27] F. Wu and Y. Zhao, "Novel reduced support vector machine on Morlet wavelet kernel function," Control and Decision, vol. 21, no. 8, pp. 848-856, 2006.
[28] W. Liu, P. P. Pokharel, and J. C. Principe, "The kernel least-mean-square algorithm," IEEE Transactions on Signal Processing, vol. 56, no. 2, pp. 543-554, 2008.
[29] P. P. Pokharel, L. Weifeng, and J. C. Principe, "Kernel LMS," in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '07), pp. III-1421-III-1424, April 2007.
[30] Z. Khandan and H. S. Yazdi, "A novel neuron in kernel domain," ISRN Signal Processing, vol. 2013, Article ID 748914, 11 pages, 2013.
[31] N. Aronszajn, "Theory of reproducing kernels," Transactions of the American Mathematical Society, vol. 68, pp. 337-404, 1950.
[32] Q. Zhang, "Using wavelet network in nonparametric estimation," IEEE Transactions on Neural Networks, vol. 8, no. 2, pp. 227-236, 1997.
[33] Q. Zhang and A. Benveniste, "Wavelet networks," IEEE Transactions on Neural Networks, vol. 3, no. 6, pp. 889-898, 1992.
[34] S. Haykin, Neural Networks and Learning Machines, Prentice Hall, Upper Saddle River, NJ, USA, 3rd edition, 2008.
[35] B. Schölkopf, C. J. C. Burges, and A. J. Smola, Advances in Kernel Methods: Support Vector Learning, MIT Press, 1999.
[36] L. Zhang, W. Zhou, and L. Jiao, "Wavelet support vector machine," IEEE Transactions on Systems, Man and Cybernetics, Part B: Cybernetics, vol. 34, pp. 34-39, 2004.
[37] R. H. Kwong and E. W. Johnston, "A variable step size LMS algorithm," IEEE Transactions on Signal Processing, vol. 40, no. 7, pp. 1633-1642, 1992.
[38] S. M. Sloan, R. R. Schultz, and G. E. Wilson, RELAP5/MOD3 Code Manual, United States Nuclear Regulatory Commission, Idaho National Engineering Laboratory, 1998.
[39] G. Xia, J. Su, and W. Zhang, "Multivariable integrated model predictive control of nuclear power plant," in Proceedings of the IEEE 2nd International Conference on Future Generation Communication and Networking Symposia (FGCNS '08), vol. 4, pp. 8-11, 2008.
[40] S. Das, I. Pan, and S. Das, "Fractional order fuzzy control of nuclear reactor power with thermal-hydraulic effects in the presence of random network induced delay and sensor noise having long range dependence," Energy Conversion and Management, vol. 68, pp. 200-218, 2013.