NEURAL NETWORKS
ABSTRACT
This paper presents the development of a neuro-wavelet hybrid system for long-time prediction that
incorporates multiscale wavelet shell decompositions into a set of neural networks for multistage
prediction.
Keywords:
neuro-wavelet, long-time prediction, multiscale wavelet, neural networks, multistage prediction
fahri@neu.edu.tr, jamalfathi2004@yahoo.com
Static modeling with wavelet networks has been investigated by other authors in
[17]. We consider a process with Ni inputs and a scalar output yp. Steady-state
measurements of the inputs and outputs of the process build up a training set of N
examples (x^n, y_p^n), with x^n = [x_1^n, ..., x_Ni^n]^T being the input vector for
example n, and

y_p^n = f(x^n) + w^n,   n = 1 to N   (2)

y^n = Ψ(x^n, θ),   n = 1 to N   (3)

where y^n is the model output value related to example n, the nonlinear function Ψ
is given by relation (7), and θ is the set of adjustable parameters.
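As a small numerical illustration of (2) and (3), the sketch below draws steady-state examples from a hypothetical process f with additive noise w and fits a model that is linear in its parameters θ; the basis functions stand in for the wavelet network Ψ of relation (7) and, like all names here, are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical process with Ni = 2 inputs and additive noise, as in (2).
Ni, N = 2, 100
X = rng.uniform(-1.0, 1.0, (N, Ni))            # input vectors x^n
f = lambda x: x[0] ** 2 - 0.5 * x[1]           # unknown steady-state map
y_p = np.array([f(x) for x in X]) + 0.01 * rng.normal(size=N)  # y_p^n = f(x^n) + w^n

# Model of (3): psi(x, theta) = theta . [x1^2, x2, 1], fitted by least squares.
Phi = np.column_stack([X[:, 0] ** 2, X[:, 1], np.ones(N)])
theta, *_ = np.linalg.lstsq(Phi, y_p, rcond=None)
y_model = Phi @ theta                          # model outputs y^n
```

With this basis, the fitted θ recovers the coefficients of f up to the noise level.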
In this case, the N copies are independent, and the training is similar to that of a
static model. Therefore, the input vector of copy n can be viewed as the vector x^n
and {yp(n)} as the process output defined as y_p^n. More precisely, the inputs of
copy n can be renamed as:
- external inputs: x_k^n = u(n - k) with k = 1, ..., Ne
- state inputs: x_k^n = y_p(n - k + Ne) with k = Ne + 1, ..., Ne + Ns
Since the state inputs of the copies are forced to the corresponding desired values,
the predictor is said to be trained in a directed mode [8].
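In code, this renaming can be sketched as follows (a minimal illustration assuming 0-based arrays and hypothetical sequences u and y_p; not code from the paper):

```python
import numpy as np

def copy_inputs(u, y_p, n, Ne, Ns):
    """Input vector of copy n for directed (teacher-forced) training.

    External inputs: x_k = u(n - k)        for k = 1, ..., Ne
    State inputs:    x_k = y_p(n - k + Ne) for k = Ne+1, ..., Ne+Ns
    (n must be large enough that every index is non-negative.)
    """
    external = [u[n - k] for k in range(1, Ne + 1)]
    state = [y_p[n - k + Ne] for k in range(Ne + 1, Ne + Ns + 1)]
    return np.array(external + state)

# Example with Ne = 2 external and Ns = 2 state inputs.
u = np.arange(10, dtype=float)              # hypothetical control sequence
y_p = np.arange(100, 110, dtype=float)      # hypothetical measured process output
x5 = copy_inputs(u, y_p, n=5, Ne=2, Ns=2)   # [u(4), u(3), y_p(4), y_p(3)]
```

Because the state inputs are read from the measured y_p rather than fed back from the model, each copy can be trained independently, exactly as for a static model.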
Figure 1 Wavelet Network with Wavelet Units Φ1, Φ2, ..., ΦNW and Weights a0, a1, ..., aN.
Basically, we suggest the direct application of the à trous wavelet transform, based
on the ASR, to financial time series and the prediction of each scale of the wavelet
coefficients by a separate feedforward neural network. The predictions of the
individual scales proceed independently. The prediction results for the wavelet
coefficients can be combined directly through the linear additive reconstruction
property of the ASR or, preferably, by an additional neural network. The aim of this
last network is to adaptively choose the weight of each scale in the final prediction
[11], as illustrated in figure 2.
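A minimal sketch of such an à trous decomposition and its linear additive reconstruction property, assuming the common B3-spline kernel and mirror boundary handling (the exact filter of the ASR used in the paper may differ):

```python
import numpy as np

def atrous_decompose(x, J=3):
    """A trous (stationary) wavelet decomposition with a B3-spline kernel.

    Returns (details, residual) such that
    x == residual + sum(details): the additive reconstruction property.
    """
    h = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
    c = x.astype(float)
    details = []
    n = len(c)
    for j in range(J):
        step = 2 ** j                      # holes ("trous") grow with scale
        smooth = np.zeros_like(c)
        for i in range(n):
            acc = 0.0
            for m, hm in enumerate(h):
                idx = i + (m - 2) * step
                idx = abs(idx)             # mirror at the left edge
                if idx >= n:               # mirror at the right edge
                    idx = 2 * (n - 1) - idx
                acc += hm * c[idx]
            smooth[i] = acc
        details.append(c - smooth)         # wavelet coefficients at scale j
        c = smooth
    return details, c                      # c is the coarse residual
```

By construction, the detail scales and the coarse residual sum back to the original signal, which is what allows the per-scale predictions to be recombined additively.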
Figure 2 Illustration of the Procedure for Preparing Data in the Hybrid Neuro-
Wavelet Prediction Scheme.
Figure 3 shows our hybrid neuro-wavelet scheme for time series prediction. Given
the time series f(n), n = 1, ..., N, our aim is to predict the lth sample ahead, f(N + l),
of the series; that is, l = 1 for single-step prediction. For each value of l we train a
separate prediction architecture. The hybrid scheme basically involves three stages,
which bear a similarity to the scheme in [11]. In the first stage, the time series is
decomposed into different scales by autocorrelation shell decomposition; in the
second stage, each scale is predicted by a separate NN; and in the third stage, the
next sample of the original time series is predicted from the different scales'
predictions by another NN. More details are expounded as follows.
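The data flow of the second and third stages can be sketched as below; the per-scale neural networks and the combining network are replaced by simple least-squares predictors and additive recombination, so this illustrates only the structure, not the paper's actual models.

```python
import numpy as np

def ar_predict(series, p=3):
    """Least-squares one-step-ahead predictor for a single scale
    (stands in for the per-scale feedforward NN of the second stage)."""
    X = np.array([series[i:i + p] for i in range(len(series) - p)])
    y = series[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(series[-p:] @ coef)

def hybrid_predict(scales, residual, p=3):
    """Second stage: predict each wavelet scale separately.
    Third stage: combine the per-scale predictions; here simply by the
    additive reconstruction property (the paper instead trains another
    NN to weight the scales)."""
    preds = [ar_predict(s, p) for s in scales]
    preds.append(ar_predict(residual, p))
    return sum(preds)
```

Replacing ar_predict with a trained NN per scale, and the final sum with a small network that learns the scale weights, recovers the structure of the scheme in figure 3.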
Figure 3 The Hybrid Neuro-Wavelet Prediction Scheme (a separate predictor for each scale).
4. Experimental Results
For modeling we used 1200 data points, saved in MATLAB under jamal.m, and we
developed a two-layer (hidden and output) feedforward ANN based on an adaptive
learning algorithm. The first step in the design is to create the network object. The
function newff creates a trainable ANN with three inputs p = [r(t); r(t-1); r(t-2)],
50 neurons in the hidden layer, and one neuron in the output layer; the target is
d = r(t+1), and the ANN output (predicted data) is a(t). MATLAB provides a
function called bilinear to implement this mapping; its invocation is similar to that
of the impinvar function, but it also takes several forms for different input-output
quantities. Our input data plot is shown in
figure 4.

Figure 4 Original Data (the original signal, samples 0 to 1200).
Figure 5 (a) Original Signal, (b) Approximation of Decomposed Signal, and (c)
Detail of Decomposed Signal.
Figure 6 Training Record: Performance is 0.000998151, Goal is 0.001 (2096 Epochs).
Figure 7 (a) Detail Signal as Input Signal to the Neural Network, (b) Beta Signal as
Target of the NN, and (c) Error Signal.
The parameters used are as follows: the training rate is η = 0.05, the maximal
number of epochs is set to 3000, and the training goal is 10^-3. The training
function train takes the network object net, the input vector p, and the target
vector d. The function sim simulates the network: it takes the network input p and
the network object net and returns the network output a(t). The training stopped
after n = 2096 epochs, when the training error reached the performance index
(goal) of 10^-3. The training record is shown in figure 6.
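The newff/train/sim workflow described above can be approximated outside MATLAB. The sketch below trains a comparable two-layer network (3 inputs, 50 tanh hidden units, one linear output) with batch gradient descent, η = 0.05, a 10^-3 MSE goal and a 3000-epoch cap; the sinusoidal stand-in series and all implementation details are assumptions, since the original 1200-point data set (jamal.m) is not available here.

```python
import numpy as np

rng = np.random.default_rng(0)
r = np.sin(0.05 * np.arange(300))          # hypothetical stand-in for r(t)

# Inputs p = [r(t); r(t-1); r(t-2)], target d = r(t+1), as in the newff setup.
P = np.array([[r[t], r[t - 1], r[t - 2]] for t in range(2, len(r) - 1)])
d = np.array([r[t + 1] for t in range(2, len(r) - 1)])

# Two-layer feedforward network: 50 tanh hidden units, 1 linear output unit.
W1 = rng.normal(0.0, 0.5, (3, 50)); b1 = np.zeros(50)
W2 = rng.normal(0.0, 0.5, (50, 1)); b2 = np.zeros(1)

eta, goal, max_epochs = 0.05, 1e-3, 3000
for epoch in range(max_epochs):
    H = np.tanh(P @ W1 + b1)               # hidden-layer activations
    a = (H @ W2 + b2).ravel()              # network output a(t)
    err = a - d
    mse = float(np.mean(err ** 2))
    if mse < goal:                         # stop once the goal is reached
        break
    g_a = 2.0 * err[:, None] / len(d)      # gradient of MSE w.r.t. output
    g_h = (g_a @ W2.T) * (1.0 - H ** 2)    # backpropagate through tanh
    W2 -= eta * H.T @ g_a; b2 -= eta * g_a.sum(axis=0)
    W1 -= eta * P.T @ g_h; b1 -= eta * g_h.sum(axis=0)
```

The loop mirrors the stopping behavior reported above: training ends either at the epoch cap or as soon as the MSE falls below the goal.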
5. Conclusion
A literature review on the application of wavelets to time series prediction was
carried out; the continuous and discrete wavelet transforms, their properties, and
their time and frequency selectivity were analyzed.
6. References
[1] A. Aussem and F. Murtagh. A neuro-wavelet strategy for web traffic forecasting.
Journal of Official Statistics, 1:65-87, 1998.
[2] Z. Bashir and M.E. El-Hawary. Short term load forecasting by using wavelet
neural networks. In Canadian Conference on Electrical and Computer
Engineering, pages 163-166, 2000.
[3] V. Bjorn. Multiresolution methods for financial time series prediction. In
Proceedings of the IEEE/IAFE 1995 Conference on Computational Intelligence
for Financial Engineering, page 97, 1995.
[4] R. Cristi and M. Tummula. Multirate, multiresolution, recursive Kalman filter.
Signal Processing, 80:1945-1958, 2000.
[5] D.L. Donoho and I.M. Johnstone. Adapting to unknown smoothness via
wavelet shrinkage. Journal of the American Statistical Association, 90:1200-
1224, 1995.
[6] D.L. Donoho. Nonlinear wavelet methods for recovery of signals, densities, and
spectra from indirect and noisy data. Proceedings of Symposia in Applied
Mathematics, 47, 1993.
[7] D.L. Donoho and I.M. Johnstone. Ideal spatial adaptation by wavelet shrinkage.
Technical Report 400, Stanford University, 1993.
[8] L. Hong, G. Chen, and C.K. Chui. A filter-bank-based Kalman filter technique
for wavelet estimation and decomposition of random signals. IEEE Transactions
on Circuits and Systems II: Analog and Digital Signal Processing, 45(2):237-241,
1998.
[9] J. Moody and W. Lizhong. What is the "true price"? State space models for high
frequency FX data. In Proceedings of the IEEE/IAFE 1997 Conference on
Computational Intelligence for Financial Engineering (CIFEr), pages 150-156,
1997.
[10] S. Soltani, D. Boichu, P. Simard, and S. Canu. The long-term memory
prediction by multiscale decomposition. Signal Processing, 80:2195-2205, 2000.
[11] J.L. Starck and F. Murtagh. Image filtering from multiple vision model
combination. Technical report, CEA, January 1999.
[12] J.L. Starck and F. Murtagh. Multiscale entropy filtering. Signal Processing,
76(2):147-165, 1999.
[13] J.L. Starck, F. Murtagh, and A. Bijaoui. Image Processing and Data Analysis:
The Multiscale Approach. Cambridge University Press, Cambridge, 1998.
[14] J.L. Starck, F. Murtagh, and R. Gastaud. A new entropy measure based on the
wavelet transform and noise modeling. IEEE Transactions on Circuits and
Systems II, Special Issue on Multirate Systems, Filter Banks, Wavelets, and
Applications, 45(8), 1998.
[15] E.G.T. Swee and S. Elangovan. Applications of symmlets for denoising and
load forecasting. In Proceedings of the IEEE Signal Processing Workshop on
Higher-Order Statistics, pages 165-169, 1999.
[16] K. Xizheng, J. Licheng, Y. Tinggao, and W. Zhensen. Wavelet model for the
time scale. In Proceedings of the 1999 Joint Meeting of the European Frequency
and Time Forum and the IEEE International Frequency Control Symposium,
volume 1, pages 177-181, 1999.
[17] G. Zheng, J.L. Starck, J. Campbell, and F. Murtagh. The wavelet transform for
filtering financial data streams. Journal of Computational Intelligence in
Finance, 7(3), 1999.