
Application of Artificial Neural Network for Short Term Load Forecasting in Electrical Power System

Ch. Lavanya, K.S.R.M. College of Engineering, III E.E.E, I SEM, KADAPA. e-mail: lallu18_18@yahoo.com
V. Srividya, K.S.R.M. College of Engineering, III E.E.E, I SEM, KADAPA. e-mail: sri_vidya_eee@yahoo.co.in
Y. Swathi, K.S.R.M. College of Engineering, III E.E.E, I SEM, KADAPA. e-mail: yswathibtech@yahoo.co.in
ABSTRACT

This paper presents a method for short term load forecasting in electric power
systems using an artificial neural network. A multilayered feed forward network with the
back propagation learning algorithm is used because of its good generalising property. The
inputs to the neural network are past load data, heuristically chosen such
that they reflect the trend and load shape as well as some influence of weather. Weather
data is not used to train the network. The network is trained to predict the load one hour
ahead. The generalisation capability of the neural network is also studied.
Simulation results using the system data are presented.

Introduction:

Short term load forecasting is an essential tool in operation and planning of the
power system. It helps in coordinating the generation and area interchange to meet the
load demand. It also helps in security assessment, dynamic state estimation, load
management and other related functions. In the last few decades, various methods for
short term load forecasting have been proposed, ranging from simple regression and
extrapolation to fading memory Kalman filters and knowledge based systems.
Among the various methods available in the literature, most can be classified into
two categories. In the first category are the methods, which rely solely on the past data
and fit the load pattern as a time series. In the second category are the methods, which
give emphasis to the weather variables, i.e., temperature, humidity, light intensity, etc, and
find a functional relationship between these variables and the load demand.
Recently, Artificial Neural Networks (ANN) have been used for short term load
forecasting. Both time series models and weather dependent models have been used in
ANN based short term load forecasting. In this paper, a short-term load forecasting
method using an ANN is proposed. A multilayered feed forward (MLFF) neural
network with the back propagation learning algorithm has been used because of its
simplicity and good generalisation property. The inputs to the neural network are based
only on past load data and are heuristically chosen in such a manner that they inherently
reflect all the major components which influence the system load, such as trend, type of
day, load shape and weather.
The main contributions of this paper are: (i) a heuristic choice of a small set of
inputs which inherently represents the major components of the load pattern, (ii)
introduction of a stopping criterion during the learning phase to avoid over fitting of the
network to the learning examples, and (iii) a detailed analysis of the generalisation
properties, like the interpolation/extrapolation ability of the ANN and the working life of
a trained network, i.e., the useful period of a network after which retraining is required.

FIGURE 1
SINGLE PROCESSING UNIT (PE)
[Figure: an artificial neuron with inputs X1 ... XN, interconnection weights, weighted
sum Z = Σ Wi Xi and activation output f(Z)]
ARTIFICIAL NEURAL NETWORK(ANN)

Artificial Neural Networks are increasingly finding use as an alternative
computational paradigm for solving complex problems like pattern recognition.
Neurons in an ANN can be viewed as simple processing elements (PE). A commonly used
PE representation of an artificial neuron is shown in Fig 1. The PEs can be interconnected
in various topologies. Depending on the topology, activation function and
weight change strategy, a large number of ANN architectures have been developed, e.g.,
the back propagation network, the Hopfield net, the Kohonen net, etc. Among the various
ANN architectures available in the literature, the multilayer feed forward (MLFF)
network with the error back propagation learning algorithm has been selected for this
problem mainly because (i) it is the most simple and comprehensive neural approach for
model based prediction and/or control and (ii) it has good generalisation capability.

Multilayered Feed Forward Network(MLFF)


In an MLFF network the PEs are arranged in layers and only PEs in adjacent layers are
connected. It has a minimum of three layers of PEs: (i) the input layer, (ii) the middle or
hidden layer(s) and (iii) the output layer. Information propagates only in the forward
direction and there are no feedback loops. An MLFF network topology is shown in Fig 2.

FIGURE 2
SCHEMATIC ILLUSTRATION OF MULTILAYER FEED FORWARD (MLFF) NETWORK
[Figure: input layer (units i), hidden layer (units j) and output layer (units k), with
adjacent layers fully connected through weights Wji and Wkj; the input pattern enters
at the input layer and the output pattern leaves the output layer]
In order to obtain a bounded output from the PEs, a sigmoidal activation function is
chosen, whose output is limited to (0, 1) for the input range (-∞, ∞).
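The sigmoidal squashing can be sketched in a few lines (an illustrative snippet, not code from the paper):

```python
import math

def sigmoid(z):
    """Logistic activation: maps any real input into the open interval (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))
```

Because the function is strictly increasing and saturates towards 0 and 1, every PE output stays bounded no matter how large the weighted sum Z becomes.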
The MLFF network uses separate stages for learning and operation. The learning
problem can be stated as: given a set of input-output pairs (I1, O1), ..., (In, On), find the
interconnection weights Wij for each interconnection of the ANN such that the network
maps Ii to Oi for i = 1, 2, ..., n as closely as possible. In the error back propagation
learning algorithm, the interconnection weights are adjusted such that the error function

E = (1/2) Σk (tk - Ok)²

is minimised, where,
tk = desired output of unit k in the output layer
Ok = actual output of unit k in the output layer
The minimisation process is based on the gradient descent algorithm. The interconnecting
weights between the jth layer (upper layer) neurons and the ith layer (lower layer)
neurons are modified using the following relationship:
Wji (new) = Wji (old) + η δj Oi + α [∆Wji (old)]
where, if PEj is an output layer PE, then

δj = Oj (tj - Oj)(1 - Oj)

and if PEj is a hidden layer PE, then

δj = Oj (1 - Oj) Σk δk Wkj

where the sum over k runs over all PEs in the layer above the jth layer, η is the learning
rate and α is the momentum factor. The momentum term helps in faster convergence of
the algorithm.
Once the network is trained, the resulting connection weights are frozen. In the
operation stage the network is used to compute an output from a set of inputs.
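The weight update rule above can be sketched for a single training pattern as follows. This is a minimal illustration under assumed layer sizes (two hidden units, one output, no bias terms); the function name and data layout are not from the paper:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_step(x, t, W1, W2, prev_d1, prev_d2, eta=0.7, alpha=0.9):
    """One back-propagation step for a 1-hidden-layer MLFF net with momentum.

    W1[j][i]: input->hidden weights, W2[j]: hidden->output weights.
    Implements W(new) = W(old) + eta * delta * O + alpha * dW(old).
    Returns the squared error for this pattern before the update.
    """
    # Forward pass through hidden and output layers
    h = [sigmoid(sum(W1[j][i] * x[i] for i in range(len(x)))) for j in range(len(W1))]
    o = sigmoid(sum(W2[j] * h[j] for j in range(len(h))))
    # Deltas per the paper's rules
    delta_o = o * (t - o) * (1 - o)                       # output-layer PE
    delta_h = [h[j] * (1 - h[j]) * delta_o * W2[j] for j in range(len(h))]
    # Weight updates with momentum
    for j in range(len(W2)):
        d = eta * delta_o * h[j] + alpha * prev_d2[j]
        W2[j] += d
        prev_d2[j] = d
    for j in range(len(W1)):
        for i in range(len(x)):
            d = eta * delta_h[j] * x[i] + alpha * prev_d1[j][i]
            W1[j][i] += d
            prev_d1[j][i] = d
    return 0.5 * (t - o) ** 2
```

Repeated calls over the training patterns drive the error function E downward by gradient descent; the momentum term α reuses the previous weight change to speed convergence.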
PROPOSED METHOD
Characteristics of the Load Data

In order to reflect the load behavior in the input information, the historical hourly
load data for 1 year of a number of systems were analyzed. It was observed that the load
data exhibits a daily and weekly periodicity. It was also observed that the daily load
pattern for the working days showed marked similarity whereas the holiday load patterns
were quite different from those of the working days. Therefore, hourly loads for working
days and holidays were treated separately. The auto-correlation of the hourly load was
obtained using

rk = [ Σ(t=1 to n-k) (yt - ȳ)(yt+k - ȳ) ] / [ Σ(t=1 to n) (yt - ȳ)² ]

where,
rk = auto-correlation factor for time lag k
n = total number of available data points
ȳ = mean value of the available data
yt = load for the tth hour

The auto-correlation was computed for two weeks (336 hours) of data for the test systems
and is shown in Fig 3. The loads 24 hours and 168 hours earlier are highly correlated with
the current hour load. Based on these observations, five hourly loads were heuristically
chosen and used as input information. These inputs are as follows: (i) previous hour load
(L-1), (ii) previous to previous hour load (L-2), (iii) previous day (same day type) same
hour load (L-24), (iv) previous week same day and same hour load (L-168), (v) previous
week same day but previous hour load (L-169).

Figure 3
Auto-Correlation Factor (rk) for two weeks of load on the test system
Among these, L-24 and L-168 reflect the daily and weekly periodicity of the hourly load.
L-1, L-2, L-168 and L-169 reflect the trend of the hourly load pattern and L-1 and L-2 also
implicitly reflect the weather effect.
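Assembling the five heuristic inputs for a given hour from an hourly load history can be sketched as follows (the function name and list representation are illustrative assumptions):

```python
def make_input(load, h):
    """Five heuristic inputs for hour h from an hourly load history list.

    Lags follow the paper: L-1, L-2, L-24, L-168 and L-169 hours before h.
    Requires at least 169 hours of history before index h.
    """
    lags = (1, 2, 24, 168, 169)
    return [load[h - k] for k in lags]
```

Note that because the longest lag is one week plus one hour, at least 169 hours of history must exist before the first hour that can be forecast.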
Scaling of the Input and Output Data
The input and output variables for the neural network would have very different
ranges if the actual hourly load data were used directly. This may cause convergence
problems during the learning process. To avoid this, the input and output load data were
scaled such that they lay within the range (0, 1), with the majority of the data having
values near 0.5. For this purpose the actual load was scaled using the following
relationship:

Ls = (L - Lmin) / (Lmax - Lmin)
where,
L = the actual load
Ls = the scaled load which is used as input to the net
Lmax = the maximum load, taken as 1.5 to 2 times the peak load for the whole year
Lmin = the minimum load, taken as 0.5 to 0.75 times the valley load
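The scaling relationship can be sketched as follows. The specific factors 1.75 and 0.6 are illustrative mid-range picks from the 1.5-2 and 0.5-0.75 bands stated above, not values the paper fixes:

```python
def make_bounds(peak, valley, peak_factor=1.75, valley_factor=0.6):
    """Scaling bounds: Lmax from the annual peak, Lmin from the annual valley.

    The factors are assumed mid-range choices within the paper's stated bands.
    """
    return valley * valley_factor, peak * peak_factor

def scale_load(L, Lmin, Lmax):
    """Min-max scale an actual load into (0, 1) per the relationship above."""
    return (L - Lmin) / (Lmax - Lmin)
```

Widening the bounds beyond the observed peak and valley keeps typical loads near the middle of the sigmoid's output range, where the activation function is most sensitive.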

ANN Architecture
The artificial neural network used is a feed forward network with three
layers, i.e., an input layer, one hidden layer and an output layer. The number of neurons
in the input layer is equal to the number of variables in the input data. The output layer
consists of one neuron. The choice of the number of hidden layer neurons is arbitrary,
and an optimal number is generally obtained through trial and error. A large number of
simulations showed that a large number of neurons in the hidden layer leads to long
training times and creates a "grandmother" network: such a network memorises the
learning patterns very well but does not perform well on new sets of inputs. With too
small a number of hidden layer neurons, on the other hand, the network has difficulty in
learning, as it is unable to create the required complex decision boundaries. Therefore, a
good starting point for the trial and error choice of hidden layer neurons is the geometric
mean of the number of input and output layer neurons.
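The geometric-mean starting point can be sketched as a one-line helper (a hypothetical function; rounding to the nearest whole neuron is an assumption):

```python
import math

def hidden_neurons(n_in, n_out):
    """Trial-and-error starting guess for the hidden-layer size:
    the geometric mean of the input and output layer sizes."""
    return max(1, round(math.sqrt(n_in * n_out)))
```

For the five-input, one-output networks used here this gives √(5×1) ≈ 2 hidden neurons as the starting point.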

Stopping Criteria
Fig 4 shows the convergence characteristics of the learning algorithm for the IEEE 24
bus system. The network was tested after every iteration during learning. Initially, the
mean square error (MSE) for both the training and the testing set decreases gradually. But
after some iterations, i.e., around 2000 iterations, the MSE for the testing examples starts
to increase even though the MSE for the learning examples still decreases, i.e., the
network starts over fitting the training set from this point. Thus, the learning should be
stopped at this point.
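The stopping criterion can be sketched as a generic early-stopping loop. This is an illustrative sketch: the patience-based test is an assumption of mine, since the paper simply stops once the test-set MSE turns upward:

```python
def train_with_early_stopping(step_fn, eval_fn, max_iters=10000, patience=50):
    """Stop learning when the test-set MSE has not improved for `patience`
    iterations; return the iteration count and the best MSE seen.

    step_fn() performs one training pass; eval_fn() returns the test-set MSE.
    """
    best = float("inf")
    since_best = 0
    it = 0
    for it in range(1, max_iters + 1):
        step_fn()
        mse = eval_fn()
        if mse < best:
            best, since_best = mse, 0   # new minimum on the test set
        else:
            since_best += 1             # test error is no longer improving
            if since_best >= patience:
                break                   # over-fitting has begun; stop here
    return it, best
```

In practice one would also keep a copy of the weights at the best iteration, since the final weights belong to a slightly over-fitted network.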

Simulation and Results


Test Systems
The developed algorithm was tested with hourly load data for the following
systems: (i) OSEB (Orissa State Electricity Board, India); (ii) the IEEE 24 bus reliability
test system. The two systems have quite different daily load patterns. The load data of the
OSEB system for the year 1990 has its peak load in August and its valley load in March.
The IEEE 24 bus reliability test system is a winter peaking system with its peak load in
December; it has a second peak in June at 90% of the annual peak.
As the daily load patterns for normal working days were quite different from those of
weekend days and holidays, the load data for each system was divided into two groups,
i.e., normal working days, and weekend days and holidays. These two sets were treated
separately.

TABLE 1
INPUTS TO THE DIFFERENT NETWORKS

System       Day      η    α    Training set   No. of training  Test set    No. of test
Name         Type               period         patterns         period      patterns
IEEE-24 bus  Weekday  0.7  0.9  April to Aug   60               October     20
IEEE-24 bus  Weekend  0.7  0.9  April to Aug   40               October     8
OSEB         Weekday  0.7  0.9  Nov to Feb     75               Dec, 1990   20
Figure 4
Convergence characteristics of MLFF Neural Network

Supervised Learning
The ANNs used for forecasting the hourly load consist of five input neurons, two
hidden layer neurons and one output layer neuron. Twenty-four separate ANNs, one for
each hour of the forecast, were trained using the input-output data pairs. Separate ANNs
were trained for weekdays and weekends. Thus, a total of 48 ANNs were trained for each
system. On the basis of a large number of simulations, optimal values for the learning
coefficient (η) and momentum factor (α) used for training each ANN were obtained.
After the convergence of the training algorithm, each ANN was tested using input-output
pairs from the test set data. The details of the training set and test set data as well as the
learning parameters η and α for a particular hour (10 AM) are presented in Table 1. The
summary of the ANN forecasts for one day (24 hours) is presented in Table 2.

TABLE 2
SUMMARY OF ANN FORECASTING RESULTS

System    Index            Peak     Valley   Max forecast  Av forecast
Name                       Load     Load     error (%)     error (%)
IEEE-24   hour             11th     4th      1.945         0.748
Bus       Act value (MW)   1982.37  1149.76
          Pred value (MW)  2011.63  1149.63
          Error (%)        1.476    0.011
IEEE-24   hour             21st     7th      1.881         0.913
Bus       Act value (MW)   1931.26  1197.25
          Pred value (MW)  1919.29  1186.99
          Error (%)        0.62     0.857
OSEB      hour             19th     15th     2.627         1.224
          Act value (MW)   1056.04  724.973
          Pred value (MW)  1045.78  726.998
          Error (%)        0.97     0.279

TABLE 3
DESCRIPTION OF TRAINING AND TESTING DATA SETS FOR ANN GENERALISATION TEST

Case      Training Set Data      Testing Set Data       Remark
Case I    Taken randomly from    Taken randomly from    Testing extrapolating ability
          regions B & C          regions A & D          in both directions
Case II   Taken randomly from    Taken randomly from    Testing extrapolating ability
          regions C & D          regions A & B          in the upward direction
Case III  Taken randomly from    Taken randomly from    Testing extrapolating ability
          regions A & B          regions C & D          in the downward direction
Case IV   Taken randomly from    Taken randomly from    Testing interpolating ability
          regions A, B, C & D    regions A, B, C & D    of the ANN
Testing Generalisation Property
In order to test the generalisation property, or the extrapolation and interpolation
capability, of the network in more detail, the hourly load data was divided into four
groups, i.e., A, B, C and D. Four distinct training and testing sets were prepared as
detailed in Table 3. The results are presented in Table 4, from which it can be seen that
the network is able to perform both interpolation and extrapolation quite well, with less
than 5% average error. The extrapolation ability is of particular interest as it shows that
the network can predict even for unknown situations. Only for a few stray cases have the
errors been more than 5%.

TABLE 4
RESULTS FROM GENERALISED NETWORKS

           Training Error (%)         Testing Error (%)
           max     min      av        max     min     av
Case I     3.62    0.213    1.487     5.334   0.22    2.26
Case II    2.93    0.085    0.986     6.269   0.02    3.89
Case III   3.19    -0.040   1.40      4.915   0.12    3.36
Case IV    3.91    0.048    1.26      4.594   -0.01   0.97
Conclusions
This paper presents a short-term load forecasting method using a multi-layered
feed-forward Artificial Neural Network with back-propagation learning algorithm. A
heuristic choice of only five inputs from the load data history is used to represent the
major factors influencing the load pattern. The small number of inputs used leads to a
small network, which in turn requires less learning time. A stopping criterion for learning
is introduced; this avoids unnecessary over-learning, which may degrade the
generalisation capability of the network. Extensive testing of the network shows that it
has very good generalisation capability.

