
A FAST ALGORITHM FOR POWER SYSTEM OPTIMIZATION
PROBLEMS USING AN INTERIOR POINT METHOD


K. Ponnambalam†    V.H. Quintana*    A. Vannelli*
† Department of Systems Design Engineering
* Department of Electrical & Computer Engineering
University of Waterloo
Waterloo, Ontario
Canada N2L 3G1
ABSTRACT. Linear Programming (LP) is a widely used
tool for solving many linear/non-linear power-system optimi-
zation problems such as transmission planning, constrained-
security dispatch, optimal power flow, emergency control,
etc. Variants of simplex-based methodologies are generally
used to solve the underlying LP problems. In this paper an
implementation of the newly developed Dual Affine (DA)
algorithm (a variant of Karmarkar's interior point method) is
described in detail and some computational results are
presented. This algorithm is particularly suitable for prob-
lems with a large number of constraints, and is applicable to
linear and nonlinear optimization problems. In contrast with
the simplex method, the number of iterations required by the
DA algorithm to solve large-scale problems is relatively
small, generally between 20 - 60, irrespective of problem size;
this feature makes the DA method very fast in practical
applications. The DA algorithm has been implemented con-
sidering the sparsity of the constraint matrix. The normal
equation that is required to be solved in every iteration is
solved using a preconditioned conjugate gradient method.
An application of the proposed optimization technique to
hydro-scheduling is presented; the largest problem is
comprised of 880 variables and 3680 constraints, and is
solved over 9 times faster than an efficient simplex (MINOS)
code. Moreover, as the problem size grows, the speedup
ratio over the simplex method also increases. We have also
implemented a new heuristic basis recovery procedure to
provide primal and dual optimal basic solutions which are
not generally available if interior point methods are used.
The tested examples indicate that this new approach
requires less than ten percent of the original iterations of the
simplex method to find the optimal basis. This feature is
very useful in accelerating simplex-based linear/nonlinear
optimization procedures.
KEYWORDS. Optimization, Linear Programming,
Karmarkar's Interior Point Methods, Hydro Scheduling
1. INTRODUCTION
A large number of current techniques used to solve
linear/non-linear power-system optimization problems
depend greatly on primal/dual simplex methods because of
the underlying linear programming structure [1-3]. A new
interior-point method for solving LP problems was developed
in 1984 by Karmarkar [4] as an alternative to the simplex
method. It became apparent that with an efficient imple-
mentation of Karmarkar's algorithm it was possible to
attain considerable speedups over the simplex method for
some very large, sparse problems. Karmarkar and Ramak-
rishnan [5] reported significant speedups over the simplex
method of MINOS [6], sometimes over 100 times, for a
variety of problems which included sparse and dense con-
straint matrices, and very large problems of many thousands
of variables and constraints.
In this paper we describe a preliminary implementation
of the Dual Affine (DA) algorithm for solving large-scale
engineering problems, such as the hydro-scheduling problem
that isincluded as an application The dual affine algorithm
isan interior point method that uses a linear transformation,
while the original Karmarkar algorithm used a nonlinear pro-
jective transformation of the problem. The original Karmar-
kar algorithm [4] has mathematically provable better compu-
tational complexity but, in practice, the affine algorithms
(which could be derived as simplifications of the Karmarkar
algorithm (5]), are better for computational speed. Although
an affine algorithm was first proposed by Dikin [7], it was
Karmarkar ( Kozlov, [SI) who had any significant computa-
tional results until 1986 when Adler et al. [9], implementing
the dual affine algorithm with the help of Karmarkar, also
reported manifold speedups over the simplex method. Many
others (see Bazaraa et al., [lo], page 417, for a historical note
on projective and affine scaling algorithms) also had pro-
posed affine scaling algorithms following the publication of
Karmarkars original algorithm in 1984.
In the dual affine algorithm, the problem solved is usu-
ally the dual of the original Linear Programming (LP) prob-
lem. In the case of degeneracy, as in most real world prob-
lems, determining the primal basic solution is a non-trivial
exercise. The determination of the primal basis is critical for
warm starting the simplex iterations for modified LP prob-
lems, especially if Successive Linear Programming (SLP)
iterations are used to solve non-linear optimization problems.
Determining a basic solution from a non-basic solution is an
old problem (see Charnes et al. [11]); in the context of
recent interior point methods, basis recovery procedures
were presented by Ye and Kojima [12], Megiddo [13] and
others (see [10] for a short review). The work by Ye and
Kojima [12] had an assumption of non-degeneracy, and in
Megiddo [13] there were no reports of tests or results on the
proposed basis recovery procedure. Ponnambalam [14]
independently presented a method for basis recovery based
on complementary slackness, and also used the method as a
stopping criterion. The method was also presented in [15]
and will be briefly described in a later section of this paper.
For brevity, we will consider only the LP aspects in this
paper; SLP aspects for nonlinear problems are reported else-
where [16]. Therefore, the main objective of this paper is to
report results corresponding to an application of the dual affine
algorithm to a specific problem, namely, the hydro-
scheduling problem. Our implementation is simple and
easily applicable, but sophisticated enough to demonstrate
the greater potential of the dual affine algorithm over the
traditionally-used simplex method for solving problems similar
to the hydro-scheduling problem. For a more advanced
implementation of this algorithm, suitable for a wider variety
of problems, the reader is referred to [5] and [9].
Before we describe the dual affine algorithm, we want to
state the major differences between the simplex method and
the dual affine algorithm, especially those that are important
to engineering applications. It is noted that the simplex
method is generally used to solve problems with more vari-
ables than constraints -- and most commercial packages, such
as MINOS [6] and MPSX [17], assume such a form. However,
in many power-system optimization problems there are usu-
ally a greater number of constraints (m) than variables (n).
In such problems, we usually resort to in-house implementa-
tions of the dual simplex method that are suitable for such
problems (for instance, [1]); the number of simplex iterations
(pivots) may depend linearly on the number of constraints
plus variables, that is, on (n+m), and each iteration costs
approximately in the order of n^2 operations [18]. However, in practice,
because of the accumulation of numerical roundoff errors,
especially in large problems where the constraint matrix is
not of purely network structure, there is a frequent need
(approximately once in 50-200 simplex pivots) for reinverts
(that is, the entire basis has to be refactorized), which cost
in the order of n^3 operations. Therefore, for large problems
consisting of over 1000 to 2000 variables, the simplex method
may need upwards of 20 to 50 LU factorizations, each cost-
ing in the order of n^3 operations.
The dual affine method is directly applicable to problems
with many more constraints than variables. In fact, the
computer time to solve the LP problem depends directly on
n^3, and only indirectly on the number of constraints. This
point will become clear in the section where the dual affine
algorithm is described. In addition, as found in practice, the
number of iterations required to solve linear problems of any size is
only about 20 to 60 ([5], [9] and [15], among many oth-
ers). Therefore, for large problems consisting of over 1000 to
2000 constraints and variables, it is clear that the simplex
method will be easily outrun by the dual affine algorithm;
moreover, the larger the problem size, the larger will be the
speedup in favour of the dual affine algorithm. Only in
problems where the constraint matrix is purely of network
structure can a special-purpose network simplex code, such as that in
[19], match the dual affine algorithm, even up to problems
consisting of tens of thousands of variables and constraints. This
is mainly due to the non-requirement of frequent basis rein-
verts. Lastly, the dual affine algorithm can take advantage
of indirect methods for solving simultaneous equations, such
as the conjugate gradient method, unlike the simplex method,
where a direct factorization method is essential.
2. LP MODEL OF HYDRO-SCHEDULING
OPTIMIZATION
For simplicity, let us assume a system of two reservoirs seri-
ally connected and a linear objective function [23],

max { \sum_{j=1}^{T} \sum_{i=1}^{2} c_{ij} d_{ij} }     (2.0)

where T is the number of periods in the horizon, c_{ij} is the
linear objective-function coefficient and d_{ij} is the release
from reservoir i in period j. The constraints are basi-
cally the volume conservation constraints, the lower and
upper bounds on the reservoir storage volumes and releases.
Let us assume a system with no spills and no evaporation
losses, and define s_{ij} as the i-th reservoir's storage volume at
the beginning of period j, and q_{ij} as the natural inflow to the
i-th reservoir during period j. The lower and upper bounds
for releases and storage volumes for the i-th reservoir in
period j are d_{ij}^{min}, d_{ij}^{max}, s_{ij}^{min} and s_{ij}^{max},
respectively. The initial and final storage volumes for the i-th
reservoir are given by s_{i1} and s_{i,T+1}, respectively. The
complete model can be formulated as follows:
max { -c_{11} s_{12} - c_{21} s_{22} + \sum_{j=2}^{T-1} \sum_{i=1}^{2} c_{ij} (s_{ij} - s_{i,j+1}) + c_{1T} s_{1T} + c_{2T} s_{2T} }     (2.1)

subject to the bound constraints (2.12)-(2.19), that is, the
release bounds d_{ij}^{min} <= d_{ij} <= d_{ij}^{max} and the
storage bounds s_{ij}^{min} <= s_{ij} <= s_{ij}^{max} for each
reservoir i and period j, written in terms of the storage
variables after the volume conservation equations are used to
eliminate the releases.
The reasons for the above form are
(i) the constraint set is of the form A x <= b, where the equal-
ity constraints with respect to the initial and final
storage volumes, that is, the volume conservation con-
straints, are eliminated; and
(ii) as it is, the matrix A contains the staircase structure,
that is, blocks along the diagonal and zeros everywhere
else, essentially requiring no reordering of rows and
columns to get a tight banded structure. In this serially
connected two-reservoir problem, the maximum number
of nonzeros in any one row is 4. The number of nonzeros
in a row is a function of the number of reservoirs and the
system's configuration. For example, series of reservoirs
produce more nonzeros per row than parallel systems of
reservoirs. As the number of reservoirs and the number
of periods in the horizon grows, the matrix A becomes
larger and sparser while retaining the staircase structure.
The sparsity and staircase structure facilitate fast solu-
tion of the projection calculations in the dual affine algo-
rithm and, in fact, the calculations can be done more
efficiently using parallel processors. (A construction
sketch is given after this list.)
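To make the staircase structure concrete, the following is a rough Python sketch (not the authors' FORTRAN code) of how the release-bound rows of A x <= b can be assembled for reservoir 1 after the conservation relation d_{1j} = s_{1j} - s_{1,j+1} + q_{1j} is used to eliminate the releases; the function and variable names are hypothetical, and reservoir 2's rows (which pick up the upstream release and raise the nonzeros per row to 4) are omitted for brevity. Storage bounds simply add +/- identity rows, completing the staircase.

```python
import numpy as np

def release_bound_rows(T, q1, d1min, d1max, s11, s1Tp1):
    """Sketch: rows enforcing d1min <= d_1j <= d1max for reservoir 1,
    in the unknowns x = (s_12, ..., s_1T); s_11 and s_1,T+1 are fixed
    data (hypothetical names, assumed scalar bounds)."""
    n = T - 1                       # interior storage variables
    rows, rhs = [], []
    for j in range(1, T + 1):       # one release per period
        r = np.zeros(n)
        const = q1[j - 1]           # inflow contribution to d_1j
        if j > 1:
            r[j - 2] += 1.0         # + s_1j
        else:
            const += s11            # s_11 is a fixed datum
        if j < T:
            r[j - 1] -= 1.0         # - s_1,j+1
        else:
            const -= s1Tp1          # s_1,T+1 is fixed
        rows += [r, -r]             # d_1j <= d1max and -d_1j <= -d1min
        rhs += [d1max - const, const - d1min]
    return np.array(rows), np.array(rhs)
```

Each release row touches at most two consecutive storage variables, which is exactly the banded, staircase pattern exploited by the projection calculations.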
However, in certain reservoir configurations, mainly in
systems consisting of a large number of reservoirs in series, the
constraint matrix as formed above may become unneces-
sarily dense. In such problems, the following formulation is
more favourable to retaining sparsity without increasing the
size of the problem. For example, the problem considered
has the form where the number of constraints is larger
than the number of variables:
Primal:   max { c^T x }     (2.20)

subject to

A_1 x = b_1     (2.21)
A_2 x <= b_2     (2.22)
Because the dual affine algorithm is suitable for problems
with only inequality constraints, the above problem is
transformed to the following form:
Primal:   max { c^T x + p^T (A_1 x - b_1) }     (2.23)

subject to

A_1 x <= b_1     (2.24)
A_2 x <= b_2     (2.25)

where the equality constraint set (2.21) has been converted
to the inequality shown in (2.24), and p^T = [p, p, ..., p].
If a suitably large value for p is chosen -- for instance, 10^4 --
then the solutions of problems (2.20)-(2.22) and (2.23)-(2.25)
will be identical, usually up to 4 to 6 decimal places.
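As a small illustration of this penalty transformation, the sketch below folds the equality rows into the objective and keeps all constraints as inequalities; the uniform penalty value p = 1e4 is an assumption following the text, and the constant -p^T b_1 is dropped since it does not affect the maximizer.

```python
import numpy as np

def penalize_equalities(c, A1, b1, A2, b2, p=1e4):
    """Sketch of the transformation (2.20)-(2.22) -> (2.23)-(2.25).
    The equality rows A1 x = b1 are relaxed to A1 x <= b1 and pushed
    toward equality by the reward p'(A1 x - b1) in the maximization."""
    pvec = np.full(A1.shape[0], p)
    c_new = c + A1.T @ pvec          # (c + A1'p)'x == c'x + p'(A1 x)
    A_new = np.vstack([A1, A2])      # all-inequality constraint set
    b_new = np.concatenate([b1, b2])
    return c_new, A_new, b_new
```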
3. THE DUAL AFFINE ALGORITHM
The LP form considered in the dual affine algorithm is given
by (2.23)-(2.25). It is assumed that the problem has an inte-
rior point x^0 and is bounded. These assumptions are reason-
able for solving real-world problems; if the problem fails to
adhere to these assumptions, then usually there is no solu-
tion or the solution is unbounded. The results from the algo-
rithm will indicate the problem's failure to satisfy the above
assumptions.

With x^0 as the starting point, the algorithm generates a
sequence of feasible interior points for the LP problem,
(x^1, x^2, ..., x^k, ...), such that

c^T x^{k+1} > c^T x^k.     (3.1)

The algorithm terminates at the satisfaction of the stopping
criterion. Let

v^k = b - A x^k     (3.2)

be an m-vector of nonnegative slack variables at each itera-
tion k. Let the diagonal matrix

D_k = diag(1/v_1^k, ..., 1/v_m^k)     (3.3)

be used to do the affine transformation of the constraint
matrix A. The body of the algorithm is described below.
The reader is referred to [9] for the derivation details.
Algorithm
Let x^0 and \gamma be given such that A x^0 < b and \gamma = 0.99
Set k := 0
While stopping criterion not satisfied, do
    v^k := b - A x^k
    d_x := (A^T D_k^2 A)^{-1} c
    d_v := -A d_x
    y^k := -D_k^2 d_v
    \alpha := \gamma \times \min { -v_i^k / (d_v)_i : (d_v)_i < 0, i = 1, ..., m }
    x^{k+1} := x^k + \alpha d_x
    set k := k + 1
end
where y^k is the tentative dual solution. The key point is
that the directional vector d_x, at each iteration k, can be
calculated only approximately while still maintaining feasibility
of x^{k+1}. As in gradient projection methods, a feasible direction
vector along which the objective function increases is found, and
then an appropriate step length is determined to find a
new feasible solution which is strictly better than the previ-
ous solution. The differences between the simplex approach
and the new interior point approach are seen in the steps used
to find the next solution: a new corner-point solution is deter-
mined in each iteration of the simplex method, whereas in
the interior point method a new feasible solution is deter-
mined by projecting the current interior point onto the
boundary of the largest ellipsoid contained in the feasible
region.
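The following dense Python sketch shows one way the iteration above can be realized; it is illustrative only (the paper's implementation is sparse FORTRAN with a preconditioned conjugate gradient), the function name is hypothetical, and the stopping rule shown is the simple relative-improvement test (3.12) rather than the authors' basis-recovery criterion.

```python
import numpy as np

def dual_affine(A, b, c, x0, gamma=0.99, eps=1e-8, max_iter=100):
    """Illustrative dense dual affine iteration for max c'x s.t.
    Ax <= b, started from a strictly interior x0 (A x0 < b)."""
    x = np.asarray(x0, dtype=float)
    obj_prev = c @ x
    for _ in range(max_iter):
        v = b - A @ x                       # slacks v^k (3.2)
        D2 = 1.0 / v**2                     # diagonal of D_k^2 (3.3)
        B = A.T @ (D2[:, None] * A)         # B_k = A' D_k^2 A (3.6)
        d_x = np.linalg.solve(B, c)         # direction d_x (3.4)
        d_v = -A @ d_x                      # slack direction
        y = -D2 * d_v                       # tentative dual solution
        neg = d_v < 0
        if not neg.any():                   # no blocking slack
            raise RuntimeError("problem appears unbounded")
        alpha = gamma * np.min(-v[neg] / d_v[neg])  # ratio test
        x = x + alpha * d_x
        obj = c @ x
        if abs(obj - obj_prev) <= eps * max(1.0, abs(obj_prev)):
            break                           # relative-improvement test
        obj_prev = obj
    return x, y
```

Note that moving along +d_x always increases the objective, since c^T d_x = c^T B_k^{-1} c > 0 for the positive-definite B_k.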
3.1 Calculation of the Direction Vector d_x
Almost all the calculations required in each iteration of the
algorithm are spent in finding the direction vector d_x along
which the objective function increases inside the feasible
polytope,

d_x = (A^T D_k^2 A)^{-1} c.     (3.4)

For a dense constraint matrix A, the number of calculations
required to find the direction vector d_x is O(n^3), since the fol-
lowing normal equations are solved directly,

B_k d_x = c     (3.5)

where

B_k = A^T D_k^2 A     (3.6)

is a symmetric and positive-definite matrix (this is
because A is a full-rank matrix due to the nature of the
problem constraints). As the iterations proceed, only the
diagonal matrix D_k changes; B_k remains a symmetric
positive-definite matrix and retains the same structure.
3.2 Initial Solution
Adler et al. [9] propose the following Phase I procedure for
determining the initial feasible interior point. Let x^0 be a
tentative starting point and

v^0 = b - A x^0.     (3.7)

If v^0 > 0, then x^0 is the initial solution. If not, an
artificial variable with starting value

x_a^0 = -2 \times \min { v_i^0 ; i = 1, ..., m }     (3.8)

is added, and (x^0, x_a^0) is an initial point for the following LP
problem of the Phase I step:

max { c^T x - M x_a }     (3.9)

subject to

A x - e x_a <= b     (3.10)

where e^T = (1, 1, ..., 1) and M is estimated as in (3.11),
in terms of a large constant p, for example 10^4. The dual affine
algorithm described above is applied to this problem
and is terminated when x_a^k < 0. Provided p is chosen large
enough, this Phase I procedure will find either an initial
feasible point or the optimal solution itself.
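A minimal sketch of this Phase I construction is given below; since the exact form of (3.11) is not reproduced here, the estimate of M from p is an assumption, as are the helper names.

```python
import numpy as np

def phase1_setup(A, b, c, x0, p=1e4):
    """Sketch of the Phase I construction of Section 3.2.  The form
    of M below is an assumption standing in for (3.11); p = 1e4
    follows the 'large constant' suggested in the text."""
    v0 = b - A @ x0
    if np.all(v0 > 0):
        return None                          # x0 is already interior
    xa0 = -2.0 * np.min(v0)                  # artificial start (3.8)
    e = np.ones((A.shape[0], 1))
    A_aug = np.hstack([A, -e])               # A x - e x_a <= b (3.10)
    M = p * max(1.0, np.abs(c).max())        # assumed estimate of M
    c_aug = np.append(c, -M)                 # max c'x - M x_a (3.9)
    x_aug0 = np.append(x0, xa0)              # strictly interior start
    return c_aug, A_aug, b, x_aug0           # iterate until x_a < 0
```

The shift by x_a^0 makes every augmented slack v_i^0 + x_a^0 strictly positive, so the augmented problem has the interior point the algorithm requires.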
3.3 Stopping Criterion and Basis Recovery
Adler et al. [9] used the following stopping criterion, a cri-
terion often used in non-linear optimization algorithms. The
iterations are stopped when the relative improvement of the
objective function is small, that is,

|c^T x^k - c^T x^{k-1}| / max{1, |c^T x^{k-1}|} < \epsilon     (3.12)

where \epsilon is a small positive value, for example the 10^{-8}
used in their implementation. Adler et al. [9] also suggested an
alternative stopping criterion based on complementary
slackness. The complementary slackness criterion can also be
used to verify that x^k and y^k are really the approximate
optimal primal and dual solutions, respectively. The above
alternative stopping criterion is to iterate until the duality gap
and the complementary slackness conditions,

|c^T x^k - b^T y^k| < \epsilon_1     (3.13)

and

|y_i^k v_i^k| < \epsilon_2,  i = 1, ..., m,     (3.14)

are satisfied for given small values of \epsilon_1 and \epsilon_2. However, we
have used a different stopping criterion that, in practice,
saves a significant number of iterations towards the end. The
proposed stopping criterion [14] is based on the dual and
slack values and, therefore, is similar to the criterion based
on complementary slackness. The advantage of the
latter two stopping criteria is that the solution obtained is
indeed optimal or in the neighborhood of optimality.
The stopping criterion check is activated only when
either the dual solution b^T y or the primal solution c^T x has a
relative growth of less than \epsilon, similar to that defined in [9]. However,
herein \epsilon can be much larger than the 10^{-8} used in [9]; a
conservative value for \epsilon herein is 10^{-4}. If the value of \epsilon is too
large, then there is the danger of prematurely stopping the
iterations.
The following steps are used in the application of the
proposed stopping criterion.

Step 1.
Sort the slack variables, given by v_i^k = b_i - \sum_{j=1}^{n} a_{ij} x_j^k,
i = 1, ..., m, in ascending order, and find the indices of the
first n constraints.

Step 2.
Check 1. If the dual values y_i^k belonging to the set of n con-
straints determined above are all positive, then stop the itera-
tions. The n constraints determined above form the basic
constraint set. If Check 1 fails and there are only n - r con-
straints whose dual values are positive, then perform Check 2.

Check 2. If the |y_i^k| belonging to the remaining m - n + r constraints are
near zero (for example, about 10^{-8}), then stop the iterations but
report that there are multiple optimal solutions. If Check 2
fails, continue the iterations until either Check 1 or Check 2
becomes true or the maximum allowed number of iterations
is exceeded. It is always possible to use the first stopping
criterion suggested in [9] as an additional stopping criterion
in case both Checks 1 and 2 fail to stop the iterations and the number of
iterations is already prohibitively large.
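A compact sketch of Steps 1 and 2 might look as follows; the tolerance, the return convention, and the simplified handling of the n - r count in Check 2 are assumptions made for brevity.

```python
import numpy as np

def basis_recovery_checks(A, b, x, y, tol=1e-8):
    """Sketch of the proposed stopping criterion (Steps 1-2)."""
    m, n = A.shape
    v = b - A @ x                      # slacks (Step 1)
    order = np.argsort(v)              # ascending slack order
    basic, rest = order[:n], order[n:]
    if np.all(y[basic] > 0):           # Check 1: duals of the n
        return "optimal", basic        #   tightest constraints positive
    if np.all(np.abs(y[rest]) < tol):  # Check 2: remaining duals near 0
        return "multiple_optima", basic
    return "continue", basic           # keep iterating
```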
The proposed stopping criterion may find most of the
basic constraints but not all of them, especially in degen-
erate problems. The degeneracy could be due to: (i) the
existence of multiple optimal solutions for the dual problem,
that is, the optimal objective-function plane passes through
a constraint plane or through the plane intersected by many
constraint planes, resulting in a non-basic solution (note that
the dual affine method will always tend to end up at a non-
basic optimal solution if multiple optimal solutions exist);
and (ii) an active constraint being linearly dependent on one or
more of the other active constraints. Due to multiple optimal solu-
tions and linear dependencies (degenerate problems have
either one or both), the basis identified may be singular.
Currently, we have used MINOS [6] to remove the singular-
ity by substituting an appropriate slack column. Once a basis
is identified, Phases I and II of the simplex method are used
to identify the optimal basis. For many degenerate problems,
we have tested the above procedure and obtained considerably
fewer simplex iterations to reach the optimal solution than with a
cold start, that is, with no starting basic feasible solution
(Ponnambalam et al. [20]). In summary, the advantages of
the proposed stopping criterion are
(i) early stopping, saving a considerable number of iterations
towards the end of the optimization; and
(ii) identification of the basic constraint set, which can be
used in a standard simplex package to perform post-
optimality and sensitivity analyses.
4. COMPUTATIONAL ISSUES
The two most computationally expensive steps in the
algorithm are (i) formulating the A^T D_k^2 A matrix, and (ii)
solving for d_x. The computer implementation should vary
depending upon whether the problem is sparse with any spe-
cial structure, or dense. The number of calculations required
to determine the direction vector d_x is O(n^3) for dense
matrices, since the following normal equations are solved
directly:

B_k d_x = c

where B_k = A^T D_k^2 A is an n x n symmetric and positive-
definite matrix. As the iterations proceed, the diagonal
matrix D_k changes while B_k remains a symmetric positive-
definite matrix retaining the same nonzero structure. In our
implementation, a minimum-degree ordering algorithm is
used to order B_k such that minimum fill-in occurs during the
factorization of B_k. Moreover, the fact that the sparsity
structure of B_k remains the same through all iterations is
taken advantage of while solving the normal equations in
each iteration.
For dense problems, formulation of A^T D^2 A is much
more expensive (approximately 0.5 n^2 m multiplications) than solving for
d_x (approximately 0.16 n^3 multiplications for the Cholesky factorization).
Moreover, we have used the Cholesky factorization, which is
a few times faster than the QR factorization while still maintaining
adequate factorization accuracy. For dense prob-
lems, at most iterations we update the factorization for solv-
ing d_x; the normal equations are explicitly formulated only
the first time and at most a couple of times in a total of
about 20 iterations. The computational advantage is clear:
updating the factorization involves O(n^2) calculations. If
only m' (m' < m) of the elements in the diagonal matrix D^2
change significantly, then it is more efficient to update the
factorization of the A^T D^2 A matrix than to formulate the entire
matrix once every iteration for solving d_x; that is, O(n^2 m')
calculations instead of O(n^2 m), provided that m' < m.
Moreover, if an element, say D_i, is small (or, equivalently, the
slack of constraint i is large), then D_i has little effect on the
condition number of A^T D^2 A but may have a tremendous
effect on the fill-in [24]. As an illustrative example, let us
consider
A = [ 1  1 ]      D = [ 1    0     0   ]
    [ 1  0 ]          [ 0   10^4   0   ]
    [ 0  1 ]          [ 0    0    10^4 ]

Using these matrices, we obtain

A^T D^2 A = [ 1 + 10^8      1     ]
            [     1      1 + 10^8 ]

Since D_1 = 1 is small compared to the other elements, and has
little effect on the solution of A^T D^2 A d_x = c, we can set
D_1 = 0. Thus, we can now define a more sparse D,

D' = [ 0    0     0   ]
     [ 0   10^4   0   ]
     [ 0    0    10^4 ]

Using D' yields

A^T D'^2 A = [ 10^8    0  ]
             [  0    10^8 ]

which is much more sparse than A^T D^2 A.
For large sparse implementations, the formulation of
A^T D^2 A is usually done as \sum_{j=1}^{m} D_j^2 a_j a_j^T, where
a_j^T is the j-th row of A. Because only the D_j's
change from one iteration to another, the details (sparsity
structure and values) of a_j a_j^T for each j are stored in memory
once at the beginning of the iterations and used in every
iteration, saving considerable computing time.
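The caching idea can be sketched as follows: the outer products a_j a_j^T are formed once, and only the scalar weights D_j^2 are refreshed each iteration; skipping rows with tiny weights is a crude version of the row reduction technique described below. Dense rows are used here for simplicity, and the function names are hypothetical.

```python
import numpy as np

def cache_outer_products(A):
    """Sketch: store a_j a_j' (one term per constraint row) once,
    before the iterations begin."""
    return [np.outer(a, a) for a in A]

def assemble_normal_matrix(outers, D2, drop_tol=0.0):
    """Re-assemble B_k = sum_j D_j^2 a_j a_j' from the cache, skipping
    rows whose weight D_j^2 falls below drop_tol (row reduction)."""
    keep = [j for j, w in enumerate(D2) if w > drop_tol]
    B = np.zeros_like(outers[0])
    for j in keep:
        B += D2[j] * outers[j]
    return B
```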
Because the projection needed to calculate the direction
vector d_x in the dual affine algorithm need not be accurate,
fast algorithms such as the conjugate gradient technique can
be used instead of the direct method described herein.
The reader is referred to [5] for further implementation
details on the conjugate gradient method. In our experi-
ments, one of the most useful steps is not considering rows
(constraints) in the formulation of the normal equations if the D_i
of the corresponding row is significantly smaller than the other
D_i's. This results in increased sparsity and in better condi-
tioning of the normal equations; such a method is called the
row reduction technique.
The preconditioning method we use is that of
Munksgaard [21] and the accompanying general-purpose rou-
tine MA31AD from the HARWELL software library. The conju-
gate gradient method (CGM) is an efficient and fast solver of
large sparse systems of equations if the condition number of the
system solved is small. If the system is ill-conditioned, as it is
in our case, the number of iterations in the CGM will be too
large and often there may not be convergence. In the CGM, the
CPU time required is in the order of the number of CGM iterations
times O(n^2). In order to overcome the ill-conditioning and to
reduce the number of CGM iterations, the preconditioned
conjugate gradient method (PCCGM) is used. An approxi-
mate but easily factorizable matrix M is found such that the
preconditioned system satisfies

M^{-1} A^T D_k^2 A \approx I.     (4.4)
The preconditioning is expected to improve the condition
number of the system of normal equations solved and, as a
result, far fewer CGM iterations are required to determine
the direction vector d_x. However, one of the major expenses
in terms of CPU time and memory requirements in the PCCGM
goes towards finding the preconditioner M^{-1}. As mentioned
before, the row reduction technique aids in sparsifying and
better conditioning the normal equations solved. Because
of the row reduction technique, the structure of the normal
equations with respect to the non-zeroes changes every itera-
tion and hence the minimum-degree ordering routine is used
in every iteration. A major improvement possible in our
code is to reuse the preconditioner found in one iteration --
that is, M^{-1} -- for a few subsequent iterations,
as in [5]. Such an improvement will greatly benefit problems
where the normal equations are somewhat denser than in the
example studied herein. There are other preconditioning
techniques available that may be suitable for certain prob-
lems and, in addition, the PCCGM is very suitable for solving
the normal equations using parallel processors. The reader is
referred to Ortega [22], an excellent source for studying such
applications.
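In modern terms, the PCCGM solve of the normal equations can be sketched with SciPy as below; an incomplete LU factorization stands in for Munksgaard's incomplete-factorization preconditioner MA31AD (which this sketch does not reproduce), and the drop tolerance and function name are assumed.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import cg, spilu, LinearOperator

def pccgm_direction(B, c):
    """Sketch: solve the normal equations B d_x = c with a
    preconditioned conjugate gradient.  B = A' D_k^2 A is symmetric
    positive definite; an incomplete LU of B plays the role of the
    incomplete-factorization preconditioner."""
    Bc = sparse.csc_matrix(B)
    ilu = spilu(Bc, drop_tol=1e-4)                  # approximate factor M
    M = LinearOperator(Bc.shape, matvec=ilu.solve)  # action of M^{-1}
    d_x, info = cg(Bc, c, M=M)                      # PCCGM solve
    if info != 0:
        raise RuntimeError("CG did not converge")
    return d_x
```

Because the projection need not be computed exactly, a loose CG tolerance is acceptable here, which is precisely what makes the iterative approach attractive.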
5. TESTING RESULTS
The proposed Dual Affine (DA) algorithm has been coded in
FORTRAN, and its performance compared to that of the sparse
simplex solver MINOS 5.0, which is also coded in FOR-
TRAN. While the DA algorithm solves the primal problem
(2.23)-(2.25), which is in the form

max { c^T x }
s.t.  A x <= b

where A is an m x n matrix, m > n (i.e., there are more con-
straints than variables), the simplex MINOS algorithm is
used to solve the more appropriate dual simplex problem,

min { b^T y }
s.t.  A^T y = c,  y >= 0

where A^T is an n x m matrix. That is, the dual simplex
problem has fewer constraints (n) than variables (m).
The DA algorithm has been tested on a hydro-scheduling
optimization problem for 4 different horizon lengths, that is,
for 1 period, 10 periods, 20 periods and 40 periods, respec-
tively. The largest problem comprises 880 variables and
3680 constraints. Table 1 presents some results without
basis recovery in the DA algorithm. As observed by all other
researchers who have implemented the DA method, the number of
iterations in the DA method is very small, whereas in the simplex
method it grows linearly as the size of the problem increases.
The last column of Tables 1 and 2 indicates the speed-up of
the DA algorithm versus the simplex MINOS; the speed-up
is evaluated by calculating the ratio of the execution times of the
MINOS and DA algorithms.
Table 1. LP Multi-reservoir Optimization Results (Without Basis Recovery)

Problem (n x m)   MINOS 5.0 sec (Iters.)   DA sec (Iters.)   MINOS/DA
22 x 92                0.75    (26)           2.41   (15)       0.31
220 x 920             57.7    (260)          36.75   (24)       1.57
440 x 1840           236.0    (520)          64.48   (23)       3.66
880 x 3680           986.2   (1040)         106.04   (18)       9.30

Table 2 presents results corresponding to the same prob-
lems with basis recovery in the DA algorithm.

Table 2. LP Multi-reservoir Optimization Results (With Basis Recovery)

Problem (n x m)   MINOS 5.0 sec (Iters.)   DA sec (Iters.)   Pivots for BRec.   MINOS/DA
22 x 92                0.75    (26)           2.53   (15)            4             0.30
220 x 920             57.7    (260)          44.52   (24)           35             1.30
440 x 1840           236.0    (520)          87.17   (23)           50             2.71
880 x 3680           986.2   (1040)         225.52   (18)          126             4.37

A comparison of the CPU time required by MINOS 5.0
and the dual affine techniques, as a function of the horizon
time, is presented in Fig. 1.

[Fig. 1. CPU time vs horizon time: CPU times (sec) of the MINOS 5.0
results and the Dual Affine results plotted against the time horizon,
10 to 40 periods.]

Table 3 presents results corresponding to problem size
growth (both in terms of the number of constraints and variables
and in the number of nonzeroes) versus CPU time growth for
the two methods on these problems. Table 4 presents results
similar to Table 3 but with the inclusion of the CPU time for
basis recovery.

Table 3. Problem Size Growth vs CPU Time Growth (Without Basis Recovery)

Problem Size   SIMPLEX Iterations   SIMPLEX CPU Time   (Col 1)^2   Dual Affine CPU
Growth         Growth               Growth                         Time Growth
10             10                     76                 100         15.25
20             20                    314.67              400         26.76
40             40                   1314.67             1600         44

Table 4. Problem Size Growth vs CPU Time Growth (With Basis Recovery)

Problem Size   SIMPLEX Iterations   SIMPLEX CPU Time   (Col 1)^2   Dual Affine CPU
Growth         Growth               Growth                         Time Growth
10             10                     76                 100         17.6
20             20                    314.67              400         34.45
40             40                   1314.67             1600         89.14

It is clear that for these problems the CPU time of the inte-
rior point method grows only linearly (column 5 in Table 3)
while that of the simplex method grows approximately quadratically
(column 3 in Table 3) as the size of the problem grows (column
1 in Table 3). This fact is also reflected in Table 4, although
with a somewhat higher CPU time because of the basis
recovery procedure.
In addition, the convergence of the objective function in the
DA method was found to be monotonic throughout the itera-
tions, rapid in the beginning and slow towards the end.
Therefore, if only an approximate solution is required,
then the iterations could be stopped earlier, saving a few
iterations; this is unlike the simplex method, where the conver-
gence is not necessarily monotonic, that is, in the simplex
method, at 90% of the total CPU time the solution deter-
mined need not be 90% of the optimal solution. Also, with
the basis recovery procedure in conjunction with the dual
affine method, the iterations in the dual affine method could
be stopped earlier and a very accurate optimal basic solution
can be determined. With the basis recovered, the solutions
from both the simplex and dual affine methods are qualita-
tively the same, since the final solution is a primal and dual
optimal basic solution. When using SLP approaches to
solve nonlinear problems, the first one or two SLP iterations
require a large number of simplex iterations [3]; by
using the previous basis to initialize, or warm start, the subsequent
SLP iterations, the number of simplex iterations required to solve subse-
quent LP problems is greatly reduced. With the dual affine
method and with the basis recovered, the same procedure
could be followed, where, for the first one or two SLP itera-
tions, the dual affine algorithm could be used instead of the
simplex method, with a possibility of significant reduction in
computing time. In addition, if the problem solved is in the
original dual affine form and there is no need for accu-
rate solutions, then it is possible to stop the dual affine algo-
rithm much earlier, that is, with only about half the number
of originally required iterations, saving a large fraction of the CPU
time.
6. CONCLUSIONS
A preliminary implementation of the dual affine algorithm
demonstrates its potential for solving large-scale specially
structured LP problems. The superiority of the DA method
over the simplex method for staircase structured multi-
period hydro-scheduling problems is now well established.
However, with the basis recovery procedure, it is possible to
use both the DA and the simplex method in their respective
places for solving nonlinear optimization problems with the
successive linear programming method. This results in only
a minor change to the current simplex-based codes used in
the industry but can lead to a considerable speedup in solv-
ing some very large scheduling problems.
ACKNOWLEDGEMENTS
The authors acknowledge the Natural Sciences and
Engineering Research Council (NSERC) of Canada for the
financial support.
REFERENCES
[1] B. Stott, J.L. Marinho and O. Alsac, "Review of Linear
Programming Applied to Power System Rescheduling",
Proc. of 1979 PICA Conf., pp. 142-154, Minnesota, MN,
1979.

[2] J.S. Horton and L.L. Grigsby, "Voltage Optimization
Using Combined Linear Programming and Gradient
Techniques", IEEE Trans. on Power Apparatus and Sys-
tems, Vol. PAS-102, No. 7, July 1984.

[3] H. Habibollahzadeh, G.X. Luo and A. Semlyen, "Hydro-
thermal Optimal Power Flow Based on a Combined Linear
and Nonlinear Programming Methodology", IEEE Trans.
on Power Systems, Vol. 4, No. 2, May 1989.

[4] N. Karmarkar, "A New Polynomial-time Algorithm for
Linear Programming", Combinatorica, 4, pp. 373-395,
1984.

[5] N. Karmarkar and K.G. Ramakrishnan, "Implementation
and Computational Results of the Karmarkar Algorithm
for Linear Programming, Using an Iterative Method for
Computing Projections", 13th International Math. Prog.
Symposium, Tokyo, Japan, August 1988.

[6] B.A. Murtagh and M.A. Saunders, "MINOS 5.0 User's
Guide", Tech. Rep. SOL 83-20, Stanford Univ., CA,
1983.

[7] I.I. Dikin, "Iterative Solution of Problems of Linear and
Quadratic Programming", Soviet Math. Doklady, 8, pp.
674-675, 1967.

[8] A. Kozlov, "The Karmarkar Algorithm: Is It for Real?",
SIAM News, 18(6), pp. 1-4, 1985.

[9] I. Adler, N. Karmarkar, M.G.C. Resende and G. Veiga,
"An Implementation of Karmarkar's Algorithm for Linear
Programming", Working Paper, Operations Research
Center, Univ. of California, Berkeley, CA, 1986 (also in
Math. Prog., 44, pp. 297-335, 1989).

[10] M.S. Bazaraa, J.J. Jarvis and H.D. Sherali, "Linear Pro-
gramming and Network Flows", Second Edition, John
Wiley and Sons, Toronto, 1990.

[11] A. Charnes, K.O. Kortanek and W. Raike, "Extreme
Point Solutions in Mathematical Programming: An Oppo-
site Sign Algorithm", Systems Research Memorandum
No. 129, Northwestern University, Evanston, 1965.

[12] Y. Ye and M. Kojima, "Recovering Optimal Dual Solu-
tions in Karmarkar's Polynomial-time Algorithm for
Linear Programming", Math. Prog., 39, pp. 305-318,
1987.

[13] N. Megiddo, "On Finding Primal- and Dual-Optimal
Bases", Research Report RJ 6328 (61997), IBM York-
town Heights, New York, 1988.

[14] K. Ponnambalam, "New Starting and Stopping Pro-
cedures for the Dual Affine Method", Working Paper,
Dept. of Civil Engg., University of Waterloo, 1988.

[15] K. Ponnambalam, A. Vannelli, E.A. McBean and T.E.
Unny, "Solving Large-scale Electric Energy Production
Planning and Pollution Control Problems with Karmar-
kar Algorithms", Proc. of Workshop on Resource Plan-
ning Under Uncertainty for Electric Power Systems,
Stanford University, Eds. G.B. Dantzig and P. Glynn,
pp. 179-196, Jan. 1989.

[16] K. Ponnambalam, "Large Scale Nonlinear Programming
Using Interior Point and Successive Linear Programming
Methods", SIAM Annual Meeting, July 16-20, Chicago,
1990.

[17] Mathematical Programming System Extended (MPSX),
IBM Computers Manual, 1979.

[18] M.J. Best and K. Ritter, "Linear Programming: Active
Set Analysis and Computer Programs", Prentice-Hall
Inc., NJ, 1985.

[19] J.L. Kennington and R.V. Helgason, "Algorithms for Net-
work Programming", John Wiley and Sons, N.Y., 1981.

[20] K. Ponnambalam, A. Vannelli and S. Woo, "An Interior
Point Implementation for Solving Large Planning Prob-
lems in the Oil Refinery Industry", submitted to Cana-
dian J. of Chem. Engg., July 1990.

[21] N. Munksgaard, "Solving Sparse Symmetric Sets of
Linear Equations by Preconditioned Conjugate Gra-
dients", ACM Trans. on Math. Soft., 6, pp. 206-219, 1980.

[22] J.M. Ortega, "Introduction to Parallel and Vector Solu-
tion of Linear Systems", Plenum Press, N.Y., 1988.

[23] D.P. Loucks, J.R. Stedinger and D.A. Haith, "Water
Resource Systems Planning and Analysis", Prentice-Hall
Inc., N.J., 1981.

[24] A. Vannelli, "An Adaptation of the Interior Point
Method for Solving the Global Routing Problem", IEEE
Trans. on Computer-Aided Design, Vol. 10, No. 2, pp.
193-203, Feb. 1991.
K. Ponnambalam received the B.E. degree from Madras
University in 1979, the M.Sc. degree from the National Univer-
sity of Ireland in 1981 and the Ph.D. degree from the University
of Toronto in 1987. He is currently an Assistant Professor in
the Department of Systems Design Engineering, University
of Waterloo. His main research interests include large-scale
optimization, interior point methods and stochastic modeling
as applicable to hydro-power optimization.
V.H. Quintana (IEEE SM'73) received the Dipl. Ing. degree
from the State Technical University of Chile in 1959, and the
MSc. and Ph.D. degrees in Electrical Engineering from the
University of Wisconsin, Madison in 1965, and University of
Toronto, Ontario, in 1970, respectively.
Since 1973 he has been at the University of Waterloo,
Department of Electrical Engineering, where he is currently
a full professor and Associaf,e Chairman for Graduate Stu-
dies.
His main research interests are in the areas of numerical
optimization techniques, state estimation and control theory
as applied to power systems.
Dr. Quintana is an Associate Editor of the International
J ournal of Energy Systems, and a member of the Association
of Professional Engineers of the Province of Ontario.
A. Vannelli received the Ph.D. degree from the Department
of Electrical Engineering at the University of Waterloo,
Waterloo, Ontario, Canada, in 1983. From 1983-84, he was
an IBM Post-Doctoral Fellow in the Mathematical Sciences
Department at IBM Thomas J. Watson Research Center.
He joined the Department of Industrial Engineering at the
University of Toronto in 1984. Since 1987 he has been with
the Department of Electrical and Computer Engineering at
the University of Waterloo, where he is currently an Associ-
ate Professor. He has been a Natural Sciences and Engineer-
ing Council of Canada University Research Fellow since
1984. His main research focuses on the development of effi-
cient linear, nonlinear, and combinatorial optimization tech-
niques to solve VLSI circuit layout and design problems.
