
TECHNIA: International Journal of Computing Science and Communication Technologies, VOL. 4, NO. 1, July 2011 (ISSN 0974-3375)

Use of Particle Swarm Optimization Algorithm for Solving Integer and Mixed Integer Optimization Problems

1Ashok Pal, 2S.B. Singh, 3Kusum Deep

1,2Department of Mathematics, Punjabi University, Patiala, India
3Department of Mathematics, IIT Roorkee, Roorkee, India

1ashokpmaths@gmail.com, 2sbsingh69@yahoo.com, 3kusumfma@iitr.ernet.in
Abstract: This paper presents the use of the Particle Swarm Optimization (PSO) algorithm, introduced by Kennedy and Eberhart [1], for solving integer and mixed integer optimization problems. In PSO, the potential solutions, called particles, are flown through the problem space by learning from the current optimal particle and from their own memory. PSO is started with a group of feasible solutions, and a feasibility function is used to check whether newly explored solutions satisfy all the constraints. Each particle keeps only feasible solutions in its memory. The PSO algorithm is applied to the 15 test problems given in the appendix. Our results show that PSO is an efficient method and can be used for solving integer and mixed integer optimization problems.
Keywords: Particle Swarm Optimization, Integer and Mixed Integer Optimization, Non-linear Optimization, RCGA.
I. INTRODUCTION
Particle swarm optimization (PSO) is a population-based optimization technique, an alternative to genetic algorithms (GAs) and other evolutionary algorithms (EAs), that has gained considerable attention in recent years. PSO is a search technique with reduced memory requirements; it is computationally effective and easier to implement than EAs. Kennedy and Eberhart introduced PSO as a new heuristic method in 1995 [1]. The idea is based on the simulation of the social behaviour of bird flocking and fish schooling. Initially PSO was designed for continuous optimization problems, but it has since been applied to a wide variety of challenging engineering and science applications. PSO also has a more global searching ability at the beginning of the run and a greater local search ability near the end of the run [2].
A linear or non-linear optimization problem, with or without constraints, in which some or all decision variables are restricted to integer values is known as a Mixed Integer Optimization Problem (MIOP). Such problems frequently arise in application fields such as the process industry, finance, engineering design, management science, portfolio selection, automobile engineering, aircraft design and VLSI manufacturing.
The general mathematical model of an MIOP is:

Min f(x, y)

subject to the constraints of the problem and the restrictions

x_i^L ≤ x_i ≤ x_i^U, i = 1, 2, ..., n1,
y_i^L ≤ y_i ≤ y_i^U, y_i integer, i = 1, 2, ..., n2,

where x = (x_1, x_2, ..., x_n1) and y = (y_1, y_2, ..., y_n2).
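As a small illustration of this model (a hypothetical instance for exposition, not one of the test problems of this paper), consider one continuous variable x and one integer variable y:

Min f(x, y) = 2x + y, subject to x + y ≥ 1.5, 0 ≤ x ≤ 1.6, y ∈ {0, 1}.

For y = 1 the best feasible choice is x = 0.5, giving f = 2, while y = 0 forces x ≥ 1.5 and hence f ≥ 3, so the mixed integer optimum is (x, y) = (0.5, 1) with f = 2.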

The motivation of this work is to develop a robust optimization technique for integer and mixed integer constrained optimization problems using the PSO algorithm. A penalty function approach (Deb, 2000) [22] is incorporated for handling the constraints of the problem.
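The paper only states that a Deb-style penalty approach is used. As a minimal illustration of that idea, the following Python sketch implements the parameter-less comparison rule of Deb [22]; the function names and the g_j(x) ≤ 0 convention are assumptions for illustration, not the authors' exact implementation.

def constraint_violation(x, constraints):
    # Constraints are callables g_j with the convention g_j(x) <= 0 when satisfied.
    # The violation is the sum of positive parts; zero means x is feasible.
    return sum(max(0.0, g(x)) for g in constraints)

def is_better(x1, x2, f, constraints):
    # Deb's rule: a feasible point beats an infeasible one; two feasible points
    # are compared by objective value; two infeasible points by total violation.
    v1 = constraint_violation(x1, constraints)
    v2 = constraint_violation(x2, constraints)
    if v1 == 0.0 and v2 == 0.0:
        return f(x1) < f(x2)
    if v1 == 0.0 or v2 == 0.0:
        return v1 == 0.0
    return v1 < v2

In a PSO run, such a comparison can be applied when deciding whether a new position should replace a particle's pbest or the swarm's gbest.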
II. PSO ALGORITHM
The beauty of PSO lies in its simplicity and ease of applicability. The coordinates of each particle represent a possible solution, with which two vectors are associated: the position vector and the velocity vector.
Consider the n-dimensional optimization problem Min f(x), where x = (x_1, x_2, ..., x_n). Corresponding to each feasible solution, the position vector of the i-th particle is x_i = (x_i1, x_i2, ..., x_in) and its velocity vector is v_i = (v_i1, v_i2, ..., v_in).
A swarm consists of a number of particles (feasible solutions) that proceed (fly) through the search space towards the optimal solution. Each particle updates its position based on its own best exploration, the overall best swarm exploration and its previous velocity vector, according to the following equations:

v_id^(k+1) = v_id^k + c1 r1 (p_id^k - x_id^k) + c2 r2 (p_gd^k - x_id^k)    (1)

x_id^(k+1) = x_id^k + v_id^(k+1)    (2)
where c1 and c2 are two positive constants called acceleration coefficients, r1 and r2 are random numbers uniformly distributed in [0, 1], x_id^k is the current position of the i-th particle (d-th component at iteration k), p_id^k is the best position achieved by the i-th particle based on its own experience, p_gd^k is the position of the best particle based on the overall swarm's experience, and k is the iteration counter.
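A minimal Python sketch of equations (1) and (2) for a single particle is given below; the array names and the default values c1 = c2 = 2.0 are illustrative assumptions, since the paper does not state its parameter settings here.

import numpy as np

def update_particle(x, v, pbest, gbest, c1=2.0, c2=2.0, rng=None):
    # x, v, pbest and gbest are arrays of length n (the problem dimensionality).
    rng = rng or np.random.default_rng()
    r1 = rng.random(x.size)   # uniform random numbers in [0, 1]
    r2 = rng.random(x.size)
    v_new = v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # equation (1)
    x_new = x + v_new                                            # equation (2)
    return x_new, v_new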
A constant maximum velocity (Vmax) is used to arbitrarily limit the velocities of the particles and improve the resolution of the search.
Shi and Eberhart [3] found that without the velocity memory the swarm would simply contract to the global best solution within the initial swarm boundary (providing a local search). Conversely, with the velocity memory, the swarm will perform a global search. To better control exploration and exploitation, reduce the importance of Vmax, and perhaps eliminate it altogether, a modified PSO incorporating an inertia weight w was introduced. The resulting velocity update equation becomes

v_id^(k+1) = w v_id^k + c1 r1 (p_id^k - x_id^k) + c2 r2 (p_gd^k - x_id^k)    (3)
The initial experiments suggested that a value of w between 0.8 and 1.2 provided good results. Later work (Eberhart and Shi [4]) indicates that the optimal strategy is to initially set w to 0.9 and reduce it linearly to 0.4, allowing initial exploration followed by acceleration toward an improved optimum. Other methods are also available to adjust the inertia weight. For example, in (Eberhart and Shi, 2000) the adaptation of w using a fuzzy system was reported to significantly improve PSO performance. Another effective strategy is to use an inertia weight with a random component rather than a time-decreasing one. For example, Eberhart and Shi [5] successfully used w = U(0.5, 1), a uniformly distributed random number between 0.5 and 1. There are also studies, e.g., Zheng et al. [6], in which an increasing inertia weight was used.
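The two inertia-weight strategies just described can be sketched in Python as follows; the function names and the iteration-count argument are illustrative assumptions.

import random

def linear_inertia(k, k_max, w_start=0.9, w_end=0.4):
    # Linearly decreasing inertia weight, from 0.9 down to 0.4 over the run,
    # as suggested by Eberhart and Shi [4].
    return w_start - (w_start - w_end) * k / float(k_max)

def random_inertia():
    # Random inertia weight w = U(0.5, 1), as used by Eberhart and Shi [5].
    return 0.5 + 0.5 * random.random()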
Clerc and Kennedy [7, 8] applied a constriction factor chi to the new velocity. Clerc and Kennedy noted that there can be many ways to implement the constriction coefficient. One of the simplest methods of incorporating it is the following:

v_id^(k+1) = chi [v_id^k + phi1 r1 (p_id^k - x_id^k) + phi2 r2 (p_gd^k - x_id^k)]    (4)

where

chi = 2 / |2 - phi - sqrt(phi^2 - 4 phi)|, with phi = phi1 + phi2, phi > 4.    (5)
Eberhart and Shi [4] compared the inertia weight PSO and the constricted PSO. It can be seen that equation (3) is equivalent to equation (4) if the inertia weight w is set to chi and the acceleration coefficients satisfy c1 = chi phi1 and c2 = chi phi2. The PSO algorithm with the constriction factor can therefore be considered a special case of the algorithm with inertia weight, since the parameters are connected through equation (5). Eberhart and Shi [4] also showed that the constriction factor combined with Vmax = Xmax, where Xmax is the upper bound of the decision variables, provides good results.
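A short Python sketch of equation (5) follows; the values phi1 = phi2 = 2.05 are a commonly used example and are an assumption here, since the paper does not fix them.

import math

def constriction_coefficient(phi1=2.05, phi2=2.05):
    # Equation (5): chi = 2 / |2 - phi - sqrt(phi^2 - 4 phi)|, with phi = phi1 + phi2 > 4.
    phi = phi1 + phi2
    if phi <= 4.0:
        raise ValueError("phi = phi1 + phi2 must exceed 4")
    return 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))

With phi1 = phi2 = 2.05 this gives chi of about 0.7298, which then plays the role of the inertia weight in the equivalence noted above.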
The PSO algorithm is shown below.

For t = 1 to the maximum number of iterations
    For i = 1 to the swarm size
        For j = 1 to the problem dimensionality
            Apply the velocity update equation (1);
            Update the position using equation (2);
        End-for-j;
        Compute the fitness of the updated position;
        If needed, update the historical information for pbest and gbest;
    End-for-i;
    Terminate if gbest meets the problem requirements;
End-for-t;
End algorithm.
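For concreteness, the loop above can be sketched in Python with the inertia-weight update of equation (3); the parameter values, bound handling and stopping rule below are illustrative assumptions rather than the exact settings of the paper, and the integer handling and penalty-based constraint handling are omitted for brevity.

import numpy as np

def pso_minimize(f, lower, upper, swarm_size=25, iters=1000,
                 c1=2.0, c2=2.0, w_start=0.9, w_end=0.4, seed=0):
    # Minimize f over the box [lower, upper] with inertia-weight PSO.
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    n = lower.size
    x = rng.uniform(lower, upper, size=(swarm_size, n))   # positions
    v = np.zeros((swarm_size, n))                         # velocities
    pbest = x.copy()
    pbest_val = np.array([f(p) for p in pbest])
    g = pbest[np.argmin(pbest_val)].copy()                # global best position
    g_val = pbest_val.min()
    v_max = upper - lower                                 # simple Vmax choice

    for k in range(iters):
        w = w_start - (w_start - w_end) * k / float(iters)   # linear w schedule
        r1 = rng.random((swarm_size, n))
        r2 = rng.random((swarm_size, n))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)  # equation (3)
        v = np.clip(v, -v_max, v_max)
        x = np.clip(x + v, lower, upper)                       # equation (2) + bounds
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val                            # update pbest
        pbest[improved] = x[improved]
        pbest_val[improved] = vals[improved]
        if pbest_val.min() < g_val:                            # update gbest
            g_val = pbest_val.min()
            g = pbest[np.argmin(pbest_val)].copy()
    return g, g_val

For integer and mixed integer problems, the integer components of a particle are typically rounded when the objective is evaluated, and constraints can be handled with a penalty or Deb-style comparison following [22].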
III. SOLUTION OF TEST PROBLEMS
The PSO algorithm described above is used to solve a set of 15 test problems taken from different sources in the literature; these are listed in the appendix. They include integer and mixed integer constrained optimization problems. The results are presented in Table I, where the performance of the PSO algorithm is compared with that of the Real Coded Genetic Algorithm (RCGA) [21].
TABLE I: COMPARISON OF PSO AND RCGA ON THE 15 TEST PROBLEMS

Pb No.   Using PSO                              Using RCGA
         S     NR    ANE      SR (%)            ANE      SR (%)
1        10    30    3232     100               172      84
2        25    30    5207     100               62       85
3        20    30    6060     90                18608    43
4        50    30    60560    70                10933    95
5        25    30    142      100               671      100
6        40    30    152      100               84       100
7        25    30    4532     100               7447     59
8        70    30    2803     80                3571     41
9        80    30    235      100               258      93
10       50    30    700      100               171      100
11       50    30    770      100               299979   71
12       25    30    162      100               77       99
13       25    30    3205     100               78       100
14       50    30    3245     100               2437     92
15       400   30    2380     100               1075     100

S = swarm size; NR = number of runs; ANE = average number of function evaluations per run; SR = success rate, i.e. the percentage of successful runs out of the total runs.
IV. CONCLUSION
The PSO algorithm with inertia weight w has been used for solving constrained integer and mixed integer optimization problems. The performance of the PSO algorithm has been compared with that of RCGA [21] on a set of 15 test problems on the basis of success rate and number of function evaluations. Our results show that the PSO algorithm outperforms the RCGA in most of the cases. In future we intend to apply the PSO algorithm to larger real-life optimization problems.
ACKNOWLEDGEMENT
One of the authors (Ashok Pal) would like to thank
Dr. J.C. Bansal, Assistant Professor, Department of
Mathematics, IIITM-Gwalior (India) for his valuable
guidance in the development of PSO codes for solving the
problems.


V. APPENDIX

Problem 1: This problem is taken from [15] and is also given in [10, 11, 13]. The global optimal solution is (x*; f*) = (0.5, 1; 2).

Problem 2: This problem is taken from [13]. It is a modified form of the problem in [15, 11]. The global optimal solution is (x*; f*) = (1.375, 1; 2.214).

Problem 3: This problem is taken from [10]. It is also given in [11, 13]. The global optimal solution is (x*; f*) = (0.94194, 2.1, 1; 1.07654).

Problem 4: This problem is taken from [27]. The global optimal solution is (x*; f*) = (14.095, 0.84296; -6961.741616).

Problem 5: This problem is taken from [14]. All variables are integers. The global optimal solution is (x*; f*) = (2, 0, 5; -68).

Problem 6: This problem represents a quadratic capital budgeting problem; it is taken from [15] and is also given in [11]. The global optimal solution is (x*; f*) = (0, 0, 1, 1; -6).

Problem 7: This problem is taken from [13]. It is a modified form of the problem in [15, 11].

Min f = 7.5 x1 + 5.5 (1 - x1) + 7 x2 + 6 x3 + 50 x1 (2 x1 - 1) / [0.9 (1 - exp(-0.5 x2))] + 50 [1 - x1 (2 x1 - 1)] / [0.8 (1 - exp(-0.4 x3))]

The global optimal solution is (x*; f*) = (1, 3.514, 0; 99.245209).

Problem 8: This problem is taken from [13]. It is also given in [10, 11]. The global optimal solution is (x*; f*) = (0.2, 1.280624, 1.954483, 1, 0, 0, 1; 3.557463).

Problem 9: This problem is taken from [17] and has also been studied by Cardoso et al. [11].

Max f = [1 - (0.1)^x1 (0.2)^x2 (0.15)^x3] [1 - (0.05)^x4 (0.2)^x5 (0.15)^x6] [1 - (0.02)^x7 (0.06)^x8]

The global optimal solution is (x*; f*) = (0, 1, 1, 1, 0, 1, 1, 0; 0.94347).

Problem 10: This problem is taken from [9] and is also given in [16]. The global optimal solution is (x*; f*) = (50, 25, 1.5; 0).

Problem 11: This problem is taken from [9] and is also reported in [16]. All variables are integers. The global optimal solution is (x*; f*) = (16, 22, 5, 5, 7; 807).

Problem 12: This problem is taken from [19] and is also given in [16]. All variables are integers. The global optimal solution is (x*; f*) = (1, 3; -42.632).

Problem 13: This problem is taken from [20] and is also given in [12]. The global optimal solution is (x*; f*) = (0, 2, 4, 0, 2, 1, 4; 14).

Problem 14: This problem is taken from [20] and is also given in [12]. All variables are integers. The global optimal solution is (x*; f*) = (1, 1, 1, 1, 2; 8).

Problem 15: This problem is taken from [20] and is also given in [12].

Max f(x) = 215 x1 + 116 x2 + 670 x3 + 924 x4 + 510 x5 + 600 x6 + 424 x7 + 942 x8 + 43 x9 + 369 x10 + 408 x11 + 52 x12 + 319 x13 + 214 x14 + 851 x15 + 394 x16 + 88 x17 + 124 x18 + 17 x19 + 779 x20 + 278 x21 + 258 x22 + 271 x23 + 281 x24 + 326 x25 + 819 x26 + 485 x27 + 454 x28 + 297 x29 + 53 x30 + 136 x31 + 796 x32 + 114 x33 + 43 x34 + 80 x35 + 268 x36 + 179 x37 + 78 x38 + 105 x39 + 281 x40,

subject to three linear inequality constraints in the forty variables, each with right-hand side 25000 (the full coefficient sets are given in [20, 12]), and the bounds 0 ≤ xi ≤ 99, i = 1, 2, ..., 20, and xi ≤ 99, i = 21, 22, ..., 40, with all xi integers.

The known optimal solution is

x* = [48 73 16 86 49 99 94 79 98 86 94 33 95 80 53 86 87 50 39 78 47 72 97 98 73 86 99 81 77 95 28 95 58 23 55 70 35 82 32 94]

with max f = 1030361.

REFERENCES
[1] J. Kennedy and R.C. Eberhart, Particle swarm optimization, Proceedings of the IEEE International Conference on Neural Networks, 4, p. 1942-1948, 1995.
[2] M.S. Arumugam, M.V.C. Rao and A.W.C. Tan, A novel and effective particle swarm optimization like algorithm with extrapolation technique, Applied Soft Computing, 9, p. 308-320, 2009.
[3] Y. Shi and R.C. Eberhart, A modified particle swarm optimizer, Proceedings of the IEEE International Conference on Evolutionary Computation, Piscataway, NJ: IEEE Press, p. 69-73, 1998.
[4] R.C. Eberhart and Y. Shi, Comparing inertia weights and constriction factors in particle swarm optimization, Proceedings of the Congress on Evolutionary Computation, 1, p. 84-88, 2000.
[5] R.C. Eberhart and Y. Shi, Tracking and optimizing dynamic systems with particle swarms, Proceedings of the Congress on Evolutionary Computation (CEC), Seoul, Korea, Piscataway: IEEE, p. 94-100, 2001.
[6] Y. Zheng, L. Ma, L. Zhang and J. Qian, Proceedings of the IEEE International Symposium on Intelligent Control, p. 974-979, 2003.
[7] M. Clerc, The swarm and the queen: towards a deterministic and adaptive particle swarm optimization, Congress on Evolutionary Computation, Washington D.C., p. 1951-1955, 1999.
[8] M. Clerc and J. Kennedy, The particle swarm: explosion, stability, and convergence in a multidimensional complex space, IEEE Transactions on Evolutionary Computation, 6, p. 58-73, 2002.
[9] H.M. Salkin, Integer Programming, Addison-Wesley Publishing Company, Amsterdam, 1975.
[10] C.A. Floudas, Nonlinear and Mixed-Integer Optimization: Fundamentals and Applications, Oxford University Press, New York, USA, 1995.
[11] M.F. Cardoso, R.L. Salcedo, S.F. Azevedo and D. Barbosa, A simulated annealing approach to the solution of MINLP problems, Computers and Chemical Engineering, 21, p. 1349-1364, 1997.
[12] C. Mohan and H.T. Nguyen, A controlled random search technique incorporating the simulated annealing concept for solving integer and mixed integer global optimization problems, Computational Optimization and Applications, 14, p. 103-132, 1999.
[13] L.P. Costa and E. Oliveira, Evolutionary algorithms approach to the solution of mixed integer non-linear programming problems, Computers and Chemical Engineering, 21, p. 257-266, 2001.
[14] Y.X. Li and M. Gen, Nonlinear mixed integer programming problems using genetic algorithm and penalty function, in: Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, vol. 4, p. 2677-2682, 1996.
[15] G.R. Kocis and I.E. Grossmann, Global optimization of nonconvex mixed-integer nonlinear programming (MINLP) problems in process synthesis, Industrial & Engineering Chemistry Research, 27, p. 1407-1421, 1988.
[16] H.T. Nguyen, Some Global Optimization Techniques and Their Use in Solving Optimization Problems in Crisp and Fuzzy Environments, Ph.D. Thesis, Department of Mathematics, University of Roorkee, Roorkee, India, 1996.
[17] O. Berman and N. Ashrafi, Optimization models for reliability of modular software systems, IEEE Transactions on Software Engineering, 19, p. 11-19, 1993.
[18] M.S. Bazaraa, H.D. Sherali and C.M. Shetty, Nonlinear Programming: Theory and Algorithms, second ed., John Wiley and Sons, Asia, 2004.
[19] D.M. Himmelblau, Applied Nonlinear Programming, McGraw-Hill, New York, USA, 1972.
[20] W. Conley, Computer Optimization Techniques, Petrocelli Books, New Jersey, USA, 1984.
[21] K. Deep, K.P. Singh, M.L. Kansal and C. Mohan, A real coded genetic algorithm for solving integer and mixed integer optimization problems, Applied Mathematics and Computation, 212(2), p. 505-518, 2009.
[22] K. Deb, An efficient constraint handling method for genetic algorithms, Computer Methods in Applied Mechanics and Engineering, 186, p. 311-338, 2000.
