
Applied Mathematics and Computation 235 (2014) 292–317

A modified real coded genetic algorithm for constrained optimization

Manoj Thakur, Suraj S. Meghwani, Hemant Jalota
School of Basic Sciences, Indian Institute of Technology Mandi, Mandi 175001, India

Article info

Keywords:
Real coded genetic algorithms
Laplace crossover
Power mutation
Constrained optimization

Abstract
The performance of a genetic algorithm (GA) largely depends upon its crossover and mutation operators. Deep and Thakur (2007) [14,15] proposed a real coded genetic algorithm (RCGA) incorporating the Laplace crossover (LX) and power mutation (PM) operators and showed that the resulting GA (named LX-PM) outperforms many existing RCGAs on a large set of scalable test problems of varying difficulty levels. In this paper, LX-PM is modified by improving the LX operator. The modified LX operator, named the bounded exponential crossover (BEX) operator, always creates offspring within the variable bounds. A new RCGA (named BEX-PM) incorporating the BEX and PM operators is proposed. The performance of the modified GA is tested against the original algorithm LX-PM and three other popular constrained optimization algorithms (HX-NUM, HX-MPTM and SBX-POL) over a test suite containing twenty-five constrained optimization problems collected from the global optimization literature. The performance of all RCGAs and the quality of the solutions obtained are compared on the basis of standard criteria used in the GA literature. The comparative study shows that BEX-PM performs significantly better than the original algorithm LX-PM and outperforms all the RCGAs considered in this study.
© 2014 Elsevier Inc. All rights reserved.

1. Introduction
Many real life problems can be modeled as nonlinear optimization problems involving one or more decision variables. The problem of locating the global minimum/maximum of a multimodal function of several variables arises in many fields of engineering, science, finance and other scientific applications, as discussed in Goldberg [23], Michalewicz [43] and Deb [11]. Moreover, many optimization problems involve constraints, due to which the size of the feasible region reduces and the search for the optima becomes difficult. Without loss of generality, a general nonlinear programming (NLP) problem can be formulated as

\min f(x), \qquad f: \mathbb{R}^n \to \mathbb{R},

where x ∈ X ⊆ S, and S is an n-dimensional rectangular hypercube in ℝ^n identified by a_i ≤ x_i ≤ b_i, i = 1, 2, 3, ..., n. These are often called bounds on the decision variables.
The feasible region X ⊆ S is defined by a set of m nonlinear inequality and p nonlinear equality constraints:

g_k(x) \le 0, \quad k = 1, 2, 3, \ldots, m,


h_j(x) = 0, \quad j = 1, 2, 3, \ldots, p.
A feasible solution x ∈ ℝ^n is a point in the search space which satisfies all bound constraints as well as all equality and inequality constraints, i.e. a_i ≤ x_i ≤ b_i, i = 1, 2, 3, ..., n, g_k(x) ≤ 0, k = 1, 2, 3, ..., m, and h_j(x) = 0, j = 1, 2, 3, ..., p. On the other hand, an infeasible solution is one which violates at least one of these constraints. A point x* ∈ X is called a local minimum of f if f(x*) ≤ f(x) for all x ∈ N_ε(x*) ∩ X, where N_ε(x*) = {x : ||x − x*|| < ε, ε > 0} is a small neighborhood (ε-neighborhood) of the point x*. If f(x*) ≤ f(x) for all x ∈ X, then x* is said to be the global minimum of f.
In the last three decades, finding the global optimal solution of nonlinear programming problems has become an active area of research. Many techniques for solving a general nonlinear optimization problem have been reported in the literature. These techniques may be broadly classified into two groups: deterministic and stochastic.
The methods of the first group are more complex to apply and depend upon a priori information about the objective function. There are a number of deterministic algorithms which may solve a given problem directly, by transforming the problem into an unconstrained problem using some merit function or by solving a series of unconstrained problems, e.g. quadratic penalty, augmented Lagrangian and barrier function methods [48,63], or by transforming the given problem into a sequence of easier constrained optimization problems, e.g. sequential quadratic programming (SQP) and trust region based methods [21,8]. Deterministic techniques are local optimization techniques, and the search procedure and its performance rely heavily upon an initial guess solution and upon information about the nature of the problem to steer the search process towards a local optimal solution. These algorithms also lack comprehensiveness, as most of them are able to solve only a particular class of problems.
In the recent past, many population based stochastic methods such as genetic algorithms (GA), simulated annealing (SA), differential evolution (DE) and particle swarm optimization (PSO) have been developed and are used to solve constrained optimization problems. A detailed description and empirical study of these can be found in Deb [11], Kirkpatrick et al. [55], Price et al. [53] and Eberhart et al. [17].
Most of the effort has been devoted to studying the behavior of these algorithms through empirical analysis, as compared to the work done towards theoretical analysis. This is due to the difficulty in formulating the complex interactions between the individuals in the population at each iteration (or generation). Nevertheless, some efforts on theoretical analysis have been reported in the literature. Recently, Yoon et al. [65] and Someya [59] investigated the dynamics involved in these types of algorithms.
The main advantages of these methods are as follows:

• Comprehensive in nature.
• Easy implementation.
• No continuity and/or differentiability required.
• Global optimizers in nature.
• Well suited for black-box type problems.

Amongst the above mentioned techniques, GAs are one of the most popular stochastic search methods. GAs are nature inspired, population based, general purpose search techniques which try to mimic the principle of natural selection laid down by Charles Darwin. GAs work with a population of solutions, and the decision variables are represented by an encoding of the variables of the search space. Every solution of the population is assigned a fitness value based on some criterion, and a series of three genetic operators, namely selection, crossover and mutation, is applied iteratively until predefined termination conditions are satisfied. Each iteration is called a generation.
In the original implementation of De Jong [16], the decision variables were encoded as binary strings [23]. Later on it was found that the computational effort required to solve a problem using a binary GA increases rapidly with the problem size and/or with the required degree of precision [22]. Binary representation also suffers from the so-called Hamming cliff problem. Many real coded GAs have been developed to address the difficulties of binary coded GAs. The superiority of real coded GAs for continuous optimization problems is now well established [29].
The performance of an RCGA largely relies upon its operators (crossover and mutation), and over the years a lot of effort has been put into the development of new real coded operators as well as into improving the efficiency of existing operators. Subbaraj et al. [60] used the Taguchi method with simulated binary crossover (SBX) to improve the exploitation capability and robustness of the algorithm. Deep and Thakur [14] proposed the Laplace crossover (LX) operator and concluded that a GA with the LX operator performs better than a GA with the heuristic crossover (HX) operator (Wright [62]). In their subsequent work, Deep and Thakur [15] proposed a new real coded mutation operator, power mutation (PM). The performance of PM was compared with two well established mutation operators, namely non-uniform mutation (NUM) [40] and the Makinen, Periaux and Toivanen mutation (MPTM), using well known methods for comparing genetic algorithms. It was concluded that genetic algorithms with the PM operator perform much better than genetic algorithms with NUM and MPTM; moreover, the genetic algorithm with LX and PM outperforms all other algorithms considered in those studies. An important modification to the LX operator, called the few promising descent direction Laplace crossover (FPDD-LX), was proposed by Chen and Wang [67], who also proposed a robust framework called the real coded conditional genetic algorithm (rc-CGA). Further along the same lines, Chen and Yin [4] extended FPDD-LX (EX-FPDD-LX), choosing the promising direction with a Laplace distribution based on the center of mass of the parents. Apart from the development of new recombination operators, several efforts have been made to better understand the existing
operators on the basis of general theory. One such study, based on the geometricity of genetic operators, was discussed recently by Yoon and Kim [66].
The objective of this study is to enhance the performance of the LX-PM genetic algorithm for solving constrained optimization problems. For this purpose, the search power of the LX operator has been improved and a modified crossover operator called the bounded exponential crossover (BEX) is proposed. Unlike LX, the BEX operator always produces offspring which satisfy the box constraints. The parameter free penalty method of Deb [10] is used to deal with constraints. The performance of the proposed GA (BEX-PM) is evaluated on a set of twenty-five constrained optimization problems, and the results are compared with well-known existing constrained RCGAs.
The rest of this article is organized as follows. In Section 2, motivated by the LX operator, the BEX crossover is defined and discussed. In Section 3, a brief review of various penalty function based constraint handling techniques is given and the constraint handling technique used in this study is described. The proposed GA (BEX-PM) is discussed in Section 4. Section 5 gives a brief description of the experimental setup used in the current study. Numerical results and a discussion of the results are presented in Section 6. In Section 7, the conclusions of the present study are drawn.
2. Proposed modification in the LX operator

In this section we briefly review the LX operator defined in Deep and Thakur [14]. It is interesting to see that, instead of the Laplace distribution, LX may be equivalently redefined using the double exponential distribution. Modifications in the LX operator are proposed to enhance its search power, and the modified LX operator is named the bounded exponential crossover (BEX).
2.1. Laplace distribution as a combination of two exponential distributions

The LX operator is a self adaptive, parent centric crossover operator. It makes use of the Laplace distribution to generate offspring near the parent solutions, as explained in Deep and Thakur [14]. The Laplace distribution (also called the double exponential distribution) may be thought of as two exponential distributions (with an additional location parameter) joined together back-to-back. LX produces two exponentially distributed offspring solutions placed symmetrically with respect to the parents. Here we first see how LX can be rewritten with the help of the exponential distribution. Following the same naming convention as in the case of LX, we name this crossover the exponential crossover (EX). The density function of the exponential distribution is given by

f(x) = \frac{1}{\lambda} \exp\left(-\frac{x}{\lambda}\right), \quad x \ge 0,

and the distribution function of the exponential distribution is given by

F(x) = 1 - \exp\left(-\frac{x}{\lambda}\right), \quad x > 0,

where λ > 0 is called the scale parameter. It can be observed from Fig. 1 that for small values of λ the offspring solutions are more likely to be produced near the parents than for larger values of λ.
The procedure for generating offspring solutions ξ = (ξ_1, ξ_2, ..., ξ_n) and η = (η_1, η_2, ..., η_n) from two parent solutions x = (x_1, x_2, ..., x_n) and y = (y_1, y_2, ..., y_n) using EX is explained in the following steps.

Steps involved in generating offspring from parent solutions using EX

STEP 1: Given two parent solutions x = (x_1, x_2, ..., x_n) ∈ ℝ^n and y = (y_1, y_2, ..., y_n) ∈ ℝ^n and a scaling parameter λ > 0.
while i ≤ n do
STEP 2: Generate a uniformly distributed random number u_i ∈ U(0, 1).
STEP 3: Evaluate β_i by inverting the exponential distribution function, so that β_i follows the exponential distribution with λ as the scaling parameter:

\beta_i = -\lambda \log(1 - u_i).

STEP 4: The offspring solutions are generated using

\xi_i = \begin{cases} x_i + \beta_i |x_i - y_i|, & \text{if } r_i < 0.5, \\ x_i - \beta_i |x_i - y_i|, & \text{if } r_i \ge 0.5, \end{cases}
\qquad
\eta_i = \begin{cases} y_i + \beta_i |x_i - y_i|, & \text{if } r_i < 0.5, \\ y_i - \beta_i |x_i - y_i|, & \text{if } r_i \ge 0.5, \end{cases}

where r_i ∈ U(0, 1) is a uniformly distributed random number.
End while
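For concreteness, the EX step can also be written compactly in code. The following is a minimal Python sketch of one EX application (the paper's own implementation is in C++); the function name and the default value of λ are ours, chosen only for illustration.

```python
import math
import random

def ex_crossover(x, y, lam=0.5):
    """One application of EX: returns two offspring of parents x and y."""
    xi, eta = [], []
    for x_i, y_i in zip(x, y):
        u_i = random.random()
        beta_i = -lam * math.log(1.0 - u_i)   # beta_i follows the exponential distribution
        r_i = random.random()
        spread = abs(x_i - y_i)
        if r_i < 0.5:                          # offspring placed on one side of the parents ...
            xi.append(x_i + beta_i * spread)
            eta.append(y_i + beta_i * spread)
        else:                                  # ... or on the other side, with equal probability
            xi.append(x_i - beta_i * spread)
            eta.append(y_i - beta_i * spread)
    return xi, eta

print(ex_crossover([1.0, 2.0, 3.0], [2.5, 1.5, 4.0]))
```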

Fig. 1. Density function of the exponential distribution (λ = 1.0 and λ = 0.5).

Since β_i ∈ [0, ∞), the above procedure creates exponentially distributed offspring solutions in the interval (−∞, ∞). From Step 4 it is clear that EX produces offspring solutions on either side of the parents with equal probability. It is quite obvious from the above description that LX is similar to EX, which encompasses two exponential distributions with the location parameter at the position of the parent solution.
2.2. Bounded exponential distribution and bounded exponential crossover (BEX)

The exponential distribution is supported on the interval [0, ∞). We modify the density function of the exponential distribution in such a way that the distribution is supported only on the finite interval [0, a], i.e. zero probability is assigned to the interval (a, ∞). To accomplish this we divide the density function of the exponential distribution by a factor equal to the cumulative probability of the exponential distribution over the bounds [0, a].
The density function of the modified distribution, named the bounded exponential distribution (0 ≤ x ≤ a), is given by

f(x) = \frac{\exp(-x/\lambda)}{\lambda\left(1 - \exp(-a/\lambda)\right)}, \quad 0 \le x \le a,

and its distribution function is given by

F(x) = \frac{1 - \exp(-x/\lambda)}{1 - \exp(-a/\lambda)}, \quad 0 \le x \le a.
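Sampling from this distribution only requires the inverse of F. The short Python sketch below (ours, with illustrative parameter values) draws bounded exponential variates by inverting the distribution function; every draw stays inside [0, a].

```python
import math
import random

def sample_bounded_exponential(lam, a):
    """Inverse-CDF sampling: solve F(x) = u for x, with u ~ U(0, 1)."""
    u = random.random()
    return -lam * math.log(1.0 - u * (1.0 - math.exp(-a / lam)))

samples = [sample_bounded_exponential(1.0, 2.25) for _ in range(1000)]
assert all(0.0 <= s <= 2.25 for s in samples)   # zero probability outside [0, a]
```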

For illustration, the density functions of the exponential distribution and the bounded exponential distribution for λ = 1 and a = 2.25 are shown in Fig. 2. It is clear from Fig. 2 that for a fixed value of λ > 0 (1 in this case) the density function of the bounded exponential distribution lies above the density function of the exponential distribution in the interval [0, a], and the bounded exponential distribution has zero probability outside the interval [0, a].
The bounded exponential distribution is used to produce offspring solutions within the range [x_i^l, x_i^u] (the box constraint for the ith decision variable), which LX does not guarantee. It is interesting to note that the parameter which follows the bounded exponential distribution includes an additional factor a which depends not only upon the variable bounds [x_i^l, x_i^u] but also on the position of the parent. So, in order to produce offspring solutions within the bounds [x_i^l, x_i^u] corresponding to each pair of parent components x_i and y_i,
Fig. 2. Density functions of the exponential distribution and the bounded exponential distribution (λ = 1.0, a = 2.25).

a_1 and a_2 need to be chosen appropriately, where a_1 and a_2 are the truncations on the left and right sides of the parent. Corresponding to x_i and y_i, the offspring solutions are found using two different parameters β_i^x and β_i^y, respectively. The resulting crossover operator is named the bounded exponential crossover (BEX). The procedure for finding β_i^x and β_i^y is as follows.
Steps involved in generating offspring from parent solutions using BEX

STEP 1: Given two parent solutions x = (x_1, x_2, ..., x_n) ∈ ℝ^n and y = (y_1, y_2, ..., y_n) ∈ ℝ^n and a scaling parameter λ > 0.
while i ≤ n do
STEP 2: Generate a uniformly distributed random number u_i ∈ U(0, 1).
STEP 3: Evaluate β_i^x and β_i^y by inverting the bounded exponential distribution function, so that β_i^x and β_i^y follow the bounded exponential distribution:

\beta_i^x = \begin{cases} \lambda \log\left\{\exp\left(\frac{x_i^l - x_i}{\lambda (y_i - x_i)}\right) + u_i\left(1 - \exp\left(\frac{x_i^l - x_i}{\lambda (y_i - x_i)}\right)\right)\right\}, & \text{if } r_i \le 0.5, \\ -\lambda \log\left\{1 - u_i\left(1 - \exp\left(\frac{x_i - x_i^u}{\lambda (y_i - x_i)}\right)\right)\right\}, & \text{if } r_i > 0.5, \end{cases}

\beta_i^y = \begin{cases} \lambda \log\left\{\exp\left(\frac{x_i^l - y_i}{\lambda (y_i - x_i)}\right) + u_i\left(1 - \exp\left(\frac{x_i^l - y_i}{\lambda (y_i - x_i)}\right)\right)\right\}, & \text{if } r_i \le 0.5, \\ -\lambda \log\left\{1 - u_i\left(1 - \exp\left(\frac{y_i - x_i^u}{\lambda (y_i - x_i)}\right)\right)\right\}, & \text{if } r_i > 0.5, \end{cases}

where r_i ∈ U(0, 1) is a uniformly distributed random number and x_i^l and x_i^u are the lower and upper bounds of the ith decision variable.
STEP 4: The offspring solutions are generated using

\xi_i = x_i + \beta_i^x (y_i - x_i), \qquad \eta_i = y_i + \beta_i^y (y_i - x_i).

End while

In the above procedure it is assumed that x_i < y_i; a straightforward modification of the above equations can be made for the case x_i > y_i. The procedure assigns zero probability to generating an offspring solution outside the range [x_i^l, x_i^u] of the decision variable. It is obvious that for x_i^l = −∞ and x_i^u = ∞, BEX reduces to LX. For a fixed pair of parents p_1 = 1.75, p_2 = 3.25 and fixed finite variable bounds x^l = 0.80, x^u = 4.05, the distribution of the offspring solutions for scale parameter λ = 0.25 is shown in Fig. 3. It is clear from the same figure that the distributions of the two offspring solutions have some common area around the mean of the parents, which indicates that there is a slightly higher probability of producing an offspring around the mean of the parent solutions.
Over the years, crossover operators have broadly been developed following two approaches, namely mean centric and parent centric operators; the former generates offspring in the vicinity of the mean of the participating parents, while the latter generates offspring solutions near each of the parents. Mean centric and parent centric approaches are discussed in fair detail in Deb et al. [12], Herrera et al. [26], Ono et al. [50] and Sinha et al. [56]. Although LX is a parent centric crossover operator, it nevertheless does not assign zero probability across the mean of the participating parents, unlike SBX defined in Deb and Agrawal [13]. The fundamental feature of the LX operator is that it has traits of both mean centric and parent centric crossover.
Fig. 3. Distribution of the offspring solutions using the exponential distribution and the bounded exponential distribution for p_1 = 1.75, p_2 = 3.25, x^l = 0.80, x^u = 4.05, λ = 0.3.

Further, the Laplace distribution being a member of the heavy tailed family of distributions, the LX operator generates offspring in the vicinity of the parent solutions with high probability as compared to its capability to generate offspring far from the parents. This capability is further enhanced by using the bounded exponential distribution. The exact mathematical formulation of the bounded exponential distribution is given in Appendix A. It is easy to check that the probability of generating offspring solutions in the vicinity of the parent is increased by the bounded exponential distribution when compared with the Laplace distribution.
Let X be a random variable which follows the Laplace distribution; for comparison, consider the location parameter to be 0. Let P1 be a parent solution located at the origin. The probability of generating offspring solutions near P1, i.e. in the ε-neighborhood N_ε(P1), is given by

P_1(0 - \epsilon \le X \le 0 + \epsilon) = \int_{-\epsilon}^{0} \frac{1}{2\lambda}\, e^{x/\lambda}\, dx + \int_{0}^{\epsilon} \frac{1}{2\lambda}\, e^{-x/\lambda}\, dx = 1 - e^{-\epsilon/\lambda}, \quad \epsilon, \lambda > 0.

Similarly, if X is a random variable which follows the bounded exponential distribution, the probability of generating offspring solutions in the vicinity of the parent is given by

P_2(0 - \epsilon \le X \le 0 + \epsilon) = \int_{-\epsilon}^{0} \frac{e^{x/\lambda}}{2\lambda\left(1 - e^{a_1/\lambda}\right)}\, dx + \int_{0}^{\epsilon} \frac{e^{-x/\lambda}}{2\lambda\left(1 - e^{-a_2/\lambda}\right)}\, dx = \frac{1 - e^{-\epsilon/\lambda}}{2}\left[\frac{1}{1 - e^{a_1/\lambda}} + \frac{1}{1 - e^{-a_2/\lambda}}\right], \quad \epsilon, \lambda, a_2 > 0,\; a_1 < 0.

The increase in the probability of generating an offspring solution in the vicinity of P1 is given by

P_2 - P_1 = \frac{1 - e^{-\epsilon/\lambda}}{2}\left[\frac{1}{1 - e^{a_1/\lambda}} + \frac{1}{1 - e^{-a_2/\lambda}} - 2\right] = \frac{\left(1 - e^{-\epsilon/\lambda}\right)\left(e^{a_1/\lambda} + e^{-a_2/\lambda} - 2 e^{(a_1 - a_2)/\lambda}\right)}{2\left(1 - e^{a_1/\lambda}\right)\left(1 - e^{-a_2/\lambda}\right)}.

It is also clear from this expression that P_2 - P_1 > 0.
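A quick numerical check of this inequality is sketched below (our own illustration, not part of the original study); for any ε ≤ min(−a_1, a_2), the ε-neighborhood probability under the bounded exponential exceeds the Laplace value.

```python
import math

def p_laplace(eps, lam):
    return 1.0 - math.exp(-eps / lam)

def p_bounded(eps, lam, a1, a2):                 # a1 < 0 < a2 are the truncation points
    return 0.5 * (1.0 - math.exp(-eps / lam)) * (
        1.0 / (1.0 - math.exp(a1 / lam)) + 1.0 / (1.0 - math.exp(-a2 / lam)))

for eps, lam, a1, a2 in [(0.1, 0.5, -1.0, 2.0), (0.05, 0.25, -0.5, 0.5), (0.2, 1.0, -3.0, 1.5)]:
    assert p_bounded(eps, lam, a1, a2) > p_laplace(eps, lam)   # P2 - P1 > 0
```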


The superiority of the bounded exponential distribution for generating offspring solutions can also be judged by using the second central moment, or variance, of the distribution. The first and second moments of the bounded exponential distribution can be derived as

\mu = E[X] = -\frac{1}{2}\left[\frac{a_1 e^{a_1/\lambda}}{1 - e^{a_1/\lambda}} + \frac{a_2 e^{-a_2/\lambda}}{1 - e^{-a_2/\lambda}}\right],   (8)

E[X^2] = \frac{\left(-a_1^2 + 2 a_1 \lambda - 2\lambda^2\right) e^{a_1/\lambda} + 2\lambda^2}{2\left(1 - e^{a_1/\lambda}\right)} + \frac{\left(-a_2^2 - 2 a_2 \lambda - 2\lambda^2\right) e^{-a_2/\lambda} + 2\lambda^2}{2\left(1 - e^{-a_2/\lambda}\right)}.   (9)
Using Eqs. (8) and (9), the variance can easily be calculated as σ²(a_1, a_2) = E[X²] − μ². For a fixed value of λ > 0, we plotted the surface of the variance of the bounded exponential distribution over sufficiently large ranges of a_1 and a_2. The plot clearly indicates that the variance of the bounded exponential distribution is bounded by the variance of the Laplace distribution (viz. 2λ²). Fig. 4 shows that, for the same value λ = 2, the variance of the bounded exponential distribution over different combinations of a_1 and a_2 is bounded by the variance of the Laplace distribution (viz. 8); a small numerical check of this bound is sketched after the list below. The lower variance of the bounded exponential suggests that it samples offspring solutions more closely around the mean when compared with the Laplace distribution. Adapting the variance of the crossover distribution (a.k.a. the expansion rate of the crossover operator) also helps in avoiding premature convergence of the algorithm, as suggested by Akimoto et al. [1]. From the above discussion it is also apparent that:

(1) BEX does not create offspring symmetrically with respect to the positions of the parents.
(2) For a fixed pair of parents p_1 and p_2, the spread of the offspring is proportional to λ, as shown in Fig. 5.
(3) For a fixed λ, BEX distributes offspring in proportion to the spread of the parents.
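The claim that the variance of the bounded exponential stays below the Laplace variance 2λ² can be checked numerically from Eqs. (8) and (9); the following small Python sketch (ours) does so for the setting of Fig. 4.

```python
import math

def bounded_exp_variance(lam, a1, a2):
    """Variance computed from Eqs. (8) and (9), with a1 < 0 < a2."""
    e1, e2 = math.exp(a1 / lam), math.exp(-a2 / lam)
    mean = -0.5 * (a1 * e1 / (1.0 - e1) + a2 * e2 / (1.0 - e2))
    second = ((-a1**2 + 2*a1*lam - 2*lam**2) * e1 + 2*lam**2) / (2.0 * (1.0 - e1)) \
           + ((-a2**2 - 2*a2*lam - 2*lam**2) * e2 + 2*lam**2) / (2.0 * (1.0 - e2))
    return second - mean**2

lam = 2.0                                        # Laplace variance is 2*lam**2 = 8
for a1 in (-30.0, -10.0, -1.0):
    for a2 in (0.5, 5.0, 20.0):
        assert bounded_exp_variance(lam, a1, a2) < 2.0 * lam**2
```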
With this initial analytical evidence, we compared the BEX and LX operators by using each in a genetic algorithm without any mutation strategy. For the comparison, we collected three statistics, namely:

• Feasibility count (number of feasible solutions).
• Mean fitness values.
• Normalized average of feasible solutions (NAF), defined as the ratio of the average feasible objective value to the best known optimal value.

All statistics are calculated for each generation in all fifty runs of both GAs, using solely the BEX and LX operators, and are collected for all twenty-five problems considered in the current study (refer to Table 3). Since BEX is a modification of the LX operator, comparing both GAs on the statistics of just one run and/or a single problem may not be appropriate for judging the superiority of one operator over the other. For this reason, we gather the empirical data for all fifty independent runs and for all twenty-five problems. The collected data is further averaged over each generation across all fifty runs of both GAs.

Fig. 4. Surface plot of the variance of the bounded exponential distribution for fixed λ = 2 and −30 ≤ a_1 < 0, 0 < a_2 ≤ 20.

Fig. 5. Spread of the offspring solutions using the bounded exponential distribution for λ = 0.15 and λ = 0.30.

It is observed that in most of the problems the GA with the BEX operator scores over the GA with the LX operator. Figs. 6–11 show the graphs of all three statistics for problems 13 and 20 (refer to Table 3). Consolidated results containing the percentage improvement in the number of successful runs, and in the average and standard deviation of the successful runs after a fixed number of generations, are summarized in the bar plot shown in Fig. 12, where bars above zero indicate better performance of the GA with BEX and bars below zero indicate better performance of the GA with LX. In addition, the performance index defined in Bharti [3] is plotted for both GAs and also strongly supports the better performance of the BEX operator, as shown in Figs. 13–15.
It is worth mentioning here that all the parameters (refer to Table 2) and the initial population are kept the same for both the GAs with the BEX and LX operators in each independent run.
3. Constraint handling in genetic algorithms

GAs are primarily designed for finding the global optimal solution of unconstrained optimization problems and are not originally capable of handling constraints. To employ genetic algorithms for solving constrained optimization problems, many methods have been proposed in the literature. Michalewicz and Schoenauer [42] classified the techniques for handling constraints into four groups:

(i) Preserving feasibility of solutions.
(ii) Making a distinction between feasible and infeasible solutions.

(iii) Penalty functions.
(iv) Hybrid methods.

Fig. 6. Average feasibility count versus generation number for the GA with the BEX and LX operators (without mutation) for problem 13.

Fig. 7. Average of the mean fitness value versus generation number for the GA with the BEX and LX operators (without mutation) for problem 13.
Among these, the penalty based approach is the most popular and successful method for tackling constraints; the constrained problem is transformed into an unconstrained one by combining the objective function and the constraints. The general formulation of the modified problem using a penalty function is

\phi(x) = f(x) + \sum_{k=1}^{m} R_k\left[\max\{0, g_k(x)\}\right]^{\alpha} + \sum_{j=1}^{p} r_j\,\left|h_j(x)\right|^{\beta},   (10)

where φ(x) is the modified objective function to be optimized, R_k, k = 1, 2, 3, ..., m, and r_j, j = 1, 2, 3, ..., p, are penalty parameters, and α, β are positive constants. Penalty based approaches can be divided into the following two categories.


Fig. 8. Normalized average of feasible solutions (NAF) versus generation number for the GA with the BEX and LX operators (without mutation) for problem 13.

Fig. 9. Average feasibility count versus generation number for the GA with the BEX and LX operators (without mutation) for problem 20.

3.1. Penalty functions with penalty parameters

The simplest way to pick penalty parameters is to choose static penalty parameters based on some criteria which are fixed in the beginning and kept constant throughout the search process (static penalties). Homaifar et al. [28] suggested a penalty function approach in which the penalty parameter depends upon the level of constraint violation. Some more instances of the static penalty strategy are discussed in Kuri Morales and Quezada [34] and Hoffmeister and Sprave [27]. In all these methods the modified objective function contains a number of additional parameters in the form of penalty parameters, which remain problem specific and are not easy to generalize.
Unlike the static penalty approach, there are several other techniques which update the penalty parameter(s) during the evolutionary search process (dynamic penalties). The penalty parameter increases as the search process progresses.

Fig. 10. Average of the mean fitness value versus generation number for the GA with the BEX and LX operators (without mutation) for problem 20.

Fig. 11. Normalized average of feasible solutions (NAF) versus generation number for the GA with the BEX and LX operators (without mutation) for problem 20.

Joines and Houck [30], Kazarlis and Petridis [31] and Crossley and Williams [9] tested several strategies where the penalty parameters depend upon the current generation number. Michalewicz and Attia [40], Carlson et al. [57] and Joines and Houck [30] used the method of annealing penalties, which is based on the concept of simulated annealing and in which there is a conditional modification of the penalty parameter in each generation. In this method also, the penalty parameter is a non-decreasing function of the current generation. Dynamic and annealing penalty methods both carry shortcomings similar to those of the static penalty based approaches; the initialization and update of the penalty parameters remain the most critical issues with these approaches.
Another interesting class of penalty function methods relies upon the current dynamics of the evolutionary search process. Bean and Hadj-Alouane [2], Hadj-Alouane and Bean [25], Smith and Tate [58], Coit and Smith [6], Coit et al. [7], Norman and Smith [49], Yokota et al. [64], Gen and Cheng [19], Gen and Cheng [20] and Meittinen et al. [39] proposed approaches where the penalty parameter is dynamically updated depending upon the fitness of the current best solution.


Fig. 12. Bar plot showing consolidated results for the GA with the BEX and LX operators without using any mutation operator.

Fig. 13. Performance index of the GAs with BEX and LX alone (without mutation) when k_1 = w and k_2 = k_3 = (1 − w)/2.

Fig. 14. Performance index of the GAs with BEX and LX alone (without mutation) when k_2 = w and k_1 = k_3 = (1 − w)/2.


Fig. 15. Performance index of the GAs with BEX and LX alone (without mutation) when k_3 = w and k_1 = k_2 = (1 − w)/2.

In Rasheed [54], the search begins with a small penalty parameter, and when the best fit solution becomes infeasible the penalty parameter is assigned a relatively larger value. The problem with these methods is again the manner of choosing and updating the penalties.
3.2. Penalty functions without a penalty parameter

The simplest approach for handling constraints is to discard infeasible solutions generated during the search process. This approach is called the death penalty and can be implemented easily by assigning a zero fitness value to infeasible solutions. Though this method is very simple to apply, it may be computationally expensive for problems where the feasible region is small compared to the entire search space. Hoffmeister and Sprave [27], Michalewicz [44], Michalewicz and Nazhiyath [41], Michalewicz and Schoenauer [42] and later Coit and Smith [6] revealed that, in general, the death penalty method performs poorly compared to methods which make use of penalties.
Deb [10] proposed a parameter independent penalty function approach which makes use of parameter space niching for preserving diversity. In an earlier study, Powell and Skolnick [52] also used a similar approach for constraint handling, but Deb's approach does not require any penalty parameter. The method is quite generic in nature and may be applied to any population based technique. Oyman et al. [51] used Deb's constraint handling method in conjunction with an evolution strategy and compared its performance against the death penalty method; the results obtained using Deb's technique were found to be better than those of the death penalty approach. Kukkonen and Lampinen [33] applied the same criteria to differential evolution (DE) [53]. In another approach, Coello Coello and Mezura Montes [5] proposed a dominance based tournament selection approach for constraint handling which does not make use of any penalty parameter or diversity preserving mechanism. Along similar lines, Lemonge and Barbosa [35], Lemonge and Barbosa [36] and Lemonge and Barbosa [37] proposed an adaptive penalty method which adapts using the information contained in the current population during the search process.
3.3. Parameter free penalty method

Deb [10] proposed a constraint handling technique which overcomes the difficulty of choosing the penalty parameter by suitably modifying the algorithm proposed by Powell and Skolnick [52]. The approach uses the following fitness function (for minimization problems):

F(x) = \begin{cases} f(x), & \text{if } x \text{ is feasible}, \\ f_{\max} + \sum_{j=1}^{m} \langle g_j(x)\rangle + \sum_{k=1}^{p} \left|h_k(x)\right|, & \text{otherwise}, \end{cases}   (11)

where the inequality constraints are written in the form g_j(x) ≥ 0, 1 ≤ j ≤ m, the equality constraints as h_k(x) = 0, 1 ≤ k ≤ p, ⟨g_j(x)⟩ denotes the absolute violation of the jth inequality constraint, and f_max is the worst feasible objective function value in the population.

Some of the advantages of this approach are:

• For infeasible individuals the objective function is not evaluated.
• The objective function value and the constraint violation are never combined for any solution.
• No penalty parameter is required.
• It can be applied very easily in any population based algorithm.


The method makes use of tournament selection to compare the solutions at hand. The winner of the tournament is decided by the following rules:

• If both solutions are feasible, then the one with the better objective function value is selected.
• If one of the solutions is feasible and the other is infeasible, then the feasible solution is selected.
• If both solutions are infeasible, then the one with the lower constraint violation is selected.

Equality constraints are treated by converting them into inequality constraints as δ − |h_k(x)| ≥ 0 for all k, where δ is the amount of relaxation given to the equality constraints in order to allow the search algorithm some room for exploring the relaxed feasible region.
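The rules above translate directly into code. The following Python sketch (ours) implements the parameter free scheme for a minimization problem with constraints written as g(x) ≥ 0 and h(x) = 0; the helper names and the default value of δ are illustrative and are not taken from Deb [10].

```python
def constraint_violation(x, ineq, eq, delta=1e-3):
    """Total violation; equalities are relaxed to |h(x)| <= delta as described above."""
    v = sum(max(0.0, -g(x)) for g in ineq)
    v += sum(max(0.0, abs(h(x)) - delta) for h in eq)
    return v

def fitness(x, f, ineq, eq, f_max_feasible):
    """F(x) in the spirit of Eq. (11): the objective is never evaluated for infeasible points."""
    v = constraint_violation(x, ineq, eq)
    return f(x) if v == 0.0 else f_max_feasible + v

def tournament_winner(a, b, f, ineq, eq):
    """Pairwise tournament following the three rules listed above."""
    va, vb = constraint_violation(a, ineq, eq), constraint_violation(b, ineq, eq)
    if va == 0.0 and vb == 0.0:
        return a if f(a) <= f(b) else b          # both feasible: better objective wins
    if va == 0.0 or vb == 0.0:
        return a if va == 0.0 else b             # exactly one feasible: it wins
    return a if va <= vb else b                  # both infeasible: smaller violation wins

# Toy usage: minimise x0 + x1 with g(x) = x0 - 1 >= 0 and h(x) = x1 = 0.
f = lambda x: x[0] + x[1]
ineq, eq = [lambda x: x[0] - 1.0], [lambda x: x[1]]
print(tournament_winner([1.2, 0.0], [0.5, 0.0], f, ineq, eq))   # the feasible point wins
```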
4. Proposed GA

In the present work, an attempt has been made to improve the performance of the existing genetic algorithm LX-PM, which comprises the Laplace crossover and power mutation. To achieve this goal, the LX operator has been suitably modified by increasing its search capabilities. The resulting crossover operator is called the bounded exponential crossover (BEX). We propose a new RCGA, named BEX-PM, which is the combination of the BEX and power mutation operators.
The properties of the bounded exponential crossover are discussed in Section 2.2. Unlike LX, BEX generates offspring solutions within the variable bounds. Further, the probability of sampling an offspring solution in the vicinity of the parent solutions is higher for BEX than for the LX operator. Preliminary empirical analysis also gives promising results for BEX.
In a GA, the mutation operator is used to maintain adequate diversity in the population. It provides genetic drift during the search process to jump out of local or suboptimal solutions and helps in avoiding premature convergence. A variety of mutation operators have been proposed in the GA literature. Here we use power mutation as the mutation operator. Power mutation (PM) is based on the power distribution and is discussed in Deep and Thakur [15]. The perturbation produced by power mutation depends upon the parameter p (index of mutation), and the probability of producing a mutated solution on either side of a solution is proportional to its distance from the respective variable bound.
The earlier work of Deep and Thakur [15] showed that the LX operator together with power mutation gives better results when compared with the other GAs considered there. Since BEX is a modification of the LX operator, power mutation is therefore the appropriate choice of mutation strategy (see Table 1).
The performance of the resulting GA, called BEX-PM, is compared against some well known GAs from the literature, viz. LX-PM [14,15], HX-NUM [46], HX-MPTM [38] and SBX-POL [10]. All the GAs considered in this study are structurally similar to the Simple Genetic Algorithm (SGA) of Goldberg [23]. Table 2 summarizes the values of the various parameters of the operators used in all the GAs.
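To make the overall structure concrete, the sketch below (ours) shows one generation of a GA of the BEX-PM type: tournament selection using the constraint handling rules of Section 3.3, crossover applied with probability p_c and mutation with probability p_m. The two operator bodies are deliberately simple stand-ins, not the BEX formulas of Section 2.2 or the power mutation of Deep and Thakur [15].

```python
import random

def crossover(p1, p2):                       # stand-in for BEX (see Section 2.2)
    w = random.random()
    c1 = [w * a + (1 - w) * b for a, b in zip(p1, p2)]
    c2 = [(1 - w) * a + w * b for a, b in zip(p1, p2)]
    return c1, c2

def mutate(x, lows, ups):                    # stand-in for power mutation [15]
    j = random.randrange(len(x))
    y = list(x)
    y[j] = random.uniform(lows[j], ups[j])
    return y

def one_generation(pop, better, lows, ups, pc=0.5, pm=0.1):
    """better(a, b) returns the tournament winner under the rules of Section 3.3."""
    pool = [better(random.choice(pop), random.choice(pop)) for _ in pop]
    children = []
    for i in range(0, len(pool) - 1, 2):
        a, b = pool[i], pool[i + 1]
        if random.random() < pc:
            a, b = crossover(a, b)
        children.append(mutate(a, lows, ups) if random.random() < pm else a)
        children.append(mutate(b, lows, ups) if random.random() < pm else b)
    if len(pool) % 2:                        # carry the last selected parent over unchanged
        children.append(pool[-1])
    return children
```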
5. The experimental setup

The chosen test bed consists of a set of twenty-five benchmark test problems from the global optimization literature having different difficulty levels and multimodality. Out of these twenty-five problems, those of maximization type are first converted into equivalent minimization problems by negating the objective function, and the results in the tables are for the converted problems. All the problems are summarized in Table 3. The test problems considered for the study are chosen with varying dimension, type of objective (linear, non-linear, quadratic) and type of constraints (linear equality and inequality, non-linear equality and inequality constraints). References to the exact formulation of each of the problems are cited in Table 3.
The parameter settings suggested in Deep and Thakur [15] for LX-PM are no longer appropriate for the present set of problems. The main reason for this is that the problems considered in the present work are constrained, and the nature of the objective functions and constraints influences the performance of the RCGAs. To suggest a suitable combination of parameters for each of the RCGAs, an extensive experiment was carried out: an empirical study of various possible combinations of the crossover probability, mutation probability and the indexes occurring in the expressions of the crossover and mutation operators. The final parameter values for all the RCGAs are given in Table 2. These parameter settings are chosen because they were repeatedly found to give good results for most of the problems, and are thus suitable if we look at the overall performance of the algorithms on the present test suite in general. However, it cannot be claimed that these parameter settings would work best for any arbitrarily chosen problem.
Table 1
Summary of operators used in this study.

Operator     BEX-PM        LX-PM         HX-NUM        HX-MPTM       SBX-POL
Selection    Tournament    Tournament    Tournament    Tournament    Tournament
Crossover    BEX           LX            HX            HX            SBX
Mutation     PM            PM            NUM           MPTM          POL
Elitism      No            No            Yes           Yes           Yes



Table 2
Values of the various parameters of the operators used in the GAs (t being the current generation number).

Algorithm   Crossover probability   Index of crossover   Mutation probability   Index of mutation
BEX-PM      0.50                    0.65                 1/No. of variables     100 + t
LX-PM       0.50                    0.65                 1/No. of variables     100 + t
HX-NUM      0.75                    –                    0.13                   4
HX-MPTM     0.40                    –                    0.08                   4
SBX-POL     0.40                    0.25                 1/No. of variables     100 + t

Table 3
Summary of the problems considered in the study.

Problem No.   No. of variables   Type of obj. function   Linear eq.   Linear ineq.   Non-linear eq.   Non-linear ineq.   Reference
1                                Linear                                                                                  Floudas and Pardalos [18] pr. 3.1
2                                Quadratic                                                                               Floudas and Pardalos [18] pr. 3.2
3                                Quadratic                                                                               Floudas and Pardalos [18] pr. 3.3
4                                Non-linear                                                                              Floudas and Pardalos [18] pr. 4.4
5                                Linear                                                                                  Floudas and Pardalos [18] pr. 4.6
6             4                  Non-linear              0            2              3                0                  Michalewicz et al. [45] G5
7             5                  Non-linear              0            0              3                0                  Kim and Myung [32] pr. 6
8             2                  Non-linear              0            2              0                0                  Michalewicz [43] test #8
9             10                 Non-linear              3            0              0                0                  Michalewicz [43] test #2
10            2                  Non-linear              0            0              0                2                  Kim and Myung [32] pr. 1
11            2                  Quadratic               0            0              0                2                  Kim and Myung [32] pr. 4
12            7                  Non-linear              0            0              0                4                  Michalewicz et al. [45] G9
13            13                 Quadratic               0            9              0                0                  Michalewicz et al. [45] G1
14            2                  Non-linear              0            0              0                2                  Michalewicz et al. [45] G6
15            10                 Quadratic               0            3              0                5                  Michalewicz et al. [45] G7
16            2                  Quadratic               0            1              0                1                  Kim and Myung [32] pr. 5
17            6                  Quadratic               0            2              0                0                  Floudas and Pardalos [18] pr. 2.2
18            13                 Quadratic                                                                               Floudas and Pardalos [18] pr. 2.3
19                               Quadratic                                                                               Floudas and Pardalos [18] pr. 2.4
20            10                 Quadratic                                                                               Floudas and Pardalos [18] pr. 2.6
21            20                 Quadratic                            10                                                 Floudas and Pardalos [18] pr. 2.7/1
22            20                 Quadratic                            10                                                 Floudas and Pardalos [18] pr. 2.7/2
23            20                 Quadratic                            10                                                 Floudas and Pardalos [18] pr. 2.7/3
24            5                  Non-linear              0            1              0                37                 Deb [10] Test problem 2
25            4                  Non-linear              0            1              0                4                  Reklaitis et al. [24]

The population size for all the algorithms is ten times the number of decision variables. Fifty independent runs with different initial populations are conducted with each algorithm. Tournament selection incorporating the parameter free penalty approach is applied in all the GAs in this study. Elitism (if used) is applied with size one, i.e. the best individual of the two consecutive (current and previous) generations is preserved in the current generation. The value of a for LX is fixed at zero and, for HX, k is fixed at 4. The value of δ in the case of equality constraints is taken as 10^−3. The termination criterion for all the GAs is a maximum of 4000 generations. For a fixed problem and a fixed algorithm, if the best feasible objective function value recorded in a run falls within a 1% range of the best known feasible objective function value of that problem, the run is said to be a success.
To judge the efficiency, accuracy and reliability of the algorithms, the execution time, the average number of function evaluations of successful runs, the number of successful runs (success rate), and the minimum, maximum, average and standard deviation of the objective function values of the feasible runs are recorded. All algorithms are coded in C++ and all computational work is carried out on a PIV 2.8 GHz machine on the WINXP platform.
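The success rule can be expressed as a one-line check; the helper below (ours) treats the 1% range as a relative tolerance around the best known feasible value, and the numeric example simply illustrates the call.

```python
def is_successful_run(best_feasible, best_known, tol=0.01):
    """A run succeeds if its best feasible value lies within 1% of the best known value."""
    return abs(best_feasible - best_known) <= tol * abs(best_known)

print(is_successful_run(-30660.1, -30665.539))   # True: well inside the 1% band
```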


Table 4
Feasibility count of the algorithms.

Problem No.   BEX-PM   LX-PM   HX-NUM   HX-MPTM   SBX-POL
1             50       50      50       50        50
2             50       50      50       50        50
3             50       50      50       50        50
4             50       50      50       50        50
5             50       50      50       50        50
6             39       44      0        0         0
7             50       50      13       10        50
8             50       50      50       50        50
9             50       50      0        1         50
10            50       50      50       50        50
11            50       50      50       50        50
12            50       50      50       50        50
13            50       50      50       4         50
14            50       47      50       50        50
15            50       50      50       50        50
16            50       50      50       50        50
17            50       50      50       50        50
18            50       50      50       50        50
19            50       50      50       50        50
20            50       50      50       50        50
21            50       50      50       50        50
22            50       50      50       50        50
23            50       50      50       50        50
24            50       50      50       50        50
25            50       50      50       50        50
Total         1239     1241    1113     1065      1200

6. Results and discussion

To get an idea of the ability of the algorithms to deal with constraints, we record the feasibility count of all algorithms on all the problems. The feasibility count of a particular algorithm on a particular problem is the number of runs in which the algorithm was able to locate at least one feasible solution. Table 4 contains the feasibility counts of the five GAs (BEX-PM, LX-PM, HX-NUM, HX-MPTM and SBX-POL) for the twenty-five test problems considered in this study. As far as the total feasibility count is concerned, BEX-PM outperforms all other algorithms except LX-PM. BEX-PM and LX-PM have approximately identical feasibility counts and are able to find feasible solutions for all problems, whereas SBX-POL, HX-NUM and HX-MPTM could not find any feasible solution for problem 6. Also, HX-NUM was unable to locate any feasible solution for problem 9.
Table 5 contains the number of successful runs, the average number of function evaluations and the average execution time of the successful runs for all the GAs considered. All algorithms were able to locate the global optima of seven problems (problems 5, 8, 10, 16, 17, 18 and 19) in all fifty runs. None of the GAs is able to solve two of the problems (problems 7 and 9), so we limit our discussion to the set of 23 problems for which at least one algorithm achieves a positive success rate.
The reason for the zero success rate on problems 7 and 9 may be the higher number of equality constraints, which makes the feasible region so small that none of the algorithms is able to explore even the relaxed constraints; further, the nonlinearity of the objective function makes it more difficult to locate the global optima.
To compare the quality of the solutions obtained for each test problem by each of the algorithms, we record the minimum, maximum, mean and standard deviation of the best feasible objective function values achieved after a fixed number of generations. For comparing the performance of the algorithms we define two criteria.
Criterion 1 is used to compare the performance of the GAs on the basis of the following information:

• Number of successful runs.
• Average number of function evaluations of the successful runs.
• Average execution time of the successful runs.

GA A is said to perform better than GA B on criterion 1 for a particular problem if

(i) the success rate of GA A = the success rate of GA B,
(ii) the average number of function evaluations of the successful runs of GA A ≤ that of GA B, and
(iii) the average execution time of the successful runs of GA A ≤ that of GA B.



Table 5
Number of successful runs, average number of function evaluations and average execution time of the successful runs of all algorithms.

(a) Number of successful runs (out of 50 runs)

Problem No.   BEX-PM   LX-PM   HX-NUM   HX-MPTM   SBX-POL
1             5        3       7        5         0
2             50       49      50       50        50
3             50       50      47       50        27
4             2        0       20       33        16
5             50       50      50       50        50
6             17       21      0        0         0
7             0        0       0        0         0
8             50       50      50       50        50
9             0        0       0        0         0
10            50       50      50       50        50
11            50       29      50       50        50
12            50       50      45       50        50
13            50       50      0        0         46
14            50       40      50       13        17
15            31       6       0        15        1
16            50       50      50       50        50
17            50       50      50       50        50
18            50       50      50       50        50
19            50       50      50       50        50
20            48       48      41       48        31
21            50       29      0        0         46
22            50       40      0        0         50
23            50       29      0        0         46
24            50       50      50       49        50
25            2        0       8        0         2

(b) Average number of function evaluations of the successful runs

Problem No.   BEX-PM   LX-PM    HX-NUM   HX-MPTM   SBX-POL
1             28000    31387    297010   100842    –
2             3246     3769     919      2911      5564
3             2320     9648     1034     4527      23585
4             113166   –        17976    17438     102586
5             2244     1364     1283     813       2437
6             30512    5901     –        –         –
7             –        –        –        –         –
8             225      516      440      255       382
9             –        –        –        –         –
10            713      1992     295      631       617
11            1570     630      3067     2511      844
12            11364    6280     133313   8215      5043
13            9974     64332    –        –         15536
14            21811    41424    28694    4182      55656
15            226708   60018    –        240284    367501
16            507      1556     97       510       533
17            5285     6451     1030     3669      5869
18            3768     61033    3181     13048     6828
19            14030    19733    4856     6678      53904
20            10344    34405    4405     35273     29942
21            123694   330194   –        –         268648
22            128099   345820   –        –         257333
23            141356   375560   –        –         290092
24            12860    8535     15369    6306      30464
25            28355    –        5361     –         121

(c) Average execution time of the successful runs (in seconds)

Problem No.   BEX-PM   LX-PM    HX-NUM   HX-MPTM   SBX-POL
1             0.0560   0.0623   0.9150   0.2410    –
2             0.0069   0.0073   0.0030   0.0070    0.0090
3             0.0059   0.0219   0.0040   0.0130    0.0490
4             0.2265   –        0.0280   0.0250    0.1590
5             0.0050   0.0031   0.0030   0.0040    0.0040
6             0.0651   0.0010   –        –         –
7             –        –        –        –         –
8             0.0006   0.0006   0.0010   0.0010    0.0010
9             –        –        –        –         –
10            0.0009   0.0025   0.0010   0.0010    0.0010
11            0.0022   0.0005   0.0060   0.0040    0.0010
12            0.0356   0.0181   0.4900   0.0270    0.0120
13            0.0362   0.1972   –        –         0.0380
14            0.0353   0.0625   0.0630   0.0070    0.0760
15            0.8801   0.1953   –        0.8360    1.2500
16            0.0009   0.0022   0.0010   0.0010    0.0010
17            0.0109   0.0122   0.0030   0.0080    0.0090
18            0.0134   0.1903   0.0150   0.0420    0.0170
19            0.0434   0.0606   0.0210   0.0770    0.1330
20            0.0342   0.0967   0.0180   0.1080    0.0750
21            0.7641   1.7582   –        –         1.3760
22            0.7837   1.8410   –        –         1.2780
23            0.8725   1.9994   –        –         1.4870
24            0.0350   0.0209   0.0460   0.0170    0.0690
25            0.0630   –        0.0180   –         0.0010
If we replace condition (i) above by

Success rate of GA A > Success rate of GA B,

then GA A is said to perform strictly better than GA B on criterion 1. It is obvious from the above description that an algorithm which performs better or strictly better than the other algorithm(s) on criterion 1 would be preferred.
Criterion 2 is used to compare the algorithms on the basis of the minimum, average, standard deviation and maximum of the objective function values of the feasible runs of the algorithms.

(i) GA A and GA B are said to perform identically on criterion 2 if the minimum, average, standard deviation and maximum of the objective function values of GA A and GA B are all equal.
(ii) GA A is said to perform better than GA B on criterion 2 if the average, standard deviation and minimum of the objective function values of GA A are strictly less than those of GA B and the maximum of the objective function values of GA A is equal to that of GA B.
(iii) If GA A gives smaller values for all four statistics than GA B, then GA A is said to perform strictly better than GA B on criterion 2.

Since the problems we are solving are of minimization type, a lower mean together with a smaller standard deviation indicates that the algorithm repeatedly finds better feasible solutions close to one another, and thereby indicates the consistency of the algorithm. If the minimum and maximum of GA A are less than those of the other algorithm(s), then it signifies that GA A also performs better in the best and worst cases.
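The two criteria are simple enough to state in code. The sketch below (ours) applies them to one problem at a time; the data in the usage lines are the problem 1 entries of Table 5 for BEX-PM and LX-PM.

```python
def compare_on_criterion1(A, B):
    """A and B hold the success count and the averages over successful runs."""
    if A["success"] > B["success"]:
        return "strictly better"
    if (A["success"] == B["success"]
            and A["evals"] <= B["evals"] and A["time"] <= B["time"]):
        return "better"
    return "not better"

def compare_on_criterion2(A, B):
    """A and B hold min, mean, std and max of the feasible objective values."""
    keys = ("min", "mean", "std", "max")
    if all(A[k] == B[k] for k in keys):
        return "identical"
    if all(A[k] < B[k] for k in keys):
        return "strictly better"
    if A["mean"] < B["mean"] and A["std"] < B["std"] and A["min"] < B["min"] and A["max"] == B["max"]:
        return "better"
    return "not better"

bex = {"success": 5, "evals": 28000, "time": 0.0560}   # Table 5, problem 1, BEX-PM
lx = {"success": 3, "evals": 31387, "time": 0.0623}    # Table 5, problem 1, LX-PM
print(compare_on_criterion1(bex, lx))                  # "strictly better"
```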
The performance index (PI) defined by Bharti [3] is also used to judge the relative performance of the GAs. Mohan and Nguyen [47], Deep and Thakur [14], Deep and Thakur [15] and Thakur [61] also used this PI for comparing other population based heuristic algorithms for global optimization. A higher PI value indicates better performance of the algorithm. The PI is discussed in good detail in Bharti [3].
Due to the varying nature of the problems and the algorithms, it is quite difficult to compare the performance of all the GAs simultaneously from the collected empirical data. In order to better understand the superiority of BEX-PM over the other algorithms, we first do a pairwise comparison of BEX-PM with the other four algorithms on criterion 1, criterion 2 and the PI.


Table 6
Average, standard deviation, minimum and maximum of all feasible solutions.
Problem No.

10

11

12

13

14

15

16

17

Statistics

Average
Standard deviation
Minimum
Maximum
Average
Standard deviation
Minimum
Maximum
Average
Standard deviation
Minimum
Maximum
Average
Standard deviation
Minimum
Maximum
Average
Standard deviation
Minimum
Maximum
Average
Standard deviation
Minimum
Maximum
Average
Standard deviation
Minimum
Maximum
Average
Standard deviation
Minimum
Maximum
Average
Standard deviation
Minimum
Maximum
Average
Standard deviation
Minimum
Maximum
Average
Standard deviation
Minimum
Maximum
Average
Standard deviation
Minimum
Maximum
Average
Standard deviation
Minimum
Maximum
Average
Standard deviation
Minimum
Maximum
Average
Standard deviation
Minimum
Maximum
Average
Standard deviation
Minimum
Maximum
Average
Standard deviation
Minimum

Algorithms
BEXPM

LXPM

HX-NUM

HX-MPTM

SBX-POL

9627.711929
1011.762479
7343.069018
11099.50332
30627.22913
37.335415
30665.53659
30526.50421
310
0
310
310
1.854367
0.245548
2.090867
1.021669
5.508013
0.000002
5.508013
5.507996
5351.922738
252.054776
5126.875894
5974.040102
0.707651
0.169718
0.057445
0.999999
1
0
1
1
43.518381
1.185934
46.535746
40.93702
0.25
0
0.25
0.25
5.002565
0.006379
5
5.037578
680.793423
0.199183
680.637204
681.581954
15
0
15
15
6961.810255
0.002579
6961.813521
6961.801292
24.546161
0.200831
1
25.387952
1
0
24.31585
1
213
0
213

7939.864635
845.303616
7067.048443
10832.71499
30561.71733
79.782025
30662.52377
30355.28306
8310
0
310
310
1.666412
0.277312
2.121419
0.855163
5.508013
0
5.508013
5.508013
5274.710337
196.124427
5126.485281
5956.07084
0.675687
0.108807
0.461379
0.999999
1
0
1
1
43.264692
1.108492
45.1003
40.653243
0.25
0
0.25
0.25
5.071449
0.106674
5
5.472014
681.412803
0.73675
680.669685
683.914309
15
0
15
15
6412.740006
1376.85256
6961.118052
1740.447572
25.777529
1.130643
1
29.0395
1
0.000002
24.433957
1.000017
213
0
213

7310.045175
255.928578
7072.097961
8677.96264
30665.12655
0.500525
30665.53733
30662.62421
309.325399
2.682982
310
298
2.171502
0.032273
2.216509
2.002403
5.506694
0.001196
5.50801
5.502401

0.455035
0.037065
0.413664
0.545533
1
0
1
1

0.25
0
0.25
0.25
5.001534
0.002063
5.000001
5.008293
684.522954
2.247931
680.641325
689.449599
7.230851
1.527004
12.14907
5.034275
6955.877056
10.650049
6961.810661
6914.840792
53.985936
27.214384
1
132.632441
1
0
27.673001
1
213
0
213

7937.871059
904.082479
7086.594023
10929.48908
30665.44426
0.634787
30665.53867
30661.00287
309.999999
0.000008
310
309.999946
2.156986
0.082831
2.216362
1.637233
5.508013
0
5.508013
5.508011

0.501435
0.571069
0.056947
1.813001
1
0
1
1
46.643953
0
46.643953
46.643953
0.25
0
0.25
0.25
5.002081
0.003925
5
5.017676
680.887181
0.30426
680.639294
681.909256
5.613894
0.871155
6.295726
4.122512
5991.73895
1172.995126
6961.813876
2080.191551
25.069077
1.602394
1
32.952978
1
0
24.335775
1
212.999961
0.000154
213

7207.547201
58.673015
7120.81989
7339.186755
30664.84935
2.704602
30665.46269
30645.93012
304.442437
6.053487
310
294
1.898996
0.265442
2.205993
1.107191
5.507961
0.000026
5.508007
5.507909

0.791889
0.365192
0.055421
2.494908
0.999987
0.000028
1
0.999875
41.230379
10.214403
46.411767
1
0.25
0
0.25
0.25
5.000064
0.000041
5.000004
5.000168
680.699178
0.020822
680.658293
680.747114
14.784492
0.624863
14.980614
12.401199
6864.03683
57.508153
6950.674923
6716.720846
24.651604
0.049667
1
24.758267
1
0
24.543369
1
213
0
213

309

M. Thakur et al. / Applied Mathematics and Computation 235 (2014) 292317


Table 6 (continued)
Problem No.

Statistics

Algorithms
BEXPM

18

19

20

21

22

23

24

25

Maximum
Average
Standard deviation
Minimum
Maximum
Average
Standard deviation
Minimum
Maximum
Average
Standard deviation
Minimum
Maximum
Average
Standard deviation
Minimum
Maximum
Average
Standard deviation
Minimum
Maximum
Average
Standard deviation
Minimum
Maximum
Average
Standard deviation
Minimum
Maximum
Average
Standard deviation
Minimum
Maximum

LXPM

213
15
0
15
15
10.99989
0.000767
11
10.994523
38.82
0.712461
39
36
394.562867
0.30548
394.75059
393.13695
884.488202
0.242318
884.749967
883.823688
8689.60784
11.26756
8695.011132
8638.307198
1.914609
0
1.914609
1.914608
3.681558
0.859486
2.386219
6.371712

213
15
0
15
15
11
0
11
11
38.88
0.587878
39
36
390.806114
3.86312
394.704435
377.627409
879.196085
3.542525
884.225402
869.512355
8597.050482
79.423293
8689.196794
8364.314861
1.914605
0.00003
1.914609
1.914398
4.128408
1.093431
2.648752
7.499291

HX-NUM

HX-MPTM

213
15
0
15
15
11
0
11
11
38.46
1.152562
39
36
327.283186
13.204153
359.378507
307.410718
791.786229
10.562369
822.944912
774.077505
7049.577713
175.624213
7539.478395
6654.72756
1.914468
0.00012
1.914607
1.914162
2.476568
0.096973
2.388122
2.94922

212.999061
14.999998
0.000003
15
14.999988
10.999948
0.000192
11
10.998939
38.686954
1.812078
39
26.347676
361.947072
5.137265
376.621989
350.271769
819.44512
6.861479
835.188804
806.674013
7582.232647
133.168648
7897.984481
7353.515789
1.913388
0.004352
1.914609
1.888366
2.663927
0.47735
2.381141
4.424415

SBX-POL
213
14.99799
0.000943
14.999878
14.996013
11
0
11
11
36.807689
3.930173
39
16.967254
393.354303
1.439601
394.495186
388.659275
883.264489
1.302642
884.430776
879.244307
8663.480456
30.218527
8689.837093
8575.630395
1.911264
0.000713
1.913016
1.909867
2.391071
0.002417
2.38385
2.396225

Fig. 16. Performance index of BEX-PM and LX-PM when k_1 = w and k_2 = k_3 = (1 − w)/2.

6.1. BEX-PM vs LX-PM

On criterion 1, BEX-PM has a better or identical success rate to LX-PM in twenty-two out of the twenty-three problems. In eight problems (problems 1, 2, 4, 14, 21, 22, 23 and 25), BEX-PM performs strictly better than LX-PM on criterion 1. In twelve problems BEX-PM and LX-PM have identical success rates, and in nine of these instances (problems 3, 8, 10, 13, 16, 17, 18, 19 and 20) BEX-PM performs better than LX-PM on criterion 1. There is only one problem (problem 6) where LX-PM is found to perform strictly better than BEX-PM, and in only three instances (problems 5, 12 and 24) does LX-PM perform better than BEX-PM on criterion 1.
It is evident from Table 6 that BEX-PM performs strictly better than LX-PM on criterion 2 in eight problems (problems 2, 12, 14, 15, 21, 22, 23 and 25), compared to just two problems (1 and 6) where LX-PM performs strictly better than BEX-PM.

There are six problems (3, 8, 10, 13, 17 and 18) where both GAs give identical performance on criterion 2. The PI for BEX-PM and LX-PM is shown in Figs. 16–18; in all three cases, the PI of BEX-PM is higher than that of LX-PM.
6.2. BEX-PM vs HX-NUM

In twenty of the twenty-three problems BEX-PM demonstrates success equal to or greater than that of HX-NUM, the exceptions being problems 1, 4 and 25. On seven problems (6, 12, 13, 15, 21, 22 and 23), BEX-PM is found to perform strictly better than HX-NUM on criterion 1; in six of these seven problems (6, 13, 15, 21, 22 and 23), HX-NUM completely failed to locate the global optima and has a zero success rate. There are eleven test problems (2, 5, 8, 10, 11, 14, 16, 17, 18, 19 and 24) where both algorithms show the same success rate.
Fig. 17. Performance index of BEX-PM and LX-PM when k_2 = w and k_1 = k_3 = (1 − w)/2.

Fig. 18. Performance index of BEX-PM and LX-PM when k_3 = w and k_1 = k_2 = (1 − w)/2.

Fig. 19. Performance index of BEXPM and HX-NUM when k1 = w and k2 = k3 = (1 − w)/2.


There are only two instances (problems 4 and 25) where HX-NUM comes out to be strictly better than BEXPM, and in four cases (problems 2, 4, 17 and 19) it performs better than BEXPM on criterion 1.
From Table 6, in five problems (problems 8, 10, 16, 17 and 18), BEXPM and HX-NUM have identical performance on criterion 2. However, BEXPM outperforms HX-NUM in eleven test problems (5, 6, 9, 12, 13, 14, 15, 21, 22, 23 and 24) by performing strictly better on criterion 2. There are just four problems (1, 2, 4 and 19) out of twenty three where HX-NUM performs either strictly better or better than BEXPM on criterion 2. Figs. 19–21 show the PI of BEXPM and HX-NUM. In all three cases, the PI of BEXPM is higher than that of HX-NUM.

Fig. 20. Performance index of BEXPM and HX-NUM when k2 = w and k1 = k3 = (1 − w)/2.

Fig. 21. Performance index of BEXPM and HX-NUM when k3 = w and k1 = k2 = (1 − w)/2.

Fig. 22. Performance index of BEXPM and HX-MPTM when k1 = w and k2 = k3 = (1 − w)/2.


6.3. BEXPM Vs HX-MPTM


In nineteen instances BEXPM has the same or more success than HX-MPTM. From Table 5, it is apparent that in six test examples (6, 13, 21, 22, 23 and 25), BEXPM performs strictly better than HX-MPTM on criterion 1; in fact, HX-MPTM could not succeed in even one run for these six examples. Out of the thirteen instances where BEXPM and HX-MPTM have similar success rates, in seven instances (problems 1, 3, 8, 11, 16, 18 and 20) BEXPM performs better than HX-MPTM on criterion 1. There is only one problem (problem 4) where HX-MPTM performs strictly better than BEXPM on criterion 1, and in three cases (problems 5, 12 and 17) HX-MPTM performs better than BEXPM on criterion 1.
In twelve cases (3, 6, 12, 13, 15, 17, 18, 20, 21, 22, 23 and 24), BEXPM has better or strictly better statistics than HX-MPTM on criterion 2. In three cases (8, 10 and 16), the performance of both GAs is identical on criterion 2. HX-MPTM shows better performance in two problems (11 and 19).

Fig. 23. Performance index of BEXPM and HX-MPTM when k2 = w and k1 = k3 = (1 − w)/2.

Fig. 24. Performance index of BEXPM and HX-MPTM when k3 = w and k1 = k2 = (1 − w)/2.

Fig. 25. Performance index of BEXPM and SBX-POL when k1 = w and k2 = k3 = (1 − w)/2.


It is significant to observe that in five problems (1, 2, 3, 9 and 25), HX-MPTM scores over BEXPM by performing strictly better on criterion 2. On the basis of PI, it can be seen from Figs. 22–24 that in all three situations BEXPM outperforms HX-MPTM.
6.4. BEXPM vs SBX-POL
In this case also BEXPM performs much better than SBX-POL as far as success rate is concerned. In twenty two problems, BEXPM has identical or better success than SBX-POL. In nine test problems (1, 3, 6, 13, 14, 15, 20, 21 and 23), BEXPM performs strictly better than SBX-POL on criterion 1. There are two test problems where SBX-POL could not achieve any success in finding the global optima.

Fig. 26. Performance index of BEXPM and SBX-POL when k2 = w and k1 = k3 = (1 − w)/2.

Fig. 27. Performance index of BEXPM and SBX-POL when k3 = w and k1 = k2 = (1 − w)/2.

Fig. 28. Performance index of all GAs when k1 = w and k2 = k3 = (1 − w)/2.


On thirteen problems (2, 5, 8, 10, 11, 12, 16, 17, 18, 19, 22, 24 and 25), BEXPM and SBX-POL have similar success rates. Out of these thirteen instances, in seven cases (2, 8, 16, 18, 19, 22 and 24) BEXPM performs better than SBX-POL on criterion 1. Only in one instance (problem 4) does SBX-POL perform strictly better than BEXPM on criterion 1, and there are three problems (11, 12 and 25) where SBX-POL performs better than BEXPM on criterion 1.
The superiority of BEXPM is evident when compared with SBX-POL on criterion 2. On ten problems (5, 6, 9, 13, 14, 18, 21, 22, 23 and 24), BEXPM performs strictly better than SBX-POL on criterion 2. There are three cases (10, 16 and 17) in which BEXPM and SBX-POL perform identically, and only two problems (1 and 25) where SBX-POL outperforms BEXPM on criterion 2. It is also clear from the plots of PI (Figs. 25–27) that the PI of BEXPM lies above that of SBX-POL.
From the above discussion it is clear that, in pairwise comparison, BEXPM performs significantly better than the other algorithms not only in terms of efficiency, accuracy and reliability but also on the basis of quality of the solution and PI.
In order to compare the performance of all the algorithms at the same time, we again revisit Table 5 and observe that there are four instances (problems 8, 21, 22 and 23) where BEXPM strictly outperforms the other four algorithms on criterion 1. There is only one case (problem 5) where the other algorithms show strictly better results than BEXPM on criterion 1.
As far as the quality of solutions is concerned, BEXPM has strictly better, better or identical performance on ten problems (3, 8, 10, 13, 17, 18, 21, 22, 23 and 24) as compared to the other algorithms simultaneously on criterion 2. Among these, in three cases (21, 22 and 23) BEXPM is strictly better than all four other GAs considered. LXPM behaves strictly better than the other algorithms in just one test case (problem 6) on criterion 2. HX-NUM and SBX-POL do not show better or strictly better performance on any of the problems when compared to all other algorithms simultaneously on criterion 2. HX-MPTM shows strictly better results on problem 9 in comparison with all other algorithms, but at the same time none of the algorithms showed a positive success rate for this problem. All the algorithms behave identically for problem 10 on criterion 2.
The performance index of all the algorithms considered simultaneously is shown in Figs. 28–30. Here also we observe that in all three scenarios the PI value of BEXPM is far above that of all other GAs considered in this study. Hence it can be concluded that BEXPM is the best algorithm amongst the five algorithms considered. However, it is rather difficult to rank the remaining four GAs because the ranking varies according to the specified range of weights.
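As a rough illustration of how such a weighted index can be computed, the sketch below aggregates three per-problem performance ratios with weights k1, k2 and k3 (k1 + k2 + k3 = 1) and sweeps one weight as w while the other two share (1 − w)/2, as in Figs. 28–30. The exact per-problem ratios entering the PI used in the paper are not reproduced here, so the success, function-evaluation and time ratios in the snippet are illustrative placeholders only.

```python
import numpy as np

def performance_index(success_ratio, feval_ratio, time_ratio, k1, k2, k3):
    """Weighted aggregation of three per-problem performance ratios.

    Each argument is an array with one entry per test problem, already
    normalised to [0, 1].  The weights must satisfy k1 + k2 + k3 = 1, as in
    Figs. 28-30 where one weight is swept as w and the other two equal (1 - w)/2.
    """
    assert abs(k1 + k2 + k3 - 1.0) < 1e-12
    scores = k1 * success_ratio + k2 * feval_ratio + k3 * time_ratio
    return scores.mean()  # average over the test problems

# Example sweep: k1 = w with k2 = k3 = (1 - w)/2, using dummy ratios for 25 problems.
rng = np.random.default_rng(0)
sr, fr, tr = rng.random(25), rng.random(25), rng.random(25)
for w in np.linspace(0.0, 1.0, 6):
    pi = performance_index(sr, fr, tr, w, (1 - w) / 2, (1 - w) / 2)
    print(f"w = {w:.1f}  PI = {pi:.3f}")
```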

Fig. 29. Performance index of all GAs when k2 = w and k1 = k3 = (1 − w)/2.

Fig. 30. Performance index of all GAs when k3 = w and k1 = k2 = (1 − w)/2.


7. Conclusions
In this paper a new crossover operator, BEX, is proposed, which is an improved version of the LX operator. The modified operator tries to enhance the search power of the LX operator and generates a pair of offspring solutions from a given pair of parent solutions. Like LX, BEX is a parent centric crossover operator and has one control parameter k. Contrary to LX, the offspring solutions generated by BEX always lie within the bounds of the decision variables, and the offspring created by BEX are not placed symmetrically with respect to the positions of the parents. BEX is combined with PM and a new GA called BEXPM is defined. The performance of BEXPM is compared against four other popular GAs existing in the literature, viz. LXPM, HX-NUM, HX-MPTM and SBX-POL. The effectiveness of the algorithms is assessed on a fairly large set of benchmark test problems collected from the global optimization literature. To compare the performance of the algorithms on the basis of efficiency, accuracy, reliability and quality of the solution obtained, two performance evaluation criteria (criterion 1 and criterion 2) are defined. In criterion 1, the performance of the algorithms is measured on the basis of success ratio, average number of function evaluations and average computational time of successful runs. Criterion 2 compares the algorithms on the basis of the minimum, average, standard deviation and maximum of the objective function values of successful runs.
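For illustration only, the following snippet shows how the criterion 2 statistics of a single algorithm on a single test problem could be collected from the objective values of its successful runs; the data passed in is hypothetical and not taken from the paper.

```python
import statistics

def criterion2_summary(objective_values):
    """Summarise the objective values of the successful runs of one algorithm
    on one test problem (criterion 2: min, mean, standard deviation, max)."""
    if not objective_values:  # no successful run: criterion 2 is undefined
        return None
    return {
        "min": min(objective_values),
        "mean": statistics.mean(objective_values),
        "std": statistics.pstdev(objective_values),
        "max": max(objective_values),
    }

# Hypothetical objective values from the successful runs of one GA.
print(criterion2_summary([394.75, 394.56, 393.14]))
```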
First, a pairwise comparison of BEXPM with the other algorithms is carried out, and then all the algorithms are compared against each other at the same time. It was established that, in the pairwise comparison as well as in the overall comparison of all algorithms simultaneously with respect to criterion 1 and criterion 2, BEXPM continues to perform significantly better than the other algorithms.
The performance of BEXPM is also compared with the other GAs considered in this study on the basis of PI, which again suggests that BEXPM outperforms the other GAs, both when compared pairwise and when compared simultaneously with all the other GAs.
Acknowledgements
The authors are thankful to the Editor-in-Chief and the anonymous referees for their valuable comments and suggestions to improve the presentation of the paper.
Appendix A
Laplace distribution
The density function of the Laplace distribution is given by

f(x) = \begin{cases} \dfrac{1}{2b}\, e^{(x-a)/b}, & x \le a, \\[4pt] \dfrac{1}{2b}\, e^{-(x-a)/b}, & x > a. \end{cases}

The cumulative probability distribution function is given by

F(x) = \begin{cases} \dfrac{1}{2}\, e^{(x-a)/b}, & x \le a, \\[4pt] 1 - \dfrac{1}{2}\, e^{-(x-a)/b}, & x > a, \end{cases}

where a and b are the location and scale parameters, respectively.
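As a quick sanity check of these formulas, one can draw Laplace variates by inverting the cumulative distribution function given above; the snippet below is an illustrative sketch and not code from the paper.

```python
import math
import random

def laplace_sample(a, b, rng=random):
    """Draw one variate from the Laplace(a, b) distribution by inverting F(x)."""
    u = rng.random()
    if u == 0.0:  # guard the open endpoint to avoid log(0)
        u = 1e-12
    if u <= 0.5:
        return a + b * math.log(2.0 * u)          # branch x <= a
    return a - b * math.log(2.0 * (1.0 - u))      # branch x > a

print([laplace_sample(0.0, 1.0) for _ in range(5)])
```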


Bounded exponential distribution
The density function of the bounded exponential distribution is given by

f(x) = \begin{cases} 0, & x < a_1, \\[4pt] \dfrac{1}{2k}\, \dfrac{e^{x/k}}{1 - e^{a_1/k}}, & a_1 \le x \le 0, \\[4pt] \dfrac{1}{2k}\, \dfrac{e^{-x/k}}{1 - e^{-a_2/k}}, & 0 < x \le a_2, \\[4pt] 0, & \text{otherwise}. \end{cases}

The cumulative distribution function is given by

F(x) = \begin{cases} 0, & x < a_1, \\[4pt] \dfrac{1}{2}\, \dfrac{e^{x/k} - e^{a_1/k}}{1 - e^{a_1/k}}, & a_1 \le x \le 0, \\[4pt] \dfrac{1}{2} + \dfrac{1}{2}\, \dfrac{1 - e^{-x/k}}{1 - e^{-a_2/k}}, & 0 < x \le a_2, \\[4pt] 1, & x > a_2, \end{cases}


where k is the scale parameter and a_1 and a_2 are the truncation points to the left and right of the point x = 0, respectively. Clearly, a_1 < 0 and a_2 > 0.
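Since BEX relies on variates of this bounded exponential distribution, it may help to see how such variates can be generated by inverting F(x) above. The sketch below is a minimal illustration under the notation of this appendix (scale k, truncation points a1 < 0 < a2); it is not the authors' implementation, and the parameter values in the usage line are arbitrary.

```python
import math
import random

def bounded_exponential_sample(k, a1, a2, rng=random):
    """Draw one variate from the bounded exponential distribution on [a1, a2]
    (a1 < 0 < a2, scale k) by inverting its cumulative distribution function."""
    u = rng.random()
    if u <= 0.5:
        # Left part: F(x) = 0.5 * (exp(x/k) - exp(a1/k)) / (1 - exp(a1/k))
        return k * math.log(math.exp(a1 / k) + 2.0 * u * (1.0 - math.exp(a1 / k)))
    # Right part: F(x) = 0.5 + 0.5 * (1 - exp(-x/k)) / (1 - exp(-a2/k))
    return -k * math.log(1.0 - (2.0 * u - 1.0) * (1.0 - math.exp(-a2 / k)))

# Every sample stays inside [a1, a2].
vals = [bounded_exponential_sample(0.5, -1.0, 2.0) for _ in range(5)]
print(all(-1.0 <= v <= 2.0 for v in vals), vals)
```

Because the inverse transform maps the unit interval onto [a1, a2], every generated variate stays within the truncation points, which is the property that keeps BEX offspring inside the variable bounds.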
