
IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS—PART B: CYBERNETICS, VOL. 37, NO. 1, FEBRUARY 2007

An Effective PSO-Based Memetic Algorithm for Flow Shop Scheduling

Bo Liu, Ling Wang, and Yi-Hui Jin

Abstract—This paper proposes an effective particle swarm optimization (PSO)-based memetic algorithm (MA) for the permutation flow shop scheduling problem (PFSSP) with the objective to minimize the maximum completion time, which is a typical non-deterministic polynomial-time (NP) hard combinatorial optimization problem. In the proposed PSO-based MA (PSOMA), both PSO-based searching operators and some special local searching operators are designed to balance the exploration and exploitation abilities. In particular, the PSOMA applies the evolutionary searching mechanism of PSO, which is characterized by individual improvement, population cooperation, and competition, to effectively perform exploration. On the other hand, the PSOMA utilizes several adaptive local searches to perform exploitation. First, to make PSO suitable for solving PFSSP, a ranked-order value rule based on random key representation is presented to convert the continuous position values of particles to job permutations. Second, to generate an initial swarm with certain quality and diversity, the famous Nawaz-Enscore-Ham (NEH) heuristic is incorporated into the initialization of the population. Third, to balance the exploration and exploitation abilities, after the standard PSO-based searching operation, a new local search technique named NEH_1 insertion is probabilistically applied to some good particles selected by using a roulette wheel mechanism with a specified probability. Fourth, to enrich the searching behaviors and to avoid premature convergence, a simulated annealing (SA)-based local search with multiple different neighborhoods is designed and incorporated into the PSOMA. Meanwhile, an effective adaptive meta-Lamarckian learning strategy is employed to decide which neighborhood to be used in the SA-based local search. Finally, to further enhance the exploitation ability, a pairwise-based local search is applied after the SA-based search. Simulation results based on benchmarks demonstrate the effectiveness of the PSOMA. Additionally, the effects of some parameters on optimization performances are also discussed.

Index Terms—Adaptive meta-Lamarckian learning, memetic algorithm (MA), particle swarm optimization (PSO), permutation flow shop scheduling.

Manuscript received April 29, 2005; revised August 23, 2005. This work was supported in part by the National Science Foundation of China under Grants 60204008, 60374060, and 60574072 as well as the 973 Program under Grant 2002CB312200. This paper was recommended by Guest Editor Y. S. Ong.
The authors are with the Institute of Process Control, Department of Automation, Tsinghua University, Beijing 100084, China (e-mail: liub01@mails.tsinghua.edu.cn; wangling@mail.tsinghua.edu.cn; jyh-dau@mail.tsinghua.edu.cn).
Digital Object Identifier 10.1109/TSMCB.2006.883272

1083-4419/$25.00 © 2007 IEEE

I. INTRODUCTION

PRODUCTION scheduling plays a key role in the manufacturing systems of an enterprise for maintaining a competitive position in the fast-changing market. To account for this factor, it is important to develop effective and efficient advanced manufacturing and scheduling technologies and approaches [1], [2]. The flow shop scheduling problem (FSSP) is a class of widely studied problems based on ideas gleaned from engineering, which currently represent nearly a quarter of manufacturing systems, assembly lines, and information service facilities [3], and it has earned a reputation for being difficult to solve [4]. As a simplification of FSSP, the permutation FSSP (PFSSP), in which the processing sequence of all jobs is the same for all machines, for minimizing the maximum completion time (i.e., makespan) is a typical and well-studied non-deterministic polynomial-time (NP) hard problem [4]. Due to its significance both in theory and in engineering applications, it is important to develop effective and efficient approaches for PFSSP.

Since the pioneering work of Johnson [5], PFSSP has received considerable theoretical, computational, and empirical research work. To account for the difficulty of PFSSP, a large number of optimization techniques have been proposed [6]. Yet, due to the complexity of PFSSP, exact solution techniques, such as branch and bound and mathematical programming [7], are only applicable to small-scale problems. Thus, various heuristics have been proposed, including constructive heuristics, improvement heuristics, and their hybrids (sometimes called memetic algorithms (MAs) [8]). The constructive heuristics build a feasible schedule from scratch, mainly for solving two- and three-machine scheduling problems. However, the solution qualities of the constructive heuristics are not satisfactory, although the process is very fast. Experimental results showed that the Nawaz-Enscore-Ham (NEH) heuristic [9] is currently one of the best constructive heuristics. The improvement heuristics start from one or a set of solutions generated by some sequence generator and try to improve the solution(s) by applying specific problem knowledge to approach the global or a suboptimal optimum. Improvement heuristics are generally based on metaheuristics such as simulated annealing (SA) [10], genetic algorithm (GA) [11], [12], tabu search (TS) [13], evolutionary programming [14], and variable neighborhood search (VNS) [15]. Improvement heuristics can obtain fairly satisfactory solutions, but they are often time-consuming and parameter dependent.

Recently, hybrid heuristics have been a hot topic in the fields of both computer science and operational research [2], [6], [12], [16]–[20]. The underlying assumption is that combining the features of different methods in a complementary fashion may result in more robust and effective optimization tools. In particular, it is well known that the performances of evolutionary algorithms can be improved by combining problem-dependent local searches. MAs [8], [21] may be considered as a union of population-based global search and local improvements, which are inspired by Darwinian principles of natural evolution and



Dawkins' notion of a meme, defined as a unit of cultural evolution that is capable of local refinements. So far, MAs have been widely studied on a variety of problems such as the travelling salesman problem (TSP) [22], the graph bipartition problem [22], the quadratic assignment problem [22], cellular mobile networks [23], and scheduling problems [24]. In MAs, several studies [21], [25] have focused on how to achieve a reasonable combination of global and local searches and how to make a good balance between exploration and exploitation. Recently, a multiagent metaheuristic architecture [26] was introduced as a conceptual and practical framework for designing MAs. Traditionally, most MAs rely on the use of one single evolutionary algorithm for globally rough exploration and one single local search for locally fine improvements. Some recent studies [21], [27]–[29] on the choice of local searches have shown that the choice significantly affects the searching efficiency. In order to avoid the negative effects of an incorrect local search, Ong and Keane [28] coined the term "meta-Lamarckian learning" to introduce the idea of adaptively choosing multiple memes during an MA search in the spirit of Lamarckian learning, and they successfully solved continuous optimization problems by the proposed MA with multiple local searches. In [29], a classification of meme adaptation in adaptive MAs was presented on the basis of the mechanism used and the level of historical knowledge on the memes employed, and the global convergence properties of adaptive MAs were analyzed by means of finite Markov chains. In addition, a study in [30] focused on the surrogate-assisted MA framework for computationally expensive problems and robust design problems.

MAs have already been investigated for single-objective PFSSP in many studies. In [12], a multistep crossover fusion operator carrying a biased local search was introduced into GA. In [16], a hybrid GA was presented, where the standard mutation was replaced by SA and multiple crossover operators were applied to subpopulations. In [17], an ant-colony system was proposed, which was enhanced by a fast local search to yield high-quality solutions. In [18], two hybrid versions of GA, i.e., genetic SA and genetic local search, were presented, where an improvement phase with local search and SA, respectively, was performed before the selection and crossover operations. In [19], a hybrid SA was integrated with features borrowed from the GA and local searches, which worked from a population of candidate schedules and generated new populations by applying suitable small perturbation schemes. Moreover, during the annealing process, an iterated hill-climbing procedure was stochastically applied to the population. In [20], an ordinal-optimization-based GA was presented to ensure the quality of the solution found with a reduction in computation effort. In addition, a hypothesis-test-based GA was proposed in [31] for stochastic PFSSP with uncertain processing time. As for multiobjective scheduling problems, hybridization with local search was first implemented in [32] as a multiobjective genetic local search (MOGLS), where a scalar fitness function with random weights was used for the selection of parents and for the local search of their offspring. In [25], the former MOGLS [32] was modified by choosing only good individuals as initial solutions for local search and assigning an appropriate local search direction to each initial solution. Meanwhile, the importance of balance between genetic and local searches was stressed.

Recently, a new evolutionary technique, particle swarm optimization (PSO) [33], was proposed for unconstrained continuous optimization problems. Its development is based on observations of the social behaviors of animals such as bird flocking and fish schooling, and on swarm theory. It is initialized with a population of random solutions. Each individual is assigned a randomized velocity according to its own and its companions' flying experiences. The individuals, called particles, are then flown through the hyperspace. PSO has some attractive characteristics. Because it has memory, knowledge of good solutions is retained by all particles. Additionally, it has constructive cooperation between particles, and particles in the swarm share information among themselves. Due to its simple concept, easy implementation, and quick convergence, PSO has gained much attention and wide application in different fields, mainly for continuous optimization problems [34]. However, the performance of a simple PSO depends on its parameters, and it often suffers from the problem of being trapped in local optima. Some studies have been carried out to prevent premature convergence and to balance the exploration and exploitation abilities [35]. In addition, most of the published work on PSO is for continuous optimization problems, whereas little research is found for combinatorial optimization problems. Obviously, it is a challenge to employ PSO in areas other than those the inventors originally focused on. To the best of our knowledge, there is little published work on PSO for scheduling problems. Recently, a hybrid PSO [36] based on VNS was proposed for PFSSP. In this paper, we will propose a PSO-based MA (PSOMA) for PFSSP. PSOMA applies the evolutionary searching mechanism of PSO, which is characterized by individual improvement, population cooperation, and competition, to effectively perform exploration. On the other hand, PSOMA utilizes several adaptive local searches to perform exploitation. The features of PSOMA can be summarized as follows. First, to make PSO suitable for solving PFSSP, a ranked-order value (ROV) rule based on random key representation [37] is presented to convert the continuous position values of a particle to a job permutation. Second, the NEH heuristic is incorporated into the random initialization of PSO to generate an initial population with certain quality and diversity. Third, a new local search named NEH_1 insertion is proposed and probabilistically applied to some good particles selected by using a roulette wheel mechanism with a specified probability, so as to balance the exploration and exploitation abilities. Fourth, an SA-based local search with multiple different neighborhoods is also designed and incorporated to enrich the searching behaviors and to avoid premature convergence, and an effective adaptive meta-Lamarckian learning strategy is employed to decide which neighborhood to be used. In addition, a pairwise-based local search is applied after the SA-based local search to further enhance the exploitation ability. Simulation results and comparisons demonstrate the effectiveness of the proposed PSOMA for PFSSP.

The remaining contents are organized as follows. In Sections II and III, the PFSSP and PSO are introduced, respectively. In Section IV, the PSOMA is proposed after presenting

the solution representation, population initialization, NEH-based local search, SA-based local search combining the adaptive meta-Lamarckian learning strategy, and pairwise-based local search. In Section V, experimental results on benchmarks together with comparisons to other previous metaheuristics are presented and analyzed. In Section VI, the effects of some parameters on optimization performances are discussed. Finally, in Section VII, we end this paper with some conclusions and suggestions for possible future work.

II. PERMUTATION FLOW SHOP SCHEDULING PROBLEM

The PFSSP can be described as follows. Each of the n jobs is to be sequentially processed on machines 1, . . . , m. The processing time p_{i,j} of job i on machine j is given. At any time, each machine can process at most one job, and each job can be processed on at most one machine. The sequence in which the jobs are to be processed is the same for each machine. The objective is to find a sequence for the processing of the jobs on the machines so that a given criterion is optimized. In the literature, the criterion most widely used is the minimization of the maximum completion time, i.e., the makespan (C_max) [9]–[14].

Let π = {j_1, j_2, . . . , j_n} denote a permutation of all jobs, and let C(j_i, k) denote the completion time of job j_i on machine k; then, the completion time C(j_i, k) can be calculated as follows:

C(j_1, 1) = p_{j_1,1}  (1)
C(j_i, 1) = C(j_{i-1}, 1) + p_{j_i,1},  i = 2, . . . , n  (2)
C(j_1, k) = C(j_1, k - 1) + p_{j_1,k},  k = 2, . . . , m  (3)
C(j_i, k) = max{C(j_{i-1}, k), C(j_i, k - 1)} + p_{j_i,k},  i = 2, . . . , n,  k = 2, . . . , m  (4)
C_max(π) = C(j_n, m).  (5)

The PFSSP is then to find a permutation π* in the set Π of all permutations such that

π* = arg min_{π ∈ Π} C_max(π) = arg min_{π ∈ Π} C(j_n, m).  (6)

III. PARTICLE SWARM OPTIMIZATION

A PSO system starts with the random initialization of a population (swarm) of individuals (particles) in the search space and works on the social behavior in the swarm. The position and the velocity of the ith particle in the d-dimensional search space can be represented as X_i = [x_{i,1}, x_{i,2}, . . . , x_{i,d}] and V_i = [v_{i,1}, v_{i,2}, . . . , v_{i,d}], respectively. Each particle has its own best position (pbest) P_i = [p_{i,1}, p_{i,2}, . . . , p_{i,d}] corresponding to the personal best objective value obtained so far at time t. The global best particle (gbest) is denoted by P_g, which represents the best particle found so far at time t in the entire swarm. The new velocity of each particle is calculated as follows:

v_{i,j}(t + 1) = w v_{i,j}(t) + c_1 r_1 (p_{i,j} - x_{i,j}(t)) + c_2 r_2 (p_{g,j} - x_{i,j}(t)),  j = 1, 2, . . . , d  (7)

where c_1 and c_2 are acceleration coefficients, w is the inertia factor, and r_1 and r_2 are two independent random numbers uniformly distributed in the range [0, 1].

Thus, the position of each particle is updated in each generation according to the following equation:

x_{i,j}(t + 1) = x_{i,j}(t) + v_{i,j}(t + 1),  j = 1, 2, . . . , d.  (8)

Generally, the value of each component in V_i can be clamped to the range [-v_max, v_max] to control excessive roaming of particles outside the search space. Then, the particle flies toward a new position according to (8). This process is repeated until a user-defined stopping criterion is reached.

The procedure of the standard PSO is summarized as follows.
Step 1) Initialize a population of particles with random positions and velocities, where each particle contains d variables (i.e., d = n).
Step 2) Evaluate the objective values of all particles; let the pbest of each particle and its objective value equal its current position and objective value; and let gbest and its objective value equal the position and objective value of the best initial particle.
Step 3) Update the velocity and position of each particle according to (7) and (8).
Step 4) Evaluate the objective values of all particles.
Step 5) For each particle, compare its current objective value with the objective value of its pbest. If the current value is better, then update pbest and its objective value with the current position and objective value.
Step 6) Determine the best particle of the current swarm, i.e., the one with the best objective value. If this objective value is better than that of gbest, then update gbest and its objective value with the position and objective value of the current best particle.
Step 7) If a stopping criterion is met, then output gbest and its objective value; otherwise, go back to Step 3).

IV. PSOMA FOR PFSSP

In this section, we will explain the implementation of PSOMA for PFSSP in detail.

A. Solution Representation

Usually, a job-permutation-based encoding scheme [2] has been used in many papers for PFSSP. However, because of the continuous character of the positions of particles in PSO, the standard encoding scheme of PSO cannot be directly adopted for PFSSP. So, the most important issue in applying PSO to PFSSP is to find a suitable mapping between job sequences and positions of particles.

In this paper, an ROV rule based on random key representation [37] is presented to convert the continuous position X_i = [x_{i,1}, x_{i,2}, . . . , x_{i,n}] of a particle in PSO to a permutation of jobs π = {j_1, j_2, . . . , j_n}, so that the performance of the particle can be evaluated. In particular, the position information X_i = [x_{i,1}, x_{i,2}, . . . , x_{i,n}] itself does not represent a sequence, whereas the rank of each position value of a particle represents

a job index so as to construct a permutation of jobs. In our ROV rule, the smallest position value of a particle is first picked and assigned the smallest rank value of one. Then, the second smallest position value is picked and assigned a rank value of two. In the same way, all the position values are handled to convert the position information of a particle to a job permutation. We provide a simple example to illustrate the ROV rule in Table I. In the instance (n = 6), the position is X_i = [0.06, 2.99, 1.86, 3.73, 2.13, 0.67]. Because x_{i,1} = 0.06 is the smallest position value, x_{i,1} is picked first and assigned a rank value of one; then, x_{i,6} = 0.67 is picked and assigned a rank value of two. Similarly, the ROV rule assigns rank values of three to six to x_{i,3}, x_{i,5}, x_{i,2}, and x_{i,4}, respectively. Thus, based on the ROV rule, the job permutation π = [1, 5, 3, 6, 4, 2] is obtained.

TABLE I
REPRESENTATION OF POSITION INFORMATION AND CORRESPONDING ROV (JOB PERMUTATION)

In our PSOMA, several local searches are applied not directly to position information but to the job permutation. So, when a local search procedure is completed, the particle's position information should be repaired to guarantee that the permutation that results from applying the ROV rule to the new position information is the same as the permutation that results from the local search. That is to say, when applying these local searches to the job permutation, the position information should be adjusted correspondingly. Fortunately, the adjustment is very simple due to the mechanism of the ROV rule: the process applied to the position information mirrors the process applied to the permutation. For example, in Table II, if a SWAP operator [16] is used as a local search for the job permutation, swapping job 5 and job 6 corresponds to swapping the position values 2.99 and 3.73. For other local search operators such as INVERSE and INSERT [16], the adjustment is similar.

TABLE II
SWAP-BASED LOCAL SEARCH FOR JOB PERMUTATION AND THE CORRESPONDING ADJUSTMENT FOR POSITION INFORMATION

B. Population Initialization

In the standard PSO, the initial swarm is often generated randomly. To guarantee an initial population with certain quality and diversity, the NEH heuristic [9] is applied to generate one solution (i.e., a permutation of jobs), whereas the rest of the particles are initialized with random position values and velocities in a certain interval. Since the result produced by the NEH heuristic is a job permutation, it should be converted to the position values of a certain initial particle to perform the PSO-based searches. The conversion is performed using the following equation:

x_{NEH,j} = x_{min,j} + (x_{max,j} - x_{min,j})(s_{NEH,j} - 1 + rand)/n,  j = 1, 2, . . . , n  (9)

where x_{NEH,j} is the position value in the jth dimension of the particle, s_{NEH,j} is the job index in the jth dimension of the permutation given by the NEH heuristic, x_{max,j} and x_{min,j} are the upper and lower bounds of the position value, respectively, rand denotes a random number uniformly distributed in the interval [0, 1], and n represents the number of dimensions of a position, which is equal to the number of jobs. Table III provides an example of the above conversion from job permutation to position information. Obviously, such a conversion obeys the ROV rule; that is, the permutation can be recovered from the position information by using the ROV rule.

TABLE III
CONVERSION OF NEH SOLUTION (JOB PERMUTATION) TO PARTICLE POSITION INFORMATION

C. PSO-Based Search

In this paper, the PSO-based searches, i.e., (7) and (8), are applied for exploration. That is, the position information of the particles in the current swarm is evolved by the PSO-based searching operators. Note that the PSO-based evolution is performed in a continuous space. So, when evaluating the performance of a particle, the position information should be converted to a job permutation using the above ROV rule. On the other hand, PSO provides a parallel evolutionary framework for the optimization of hard problems, and it is also easy to incorporate local searches in PSO to develop hybrid algorithms. In the following contents, we will present the local searches that are incorporated in PSO to propose a PSO-based MA.

D. NEH-Based Local Search

In MAs, local searches are very important for exploitation. For PFSSP, Aldowaisan and Allahverdi [38] designed a local search named NEH_2 insertion to improve the quality of a job permutation, where two consecutive jobs are considered as a block and inserted in the sequence in a similar manner as in NEH. The NEH_2 insertion is described as follows.
Step 1) Given a sequence of jobs π.
Step 2) k = 1. Pick the first two jobs from π and schedule them in order to minimize the partial makespan. Select the better sequence as the current sequence.

Step 3) Set k = k + 1. Generate candidate sequences by selecting the next two jobs from π, inserting this kth block of two jobs into each slot of the current sequence, and interchanging the order of the two jobs within the block. Among these candidates, select the best one with the least partial makespan. Set the best one as the current sequence.
Step 4) Repeat Step 3) until all jobs in π are assigned. If the last assignment is a single job, treat that job as a block.

Inspired by the NEH_2 insertion, we propose a local search named NEH_1 insertion for permutation-based solutions. We will demonstrate the superiority of NEH_1 over NEH_2 in the later simulation. The NEH_1 is described as follows.
Step 1) Given a sequence of jobs π.
Step 2) The first two jobs from π are taken, and the two possible partial schedules are evaluated. Select the better sequence as the current sequence.
Step 3) Take job k, k = 3, . . . , n, and find the best schedule by placing it in all the possible k positions in the sequence of jobs that are already scheduled. The best partial sequence is selected for the next iteration.

In addition, in PSOMA, based on the performances of the pbest solutions of all particles in the current swarm, the pbest solution of each particle is assigned a probability of being selected by the rank-based fitness assignment technique [39]. Then, the roulette wheel mechanism [39] is used to decide which pbest solutions will be selected. Subsequently, the selected pbest solutions perform the above NEH_1 or NEH_2 insertion with a predefined probability p_ls. Due to the mechanism of the roulette wheel rule, a good solution will gain more chances for exploitation. Besides, it is easy to control such an exploitation process by adjusting the value of p_ls. For example, if p_ls = 1, the selected pbest always performs NEH_1 or NEH_2, whereas if p_ls < 1, the selected pbest performs NEH_1 or NEH_2 only with probability p_ls.

E. SA-Based Local Search Combining Adaptive Meta-Lamarckian Learning Strategy

In SA, starting from an initial state, the algorithm randomly generates a new state in the neighborhood of the original one, which causes a change of ΔE in the objective function value. For minimization problems, the new state is accepted with probability min{1, exp(-ΔE/T)}, where T is a control parameter. SA provides a mechanism to probabilistically escape from local optima, and the search process can be controlled by the cooling schedule [40].

In this paper, we design an SA-based local search with multiple different neighborhoods to enrich the local searching behaviors and to avoid premature convergence. Moreover, the adaptive meta-Lamarckian learning strategy in [28] is employed to decide which neighborhood to be used.

1) Neighborhoods in SA-Based Local Search: In order to maintain the diversity of the population and enrich the local searching behaviors, three different kinds of neighborhoods, i.e., SWAP, INSERT, and INVERSE, are utilized.
SWAP: Select two distinct elements from an n-job permutation randomly and swap them.
INSERT: Choose two distinct elements from an n-job permutation randomly and insert the back one before the front one.
INVERSE: Invert the subsequence between two different random positions of an n-job permutation.

2) Adaptive Meta-Lamarckian Learning Strategy: Inspired by the idea of Ong and Keane's work [28], we not only apply multiple neighborhoods but also adopt the effective adaptive meta-Lamarckian learning strategy to decide which neighborhood is chosen for local search, rewarding those neighborhoods whose use results in solution improvement.

The adaptive meta-Lamarckian learning strategy in our PSOMA is illustrated as follows. We divide the SA-based search into a training phase and a nontraining phase. During the meta-Lamarckian training phase, each SA-based local search with a different neighborhood is applied for the same number of times, i.e., n(n - 1) steps (a generation of Metropolis sampling). Then, the reward of each neighborhood is determined using the following equation:

η = |pf - cf| / (n(n - 1))  (10)

where n denotes the number of jobs, pf is the objective value of the old permutation, and cf is the objective value of the best permutation found by the local search based on the given neighborhood during the consecutive n(n - 1) steps.

After the reward of each neighborhood is determined, the utilization probability p_ut of each neighborhood is adjusted using the following equation:

p_{ut,i} = η_i / Σ_{j=1}^{K} η_j  (11)

where η_i is the reward value of the ith neighborhood and K is the total number of neighborhoods.

In the nontraining phase, according to the utilization probability of each neighborhood, a roulette wheel rule [39] is used to decide which neighborhood to be used for the SA-based local search. If the ith neighborhood is used, its reward will be updated by η_i = η_i + Δη_i, where Δη_i is the reward value of the ith neighborhood calculated during a nontraining phase (i.e., SA-based searching based on the ith neighborhood with consecutive n(n - 1) steps). Of course, the utilization probability p_ut of each neighborhood should then be adjusted again for the next generation of PSOMA.

3) Other Notes About SA-Based Local Search: A proper initial temperature should be high enough so that all states of the system have an equal probability of being visited; at the same time, it should not be too high, so that many unnecessary searches at high temperature are avoided. In this paper, we set the initial temperature by trial and error. Moreover, the exponential cooling schedule t_k = λ t_{k-1}, 0 < λ < 1, is applied, which is often believed to be an excellent cooling recipe, since it provides a rather good compromise between a computationally fast schedule and the ability to reach a low-energy state [41]. To provide a good compromise

between solution quality and search efficiency, the step of the Metropolis sampling process is set to n(n - 1).

In addition, the above SA-based local search is only applied to the best solution found so far, i.e., gbest. Thus, at a high temperature, the algorithm performs exploration with a certain jumping probability as well as local improvement from gbest, whereas at a low temperature, the algorithm stresses exploitation from gbest. On the other hand, not too much computational effort is spent in such a local search, since it is only applied to gbest. After the SA-based local search is completed, the gbest of the whole swarm should be updated if a solution with better quality is found during the local search.

F. Pairwise-Based Local Search

As a complementary searching process, a pairwise-based local search [38] is employed for the gbest, whose procedure is described as follows. It examines each possible pairwise interchange of the job in the first position with the jobs in all other positions. Whenever there is an improvement in the objective function, the jobs are interchanged. Similarly, the jobs in the second and subsequent positions are examined. The pairwise-based search can be viewed as a detailed neighborhood searching process, so it is applied after the SA-based local search to further enhance the exploitation ability.

G. PSO-Based MA

Based on the proposed ROV rule, population initialization, NEH-based local search, SA-based local search combining the adaptive meta-Lamarckian learning strategy, and pairwise-based local search, the PSOMA outline is proposed, which is illustrated in Fig. 1.

Fig. 1. Outline of the PSOMA.
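To make the building blocks above concrete, the following Python sketch (our own illustration, not the authors' MATLAB implementation) combines the makespan recursion (1)-(4), the ROV rule, and the pairwise-based local search applied to gbest. All function and variable names are ours.

```python
def makespan(p, perm):
    """C_max of job order `perm` (0-based job indices) on processing-time
    matrix `p` (n jobs x m machines), following the recursion
    C(j_i, k) = max(C(j_{i-1}, k), C(j_i, k-1)) + p[j_i][k]."""
    m = len(p[0])
    C = [0.0] * m  # completion times of the previous job on each machine
    for job in perm:
        t = 0.0  # completion of the current job on the previous machine
        for k in range(m):
            t = max(t, C[k]) + p[job][k]
            C[k] = t
    return C[m - 1]  # completion of the last job on the last machine

def rov(x):
    """ROV rule: the smallest position value gets rank 1, the next
    smallest rank 2, and so on (cf. the Table I example)."""
    order = sorted(range(len(x)), key=lambda j: x[j])
    ranks = [0] * len(x)
    for r, j in enumerate(order, start=1):
        ranks[j] = r
    return ranks

def pairwise_local_search(perm, cost):
    """Try every pairwise interchange, position by position, and keep each
    swap that improves `cost`, as in the pairwise-based local search."""
    s = list(perm)
    best = cost(s)
    for i in range(len(s) - 1):
        for j in range(i + 1, len(s)):
            s[i], s[j] = s[j], s[i]
            c = cost(s)
            if c < best:
                best = c  # keep the improving interchange
            else:
                s[i], s[j] = s[j], s[i]  # undo the swap
    return s

# Example: the particle position of Table I maps to permutation [1, 5, 3, 6, 4, 2].
ranks = rov([0.06, 2.99, 1.86, 3.73, 2.13, 0.67])
```

For instance, `pairwise_local_search(perm, lambda s: makespan(p, s))` polishes a permutation against a processing-time matrix `p`; in PSOMA, this refinement is applied only to gbest after the SA-based local search.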

TABLE IV
COMPARISONS OF PM_NEH1, PM_NEH2, AND NEH
[table omitted]

It can be seen that PSOMA not only applies the PSO-based evolutionary searching mechanism to effectively perform exploration for promising solutions within the entire region but also applies problem-dependent local searches to perform exploitation for solution improvement in subregions. Due to the hardness of PFSSP, PSOMA applies several local searches simultaneously, including NEH-based initialization, NEH_1 or NEH_2 insertion, SA-based local search combining the adaptive meta-Lamarckian learning strategy, and pairwise-based local search. Since both exploration and exploitation are stressed and balanced, PSOMA is expected to achieve good results for PFSSP. In the next section, we will investigate the performance of PSOMA.
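The ROV (ranked-order-value) rule used by PSOMA decodes a particle's continuous position vector into a job permutation by ranking the position values. A minimal sketch is given below (Python is used for illustration; the tie-handling details of the paper's rule are not reproduced).

```python
def rov_decode(position):
    """Ranked-order-value (ROV) decoding of a particle position.

    The smallest position value receives rank 1, the next smallest
    rank 2, and so on, so that a continuous position vector is
    converted into a job permutation. (How ties are broken in the
    paper's rule is not reproduced here.)
    """
    order = sorted(range(len(position)), key=lambda i: position[i])
    perm = [0] * len(position)
    for rank, idx in enumerate(order, start=1):
        perm[idx] = rank
    return perm
```

For example, the position vector [0.8, 3.1, 1.7] decodes to the job permutation [1, 3, 2], so PSO can keep updating real-valued positions while the schedule is always a valid permutation.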

V. NUMERICAL TEST AND COMPARISONS


A. Experimental Setup
To test the performance of the proposed PSOMA, computational simulation is carried out with some well-studied benchmarks. In this paper, 29 problems that were contributed to the OR-Library are selected. The first eight problems are called car1, car2 through car8 by Carlier [42]. The other 21 problems are called rec01, rec03 through rec41 by Reeves [11], who used them to compare the performances of some metaheuristics and found these problems to be particularly difficult. So far, these problems have been used as benchmarks for study with different methods by many researchers.

In our simulation, PSOMA is coded in MATLAB 7.0 (The MathWorks, Inc., 2004), and experiments are executed on a Mobile Pentium IV 2.2-GHz processor with 512 MB of RAM. The following parameters are used in PSOMA: swarm size ps = 20, w = 1.0, c1 = c2 = 2.0, xmin = 0, xmax = 4.0, vmin = -4.0, vmax = 4.0, pls = 0.1, initial temperature T0 = 3.0, annealing rate 0.9, and stopping parameter L = 30. Each instance is independently run 20 times for every algorithm for comparison.

To validate the effectiveness of each part of PSOMA, variants of PSOMA are compared. The following abbreviations represent the variants considered:

1) PM_NEH1: PSOMA with NEH_1;
2) PM_NEH2: PSOMA with NEH_2 replacing NEH_1;
3) PM_NONEH: PSOMA without NEH-based local search (either NEH_1 or NEH_2);
4) PM_NOSA: PSOMA without SA-based local search;
5) PM_NOPAIR: PSOMA without pairwise-based search.

B. Simulation Results and Comparisons

1) Comparison of PM_NEH1, PM_NEH2, and NEH Heuristic: The statistical performances of 20 independent runs are listed in Table IV, including the best relative error to C (BRE), the average relative error to C (ARE), the worst relative error to C (WRE), and the average CPU execution time (Tavg). Since the NEH heuristic is deterministic, its relative error (RE) to C is given. In Table IV, C is the optimal makespan or the lower bound value known so far.

From Table IV, it can be seen that PM_NEH1 (i.e., our proposed PSOMA) provides the best optimization performance among the methods studied for almost all benchmarks. On the one hand, the optimal results obtained by PM_NEH1 are close to C (that is to say, the BRE values are very small), which demonstrates the effective searching quality of PSOMA. On the other hand, the ARE and WRE values that result from PM_NEH1 are also very small, which demonstrates the robustness of PSOMA with respect to initial conditions. Compared with the NEH heuristic, PM_NEH1 obtains much better results, and the WRE values that result from PM_NEH1 and PM_NEH2 (two variants of PSOMA) are better than the RE values that result from the NEH heuristic, which demonstrates the significant improvement of PSOMA over the famous NEH heuristic. Compared with PM_NEH2, the BRE, ARE, and WRE values that result from PM_NEH1 are better for most instances, which demonstrates the effectiveness, in terms of searching quality and robustness, of NEH_1 insertion with respect to NEH_2 insertion. As for computational time, NEH_1-based PSOMA is competitive with NEH_2-based PSOMA. Thus, it is concluded that NEH_1 is superior to NEH_2 in our PSOMA.

2) Comparison of PM_NEH1, PM_NONEH, PM_NOSA, and PM_NOPAIR: We demonstrate the effectiveness of local search by comparing PM_NEH1 (PSOMA) with PM_NONEH, PM_NOSA, and PM_NOPAIR. The statistical performances of each algorithm over 20 independent runs are listed in Table V. From Table V, it is shown that the BRE and ARE values obtained by PM_NEH1 are much better than those obtained by PM_NONEH, PM_NOSA, and PM_NOPAIR, which demonstrates the effectiveness of incorporating local searches into PSO. That is to say, the superiority in terms of searching quality and robustness is due to the combination of the global search of PSO with the local searches, i.e., the balance of exploration and exploitation. Since local search elements are employed, the computational time of PSOMA is larger than that of the algorithms without local search, while it can be regarded as the cost of avoiding premature convergence so as to achieve better optimization performance.
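The BRE, ARE, and WRE statistics reported in the tables can be computed from the makespans of the independent runs as follows (a sketch in Python; expressing the relative error to C as a percentage is our assumption, and `c_star` stands for the best-known value C referred to in the text).

```python
def relative_error_stats(makespans, c_star):
    """BRE, ARE, and WRE over a set of independent runs.

    c_star is the optimal makespan or lower bound C; each run's
    relative error is (makespan - C) / C, expressed here as a
    percentage (the scaling is our assumption).
    """
    re = [100.0 * (m - c_star) / c_star for m in makespans]
    return min(re), sum(re) / len(re), max(re)
```

For example, run makespans of [100, 105, 110] with C = 100 give BRE = 0.0, ARE = 5.0, and WRE = 10.0 (in percent).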
TABLE V
COMPARISONS OF PM_NEH1, PM_NONEH, PM_NOSA, AND PM_NOPAIR
[table omitted]

3) Comparison of PSOMA and PSOVNS: To show the effectiveness of PSOMA, we carry out a simulation to compare our PSOMA with the hybrid PSO based on VNS (PSOVNS) [36]. In our simulation, the parameters of PSOVNS are the same as those used in [36], while PSOVNS uses the same number of evaluations as PSOMA for each instance. The statistical results of the two algorithms are listed in Table VI.

TABLE VI
COMPARISONS OF PSOMA AND PSOVNS
[table omitted]

From Table VI, it is shown that the BRE values obtained by PSOMA are much better than those obtained by PSOVNS for all the instances, and the ARE and WRE values obtained by PSOMA are much better than those obtained by PSOVNS for almost all the instances. So, it is concluded that our proposed local search methods, especially their utilization in a hybrid sense, are more effective than the pure VNS-based local search in [36]. In brief, our PSOMA is more effective than PSOVNS.

4) Comparison of PSOMA and Other Popular Methods: To further show the effectiveness of PSOMA, we carry out comparisons with several other popular and effective metaheuristics, namely the simple GA (SGA), SGA+NEH, and the hybrid GA (HGA) [16], whose performances are comparable or even superior to those of many other metaheuristics in previous papers. Table VII lists the statistical results of each algorithm for solving the above 29 problems.

TABLE VII
COMPARISONS OF PSOMA WITH SGA, SGA+NEH, AND HGA
[table omitted]

From the results, it can be concluded that our PSOMA is more effective than the HGA [16].

VI. EFFECTS OF PARAMETERS

In this section, we will investigate the effects of the swarm size and of the utilizing probability of NEH_1 insertion on the performance of PSOMA.

A. Effect of Swarm Size ps

As is well known, the performance of PSO greatly depends on the swarm size. So, we investigate the effect of the swarm size on the performance of PSOMA. The effect of ps on searching quality and computational time when solving the Rec25 problem is illustrated in Table VIII.

TABLE VIII
EFFECTS OF SWARM SIZE
[table omitted]

From Table VIII, it can be seen that, as the swarm size increases, the computational time increases, while the searching quality varies with small magnitude. That is to say, the swarm size does not affect the searching quality of PSOMA too much. So, it is recommended to adopt a swarm size of 20 for problems of small and medium size and to suitably increase the swarm size for large problems.

B. Effect of pls

In PSOMA, the parameter pls determines the probability of applying NEH_1 insertion to the selected pbest permutation. So, it affects the exploitation element as well as the computational time. The effect of pls on searching quality and computational time when solving the Rec25 problem is illustrated in Table IX.

TABLE IX
EFFECTS OF pls
[table omitted]

It can be seen that the performances obtained by PSOMA when pls = 0 (which means that the NEH-based local search is not used) are worse than those obtained when pls > 0, which demonstrates the effectiveness of incorporating the NEH-based local search into PSOMA. As pls increases, the computational time increases, and the searching quality varies with a small magnitude. That is to say, pls does not affect the searching quality of PSOMA too much when pls > 0. So, to compromise between performance and computational time, pls = 0.1 is recommended in our studies.

VII. CONCLUSION AND FUTURE RESEARCH

In this paper, by hybridizing the population-based evolutionary searching ability of PSO with the local improvement abilities of several local searches to balance exploration and exploitation, an effective PSOMA has been proposed for PFSSP with the objective of minimizing the makespan. Our improvements can be summarized in the following five aspects. First, to make PSO suitable for solving PFSSP, a ROV rule was presented to convert continuous positions to job permutations. Second, the NEH heuristic was incorporated into random initialization to generate an initial population with certain quality and diversity. Third, NEH_1 insertion was proposed and probabilistically applied to some good particles selected by the roulette wheel mechanism to balance the exploration and exploitation abilities. Fourth, an SA-based local search was designed and incorporated into the MA to enrich the searching behaviors and to avoid premature convergence, and an effective adaptive meta-Lamarckian learning strategy was employed to decide which neighborhood to use. Besides, a pairwise-based local search was also applied to enhance the exploitation ability. Simulation results and comparisons demonstrate the superiority of PSOMA in terms of searching quality and robustness. Future work is to investigate parameter adaptation of PSOMA and to develop other PSO-based MAs for the job shop scheduling problem and for multiobjective scheduling problems.

ACKNOWLEDGMENT

The authors would like to thank the editors of this special issue on MAs and the anonymous referees for their constructive comments on the earlier manuscript of this paper.

REFERENCES

[1] H. Stadtler, "Supply chain management and advanced planning: Basics, overview and challenges," Eur. J. Oper. Res., vol. 163, no. 3, pp. 575–588, Jun. 2005.
[2] L. Wang, Shop Scheduling with Genetic Algorithms. Beijing, China: Tsinghua Univ. Press, 2003.
[3] M. Pinedo, Scheduling: Theory, Algorithms and Systems, 2nd ed. Englewood Cliffs, NJ: Prentice-Hall, 2002.
[4] M. R. Garey and D. S. Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness. San Francisco, CA: Freeman, 1979.
[5] S. M. Johnson, "Optimal two- and three-stage production schedules with setup times included," Nav. Res. Logist. Q., vol. 1, no. 1, pp. 61–68, Mar. 1954.
[6] R. Ruiz and C. Maroto, "A comprehensive review and evaluation of permutation flowshop heuristics," Eur. J. Oper. Res., vol. 165, no. 2, pp. 479–494, Sep. 2005.
[7] B. J. Lageweg, J. K. Lenstra, and A. H. G. Rinnooy Kan, "A general bounding scheme for the permutation flow-shop problem," Oper. Res., vol. 26, no. 1, pp. 53–67, 1978.
[8] P. Moscato, "On evolution, search, optimization, genetic algorithms and martial arts: Toward memetic algorithms," Caltech Concurrent Computation Program, California Inst. Technol., Pasadena, CA, Tech. Rep. 826, 1989.
[9] M. Nawaz, E. Enscore, and I. Ham, "A heuristic algorithm for the m-machine, n-job flow-shop sequencing problem," Omega, vol. 11, no. 1, pp. 91–95, 1983.
[10] I. H. Osman and C. N. Potts, "Simulated annealing for permutation flow-shop scheduling," Omega, vol. 17, no. 6, pp. 551–557, 1989.
[11] C. R. Reeves, "A genetic algorithm for flowshop sequencing," Comput. Oper. Res., vol. 22, no. 1, pp. 5–13, Jan. 1995.
[12] C. R. Reeves and T. Yamada, "Genetic algorithms, path relinking and the flowshop sequencing problem," Evol. Comput., vol. 6, no. 1, pp. 45–60, 1998.
[13] E. Nowicki and C. Smutnicki, "A fast tabu search algorithm for the permutation flow-shop problem," Eur. J. Oper. Res., vol. 91, no. 1, pp. 160–175, May 1996.
[14] L. Wang and D. Z. Zheng, "A modified evolutionary programming for flow shop scheduling," Int. J. Adv. Manuf. Technol., vol. 22, no. 7/8, pp. 522–527, Nov. 2003.
[15] P. Hansen and N. Mladenovic, "Variable neighborhood search: Principles and applications," Eur. J. Oper. Res., vol. 130, no. 3, pp. 449–467, May 2001.
[16] L. Wang and D. Z. Zheng, "An effective hybrid heuristic for flow shop scheduling," Int. J. Adv. Manuf. Technol., vol. 21, no. 1, pp. 38–44, 2003.
[17] T. Stutzle, "An ant approach to the flow shop problem," in Proc. 6th Eur. Congr. Intell. Tech. and Soft Comput., Aachen, Germany, 1998, pp. 1560–1564.
[18] T. Murata, H. Ishibuchi, and H. Tanaka, "Genetic algorithms for flowshop scheduling problems," Comput. Ind. Eng., vol. 30, no. 4, pp. 1061–1071, Sep. 1996.
[19] A. C. Nearchou, "A novel metaheuristic approach for the flow shop scheduling problem," Eng. Appl. Artif. Intell., vol. 17, no. 4, pp. 289–300, Apr. 2004.
[20] L. Wang, L. Zhang, and D. Z. Zheng, "A class of order-based genetic algorithm for flow shop scheduling," Int. J. Adv. Manuf. Technol., vol. 22, no. 11/12, pp. 828–835, Dec. 2003.
[21] W. E. Hart, N. Krasnogor, and J. E. Smith, Recent Advances in Memetic Algorithms. Heidelberg, Germany: Springer-Verlag, 2004.
[22] P. Merz, "Memetic algorithms for combinatorial optimization problems: Fitness landscapes and effective search strategies," Ph.D. dissertation, Univ. Siegen, Siegen, Germany, 2000.
[23] A. Quintero and S. Pierre, "Sequential and multi-population memetic algorithms for assigning cells to switches in mobile networks," Comput. Netw., vol. 43, no. 3, pp. 247–261, Oct. 2003.
[24] P. M. França, A. Mendes, and P. Moscato, "A memetic algorithm for the total tardiness single machine scheduling problem," Eur. J. Oper. Res., vol. 132, no. 1, pp. 224–242, Jul. 2001.
[25] H. Ishibuchi, T. Yoshida, and T. Murata, "Balance between genetic search and local search in memetic algorithms for multiobjective permutation flowshop scheduling," IEEE Trans. Evol. Comput., vol. 7, no. 2, pp. 204–223, Apr. 2003.
[26] M. Milano and A. Roli, "MAGMA: A multiagent architecture for metaheuristics," IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 34, no. 2, pp. 925–941, Apr. 2004.
[27] N. Krasnogor, "Studies on the theory and design space of memetic algorithms," Ph.D. dissertation, Univ. West England, Bristol, U.K., 2002.
[28] Y. S. Ong and A. J. Keane, "Meta-Lamarckian learning in memetic algorithms," IEEE Trans. Evol. Comput., vol. 8, no. 2, pp. 99–110, Apr. 2004.
[29] Y. S. Ong, M. H. Lim, N. Zhu, and K. W. Wong, "Classification of adaptive memetic algorithms: A comparative study," IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 36, no. 1, pp. 141–152, Feb. 2006.
[30] Y. S. Ong, P. B. Nair, and A. J. Keane, "Evolutionary optimization of computationally expensive problems via surrogate modeling," AIAA J., vol. 41, no. 4, pp. 687–696, 2003.
[31] L. Wang, L. Zhang, and D. Z. Zheng, "A class of hypothesis-test based genetic algorithm for flow shop scheduling with stochastic processing time," Int. J. Adv. Manuf. Technol., vol. 25, no. 11/12, pp. 1157–1163, Jun. 2005.
[32] H. Ishibuchi and T. Murata, "A multi-objective genetic local search algorithm and its application to flowshop scheduling," IEEE Trans. Syst., Man, Cybern. C, Appl. Rev., vol. 28, no. 3, pp. 392–403, Aug. 1998.
[33] J. Kennedy, R. C. Eberhart, and Y. Shi, Swarm Intelligence. San Francisco, CA: Morgan Kaufmann, 2001.
[34] B. Liu, L. Wang, H. Jin, and D. X. Huang, "Advances in particle swarm optimization algorithm," Control Instruments Chemical Industry, vol. 32, no. 3, pp. 1–6, 2005.
[35] B. Liu, L. Wang, Y. H. Jin, F. Tang, and D. X. Huang, "Improved particle swarm optimization combined with chaos," Chaos, Solitons Fractals, vol. 25, no. 5, pp. 1261–1271, Sep. 2005.
[36] M. F. Tasgetiren, M. Sevkli, Y. C. Liang, and G. Gencyilmaz, "Particle swarm optimization algorithm for permutation flowshop sequencing problem," in Lecture Notes in Computer Science, vol. 3172, M. Dorigo, M. Birattari, and C. Blum, Eds. New York: Springer-Verlag, 2004, pp. 382–389.
[37] J. C. Bean, "Genetic algorithms and random keys for sequencing and optimization," ORSA J. Comput., vol. 6, no. 2, pp. 154–160, 1994.
[38] T. Aldowaisan and A. Allahverdi, "New heuristics for no-wait flowshops to minimize makespan," Comput. Oper. Res., vol. 30, no. 8, pp. 1219–1231, Jul. 2003.
[39] D. E. Goldberg, Genetic Algorithms in Search, Optimization, and Machine Learning. Reading, MA: Addison-Wesley, 1989.
[40] S. Kirkpatrick, C. D. Gelatt, and M. P. Vecchi, "Optimization by simulated annealing," Science, vol. 220, no. 4598, pp. 671–680, 1983.
[41] L. Wang and D. Z. Zheng, "An effective hybrid optimization strategy for job-shop scheduling problems," Comput. Oper. Res., vol. 28, no. 6, pp. 585–596, May 2001.
[42] J. Carlier, "Ordonnancements à contraintes disjonctives," R.A.I.R.O. Recherche Opérationnelle/Oper. Res., vol. 12, no. 4, pp. 333–351, 1978.

Bo Liu received the B.S. degree in electrical engineering from Xi'an Jiaotong University, Xi'an, China, in 2001. He was an exchange student with the School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore, in 2001. He is currently working toward the Ph.D. degree at Tsinghua University, Beijing, China.
His research interests include particle swarm optimization, differential evolution, chaotic dynamics, and production scheduling.

Ling Wang received the B.Sc. and Ph.D. degrees from Tsinghua University, Beijing, China, in 1995 and 1999, respectively.
Since 1999, he has been with the Department of Automation, Tsinghua University, where he became an Associate Professor in 2002. His current research interests are mainly in optimization theory and algorithms and in production scheduling. He has published two books: Intelligent Optimization Algorithms with Applications (Tsinghua University and Springer Press, 2001) and Shop Scheduling with Genetic Algorithms (Tsinghua University and Springer Press, 2003). He has also authored over 100 refereed international and domestic academic papers. He is an Editorial Board Member of the International Journal of Automation and Control (IJAAC). He has also been a reviewer for many international journals, such as the IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS, European Journal of Operational Research, Computers and Operations Research, Computers and Industrial Engineering, Artificial Intelligence in Medicine, and so on.
Dr. Wang received the Outstanding Paper Award at the International Conference on Machine Learning and Cybernetics (ICMLC'02) organized by the IEEE SMC Society in 2002 and the National Natural Science Award (first place prize) nominated by the Ministry of Education of China in 2003. He was named a Young Talent of Science and Technology of Beijing City in 2004.

Yi-Hui Jin received the B.S. degree in power mechanism from Tsinghua University, Beijing, China, in 1959 and, as a graduate student, received a degree from the Mechanical School, Tsinghua University, in 1963.
In 1963, she joined the Department of Power Mechanism and the Department of Automation. She is now a Full Professor with Tsinghua University. Her current research fields are modeling, control, and optimization theory and applications, production planning and scheduling, and supply chain management. She has published two books: Process Control (Tsinghua University Press, 1993) and Control and Management for the Process Systems (China Petrifaction Press, 1998). She has also published more than 200 papers in international and domestic journals and conferences.
Prof. Jin has received several awards from Ministries of China and from conferences, such as the First Grade Prize of Advancement in Science and Technology in 2001. She is the Vice Chairman of the Process Control Committee and a member of the Expert Consulting Committee of the Chinese Association of Automation.
