
Particle Swarm Optimization with Cognitive Avoidance Component

Anupam Biswas¹, Anoj Kumar² and K. K. Mishra³


Department of Computer Science & Engineering, Motilal Nehru National Institute of Technology Allahabad, Allahabad, India
¹abanumail@gmail.com, ²anojk@mnnit.ac.in, ³kkm@mnnit.ac.in
Abstract—This paper introduces a cognitive avoidance scheme to the Particle Swarm Optimization algorithm. Random movements of a particle, influenced by the personal best solution and the global best solution, may encourage unfruitful moves, which can delay convergence towards the optimal solution. Following the same notion by which a particle's own known best position attracts it (cognitive attraction), a particle may avoid taking moves around its own known worst position (cognitive avoidance). This concept is added to the standard Particle Swarm Optimization algorithm as a cognitive avoidance component with an additional coefficient. Experimental results on well known benchmark functions show considerable improvement with the proposed approach.

I. INTRODUCTION

Particle swarm optimization (PSO) is a population-based optimization technique introduced by Kennedy and Eberhart [1]-[3] in 1995. The algorithm simulates animal social behaviors such as fish schooling and bird flocking. Like other population-based evolutionary algorithms such as genetic algorithms (GA) [4]-[7], it is initialized with a population of random solutions called particles. However, PSO does not use a direct recombination operation to update its population; it works on the behavior of particles in the swarm rather than implementing Darwin's principle of survival of the fittest as in GA. Each particle is associated with a position (a candidate solution in the search domain) and a velocity. The position and velocity of each particle are adjusted depending on its own experience and the experience of its neighbors. Each particle tracks the best solution it has attained so far, called the personal best (pbest), and the best solution attained so far by its neighbors, called the global best (gbest). The influence of both pbest and gbest on the movement of particles helps in converging to the global optimal solution. The position vector and the velocity vector of the i-th particle in the d-dimensional search space at time t can be represented as X_i(t) = (x_i1, x_i2, x_i3, ..., x_id) and V_i(t) = (v_i1, v_i2, v_i3, ..., v_id) respectively. The current pbest and gbest vectors of the i-th particle can be represented as P_i(t) = (p_i1, p_i2, p_i3, ..., p_id) and G_i(t) = (g_i1, g_i2, g_i3, ..., g_id) respectively. The velocity at which a particle moves to its new position, and the new position itself, are evaluated with the following two equations:

V_i(t+1) = V_i(t) + C1 · rand1() · (P_i(t) − X_i(t)) + C2 · rand2() · (G_i(t) − X_i(t))    (1)

X_i(t+1) = X_i(t) + V_i(t+1)    (2)

where C1 and C2 are positive constants called acceleration coefficients, and rand1() and rand2() are two different uniformly distributed random numbers in the range [0, 1]. The first component of Equation 1 is the current velocity of the particle, which acts as inertia, letting the particle move around the search space. The second component is the cognitive acceleration caused by the particle's own best position, which represents the particle's self-awareness. The third component is the social acceleration caused by the best position of the swarm. This social attraction pulls each particle towards the best solution found by the swarm and helps in attaining the global optimal solution. In order to improve the efficiency of PSO, a number of proposals have been put forward since the introduction of the original version of PSO in 1995. To balance local and global search during the optimization process, Shi and Eberhart introduced the concept of inertia weight [8] to the original version of PSO, applied to the first component, i.e. the inertia or current velocity of the particle:

V_i(t+1) = ω · V_i(t) + C1 · rand1() · (P_i(t) − X_i(t)) + C2 · rand2() · (G_i(t) − X_i(t))    (3)

Here ω is the inertia weight, which keeps a fixed value for all generations. This version of PSO is generally considered the standard version of PSO. Other variants of PSO are Discrete PSO [9], where position and velocity vectors are comprised of discrete values, and Binary PSO [10], where particles have to take binary decisions. Parameter tuning approaches have drawn the attention of researchers in recent years. In these approaches the parameters of the standard PSO algorithm (inertia weight, cognitive acceleration coefficient and social acceleration coefficient) are tuned to improve the optimal solution found in the search space. Initially, the values of these parameters were set as fixed.
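For concreteness, one generation of the standard update (Equations 2 and 3) can be sketched in NumPy as follows; the function name and the default coefficient values here are illustrative choices, not taken from the paper:

```python
import numpy as np

def pso_step(X, V, P, G, w=0.7, c1=1.5, c2=1.5, rng=None):
    """One generation of the standard PSO update (Equations 2 and 3).

    X, V : (n_particles, d) arrays of positions and velocities
    P    : (n_particles, d) array of personal best positions (pbest)
    G    : (d,) global best position (gbest)
    w    : inertia weight (kept fixed across generations)
    """
    rng = np.random.default_rng() if rng is None else rng
    r1 = rng.random(X.shape)  # rand1(), uniform in [0, 1]
    r2 = rng.random(X.shape)  # rand2(), uniform in [0, 1]
    V_new = w * V + c1 * r1 * (P - X) + c2 * r2 * (G - X)
    X_new = X + V_new
    return X_new, V_new
```

Setting w = 1 recovers the original 1995 update of Equation 1.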
However, experimental results show that larger parameter values help particles explore the search space and converge towards the global optimum, while smaller values yield finely tuned solutions around the current search area. Suitable parameter values reasonably balance the global and local exploration of the search space, resulting in good solutions. With this notion Shi and Eberhart proposed Time Varying Inertia Weight [16] and Random Inertia Weight [11] for the standard PSO, and Ratnaweera et al. proposed Time Varying Acceleration Coefficients [12] in addition to the time-varying inertia weight factor for the standard PSO.

In this paper we propose a cognitive avoidance approach for the particles, in addition to the cognitive acceleration and social acceleration of the standard PSO. This new addition helps particles move in a proper direction by avoiding probable mishaps, or mitigates the effect of improper movements. Just as cognitive acceleration serves as a particle's awareness of its personal best, cognitive avoidance serves as a particle's awareness of its own worst position, the personal worst (pworst). We term this extension of the standard PSO Particle Swarm Optimization with Cognitive Avoidance (PSOCA). Henceforth, we will use PSOCA for the proposed approach and PSO for the standard version throughout the paper. The rest of the paper is organized as follows. Section II describes the key motivational aspects of the proposal. Section III describes the proposed approach with a suitable example. Section IV provides details of the experimental setup and benchmark functions and summarizes the simulation results.

978-1-4673-6217-7/13/$31.00 © 2013 IEEE

II. MOTIVATION

Particles in PSO track pbest and gbest in every iteration. The movements of a particle are decided based on these two known values, experienced by the individual particle and by the swarm: particles are accelerated towards, or attracted by, both pbest and gbest. Though pbest and gbest influence a particle's movement, the actual movement is affected by the two random values rand1() and rand2(). This interference is necessary for the algorithm to improve its overall exploration of the solution domain. No doubt pbest and gbest motivate particles to move towards the optimal value, but the exploration introduced by the random values may cause particles to move to some unfruitful positions. These unfruitful movements may degrade the algorithm's overall performance. Though such unfruitful movements may be overcome in later iterations through the influence of pbest and gbest, the effect of an unnecessary movement persists, costing extra iterations to regain the same position. Awareness of these pitfalls, and avoidance of them, may improve the overall performance of PSO. It is impossible to be sure whether the next position is good or bad; it can only be predicted probabilistically from previous and present conditions. Tracking pbest and gbest is the predictive notion incorporated in PSO: it motivates each particle to move near them on the assumption that good solutions are probably around pbest or gbest. With this slightly greedy approach PSO works very well, but, as mentioned above, not every solution around pbest or gbest is good, which can have undesirable consequences. An approach to avoid such situations is introduced to improve the overall performance of PSO. A mechanism very similar to the attraction that pbest and gbest exert on a particle is introduced in PSOCA, where pworst (the particle's own worst known solution so far) pushes the particle away so that it is never trapped there again. This avoidance mechanism also reduces movement of particles towards other bad solutions around

Fig. 1. Effect of cognitive avoidance in PSO

the pworst. This is a kind of inverse greedy approach in which particles are repelled, instead of attracted as by pbest and gbest in the case of PSO.

III. PROPOSED APPROACH

As the nature of the solution space is unknown to the particles, the only possibility is to predict the next good movements, ensuring that particles move in the right direction. In PSO, convergence towards the optimal solution is guided by two acceleration components (the cognitive component and the social component). However, it cannot be assured with certainty that the next position is better; it is only an assumption that solutions near pbest or gbest may be better. There is always a possibility of reaching a bad solution due to the interference of the random parameters, as described in the previous section. Any misguidance may slow down the overall convergence of the algorithm towards the optimal solution. Therefore, it is crucial to handle this kind of discrepancy properly, guiding particles in the appropriate direction to enhance efficiency and accuracy. Considering this issue, in this paper we propose an approach to avoid such situations. In our proposed approach each particle maintains the worst value it has attained so far, along with pbest and gbest. With this known worst value a particle tries to avoid further movement towards it, on the understanding that solutions near the worst one may not be suitable. To define this avoidance scheme we add a new component (cognitive avoidance) to the existing velocity equation of PSO. The current pworst vector of the i-th particle can be represented as W_i(t) = (w_i1, w_i2, w_i3, ..., w_id), where d is the dimension of the particle. The velocity equation is redefined as follows:

V_i(t+1) = ω · V_i(t) + C1 · rand1() · (P_i(t) − X_i(t)) + C2 · rand2() · (G_i(t) − X_i(t)) − C3 · rand3() · (W_i(t) − X_i(t))    (4)

The fourth component in Equation 4 represents cognitive avoidance and carries a negative sign, since it represents repulsion, opposite to the cognitive acceleration and social


2013 International Conference on Advances in Computing, Communications and Informatics (ICACCI)

Fig. 2. Convergence of PSOCA compared with standard PSO, comparing the mean fitness value through each generation for Ackley's, Rastrigin's, Rosenbrock's and Schwefel's functions

TABLE I
BENCHMARK FUNCTIONS

Function name   Definition
Rastrigin       f(x) = Σ_{i=1}^{n} [ x_i² − 10 cos(2π x_i) + 10 ]
Ackley          f(x) = 20 + e − 20 exp( −0.2 √( (1/n) Σ_{i=1}^{n} x_i² ) ) − exp( (1/n) Σ_{i=1}^{n} cos(2π x_i) )
Rosenbrock      f(x) = Σ_{i=1}^{n−1} [ 100 (x_{i+1} − x_i²)² + (x_i − 1)² ]
Schwefel        f(x) = − Σ_{i=1}^{n} x_i sin( √|x_i| )

TABLE II
INITIAL RANGE AND OPTIMA

Function name   Range              Optimal solution
Griewank        [−600, 600]        f(x*) = 0
Rosenbrock      [−2.048, 2.048]    f(x*) = 0
Sphere          [−5.12, 5.12]      f(x*) = 0
Schwefel        [−500, 500]        f(x*) = −n · 418.9829
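A direct implementation of the four benchmark functions of Table I can be sketched as follows (x is an n-dimensional point; the function names are ours):

```python
import numpy as np

def rastrigin(x):
    """Rastrigin: multimodal, global minimum 0 at the origin."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(x**2 - 10 * np.cos(2 * np.pi * x) + 10))

def ackley(x):
    """Ackley: multimodal, global minimum 0 at the origin."""
    x = np.asarray(x, dtype=float)
    n = x.size
    return float(20 + np.e
                 - 20 * np.exp(-0.2 * np.sqrt(np.sum(x**2) / n))
                 - np.exp(np.sum(np.cos(2 * np.pi * x)) / n))

def rosenbrock(x):
    """Rosenbrock: unimodal valley, global minimum 0 at x_i = 1."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(100 * (x[1:] - x[:-1]**2)**2 + (x[:-1] - 1)**2))

def schwefel(x):
    """Schwefel: global minimum ≈ −418.9829·n at x_i ≈ 420.9687."""
    x = np.asarray(x, dtype=float)
    return float(-np.sum(x * np.sin(np.sqrt(np.abs(x)))))
```

The global optima match the values quoted in Table II: zero at the origin (at x_i = 1 for Rosenbrock) and approximately −418.9829·n for Schwefel, whose optimum lies near the boundary of the search range.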

acceleration. To control the effect of this avoidance on a particle, a cognitive avoidance coefficient C3 is used, along with a random number rand3() in the range [0, 1]. The position equation remains unaltered, as in Equation 2. The effect of the newly added component is shown with an example in Figure 1. The black circle represents the current position of a particle. The black diamond represents the next position of the particle as guided by pbest and gbest only. The white diamond represents the next position influenced by the cognitive avoidance component. The proposed approach avoids movement towards the worst solution and pushes the final position towards either pbest or gbest; in this case it moves towards pbest. A population of particles is initialized with randomly generated positions and velocities. The fitness of each particle is evaluated with a user-defined objective function. At each generation the velocity of a particle is updated with Equation 4 and its next position is evaluated with Equation 2. At each generation each particle notes any new best or worst position it finds and updates its current pbest, pworst and the swarm's gbest accordingly. Generally, the velocities of particles are limited by a predefined maximum velocity; if a particle gains a larger velocity than the predefined one, the modulus of the velocity is used when updating the position.
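Putting the procedure above together, one possible sketch of the PSOCA loop is given below. The velocity initialization, the inertia weight value and the boundary clipping are our assumptions filling in details the text leaves open; the coefficient defaults follow Section IV.

```python
import numpy as np

def psoca(objective, dim, n_particles=40, n_gen=1000,
          bounds=(-5.12, 5.12), w=0.7, c1=0.6, c2=1.5, c3=0.4, seed=0):
    """Minimal PSOCA sketch: minimize `objective` over [lo, hi]^dim."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, (n_particles, dim))   # positions
    V = rng.uniform(-1.0, 1.0, (n_particles, dim))  # velocities (assumption)
    fit = np.apply_along_axis(objective, 1, X)
    P, pbest_f = X.copy(), fit.copy()     # personal best positions/values
    Wp, pworst_f = X.copy(), fit.copy()   # personal worst (pworst)
    g = int(np.argmin(fit))
    G, gbest_f = X[g].copy(), float(fit[g])  # global best
    for _ in range(n_gen):
        r1, r2, r3 = (rng.random(X.shape) for _ in range(3))
        # Equation 4: attraction to pbest/gbest, repulsion from pworst
        V = (w * V + c1 * r1 * (P - X) + c2 * r2 * (G - X)
             - c3 * r3 * (Wp - X))
        # Equation 2, clipped to the search-space limits of each dimension
        X = np.clip(X + V, lo, hi)
        fit = np.apply_along_axis(objective, 1, X)
        better = fit < pbest_f
        P[better], pbest_f[better] = X[better], fit[better]
        worse = fit > pworst_f
        Wp[worse], pworst_f[worse] = X[worse], fit[worse]
        g = int(np.argmin(fit))
        if fit[g] < gbest_f:
            G, gbest_f = X[g].copy(), float(fit[g])
    return G, gbest_f
```

A usage example: `psoca(lambda x: float(np.sum(x**2)), dim=2)` minimizes the sphere function over [−5.12, 5.12]².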


TABLE III
COMPARISON OF PSO AND PSOCA

Objective function      Dim   Measure          PSO             PSOCA
Rastrigin's function    10    Mean             6.268238        5.890155
                              Median           4.974795        4.974795
                              Minimum          0.994959        1.989918
                              Maximum          13.929412       11.939504
                              Std. Deviation   3.205177        2.725471
                        20    Mean             35.679164       30.087520
                              Median           33.828545       29.351257
                              Minimum          14.924376       16.914289
                              Maximum          63.677127       53.727612
                              Std. Deviation   11.133938       7.594830
                        30    Mean             83.605148       80.671492
                              Median           81.586379       76.611665
                              Minimum          42.783599       32.834629
                              Maximum          152.227761      151.233051
                              Std. Deviation   19.746506       26.351290
Ackley's function       10    Mean             0.000000        0.000000
                              Median           0.000000        0.000000
                              Minimum          0.000000        0.000000
                              Maximum          0.000000        0.000000
                              Std. Deviation   0.000000        0.000000
                        20    Mean             0.701055        0.511282
                              Median           0.000000        0.000000
                              Minimum          0.000000        0.000000
                              Maximum          3.125399        2.452552
                              Std. Deviation   0.821616        0.790906
                        30    Mean             2.379818        2.202456
                              Median           2.268669        2.011870
                              Minimum          0.931307        0.003490
                              Maximum          5.089657        4.919257
                              Std. Deviation   0.878856        0.045834
Rosenbrock's function   10    Mean             0.495389        0.587538
                              Median           0.015612        0.020135
                              Minimum          0.001164        0.000562
                              Maximum          4.009907        4.053867
                              Std. Deviation   1.304553        1.397056
                        20    Mean             12.151854       9.016740
                              Median           10.225332       9.617160
                              Minimum          0.437501        0.146889
                              Maximum          66.253883       15.589512
                              Std. Deviation   9.862681        3.645567
                        30    Mean             35.199659       33.586877
                              Median           24.475635       24.057692
                              Minimum          8.137773        10.343212
                              Maximum          85.072248       105.536037
                              Std. Deviation   23.205804       23.058436
Schwefel's function     10    Mean             -3726.731438    -3733.443933
                              Median           -3716.075534    -3725.944034
                              Minimum          -4071.390538    -4071.390538
                              Maximum          -3242.300441    -3321.267486
                              Std. Deviation   186.940914      175.685358
                        20    Mean             -6312.808570    -6388.208710
                              Median           -6306.932234    -6405.636278
                              Minimum          -7234.748399    -7372.923733
                              Maximum          -5556.836625    -5596.169278
                              Std. Deviation   393.573309      386.027770
                        30    Mean             -8259.610620    -8280.060558
                              Median           -8236.354402    -8334.988735
                              Minimum          -10023.035026   -9371.577700
                              Maximum          -7042.053798    -6923.615322
                              Std. Deviation   584.277113      507.231340


In this work, we have not considered any predefined velocity limits for the particles. Particles are allowed to move with any finite velocity; if a particle moves outside the search space, the movement is restricted to the defined limits of each dimension.

IV. SIMULATION RESULTS

Four well known benchmark functions are used for performance evaluation, as shown in Table I, with their initial ranges shown in Table II. These benchmarks are widely used in evaluating the performance of PSO methods [13]-[17]. Among these, Rosenbrock's function is unimodal, whereas Ackley's, Rastrigin's and Schwefel's functions are multimodal. All functions have their global optimal solution at or near the origin except Schwefel's function, whose global optimum lies at the edge of the solution domain. Since the cognitive avoidance approach is added to PSO to improve overall performance, the proposed PSOCA is compared with the standard PSO only. We consider five performance metrics to compare the quality of the optimal solutions of PSOCA: mean, median, minimum value, maximum value and standard deviation. The asymmetric initialization scheme [19] is not considered, as the benchmark functions cover the entire solution space; symmetric initialization is used, where the initial population is uniformly distributed over the entire solution space. All benchmark functions are tested with dimensions 10, 20 and 30. For each function and each considered dimension, 50 trials are carried out to compute the five performance metrics. The effect of population size on the performance of PSO is of low significance, as shown by Eberhart and Shi [15]. The population size of PSO is generally set in the range 20 to 60; however, as shown in [18], increasing the population yields a slight improvement in the optimal solution. Hence we use a population size of 40 for all the experiments. We also set a maximum generation limit of 1000 for each run to make the comparisons more precise.
The acceleration coefficients C1, C2 and the newly added C3 are kept at the constant values 0.6, 1.5 and 0.4 respectively. All the performance metrics of the optimal solutions over the 50 trials are presented in Table III, where bold figures mark the comparatively better values. For Rastrigin's function, at all dimensions PSOCA performs better than PSO in terms of both mean and median. Although PSO attains the least optimal value for dimensions 10 and 20, its larger mean and median indicate that PSOCA outperforms PSO. At dimension 30 all the performance metrics favor PSOCA, although a higher deviation is observed. For Ackley's function, dimensions 10 and 20 show almost similar results, but at dimension 30 PSOCA performs better than PSO. For Rosenbrock's function at dimension 10 PSOCA's performance is poorer, although it attains a smaller optimal value than PSO; at dimension 20 PSOCA performs very well, and dimension 30 shows a little improvement over PSO. For Schwefel's function PSOCA performs better than PSO at all dimensions. The convergence of PSOCA and PSO, as presented in Figure 2, shows that the convergence rate of PSOCA is comparatively better on the benchmark functions.
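The five performance metrics reported in Table III can be computed over the 50 trial results as in the sketch below; whether the paper uses the sample or population form of the standard deviation is not stated, so the sample form (ddof=1) is an assumption:

```python
import numpy as np

def summarize(results):
    """Five performance metrics over the best fitness values of repeated trials."""
    r = np.asarray(results, dtype=float)
    return {
        "Mean": float(r.mean()),
        "Median": float(np.median(r)),
        "Minimum": float(r.min()),
        "Maximum": float(r.max()),
        "Std. Deviation": float(r.std(ddof=1)),  # sample std (assumption)
    }
```

Called on a list of 50 per-trial best fitness values, this yields one row group of Table III.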

V. CONCLUSION

In this paper we have proposed PSOCA, a mechanism to improve the performance of PSO by avoiding particles' unfruitful movements. An additional component called cognitive avoidance is introduced into the velocity equation, and a cognitive avoidance coefficient controls its effect. The new addition to a particle's movement acts as an extra boost towards convergence to the optimal solution: it pushes particles towards either pbest or gbest in each generation, resulting in fast convergence. The performance of the proposed PSOCA is tested on four well known benchmark functions at various dimensions and shows significant improvement. The study of convergence on the benchmark functions shows that PSOCA converges faster than PSO and also reaches better solutions.

REFERENCES
[1] Kennedy, J. and Eberhart, R., "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks, 1995, pp. 1942-1948.
[2] Eberhart, R. C. and Kennedy, J., "A new optimizer using particle swarm theory," in Proc. Sixth International Symposium on Micro Machine and Human Science (Nagoya, Japan), IEEE Service Center, Piscataway, NJ, 1995, pp. 39-43.
[3] Eberhart, R. C., Dobbins, R. W., and Simpson, P., Computational Intelligence PC Tools, Boston: Academic Press, 1996.
[4] Holland, J., Adaptation in Natural and Artificial Systems, University of Michigan Press, Ann Arbor, 1975.
[5] Goldberg, D., "A note on Boltzmann tournament selection for genetic algorithms and population-oriented simulated annealing," TCGA 90003, Engineering Mechanics, University of Alabama, 1990.
[6] Srinivas, M. and Patnaik, L., "Adaptive probabilities of crossover and mutation in genetic algorithms," IEEE Transactions on Systems, Man and Cybernetics, vol. 24, no. 4, pp. 656-667, 1994.
[7] Goldberg, D., Genetic Algorithms in Search, Optimization and Machine Learning, Addison-Wesley, Reading, MA, 1989.
[8] Shi, Y. and Eberhart, R. C., "A modified particle swarm optimizer," in Proceedings of the IEEE International Conference on Evolutionary Computation, 1998, pp. 69-73.
[9] Laskari, E., Parsopoulos, K., and Vrahatis, M., "Particle swarm optimization for integer programming," in Proceedings of the IEEE Congress on Evolutionary Computation, May 2002, vol. 2, pp. 1582-1587.
[10] Kennedy, J. and Eberhart, R. C., "A discrete binary version of the particle swarm algorithm," in IEEE International Conference on Systems, Man, and Cybernetics, 1997.
[11] Eberhart, R. C. and Shi, Y., "Tracking and optimizing dynamic systems with particle swarms," in Proceedings of the 2001 IEEE Congress on Evolutionary Computation, pp. 94-100.
[12] Ratnaweera, A., Halgamuge, S. K., and Watson, H. C., "Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients," IEEE Transactions on Evolutionary Computation, vol. 8, pp. 240-255, 2004.
[13] Kennedy, J., "Stereotyping: Improving particle swarm performance with cluster analysis," in Proceedings of the IEEE Congress on Evolutionary Computation, vol. 2, 2000, pp. 303-308.
[14] Angeline, P. J., "Using selection to improve particle swarm optimization," in Proceedings of the IEEE International Conference on Computational Intelligence, 1998, pp. 84-89.
[15] Eberhart, R. C. and Shi, Y., "Comparing inertia weights and constriction factors in particle swarm optimization," in Proceedings of the IEEE Congress on Evolutionary Computation, vol. 1, 2000, pp. 84-88.
[16] Shi, Y. and Eberhart, R. C., "Empirical study of particle swarm optimization," in Proceedings of the IEEE Congress on Evolutionary Computation, vol. 3, 1999, pp. 101-106.
[17] Suganthan, P. N., "Particle swarm optimizer with neighborhood operator," in Proceedings of the IEEE Congress on Evolutionary Computation, vol. 3, 1999, pp. 1958-1962.


[18] van den Bergh, F. and Engelbrecht, A. P., "Effect of swarm size on cooperative particle swarm optimizers," in Proceedings of the Genetic and Evolutionary Computation Conference (GECCO-2001), San Francisco, CA, July 2001, pp. 892-899.
[19] Angeline, P. J., "Evolutionary optimization versus particle swarm optimization: Philosophy and the performance difference," in Lecture Notes in Computer Science, vol. 1447, Proceedings of the 7th International Conference on Evolutionary Programming VII, Mar. 1998, pp. 600-610.
