
Introducing a Binary Ant Colony Optimization

Min Kong and Peng Tian


Shanghai Jiaotong University, Shanghai, China kongmin@sjtu.edu.cn, ptian@sjtu.edu.cn

Abstract. This paper proposes a Binary Ant Colony Optimization applied to constrained optimization problems with binary solution structure. Due to its simple structure, the convergence status of the proposed algorithm can be monitored through the distribution of pheromone in the solution space, and the probability of solution improvement can be in some way controlled by the maintenance of pheromone. The successful implementations to the binary function optimization problem and the multidimensional knapsack problem indicate the robustness and practicability of the proposed algorithm.

1 Introduction

Ant Colony Optimization (ACO) is a stochastic meta-heuristic for combinatorial optimization problems. Since its first introduction by M. Dorigo and his colleagues [1] in 1991, ACO has been successfully applied to a wide set of different hard combinatorial optimization problems, such as the traveling salesman problem [2], the quadratic assignment problem [3], and the vehicle routing problem [4]. The main idea of ACO is the cooperation of a number of artificial ants via pheromone laid on the paths. Each ant contributes a little effort to the solution, while the final result emerges from the ants' interactions. Besides its success in practical applications, there have also been some studies on the theory of ACO [5,6,7], mainly focused on convergence proofs. But the dynamics of ACO, especially its speed of convergence, are still unclear. This paper tries to reveal some properties of the convergence speed of ACO through a simple ant system, BAS, which works in binary space and whose performance is verified on the binary function optimization problem and the multidimensional knapsack problem.

2 The Binary Ant System

2.1 Solution Construction

In BAS, artificial ants construct solutions by walking on the routing graph described in Fig. 1. At every iteration, n_a ants cooperate to search the binary solution domain; each ant constructs its solution by walking sequentially from node 1 to node n + 1 on the routing graph. At each node i, the ant selects either the upper path i0 or the lower path i1 to walk to the
M. Dorigo et al. (Eds.): ANTS 2006, LNCS 4150, pp. 444–451, 2006. © Springer-Verlag Berlin Heidelberg 2006

Fig. 1. Routing Diagram for Ants in BAS

next node i + 1. Selecting i0 means x_i = 0, and selecting i1 means x_i = 1. The selection probability depends on the pheromone distributed on the paths:

p_is(t) = τ_is(t),  i = 1, …, n,  s ∈ {0, 1}   (1)

where t is the iteration number. The solutions constructed by the ants may not be feasible when tackling constrained binary optimization problems, so a solution repair operator is incorporated to move infeasible solutions into the feasible domain.

2.2 Pheromone Update

Initially, BAS sets all pheromone values to τ_is(0) = 0.5, as in the HCF [8], but uses a simplified pheromone update rule:

τ_is(t + 1) ← (1 − ρ) τ_is(t) + ρ Σ_{x ∈ S_upd | is ∈ x} w_x   (2)

where S_upd is the set of solutions to be intensified; w_x is an explicit weight for each solution x ∈ S_upd, satisfying 0 ≤ w_x ≤ 1 and Σ_{x ∈ S_upd} w_x = 1; ρ is the evaporation parameter, set initially to ρ = ρ_0 but decreased as ρ ← 0.9ρ every time the pheromone re-initialization is performed. S_upd consists of three components: the global best solution S^gb, the iteration best solution S^ib, and the restart best solution S^rb. Different combinations of w_x are applied according to the convergence status of the algorithm, which is monitored by a convergence factor cf, defined as:

cf = (1/n) Σ_{i=1}^{n} |τ_i0 − τ_i1|   (3)

Under this definition, when the algorithm is initialized with τ_is(0) = 0.5 for all paths, cf = 0; when the algorithm converges (or matures prematurely), |τ_i0 − τ_i1| → 1 for every i, so that cf → 1. Table 1 describes the pheromone update strategy for different values of cf, where w_ib, w_rb and w_gb are the weight parameters for S^ib, S^rb and S^gb respectively, and cf_i, i = 1, …, 5 are threshold parameters in the range [0, 1]. In BAS, once cf > cf_5, the pheromone re-initialization procedure is performed according to S^gb:

τ_is = τ_H if is ∈ S^gb;  τ_is = τ_L otherwise   (4)


where τ_H and τ_L are two parameters satisfying 0 < τ_L < τ_H < 1 and τ_L + τ_H = 1. This kind of pheromone re-initialization focuses on the previous search experience rather than performing a total restart of the algorithm.
Table 1. Pheromone update strategy for BAS

        cf < cf_1  [cf_1, cf_2)  [cf_2, cf_3)  [cf_3, cf_4)  [cf_4, cf_5)
w_ib       1          2/3           1/3            0             0
w_rb       0          1/3           2/3            1             0
w_gb       0           0             0             0             1
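As a concrete illustration, the construction and update rules above can be sketched in Python (a minimal sketch; all names are illustrative, and S_upd is simplified to a list of (solution, weight) pairs rather than the full three-component scheme of Table 1):

```python
import random

# tau[i] = [tau_i0, tau_i1]; since tau_i0 + tau_i1 = 1 (Theorem 1 below),
# the pheromone values act directly as selection probabilities.
def construct_solution(tau):
    # Each ant walks nodes 1..n, taking path i1 (bit 1) with probability tau_i1.
    return tuple(1 if random.random() < t1 else 0 for _, t1 in tau)

def update_pheromone(tau, S_upd, rho):
    # Eq. (2): tau_is <- (1 - rho) * tau_is + rho * sum of w_x over x with bit i = s.
    for i in range(len(tau)):
        for s in (0, 1):
            deposit = sum(w for x, w in S_upd if x[i] == s)
            tau[i][s] = (1 - rho) * tau[i][s] + rho * deposit

def convergence_factor(tau):
    # Eq. (3): cf = (1/n) * sum_i |tau_i0 - tau_i1|; 0 at start, -> 1 at convergence.
    return sum(abs(t0 - t1) for t0, t1 in tau) / len(tau)

tau = [[0.5, 0.5] for _ in range(4)]   # tau_is(0) = 0.5
s_ib = (1, 0, 1, 1)                    # an iteration-best solution with w_ib = 1
update_pheromone(tau, [(s_ib, 1.0)], rho=0.3)
print(convergence_factor(tau))         # ≈ 0.3
```

One update with ρ = 0.3 moves every reinforced pheromone value from 0.5 to 0.65 and its complement to 0.35, so cf rises from 0 to about 0.3, illustrating how cf tracks convergence.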

3 Theoretical Analysis

3.1 Pheromone as Probability

Lemma 1. For any pheromone value τ_is at any iteration step t in BAS, the following holds:

0 < τ_is(t) < 1   (5)

Proof. From the pheromone update procedure described in the previous section, we obviously have τ_is(t) > 0. And because Σ_{x ∈ S_upd} w_x = 1, we can calculate an upper limit for any particular path according to equation (2):

τ_is^max(t) ≤ (1 − ρ) τ_is^max(t − 1) + ρ
           ≤ (1 − ρ)^t τ_is(0) + Σ_{i=1}^{t} ρ (1 − ρ)^{i−1}
           = (1 − ρ)^t τ_is(0) + 1 − (1 − ρ)^t   (6)

Since BAS sets the initial pheromone value so that 0 < τ_is(0) < 1, we have τ_is^max(t) < 1 from the final sum of equation (6).

Theorem 1. The pheromone values in BAS can be regarded as selection probabilities throughout the iterations.

Proof. Initially, since all pheromone values are set to 0.5, the statement of the theorem obviously holds. For the following iterations, it suffices to prove that τ_i0(t) + τ_i1(t) = 1 and 0 < τ_is(t) < 1 hold for every variable x_i under the condition τ_i0(t − 1) + τ_i1(t − 1) = 1. From Lemma 1, 0 < τ_is(t) < 1 holds for any pheromone value. In the pheromone update procedure, both values of the pair are evaporated, and exactly one of i0 and i1 belongs to any x ∈ S_upd and receives pheromone intensification; therefore, for any pheromone pair τ_i0 and τ_i1:

τ_i0(t) + τ_i1(t) = (1 − ρ) τ_i0(t − 1) + ρ Σ_{x ∈ S_upd | i0 ∈ x} w_x + (1 − ρ) τ_i1(t − 1) + ρ Σ_{x ∈ S_upd | i1 ∈ x} w_x
                  = (1 − ρ)(τ_i0(t − 1) + τ_i1(t − 1)) + ρ Σ_{x ∈ S_upd} w_x
                  = 1 − ρ + ρ = 1
3.2 Relation with PBIL

It is interesting that BAS, developed from ACO, is quite similar to PBIL [9], which was developed from another successful meta-heuristic, the Genetic Algorithm. The main reason probably lies in the fact that both BAS and PBIL incorporate the same reinforcement learning rule, represented here as equation (2). The pheromone values acting as selection probabilities in BAS appear identical to the probability vector in PBIL, but BAS focuses more on monitoring the pheromone and controls it with additional pheromone maintenance methods to guide the search toward quick convergence. Moreover, while PBIL only deals with binary function optimization problems, BAS also applies to constrained combinatorial optimization problems.

4 Experimental Results

4.1 Function Optimization Problem

Normally, the function optimization problem can be described as:

min  f(y),  y = [y_1, …, y_v]
s.t. a_j ≤ y_j ≤ b_j,  j = 1, …, v   (7)

where f is the objective function and v is the number of variables. In BAS, each variable y_j is coded as a binary string [x_j1, …, x_jd] in BCD code, where d is the coding dimension for every variable. The final solution representation x = [x_11, …, x_1d, x_21, …, x_2d, …, x_v1, …, x_vd] is the concatenation of all the variables y_j in series, so n = vd is the total dimension of the binary representation of the function optimization problem.

Repair Operator. The purpose of the repair operator for the function optimization problem is to decode the binary solution string x = [x_1, …, x_n] into the real variables and make sure that each variable y_j falls into the constrained region [a_j, b_j]. The process can be described as:

y'_j = Σ_{k=1}^{d} x_{d(j−1)+k} 2^{k−1},  j = 1, …, v
y_j = (y'_j / 2^d)(b_j − a_j) + a_j   (8)
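This decoding step can be sketched as follows (a minimal sketch with hypothetical function names; bits are taken least-significant first within each variable's d-bit block):

```python
def decode(x, d, bounds):
    """Decode a binary solution string into real variables, following eq. (8).
    Each variable j uses d bits of x; bounds[j] = (a_j, b_j)."""
    ys = []
    for j, (a, b) in enumerate(bounds):
        bits = x[j * d:(j + 1) * d]
        # y'_j: integer value of the d-bit block, in [0, 2^d - 1]
        y_int = sum(bit * 2 ** k for k, bit in enumerate(bits))
        # scale into the constrained region [a_j, b_j)
        ys.append(y_int / 2 ** d * (b - a) + a)
    return ys

print(decode([1, 1, 1, 1], d=2, bounds=[(0.0, 4.0), (0.0, 4.0)]))  # [3.0, 3.0]
```

With d = 2, each block can take 2^2 = 4 values, so every variable is resolved to (b_j − a_j)/2^d; the d = 12 setting used in the experiments gives a resolution of (b_j − a_j)/4096.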

Local Search. A one-flip local search is applied to S^ib and S^gb: it checks every bit by flipping its value from 0 to 1 or from 1 to 0, to see whether the resulting solution is better than the original one. If the solution is improved, it is updated; otherwise, the original solution is kept for further flips.

Comparison with Other Ant Systems. For all the tests, we use the general parameter settings d = 12, n_a = 5, τ_0 = 0.5, τ_H = 0.65, ρ_0 = 0.3, cf_i = [0.2, 0.3, 0.4, 0.5, 0.9], and the algorithm stops when the total number of function evaluations exceeds 10000 or the search is considered successful by satisfying the following condition:


|f − f_known best| < ε_1 · |f_known best| + ε_2   (9)

where f is the optimum found by BAS and f_known best is the known global optimum. ε_1 and ε_2 are accuracy parameters, set to ε_1 = ε_2 = 10^−4. Table 2 reports the success rate and the average number of function evaluations on different benchmark problems over 100 runs. BAS finds the best known solutions every time for all the benchmarks. Considering the average number of function evaluations, BAS is also very competitive.
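The one-flip local search described above can be sketched as follows (illustrative names; shown here for a minimized objective f):

```python
def one_flip_local_search(x, f):
    """Flip each bit of x in turn; keep a flip only if it improves f (minimized)."""
    best = f(x)
    for i in range(len(x)):
        x[i] ^= 1          # flip bit i
        v = f(x)
        if v < best:
            best = v       # improvement: keep the flip
        else:
            x[i] ^= 1      # no improvement: revert and continue
    return x, best

# usage on a toy objective: minimize the number of ones in the string
x, v = one_flip_local_search([1, 0, 1, 1], f=sum)
print(x, v)  # [0, 0, 0, 0] 0
```

Each pass costs n objective evaluations, which is why the paper's 10000-evaluation budget also bounds how often the local search can be applied.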
Table 2. Comparison of BAS with CACO [10], API [11], CIAC [12], and ACO [13]. Each cell reports the success rate (% ok) and the average number of function evaluations (evals); bracketed entries denote runs stopped at the maximum number of evaluations, and "–" marks values not reported.

        CACO              API        CIAC              ACO       BAS
        %ok    evals      evals      %ok    evals      evals     %ok    evals
R2      100    6842       [10000]    100    11797      2905      100    5505.4
SM      100    22050      [10000]    100    50000      695       100    74.37
GP      100    5330       –          56     23391      364       100    1255.59
MG      100    1688       –          20     11751      –         100    2723.36
St      –      [6000]     –          94     28201      –         100    1044.66
Gr5     –      50000      [10000]    63     48402      –         100    1623
Gr10    100    –          –          52     50121      –         100    1718.43

4.2 Multidimensional Knapsack Problem

The multidimensional knapsack problem (MKP) is a well-known NP-hard combinatorial optimization problem, which can be formulated as:

maximize Σ_{j=1}^{n} p_j x_j   (10)
subject to Σ_{j=1}^{n} r_ij x_j ≤ b_i,  i = 1, …, m,   (11)
x_j ∈ {0, 1},  j = 1, …, n.   (12)

Repair Operator. In BAS, a repair operator is incorporated to guarantee feasible solutions. The idea comes from Chu and Beasley [14] and is based on the pseudo-utility ratio calculated according to the surrogate relaxation. The general idea of this approach is briefly described as follows. The surrogate relaxation problem of the MKP can be defined as:

maximize Σ_{j=1}^{n} p_j x_j   (13)
subject to Σ_{j=1}^{n} (Σ_{i=1}^{m} ω_i r_ij) x_j ≤ Σ_{i=1}^{m} ω_i b_i   (14)
x_j ∈ {0, 1},  j = 1, 2, …, n   (15)


where ω = {ω_1, …, ω_m} is a set of surrogate multipliers (or weights) of some positive real numbers. We obtain these weights by a simple method suggested by Chu and Beasley [14]: we solve the LP relaxation of the original MKP and use the values of the dual variables as the weights. The weight ω_i can be seen as the shadow price of the i-th constraint in the LP relaxation of the MKP.
Table 3. The results of BAS_MKP on the 5.100 instances. For each instance, the table reports the best known solution from the OR-library, the best and average solutions found by Leguizamón and Michalewicz [15], the best solution found by Fidanova [16], the best and average solutions found by Alaya et al. [17], and the results from BAS_MKP, including the best and average solutions over 30 runs for each instance.

N    Best Known   L.&M.            Fidanova   Alaya et al.       BAS_MKP
                  Best     Avg.    Best       Best     Avg.      Best     Avg.
00   24381        24381    24331   23984      24381    24342     24381    24380.7
01   24274        24274    24245   24145      24274    24247     24274    24270.7
02   23551        23551    23527   23523      23551    23529     23551    23539.7
03   23534        23527    23463   22874      23534    23462     23534    23524.1
04   23991        23991    23949   23751      23991    23946     23991    23978.5
05   24613        24613    24563   24601      24613    24587     24613    24613
06   25591        25591    25504   25293      25591    25512     25591    25591
07   23410        23410    23361   23204      23410    23371     23410    23410
08   24216        24204    24173   23762      24216    24172     24216    24205.4
09   24411        24411    24326   24255      24411    24356     24411    24405.5
10   42757        –        –       42705      42757    42704     42757    42736.2
11   42545        –        –       42445      42510    42456     42545    42498.9
12   41968        –        –       41581      41967    41934     41968    41966.5
13   45090        –        –       44911      45071    45056     45090    42074.8
14   42218        –        –       42025      42218    42194     42198    42198
15   42927        –        –       42671      42927    42911     42927    42927
16   42009        –        –       41776      42009    41977     42009    42009
17   45020        –        –       44671      45010    44971     45020    45016.5
18   43441        –        –       43122      43441    43356     43441    43408.8
19   44554        –        –       44471      44554    44506     44554    44554
20   59822        –        –       59798      59822    59821     59822    59822
21   62081        –        –       61821      62081    62010     62081    62010.4
22   59802        –        –       59694      59802    59759     59802    59772.7
23   60479        –        –       60479      60479    60428     60479    60471.8
24   61091        –        –       60954      61091    61072     61091    61074.2
25   58959        –        –       58695      58959    58945     58959    58959
26   61538        –        –       61406      61538    61514     61538    61522.5
27   61520        –        –       61520      61520    61492     61520    61505.2
28   59453        –        –       59121      59453    59436     59453    59453
29   59965        –        –       59864      59965    59958     59965    59961.7


The pseudo-utility ratio for each variable, based on the surrogate constraint coefficients, is defined as:

u_j = p_j / Σ_{i=1}^{m} ω_i r_ij   (16)

The repair operator consists of two phases. The first phase examines each bit of the solution string in increasing order of u_j and changes the bit from one to zero if feasibility is violated. The second phase reverses the process, examining each bit in decreasing order of u_j and changing the bit from zero to one as long as feasibility is not violated.

Local Search. A random-4-flip method is designed as the local search for S^ib and S^gb: it randomly selects 4 bits to flip, repairs the solution if necessary, and checks the result. If the resulting solution is better, the solution is updated; otherwise, the original solution is kept. These flips are performed 10n times on S^ib and S^gb at each iteration, where n is the problem dimension.

Comparison with Other Ant Systems. Table 3 displays the comparison results on the 5.100 instances from the OR-library. The parameters for all the tests are n_a = 20, τ_0 = 0.5, τ_H = 0.55, ρ_0 = 0.3, cf_i = [0.3, 0.5, 0.7, 0.9, 0.95], and the algorithm stops after 2000 iterations. On these instances, BAS_MKP outperforms all three other algorithms in terms of the average solution found. In fact, BAS_MKP finds 29 best known solutions out of the 30 instances tested.
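The two repair phases can be sketched as follows (a minimal sketch with illustrative names; u holds the pseudo-utility ratios of equation (16), and r, b are the constraint data of (11)):

```python
def repair(x, r, b, u):
    """Two-phase MKP repair after Chu & Beasley: drop items in increasing order
    of pseudo-utility u_j until feasible, then greedily re-add in decreasing order."""
    m, n = len(b), len(x)
    load = [sum(r[i][j] * x[j] for j in range(n)) for i in range(m)]
    order = sorted(range(n), key=lambda j: u[j])  # increasing u_j
    # Phase 1: change bits from one to zero while any constraint is violated
    for j in order:
        if x[j] and any(load[i] > b[i] for i in range(m)):
            x[j] = 0
            for i in range(m):
                load[i] -= r[i][j]
    # Phase 2: change bits from zero to one as long as feasibility is not violated
    for j in reversed(order):
        if not x[j] and all(load[i] + r[i][j] <= b[i] for i in range(m)):
            x[j] = 1
            for i in range(m):
                load[i] += r[i][j]
    return x

# toy instance: one constraint of capacity 5, item weights 3, 3, 2;
# u approximates profit/weight, so the middle item is dropped first
print(repair([1, 1, 1], r=[[3, 3, 2]], b=[5], u=[2.0, 5/3, 2.0]))  # [1, 0, 1]
```

Phase 1 always terminates with a feasible solution (the all-zero string is feasible), and phase 2 can only add items that keep every constraint satisfied, so the returned string is guaranteed feasible.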

5 Conclusions

This paper presented BAS, a binary version of the hyper-cube framework of ACO, to handle constrained binary optimization problems. In the proposed system, pheromone trails are put on the selections of 0 and 1 for each bit of the solution string, and they directly represent the probability of selection. Experimental results show that BAS works well on binary function optimization problems and performs excellently on the multidimensional knapsack problem, solving these various problems rapidly and effectively.

References
1. Dorigo, M., Maniezzo, V., Colorni, A.: Positive feedback as a search strategy. Technical report, Dipartimento di Elettronica e Informatica, Politecnico di Milano, Italy (1991)
2. Dorigo, M., Maniezzo, V., Colorni, A.: Ant System: Optimization by a colony of cooperating agents. IEEE Transactions on Systems, Man and Cybernetics, Part B 26(1) (1996) 29–41
3. Gambardella, L., Taillard, E., Dorigo, M.: Ant colonies for the quadratic assignment problem. Journal of the Operational Research Society 50(2) (1999) 167–176
4. Gambardella, L., Taillard, E., Agazzi, G.: MACS-VRPTW: A multiple ant colony system for vehicle routing problems with time windows. In Corne, D., Dorigo, M., Glover, F., eds.: New Ideas in Optimization. McGraw-Hill (1999) 63–76
5. Gutjahr, W.: A graph-based ant system and its convergence. Future Generation Computer Systems 16(8) (2000) 873–888
6. Stützle, T., Dorigo, M.: A short convergence proof for a class of ant colony optimization algorithms. IEEE Transactions on Evolutionary Computation 6(4) (2002) 358–365
7. Kong, M., Tian, P.: A convergence proof for the ant colony optimization algorithms. In: 2005 International Conference on Artificial Intelligence, ICAI 2005 (2005) 27–30
8. Blum, C., Dorigo, M.: The hyper-cube framework for ant colony optimization. IEEE Transactions on Systems, Man and Cybernetics, Part B 34(2) (2004) 1161–1172
9. Baluja, S., Caruana, R.: Removing the genetics from the standard genetic algorithm. In Prieditis, A., Russell, S., eds.: The International Conference on Machine Learning 1995, Morgan Kaufmann (1995) 38–46
10. Bilchev, G., Parmee, I.: Constrained optimization with an ant colony search model. In: 2nd International Conference on Adaptive Computing in Engineering Design and Control (1996) 26–28
11. Monmarché, N., Venturini, G., Slimane, M.: On how Pachycondyla apicalis ants suggest a new search algorithm. Future Generation Computer Systems 16(8) (2000) 937–946
12. Dréo, J., Siarry, P.: Continuous interacting ant colony algorithm based on dense heterarchy. Future Generation Computer Systems 20(5) (2004) 841–856
13. Socha, K.: ACO for continuous and mixed-variable optimization. In Dorigo, M., Birattari, M., Blum, C., eds.: ANTS 2004. LNCS 3172, Springer (2004) 25–36
14. Chu, P., Beasley, J.: A genetic algorithm for the multidimensional knapsack problem. Journal of Heuristics 4(1) (1998) 63–86
15. Leguizamón, G., Michalewicz, Z.: A new version of ant system for subset problems. In: Congress on Evolutionary Computation (1999) 1459–1464
16. Fidanova, S.: Evolutionary algorithm for multidimensional knapsack problem. In: Seventh International Conference on Parallel Problem Solving from Nature (PPSN VII) Workshop (2002)
17. Alaya, I., Solnon, C., Ghedira, K.: Ant algorithm for the multi-dimensional knapsack problem. In: International Conference on Bioinspired Optimization Methods and their Applications (BIOMA 2004) (2004) 63–72
