
Applied Mathematics and Computation 219 (2013) 4163–4175


Ant colony optimization for continuous functions by using novel pheromone updating

Serap Ulusam Seçkiner *, Yunus Eroğlu, Merve Emrullah, Türkay Dereli
Department of Industrial Engineering, Faculty of Engineering, University of Gaziantep, 27310 Şehitkâmil, Gaziantep, Turkey

Abstract

This paper presents an ant colony optimization (ACO) algorithm for continuous functions based on novel pheromone updating. At the end of each iteration in the proposed algorithm, pheromone is updated according to percentiles which determine the number of ants to track the best candidate solution. It is performed by means of a solution archive and information provided by previous solutions. Performance of the proposed algorithm is tested on ten benchmark problems found in the literature and compared with the performances of previous methods. The results show that ACO based on the novel pheromone updating scheme (ACO-NPU) handles different types of continuous functions very well and can be a robust alternative approach to other stochastic search algorithms.

Keywords: Ant colony optimization; Continuous optimization; Novel pheromone updating; Global minimum; Comparative analysis

© 2012 Elsevier Inc. All rights reserved.

1. Introduction

A continuous function f may attain its minimum at a single point x*, or at several points: x* is a minimum if there exists some ε > 0 such that f(x*) ≤ f(x) whenever |x − x*| < ε. If the function is continuous and differentiable, a minimum can be found among the points where the first derivative vanishes. However, wherever the function is not differentiable, it can be more advantageous to use stochastic methods instead of deterministic ones [1].
Several heuristic methods have been proposed to find the global minimum, such as the adaptive random search technique (ARSET) [1], hybridized genetic and Nelder–Mead algorithms [2], a hybrid genetic algorithm with particle swarm optimization [3], simplex-simulated annealing [4], the successive zooming genetic algorithm (SZGA) [5], an ACO based algorithm (ACO-BA) [6], modified ant colony optimization (MACO) [7], continuous function minimization by the dynamic random search technique (DRASET) [8], heuristic random optimization (HRO) [9], improving genetic algorithms by random search technique (IGARSET) [10] and ACO with Reduced Search Space (ACORSES) [11].
In this paper, an ACO algorithm based on a novel pheromone updating scheme, called ACO-NPU, is suggested for finding the global minimum. The distinctive feature of the proposed algorithm is the novel pheromone updating, which depends on better solutions so that ants can be directed toward them by the intensity of the pheromone values. It is performed by means of a solution archive and the information provided by previous solutions. Moreover, ACO-NPU differs in that a new colony is generated with a predetermined solution archive size. At the end of each iteration, the old solution archive and the new archive are combined, and the extended solution archive is kept to explore better solutions. Generally, the ant colony method uses a pheromone table to generate new solutions in combinatorial optimization problems. But it is not possible to obtain a pheromone table for continuous optimization problems, as the number of candidate values is infinite.
The paper is organized as follows. First, ACO is briefly reviewed. The proposed ACO algorithm based on a novel pheromone updating scheme for finding the global minimum is detailed in Section 2.1. In Section 3, the performance of the algorithm is evaluated on the benchmark problems. The last section presents the conclusions.

* Corresponding author.
E-mail address: seckiner@gantep.edu.tr (S.U. Seçkiner).

0096-3003/$ - see front matter © 2012 Elsevier Inc. All rights reserved.
http://dx.doi.org/10.1016/j.amc.2012.10.097

2. Ant colony optimization

Ant colony optimization (ACO) is an optimization technique that was introduced for discrete optimization problems in the early 1990s by Dorigo [12]. The inspiring source of ACO is the foraging behaviour of real ant colonies [13]. Initially, ants randomly explore the area to find food. While carrying food back to the nest, they leave a chemical pheromone trail along the way; the quantity of pheromone deposited increases with the quantity of food. Other ants then follow the pheromone trails toward the food source. ACO has mainly been focused on combinatorial optimization, with applications in routing, assignment, graph coloring, scheduling and robotic control; among its distinctive aspects are its constructive method and the information shared among population members.
The canonical ACO algorithm is illustrated in Fig. 1. The first step consists mainly of the initialization of the pheromone trail. In the iteration (second) step, each ant constructs a complete solution to the problem according to a probabilistic state transition rule. The state transition rule depends mainly on the state of the pheromone. The third step updates the quantity of pheromone; a global pheromone updating rule is applied in two phases. First, an evaporation phase where a fraction of the pheromone evaporates, and then a reinforcement phase where each ant deposits an amount of pheromone which is proportional to the fitness of its solution. This process is iterated until a stopping criterion is met.

Fig. 1. Canonical ant colony algorithm.
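The three canonical steps (initialize pheromone, construct solutions probabilistically, evaporate and reinforce) can be sketched in runnable form. The toy problem below (maximize the number of 1-bits in a binary string), the function name, and all parameter values are our illustrative choices, not the paper's.

```python
import random

def canonical_aco(n_bits=8, n_ants=20, n_iter=50, rho=0.1, seed=42):
    """Minimal canonical ACO on a toy combinatorial problem:
    maximize the number of 1-bits in a binary string."""
    rng = random.Random(seed)
    # Step 1: initialize pheromone trails.
    # tau[i][v] = pheromone for setting bit i to value v.
    tau = [[1.0, 1.0] for _ in range(n_bits)]
    best_sol, best_fit = None, -1
    for _ in range(n_iter):
        # Step 2: each ant constructs a complete solution with a
        # probabilistic state transition rule driven by the pheromone.
        ants = []
        for _ in range(n_ants):
            sol = [1 if rng.random() < t1 / (t0 + t1) else 0
                   for t0, t1 in tau]
            ants.append((sum(sol), sol))
        it_fit, it_sol = max(ants)
        if it_fit > best_fit:
            best_fit, best_sol = it_fit, it_sol
        # Step 3a: evaporation phase -- a fraction of pheromone vanishes.
        for i in range(n_bits):
            tau[i][0] *= (1.0 - rho)
            tau[i][1] *= (1.0 - rho)
        # Step 3b: reinforcement phase -- deposit pheromone proportional
        # to the fitness of the iteration-best solution.
        for i, v in enumerate(it_sol):
            tau[i][v] += it_fit / n_bits
    return best_sol, best_fit
```

With these settings the pheromone quickly biases every bit toward 1, so the search concentrates around the optimum of the toy problem.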
Because its discrete nature restricts its application to continuous domains [14], the adaptation of ACO to continuous optimization problems has attracted increasing attention. Early continuous ant colony works belong to Bilchev and Parmee [15], Monmarché et al. [16], Dréo and Siarry [17], Mathur et al. [18], and Socha and Dorigo [19]. Ant colony applications for continuous problems reviewed in Kovarik's master's thesis [20] include Continuous Ant Colony Optimization (CACO), a new search ant algorithm inspired by Pachycondyla apicalis (API), Continuous Interacting Ant Colony (CIAC), Adaptive Ant Colony Algorithm (AACA), and Binary Ant System (BAS). Socha [21] used a probabilistic approach to generate new solutions in continuous and mixed-variable domains. For the successful application of ACO algorithms to optimization problems, it should be suitable and/or possible to define (or present) the solution space as a network (or graph) [22]. There are many different approaches in ACO based algorithms to find the optimum solution of a continuous problem; however, most of these approaches do not follow the original ACO framework [13].

2.1. Ant colony optimization based on a novel pheromone updating scheme

When ACO is employed for the solution of combinatorial problems, the pheromone values are associated with a finite set of discrete values related to the decisions that the ants make. Socha and Blum [13], Socha [21] and Afshar and Madadgar [23] used a solution archive as a way of describing the pheromone distribution over the search space. We used this solution archive style to select better solutions when generating new ones. By using the solution archive, new solutions can be independent of the best solution generated by previous iterations, and more than one solution vector has a chance to generate new solutions.
The main steps of the proposed algorithm are given in Fig. 2. First, the solution archive is initialized. Then, at each iteration, a number of solutions are constructed by the ants according to the pheromone values. In the last step, the solution archive and the pheromone values are updated.

Details of the proposed algorithm are as follows. As mentioned before, the solution archive (x_k^initial, k = 1, 2, ..., archive size) is initialized and f(x_k^initial) is computed. A number of these initial solutions, limited to the number of ants, are extracted from the solution archive, and pheromone trails are assigned to the remaining solutions. It is not necessary to set the archive size equal to the ant size. But, in the following step, the ant size has to be equal to the number of selected solutions which will be extracted from the archive. Then new pheromone values are computed for the last selected solutions (remaining solutions).

Fig. 2. ACO algorithm for the global minimum based on novel pheromone updating.
Up to this step, actually a local search is performed among the candidate solutions. Then, in order to approach the best solution at the end of each iteration, the Euclidean distances of the candidate solutions to the best known solution are computed (Eq. (1)). In other words, the differences between the minimum function value f_min and the other candidate function values f_i are computed with Eq. (1):

D_i = f_i − f_min    (1)
While a discrete random variable is characterized by a cumulative distribution function (cdf) over discrete values, continuous objective functions call for continuous probability functions. While for a discrete probability distribution an event with probability zero is impossible, this is not true in the case of a continuous random variable. The normal distribution, continuous uniform distribution, Beta distribution, and Gamma distribution are well-known absolutely continuous distributions. The normal distribution, also called the Gaussian or the bell curve, is ubiquitous in nature and statistics due to the central limit theorem: every variable that can be modeled as a sum of many small independent variables is approximately normal. In order to define continuous variables, we selected Gaussian functions (Eq. (2)). During the pheromone updating, we need to approach a value whose Euclidean distance is zero. The Gaussian function is used to compute probabilities by using D_i:

φ_i = exp(−D_i² / (2t))    (2)
The parameter t in the Gaussian function is a standard deviation. Numerous experiments were performed to determine the impact of the parameter t for our algorithm; in this work, t is set to 0.005. In order to determine the percentage of ants that will move toward each candidate solution, the values obtained from the Gaussian function are normalized with Eq. (3):

τ_i = φ_i / Σ_{i=1..m} φ_i    (3)

τ_i is the pheromone value of the ith solution. When generating new solutions, each ant chooses a reference point according to the pheromone values of the solutions. For example, if τ_2 is computed as 0.3681 at the end of an iteration, approximately 37% of the ants will use the 2nd solution as a reference point to produce new solutions. The main idea of the proposed algorithm (ACO-NPU) is illustrated in Fig. 3.
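Eqs. (1)–(3) and the roulette-wheel choice of reference points can be sketched as follows; the function names and the example values are ours, and this is only an illustration of the computation described above, not the authors' code.

```python
import math
import random

def pheromone_values(f_values, t=0.005):
    """Eqs. (1)-(3): distance to the best function value, Gaussian
    weighting, and normalization into pheromone values tau_i."""
    f_min = min(f_values)
    D = [f - f_min for f in f_values]                 # Eq. (1)
    phi = [math.exp(-d * d / (2.0 * t)) for d in D]   # Eq. (2)
    total = sum(phi)
    return [p / total for p in phi]                   # Eq. (3)

def pick_reference(tau, rng):
    """Each ant selects a reference solution index with probability
    tau_i (roulette-wheel selection)."""
    r, acc = rng.random(), 0.0
    for i, t_i in enumerate(tau):
        acc += t_i
        if r <= acc:
            return i
    return len(tau) - 1  # guard against floating-point round-off
```

For instance, `pheromone_values([0.0, 0.05, 1.0])` gives pheromone values of roughly 0.56, 0.44 and 0.00: the solution far from the best receives essentially no pheromone, so almost no ants pick it as a reference point.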
While our approach to generating new solutions (Eq. (4)) is similar to the one used in Hamzaçebi's random search technique [1], the pheromone updating mechanism is thoroughly different:

x_t^k = x_{t−1}^k + dx    (4)

where x_t^k is the solution vector of the kth ant at iteration t, x_{t−1}^k is the selected best solution in the solution archive (reference point) according to the pheromone value at iteration t − 1 for the kth ant, and dx is a vector generated randomly from the range [−a, a] to determine the length of the jump.

Fig. 3. Main idea of the ACO-NPU.



The new solutions generated by Eq. (4) are added to the existing archive, and the archive size grows to k + m (archive size + ant size). Because the archive size must be kept at k, the archive is then truncated. This process is iterated until the maximum number of iterations (T) is reached. At the end of each iteration, the quantity of pheromone is updated and the range parameter a is reduced to simulate the evaporation process with the following formula (Eq. (5)):

a_T = 0.9 · a_{T−1}    (5)

The coefficient of evaporation at the end of each iteration was set to 0.9 in Eq. (5). Experiments showed that if the coefficient of evaporation is chosen smaller or greater than 0.9, the algorithm tends to be trapped in a local optimum.
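The full ACO-NPU loop described above can be sketched for a one-dimensional function. This is our reconstruction from the paper's description, under stated assumptions: the "+" in Eq. (4), treating a as the evaporating range of Eq. (5), and all names and defaults are ours.

```python
import math
import random

def aco_npu(f, bounds, n_ants=5, archive_size=10, n_iter=100,
            a=1.0, t=0.005, seed=1):
    """Sketch of the ACO-NPU iteration for a 1-D function f
    (hedged reconstruction, not the authors' implementation)."""
    rng = random.Random(seed)
    lo, hi = bounds
    # Initialize the solution archive, sorted so archive[0] is best.
    archive = sorted((rng.uniform(lo, hi) for _ in range(archive_size)),
                     key=f)
    for _ in range(n_iter):
        # Pheromone values from Eqs. (1)-(3).
        fs = [f(x) for x in archive]
        f_min = min(fs)
        phi = [math.exp(-(fi - f_min) ** 2 / (2 * t)) for fi in fs]
        total = sum(phi)
        tau = [p / total for p in phi]
        # Eq. (4): each ant jumps from a reference point chosen in
        # proportion to its pheromone value.
        new = []
        for _ in range(n_ants):
            ref = rng.choices(archive, weights=tau)[0]
            new.append(ref + rng.uniform(-a, a))
        # Extend the archive to k + m, then keep the best k entries.
        archive = sorted(archive + new, key=f)[:archive_size]
        # Eq. (5): shrink the jump range ("evaporation").
        a *= 0.9
    return archive[0]
```

On a simple quadratic such as f(x) = (x − 2)², the shrinking jump range lets the archive contract onto the minimum, which is the behavior the text attributes to the evaporation coefficient of 0.9.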

3. Comparison of the proposed algorithm for selected benchmark problems

In this section, ten benchmark problems given in Sections 3.1–3.10 are chosen in order to test the performance and effec-
tiveness of the proposed ACO algorithm based on novel pheromone updating (ACO-NPU) and the results are compared with
those of available methods in the literature. The benchmark functions are taken from [1,5–11]. The results are given in Tables
1–10. Best function values, epoch numbers, solution times and the results of other outstanding algorithms are also given in
Tables.

3.1. Benchmark problem 1 (BP-1)

The objective function of the first problem is given in Eq. (6) and the graph of the function is given in Fig. 4. The global minimum point of this function is at x = 3:

min f(x) = { x²,            if x ≤ 1
           { (x − 3)² − 3,  if x > 1    (6)

BP-1 was previously solved with ARSET [1], ACO-BA [6], MACO [7], HRO [9], IGARSET [10] and ACORSES [11]. ACO-NPU is initiated with a random initial solution x_k^initial within the range [0, 10], and a is a random number within the range [−1, 1]. The ant size, archive size and iteration number were set as 4, 10 and 100, respectively. All comparisons are shown in Table 1.
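The piecewise objective of Eq. (6) is straightforward to express in code; the sketch below (function name ours) also makes the claimed global minimum f(3) = −3 and the continuity at x = 1 easy to check.

```python
def bp1(x):
    """BP-1 objective (Eq. (6)): a piecewise function whose global
    minimum is f(3) = -3; the two branches meet continuously at x = 1."""
    return x ** 2 if x <= 1 else (x - 3) ** 2 - 3
```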

Table 1
Comparison of algorithms for BP-1.

Algorithms   Best x      Best f(x)     Epoch number   Best solution time (s)
HRO          3.000324    −2.9999998    1000           NA
ARSET        3           −3            1000           NA
ACO-BA       3           −3            500            NA
MACO         3           −3            500            0.0090
ACORSES     3           −3            500            0.0480
IGARSET      3           −3            465            0.0420
ACO-NPU      3           −3            400            0.3959

Table 2
Comparison of algorithms for BP-2.

Algorithms   Best x       Best f(x)    Epoch number   Best solution time (s)
ARSET        1.90E−06     2.21E−43     50,000         NA
ACO-BA       7.79E−012    1.40E−045    5000           NA
MACO         9.92E−12     5.60E−45     5000           0.032
IGARSET      NA           1.01E−74     1789           0.0669
ACORSES      NA           4.5E−83      4101           0.0550
ACO-NPU      2.64E−21     4.59E−98     1500           0.1547

Table 3
Comparison of algorithms for BP-3.

Algorithms   Best x          Best y          Best f(x,y)   Epoch number   Best solution time (s)
ARSET        3.0157          2.9999          3.71E−015     10,000         NA
ACO-BA       3 − 2066E−09    3 − 2384E−09    2.62E−021     5000           NA
MACO         3               3               0             3750           0.0180
IGARSET      NA              NA              2.08E−27      1821           0.0666
ACORSES      NA              NA              4.06E−52      3624           0.0830
ACO-NPU      3               3               0             1500           0.0637
4168 S.U. Seçkiner et al. / Applied Mathematics and Computation 219 (2013) 4163–4175

Table 4
Comparison of algorithms for BP-4.

Algorithms   Best x     Best y   Best f(x,y)   Epoch number   Best solution time (s)
ARSET        0.99401    0.997    3.58E−05      10,000         NA
ACO-BA       1          1        0             50,000         NA
MACO         1          1        0             3600           NA
IGARSET      NA         NA       0             2174           0.0568
ACORSES      NA         NA       0             3402           0.0590
ACO-NPU      1          1        0             20,000         0.2256

Table 5
Comparison of algorithms for BP-5.

Algorithms   Best x   Best y      Best f(x,y)   Epoch number   Best solution time (s)
ARSET        −10      6.67E−08    −10           50,000         NA
ACO-BA       −10      8.07E−11    −10           5000           NA
MACO         −10      0           −10           3750           0.0440
IGARSET      NA       NA          −10           1205           0.1043
ACORSES      NA       NA          −10           2167           0.0620
ACO-NPU      −10      1.82E−17    −10           1125           0.0308

Table 6
Comparison of algorithms for BP-6.

Algorithms   Best x      Best y      Best f(x,y)   Epoch number   Best solution time (s)
SZGA         NA          NA          2.98E−8       4000           NA
MACO         0           0           0             3750           0.0080
IGARSET      NA          NA          0             1004           0.0485
ACORSES      NA          NA          0             1832           0.0520
ACO-NPU      9.52E−12    9.52E−12    0             1000           0.0556

Table 7
Comparison of algorithms for BP-7.

Algorithms   Best x     Best y     Best f(x,y)   Epoch number   Best solution time (s)
ACORSES      NA         NA         −837.9658     1176           0.0690
ACO-NPU      420.9687   420.9687   −837.9658     750            0.0289

Table 8
Comparison of algorithms for BP-8.

Algorithms   Best x     Best y     Best f(x,y)   Epoch number   Best solution time (s)
MACO         NA         NA         −2            235            0.0120
IGARSET      NA         NA         −2            2400           0.0614
ACORSES      NA         NA         −2            1610           0.0445
ACO-NPU      1.06E−05   1.06E−05   −2            75             0.0494

Table 9
Comparison of algorithms for BP-9.

Algorithms   Best x   Best y   Best f(x,y)   Epoch number   Best solution time (s)
MACO         3        4        1             36,000         0.0210
DRASET       NA       NA       1             29,663         14.468
IGARSET      NA       NA       1             1849           0.0537
ACORSES      NA       NA       1             1576           0.0630
ACO-NPU      3        4        1             2500           0.4252

Based on these findings, our algorithm (ACO-NPU) can find the global optimum of BP-1 with fewer iterations than the other algorithms. It is clear that the proposed ACO-NPU can compete with the existing algorithms. According to the solution times, the ACO-NPU algorithm is capable of finding the global optimum in a reasonable time for BP-1.

Table 10
Comparison of algorithms for BP-10.

Algorithms   Best x   Best y    Best f(x,y)   Epoch number   Best solution time (s)
SZGA         NA       NA        −1.0316       NA             NA
ACO-NPU      0.0898   −0.7125   −1.0316       500            0.0142

Fig. 4. The graph of BP-1.

3.2. Benchmark problem 2 (BP-2)

BP-2 was previously solved with ARSET [1], ACO-BA [6], MACO [7], IGARSET [10] and ACORSES [11]. The objective function of BP-2 is given in Eq. (7) and Fig. 5 shows the graph of the function. The minimum value of this function is at x = 0:

min f(x) = [x · sin(1/x)]⁴ + [x · cos(1/x)]⁴    (7)

The ACO-NPU algorithm is initiated with the random initial solution x_k^initial, and a is a random number within the range [−1, 1]. The ant size, archive size and iteration number were set as 3, 10 and 500, respectively.

Fig. 5. The graph of BP-2.


4170 S.U. Seçkiner et al. / Applied Mathematics and Computation 219 (2013) 4163–4175

All results can be seen in Table 2. As the best value of x is 0, it is obvious that the proposed ACO-NPU algorithm outperforms ARSET [1], ACO-BA [6], MACO [7], IGARSET [10] and ACORSES [11] in terms of epoch number. ACO-NPU can compete with the other algorithms in terms of solution time.

3.3. Benchmark problem 3 (BP-3)

BP-3 was previously solved with ARSET [1], ACO-BA [6], MACO [7], IGARSET [10] and ACORSES [11]. The objective function of BP-3 is given in Eq. (8) and Fig. 6 shows the graph of the function. The global minimum value of this function of two variables is 0, at the point (x, y) = (3, 3):

min f(x, y) = (x − 3)⁸ / (1 + (x − 3)⁸) + (y − 3)⁴ / (1 + (y − 3)⁴)    (8)

ACO-NPU is initiated with x_k^initial and y_k^initial which are set to 0. a is a random number within the range [−10, 10]. The ant size, archive size and iteration number were set as 3, 10 and 500, respectively.

The best values of x and y obtained by using the proposed ACO-NPU algorithm are 3 and 3, respectively. It is clear from Table 3 that the proposed ACO-NPU finds the global minimum points and outperforms ARSET [1], ACO-BA [6], IGARSET [10] and ACORSES [11], with MACO [7] being the exception.

3.4. Benchmark problem 4 (BP-4)

BP-4 was previously solved with ARSET [1], ACO-BA [6], MACO [7], IGARSET [10] and ACORSES [11]. The objective function of BP-4 is given in Eq. (9) and Fig. 7 shows the graph of the function. The minimum point of this function is at x = 1 and y = 1, with f(x, y) = 0:

min f(x, y) = 100 · (x − y²)² + (1 − x)²    (9)

Our algorithm is initiated with random initial solutions x_k^initial and y_k^initial. a is a random number within the range [−5, 5]. The ant size, archive size and iteration number were set as 40, 40 and 500, respectively.

It can be seen in Table 4 that the proposed ACO-NPU finds the global minimum points in a reasonable solution time, but the number of evaluations is somewhat high when compared with the other algorithms.

3.5. Benchmark problem 5 (BP-5)

BP-5 was previously solved with ARSET [1], ACO-BA [6], MACO [7], IGARSET [10] and ACORSES [11]. The objective function of BP-5 is given in Eq. (10). The minimum point of this function is at x = −10 and y = 0, with f(x, y) = −10. The graph within the range [−10, 10] is shown in Fig. 8:

min f(x, y) = x / (1 + |y|)    (10)

Fig. 6. The graph of BP-3.



Fig. 7. The graph of BP-4.

Fig. 8. The graph of BP-5.

ACO-NPU is initiated with random initial solutions x_k^initial and y_k^initial, and a is a random number within the range [0, 10]. The ant size, archive size and iteration number were set as 5, 100 and 1125, respectively.

The best values obtained by using the proposed ACO-NPU algorithm are x = −10 and y = 1.82E−17 at epoch 1125. It can be seen in Table 5 that the proposed ACO-NPU finds the global minimum point of x and the global minimum value, and approximates the global minimum point of y. For this benchmark problem, ACO-NPU outperforms ARSET [1], ACO-BA [6], MACO [7], IGARSET [10] and ACORSES [11], especially in terms of solution time and epoch number.

3.6. Benchmark problem 6 (BP-6)

BP-6 was previously solved with SZGA [5], MACO [7], IGARSET [10] and ACORSES [11]. The objective function of BP-6 is given in Eq. (11) and Fig. 9 shows the graph of the function. The minimum point of this function is at x = 0 and y = 0, with f(x, y) = 0. The proposed ACO-NPU is initiated with random initial solutions x_k^initial and y_k^initial, and a is a random number within the range [−10, 10]. The ant size, archive size and iteration number were set as 2, 10 and 500, respectively:

min f(x, y) = x² + 2y² − 0.3 cos(3πx) − 0.4 cos(4πy) + 0.7    (11)


As shown in Table 6, ACO-NPU outperforms SZGA [5] and ACORSES [11]. Our algorithm can compete with MACO [7] and
IGARSET [10].

Fig. 9. The graph of BP-6.

3.7. Benchmark problem 7 (BP-7)

BP-7 was previously solved with ACORSES [11]. The objective function of BP-7 is given in Eq. (12) and Fig. 10 shows the graph of the function. The theoretical optimal function value is attained at x_i = 420.9687 within the range [0, 500], with f = −n · 418.9829. a is a random number within the range [−500, 500]. The ant size, archive size and iteration number were set as 5, 10 and 150, respectively:

min f(x) = −Σ_{i=1..n} x_i · sin(√|x_i|)    (12)

It can be seen in Table 7 that ACO-NPU finds the global minimum points of x and y and the optimal function value. For this benchmark problem, ACO-NPU outperforms ACORSES [11].
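Eq. (12) is Schwefel's function, and the stated optimum is easy to verify numerically; the sketch below (function name ours) confirms that for n = 2 the value at x_i = 420.9687 is close to −2 · 418.9829 = −837.9658, the value reported in Table 7.

```python
import math

def bp7(xs):
    """BP-7 (Eq. (12)): Schwefel's function for an n-dimensional point;
    minimized at x_i = 420.9687 with f = -n * 418.9829."""
    return -sum(x * math.sin(math.sqrt(abs(x))) for x in xs)
```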

3.8. Benchmark problem 8 (BP-8)

BP-8 was previously solved with MACO [7], IGARSET [10] and ACORSES [11]. The objective function of BP-8 is given in Eq. (13) and Fig. 11 shows the graph of the function. The minimum point of this function is at x = 0 and y = 0, with f(x, y) = −2. The proposed ACO-NPU is initiated with x_k^initial and y_k^initial within the range [0, 10]. a is a random number within the range [−5, 5]. The ant size, archive size and iteration number were set as 3, 100 and 75, respectively.

Fig. 10. The graph of BP-7.



Fig. 11. The graph of BP-8.

min f(x, y) = x² + y² − cos(18x) − cos(18y)    (13)

It can be seen in Table 8 that the proposed ACO-NPU approximates the global minimum points of x and y and finds the optimal function value exactly. ACO-NPU outperforms MACO [7], IGARSET [10] and ACORSES [11] in terms of epoch number.

3.9. Benchmark problem 9 (BP-9)

BP-9 was previously solved with MACO [7], DRASET [8], IGARSET [10] and ACORSES [11]. The objective function of BP-9 is given in Eq. (14) and Fig. 12 shows the graph of the function. The minimum point of this function is at x = 3 and y = 4, with f(x, y) = 1. The proposed ACO-NPU is initiated with initial solutions x_k^initial within the range [0, 3] and y_k^initial within the range [0, 4]. a is a random number within the range [−20, 20]. The ant size, archive size and iteration number were set as 5, 150 and 500, respectively:

min f(x, y) = exp((1/2)(x² + y² − 25)²) + sin⁴(4x − 3y) + (1/2)(2x + y − 10)²    (14)
It can be seen from Table 9 that ACO-NPU finds the global minimum point of x and y and the optimal function value at epoch 2500. Also, ACO-NPU can compete with the other algorithms.

Fig. 12. The graph of BP-9.



Fig. 13. The graph of BP-10.

3.10. Benchmark problem 10 (BP-10)

BP-10 is a global optimization test function. It is named the six-hump camelback function, where x lies between ±3 and y lies between ±2. The global minima lie at (0.0898, −0.7125) and (−0.0898, 0.7125), where f(x, y) = −1.0316. BP-10 was previously solved with SZGA [5]. The objective function of benchmark problem 10 is given in Eq. (15) and Fig. 13 shows the graph of the function:

min f(x, y) = (4 − 2.1x² + x⁴/3)x² + xy + (−4 + 4y²)y²    (15)

ACO-NPU is initiated with the initial solutions x_k^initial and y_k^initial, which are random numbers within the ranges [0, 3] and [0, 2] respectively, and a is a random number within the range [−10, 10]. The ant size, archive size and iteration number were set as 5, 10 and 100, respectively.
It can be seen from Table 10 that the proposed ACO-NPU finds the global minimum point of x and y at epoch 500.
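The six-hump camelback function of Eq. (15) can be checked directly in code; the sketch below (function name ours) verifies that both symmetric global minima evaluate to approximately −1.0316, the value reported for BP-10.

```python
def bp10(x, y):
    """BP-10 (Eq. (15)): six-hump camelback function; both global
    minima, at (0.0898, -0.7125) and (-0.0898, 0.7125), give
    f = -1.0316 approximately."""
    return ((4 - 2.1 * x ** 2 + x ** 4 / 3) * x ** 2
            + x * y
            + (-4 + 4 * y ** 2) * y ** 2)
```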
Based on all of the findings, it can be said that the proposed ACO-NPU algorithm can find the global optimum with fewer iterations on many benchmark problems. It finds solutions in a reasonable time when compared with the other methods. The performance of IGARSET [10] is very competitive with ACO-NPU's on many test problems. IGARSET gives better results than ACO-NPU only for BP-4; for the other benchmark problems, ACO-NPU is better than IGARSET [10].
The colony size, archive size and iteration number differ across the benchmark problems, since these parameters require tuning. In order to determine the parameters (archive size, ant size and iteration number) heuristically, numerous experiments were performed. In particular, the performance of ACO-NPU depends on the ant size and iteration number, as is the case for many ACO algorithms.

Initial values of the variables are randomly generated for benchmark problems 2, 4–6 and 10; while benchmark problems 1 and 2 are one-dimensional functions, the remaining problems are two-dimensional functions.

4. Conclusion

A heuristic ant colony optimization algorithm based on a novel pheromone updating scheme has been presented in this paper for finding the global minimum of continuous functions. The novel pheromone updating is used to compute the pheromone quantity at the end of each iteration and allows ants to generate new solutions by concentrating on better solutions. The performance of the proposed algorithm was evaluated on ten benchmark problems and compared with the performance of several algorithms available in the literature, such as ACO-BA, MACO, ARSET, SZGA, IGARSET, and ACORSES.
It is concluded that the use of a solution archive with a novel pheromone updating scheme during the steps of ACO-NPU can help to find the global minimum of the selected benchmark problems without being trapped in local minima, within a reasonable solution time. The performance of the proposed algorithm was generally better than that of the existing algorithms proposed for one- or two-dimensional continuous problems, so the algorithm using novel pheromone updating proves useful for finding the global minimum of continuous functions.

Further research is needed to see how the proposed algorithm performs on continuous problems under constraints. Also, the proposed ACO-NPU can be tested on higher-dimensional functions, with three or four variables, to assess the convergence capability of the algorithm.

References

[1] C. Hamzacebi, F. Kutay, A heuristic approach for finding the global minimum: adaptive random search technique, Appl. Math. Comput. 173 (2) (2006)
1323–1333.
[2] R. Chelouah, P. Siarry, Genetic and Nelder–Mead algorithms hybridized for a more accurate global optimization of continuous multiminima functions,
Eur. J. Oper. Res. 148 (2003) 335–348.
[3] Y.T. Kao, E. Zahara, A hybrid genetic algorithm and particle swarm optimization for multimodal functions, Appl. Soft Comput. 8 (2008) 849–857.
[4] M.F. Cardoso, R.L. Salcedo, S. Feyo de Azevedo, The simplex-simulated annealing approach to continuous non-linear optimization, Comput. Chem. Eng.
20 (9) (1996) 1065–1080.
[5] Y.D. Kwon, S.B. Kwon, J. Kim, Convergence enhanced genetic algorithm with successive zooming method for solving continuous optimization problems,
Comput. Struct. 81 (2003) 1715–1725.
[6] M.D. Toksari, Ant colony optimization for finding the global minimum, Appl. Math. Comput. 176 (2006) 308–316.
[7] M.D. Toksarı, A heuristic approach to find the global optimum of function, J. Comput. Appl. Math. 209 (2007) 160–166.
[8] C. Hamzacebi, Continuous functions minimization by dynamic random search technique, Appl. Math. Model. 31 (2007) 2189–2198.
[9] J. Li, R.R. Rhinehart, Heuristic random optimization, Comput. Chem. Eng. 22 (3) (1998) 427–444.
[10] C. Hamzacebi, Improving genetic algorithms performance by local search for continuous function optimization, Appl. Math. Comput. 196 (2008) 309–
317.
[11] O. Baskan, S. Haldenbilen, H. Ceylan, H. Ceylan, A new solution algorithm for improving performance of ant colony optimization, Appl. Math. Comput.
211 (2009) 75–84.
[12] M. Dorigo, Optimization, Learning and Natural Algorithms (in Italian), Ph.D. thesis, Dipartimento di Elettronica, Politecnico di Milano, Italy, 1992.
[13] K. Socha, C. Blum, An ant colony optimization algorithm for continuous optimization: application to feed-forward neural network training, Neural
Comput. Appl. 16 (3) (2007) 235–247.
[14] Y. Zhao, J. Wang, X. Xie, Continuous ant colony algorithm based on entity and its convergence, in: Second Int. Symp. on IITA (2008) pp. 80–84.
[15] B. Bilchev, I.C. Parmee, The ant colony metaphor for searching continuous design spaces, in: Proc. of the AISB Workshop on Evolutionary Computation, Lect. Notes in Comput. Sci. 993 (1995) 25–39.
[16] N. Monmarché, G. Venturini, M. Slimane, On how pachycondyla apicalis ants suggest a new search algorithm, Future Gen. Comput. Syst. 16 (9) (2000)
937–946.
[17] J. Dréo, P. Siarry, A new ant colony algorithm using the heterarchical concept aimed at optimization of multiminima continuous functions, in: Proc. of the Third Int. Workshop on Ant Algorithms, Lect. Notes in Comput. Sci. (2002) 216–221.
[18] M. Mathur, S.B. Karale, S. Priye, V.K. Jayaraman, B.D. Kulkarni, Ant colony approach to continuous function optimization, Ind. Eng. Chem. Res. 39 (10)
(2000) 3814–3822.
[19] K. Socha, M. Dorigo, Ant colony optimization for continuous domains, Eur. J. Oper. Res. 185 (2008) 1155–1173.
[20] O. Kovarik, Ant colony optimization for continuous problems, Master's thesis, Czech Tech. Univ. in Prague, Faculty of Electrical Engineering, Electronics and Comput. Sci. & Eng., 2006.
[21] K. Socha, Ant colony optimization for continuous and mixed-variable domains, Ph.D. thesis, Université Libre de Bruxelles, 2008.
[22] T. Dereli, S.U. Seçkiner, G.S. Daş, H. Gökçen, M.E. Aydın, An exploration of the literature on the use of 'swarm intelligence-based techniques' for public service problems, Eur. J. Ind. Eng. 3 (4) (2009) 379–423.
[23] A. Afshar, S. Madadgar, Ant colony optimization for continuous domains: application to reservoir operation problems, in: Eighth Int. Conference on
Hybrid Intell. Sys. (2008) pp. 13–18.
