4.1 Introduction
Self-organization
Fig. 4.1. Examples of observable patterns in biological systems: (a) coat pattern of a reticulated giraffe (U.S. Fish and Wildlife Service, Gary M. Stolz), (b) double Fibonacci spiral at the heart of a daisy, (c) birds flocking, (d) fish schooling.
126 4 Ant Colony Algorithms
Stigmergy
Stigmergy is one of the basic concepts behind the ant colony metaheuristics. It is precisely defined as a "form of communication by means of modifications of the environment", though the term "indirect social interactions" describes the same phenomenon. Biologists distinguish "quantitative" stigmergy from "qualitative" stigmergy, but the process itself is identical. An example of the use of stigmergy is described in section 4.2.2. The great strength of stigmergy is that individuals exchange information through the task in progress, via the state of advancement of the global task.
Decentralized control
Dense heterarchy
The concept of dense heterarchy describes the manner in which such a system forms a highly connected network, where each individual can exchange information with any other. This concept is to some extent contrary to that of hierarchy where, in a popular but erroneous vision, the queen controls her subjects by passing orders down a vertical structure, whereas, in a heterarchy, the structure is rather horizontal (figure 4.2).
Fig. 4.2. Hierarchy (a) and dense heterarchy (b): two opposite concepts.
It should be noted that this concept matches not only that of decentralized control, but also that of stigmergy. This is because the concept of heterarchy describes the manner in which information flows through the system. However, in a dense heterarchy, every sort of communication must be taken into account, which includes stigmergy as well as the direct exchange of information between individuals.
Ants use pheromone trails to mark their way, for example between the nest and a source of food. A colony is thus able to choose (under certain conditions) the shortest path towards a source to exploit [Goss et al., 1989, Beckers et al., 1992], without the individuals having a global view of the path.
Indeed, as illustrated in figure 4.4, the ants that followed the two shortest branches arrived back at the nest soonest, after having visited the food source. The quantity of pheromone present on the shortest path is thus slightly greater than that on the longest path. However, a trail with a greater concentration of pheromone is more attractive to the ants and has a higher probability of being followed. Hence the short trail is reinforced more than the long one and, in the long run, is chosen by the great majority of the ants.
It should be noted here that the choice is implemented by a mechanism of amplification of an initial fluctuation. It is therefore possible that, if a greater quantity of pheromone is deposited on the longer branches at the beginning of the exploitation, the colony may choose the longest route.
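This amplification mechanism can be illustrated with a small simulation (a sketch, not taken from the cited experiments; the branch lengths, deposit rule and parameter values are illustrative assumptions):

```python
import random

def double_bridge(n_ants=1000, evaporation=0.01, q=1.0,
                  lengths=(1.0, 2.0), seed=42):
    """Sequentially send ants over a two-branch bridge.

    Each ant picks a branch with probability proportional to its
    pheromone level, then deposits an amount inversely proportional
    to the branch length (the shorter branch is refreshed faster).
    Returns how many ants chose each branch."""
    rng = random.Random(seed)
    tau = [1.0, 1.0]          # initial pheromone on (short, long)
    counts = [0, 0]
    for _ in range(n_ants):
        p_short = tau[0] / (tau[0] + tau[1])
        branch = 0 if rng.random() < p_short else 1
        counts[branch] += 1
        tau[branch] += q / lengths[branch]            # reinforcement
        tau = [(1 - evaporation) * t for t in tau]    # evaporation
    return counts

short, long_ = double_bridge()
```

Because the deposit is inversely proportional to branch length, an early fluctuation in favor of the short branch is amplified; as noted above, a sufficiently lucky start on the long branch can still lock the colony onto the longer route.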
Other experiments [Beckers et al., 1992], with another species of ant, showed that if the ants can make U-turns when their heading deviates too strongly from the direction of the food source, then the colony is more flexible and the risk of being trapped on the long route is weaker.
It is difficult to know precisely the physicochemical properties of the pheromone trails, which vary from species to species and depend on a great number of parameters.
Fig. 4.4. Experiment for the selection of the shortest branches by a colony of ants: (a) at the beginning of the experiment, (b) at the end of the experiment.

4.3 Optimization by ant colonies and the traveling salesman problem
One of the earliest problems for which an ant colony algorithm was implemented was the traveling salesman problem (TSP): the "Ant System" (AS) [Colorni et al., 1992]. The progression from the metaphor to the algorithm is relatively easy to follow, and the traveling salesman problem is well known and extensively studied.
It is interesting to examine the principles of this first algorithm in order to better understand how ant colony algorithms operate. There are two ways of approaching these algorithms. The first, closest to the earliest developments, is the one which historically led to the original "Ant System"; we have chosen to describe it in this section. The second is a more formal description of the mechanisms common to ant colony algorithms; it is given in section 4.5.
The traveling salesman problem consists in finding the shortest path connecting n specified cities, each city being visited exactly once. The problem is more generally defined on a completely connected graph (N, A), where the cities are the nodes N and the paths between these cities are the edges A.
1. the list of the cities already visited, which defines the possible movements at each step, when ant k is on city i: J_i^k;
2. the reciprocal of the distance between cities: η_ij = 1/d_ij, called visibility. This static information is used to direct the choice of the ants towards close cities and away from overly remote ones;
3. the quantity of pheromone deposited on the edge connecting two cities, called the intensity of the trail. This parameter defines the relative attractiveness of a portion of the total path and changes at each passage of an ant. It can be viewed as a global memory of the system, which evolves through a learning process.
∆τ_ij^k(t) = Q / L^k(t)   if (i, j) ∈ T^k(t)
∆τ_ij^k(t) = 0            if (i, j) ∉ T^k(t)        (4.2)

where T^k(t) is the path traversed by ant k during iteration t, L^k(t) the length of the tour and Q a fixed parameter.
However, the algorithm would not be complete without the process of evaporation of the pheromone trails. Indeed, it is necessary that the system be capable of "forgetting" the bad solutions, to avoid being trapped in sub-optimal solutions. This is achieved by counterbalancing the additive reinforcement of the trails with a constant decrease of the values of the edges at each iteration. The update rule for the trails is thus:

τ_ij(t + 1) = (1 − ρ) · τ_ij(t) + ∆τ_ij(t)        (4.3)

where ρ is the evaporation coefficient and ∆τ_ij(t) is the sum of the deposits ∆τ_ij^k(t) of the m ants. The whole algorithm can then be summarized as follows:
For t = 1, . . . , tmax
  For each ant k = 1, . . . , m
    Choose a starting city at random
    While there remain unvisited cities
      Choose a city j, from the list J_i^k of remaining cities, according to formula 4.1
    End While
    Deposit a trail ∆τ_ij^k(t) on the path T^k(t), in accordance with equation 4.2
  End For
  Evaporate the trails according to formula 4.3
End For
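The loop above can be turned into a compact Python sketch of AS for the TSP (a minimal interpretation: rule 4.1 is implemented as a probability proportional to τ^α · η^β, and the parameter values are common defaults, not prescriptions from the text):

```python
import math
import random

def ant_system(cities, m=None, t_max=100, alpha=1.0, beta=5.0,
               rho=0.5, Q=100.0, seed=0):
    """Minimal Ant System for the TSP.

    cities: list of (x, y) points.  Transition rule (4.1) is taken
    as p proportional to tau^alpha * eta^beta with eta = 1/d;
    deposits follow equation 4.2 and evaporation formula 4.3."""
    rng = random.Random(seed)
    n = len(cities)
    m = m or n                       # the authors suggest m = n
    d = [[math.dist(a, b) for b in cities] for a in cities]
    tau = [[1.0] * n for _ in range(n)]

    def tour_length(tour):
        return sum(d[tour[i]][tour[(i + 1) % n]] for i in range(n))

    best_tour, best_len = None, float("inf")
    for _t in range(t_max):
        tours = []
        for _k in range(m):
            i = rng.randrange(n)     # random starting city
            tour, unvisited = [i], set(range(n)) - {i}
            while unvisited:         # build the tour city by city
                cand = list(unvisited)
                weights = [tau[i][j] ** alpha * (1.0 / d[i][j]) ** beta
                           for j in cand]
                j = rng.choices(cand, weights=weights)[0]
                tour.append(j)
                unvisited.remove(j)
                i = j
            tours.append(tour)
        for i in range(n):           # evaporation (formula 4.3)
            for j in range(n):
                tau[i][j] *= 1 - rho
        for tour in tours:           # deposit (equation 4.2)
            L = tour_length(tour)
            if L < best_len:
                best_tour, best_len = tour, L
            for i in range(n):
                a, b = tour[i], tour[(i + 1) % n]
                tau[a][b] += Q / L
                tau[b][a] += Q / L
    return best_tour, best_len
```

On small instances this sketch finds near-optimal tours within a few dozen iterations; it is meant to show the structure of the algorithm, not to be competitive.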
4.3.2 Variants
Ant System & elitism
An early variation of the "Ant System" was proposed in [Dorigo et al., 1996]: the introduction of "elitist" ants. In this version, the best ant (the one which traversed the shortest path) deposits a larger quantity of pheromone, so as to increase the probability that the other ants explore the most promising solution.
Fig. 4.5. The traveling salesman problem optimized by the AS algorithm; the points represent the cities and the thickness of the edges the quantity of pheromone deposited: (a) example of a path built by an ant, (b) at the beginning of the calculation, all the paths are explored, (c) the shortest path is reinforced more than the others, (d) evaporation eliminates the worst solutions.
Ant-Q

This variant draws on Q-learning, a reinforcement-based learning algorithm.
where τ0 is the initial value of the trail. At each passage, the visited edges see their quantity of pheromone decrease, which favors diversification by taking into account the unexplored paths. At each iteration, the global update is carried out as:

τ_ij(t + 1) = (1 − ρ) · τ_ij(t) + ρ · ∆τ_ij(t)

where the edges (i, j) belong to the best tour T+ of length L+, and where ∆τ_ij(t) = 1/L+. Here, only the best trail is updated, which contributes to intensification by selection of the best solution.
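The two ACS update rules can be sketched as follows (the exact form of the local rule and the parameter names ξ and τ0 are assumptions based on the description above):

```python
def acs_local_update(tau, i, j, xi=0.1, tau0=0.01):
    """Assumed local rule: each time an ant crosses edge (i, j),
    its pheromone decays towards the initial value tau0, which
    favours diversification towards unexplored edges."""
    tau[i][j] = (1 - xi) * tau[i][j] + xi * tau0

def acs_global_update(tau, best_tour, best_len, rho=0.1):
    """Global rule: only the edges of the best tour T+ (of length
    L+) are reinforced, with a deposit of 1/L+."""
    n = len(best_tour)
    delta = 1.0 / best_len
    for k in range(n):
        i, j = best_tour[k], best_tour[(k + 1) % n]
        tau[i][j] = (1 - rho) * tau[i][j] + rho * delta
        tau[j][i] = tau[i][j]
```

Only the edges of the best tour receive the global reinforcement, while the local rule erodes edges as ants cross them.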
3. The system uses a list of candidates. This list stores, for each city, the closest neighbors, sorted by increasing distance. An ant will consider an edge towards a city outside the list only if every city in the list has already been explored. To be specific, as long as unvisited cities remain in the candidate list, the choice is made among them according to rule 4.4; otherwise, it is the closest of the unvisited cities that is selected.
The best results are obtained by updating the trail of the best solution with an increasingly high frequency during the execution of the algorithm.
For the AS algorithm, the authors note that, although the value of Q has little influence on the final result, this value should be of the same order of magnitude as the estimated length of the best path. In addition, the city of departure of each ant is typically selected at random, as no significant influence of the starting point could be demonstrated.
With regard to the ACS algorithm, the authors advise using the relation τ0 = (n · Lnn)−1, where n is the number of cities and Lnn the length of a tour found by the nearest-neighbor method. The number of ants m is a significant parameter, since it takes part in the principal positive feedback of the system. The authors suggest using as many ants as cities (i.e. m = n) to obtain good performance on the traveling salesman problem. It is possible to use only one ant, but the effect of amplification between different tour lengths is then lost, as is the natural parallelism of the algorithm, which can prove harmful for certain problems. In general, ant colony algorithms do not seem to be very sensitive to the precise number of ants.
4.5 Formalization and properties of ant colony optimization

An elegant description was proposed in [Dorigo and Stützle, 2003], which can be applied to (combinatorial) problems where a partial construction of the solution is possible. This description, although restrictive, makes it possible to highlight the original contributions of these metaheuristics (called ACO, for "Ant Colony Optimization", by the authors).
Artificial ants used in ACO are stochastic solution construction procedures that probabilistically build a solution by iteratively adding solution components to partial solutions by taking into account (i) heuristic information on the problem instance being solved, if available, and (ii) (artificial) pheromone trails which change dynamically at run-time to reflect the agents' acquired search experience.
4.5.1 Formalization
The ants try to construct feasible solutions, but, if necessary, they can produce infeasible ones. The components and the connections can be associated with pheromone trails τ (establishing an adaptive memory describing the state of the system) and with a heuristic value η (representing a priori information about the problem, or originating from a source other than the ants; it is very often the cost of the current state). The pheromone trails and the heuristic values can be associated either with the components or with the connections (figure 4.6).
Fig. 4.6. In an ant colony algorithm, the pheromone trails can be associated with the components (a) or with the connections (b) of the graph representing the problem to be solved.
Each ant has a memory used to store the path traversed, an initial state and stopping conditions. The ants move according to a probabilistic decision rule, a function of the local pheromone trails, the state of the ant and the constraints of the problem. When adding a component to the solution in progress, an ant can update the trail associated with the component or with the corresponding connection. Once the solution is built, it can update the pheromone trail of the components or connections used. Lastly, an ant must be capable of building at least one solution to the problem.
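The construction behavior described above can be sketched generically; the callback names (`is_complete`, `feasible`) are hypothetical, introduced only to stand for the problem-specific stopping conditions and constraints:

```python
import random

def construct_solution(components, tau, eta, is_complete, feasible,
                       alpha=1.0, beta=1.0, rng=random):
    """Generic ACO construction procedure: starting from an empty
    partial solution, repeatedly add one component, chosen with
    probability proportional to tau^alpha * eta^beta among the
    components still feasible for the partial solution."""
    solution = []
    while not is_complete(solution):
        candidates = [c for c in components if feasible(solution, c)]
        weights = [tau[c] ** alpha * eta[c] ** beta for c in candidates]
        solution.append(rng.choices(candidates, weights=weights)[0])
    return solution
```

The pheromone dictionary `tau` plays the role of the adaptive memory, while `eta` carries the a priori heuristic information.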
In addition to the rules governing the behavior of the ants, another major process is at work: the evaporation of the pheromone trails. At each iteration, the value of the trails is decreased. The goal of this reduction is to avoid overly fast convergence and the trapping of the algorithm in local minima. It causes a gradual forgetting which helps the exploration of new areas.
According to the authors of the ACO formalism, processes requiring centralized control (and thus not able to be carried out directly by the ants) can be implemented as additional processes. In our opinion, this is not desirable: one then loses the decentralized characteristic of the system. Moreover, rigorously formalizing the implementation of such additional processes becomes difficult, because one should be able to account for any arbitrary process.
The use of stigmergy is a crucial factor for ant colony algorithms. Hence, the choice of the method for implementing the pheromone trails is important in obtaining the best results. This choice is mainly related to the possible representations of the search space, each representation suggesting a different way of implementing the trails. For example, for the traveling salesman problem, an effective implementation consists in using a trail τij between two cities i and j as a representation of the interest of visiting city j after city i. Another possible representation, less effective in practice, consists in considering τij as a representation of the interest of visiting i as the jth city. In fact, the pheromone trails describe the state of the search at each iteration, and the agents modify the way in which the problem will be represented and perceived by the other agents. This information is shared by the ants by means of modifications of the environment, a form of indirect communication: stigmergy. Information is thus stored in the system for a certain time, which led certain authors to consider this process as a form of adaptive memory [Taillard, 1998, Taillard et al., 1998], where the dynamics of the storage and sharing of information are crucial for the system.
4.5.3 Intensification/diversification
The balance between diversification and intensification is an extensively explored problem in the design and use of a metaheuristic. By intensification, one understands the exploitation of the information gathered by the system at a given time. Diversification, on the other hand, is the exploration of areas of the search space imperfectly taken into account. Very often, the question is where and when to "inject a random perturbation" into the system (diversification) and/or to improve a solution (intensification). In ACO-type algorithms, as in most cases, there are several ways of organizing these two facets of optimization metaheuristics. The most obvious is to adjust the parameters α and β, which determine the relative influence of the pheromone trails and of the heuristic information. The higher the value of α, the stronger the intensification, because the trails have more influence on the choices of the ants. Conversely, the lower the value of α, the stronger the diversification, because the ants then tend to avoid the trails. The parameter β acts in a similar manner, and the two parameters must therefore be tuned simultaneously to obtain tight control over these aspects.
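A quick computation shows the effect of α on the transition probabilities of rule 4.1 (the numeric values are purely illustrative):

```python
def transition_probabilities(tau, eta, alpha, beta):
    """Normalized probabilities p_j proportional to tau_j^alpha * eta_j^beta."""
    w = [t ** alpha * e ** beta for t, e in zip(tau, eta)]
    s = sum(w)
    return [x / s for x in w]

tau = [3.0, 1.0]   # pheromone favours edge 0
eta = [1.0, 2.0]   # visibility favours edge 1

p_intensify = transition_probabilities(tau, eta, alpha=2.0, beta=0.5)
p_diversify = transition_probabilities(tau, eta, alpha=0.2, beta=0.5)
```

With a high α the heavily marked edge dominates (intensification); with a low α the probabilities flatten and the visibility regains weight (diversification).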
Ant colony metaheuristics are often more effective when hybridized with local search algorithms. These optimize the solutions found by the ants before those solutions are used to update the pheromone trails. From the point of view of the local search, the advantage of employing an ant colony algorithm to generate the initial solutions is undeniable. Very often, hybridization with a local search algorithm is what separates an interesting ACO-type metaheuristic from a really effective algorithm.
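A typical choice of local search for TSP hybrids is 2-opt; a minimal sketch (the coupling with the pheromone update is left to the caller):

```python
def two_opt(tour, d):
    """2-opt local search: repeatedly reverse a segment of the tour
    while doing so shortens it.  `d` is the distance matrix.  A
    common way to improve the tours built by the ants before the
    pheromone update."""
    n = len(tour)
    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            for j in range(i + 2, n):
                if (j + 1) % n == i:      # the two edges share a city
                    continue
                a, b = tour[i], tour[i + 1]
                c, e = tour[j], tour[(j + 1) % n]
                if d[a][c] + d[b][e] < d[a][b] + d[c][e]:
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True
    return tour
```

Applied to each ant's tour before the deposit step, this typically removes the crossing edges that the probabilistic construction leaves behind.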
Another possibility for improving performance is to inject more relevant heuristic information. This addition generally comes at a high cost in terms of additional computational burden.

It should be noted that these two approaches are similar in that both employ cost information to improve a solution. Local search does so more directly than the heuristic, but the latter perhaps makes more natural use of a priori information about the problem.
4.5.5 Parallelism
4.5.6 Convergence
Metaheuristics can be viewed as modified versions of a basic algorithm: random search. This algorithm has the interesting property of guaranteeing that, sooner or later, the optimal solution will be found; one can therefore concentrate on the issue of convergence. However, once this basic algorithm is biased, the guarantee of convergence no longer exists.
While in certain cases one is sure of the convergence of an ant colony algorithm (MMAS for example, see section 4.3.2), the problem of the convergence of an arbitrary ACO algorithm remains open. However, there is a variant of ACO whose convergence has been proven [Gutjahr, 2000, Gutjahr, 2002]: the "Graph-Based Ant System" (GBAS). The difference between GBAS and the AS algorithm lies in the updating of the pheromone trails, which is allowed only if a better solution is found. For certain values of the parameters, and for a given ε > 0, the algorithm will find the optimal solution with a probability Pt ≥ 1 − ε, after a time t ≥ t0 (where t0 is a function of ε).
4.6 Prospect
Armed with the early successes of the ant colony algorithms, research started exploring many areas other than combinatorial optimization: for example, the use of these algorithms for continuous and/or dynamic problems, or the comparison of this type of algorithm with other metaheuristics within the framework of swarm intelligence.
The first of these algorithms, quite naturally called CACO ("Continuous Ant Colony Algorithm") [Bilchev and Parmee, 1995, Wodrich and Bilchev, 1997, Mathur et al., 2000], uses two approaches: an evolutionary algorithm selects and crosses areas of interest, which the ants then explore and evaluate. An ant selects an area with a probability proportional to the concentration of pheromone in that area, much as, in the "Ant System", an ant would select a trail going from one city to another:

p_i(t) = τ_i(t) · η_i^β(t) / Σ_{j=1}^{N} τ_j(t) · η_j^β(t)

where N is the number of areas and η_i^β(t) is used to include heuristics specific to the problem. The ants then leave the centre of the area and move in a randomly chosen direction, as long as an improvement in the objective function is observed. The displacement step used by the ant at each evaluation is given by:

δr(t, R) = R · (1 − u^((1 − t/T)^c))

where u is a random number in [0, 1], T the total number of iterations and c a parameter controlling the shrinking of the step.
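The displacement step can be written directly from this reconstruction (the formula is garbled in the source, so the roles assigned to u, T and c are assumptions):

```python
import random

def displacement_step(t, T, R, c=2.0, u=None, rng=random):
    """Reconstructed CACO step delta_r(t, R) = R * (1 - u**((1 - t/T)**c)),
    with u uniform in [0, 1]: the step tends to 0 as the iteration
    counter t approaches the horizon T, which narrows the search
    around the centre of the area over time."""
    if u is None:
        u = rng.random()
    return R * (1.0 - u ** ((1.0 - t / T) ** c))
```

At t = T the exponent vanishes, u**0 = 1 and the step is exactly zero, so the local search freezes at the end of the run.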
Fig. 4.7. The CACO algorithm: the global ants (a) take part in the displacement of the areas which the local ants (b) evaluate.
A hybrid method
Another algorithm was developed by two of the co-authors of this book, focusing on the principles of communication in ant colonies. It proposes to
1. At each iteration, each ant selects an initial value in the group of candidate values with the probability:

   p_ij^k(t) = τ_ij(t) / Σ_r τ_ir(t)

2. Apply the mutation and crossover operators to these m values in order to obtain m new values;
3. Add these new values to the group of candidate values for the component x_i,e;
4. Form the m solutions of the new generation;
5. Calculate the "fitness" of these solutions;
6. When the m ants have traversed all the edges, update the pheromone trails of the candidate values of each component by:

   τ_ij(t + 1) = (1 − ρ) · τ_ij(t) + Σ_k ∆τ_ij^k(t + 1)

7. If the kth ant chooses the jth candidate value of the group of the component, then ∆τ_ij^k(t + 1) = W · f_k; if not, ∆τ_ij^k = 0, with W a constant and f_k the "fitness" of the solution found by the kth ant;
8. Erase the m values having the lowest pheromone intensities in each group of candidates.

Algorithm 4.2: A hybrid ant colony algorithm for the continuous case.
add direct exchanges of information [Dréo and Siarry, 2002] to the stigmergic processes, inspired by the "heterarchical approach" described previously in section 4.2.1. A formalization of the exchange of information is thus proposed, based on the concept of communication channels. Indeed, there are several possible ways of passing information between two groups of individuals, for example by deposits of pheromone trails or by direct exchanges. One can define various types of communication channels representing the characteristics of the transmission of information. From the point of view of metaheuristics, there are three principal characteristics (see figure 4.8):
Range: the number of individuals involved in the exchange of information. For example, information can be emitted by one individual and received by several others, and vice versa.
Memory: the persistence of information in the system. Information can remain in the system for a specific duration or be only transitory.
Integrity: the modifications generated by the use of the communication channel. Information can vary in time or be distorted during its transmission.
Moreover, the information passing through a communication channel can be of varied kinds, for example the value and/or the position of a point in the search space.
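The three characteristics can be captured in a small illustrative data structure (names and value scales are ours, not taken from any library):

```python
from dataclasses import dataclass

@dataclass
class Channel:
    """Illustrative model of a communication channel, described by
    the three characteristics above."""
    range_: int       # individuals reached per emission (-1: anyone)
    memory: float     # persistence of the information (0 = transitory)
    integrity: float  # 1.0 = faithful transmission, lower = distorted

# stigmergic channel: broadcast, persistent, subject to evaporation
stigmergic = Channel(range_=-1, memory=0.9, integrity=0.8)
# direct exchange: one recipient, transitory, faithful
direct = Channel(range_=1, memory=0.0, integrity=1.0)
```

In these terms, stigmergic deposits form a wide-range, persistent but lossy channel, while direct exchanges are narrow, transitory and faithful.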
The CIAC algorithm (acronym for “Continuous Interacting Ant Colony”)
uses two communication channels:
Fig. 4.8. A communication channel, characterized by its range, its memory and its integrity.
Fig. 4.9. Oscillations observed during the simultaneous use of the two communication channels in the CIAC algorithm (standard deviation as a function of the number of evaluations).
Fig. 4.10. Flow chart of the CIAC algorithm (random initialization, evaporation, detection of pheromone spots, handling of queued messages, threshold choices driven by the ant's motivation, local search, and reinforcement or deposit of spots).
In all these algorithms adapted to continuous problems, the term "ant colonies" can be used because all of them employ processes very similar to stigmergy for information exchange. However, there is one algorithm adapted to the continuous case [Monmarché et al., 2000] that takes as its starting point the behavior of primitive ants of the Pachycondyla apicalis species (primitive does not mean ill-adapted), and that does not use indirect communication through pheromone trails: the API algorithm.
In this method, a nest is first positioned randomly in the search space, and ants are then sent out at random within a given perimeter. Each ant locally explores its "hunting site" by evaluating several points within a given perimeter (see figure 4.11) and memorizes the best point found. If, during the exploration of its hunting site, it finds a better point, it will return to this site; otherwise, after a certain number of explorations, it will choose another site. Once the explorations of the hunting sites are completed, randomly picked ants compare their best results two by two (as real ants may do when they exhibit "tandem-running" behavior) and memorize the best two hunting sites. After a specified time period, the nest is re-initialized at the best point found, the ants' memory of the sites is reset and the algorithm executes a new iteration.
Fig. 4.11. The API algorithm: a multiple-start method inspired by a species of primitive ant. The ants (full circles) explore hunting sites (small squares) within a perimeter (large circle) around the nest. The nest is moved to the best point found when the system is re-initialized (thick arrow).
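A minimal sketch of the API loop, under simplifying assumptions (the parameter names are ours; the tandem-running comparison between ants is omitted, and each ant always returns to its best site):

```python
import random

def api(f, dim, n_ants=10, n_sites=3, patience=5, t_max=50,
        nest_period=10, radius=1.0, seed=0):
    """Hunting-site search in the spirit of API; minimizes f over
    the box [-5, 5]^dim."""
    rng = random.Random(seed)

    def around(center, r):
        """Random point within r of center, clamped to the box."""
        return [min(5.0, max(-5.0, x + rng.uniform(-r, r)))
                for x in center]

    nest = [rng.uniform(-5, 5) for _ in range(dim)]
    best_point, best_val = nest[:], f(nest)
    ants = [[] for _ in range(n_ants)]  # per ant: [site, value, failures]
    for t in range(1, t_max + 1):
        for mem in ants:
            if len(mem) < n_sites:       # create a new hunting site
                s = around(nest, radius)
                sv = f(s)
                mem.append([s, sv, 0])
                if sv < best_val:
                    best_point, best_val = s[:], sv
            site = min(mem, key=lambda m: m[1])  # revisit the best site
            p = around(site[0], radius / 10)
            v = f(p)
            if v < site[1]:              # successful hunt: keep the site
                site[0], site[1], site[2] = p, v, 0
            else:
                site[2] += 1
                if site[2] > patience:   # abandon an exhausted site
                    mem.remove(site)
            if v < best_val:
                best_point, best_val = p[:], v
        if t % nest_period == 0:         # move the nest, reset memories
            nest = best_point[:]
            ants = [[] for _ in range(n_ants)]
    return best_point, best_val
```

On a smooth test function, the successive nest relocations behave like a multiple restart that contracts around the best point found.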
It should be noted that, of these four algorithms, two are in fact more or less hybridized with an evolutionary algorithm, and a third does not use the "classic" ant colony metaphor. Generally, research in this domain is still at an early stage; the proposed algorithms are not fully mature, and thus not yet really competitive with the established metaheuristic classes for continuous problems.
A problem is known as dynamic if it varies with time, i.e. if the optimal solution does not have the same characteristics throughout the optimization. Such problems give rise to specific difficulties, owing to the fact that the best solution must be approached as closely as possible at each instant of time.
The first application of ant colony algorithms to dynamic problems was proposed for optimizing the routing of telephone networks [Schoonderwoerd et al., 1996]. However, the proposed algorithm was not intensively studied in the literature, and it is hence difficult to draw lessons from it. Another application to similar problems was proposed by White et al. [White et al., 1998, Bieszczad and White, 1999]. An application to the routing of Internet networks (see figure 4.12) has also been presented: the AntNet algorithm [Di Caro and Dorigo, 1997]. This metaheuristic has been the subject of several studies (see in particular [Di Caro and Dorigo, 1998]) and seems to have proven its effectiveness on several test problems.
Fig. 4.12. The network example used to test the AntNet algorithm: NFSNET
(each edge represents an oriented connection).
Very often, metaheuristics originate from metaphors drawn from nature, in particular from biology. The ant colony algorithms are inspired by the behavior of social insects, but they are not the only algorithms to have evolved from the study of animal behavior (ethology). For example, particle swarm optimization ("Particle Swarm Optimization" [Eberhart et al., 2001], see 5.6) originated from an analogy with the collective behaviors of animals.
Metaheuristics form a wide class of algorithms, in which many concepts are found across several categories. Moreover, the many variations within a given category make the borders between different metaheuristics fuzzy. An example of the overlap between two metaheuristics is the term "swarm intelligence", which is used to describe the operating mode not only of the ant colony algorithms [Bonabeau et al., 1999], but also of other algorithms such as "particle swarms" [Eberhart et al., 2001] (see section 5.6 for a detailed description). Generally, this term refers to any system (normally artificial) having self-organization properties (similar to those described in section 4.2.1) that is able to solve a problem by using only interactions at the individual level.
A broader attempt at a unified presentation has also been made: the framework of "adaptive memory programming" [Taillard et al., 1998] (see section 7.5), which covers in particular ant colonies, tabu search and evolutionary algorithms. This framework insists on the use of a form of memory in these algorithms, and on the use of intensification and diversification phases (see section 4.5.3 for this aspect of artificial ant colonies).
Thus several metaheuristics can be brought close to the ant colony algorithms and vice versa. One feature that strongly supports this overlap is the fact that ant colony algorithms are very often effective only when paired with a local search (see section 4.5.4). Hence, from a certain point of view, an ant colony algorithm strongly resembles GRASP [Feo and Resende, 1995, Resende, 2000] ("Greedy Randomized Adaptive Search Procedures", see section 5.8) with a specific construction phase.
Similarly, the "Cross-Entropy" method [Rubinstein, 1997, Rubinstein, 2001] (see section 5.9) has two phases: first generate a random sample, then change the parameters that generate this sample so as to obtain better performance at the next iteration. This method can also be considered close to the ant colony algorithms [Rubinstein, 2001], and some works have even aimed at using the two methods jointly [Wittner and Helvik, 2002].
One can also point out the similarities of these algorithms with particle swarm optimization [Kennedy and Eberhart, 1995, Eberhart et al., 2001] (described in section 5.6), which also strongly exploits the attributes of distributed systems. Here, large groups of particles traverse the search space with a displacement dynamic that makes them gather together.
Another very interesting overlap of the ant colony algorithms is with the estimation of distribution algorithms (EDA, [Larranaga and Lozano, 2002], described in section 5.7). Indeed, these algorithms, originally derived from evolutionary algorithms, are based on the fact that at each iteration the individuals in the search space are chosen at random according to a distribution built from the preceding individuals. Schematically, the better an individual, the higher the probability of creating other individuals in its neighborhood. The similarity of these EDA algorithms to the ACO algorithms is remarkable [Monmarché et al., 1999].
One can thus draw a parallel between evolutionary algorithms (see chapter 3) and ant colonies, both of which use a population of "agents" selected on the basis of memory-driven or probabilistic procedures. One can also recall the idea, supported by some biologists, that the phenomenon of self-organization has an important role to play in the evolutionary processes [Camazine et al., 2000]. . . which the evolutionary algorithms take as a starting point.
A newer approach, less related to the metaheuristics, consists in considering a particular class of ant colony algorithms (the class called "Ant Programming") situated between optimal control theory and reinforcement learning [Birattari et al., 2002].
Many interactions and overlaps thus exist, and the relations between evolutionary algorithms, estimation of distribution algorithms and ant colonies confirm that each can shed light on the characteristics of the others. It is thus difficult to study the ant colony metaheuristic as a homogeneous, stand-alone algorithm forming a class separate from the others. However, the power of the metaphor and the combination of a whole group of relatively well-known characteristics (see section 4.5) make it possible to clarify its definition.
4.7 Conclusion
The metaheuristic inspired by ant colonies is beginning to be well described and formalized. The entire set of properties required for its description is known: probabilistic construction of a solution (by addition of components, in the ACO formalism), heuristics specific to the problem, use of a form of indirect memory, and a structure comparable to that of a self-organized system. The ideas underlying the ant colony algorithms are powerful; one can describe this metaheuristic as a distributed system in which the interactions between basic components, by means of stigmergic processes, facilitate the emergence of a coherent global behavior, so that the system is able to solve difficult optimization problems.