Swarm Intelligence (SI) is a relatively new paradigm being applied in a host of research
settings to improve the management and control of large numbers of interacting entities, such
as communication, computer, and sensor networks, satellite constellations, and more. Attempts
to take advantage of this paradigm and mimic the behavior of insect swarms, however, often
lead to many different implementations of SI. Rather vague notions of what constitutes self-organized
behavior lead to ad hoc approaches that make it difficult to ascertain just what
SI is, assess its true potential, and more fully take advantage of it.
This work provides a set of general principles for SI research and development. A precise
definition of self-organized behavior is given and provides the basis for a more axiomatic
and logical approach to research and development, as opposed to the more prevalent ad hoc
use of SI concepts. The advances and applications of Swarm Intelligence are also dealt
with in this work.
CHAPTER ONE
INTRODUCTION
1.1 Background of the study
Swarm Intelligence (SI) is the property of a system whereby the collective behaviors of agents
interacting locally with their environment cause coherent functional global patterns to emerge. SI
provides a basis with which it is possible to explore distributed problem solving without
centralized control or the provision of a global model. One of the core tenets of SI work is that
a decentralized, bottom-up approach to controlling a system is often much more effective than a
traditional, centralized approach. A group that performs tasks effectively using only a small set
of rules for individual behaviour exhibits swarm intelligence; Swarm Intelligence is thus a property
of systems of non-intelligent agents exhibiting collectively intelligent behaviour. In Swarm
Intelligence, two individuals interact indirectly when one of them modifies the environment and
the other responds to the new environment at a later time. For years scientists have been studying
insects like ants, bees, and termites. The most amazing thing about social insect colonies is
that there is no individual in charge. Yet social insects form highways and other remarkable
structures such as bridges, chains, and nests, and can perform complex tasks, by self-organizing
through direct and indirect interactions. The characteristics of social insects are (Bonabeau, 1999):
1. Flexibility
2. Robustness
3. Self-Organization
1.3
Swarm intelligence is a relatively new discipline that deals with the study of self-organizing
processes both in nature and in artificial systems. Researchers in ethology and animal behavior
have proposed many models to explain interesting aspects of social insect behavior such as self-organization
and shape formation. Recently, algorithms inspired by these models have been
proposed to solve difficult computational problems.
An example of a particularly successful research direction in swarm intelligence is ant colony
optimization, the main focus of which is on discrete optimization problems. Ant colony
optimization has been applied successfully to a large number of difficult discrete optimization
problems. Another interesting approach is that of particle swarm optimization, which focuses on
continuous optimization problems. Here too, a number of successful applications can be found in
the recent literature. Swarm robotics is another relevant field. Here, the focus is on applying
swarm intelligence techniques to the control of large groups of cooperating autonomous robots.
1.4
This study is important to the field of science and engineering, as it will improve the following
applications of swarm intelligence:
The term "swarm intelligence" was introduced by Beni and Wang (1989) in the context of
cellular robotic systems, where it is described as "systems of non-intelligent robots exhibiting
collectively intelligent behaviour evident in the ability to unpredictably produce 'specific'
(i.e., not in a statistical sense) ordered patterns of matter in the external environment".
Craig Reynolds' flocking system is one of the most influential swarm intelligence examples.
Individuals in the flocking system are called boids. They have three layers of motion: action
selection, steering, and locomotion. Three simple steering behaviours (aggregation, separation,
and alignment) can combine emergently into a result that looks like a flock of birds or a
school of fish. In his paper, other steering behaviours such as obstacle avoidance, seeking, and
foraging are also added to the system to enhance the overall realism (Reynolds, 1999).
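The three steering behaviours can be sketched in code. The following Python fragment is a minimal illustration and not Reynolds' implementation: the weights, the neighbourhood radius, and the speed limit are assumed values chosen for readability.

```python
def limit(vec, max_len):
    """Clamp a 2-D vector to max_len, preserving its direction."""
    length = (vec[0] ** 2 + vec[1] ** 2) ** 0.5
    if length > max_len:
        return (vec[0] * max_len / length, vec[1] * max_len / length)
    return vec

def steer(boid, others, radius=50.0):
    """One steering step for a single boid.

    boid and others are dicts with 'pos' and 'vel' 2-tuples. The weights
    (0.01, 0.05, 0.05), the radius, and the speed limit are illustrative
    assumptions, not values from Reynolds' paper.
    """
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

    near = [n for n in others if 0 < dist(n['pos'], boid['pos']) < radius]
    if not near:
        return boid['vel']
    k = len(near)
    # Aggregation (cohesion): steer toward the neighbours' average position.
    cx = sum(n['pos'][0] for n in near) / k - boid['pos'][0]
    cy = sum(n['pos'][1] for n in near) / k - boid['pos'][1]
    # Alignment: steer toward the neighbours' average velocity.
    ax = sum(n['vel'][0] for n in near) / k
    ay = sum(n['vel'][1] for n in near) / k
    # Separation: steer away from neighbours to avoid crowding.
    sx = sum(boid['pos'][0] - n['pos'][0] for n in near)
    sy = sum(boid['pos'][1] - n['pos'][1] for n in near)
    vx = boid['vel'][0] + 0.01 * cx + 0.05 * ax + 0.05 * sx
    vy = boid['vel'][1] + 0.01 * cy + 0.05 * ay + 0.05 * sy
    return limit((vx, vy), 4.0)
```

Each call computes one boid's next velocity from purely local information: only neighbours within the radius influence the result.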
Individuals exchange only local information, directly or via the environment. The overall
behaviour of the system results from the interactions of individuals with each other and with
their environment; that is, the group behaviour self-organizes.
Modelling Swarm Behaviour
[Figure: a school of fish as an example of swarm behaviour. Source: Wikipedia.com/swarm-intelligence/fish-colony]
2.2 Example Algorithms of Swarm Intelligence:
1. Ant Colony Optimization
Ant colony optimization (ACO) is a probabilistic technique for solving computational problems
which can be reduced to finding good paths through graphs. In the real world, ants wander
randomly and, upon finding food, return to their colony while laying down pheromone trails. If
other ants find such a path, they are likely not to keep travelling at random, but instead to
follow the trail, returning and reinforcing it if they eventually find food that way. This
algorithm is inspired by the foraging behavior of ants.
1. The first ant finds the food source (F) via any way (a), then returns to the nest (N),
leaving behind a pheromone trail (b).
2. Ants indiscriminately follow the four possible ways, but the strengthening of the trail
makes the shortest route more attractive.
3. Ants take the shortest route; long portions of the other ways lose their pheromone trails.
In a series of experiments on a colony of ants with a choice between two unequal length
paths leading to a source of food, biologists have observed that ants tended to use the shortest
route. A model explaining this behavior is as follows:
1. An ant (called "blitz") runs more or less at random around the colony;
2. If it discovers a food source, it returns more or less directly to the nest, leaving in its path
a trail of pheromone;
3. These pheromones are attractive; nearby ants will be inclined to follow, more or less
directly, the track;
4. Returning to the colony, these ants will strengthen the route;
5. If there are two routes to reach the same food source then, in a given amount of time, the
shorter one will be traveled by more ants than the long route;
6. The short route will be increasingly enhanced, and therefore become more attractive;
7. The long route will eventually disappear because pheromones are volatile;
8. Eventually, all the ants have determined and therefore "chosen" the shortest route.
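The eight steps above can be condensed into a toy simulation of the two-path experiment. This Python sketch is illustrative only: the evaporation rate, the deposit rule, and the path lengths are assumed parameters, not measurements from the biological studies.

```python
import random

def simulate_two_paths(n_ants=1000, evaporation=0.05, seed=1):
    """Toy model of the two-path (double bridge) experiment.

    Two routes lead to the food source; the short one ends up dominating
    because each trip on it deposits more pheromone per unit of trail.
    """
    rng = random.Random(seed)
    pheromone = {'short': 1.0, 'long': 1.0}
    length = {'short': 1.0, 'long': 2.0}
    for _ in range(n_ants):
        # Choose a path with probability proportional to its pheromone level.
        total = pheromone['short'] + pheromone['long']
        path = 'short' if rng.random() < pheromone['short'] / total else 'long'
        # Deposit pheromone inversely proportional to the path length,
        # so the shorter route is reinforced more strongly per trip.
        pheromone[path] += 1.0 / length[path]
        # Pheromones are volatile: both trails evaporate a little.
        for p in pheromone:
            pheromone[p] *= (1.0 - evaporation)
    return pheromone

result = simulate_two_paths()
```

Because deposits favour the short route while evaporation affects both trails equally, the positive feedback loop settles on the shorter path, mirroring step 8.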
2. Particle Swarm Optimization
Particle swarm optimization (PSO) treats candidate solutions (hypotheses) as points in a solution
space. Hypotheses are plotted in this space and seeded with an initial velocity, as well as a
communication channel between the particles. Particles then move through the solution space,
and are evaluated according to some fitness criterion after each time step. Over time, particles
are accelerated towards those particles within their communication grouping which have better
fitness values. The main advantage of such an approach over other global minimization strategies
such as simulated annealing is that the large number of members that make up the particle swarm
makes the technique impressively resilient to the problem of local minima.
4. Stochastic Diffusion Search
Stochastic diffusion search (SDS) belongs to a family of swarm intelligence and naturally
inspired search and optimization algorithms which includes ant colony optimization, particle
swarm optimization, and genetic algorithms. It is an agent-based probabilistic global search and
optimization technique best suited to problems where the objective function can be decomposed
into multiple independent partial functions. Each agent maintains a hypothesis which is
iteratively tested by evaluating a randomly selected partial objective function parameterized by
the agent's current hypothesis.
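The test-and-diffuse cycle might look like the following Python fragment. This is a hedged sketch, not a canonical SDS implementation: the search space (integers 0 to 9), the agent count, and the restart rule for inactive agents are all assumptions made for brevity.

```python
import random

def sds_search(partial_tests, n_agents=50, n_iters=60, seed=3):
    """Minimal stochastic diffusion search sketch.

    partial_tests: one function per component of the objective; each takes
    a hypothesis and returns True if that component is satisfied.
    """
    rng = random.Random(seed)
    hyps = [rng.randrange(10) for _ in range(n_agents)]
    active = [False] * n_agents
    for _ in range(n_iters):
        # Test phase: each agent evaluates one randomly chosen partial test.
        for i in range(n_agents):
            active[i] = rng.choice(partial_tests)(hyps[i])
        # Diffusion phase: an inactive agent polls a random agent; if that
        # agent is active it copies its hypothesis, otherwise it restarts.
        for i in range(n_agents):
            if not active[i]:
                j = rng.randrange(n_agents)
                hyps[i] = hyps[j] if active[j] else rng.randrange(10)
    # Return the hypothesis held by the largest cluster of agents.
    return max(set(hyps), key=hyps.count)

# Hypothesis 7 satisfies every partial test; the swarm converges on it.
tests = [lambda h: h == 7, lambda h: h % 7 == 0 and h > 0]
best = sds_search(tests)
```

Agents holding a fully satisfying hypothesis stay active and are never resampled, so the cluster around the solution grows until it dominates the population.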
5. Gravitational Search Algorithm
Gravitational search algorithm (GSA) is constructed based on the law of gravity and the
notion of mass interactions. The GSA algorithm uses the theory of Newtonian physics, and its
searcher agents are the collection of masses. In GSA, we have an isolated system of masses.
Using the gravitational force, every mass in the system can see the situation of the other masses.
The gravitational force is therefore a way of transferring information between different masses.
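A minimal sketch of the force computation follows, assuming a one-dimensional system and a constant gravitational coefficient (the full GSA varies the coefficient over time and works in many dimensions):

```python
def gravitational_forces(positions, masses, g=1.0, eps=1e-9):
    """Net Newtonian attraction on each mass in a 1-D isolated system.

    Every mass 'sees' every other mass through the force it exerts; the
    1-D setting and the fixed coefficient g are simplifications.
    """
    n = len(positions)
    forces = [0.0] * n
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dist = abs(positions[j] - positions[i]) + eps  # avoid divide-by-zero
            direction = 1.0 if positions[j] > positions[i] else -1.0
            # Attraction grows with both masses and shrinks with distance.
            forces[i] += direction * g * masses[i] * masses[j] / dist ** 2
    return forces
```

In the full algorithm these forces give each searcher agent an acceleration (force divided by its inertial mass), which in turn updates its velocity and position at each iteration.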
Compare individual insects and swarms: if ants, with brains that weigh less than the
ink in this comma, can run efficient supply chains, why do humans have such trouble?
The question has dogged Eric Bonabeau for years. The 34-year-old Frenchman, a scientist and
student of the chaos-theory branch of complexity science, has spent nearly a decade studying
the organisation, co-ordination, and work habits of social insects. Ant colonies are so efficient,
Bonabeau deduced, because they lack centralised control; no single ant boss runs the business.
Bonabeau took this notion a step further in his 1999 book, Swarm Intelligence, in which he
described how the study of an organisation's ants, its myriad individual parts, could help
businesses find solutions to problems that elude ordinary top-down analysis. (Bonabeau 1999)
For example, how the late arrival of a single package can derail an entire supply chain or why
adding a lane to a highway can often worsen traffic jams.
It all seemed great on paper, but until recently Bonabeau had not had the chance to prove that his
theories would work in practice. When hired by a client company, the consultancies applying
Bonabeau's ideas, notably BiosGroup of Santa Fe, New Mexico, where Bonabeau started, and
Icosystem, the firm he founded after leaving BiosGroup, created algorithms to generate
super-realistic simulations of an operation's moving parts. In other words, they plug in virtual
employees, products, and customers; order them to do different tasks; and watch what happens.
Bonabeau's agent-based modelling techniques now help to tighten supply chains, speed up the
drugs-to-market process, and even design better unmanned drones for the Pentagon.
Bonabeau's virtual ants, for example, are crawling all over Air Liquide, the French industrial gas
giant. The company supplies liquid oxygen, nitrogen and other gases to some 10,000 customers
from over 300 sources through 30 depots, using 200 trucks and 200 trailers. It is a supply
chain that can create 3 trillion daily combinations among all its constituent parts. Twenty-two
full-time logistics analysts took nearly half a day to generate a delivery schedule that would move
every product to its destination on time.
Working with BiosGroup, Air Liquide chose to run agent-based simulations to see how they
could draw up more efficient delivery routines. Air Liquide programmed its trucks, like ants, to
find the shortest routes or to follow the equivalent of pheromone trails, which means that
subsequent trucks were to retrace shortcuts that others found. Then, using reams of data from Air
Liquide's business operations, BiosGroup engineers retested their computer simulations until
they found the most efficient combination of rules. The result: just one Air Liquide analyst is
necessary to create daily shipping and production schedules across its numbingly complex
supply chain in about two hours.
Southwest Airlines in the US, too, has benefited from Bonabeau's ideas. The airline has
simulated the various parts of its cargo-shipping business: its aircraft, destinations, types of
cargo, ground personnel, and so on. The simulations showed what Southwest's logistics
managers had suspected all along: it is sometimes more efficient not to take the shortest route, as
long as fewer hands actually touch the cargo in the process. Southwest claims that it saves $2m a
year in labour costs because of this insight.
Agent-based modelling has evolved beyond ant behaviour, and so too has Bonabeau's work.
After leaving BiosGroup in 2000, he soon founded Icosystem and began to apply his algorithms
to new business applications. To move Eli Lilly's drugs to market faster, for example, engineers
built a model of the company's sprawling clinical-development processes, using people, drugs,
regulatory hurdles, and other individual parts to simulate this laborious task. By finding and
applying the right rules, Bonabeau hopes to speed up Eli Lilly's development time by as much as
80 per cent. Insurance firms are also using these methods to better understand how people choose
their own health plans.
Even the US Office of Naval Research is using these techniques to help it design smarter
unmanned aerial vehicles (UAVs). Because a centralised command on the ground controls the
UAVs, they tend not to work well as a team. They bunch up, miss large areas, or fail to respond
to enemy threats. So Bonabeau and his colleagues have created simulations in which virtual
drones follow hundreds of different rules: stay away from other drones, fly to areas no other
drones have covered, and so on. Initially, the goal is to help drones communicate directly with
each other. But by 2020, Pentagon planners hope to create entire swarms of unmanned vehicles
that communicate and attack in concert.
Despite the costs and complexities, more organisations aspire to emulate ant colonies. IBM is
experimenting with its own agent-based modelling programs for its future e-commerce software.
The EU is also funding a three-year modelling research project at the Santa Fe Institute, the
hotbed of complexity science.
Beyond modelling, Bonabeau says, the ultimate goal of the technology is to solve problems
before they happen, not unlike autonomic computing systems that seek out and
fix software glitches without any human intervention. But as Bonabeau admits, advancing to the
next phase means that he will have to do better than just deliver more efficient shipping routines
for his customers.
CHAPTER THREE
3.1 HOW SWARM INTELLIGENCE IMPACTS US
3.1.1 Positive impact on humans
It helps in the reduction of manpower in companies and firms, as swarm robots will be
designed to do the jobs. Examples of these jobs include the assembly of the various parts of an
automobile.
Industrial automated robots have the capacity to dramatically improve product quality.
Applications are performed with precision and high repeatability every time. This level of
consistency, together with freedom from sleep and vacations, gives a robot the potential to
produce more than a human worker.
Robots increase workplace safety. Workers are moved to supervisory roles where they no
longer perform dangerous jobs themselves.
3.1.2 Negative impact on humans
The initial investment to integrate automated robotics into a business is significant,
especially when business owners limit their purchases to new robotic equipment.
The cost of robotic automation should be calculated in light of a business's greater
financial budget. Regular maintenance needs can take a financial toll as well.
Incorporating industrial robots does not guarantee results. Without planning, companies
can have difficulty achieving their goals.
Employees will require training to program and interact with the new robotic equipment.
3.1.3
Robots rarely make mistakes and are more precise than human workers.
They can work at a constant speed with no breaks, days off, or holiday time.
They can work in hazardous conditions, such as poor lighting, toxic chemicals, or tight
spaces.
Robots increase worker safety by preventing accidents since humans are not performing
risky jobs.
Work cells provide safety features, separating the worker from harm's way.
They also reduce the amount of wasted material due to their accuracy.
Robots save companies money in the long run through quick ROIs (returns on investment),
fewer worker injuries (reducing or eliminating workers' compensation claims), and lower
material usage.
3.1.4
(i)
(ii) Each robot needs to know what the other robots are doing; if this is not clear, robots can
compete instead of cooperate.
(iii) Overall system cost: using more than one robot can make the economic cost bigger. This
is ideally not the case for swarm-robotic systems, which intend to use many cheap and simple
robots whose total cost is below that of a more complex single robot carrying out the same task.
b) NEST BUILDING
The nests built by social insects can be enormous when compared to a single individual, which
can measure as little as a few millimeters. Scientists have been studying the coordination
mechanisms that allow the construction of these structures and have proposed probabilistic
models exploiting the insects' behavior. Some of these models are implemented in computer
programs to produce simulated structures that recall the morphology of the real nests.
c) FLOCKING AND SCHOOLING IN BIRDS AND FISH
Scientists have shown that these elegant swarm-level behaviors can be understood as the
result of a self-organized process where no leader is in charge and each individual bases its
movement decisions solely on locally available information: the distance, perceived speed, and
direction of movement of neighbours. These studies have inspired a number of computer
simulations that are now used in the computer graphics industry for the realistic reproduction of
flocking in movies and computer games.
Particle swarm optimization is inspired by social behaviors in flocks of birds and schools of
fish. In practice, in the initialization phase each particle is given a random initial position and
an initial velocity. The position of the particle represents a solution of the problem and
therefore has a value, given by the objective function. At each iteration of the algorithm, each
particle moves with a velocity that is a weighted sum of three components: the old velocity, a
velocity component that drives the particle towards the location in the search space where it
previously found the best solution so far, and a velocity component that drives the particle
towards the location in the search space where the neighbor particles have found the best
solution so far.
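The weighted-sum velocity update described above can be written out as a short optimizer. This is a generic textbook-style sketch on an assumed benchmark function; the inertia and attraction coefficients (w, c1, c2) are common defaults, not values prescribed by the text.

```python
import random

def pso_minimize(f, dim=2, n_particles=20, n_iters=100,
                 w=0.7, c1=1.5, c2=1.5, seed=5):
    """Minimal particle swarm optimizer for f over the box [-5, 5]^dim."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(n_particles)]
    pbest = [p[:] for p in pos]              # each particle's own best position
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # swarm-wide best
    for _ in range(n_iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Velocity = old velocity + pull toward own best
                #            + pull toward the neighbourhood (here: global) best.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

sphere = lambda p: sum(x * x for x in p)   # simple convex test function
best, best_val = pso_minimize(sphere)
```

Here every particle communicates with the whole swarm; restricting `gbest` to a local neighbourhood gives the ring and grouping topologies mentioned earlier.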
f) SWARM BASED NETWORK MANAGEMENT
Schoonderwoerd et al. proposed Ant-based Control (ABC), an algorithm for routing and
load balancing in circuit-switched networks; Di Caro and Dorigo proposed AntNet, an algorithm
for routing in packet-switched networks. While ABC was a proof-of-concept, AntNet, which is
an ACO algorithm, was compared to many state-of-the-art algorithms and its performance was
found to be competitive, especially in situations of highly dynamic and stochastic data traffic as
can be observed in Internet-like networks. An extension of AntNet has been successfully applied
to ad-hoc networks.
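In ant-based routing, each node keeps a probabilistic routing table that backward ants reinforce. The fragment below is a simplified sketch of that idea, not the actual AntNet update rule, whose reinforcement depends on measured trip times; the constant reward used here is an assumption.

```python
def reinforce(table, node, dest, neighbour, reward=0.1):
    """Reinforce one next-hop entry in an AntNet-style routing table.

    table[node][dest] maps each neighbour to the probability of choosing
    it as the next hop toward dest. A backward ant that travelled via
    `neighbour` raises that entry, then the row is renormalized so the
    probabilities still sum to one.
    """
    row = table[node][dest]
    row[neighbour] += reward
    total = sum(row.values())
    for k in row:
        row[k] /= total
    return row
```

Repeated reinforcement of next hops that lie on fast routes, combined with the implicit decay of the competing entries during renormalization, plays the same role as pheromone deposit and evaporation in ACO.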
g) COOPERATIVE BEHAVIOUR IN SWARMS OF ROBOTS
There are a number of swarm behaviors observed in natural systems that have inspired
innovative ways of solving problems by using swarms of robots. This is what is called swarm
robotics. In other words, swarm robotics is the application of swarm intelligence principles to the
control of swarms of robots. As with swarm intelligence systems in general, swarm robotics
systems can have either a scientific or an engineering flavour. Clustering in a swarm of robots
was mentioned above as an example of artificial/scientific system.
3.2.1
The most prominent practice of swarm intelligence is in the area of swarm robotics.
Swarm robotics is a new approach to the coordination of multi-robot systems which consist
of large numbers of mostly simple physical robots. It is supposed that a desired collective
behavior emerges from the interactions between the robots and the interactions of the robots
with the environment. This approach emerged from the field of artificial swarm intelligence,
as well as from biological studies of insects, ants, and other fields in nature where swarm
behaviour occurs.
The use of robots for tackling dangerous tasks is clearly appealing as it eliminates or reduces
risks for humans. The dangerous nature of these tasks implies a high risk of losing robots.
Therefore, a fault-tolerant approach is required, making dangerous tasks an ideal application
domain for robot swarms. Examples of dangerous tasks that could be tackled using robot
swarms are demining, search and rescue, and cleaning up toxic spills.
Potential applications for robot swarms are those in which it is difficult or even impossible to
estimate in advance the resources needed to accomplish the task. For instance, allocating
resources to manage an oil leak can be very hard because it is often difficult to estimate the
oil output and to foresee its temporal evolution. In these cases, a solution is needed that is
scalable and flexible. A robot swarm could be an appealing solution: robots can be added or
removed in time to provide the appropriate amount of resources and meet the requirements of
the specific task. Examples of tasks that might require an a priori unknown amount of
resources are search and rescue, tracking, and cleaning.
Another potential application domain for swarm robotics is tasks that have to be
accomplished in large or unstructured environments, in which there is no available
infrastructure that can be used to control the robots, e.g., no available communication
network or global localization system. Robot swarms could be employed for such
applications because they are able to act autonomously without the need of any infrastructure
or any form of external coordination. Examples of tasks in unstructured and large
environments are underwater or extraterrestrial planetary exploration, surveillance, demining,
and search and rescue.
Some environments might change rapidly over time. For instance, in a post-earthquake
situation, buildings might collapse, thereby changing the layout of the environment and
creating new hazards. In these cases, it is necessary to adopt solutions that are flexible and
can react quickly to events. Swarm robotics could be used to develop flexible systems that
can rapidly adapt to new operating conditions. Examples of tasks in environments that change
over time are patrolling, disaster recovery, and search and rescue.
3.3
or to change beyond a narrow range of options. Countless novel possibilities exist in the
exponential combinations of many interlinked individuals.
Boundless -- Plain old linear systems can sport positive feedback loops -- the screeching
disordered noise of a PA microphone, for example. But in swarm systems, positive
feedback can lead to increasing order. By incrementally extending new structure beyond
the bounds of its initial state, a swarm can build its own scaffolding with which to build
further structure. Spontaneous order helps create more order. Life begets more life, wealth
creates more wealth, information breeds more information, all bursting the original
cradle. And with no bounds in sight.
Swarm systems do not reckon individuals, so individual variation and imperfection can be
allowed. In swarm systems with heritability, individual variation and imperfection lead to
perpetual novelty, or what we call evolution.
Non-optimal -- Because swarm systems are highly redundant and have no central control,
they tend to be inefficient. The allocation of resources is not efficient, and duplication of
effort is always rampant. Swarms can dampen inefficiency, but never to the degree that a
linear system can.
Non-immediate -- Linear systems tend to be very direct: flip a switch and the light comes
on. Simple collective systems tend to operate simply. But complex swarm systems with
rich hierarchies take time. The more complex the swarm, the longer it takes to shift states.
Each hierarchical layer has to settle down, peripheral players have to come to rest, and a
multitude of autonomous agents need to become acquainted with each other.
CHAPTER FOUR
4.2 SUMMARY / CONCLUSION
The idea of swarm behavior may still seem strange because we are used to relatively linear
bureaucratic models. In fact, this kind of behavior characterizes natural systems ranging from
flocks of birds to schools of fish. Humans are more complex than ants or fish and have far more
capacity for novel behavior, so some unexpected results are likely; for this reason, leading
scientists and organizations will further pursue swarm approaches. Swarm Intelligence provides
a distributed approach to problem solving, mimicking the very simple natural process of
cooperation. According to my survey, many problems that had previously been solved using
other AI approaches, such as genetic algorithms and neural networks, are also solvable by this
approach. Due to its simple architecture and adaptive nature, as seen in ACO, it is likely to be
seen much more in the future.
REFERENCES
T. Blackwell and P. J. Bentley. Dynamic search with charged swarms. In Proceedings of the
Workshop on Evolutionary Algorithms for Dynamic Optimization Problems (EvoDOP).
M. Blesa and C. Blum. Ant colony optimization for the maximum edge-disjoint paths
problem. In G. R. Raidl, S. Cagnoni, J. Branke, D. W. Corne, R. Drechsler, Y. Jin, et al.,
editors, 2004.
C. Blum. Beam-ACO: Hybridizing ant colony optimization with beam search. An
application to open shop scheduling. Computers & Operations Research, 32(6):1565-1591,
2005.