International Journal of Scientific & Engineering Research, Volume 2, Issue 7, July 2011

ISSN 2229-5518

Ant Colony Optimization

Utkarsh Jaiswal, Shweta Aggarwal

Abstract- Ant colony optimization (ACO) is a natural computation method that mimics the foraging behavior of ant colonies and has proved to be a very effective combinatorial optimization technique. Although proposed relatively recently, the algorithm is robust, has an inherently distributed computational mechanism, combines easily with other methods, and has performed well on complex optimization problems. An ant colony optimization approach for the resource-constrained project scheduling problem (RCPSP) is presented. The traveling salesman problem (TSP) is used as an example to introduce the basic principle of ACO, and several improved algorithms that address the shortcomings of ACO are outlined.

Index Terms- Metaheuristic, Pheromones, Resource-constrained project scheduling problem (RCPSP), Stigmergy, Swarm Intelligence

—————————— • ——————————

1 INTRODUCTION

1.1 History of ACO

ACO was proposed by Marco Dorigo in 1992 in his PhD thesis. It is a probabilistic technique for solving computational problems that can be reduced to finding good paths through graphs, inspired by the behavior of ants finding paths from the colony to food.
Some of the workshops that have been held on ACO are listed below:
ANTS’98, the first international workshop on ACO, was held in 1998. It was the first event entirely devoted to Ant Colony Optimization and, more generally, to algorithms inspired by the observation of ant colony behavior.

ANTS’2000, the second international workshop, included models of the organization and functioning of real ant colonies that could inspire new algorithmic approaches.

ANTS’2002, the third event, was based solely on ACO and gave researchers in both real ant behavior and ant colony optimization an opportunity to meet.

ANTS’2004, the fourth event on ACO, included tutorials on ACO.

ANTS’2006 discussed swarm intelligence, a concept within artificial intelligence.

ANTS’2008 presented theoretical and experimental research on swarm robotics systems and applications of swarm intelligence.

ANTS’2010, the latest event on ACO, covered behavioral models of social insects and swarm intelligence.

1.2 TECHNOLOGY USED IN ACO:

Ant colony optimization is based on the technique known as Swarm Intelligence, which is a part of Artificial Intelligence.
Artificial Intelligence is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence. It is the area of computer science focused on creating machines that can engage in behaviors that humans consider intelligent.
Artificial Intelligence has been widely used in many fields, including finance, medicine, stock trading, robot control, scientific discovery, aviation, and online and telephone customer service. Some computer programs used to perform AI tasks are designed to manipulate symbolic information at extremely high speeds in order to compensate for their partial lack of human knowledge and selectivity. Such programs are usually called "expert systems."
The task domain of Artificial Intelligence contains two types of tasks, namely:
1. Mundane tasks- Mundane tasks correspond to the following AI problems:

Planning: Ability to decide on a good sequence of actions to achieve our goals.

Vision: Ability to make sense of what we see.


Robotics: The ability to move and act in the world, possibly in response to new perceptions.

Natural language: Ability to communicate with others in any human language.

2. Expert tasks- These require specialized skills and training and include-

Medical diagnosis.

Equipment repair.

Computer configuration.

Financial planning.

1.2.1 SWARM INTELLIGENCE:

Amorphous computing consists of a multitude of interacting computers with modest computing power and memory, and modules for intercommunication.
These collections of devices are known as swarms. The desired coherent global behavior of the system emerges from the local interactions between the individual agents. The global behavior of these vast numbers of unreliable agents is resilient to a small fraction of misbehaving agents and to noisy environments. This makes them highly useful for sensor networks, MEMS, internet nodes, etc. Swarm intelligence may be derived from the randomness, repulsion and unpredictability of the agents, thereby resulting in diverse solutions to the problem. There are no known criteria to evaluate swarm intelligence performance. Swarm intelligence relies upon stigmergic principles in order to solve complex problems using only simple agents.
Swarm Intelligence (SI) is the property of a system whereby the collective behaviors of (unsophisticated) agents interacting locally with their environment cause coherent functional global patterns to emerge. SI provides a basis with which it is possible to explore collective (or distributed) problem solving without centralized control or the provision of a global model. Swarm intelligence is the collective behavior of decentralized, self-organized systems, natural or artificial. The concept is employed in work on artificial intelligence. Since the early 1990s, a significant amount of work has been done using social insect-inspired algorithms to solve both ‘toy’ and ‘real’ problems.

1.2.1.1 BRIEF HISTORY OF SWARM INTELLIGENCE:

Swarm intelligence was actually first implemented by the earliest living organisms on our planet:
Bacteria- Bacteria are single-celled organisms that use swarm intelligence to communicate and coordinate with one another through a chemical language and specialized enzymes and receptors. Bioluminescence, for example, works like this: bacteria can detect how many conspecifics are nearby and start to emit light only if there are many of them very close to each other; otherwise there is no light. Another example of swarm intelligence in bacteria is their coordinated attacks on other organisms, where a single bacterium would have no chance of success. Furthermore, bacteria can communicate across species. Recent research projects discovered that bacteria have two different sets of enzymes and receptors: one for intraspecies and one for interspecies communication. For more information, see the inspiring TED talk by Bonnie Bassler: The secret, social lives of bacteria.
In the 1950s, scientists started to conduct research into how social animals and insects such as bees, ants, and termites communicate and coordinate themselves. One of the first researchers in this field was the French biologist Grassé, who identified a structured approach in the seemingly chaotic nest-building process of termites. This area of research continued as a niche until the 1970s, when ARPA financed research projects on the first multi-agent systems with the Hearsay-II blackboard project. In this decade the "Actor Model" was invented by Carl Hewitt, Peter Bishop and Richard Steiger, providing a mathematical model that treats individual actors as the smallest parts of a system. In this model, actors could respond to messages by:

Making a decision

Creating new actors

Sending out messages

Preparing an answer for the next message

The Actor Model has become the foundation of many algorithms and frameworks for the development of multi-agent systems.
The Multi-Agent Systems Lab at the University of Massachusetts at Amherst held the first workshop on "distributed artificial intelligence" in 1980, officially establishing this new area of science and research. A special edition of the IEEE Transactions on Systems, Man, and Cybernetics on this topic was published in 1981, sparking even broader interest in the scientific community. Scientists first successfully simulated the nest-building behavior of termites in 1991, implementing a number of simple algorithms (Deneubourg 1992).

1.2.1.2 EVALUATION OF SWARM INTELLIGENCE:

Although many studies on swarm intelligence have been presented, there are no general criteria for evaluating a swarm intelligent system's performance. Researchers have proposed measures of fault tolerance and local superiority as indices and compared two swarm intelligent systems via simulation with respect to these two indices. There is a significant need for more analytical studies.
In biology, researchers have proposed "continuum models" for swarm behavior based on nonlocal interactions. The model consists of integro-differential advection-diffusion equations with convolution terms that describe long-range attraction and repulsion. They found that if density dependence in the repulsion term is of a higher order than in the attraction term, then the swarm has a constant interior density with sharp edges, as observed in biological examples. They also performed a linear stability analysis of the swarm edges.

1.2.1.3 PRINCIPLES OF SWARM INTELLIGENCE:

The basic principles of swarm intelligence are:
1. Self-Organization is based on:

Positive feedback (amplification)

Negative feedback (for balancing)

Amplification of fluctuations (random walks, errors)

Multiple interactions

2. Stigmergy- Indirect communication via interaction with the environment.
The objective here is to provide a comprehensive assessment of the state of the art in swarm intelligence, specifically the role of stigmergy in distributed problem solving. In order to do this, working definitions have to be provided along with the essential properties of systems that are swarm-capable, i.e., systems in which problem solving is an emergent property of a system of simple agents.
The principle of stigmergy implies the interaction of simple agents through a common medium with no central control. It also means that querying individual agents tells one little or nothing about the emergent properties of the system. Consequently, simulation is often used to understand the emergent dynamics of stigmergic systems. Stigmergic systems are typically stochastic in nature, with individual actions chosen probabilistically from a limited behavioral repertoire. Actions performed by individual agents change the nature of the environment; for example, a volatile chemical called a pheromone is deposited. This chemical signal is sensed by other agents and results in a modified probabilistic choice of future actions.
The advantages of such a system are clear. Because multiple agent actions are required for a solution to emerge, the activity of any individual agent is not critical: stigmergic systems are resilient to the failure of individual agents and, more importantly, react extremely well to dynamically changing environments. Optimal use of resources is often a significant consideration in designing algorithms.
Another stigmergic system, the raid army ant model, efficiently and effectively forages for food using pheromone-based signaling. In a raid army ant system, agents develop a foraging front that covers a wide path, leading to extremely effective food finding. This model has military value in that it could potentially be exploited as a mechanism for searching for land mines, a problem that, tragically, is all too common in parts of the world.
A third stigmergic model of military interest is that of flocking or aggregation. Here, large numbers of simple agents can be made to move through a space filled with obstacles (and potentially threats) without recourse to central control. The environmental signals in this case are the positions and velocities of the agents themselves.

1.2.1.4 ADVANTAGES AND DISADVANTAGES:

There are several advantages of Swarm Intelligence. Some are:

Agents are not goal directed; they react rather than plan extensively.

Agents are simple, with minimal behavior and memory.

Control is decentralized; there is no global information in the system.

Failure of individual agents is tolerated; emergent behavior is robust with respect to individual failure.

Agents can react to dynamically changing environments.

Direct agent interaction is not required.

TABLE 1. ADVANTAGES OF SWARM INTELLIGENCE
Swarm Intelligence also has some disadvantages. They are:

Collective behavior cannot be inferred from individual agent behavior. This implies that observing single agents will not necessarily allow swarm-defeating behavior to be chosen. (This can be viewed as an advantage too from an aggressive point of view).

Individual behavior looks like noise as action choice is stochastic.

Designing swarm-based systems is hard. There are almost no analytical mechanisms for design.

Parameters can have a dramatic effect on the emergence (or not) of collective behavior.

TABLE 2. DISADVANTAGES OF SWARM INTELLIGENCE



This concludes the overview of the technology underlying Ant Colony Optimization.

2.1 WHY ONLY ANTS FOR ACO?

Why are ants so interesting?

Ants solve complex tasks by simple local means.

Ants’ productivity is greater than the sum of their individual activities.

Ants are masters of search and exploitation.

Which mechanisms do ants rely on?

Cooperation and division of labor.

Adaptive task allocation.

Work stimulation by cultivation.

Pheromones.

2.2 NATURAL BEHAVIOR OF ANTS:

The foraging behavior of real ants can be summarized in the following stages (a simulation sketch follows the list):
(A) Real ants follow a path between nest and food source.
(B) An obstacle appears on the path: Ants choose whether to turn left or right with equal probability.
(C) Pheromone is deposited more quickly on the
shorter path.
(D) All ants have chosen the shorter path.
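As a minimal illustration only (not part of the original paper), the following Python sketch simulates this positive-feedback loop for two competing branches between nest and food; the branch lengths, number of ants, evaporation rate, and deposit amount are assumed parameters chosen for the example.

    import random

    def double_bridge(n_ants=1000, short=1.0, long=2.0, evaporation=0.02, deposit=1.0):
        """Illustrative two-branch (nest-to-food) simulation.
        The shorter branch is reinforced more strongly per trip, so later
        ants choose it with increasing probability (stages B to D above)."""
        tau = {"short": 1.0, "long": 1.0}      # initial pheromone on each branch
        choices = {"short": 0, "long": 0}
        for _ in range(n_ants):
            # Probabilistic branch choice, proportional to pheromone levels
            p_short = tau["short"] / (tau["short"] + tau["long"])
            branch = "short" if random.random() < p_short else "long"
            choices[branch] += 1
            # Evaporation on both branches, then a deposit inversely
            # proportional to the length of the branch just traversed
            for b in tau:
                tau[b] *= (1.0 - evaporation)
            length = short if branch == "short" else long
            tau[branch] += deposit / length
        return choices, tau

    print(double_bridge())   # most ants typically end up on the shorter branch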
"ACO is characterized as a policy search strategy aimed at learning the distributed parameters (called pheromone variables in accordance with the biological metaphor) of the stochastic decision policy which is used by so-called ant agents to generate solutions."
The premise of ACO is fairly simple: in nature, ant colonies rely on a complex system of fairly unsophisticated autonomous agents that individually cooperate to provide for the basic needs of the collective. ACO itself developed from studies of ant behavior when foraging for food, specifically the methods by which ants choose the shortest path to a particular source even in the presence of multiple trails. Using trails of pheromones, ants indicate the presence of food to the rest of the colony.
Ant algorithms (also known as Ant Colony Optimization) are a class of metaheuristic search algorithms that have been successfully applied to solving NP-hard problems. Ant algorithms are biologically inspired by the behavior of colonies of real ants, and in particular how they forage for food. One of the main ideas behind this approach is that the ants can communicate with one another through indirect means (stigmergy) by modifying the concentration of highly volatile chemicals called pheromones in their immediate environment.
The Traveling Salesman Problem (TSP) is an NP-complete problem that has been the target of considerable research in the optimization community. The TSP is recognized as an easily understood yet hard optimization problem: find the shortest circuit through a set of cities, starting from one city, visiting every other city exactly once, and returning to the start city. The TSP is often used to test new, promising optimization heuristics. Formally, the TSP is the problem of finding the shortest Hamiltonian circuit over a set of nodes. There are two classes of TSP:

symmetric TSP, and

asymmetric TSP (ATSP)

The difference between the two classes is that with symmetric TSP the distance between two cities is the same regardless of
the direction you travel; with ATSP this is not necessarily the case.
Ant Colony Optimization has been successfully applied to both classes of TSP with good, and often excellent, results. The ACO algorithm skeleton for TSP is as follows:
Procedure ACO algorithm for TSPs:
1. Set parameters, initialize pheromone trails.
2. while (termination condition not met) do
3.    ConstructSolutions
4.    ApplyLocalSearch   % optional
5.    UpdateTrails
6. end while
7. end ACO algorithm for TSPs.
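As an illustrative sketch only (not code from the original paper), the skeleton above can be fleshed out in Python roughly as follows, using the standard Ant System construction and pheromone-update rules. The function names, parameter values (alpha, beta, rho, q), and the small distance matrix in the usage example are assumptions chosen for the illustration, and the optional local-search step is omitted.

    import random

    def tour_length(tour, dist):
        """Total length of a closed tour over the distance matrix."""
        return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

    def ant_system_tsp(dist, n_ants=10, n_iters=100, alpha=1.0, beta=2.0, rho=0.5, q=1.0):
        """Minimal Ant System for the symmetric TSP (illustrative sketch).
        Assumes dist[i][j] > 0 for all i != j."""
        n = len(dist)
        tau = [[1.0] * n for _ in range(n)]     # pheromone trails
        eta = [[0.0 if i == j else 1.0 / dist[i][j] for j in range(n)]
               for i in range(n)]               # visibility = 1/distance
        best_tour, best_len = None, float("inf")

        for _ in range(n_iters):
            tours = []
            for _ in range(n_ants):             # ConstructSolutions
                start = random.randrange(n)
                tour, unvisited = [start], set(range(n)) - {start}
                while unvisited:
                    i = tour[-1]
                    weights = [(j, (tau[i][j] ** alpha) * (eta[i][j] ** beta))
                               for j in unvisited]
                    total = sum(w for _, w in weights)
                    r, acc = random.uniform(0, total), 0.0
                    for j, w in weights:        # roulette-wheel selection
                        acc += w
                        if acc >= r:
                            tour.append(j)
                            unvisited.remove(j)
                            break
                tours.append(tour)

            # UpdateTrails: evaporation, then deposit proportional to tour quality
            for i in range(n):
                for j in range(n):
                    tau[i][j] *= (1.0 - rho)
            for tour in tours:
                length = tour_length(tour, dist)
                if length < best_len:
                    best_tour, best_len = tour, length
                for k in range(n):
                    i, j = tour[k], tour[(k + 1) % n]
                    tau[i][j] += q / length
                    tau[j][i] += q / length     # symmetric TSP

        return best_tour, best_len

    # Example usage with a small, made-up symmetric distance matrix:
    if __name__ == "__main__":
        dist = [[0, 2, 9, 10],
                [2, 0, 6, 4],
                [9, 6, 0, 3],
                [10, 4, 3, 0]]
        print(ant_system_tsp(dist))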

2.3 APPLICATIONS OF ANT COLONY OPTIMIZATION:

Ant colony optimization algorithms have been applied to many combinatorial optimization problems, ranging from quadratic assignment to protein folding and vehicle routing, and many derived methods have been adapted to dynamic problems in real variables, stochastic problems, multi-objective problems, and parallel implementations. ACO has also been used to produce near-optimal solutions to the travelling salesman problem. Ant colony algorithms have an advantage over simulated annealing and genetic algorithm approaches to similar problems when the graph may change dynamically: the ant colony algorithm can be run continuously and adapt to changes in real time. This is of interest in network routing and urban transportation systems.
As a very good example, ant colony optimization algorithms have been used to produce near-optimal solutions to the travelling salesman problem. The first ACO algorithm was called the Ant System [9], and it was aimed at solving the travelling salesman problem, in which the goal is to find the shortest round-trip linking a series of cities. The general algorithm is relatively simple and based on a set of ants, each making one of the possible round-trips through the cities. At each stage, an ant chooses to move from one city to another according to the following rules (summarized mathematically after the list):

It must visit each city exactly once;

A distant city has less chance of being chosen (the visibility);

The more intense the pheromone trail laid out on an edge between two cities, the greater the probability that that edge will be chosen;

Having completed its journey, the ant deposits pheromone on all edges it traversed, depositing more if the journey was short;

After each iteration, trails of pheromones evaporate.
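The paper does not write these rules out mathematically; for reference, in the standard Ant System formulation they correspond to the following transition probability and pheromone update (notation as commonly used in the ACO literature).

% Probability that ant k, currently at city i, moves to an unvisited city j
% (N_i^k is the set of cities ant k has not yet visited; tau_ij is the
% pheromone on edge (i,j); eta_ij = 1/d_ij is the visibility; alpha and
% beta weight pheromone against visibility).
\[
  p_{ij}^{k} \;=\; \frac{\left[\tau_{ij}\right]^{\alpha}\left[\eta_{ij}\right]^{\beta}}
                        {\sum_{l \in N_i^{k}} \left[\tau_{il}\right]^{\alpha}\left[\eta_{il}\right]^{\beta}}
\]
% Pheromone update at the end of each iteration: evaporation at rate rho,
% then each ant k deposits an amount inversely proportional to its tour length L_k.
\[
  \tau_{ij} \;\leftarrow\; (1-\rho)\,\tau_{ij} \;+\; \sum_{k} \Delta\tau_{ij}^{k},
  \qquad
  \Delta\tau_{ij}^{k} \;=\;
  \begin{cases}
    Q / L_{k} & \text{if ant } k \text{ used edge } (i,j),\\
    0 & \text{otherwise.}
  \end{cases}
\]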

The initial applications of ACO were in the domain of NP-hard combinatorial optimization problems, and the largest body of ACO research is still, not surprisingly, to be found in this area. The interested reader will find a rather complete overview of these applications in (Dorigo & Stützle 2004). Another application that was considered early in the history of ACO is routing in telecommunication networks. A particularly successful example of an ACO algorithm in this domain is AntNet (Di Caro & Dorigo 1998). Current research in ACO is devoted both to the development of theoretical foundations and to the application of the metaheuristic to new challenging problems. The development of theoretical foundations was started by Gutjahr, who was the first to prove convergence in probability of an ACO algorithm (Gutjahr 2000). An overview of theoretical results available for ACO can be found in (Dorigo & Blum 2005). Concerning applications, the use of ACO for the solution of dynamic, multi-objective, stochastic, continuous and mixed-variable optimization problems is a current hot topic, as is the creation of parallel implementations capable of taking advantage of newly available parallel hardware.
Some other applications of ACO are:

Scheduling problems.

Vehicle routing problem.

Assignment problem.

Set problem.

Travelling salesman problem.

NP-hard combinatorial problems.


Data mining.

Connectionless network routing, etc.

3 FUTURE OF ANT COLONY OPTIMIZATION:

It should be clear from this outline that the Ant Colony Optimization metaheuristic represents a powerful foundation for solving complex combinatorial problems. Ant Systems in particular are well suited to tackling optimization problems such as the TSP, DTSP, QAP, and others. Their superiority when stacked against other powerful methods is demonstrable, and the savings in computation time (generated tours) combined with a high level of optimality cannot be denied.
Concepts such as reinforcement learning and greedy searching give AS a competitive advantage even over highly communicative multi-agent systems, while the ability to quickly self-train allows for lightweight and agile implementations. Finally, the decoupling of individual swarm agents from one another allows AS to parse incredibly large data sets with considerably less waste than competing algorithms.
In all, ACO is a new field with very real applications and powerful concepts. There is little doubt that further research will yield promising results that may offer solutions for currently intractable combinatorial problems. AS, and ACO as a whole, have already clearly demonstrated their usefulness in solving many variants of the TSP. ACO and PSO can be analyzed for future enhancement, with new research focused on producing better solutions by improving effectiveness and reducing limitations. More possibilities for dynamically determining the best destination through ACO can be developed, and PSO can be endowed with fitness sharing to investigate whether this helps improve performance. In future work, the velocity of each individual should be updated using the best element found across all iterations rather than only that of the current iteration.
A wide range of technology levels has been observed, from TRL 1 through to TRL 8. The digital pheromone target tracking system should be considered the most mature technology, at TRL 7/8, having flown in operational experimental conditions. The Swarm-bots project, arguably the most exciting project from a robotics perspective, is assessed at TRL 4/5. Mechatronic research is generally at TRL 4.
The algorithms derived from the Ant Colony Optimization metaheuristic ("Smart Algorithms") should be rated at TRL 2/3 (only because physical systems are not generated in this environment). The MANET routing algorithm research should be rated at TRL 3. Routing algorithms for sensor networks would also attract a TRL rating of 3. Sensor technology achieves a rating of TRL 5/6.

REFERENCES

[1] Ant Colony Optimization, Marco Dorigo and Thomas Stützle.
[2] Ant Algorithms, Marco Dorigo, Gianni Di Caro, and Michael Sampels.
[3] Swarm Intelligence, Eric Bonabeau, Guy Theraulaz, and Marco Dorigo.
[4] Swarm Intelligence, Christian Blum and Daniel Merkle.
[5] Swarm Intelligence, James Kennedy, Russell C. Eberhart, and Yuhui Shi.
[6] Ant Colony Optimization and Swarm Intelligence, M. Dorigo, M. Birattari, C. Blum, L. M. Gambardella, F. Mondada, and Th. Stützle (Eds.).
[7] Bio-Inspired Artificial Intelligence, Dario Floreano and Claudio Mattiussi.
[8] Artificial Intelligence: A Modern Approach, Stuart Russell and Peter Norvig.
[9] Artificial Intelligence: A Systems Approach, M. Tim Jones.
[10] Introducing Artificial Intelligence, Henry Brighton.
[11] The Essence of Artificial Intelligence, Alison Cawsey.
[12] Artificial Intelligence, Patrick Henry Winston.
[13] Artificial Intelligence, Elaine Rich and Kevin Knight.