
The Traveling Salesman Problem (TSP)
Algorithms and Networks

V6 Prof. Bento S. de Mattos


Contents
• TSP and its applications
• Heuristics and approximation algorithms
  – Construction heuristics, among others: Christofides, insertion heuristics
  – Improvement heuristics, among others: 2-opt, 3-opt, Lin-Kernighan
  – GA applied to solve TSP
  – Artificial neural networks applied to TSP
1. Problem definition and applications

TSP Problem
• Instance: n vertices (cities), and the distance between every pair of vertices
• Question: find the shortest (simple) cycle that visits every city

[Figure: an example instance on five cities with edge weights, and a shortest tour.]

Applications
• Collection and delivery problems
• Robotics
• Board drilling

NP-complete
• Instance: cities, distances, K
• Question: is there a TSP tour of length at most K?
  – This is an NP-complete problem
  – Closely related to the Hamiltonian Circuit problem

Assumptions
• Lengths are non-negative (or positive)
• Symmetric: w(u,v) = w(v,u)
  – Not always: e.g., the painting machine application
• Triangle inequality: for all x, y, z: w(x,y) + w(y,z) ≥ w(x,z)
• Always valid?
If the triangle inequality does not hold
Theorem: If P ≠ NP, then there is no polynomial-time algorithm for TSP without the triangle inequality that approximates within a ratio c, for any constant c.
Proof: Suppose there is such an algorithm A. We build a polynomial-time algorithm for Hamiltonian Circuit (giving a contradiction):
– Take an instance G=(V,E) of HC
– Build an instance of TSP:
  • A city for each v ∈ V
  • If (v,w) ∈ E, then d(v,w) = 1; otherwise d(v,w) = nc + 1
– A finds a tour with distance at most nc if and only if G has a Hamiltonian circuit

Heuristics and approximations
• Two types
  – Construction heuristics
    • A tour is built from nothing
  – Improvement heuristics
    • Start with ‘some’ tour, and keep changing it into a better one as long as possible

2. Construction heuristics
Spanning Trees
A spanning tree of a graph is a subgraph that contains all the vertices and is a tree. A graph may have many spanning trees; for instance, the complete graph on four vertices has sixteen spanning trees:

[Figure: the complete graph on four vertices and its sixteen spanning trees.]
Minimum Spanning Trees
Now suppose the edges of the graph have weights or lengths. The weight of a tree is just the sum of
weights of its edges. Obviously, different trees have different lengths. The problem: how to find the
minimum length spanning tree? This problem can be solved by many different algorithms. It is the topic
of some very recent research. There are several "best" algorithms, depending on the assumptions you
make:
A randomized algorithm can solve it in linear expected time. [Karger, Klein, and Tarjan, "A randomized
linear-time algorithm to find minimum spanning trees", J. ACM, vol. 42, 1995, pp. 321-328.]
It can be solved in linear worst case time if the weights are small integers. [Fredman and Willard,
"Trans-dichotomous algorithms for minimum spanning trees and shortest paths", 31st IEEE Symp.
Foundations of Comp. Sci., 1990, pp. 719--725.]
Otherwise, the best solution is very close to linear but not exactly linear. The exact bound is O(m log β(m,n)), where β(m,n) is the smallest i such that the i-times-iterated logarithm log(log(...log(n)...)) is less than m/n. [Gabow, Galil, Spencer, and Tarjan, "Efficient algorithms for finding minimum spanning trees in undirected and directed graphs", Combinatorica, vol. 6, 1986, pp. 109-122.]
These algorithms are all quite complicated, and probably not that great in practice unless you're looking
at really huge graphs. The book tries to keep things simpler, so it only describes one algorithm but (in
my opinion) doesn't do a very good job of it. I'll go through three simple classical algorithms (spending
not so much time on each one).
Minimum Spanning Trees

[Figure: diagram of a minimum spanning tree. Each edge is weighted with a number roughly equal to its length; dark, thick edges are in the minimum spanning tree. Created by Derrick Coetzee.]


Why Minimum Spanning Trees

The standard application is to a problem like phone network design. You have a business with several
offices; you want to lease phone lines to connect them up with each other; and the phone company
charges different amounts of money to connect different pairs of cities. You want a set of lines that
connects all your offices with a minimum total cost. It should be a spanning tree, since if a network
isn't a tree you can always remove some edges and save money. A less obvious application is that the
minimum spanning tree can be used to approximately solve the traveling salesman problem. A
convenient formal way of defining this problem is to find the shortest path that visits each point at least
once.
Note that if you have a path visiting all points exactly once, it's a special kind of tree. For instance in
the example above, twelve of sixteen spanning trees are actually paths. If you have a path visiting
some vertices more than once, you can always drop some edges to get a tree. So in general the MST
weight is less than the TSP weight, because it's a minimization over a strictly larger set.
On the other hand, if you draw a path tracing around the minimum spanning tree, you trace each edge
twice and visit all points, so the TSP weight is less than twice the MST weight. Therefore this tour is
within a factor of two of optimal. There is a more complicated way (Christofides' heuristic) of using
minimum spanning trees to find a tour within a factor of 1.5 of optimal.
Prim’s Algorithm for Minimum Spanning Trees

Rather than build a subgraph one edge at a time, Prim's algorithm builds a tree one vertex at a time.

Prim's algorithm:
    let T be a single vertex x
    while (T has fewer than n vertices)
    {
        find the smallest edge connecting T to G-T
        add it to T
    }

Since each edge added is the smallest connecting T to G-T, the lemma we proved shows that we only add edges that
should be part of the MST.
Again, it looks like the loop has a slow step in it. But again, some data structures can be used to speed this up. The idea is
to use a heap to remember, for each vertex, the smallest edge connecting T with that vertex.
Prim with heaps:
    make a heap of values (vertex, edge, weight(edge))
        initially (v, -, infinity) for each vertex
    let tree T be empty
    while (T has fewer than n vertices)
    {
        let (v, e, weight(e)) have the smallest weight in the heap
        remove (v, e, weight(e)) from the heap
        add v and e to T
        for each edge f = (u,v)
            if u is not already in T
                find value (u, g, weight(g)) in heap
                if weight(f) < weight(g)
                    replace (u, g, weight(g)) with (u, f, weight(f))
    }

Analysis: We perform n steps in which we remove the smallest element in the heap, and at most 2m steps in which we examine an edge f=(u,v). For each of those steps, we might replace a value on the heap, reducing its weight. (You also have to find the right value on the heap, but that can be done easily enough by keeping a pointer from the vertices to the corresponding values.) I haven't described how to reduce the weight of an element of a binary heap, but it's easy to do in O(log n) time. Alternatively, by using a more complicated data structure known as a Fibonacci heap, you can reduce the weight of an element in constant time. The result is a total time bound of O(m + n log n).
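
For concreteness, here is a minimal runnable sketch in Python (graph representation and names are illustrative). Python's heapq has no decrease-key operation, so instead of replacing heap values this version uses the common lazy-deletion workaround: push duplicate entries and skip stale ones. That gives the O(m log n) binary-heap bound rather than the Fibonacci-heap O(m + n log n).

import heapq

def prim_mst(graph, start):
    # Prim's algorithm with a binary heap (lazy deletion instead of
    # decrease-key). graph maps vertex -> {neighbor: weight}.
    in_tree = {start}
    mst_edges = []
    # heap holds (weight, u, v): a candidate edge (u, v) leaving the tree
    heap = [(w, start, v) for v, w in graph[start].items()]
    heapq.heapify(heap)
    while heap and len(in_tree) < len(graph):
        w, u, v = heapq.heappop(heap)
        if v in in_tree:               # stale entry: v was added earlier
            continue
        in_tree.add(v)
        mst_edges.append((u, v, w))
        for x, wx in graph[v].items():
            if x not in in_tree:
                heapq.heappush(heap, (wx, v, x))
    return mst_edges

g = {'a': {'b': 1, 'c': 4}, 'b': {'a': 1, 'c': 2, 'd': 5},
     'c': {'a': 4, 'b': 2, 'd': 3}, 'd': {'b': 5, 'c': 3}}
print(prim_mst(g, 'a'))    # [('a', 'b', 1), ('b', 'c', 2), ('c', 'd', 3)]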
Boruvka’s Algorithm for Minimum Spanning Trees

Although this seems a little complicated to explain, it's probably the easiest one for computer
implementation since it doesn't require any complicated data structures. The idea is to do steps like
Prim's algorithm, in parallel all over the graph at the same time.

Boruvka's algorithm:
    make a list L of n trees, each a single vertex
    while (L has more than one tree)
        for each T in L, find the smallest edge connecting T to G-T
        add all those edges to the MST
        (causing pairs of trees in L to merge)

As seen in Prim's algorithm, each edge you add must be part of the MST, so it must be ok to add them
all at once.

Analysis: This is similar to merge sort. Each pass reduces the number of trees by a factor of two, so there
are O(log n) passes. Each pass takes time O(m) (first figure out which tree each vertex is in, then for
each edge test whether it connects two trees and is better than the ones seen before for the trees on either
endpoint) so the total is O(m log n).
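
A compact runnable sketch of Boruvka's algorithm, using a simple union-find to figure out which tree each vertex is in; the edge-list format and distinct edge weights (which make the cheapest outgoing edge unambiguous) are assumptions of this sketch.

def boruvka_mst(n, edges):
    # Boruvka's algorithm. edges is a list of (weight, u, v) with
    # vertices 0..n-1; distinct weights are assumed for simplicity.
    parent = list(range(n))
    def find(x):                      # union-find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    mst, components = [], n
    while components > 1:
        cheapest = {}                 # component root -> best outgoing edge
        for w, u, v in edges:
            ru, rv = find(u), find(v)
            if ru != rv:
                for r in (ru, rv):
                    if r not in cheapest or w < cheapest[r][0]:
                        cheapest[r] = (w, u, v)
        for w, u, v in cheapest.values():
            ru, rv = find(u), find(v)
            if ru != rv:              # may already be merged this pass
                parent[ru] = rv
                mst.append((u, v, w))
                components -= 1
    return mst

edges = [(1, 0, 1), (2, 1, 2), (3, 2, 3), (4, 0, 2), (5, 1, 3)]
print(boruvka_mst(4, edges))   # [(0, 1, 1), (1, 2, 2), (2, 3, 3)]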
1st Construction heuristic: Nearest neighbor
• Start at some vertex s; v = s
• While not all vertices are visited
  – Select the closest unvisited neighbor w of v
  – Go from v to w
  – v = w
• Go from v to s

Can have performance ratio O(log n).
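
A minimal sketch of the nearest-neighbor heuristic, assuming a symmetric distance matrix (the matrix and the function names are illustrative):

def nearest_neighbor_tour(dist, s=0):
    # Nearest-neighbor construction heuristic. dist is a symmetric
    # n x n distance matrix; returns a tour starting at s (the
    # closing edge back to s is implicit).
    n = len(dist)
    unvisited = set(range(n)) - {s}
    tour, v = [s], s
    while unvisited:
        w = min(unvisited, key=lambda u: dist[v][u])  # closest unvisited
        tour.append(w)
        unvisited.remove(w)
        v = w
    return tour

dist = [[0, 2, 9, 10], [2, 0, 6, 4], [9, 6, 0, 3], [10, 4, 3, 0]]
tour = nearest_neighbor_tour(dist)
length = sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))
print(tour, length)    # [0, 1, 3, 2] with length 2 + 4 + 3 + 9 = 18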

Heuristic with ratio 2
• Find a minimum spanning tree
• Report vertices of tree in preorder
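
A sketch of this "double tree" heuristic, reusing the prim_mst sketch above: build an MST, then report the vertices in preorder, which amounts to shortcutting the doubled tree walk. Under the triangle inequality the resulting tour is at most twice the optimum.

def double_tree_tour(graph, start):
    # Ratio-2 heuristic: MST + preorder walk (shortcut Euler tour).
    # Reuses prim_mst() from the earlier sketch; its edges (u, v)
    # are already oriented away from the start vertex.
    children = {v: [] for v in graph}
    for u, v, _ in prim_mst(graph, start):
        children[u].append(v)
    tour, stack = [], [start]
    while stack:                       # iterative preorder traversal
        v = stack.pop()
        tour.append(v)
        stack.extend(reversed(children[v]))
    return tour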

Christofides
• Make a Minimum Spanning Tree T
• Set W = {v | v has odd degree in tree T}
• Compute a minimum weight matching M
in the graph G[W].
• Look at the graph T+M. (Note: Eulerian!)
• Compute an Euler tour C’ in T+M.
• Add shortcuts to C’ to get a TSP-tour
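
For illustration, the steps map onto NetworkX like this. This is a sketch, assuming a complete weighted graph and a NetworkX version that provides min_weight_matching (≥ 2.6); recent versions also ship their own christofides implementation.

import networkx as nx

def christofides_tour(G):
    # Sketch of Christofides on a complete weighted nx.Graph
    # satisfying the triangle inequality (edge attribute 'weight').
    T = nx.minimum_spanning_tree(G)                 # minimum spanning tree
    W = [v for v in T if T.degree(v) % 2 == 1]      # odd-degree vertices
    M = nx.min_weight_matching(G.subgraph(W))       # min-weight perfect matching
    H = nx.MultiGraph(T)                            # T + M is Eulerian
    H.add_edges_from(M)
    tour, seen = [], set()
    for u, _ in nx.eulerian_circuit(H):             # shortcut repeated vertices
        if u not in seen:
            seen.add(u)
            tour.append(u)
    return tour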

Ratio 1.5

• Total length of the edges in T: at most OPT
• Total length of the edges in matching M: at most OPT/2
  (an optimal tour restricted to W splits into two perfect matchings on W; the cheaper one costs at most OPT/2)
• So T+M has length at most 3/2 OPT
• Use the ∆-inequality for the shortcuts.

Closest insertion heuristic
• Build tour by starting with one
vertex, and inserting vertices one by
one.
• Always insert vertex that is closest to
a vertex already in tour.
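
A minimal sketch of closest insertion on a symmetric distance matrix (names are illustrative):

def closest_insertion_tour(dist):
    # Closest-insertion heuristic: repeatedly pick the outside vertex
    # closest to the tour and insert it at minimum extra cost.
    n = len(dist)
    tour = [0]
    outside = set(range(1, n))
    while outside:
        # vertex closest to any vertex already on the tour
        v = min(outside, key=lambda u: min(dist[u][t] for t in tour))
        # position between consecutive tour vertices with minimum extra cost
        best_i = min(range(len(tour)),
                     key=lambda i: dist[tour[i]][v]
                                   + dist[v][tour[(i + 1) % len(tour)]]
                                   - dist[tour[i]][tour[(i + 1) % len(tour)]])
        tour.insert(best_i + 1, v)
        outside.remove(v)
    return tour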

Closest insertion heuristic has
performance ratio 2
• Build tree T: if v is added to tour, add
to T edge from v to closest vertex on
tour.
• T is a Minimum Spanning Tree
(Prim’s algorithm)
• Total length of T ≤ OPT
• Length of tour ≤ 2 × length of T

Many variants
• Closest insertion: insert vertex closest to vertex
in the tour
• Farthest insertion: insert vertex whose
minimum distance to a node on the cycle is
maximum
• Cheapest insertion: insert the node that can be
inserted with minimum increase in cost
– Gives also ratio 2
– Computationally expensive
• Random insertion: randomly select a vertex
• Each time: insert vertex at position that gives
minimum increase of tour length

Cycle merging heuristic
• Start with n cycles of length 1
• Repeat:
– Find two cycles with minimum distance
– Merge them into one cycle
• Until 1 cycle with n vertices
• This has ratio 2: compare with
algorithm of Kruskal for MST.

Savings
• Cycle merging heuristic where we merge the two tours that provide the largest “savings”: the pair that can be merged with the smallest additional cost

Some test results
• In an overview paper, Jünger et al. report on tests on a set of instances (105 – 2392 vertices; city-generated TSP benchmarks)
• Nearest neighbor: 24% away from optimal on average
• Closest insertion: 20%
• Farthest insertion: 10%
• Cheapest insertion: 17%
• Random insertion: 11%
• Preorder of min spanning trees: 38%
• Christofides: 19%, with improvements 11% / 10%
• Savings method: 10% (and fast)

3. Improvement heuristics

Improvement heuristics
• Start with a tour (e.g., from a construction heuristic) and improve it stepwise
  – Iterative improvement: 2-Opt, 3-Opt, k-Opt, Lin-Kernighan
  – Local search: Iterated LK, simulated annealing, …

Scheme
• A rule modifies a solution into a different solution
• While there is a Rule(sol, sol’) with sol’ a better solution than sol
  – Take sol’ instead of sol
• The cost decreases in each step
• Can get stuck in a ‘local minimum’
• Can use exponential time in theory…

Very simple
• Node insertion
  – Take a vertex v and put it in a different spot in the tour
• Edge insertion
  – Take two successive vertices v, w and put them as an edge somewhere else in the tour

2-opt
• Take two edges (v,w) and (x,y) and replace them by (v,x) and (w,y) OR (v,y) and (w,x) to get a tour again.
• Costly: part of the tour has to be reversed.
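
A minimal sketch of 2-opt as a repeated local search, assuming a symmetric distance matrix:

def two_opt(tour, dist):
    # Repeatedly apply improving 2-opt moves until none remain.
    # A move removes edges (tour[i], tour[i+1]) and (tour[j], tour[j+1])
    # and reverses the segment in between.
    n = len(tour)
    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            for j in range(i + 2, n - (1 if i == 0 else 0)):
                a, b = tour[i], tour[i + 1]
                c, d = tour[j], tour[(j + 1) % n]
                delta = dist[a][c] + dist[b][d] - dist[a][b] - dist[c][d]
                if delta < 0:                      # improving move found
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True
    return tour

Seeding it with the nearest-neighbor sketch from earlier, tour = two_opt(nearest_neighbor_tour(dist), dist), mirrors the usual construction-then-improvement pipeline.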

2-Opt improvements
• Reverse the shorter part of the tour
• Clever search for improving moves
• Look only at a subset of candidate improvements
• Postpone correcting the tour
• Combine with node insertion
• In R²: gets rid of crossings in the tour

3-opt
• Choose three edges from tour
• Remove them, and recombine the three resulting paths into a tour in the cheapest possible way

3-opt
• Costly to find 3-opt improvements: O(n³) candidates
• k-opt: generalizes 3-opt

Lin-Kernighan
• Idea: modifications that are bad on their own can enable something good later
• Tour modification:
– Collection of simple changes
– Some increase length
– Total set of changes decreases length

LK
• One LK step:
– Make sets of edges X = {x1, …, xr}, Y =
{y1,…,yr}
• If we replace X by Y in tour then we have
another tour
– Sets are built stepwise
• Repeated until …
• Variants on scheme possible

One LK step
• Choose vertex t1, and edge x1 = (t1,t2) from the tour
• i = 1
• Choose edge y1 = (t2,t3) not in the tour with g1 = w(x1) – w(y1) > 0 (or, as large as possible)
• Repeat a number of times, or until …
  – i++
  – Choose edge xi = (t2i-1, t2i) from the tour, such that
    • xi is not one of the edges yj
    • oldtour – X + (t2i,t1) + Y is also a tour
  – If oldtour – X + (t2i,t1) + Y is shorter than oldtour, then take this tour: done
  – Choose edge yi = (t2i, t2i+1) such that
    • gi = w(xi) – w(yi) > 0
    • yi is not one of the edges xj
    • yi is not in the tour

Iterated LK
• Construct a start tour
• Repeat the following r times:
  – Improve the tour with Lin-Kernighan until no longer possible
  – Do a random 4-opt move that does not increase the length by more than 10 percent
• Report the best tour seen

Costs much time; gives excellent results.

Other methods
• Simulated annealing and similar methods
• Problem specific approaches, special cases
• Iterated LK combined with
treewidth/branchwidth approach:
– Run ILK a few times (e.g., 5)
– Take graph formed by union of the 5 tours
– Find minimum length Hamiltonian circuit in
graph with clever dynamic programming
algorithm

4. A dynamic programming algorithm
Held-Karp algorithm for TSP
• O(n² · 2ⁿ) algorithm for TSP
• Uses dynamic programming
• Take some starting vertex s
• For a set of vertices R (s ∈ R) and a vertex w ∈ R, let
  – B(R,w) = minimum length of a path that
    • starts in s
    • visits all vertices in R (and no other vertices)
    • ends in w

TSP: Recursive formulation
• B({s},s) = 0
• If |S| > 1, then
  – B(S,w) = min over v ∈ S − {w} of ( B(S − {w}, v) + d(v,w) )
• If we have all B(V,v), then we can solve TSP: the optimal tour length is min over v of ( B(V,v) + d(v,s) )
• Gives the requested algorithm using DP techniques.
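
A runnable sketch of the Held-Karp recurrence with bitmask DP, taking vertex 0 as the start vertex s (the dictionary-based table is an implementation choice of this sketch):

from itertools import combinations

def held_karp(dist):
    # Held-Karp O(n^2 * 2^n) DP. dist is an n x n matrix; vertex 0
    # plays the role of the start vertex s. B[(mask, w)] = length of
    # the shortest path that starts at 0, visits exactly the vertices
    # in the bitmask mask (which always contains 0), and ends at w.
    n = len(dist)
    B = {(1, 0): 0}                                   # B({s}, s) = 0
    for size in range(2, n + 1):
        for subset in combinations(range(1, n), size - 1):
            mask = 1 | sum(1 << v for v in subset)    # subset plus vertex 0
            for w in subset:
                prev = mask ^ (1 << w)                # S - {w}
                B[(mask, w)] = min(B[(prev, v)] + dist[v][w]
                                   for v in range(n)
                                   if prev & (1 << v) and (prev, v) in B)
    full = (1 << n) - 1
    # close the tour: min over end vertices w of B(V, w) + d(w, s)
    return min(B[(full, w)] + dist[w][0] for w in range(1, n))

dist = [[0, 2, 9, 10], [2, 0, 6, 4], [9, 6, 0, 3], [10, 4, 3, 0]]
print(held_karp(dist))    # 18: the optimal tour 0-1-3-2-0 has length 2+4+3+9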
5. TSP SOLVER USING GENETIC ALGORITHM

The outline of the GA for TSP

The algorithm consists of the following steps:
• Initialization: generate M individuals randomly.
• Natural selection: eliminate p1% of the individuals. The population decreases by M·p1/100.
• Multiplication: choose M·p1/100 pairs of individuals randomly and produce an offspring from each pair (by crossover). The population returns to its initial size M.
• Mutation by 2-opt: choose p2% of the individuals randomly and improve them by the 2-opt method. The elite individual (the individual with the best fitness value in the population) is always chosen. If the individual is already improved, do nothing.

A skeleton of this loop is sketched below.
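
This skeleton makes some simplifying assumptions: gsx_crossover is the crossover sketched later, two_opt is the earlier 2-opt sketch, and natural selection is reduced here to dropping the worst individuals (the paper's similarity-based survivor selection is described below).

import random

def ga_tsp(dist, M=200, generations=300, p1=30, p2=20):
    # Skeleton of the GA described above; individuals use the
    # path representation (permutations of city indices).
    n = len(dist)
    pop = [random.sample(range(n), n) for _ in range(M)]
    R = M * p1 // 100
    for _ in range(generations):
        # natural selection (simplified here to dropping the worst R)
        pop.sort(key=lambda t: tour_length(t, dist))
        pop = pop[:M - R]
        # multiplication: R offspring by crossover restore size M
        for _ in range(R):
            pa, pb = random.sample(pop, 2)
            pop.append(gsx_crossover(pa, pb))
        # mutation by 2-opt on ~p2% of individuals, elite (index 0) always
        pop.sort(key=lambda t: tour_length(t, dist))
        for i in {0} | {random.randrange(M) for _ in range(M * p2 // 100)}:
            pop[i] = two_opt(pop[i], dist)
    return min(pop, key=lambda t: tour_length(t, dist))

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]]
               for i in range(len(tour)))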

Representation
• We use the path representation for solution
coding.
• Example: the chromosome g = (D, H, B, A, C, F, G, E) means that the salesperson visits D, H, B, A, …, E successively, and returns to town D.

Mutation by 2-opt

• 2-opt is one of the most well-known local search operators (move operators) among TSP solving algorithms.
• It improves the tour edge by edge and reverses the order of a subtour.
• When we apply the 2-opt mutation to a solution, the solution may fall into a local minimum.

Crossover operator
• Greedy Subtour Crossover (GSX) acquires the longest possible sequences of the parents’ subtours.
• Using GSX, the solution can escape from local minima effectively.

Greedy Subtour Crossover

Inputs: Chromosomes ga = (D, H, B, A, C, F, G, E) and gb = (B, C, D, G, H, F, E, A).
Outputs: The offspring g = (H, B, A, C, D, G, F, E)
Algorithm: Greedy Subtour Crossover

Inputs: Chromosomes ga = (a0, a1, …, an-1) and gb = (b0, b1, …, bn-1).
Outputs: The offspring chromosome g.

procedure crossover(ga, gb) {
    fa ← true
    fb ← true
    choose town t randomly
    choose x, where ax = t
    choose y, where by = t
    g ← t
    do {
        x ← x − 1 (mod n)
        y ← y + 1 (mod n)
        if fa = true then {
            if ax ∉ g then g ← ax.g          (prepend ax)
            else fa ← false
        }
        if fb = true then {
            if by ∉ g then g ← g.by          (append by)
            else fb ← false
        }
    } while fa = true or fb = true
    if |g| < |ga| then
        add the rest of the towns to g in random order
    return g
}
Example:
• Suppose that the parent chromosomes are ga = (D, H, B, A, C, F, G, E) and gb = (B, C, D, G, H, F, E, A).
• First, choose one town at random; say town C is chosen. Then x = 4 and y = 1. Now the child is (C).
• Next, pick up towns from the parents alternately, beginning with a3 (A) and then b2 (D). The child becomes g = (A, C, D).
• In the same way, add a2 (B), b3 (G), a1 (H), and the child becomes g = (H, B, A, C, D, G).
• Now the next town is b4 = H, and H has already appeared in the child, so we can’t add any more towns from parent gb.
• We then add towns from parent ga. The next town is a0 = D, but D has already been used. Thus, we can’t add towns from parent ga either.
• Finally, we add the rest of the towns, i.e., E and F, to the child in random order. The child is g = (H, B, A, C, D, G, F, E).
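
A runnable sketch of GSX following the pseudocode and this worked example (the list-based representation is an implementation choice):

import random

def gsx_crossover(ga, gb):
    # Greedy Subtour Crossover: ga, gb are tours (lists of city labels).
    n = len(ga)
    t = random.choice(ga)
    x, y = ga.index(t), gb.index(t)
    g = [t]
    used = {t}
    fa = fb = True
    while fa or fb:
        x = (x - 1) % n
        y = (y + 1) % n
        if fa:
            if ga[x] not in used:
                g.insert(0, ga[x])        # prepend from parent a
                used.add(ga[x])
            else:
                fa = False
        if fb:
            if gb[y] not in used:
                g.append(gb[y])           # append from parent b
                used.add(gb[y])
            else:
                fb = False
    rest = [c for c in ga if c not in used]
    random.shuffle(rest)                  # remaining towns in random order
    return g + rest

ga = list("DHBACFGE")
gb = list("BCDGHFEA")
print("".join(gsx_crossover(ga, gb)))     # e.g. HBACDGFE when t = 'C'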

Survivor Selection
• Eliminate R = M·p1/100 individuals. We eliminate similar individuals to maintain diversity, in order to avoid premature convergence.
• First, sort the individuals in fitness-value order and compare the fitness values of adjoining individuals. If the difference is less than ε, eliminate the preceding individual, as long as the number of eliminated individuals (r) is less than R.
• Next, if r < R, eliminate R − r individuals in order of lowest fitness value.

Other parameters
• Test data: TSPLIB
• gr96.data (666 cities)
– Population size: 200
– maximum number of generations: 300
– p1 = 30%
– p2 = 20%
– produces the minimum solution
• Conclusion: the solver produces good solutions very fast, since the GA utilizes heuristic search (2-opt) in its genetic operators. It is a hybrid algorithm.

6. Artificial Neural Networks Applied to TSP
Hopfield Network
A Hopfield net is a form of
recurrent (fully connected)
artificial neural network
invented by John Hopfield.
Hopfield nets serve as
content-addressable memory
systems with binary
threshold units. They are
guaranteed to converge to a
local minimum, but
convergence to one of the
stored patterns is not
guaranteed. The Hopfield
Neural Network is able to
solve optimization
problems.
[Figure: a Hopfield net with four nodes.]
Hopfield Network
The network consists of N fully interconnected neurons. Each neuron has two possible states, Vi = −1 and Vi = +1. The sum of the inputs of neuron i is

I_i = ∑_j ω_ij V_j

The state of the network as a whole is given by the N values Vi, represented by a word of N bits.

The network operates over time, sequenced by a clock. We write:

Vi(t)      the state of neuron i at time t
Vi(t + 1)  the state of neuron i at time t + δt

where the term δt represents the interval between two clock edges.

Hopfield Network
Each neuron may change its state at a random moment, asynchronously with respect to all others, all neurons changing state with a constant mean frequency. Alternatively, another interpretation is to consider that, at each clock tick, one neuron is selected at random to change state. Each state change is evaluated following the McCulloch and Pitts rule:

a_i = 1 if ∑_j ω_ij V_j > θ_i, and −1 otherwise

Hopfield’s usage:

a_i = 1 if ∑_j ω_ij V_j > θ_i, and 0 otherwise

where
ω_ij is the strength of the connection from unit j to unit i (the weight of the connection),
V_j is the state of unit j,
θ_i is the threshold of unit i.

The connections in a Hopfield net typically have the following restrictions:
• no unit has a connection with itself
• connections are symmetric
Learning in Hopfield Networks
Learning in Hopfield networks is sometimes described as rote learning, to distinguish it from learning methods based on trial and error. In fact, the connection weights can be directly calculated, given the full set of states to be memorized. The problem is as follows: suppose that the Hopfield network is desired to memorize a set of states V^s, where s = 1, …, n. These states are called prototypes.

Used in this context, “memorizing” means that all of the V^s states must be stable states of the network, following the state dynamics described above. In addition, they must be attractor states, which enables each stable state to be reached from states that are slightly different.

In order to produce this effect, Hopfield used connection weights as follows:

ω_ij = ∑_s V_i^s V_j^s,    with ω_ii = 0
It may be simply demonstrated that these weights can be found using a quantitative application of
the Hebb rule. The Hebb rule consists of increasing the weight of a connection between two neurons
every time that the two neurons are simultaneously active. In a Hopfield network, this can be
expressed in the following manner: the network starts with a completely null set of connections (the
null hypothesis), and the network is forced into a particular state Vs. Each of the possible pairs of
neurons (i,j) is examined, and the weight ω_ij is increased by Δω_ij, where Δω_ij is calculated according to the following table:

V_i^s    V_j^s    Δω_ij
+1       +1       +1      neurons simultaneously active
+1       −1       −1      neurons in opposition
−1       +1       −1      neurons in opposition
−1       −1       +1      neurons simultaneously inactive
Energy in Hopfield Networks
The requirement that weights be symmetric is typically used, as it will guarantee that the
energy function decreases monotonically while following the activation rules, and the
network may exhibit some periodic or chaotic behavior if non-symmetric weights are used.
However, Hopfield found that this chaotic behavior is confined to relatively small parts of
the phase space, and does not impair the network's ability to act as a content-addressable
associative memory system.
Hopfield nets have a scalar value associated with each state of the network, referred to as the "energy" E of the network, where

E = −½ ∑_i ∑_j ω_ij V_i V_j + ∑_i θ_i V_i
This value is called the "energy" because the definition ensures that if units are randomly chosen to update their activations, the network will converge to states which are local minima of the energy function (which is considered to be a Lyapunov function). Thus, if a state is a local minimum of the energy function, it is a stable state for the network. Note that this energy function belongs to a general class of models in physics under the name of Ising models; these in turn are a special case of Markov networks, since the associated probability measure, the Gibbs measure, has the Markov property.
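
A minimal sketch of these dynamics: Hebbian weights, zero thresholds, and random asynchronous ±1 updates (the toy prototypes are illustrative):

import numpy as np

rng = np.random.default_rng(0)

def hebb_weights(patterns):
    # Hebbian weights: w_ij = sum_s V_i^s V_j^s, with w_ii = 0.
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, V, steps=1000, theta=0.0):
    # Asynchronous updates: pick a random unit and apply the
    # McCulloch-Pitts rule V_i = +1 if sum_j w_ij V_j > theta else -1.
    # With symmetric W and zero diagonal the energy never increases.
    V = V.copy()
    for _ in range(steps):
        i = rng.integers(len(V))
        V[i] = 1 if W[i] @ V > theta else -1
    return V

patterns = [np.array([1, -1, 1, -1, 1, -1]),    # toy prototypes
            np.array([1, 1, 1, -1, -1, -1])]
W = hebb_weights(patterns)
noisy = np.array([1, -1, 1, -1, 1, 1])          # first pattern, one bit flipped
print(recall(W, noisy))                          # converges to [1 -1 1 -1 1 -1]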
Hopfield Network Applied to TSP

        Step:  1   2   3   4   5
    A          0   1   0   0   0
    B          0   0   0   1   0
    C          1   0   0   0   0
    D          0   0   1   0   0
    E          0   0   0   0   1

The method for solving the TSP with Hopfield network consists of defining a system characterized
by a binary set of state variables and a quadratic energy function over this system, such that the
solutions to the problem, the shortest paths, are minima of this energy function by construction.
Hopfield chose to characterize this system using a table of towns and steps: an entry (X,i) is 1 if
town X is visited at step i, and 0 otherwise. One possible table is shown above. Values of 0 and 1
are used rather than -1 and +1 in order to simplify the calculation.

Reference: Andrzej Kos and Zbigniew Nagórny, “Modified Hopfield Neural Network for Travelling Salesman Problem,” Tools of Information Technology, Rzeszów, 2007
Hopfield Network Applied to TSP
The activation function for the Hopfield network is given by

V = ½ (1 + tanh(U/U0))    (1)

where U is the neuron input signal, V is the output signal, and U0 is a constant.

The energy function E defined for this network is

E = −½ ∑_i ∑_j ω_ij V_i V_j − ∑_i I_i V_i    (2)

where X is the number of neurons (the sums run over all X of them), ω_ij is the weight of the interconnect between the output of neuron j and the input of neuron i, and I_i is the external input signal of neuron i. The Hopfield network continuously evolves in time so as to minimize the energy function E.
Reference: Andrzej Kos and Zbigniew Nagórny, “Modified Hopfield Neural Network for Travelling Salesman Problem,” Tools of Information Technology, Rzeszów, 2007
Hopfield Network Applied to TSP
The Hopfield network for the TSP is built of X = N·N neurons. The network consists of N rows, each containing N neurons, according to the picture below.

[Figure: the scheme of the network — an N × N grid of neurons.]

All neurons have two subscripts. The first one defines the city number and the second one the position of the city in the tour. If a neuron has the output signal V_xi = 1 in the stable state of the network, it means that city x should be visited at stage i of the tour.
Reference: Andrzej Kos and Zbigniew Nagórny, “Modified Hopfield Neural Network for Travelling Salesman Problem,” Tools of Information Technology, Rzeszów, 2007
Hopfield Network Applied to TSP
The cost function of the TSP proposed by Hopfield and Tank is the reason for difficulties in converging to valid tours. The modified cost function of the TSP consists of four components: E1, E2, E3 and E4. E1 ensures that every city is visited exactly once (in the stable state of the network, exactly one neuron in each row has output signal equal to 1). E2 ensures that the salesman visits exactly one city at each stage of the tour (exactly one neuron in each column has output signal equal to 1). E3 forces the neurons to have output signals equal to 0 or 1, or near these values. E4 equals the total length of the salesman’s tour.

[Equations (3)–(6): the formulas for the components E1, E2, E3 and E4; see the reference.]

Hopfield Network Applied to TSP
In the preceding equations, A, B, C and D are constants, d_xx’ is the distance between cities x and x’, and the subscripts i + 1 and i − 1 are taken modulo N.
7. Research Material

Algorithms on the Web
Concorde TSP:
http://www.tsp.gatech.edu/concorde.html

TSP with GA (MATLAB):
http://www.mathworks.com/matlabcentral/fileexchange/13680-traveling-salesman-problem-genetic-algorithm

Bibliography
• H. Sengoku, I. Yoshihara, “A Fast TSP Solver Using GA on Java”, 1997.
• R. Rojas, “Neural Networks – A Systematic Introduction”, Springer-Verlag, Berlin, New York, 1996. This book can be found at http://page.mi.fu-berlin.de/rojas/neural/index.html
• E. Davalo, P. Naïm, “Neural Networks”, Macmillan Education, London, 1991.

