
Transactions on Computer Science and Technology, December 2013, Volume 2, Issue 4, PP. 62-68

A Two-phase Methodology Heuristic Insertion Algorithm for TSP


Jianjun Liu, Yuan Li, Xinrui Wang, Yuan Wen, Tingying Zhou
College of Science, China University of Petroleum, Beijing 102249, China
Email: kkong001@163.com

Abstract
There exist various algorithms for the TSP, a classical combinatorial optimization problem. The insertion algorithm is one of the heuristic algorithms applied to the TSP over the past decade. In this paper, a heuristic greedy algorithm based on a node-insertion strategy with random disturbance is proposed, consisting of two phases (insertion and disturbance). Finally, we apply it to a set of TSP benchmark instances. The experimental results show that the algorithm consistently produces good solutions. The algorithm is simple, easy to implement, and has a good convergence rate on most TSP instances.

Keywords: Optimization; Insertion; Disturbance; TSP

1 INTRODUCTION
The travelling salesman problem (TSP) is a famous combinatorial optimization problem [1-2]. It can be stated simply as follows: given a set of cities and the distances between each pair of them, find a shortest cycle visiting each city exactly once. If the distance between two cities does not depend on the direction, the problem is called symmetric. The size of a problem instance is defined as the number n of cities. Formally, for a complete, undirected and weighted graph with n vertices, the problem consists of finding a minimal Hamiltonian cycle. In this paper we consider Euclidean TSP (ETSP) instances, whose cities are embedded in the Euclidean plane. As is well known, the TSP is an intractable problem and is in fact provably NP-complete. Nevertheless, solving the TSP matters in many areas, so it is necessary to provide fast and efficient algorithms for it. Approximation and heuristic algorithms occupy a major position in TSP solving. Broadly speaking, TSP heuristics can be classified into tour construction procedures, which gradually build a solution by adding a new vertex at each step, and tour improvement procedures, which improve a feasible solution by performing various exchanges. The best methods are two-phase procedures that combine these two features [3]. In recent years, with the development of artificial intelligence, more and more intuitive methods for finding approximate solutions have been proposed, such as Tabu Search [4], Simulated Annealing [5], Ant Colony Optimization [6-9], Genetic Algorithms [10], Neural Networks [11-13], Particle Swarm Optimization [14-15] and Bee Colony Optimization [16]. However, these are not always the most appropriate techniques for quickly generating good tours: their performance is directly tied to running time, and such intelligent algorithms demand substantial computation and memory when solving the TSP.
The insertion heuristic is a two-phase procedure and has been studied in several articles. Gendreau M. et al. proposed an efficient algorithm (GENIUS) for the TSP that combines a new insertion procedure with a new post-optimization routine [17]. Later, in 1998, they proposed a generalized insertion heuristic for the TSP with Time Windows, in which the objective is the minimization of travel times [18]. Their tests on 375 instances indicate that the proposed heuristic compares very well with alternative methods and very often produces optimal or near-optimal solutions. Murat Albayrak and Novruz Allahverdi developed a GA with Greedy Sub Tour Mutation (GSTM) [19]. In this paper, we propose another greedy insertion algorithm, with random disturbance, for the TSP. The algorithm is simple and easy to implement, and has a good convergence rate on most TSP instances.
- 62 http://www.ivypub.org/cst

The rest of this paper is organized as follows. Section 2 presents the insertion algorithm and techniques to improve it. Section 3 applies several examples to illustrate the performance of the proposed algorithm and compares the two improved algorithms. Conclusions are drawn in Section 4.

2 GREEDY INSERTION ALGORITHM (GIA)


Classical insertion algorithms iteratively add a city to a partial tour, initially made of one city chosen at random, until all cities have been inserted. When a city i is inserted in a tour T, the sequence of all cities already in T remains unchanged. The length of the partial tour T is increased by d(a, i) + d(i, b) - d(a, b), where d(x, y) denotes the distance between cities x and y, and a and b are consecutive cities in T chosen to minimize the length increase [3]. Here we propose another basic insertion technique, consisting of single node insertion and p nodes insertion.
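The insertion cost above can be sketched in code as follows (a minimal illustration, not the authors' implementation; the function and helper names are our own):

```python
import math

def dist(a, b):
    """Euclidean distance between two points (x, y)."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def cheapest_insertion_point(tour, coords, city):
    """Return (position, cost) for inserting `city` into the closed tour,
    where cost = d(a, city) + d(city, b) - d(a, b) is minimized over all
    consecutive pairs (a, b) of the tour."""
    best_pos, best_cost = None, float("inf")
    for i in range(len(tour)):
        a, b = tour[i], tour[(i + 1) % len(tour)]
        cost = (dist(coords[a], coords[city])
                + dist(coords[city], coords[b])
                - dist(coords[a], coords[b]))
        if cost < best_cost:
            best_pos, best_cost = i + 1, cost
    return best_pos, best_cost
```

For the four corners of the unit square with partial tour (0, 1, 2), the cheapest place for city 3 is between cities 2 and 0, with cost 2 - sqrt(2).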

2.1 Single node insertion


Using Graph Theory terms, a city is called a node. The basic insertion process can be described as follows:

Firstly, set V = {v_1, v_2, ..., v_n}, the set of all nodes, and let T be the ordered set of travelled nodes yielded by the insertion method; select two nodes from V at random and put them into T.

Secondly, select one node v from V \ T and insert it between the pair of consecutive nodes of T that gives the smallest length increase; a new tour is thus obtained.

Finally, repeat the above insertion process for the other nodes of V \ T, and stop when every node has been inserted (T = V).

There are some problems with using this insertion method to solve the TSP: in general it cannot obtain the optimal solution. Therefore, we define two further insertion moves. Suppose T = (v_1, v_2, ..., v_n) is the original path and a is a random positive integer (1 <= a <= n).

Definition 1: We take v_a out of the original path and then insert it between v_i and v_(i+1) (i != a-1, a), until we find a new path whose length is smaller than that of the original path. We define this process as Single Node Insertion, as illustrated in Fig. 1.

FIGURE 1 SINGLE NODE INSERTION: 1(a) the original tour; 1(b) the tour after removing node 4; 1(c) the new tour after inserting node 4 between node 3 and node 5

Sometimes Single Node Insertion will not find a better solution. As illustrated in Fig. 2, if we insert only one node, such as node 5 between node 4 and node 7, we cannot obtain a better solution. Only when we insert node 5 and node 6 together between node 4 and node 7 do we get the best solution. So we need to insert several nodes at once when Single Node Insertion cannot find a better solution.
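One pass of Single Node Insertion can be sketched as follows (our own illustrative code, not the authors' implementation; it accepts the first reinsertion that shortens the tour):

```python
import math

def dist(a, b):
    """Euclidean distance between two points (x, y)."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def tour_length(tour, coords):
    """Length of the closed tour through the given city indices."""
    return sum(dist(coords[tour[i]], coords[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def single_node_insertion(tour, coords):
    """Definition 1: remove each node in turn and try reinserting it at
    every other position; keep the first reinsertion that shortens the
    tour. Returns (new_tour, improved_flag)."""
    base = tour_length(tour, coords)
    for i in range(len(tour)):
        rest = tour[:i] + tour[i + 1:]
        for j in range(len(rest) + 1):
            if j == i:                 # same position: tour unchanged
                continue
            cand = rest[:j] + [tour[i]] + rest[j:]
            if tour_length(cand, coords) < base - 1e-12:
                return cand, True
    return tour, False
```

On the unit square, the crossed tour (0, 2, 1, 3) of length 2 + 2*sqrt(2) is repaired to an uncrossed tour of length 4 by moving a single node.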

2.2 p nodes insertion



FIGURE 2 P NODES INSERTION: 2(a) the original tour; 2(b) the tour after removing nodes 5 and 6; 2(c) the new tour after inserting nodes 5 and 6 between node 4 and node 7

Definition 2: We pick the sub-path (v_a, ..., v_(a+p-1)) out of T and then insert it between v_i and v_(i+1), until we obtain a new path whose length is smaller than that of the original. We define this process as p Nodes Insertion.
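Definition 2 generalizes the single-node move to a segment of p consecutive nodes. A sketch (again our own illustrative code, under the same first-improvement policy):

```python
import math

def dist(a, b):
    """Euclidean distance between two points (x, y)."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def tour_length(tour, coords):
    """Length of the closed tour through the given city indices."""
    return sum(dist(coords[tour[i]], coords[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def p_nodes_insertion(tour, coords, p):
    """Definition 2: try moving each length-p segment of the tour to every
    other position; keep the first move that shortens the tour.
    Returns (new_tour, improved_flag)."""
    base = tour_length(tour, coords)
    n = len(tour)
    for i in range(n - p + 1):
        seg, rest = tour[i:i + p], tour[:i] + tour[i + p:]
        for j in range(len(rest) + 1):
            cand = rest[:j] + seg + rest[j:]
            if tour_length(cand, coords) < base - 1e-12:
                return cand, True
    return tour, False
```

With p = 1 this reduces to Single Node Insertion; larger p lets the search relocate whole sub-paths, as in the node-5-and-6 example of Fig. 2.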

2.3 steps of GIA


GIA is presented in the following steps.

Step 1 (initialization): set the maximal number of inserted nodes (P), the maximal iteration number (PN), and the stall threshold (Q); if the path does not change after Q rounds, the random disturbance of M nodes (Section 3) is applied. Set p = q = pn = 1 and randomly generate an initial solution IT.

Step 2: set pn = pn + 1. If pn = PN, go to Step 7; otherwise, do Single Node Insertion to generate a new path IT'.

Step 3: judge whether the path changes. If it changes, let IT = IT' and go to Step 2; otherwise, set p = p + 1 and go to Step 4.

Step 4: if p < P, do p Nodes Insertion to generate a new path IT' and go to Step 5; otherwise, go to Step 6.

Step 5: judge whether the path changes. If it changes, let IT = IT' and go to Step 2; otherwise, set p = p + 1 and go to Step 4.

Step 6: set q = q + 1. If q = Q, go to Step 7; otherwise, set p = 1 and go to Step 2.

Step 7: stop iterating and output the best solution IT.

When Single Node Insertion is used together with p Nodes Insertion, the iteration converges quickly, but the result sometimes falls into a local optimum. This is because the improved insertion moves can only guarantee descent of the solution, not global optimality. So we need a strategy that lets the result escape local optima as far as possible. Examining the difference between local and global optimal solutions, we find that it lies in several consecutive nodes; hence we sometimes need to disrupt part of the nodes of a solution trapped in a local optimum and then search again.
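The steps above can be condensed into the following self-contained sketch (our own simplified reading of GIA, not the authors' code; the stall counter q and the disturbance are omitted here since they belong to GIARD, and `move_segment` treats Single Node Insertion as the p = 1 case):

```python
import math, random

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def tour_length(tour, coords):
    return sum(dist(coords[tour[i]], coords[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def move_segment(tour, coords, p):
    """Try moving each length-p segment to every other position; return
    the first shorter tour found, or None if no move of size p helps."""
    base = tour_length(tour, coords)
    for i in range(len(tour) - p + 1):
        seg, rest = tour[i:i + p], tour[:i] + tour[i + p:]
        for j in range(len(rest) + 1):
            cand = rest[:j] + seg + rest[j:]
            if tour_length(cand, coords) < base - 1e-12:
                return cand
    return None

def gia(coords, P=3, PN=500, seed=0):
    """GIA loop: repeat single node insertion (p = 1), escalating the
    segment size up to P while no improvement is found, for at most PN
    iterations (Steps 1-7 in the text)."""
    random.seed(seed)
    tour = list(range(len(coords)))
    random.shuffle(tour)                      # Step 1: random initial tour
    for _ in range(PN):                       # Step 2: iteration cap
        p, improved = 1, False
        while p <= P:                         # Steps 2-5: try p = 1..P
            cand = move_segment(tour, coords, p)
            if cand is not None:
                tour, improved = cand, True   # improvement: restart at p = 1
                break
            p += 1
        if not improved:                      # Steps 6-7: local optimum
            break
    return tour, tour_length(tour, coords)
```

On the four corners of the unit square, this descends to the optimal tour of length 4 from any starting permutation.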

3 GREEDY INSERTION ALGORITHM WITH RANDOM DISTURBANCE (GIARD)


GIA easily becomes trapped in local optima, so a diversification strategy is needed. Here we describe an improved insertion technique consisting of single node insertion, p nodes insertion, and greedy insertion with random disturbance. Following the idea of the insertion method, we rearrange the nodes forming the sub-path that may cause the local optimum and search the path again; this is an improvement of the insertion method. Since we cannot tell which segment of the path has fallen into a local optimum, it is useful to select some nodes at random and disrupt them. In order not to destroy most of the optimal tour, we only select part of the tour in this process. A tabu list is then used to forbid the nodes that have already been disrupted, so that most of the nodes get the opportunity to be disrupted as the procedure repeats.

Definition 3: Suppose T = (v_1, v_2, ..., v_n) is the original path and a is a random positive integer. If a is not in the tabu list, we randomly disturb the nodes from the a-th to the (a+M)-th position of the current feasible solution, obtaining a new path. We call this process Random Initialization Disturbance.

The flow chart of GIARD is given in Fig. 3.
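Definition 3 can be sketched as follows (our own illustration; the assumption that the tabu list stores already-used start indices, and resets once exhausted, is ours):

```python
import random

def random_disturbance(tour, M, tabu, rng=random):
    """Random Initialization Disturbance: pick a random start index a that
    is not in the tabu list, shuffle the M nodes from position a onward,
    and add a to the tabu list."""
    n = len(tour)
    candidates = [a for a in range(n - M) if a not in tabu]
    if not candidates:
        tabu.clear()                 # every start index tried: reset the list
        candidates = list(range(n - M))
    a = rng.choice(candidates)
    tabu.add(a)
    segment = tour[a:a + M]
    rng.shuffle(segment)             # rearrange only the selected nodes
    return tour[:a] + segment + tour[a + M:]
```

The disturbed tour remains a permutation of the same cities; only the order within the chosen window changes.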

4 COMPUTATIONAL RESULTS
4.1 Result and comparison of GIA with GIARD
Five instances from the TSPLIB library [20] were selected to examine the validity and performance of GIA and GIARD. The parameters of the previous sections were set to P = M = S/10 (where S is the instance size), PN = 500 and Q = 20, and each instance was run 100 times. The experimental results in Tab. 1 show that GIARD performs better. The third column of the table gives the best known optimal tour length (Opt); the fourth, fifth and sixth columns give, respectively, the best, worst and average tour lengths for each instance; the seventh column gives the relative error with respect to Opt, in percent.
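As a quick aid for reading the error column, percent deviation from a reference value can be computed as follows (a generic helper of our own; the paper's exact error formula is not reproduced here):

```python
def relative_error(value, opt):
    """Percent deviation of a tour length from the reference value:
    (value - opt) / opt * 100."""
    return (value - opt) / opt * 100.0
```

A value below the reference yields a negative error, which is why some entries in the table are negative.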
TAB. 1 RESULT OF COMPARISON BETWEEN GIA AND GIARD

| Instance | Size | Opt       | Algorithm | Best result | Worst result | Avg     | Err (%) |
|----------|------|-----------|-----------|-------------|--------------|---------|---------|
| bayg29   | 29   | 9074.148  | GIA       | 9074.1      | 9084.1       | 9078.4  | -0.025  |
|          |      |           | GIARD     | 9074.1      | 9074.1       | 9074.1  | 0       |
| att48    | 48   | 33523.708 | GIA       | 33608.3     | 33656.5      | 33622.7 | 0.2734  |
|          |      |           | GIARD     | 33588.3     | 33600.5      | 33582.7 | 0.1759  |
| berlin52 | 52   | 7544.3659 | GIA       | 7554.3      | 7594.3       | 7558.3  | 0.067   |
|          |      |           | GIARD     | 7544.3      | 7544.3       | 7544.3  | 0.002   |
| gr96     | 96   | 512.3094  | GIA       | 513.8       | 523.9        | 518.1   | -0.131  |
|          |      |           | GIARD     | 510.8       | 513.9        | 512.1   | -0.041  |
| kroA100  | 100  | 21285.443 | GIA       | 21290.4     | 21324.8      | 21323.1 | 0.211   |
|          |      |           | GIARD     | 21285.4     | 21381.8      | 21309.1 | 0.111   |

FIG. 3 GIARD FLOW CHART

From Tab. 1 it can be seen that, for all five instances, the average relative errors of GIARD are smaller than those of GIA, and the best results yielded by GIARD are at least as good as those of GIA. So the random disturbance added to GIA plays a role in preventing the solution from being trapped in local optima.

FIG. 4 CONVERGENCE OF THE AVERAGE TOUR LENGTH OF BAYG29 AND GR96



Owing to the random disturbance during the iteration procedure, GIARD achieves better convergence than GIA: in the early iterations GIARD converges a little more slowly than GIA, while in the later stage its result is better. Fig. 4 demonstrates the convergence of GIA and GIARD on TSP-bayg29 and TSP-gr96.

4.2 Parameters analysis


In GIARD, there are two important parameters, R1 and R2, which directly affect the algorithm's performance. R1 denotes the maximal ratio of the number of cities used in p Nodes Insertion, and R2 is the proportion of the nodes subjected to random disturbance. First, the results of tuning R2 are presented. It can be seen from Tab. 2 that the algorithm performs best when R2 = 0.3 or R2 = 0.4.
TAB. 2 EXPERIMENTAL RESULTS OF TUNING R2

| Setting        |     | bayg29 | att48   | berlin52 | gr96  | kroA100 |
|----------------|-----|--------|---------|----------|-------|---------|
| R1=0.9, R2=0.2 | Min | 9074.1 | 33523.7 | 7544.4   | 512.3 | 21285.4 |
|                | Max | 9094.6 | 33588.3 | 7777.3   | 520.5 | 21400.8 |
|                | Avg | 9078.2 | 33549.6 | 7625.4   | 514.7 | 21343.8 |
| R1=0.9, R2=0.3 | Min | 9074.1 | 33523.7 | 7544.4   | 510.9 | 21285.4 |
|                | Max | 9074.1 | 33701.3 | 7544.4   | 513.7 | 21521.2 |
|                | Avg | 9074.1 | 33574.5 | 7544.4   | 512.0 | 21424.1 |
| R1=0.9, R2=0.4 | Min | 9074.1 | 33523.7 | 7544.4   | 511.4 | 21307.4 |
|                | Max | 9074.1 | 33784.0 | 7782.9   | 515.5 | 21438.1 |
|                | Avg | 9074.1 | 33644.0 | 7595.9   | 512.9 | 21370.3 |
| R1=0.9, R2=0.5 | Min | 9074.1 | 33600.5 | 7544.4   | 513.2 | 21285.4 |
|                | Max | 9074.1 | 33831.7 | 7642.8   | 521.1 | 21452.9 |
|                | Avg | 9074.1 | 33646.8 | 7564.0   | 515.4 | 21339.3 |
| R1=0.9, R2=0.6 | Min | 9074.1 | 33523.7 | 7544.3   | 511.5 | 21307.4 |
|                | Max | 9074.1 | 33701.2 | 7746.8   | 516.9 | 21627.7 |
|                | Avg | 9074.1 | 33574.5 | 7584.8   | 513.6 | 21454.9 |

The larger R1 is, the more insertions are needed and the greater the number of calculations required. Hence, we should decrease R1 without degrading solution quality. It can be seen from Tab. 3 that better results are obtained with R1 = 0.7, so we use R1 = 0.7 in the specific implementation.
TAB. 3 EXPERIMENTAL RESULTS OF TUNING R1

| Setting        |     | bayg29 | att48   | berlin52 | gr96  | kroA100 |
|----------------|-----|--------|---------|----------|-------|---------|
| R1=0.9, R2=0.3 | Min | 9074.1 | 33523.7 | 7544.4   | 510.9 | 21285.4 |
|                | Max | 9074.1 | 33701.3 | 7544.4   | 513.7 | 21521.2 |
|                | Avg | 9074.1 | 33574.5 | 7544.4   | 512.0 | 21424.1 |
| R1=0.8, R2=0.3 | Min | 9074.1 | 33523.7 | 7544.4   | 513.3 | 21285.4 |
|                | Max | 9074.1 | 33701.3 | 7544.4   | 516.2 | 21381.3 |
|                | Avg | 9074.1 | 33623.0 | 7544.4   | 514.6 | 21309.0 |
| R1=0.7, R2=0.3 | Min | 9074.1 | 33523.7 | 7544.4   | 510.9 | 21285.4 |
|                | Max | 9074.1 | 33966.1 | 7742.6   | 515.5 | 21454.4 |
|                | Avg | 9074.1 | 33683.2 | 7584.0   | 513.3 | 21369.0 |
| R1=0.6, R2=0.3 | Min | 9074.1 | 33600.5 | 7544.3   | 510.9 | 21285.4 |
|                | Max | 9094.6 | 33804.2 | 7897.7   | 516.3 | 21452.9 |
|                | Avg | 9078.2 | 33698.1 | 7661.6   | 513.0 | 21345.6 |

5 CONCLUSION
GIARD is proposed by combining the p Nodes Insertion method with random disturbance. The results of our computational experiments show clearly that the combined algorithm can serve as a fast heuristic of relatively good quality for a wide variety of TSP instances. For the instances whose results still fall into local optima, we find that they cannot be solved by the GIARD described above alone; hence, a study of new insertion methods and a more comprehensive random disturbance will be conducted. On the other hand, GIARD is not a swarm intelligence algorithm, so we make no comparison with that class of methods.

ACKNOWLEDGMENT
This paper is part of a project supported by China University of Petroleum (Grant No. KYJJ2012-06-03). The authors wish to express their appreciation to the supporters.

REFERENCES
[1] D.L. Applegate, R.E. Bixby, et al. The Travelling Salesman Problem: A Computational Study. Princeton University Press, 2006
[2] G. Gutin, A.P. Punnen. The Travelling Salesman Problem and Its Variations. Kluwer, Dordrecht, 2002
[3] Righini G. The Largest Insertion algorithm for the Travelling Salesman Problem. Note del Polo - Ricerca 29, Dipartimento di Tecnologie dell'Informazione, Universita di Milano, 2000
[4] C.N. Fiechter. A parallel tabu search algorithm for large travelling salesman problems. Discrete Applied Mathematics 51, (1994): 243-267
[5] S. Kirkpatrick, C.D. Gelatt, M.P. Vecchi. Optimization by Simulated Annealing. Science, New Series, Vol. 220, No. 4598, (1983): 671-680
[6] Dorigo, M., & Gambardella, L.M. Ant colonies for the travelling salesman problem. BioSystems, 43, (1997): 73-81
[7] Fogel, D.B. An evolutionary approach to the travelling salesman problem. Biological Cybernetics, 60, (1988): 139-144
[8] Fogel, D.B. Empirical estimation of the computation required to discover approximate solutions to the travelling salesman problem using evolutionary programming. In Proceedings of 2nd Annual Conference on Evolutionary Programming, (1993a): 56-61
[9] Fogel, D.B. Applying evolutionary programming to selected travelling salesman problems. Cybernetics and Systems: An International Journal, 24, (1993b): 27-36
[10] Freisleben, B., & Merz, P. A genetic local search algorithm for solving symmetric and asymmetric travelling salesman problems. In Proceedings of the IEEE International Conference on Evolutionary Computation. IEEE Press, (1996): 616-621
[11] J.J. Hopfield and D.W. Tank. "Neural" computation of decisions in optimization problems. Biological Cybernetics, 52, (1985): 141-152
[12] A. Joppe, H.R.A. Cardon, J.C. Bioch. "A Neural Network for Solving the Travelling Salesman Problem on the Basis of City Adjacency in the Tour". In Proceedings of the Int. Joint Conf. on Neural Networks, San Diego, CA, (1990): III-961-964
[13] Jacek Mandziuk. Solving the Travelling Salesman Problem with a Hopfield-type neural network. Demonstratio Mathematica, 29(1), (1996): 219-231
[14] Eberhart, R.C. & Shi, Y. Particle swarm optimization: developments, applications and resources. Proceedings of the 2001 Congress on Evolutionary Computation, Vol. 1, (2001): 81-86
[15] Clerc, M. Discrete particle swarm optimization, illustrated by the travelling salesman problem. In: Studies in Fuzziness and Soft Computing - New Optimization Techniques in Engineering, Babu, B.V. & Onwubolu, G.C. (Eds.), Vol. 141, (2004): 219-239
[16] L.P. Wong, M.Y.H. Low, and C.S. Chong. Bee colony optimization with local search for travelling salesman problem. In Proc. of 6th IEEE International Conference on Industrial Informatics (INDIN 2008). IEEE, 1019-1025
[17] Gendreau M., Hertz A., Laporte G. New insertion and postoptimization procedures for the travelling salesman problem. Operations Research (40), (1992): 1086-1094
[18] Gendreau, M., A. Hertz, G. Laporte, M. Stan. A generalized insertion heuristic for the travelling salesman problem with time windows. Oper. Res. 46, (1998): 330-335
[19] Murat Albayrak, Novruz Allahverdi. Development of a new mutation operator to solve the Travelling Salesman Problem by aid of Genetic Algorithms. Expert Systems with Applications 38, (2011): 1313-1320
[20] G. Reinelt. TSPLIB - A travelling salesman problem library. ORSA Journal on Computing 3(4), (1991): 376-384


AUTHORS

Jianjun Liu, born in Dec. 1973 in Inner Mongolia, received a Doctor's degree in mathematics at Northwest University in 2003 and has been an Associate Professor at China University of Petroleum (Beijing) since 2005. His research interests mainly include intelligent optimization algorithms and optimal control.

Yuan Li, born in Sep. 1987 in Jiangsu Province, received a Bachelor's degree in mathematics at Tianjin University in 2010. He is now a graduate student at China University of Petroleum (Beijing). His research interest is swarm intelligence optimization algorithms.

Xinrui Wang, born in May 1990 in Shandong Province, received a Bachelor's degree in statistics at Chongqing University in 2012. She is now a graduate student at China University of Petroleum (Beijing).

Yuan Wen, born in July 1988 in Sichuan Province, received a Bachelor's degree in mathematics at China University of Petroleum (Beijing) in 2010. She is now a graduate student at China University of Petroleum (Beijing). Her research interest is swarm intelligence optimization algorithms.

Tingying Zhou, born in July 1988 in Heilongjiang Province, received a Bachelor's degree in mathematics at Fudan University in 2010. She is now a graduate student at China University of Petroleum (Beijing). Her research interest is swarm intelligence optimization algorithms.
