The dual graph G* of a plane graph G is a plane graph whose vertices correspond to the faces of G. The edges of G* correspond to the edges of G as follows: if e is an edge of G with face X on one side and face Y on the other side, then the endpoints of the dual edge e* are the vertices of G* that represent the faces X and Y of G. This concept is only valid for plane graphs [Jai06, Wes01].
In some applications, it is natural to assign a number to each edge of a graph. The resulting graph is
called a weighted graph. The weight associated with an edge is usually a real number and it is denoted
by w(e). The sum of the weights of all edges of a graph is called its weight [Gib85].
2.2 Data Structures
We now introduce some of the common data structures that are used to represent graphs for computa-
tional purposes.
2.2.1 Adjacency Matrices. An adjacency matrix is a way of representing the edges in a graph as a matrix. For a given graph G = (V, E), its adjacency matrix is a |V| × |V| matrix A such that

A[i, j] = 1 if (v_i, v_j) ∈ E, and A[i, j] = 0 otherwise,

where V = {v_1, v_2, ..., v_{|V|}}. In the case of a weighted graph G = (V, E), its adjacency matrix A is given by

A[i, j] = w((v_i, v_j)) if (v_i, v_j) ∈ E, and A[i, j] = 0 otherwise.
Figure 2.4 shows a weighted graph G and its corresponding adjacency matrix A. Note that A is symmetric [Wil96].
[Figure: the weighted graph G on vertices v_1, ..., v_5.]

         v_1  v_2  v_3  v_4  v_5
A = v_1 [ 0    2    0    2    4 ]
    v_2 [ 2    0    1    0    3 ]
    v_3 [ 0    1    0    6    0 ]
    v_4 [ 2    0    6    0    5 ]
    v_5 [ 4    3    0    5    0 ]

L:  v_1 : (v_2, 2), (v_4, 2), (v_5, 4)
    v_2 : (v_1, 2), (v_3, 1), (v_5, 3)
    v_3 : (v_2, 1), (v_4, 6)
    v_4 : (v_1, 2), (v_3, 6), (v_5, 5)
    v_5 : (v_1, 4), (v_2, 3), (v_4, 5)

Figure 2.4: The adjacency matrix A and the adjacency list L of graph G.
Clearly, the specification of A requires O(n²) steps; thus the use of an adjacency matrix for representing graphs rules out all algorithms of complexity O(|E|).
2.2.2 Adjacency List. An adjacency list is a representation of all edges or arcs in a graph as a list. Notice that with an undirected graph, an adjacency list representation duplicates the edge information. Adjacency lists can also be used to represent weighted graphs, where each node in the adjacency list represents an edge with its weight, as shown in Figure 2.4. An adjacency list is an efficient representation of a graph, because each node has a list which only contains the nodes that are its neighbours. Thus the specification of the adjacency list requires O(|E|) steps.
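As a minimal sketch of both representations (illustrative code, not the implementation from Appendix A), the weighted graph of Figure 2.4 can be stored as follows:

```python
# Build the adjacency matrix and adjacency list of the weighted graph G
# of Figure 2.4 (vertices v1..v5, here 0-based; symmetric weights).
edges = [(0, 1, 2), (0, 3, 2), (0, 4, 4),
         (1, 2, 1), (1, 4, 3),
         (2, 3, 6),
         (3, 4, 5)]                      # (i, j, w) triples

n = 5
A = [[0] * n for _ in range(n)]          # adjacency matrix, O(n^2) space
L = {i: [] for i in range(n)}            # adjacency list, O(|E|) space
for i, j, w in edges:
    A[i][j] = A[j][i] = w                # undirected: store both directions
    L[i].append((j, w))
    L[j].append((i, w))

print(A[0])   # row of v1: [0, 2, 0, 2, 4]
print(L[2])   # neighbours of v3 with weights: [(1, 1), (3, 6)]
```

Note how the matrix stores a 0 for every absent edge, while the list only stores the edges that exist.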
2.3 Trees
In this section, we introduce the notion of trees, which is an important class of graphs. A tree T is a
connected graph with no cycles. The edges of trees are called branches. A forest is a disjoint union of trees.
Let T be a graph with n vertices; then the following statements are equivalent [Wil96]:
- T contains no cycles, and has n − 1 edges.
- T is connected, and has n − 1 edges.
- There is a unique path between every pair of vertices in T.
- T contains no cycles, and adding a new edge to T creates a unique cycle.
- T is connected, and removing any edge from T makes it disconnected.
A spanning tree of a connected graph G is a subgraph of G that includes all the vertices of G and has
the properties of a tree.
2.3.1 Theorem. A graph is connected if, and only if, it has a spanning tree.
Proof. Let G be a connected graph. While G has a cycle, remove one edge from the cycle until we
get a connected acyclic subgraph H. Then H is a spanning tree of G. On the other hand, if G has a
spanning tree, then there is a path between each pair of vertices in G; thus G is connected.
2.4 Depth First Search and Breadth First Search
In this section, we introduce some commonly-used algorithms for generating spanning trees of a con-
nected graph.
2.4.1 Depth First Search Algorithm. To generate a spanning tree of a given connected graph G, we need to visit all the vertices of G exactly once. In the Depth First Search (D.F.S) algorithm, suppose we are currently at vertex v; the general requirement is that the next vertex to be visited is adjacent to v and has not yet been visited. If no such vertex exists, then the search returns to the vertex visited just before v. This process is repeated until every vertex has been visited. The D.F.S procedure is shown in Algorithm 1.
Algorithm 1 D.F.S algorithm
Require: Connected graph G = (V, E).
Ensure: Spanning tree T.
T ← ∅
for all v ∈ V do
  v.visit ← False
end for
for all v ∈ V do
  if v.visit = False then
    DFS(v) {Defined in Algorithm 2}
  end if
end for
Algorithm 2 DFS(u)
Require: vertex u.
Ensure: DFS(u).
u.visit ← True
for all u′ adjacent to u do
  if u′.visit = False then
    T ← T ∪ {(u, u′)}
    DFS(u′)
  end if
end for
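A recursive Python sketch of Algorithms 1 and 2, assuming the graph is given as a dictionary of adjacency lists (names are illustrative):

```python
def dfs_spanning_tree(adj, start):
    """Spanning-tree edges of a connected graph via D.F.S,
    mirroring Algorithms 1 and 2."""
    visited = {start}                 # the v.visit flags
    tree = []                         # the spanning tree T

    def dfs(u):                       # Algorithm 2: DFS(u)
        for v in adj[u]:              # for all u' adjacent to u
            if v not in visited:      # u'.visit = False
                visited.add(v)
                tree.append((u, v))   # T <- T ∪ {(u, u')}
                dfs(v)

    dfs(start)
    return tree

# the graph of Figure 2.4 (weights omitted, since D.F.S ignores them)
adj = {1: [2, 4, 5], 2: [1, 3, 5], 3: [2, 4], 4: [1, 3, 5], 5: [1, 2, 4]}
T = dfs_spanning_tree(adj, 1)
print(T)  # [(1, 2), (2, 3), (3, 4), (4, 5)]
```

The search dives as deep as possible before backtracking, so on this graph it produces a path.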
2.4.2 Breadth First Search Algorithm. The general requirement in Breadth First Search (B.F.S) is
that all vertices adjacent to v which have not yet been visited, are visited in some order right afterwards.
This process is repeated for each of those adjacent vertices, until each vertex has been visited. The
algorithm of B.F.S is shown in Algorithm 3.
The running time of D.F.S and B.F.S is O(|E|), therefore a spanning tree can be found in linear time
using any of these algorithms [Gib85].
Algorithm 3 B.F.S algorithm
Require: Connected graph G = (V, E).
Ensure: Spanning tree T.
i ← 1
L ← []
T ← ∅
for all v ∈ V do
  v.order ← 0
end for
choose u from V
u.order ← i
i ← i + 1
L.append(u)
while L non-empty do
  remove the first element u from L
  for all u′ adjacent to u do
    if u′.order = 0 then
      u′.order ← i
      i ← i + 1
      L.append(u′)
      T ← T ∪ {(u, u′)}
    end if
  end for
end while
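A Python sketch of Algorithm 3, using a double-ended queue for the list L (illustrative, not the thesis code):

```python
from collections import deque

def bfs_spanning_tree(adj, start):
    """Spanning-tree edges via B.F.S (Algorithm 3): all unvisited
    neighbours of u are visited before moving deeper."""
    order = {start: 1}             # v.order; a missing key means order 0
    tree = []                      # the spanning tree T
    queue = deque([start])         # the list L
    while queue:                   # while L non-empty
        u = queue.popleft()        # remove the first element of L
        for v in adj[u]:
            if v not in order:     # u'.order = 0
                order[v] = len(order) + 1
                queue.append(v)
                tree.append((u, v))
    return tree

# the graph of Figure 2.4 (weights omitted)
adj = {1: [2, 4, 5], 2: [1, 3, 5], 3: [2, 4], 4: [1, 3, 5], 5: [1, 2, 4]}
T = bfs_spanning_tree(adj, 1)
print(T)  # [(1, 2), (1, 4), (1, 5), (2, 3)]
```

In contrast to the D.F.S tree, the B.F.S tree is shallow: the start vertex acquires all its neighbours as children first.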
3. Optimum Spanning Trees
In this chapter, we introduce the problem of finding Minimum Spanning Trees (M.S.T) of a connected weighted graph. We also discuss Prim's and Kruskal's algorithms, which are the best-known classic greedy algorithms for solving the M.S.T problem of a given graph G. In order to simplify the description of the algorithms, we assume G is simple and connected.
3.1 Problem Statement
Consider the problem of constructing a railway system linking a set of towns, or connecting a set of routers in a computer network, given the cost of every possible direct connection. How can we establish all the connections at a minimum cost? We can model this kind of problem using graph theory, by generating a weighted graph G whose vertices represent the objects we need to connect (towns or routers), and whose edges, with their weights, represent the direct links with their costs.
Greedy algorithms are algorithms that use the strategy of constructing a solution piece by piece, always choosing the next piece that makes the solution locally optimal. Although this approach can fail to give the optimal solution for some computational tasks, our problem is one of the applications where greedy algorithms succeed in computing the optimum solution [DPV06].
Problem statement: Given a weighted graph G, we are interested in finding a spanning tree T with the smallest total weight.
3.2 Kruskal's Algorithm
Kruskal's algorithm is an iterative method for solving the M.S.T problem of a connected weighted graph G with n vertices. The algorithm starts by initializing a graph T with all the vertices in G and no edges. After that, it repeats the operation of adding an edge with minimum weight to T provided that no cycle is created; otherwise it ignores the edge. This repetition terminates when T has (n − 1) edges.
Algorithm 4 Kruskal's Algorithm for Solving the M.S.T Problem
Require: A simple connected weighted graph G with n vertices.
Ensure: A minimum spanning tree T of G.
Sort the edges with respect to their weights as e_1, e_2, ..., e_m
T ← ∅
i ← 1
while T has less than (n − 1) edges do
  if (T ∪ {e_i}) does not contain a cycle then
    add e_i to T
  end if
  i ← i + 1
end while
Figure 3.1 illustrates the execution of this algorithm.
[Figure: the connected weighted graph G on vertices v_1, ..., v_5, and Steps 0-4 showing the edges added by Kruskal's algorithm, ending with the minimum spanning tree of G.]

Clusters   Step 0   Step 1     Step 2          Step 3          Step 4
C(v_1)     {v_1}    {v_1,v_4}  {v_1,v_2,v_4}   {v_1,v_2,v_4}   {v_1,v_2,v_3,v_4,v_5}
C(v_2)     {v_2}    {v_2}      {v_1,v_2,v_4}   {v_1,v_2,v_4}   {v_1,v_2,v_3,v_4,v_5}
C(v_3)     {v_3}    {v_3}      {v_3}           {v_3,v_5}       {v_1,v_2,v_3,v_4,v_5}
C(v_4)     {v_4}    {v_1,v_4}  {v_1,v_2,v_4}   {v_1,v_2,v_4}   {v_1,v_2,v_3,v_4,v_5}
C(v_5)     {v_5}    {v_5}      {v_5}           {v_3,v_5}       {v_1,v_2,v_3,v_4,v_5}

Figure 3.1: The steps of constructing an M.S.T of a connected weighted graph G by using Kruskal's algorithm; the table shows how the clusters of vertices change at each step.
3.2.1 Theorem (Correctness). Given a connected weighted graph G with n vertices, Kruskal's algorithm generates an M.S.T of G.

Proof. Clearly, Kruskal's algorithm generates a spanning tree T of G; this is ensured by the process of ignoring the edges which create cycles. Since G is a connected graph, the iteration terminates when T gets exactly (n − 1) edges; hence T is a spanning tree.

We want to show that T has minimum total weight. By contradiction, assume T is not an M.S.T of G, and assume that the (n − 1) edges of T are added in the order e_1, e_2, ..., e_{n−1}. Define an M.S.T T_k ≠ T of G with the largest index k such that e_1, e_2, ..., e_{k−1} ∈ T_k and e_k is not in T_k. Adding e_k to T_k creates a unique cycle, and since T has no cycles, this implies that there exists an edge e′_k in the cycle which is different from e_1, e_2, ..., e_k. Therefore, T′ = T_k − e′_k + e_k is a spanning tree, and since Kruskal's algorithm chose e_k while e′_k was also available, w(e_k) ≤ w(e′_k); thus w(T′) ≤ w(T_k). Since T_k is an M.S.T of G, w(T′) ≥ w(T_k); hence T′ is an M.S.T of G. But T′ contains the first k edges e_1, e_2, ..., e_k of T, which contradicts the fact that T_k is an M.S.T with a maximum number of (k − 1) first edges shared with T. Therefore, T is an M.S.T of G.
3.2.2 The Implementation of Kruskal's Algorithm. The progress of Kruskal's algorithm is based on two main operations. It starts by sorting the edges by their weights. This operation can be implemented by using a priority queue Q, which contains all edges of G ordered by their weights.

The second operation tests whether the addition of an edge e_i to T, in step i, creates a cycle. To simplify this test, we can define a cluster C(v) for each vertex v in G, such that at each step, C(v) is the list of all vertices connected with v (as shown in Figure 3.1). Initially, the cluster C(v) contains only v. To test whether the addition of e_i = (u, v) to T generates a cycle, we simply check whether u and v belong to the same cluster. The Python implementation of this algorithm is shown in Appendix A, Section A.1.
Algorithm 5 The Implementation Algorithm
Require: A simple connected weighted graph G with n vertices.
Ensure: A minimum spanning tree T of G.
for each vertex v in G do
  initialize a cluster of v: v.cluster ← {v}
end for
Define a priority queue Q that contains all the edges in G ordered with respect to their weights.
T ← ∅
while T has less than (n − 1) edges do
  remove the first edge (u, v) from Q
  if u.cluster ≠ v.cluster then
    add (u, v) to T
    u.cluster ← merge(u.cluster, v.cluster)
    v.cluster ← u.cluster
  end if
end while
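A Python sketch of Algorithm 5, with Python sets standing in for the clusters C(v) and a pre-sorted edge list standing in for the priority queue Q (illustrative code, not the Appendix A implementation):

```python
def kruskal(n, edges):
    """Kruskal's algorithm following Algorithm 5. `edges` holds
    (weight, u, v) triples over vertices 0..n-1; each vertex carries a
    pointer to the cluster C(v) that currently contains it."""
    cluster = [{v} for v in range(n)]     # C(v) = {v}
    pointer = list(range(n))              # v -> index of its cluster
    tree = []
    for w, u, v in sorted(edges):         # sorted edges = the queue Q
        if pointer[u] != pointer[v]:      # different clusters: no cycle
            tree.append((u, v, w))
            a, b = pointer[u], pointer[v]
            if len(cluster[a]) > len(cluster[b]):
                a, b = b, a               # a is the smaller cluster
            cluster[b] |= cluster[a]      # merge smaller into larger
            for x in cluster[a]:
                pointer[x] = b            # re-point the moved vertices
            cluster[a] = set()
            if len(tree) == n - 1:
                break
    return tree

# the weighted graph of Figure 2.4, with 0-based vertices
edges = [(2, 0, 1), (2, 0, 3), (4, 0, 4),
         (1, 1, 2), (3, 1, 4), (6, 2, 3), (5, 3, 4)]
T = kruskal(5, edges)
print(sorted(T), sum(w for _, _, w in T))  # M.S.T edges and total weight 8
```

Merging the smaller cluster into the larger one is exactly the trick analysed in Section 3.2.3.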
3.2.3 The Complexity of the Algorithm. The initialization of a priority queue Q, which contains the sorted edges, needs an application of a sorting algorithm with complexity O(m log m) (Merge-sort, etc.). We can define C(v) as a pointer to the position of the cluster list of v; hence we only need one comparison to test whether C(u) = C(v).

The merging of two clusters can be executed by appending the cluster with the smaller number of elements to the larger one, and making both pointers C(u) and C(v) point to the same merged cluster. Thus, the time complexity of merging the two clusters C(u) and C(v) is O(min(|C(u)|, |C(v)|)).

For a vertex v, the cluster C(v) of v starts with the single element v. At each merging step, the element v is appended to C(u) only while |C(v)| ≤ |C(u)| and |C(v)| ≤ n. Hence, each time v moves in the merging process, the size of its cluster at least doubles, so v moves at most (log n) times. Since this holds for every vertex v, the time complexity of moving all vertices during the merge operations is O(n log n).
3.3 Prim's Algorithm
Prim's algorithm is another greedy algorithm which is used to solve the M.S.T problem of a connected weighted graph G with n vertices. The algorithm starts by initializing a tree T = (V, E) with one vertex, V = {v}. Define a set E_T as the set of edges that have one endpoint in V and the other endpoint in V̄ (the set of vertices not yet in T). The algorithm repeats the operation of adding the edge in E_T with minimum weight to E, and its endpoint in V̄ to V. This repetition terminates when T has n vertices or, equivalently, (n − 1) edges.
Algorithm 6 Prim's Algorithm for Solving the M.S.T Problem
Require: A simple connected weighted graph G with n vertices.
Ensure: A minimum spanning tree T = (V, E) of G.
V ← {v}
E ← ∅
while T has less than (n − 1) edges do
  Choose e = (v, v′) with the minimum weight such that v ∈ V, v′ ∈ V̄.
  V ← V ∪ {v′}
  E ← E ∪ {e}
end while
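A Python sketch of Algorithm 6, using a binary heap to hold the cut set E_T (a common refinement; the pseudocode above does not prescribe a data structure):

```python
import heapq

def prim(adj, start):
    """Prim's algorithm (Algorithm 6) with a heap for the cut set E_T.
    `adj` maps each vertex to a list of (weight, neighbour) pairs."""
    in_tree = {start}                     # the vertex set V
    tree = []                             # the edge set E
    heap = [(w, start, v) for w, v in adj[start]]
    heapq.heapify(heap)
    while heap and len(in_tree) < len(adj):
        w, u, v = heapq.heappop(heap)     # minimum-weight cut edge
        if v in in_tree:                  # both endpoints already in V
            continue
        in_tree.add(v)
        tree.append((u, v, w))
        for w2, v2 in adj[v]:
            if v2 not in in_tree:
                heapq.heappush(heap, (w2, v, v2))
    return tree

# the weighted graph of Figure 2.4
adj = {1: [(2, 2), (2, 4), (4, 5)], 2: [(2, 1), (1, 3), (3, 5)],
       3: [(1, 2), (6, 4)], 4: [(2, 1), (6, 3), (5, 5)],
       5: [(4, 1), (3, 2), (5, 4)]}
T = prim(adj, 1)
print(sum(w for _, _, w in T))  # total weight of the M.S.T: 8
```

Stale heap entries whose endpoint has already joined V are simply popped and skipped.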
We illustrate the execution of this algorithm in the example shown in Figure 3.2.
3.3.1 Theorem (Correctness). Given a connected weighted graph G with n vertices, Prim's algorithm constructs T, an M.S.T of G.

Proof. Prim's algorithm constructs a spanning tree T of G: at any step, the current set of edges only connects the vertices of V, and we only insert edges between V and V̄, so no cycles are created. Further, the iteration stops when T has (n − 1) edges, which implies T is a spanning tree.
Now, we want to show that T has minimum total weight. By contradiction, assume T is not an M.S.T of G, and assume that the (n − 1) edges of T are added in the order e_1, e_2, ..., e_{n−1}. Define an M.S.T T_k ≠ T of G with the largest index k such that e_1, e_2, ..., e_{k−1} ∈ T_k and e_k is not in T_k. Adding e_k to T_k creates a unique cycle C; since T has no cycles, this implies that there exists an edge e′_k in C that is different from e_1, e_2, ..., e_k, and one of its endpoints is in V_k and the other in V̄_k, where V_k is the set of vertices in T at step k.

[Figure: the connected weighted graph G of Figure 3.1, and Steps 0-4 showing the vertices and edges added by Prim's algorithm, ending with the minimum spanning tree of G.]

Set of vertices   Step 0               Step 1            Step 2          Step 3                Step 4
V                 {v_1}                {v_1, v_4}        {v_1, v_2, v_4} {v_1, v_2, v_3, v_4}  {v_1, v_2, v_3, v_4, v_5}
V̄                 {v_2, v_3, v_4, v_5} {v_2, v_3, v_5}   {v_3, v_5}      {v_5}                 ∅

Figure 3.2: The steps of constructing an M.S.T of a connected weighted graph G by using Prim's algorithm.
Hence, T′ = T_k − e′_k + e_k is a spanning tree, and since Prim's algorithm chose e_k rather than e′_k at step k, w(e_k) ≤ w(e′_k); thus w(T′) ≤ w(T_k). Since T_k is an M.S.T of G, w(T′) ≥ w(T_k); hence T′ is an M.S.T of G. But T′ contains the first k edges of T, which contradicts the fact that T_k is an M.S.T with a maximum number of first edges shared with T. Therefore, T is an M.S.T of G.

The weight of a subset X ⊆ E is w(X) = Σ_{e∈X} w(e), and w(∅) = 0. The optimization problem for M is the problem of finding an independent set B of maximum weight [Oxl92]. If w(e) ≥ 0 for all e ∈ E, the solution B will be a base of the matroid, and if w(e) < 0 for some e, then any optimum solution does not contain e, so we can eliminate e from E. In the graphic matroid of a connected graph G, the optimization problem is equivalent to the problem of finding maximum spanning trees, which can be solved by using the greedy algorithms discussed in the previous chapter. The same greedy approach turns out to work for all matroids.
The Greedy Algorithm
Let M = (E, I, w) be a matroid with a weight function w. The following algorithm returns a maximal independent set S of maximum weight over all independent sets.
Algorithm 8 The greedy algorithm
Require: A matroid M = (E, I, w) with a weight function w.
Ensure: An independent set S of maximum weight.
Sort the elements of E such that w(e_1) ≥ w(e_2) ≥ ... ≥ w(e_{|E|})
S ← ∅
for i = 1 : |E| do
  if S + e_i ∈ I then
    S ← S + e_i
  end if
end for
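A Python sketch of Algorithm 8, with the matroid given as an independence oracle. Here it is instantiated with the graphic matroid of the graph of Figure 2.4, where it reduces to a maximum-spanning-tree computation (illustrative code; the acyclicity check is a minimal union-find):

```python
def greedy_max_weight(elements, weight, independent):
    """Algorithm 8: generic matroid greedy. `independent` is an
    oracle testing whether a set of elements belongs to I."""
    S = []
    for e in sorted(elements, key=weight, reverse=True):  # w(e1) >= w(e2) >= ...
        if independent(S + [e]):
            S.append(e)
    return S

# Graphic matroid: a set of edges is independent iff it is acyclic.
def acyclic(edge_set):
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            x = parent[x]
        return x
    for u, v, _ in edge_set:
        ru, rv = find(u), find(v)
        if ru == rv:               # u and v already connected: a cycle
            return False
        parent[ru] = rv
    return True

# the weighted graph of Figure 2.4 as (u, v, w) triples
edges = [(1, 2, 2), (1, 4, 2), (1, 5, 4), (2, 3, 1),
         (2, 5, 3), (3, 4, 6), (4, 5, 5)]
S = greedy_max_weight(edges, weight=lambda e: e[2], independent=acyclic)
print(sum(e[2] for e in S))  # weight of a maximum spanning tree: 18
```

Only the oracle changes from matroid to matroid; the greedy loop itself is the same.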
In the graphic matroid, this algorithm is reduced to Kruskals algorithm, which we have discussed in the
previous chapter for solving the maximum weight spanning tree problem.
4.2.1 Theorem. Let M = (E, I, w) be a matroid with a weight function w. The above greedy algorithm
returns the maximal independent set S of the maximum weight among all maximal independent sets.
Proof. By contradiction, let F be a maximal independent set with maximum weight, and w(F) > w(S). Suppose that F = {f_1, f_2, ..., f_k}, such that the elements in F are ordered as w(f_1) ≥ w(f_2) ≥ ... ≥ w(f_k). Since w(F) > w(S), we can choose the first index p such that w(f_p) > w(s_p). Consider the two sets A = {f_1, f_2, ..., f_p} and B = {s_1, s_2, ..., s_{p−1}}. Since A ⊆ F and B ⊆ S, from the inclusion property, they are independent sets. Since |A| > |B|, from the exchange property, there exists f_i ∈ A\B such that B ∪ {f_i} ∈ I. But w(f_i) ≥ w(f_p) > w(s_p), which contradicts the choice made by the greedy algorithm in adding s_p to S instead of f_i.
4.2.2 Theorem. Consider a subset system M = (E, I, w). The greedy algorithm solves the optimization problem for M if, and only if, M is a matroid.

Proof. Let M be a matroid. Theorem 4.2.1 shows that the greedy algorithm finds a maximal independent set S which has maximal weight, which is a solution of the optimization problem.

Now, assume M is not a matroid. We want to show that the greedy algorithm fails to find the set of maximal weight for some weight function. Suppose that there are two sets X, Y ∈ I with |X| < |Y| such that for all e ∈ Y\X, the set X ∪ {e} is dependent. Define a weight function w(e) as

w(e) = |X| + 2 if e ∈ X, |X| + 1 if e ∈ Y\X, and 0 otherwise.

The greedy algorithm adds the highest weights first, that is, the elements in X; it then skips all the elements in Y\X, since each of them would generate a dependent set with X, and after this it checks the remaining elements. The total weight becomes |X|(|X| + 2). But the optimal solution contains at least all the elements in Y, with total weight at least |Y|(|X| + 1) ≥ (|X| + 1)(|X| + 1). Therefore, the greedy algorithm fails to find the solution of the optimization problem.
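The counterexample in the proof can be checked concretely. The subset system below is a small hypothetical instance of the construction (three elements, chosen for illustration, not taken from the text), with X = {a}, Y = {b, c} violating the exchange property:

```python
# A subset system that is NOT a matroid:
# E = {a, b, c} and I = {∅, {a}, {b}, {c}, {b, c}}.
# Weights follow the proof: w = |X| + 2 = 3 on X, |X| + 1 = 2 on Y\X.
I = [set(), {'a'}, {'b'}, {'c'}, {'b', 'c'}]
w = {'a': 3, 'b': 2, 'c': 2}

S = set()
for e in sorted(w, key=w.get, reverse=True):   # greedy: heaviest first
    if S | {e} in I:                           # independence test
        S |= {e}

greedy_weight = sum(w[e] for e in S)           # greedy gets stuck at {a}
best = max(I, key=lambda A: sum(w[e] for e in A))
print(greedy_weight, sum(w[e] for e in best))  # 3 4 -> greedy not optimal
```

The greedy choice of the heaviest element a blocks both b and c, exactly as the proof predicts.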
This is a powerful property: it characterizes exactly when a greedy algorithm succeeds, without requiring any further analysis of the algorithm itself.
5. Euclidean Minimum Spanning Trees
A Euclidean Minimum Spanning Tree (E.M.S.T) of a set P of n points in the plane is a minimum
spanning tree of P, where the weight of an edge between a pair of points is the Euclidean distance
between those points. The simplest way to find an E.M.S.T of a set of n points is to construct a complete weighted graph G = (V, E, w) on the n points, which has n(n − 1)/2 edges, such that the weight function w assigns to each edge e its Euclidean length.

After this construction, we can apply any Minimum Spanning Tree (M.S.T) algorithm (such as Kruskal's algorithm, which we discussed in Chapter 3) to find an E.M.S.T. Since the graph G has O(n²) edges, M.S.T algorithms require O(n² log n) time to get an E.M.S.T. This running time can be decreased to O(n log n) by using what is called the Delaunay triangulation, which we will discuss in the following sections.
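A sketch of the naive approach just described: build the complete graph and run Kruskal's algorithm on it (a union-find shortcut replaces the explicit clusters of Chapter 3):

```python
from itertools import combinations
from math import dist

def emst_bruteforce(points):
    """Naive E.M.S.T: form the complete graph on the n points
    (n(n-1)/2 edges with Euclidean weights) and run Kruskal on it."""
    n = len(points)
    edges = sorted((dist(points[i], points[j]), i, j)
                   for i, j in combinations(range(n), 2))
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    tree = []
    for w, i, j in edges:                   # O(n^2 log n) overall
        ri, rj = find(i), find(j)
        if ri != rj:                        # no cycle: keep the edge
            parent[ri] = rj
            tree.append((i, j, w))
    return tree

points = [(0, 0), (3, 0), (3, 4), (0, 4)]   # corners of a 3 x 4 rectangle
T = emst_bruteforce(points)
print(round(sum(w for _, _, w in T), 6))    # two sides of 3 and one of 4: 10.0
```

The O(n²) edge enumeration is exactly the bottleneck that the Delaunay triangulation removes.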
5.1 Delaunay Triangulation
5.1.1 Triangulation. A triangulation T of R^n is a subdivision of R^n into a set of n-dimensional simplices¹ such that:
- Any simplex face is shared by either one adjacent simplex or none at all.
- Any bounded set in R^n intersects only finitely many simplices in T.
In a sense, a triangulation generates a mesh of simplices from a given set of points in R^n.
5.1.2 Delaunay Triangulation. A Delaunay Triangulation (D.T) of a set P of points in the plane is defined as a triangulation where the circumcircle of every triangle² does not contain any other point from P. Figure 5.1 shows an example of a Delaunay and a non-Delaunay triangle. For an edge e that connects two points from a set of points P in the plane, e is called a Delaunay edge if there exists a circle passing through the two endpoints of e that does not contain any other point of P in its interior (see Figure 5.2).
Figure 5.1: A: Triangle t is non-Delaunay since the point p lies within the circumcircle; B: t is a Delaunay
triangle.
¹ n-dimensional simplices are a generalization of the notion of triangles or tetrahedrons to n dimensions.
² a circle that passes through the vertices of a triangle
Figure 5.2: A: e is a non-Delaunay edge; B: e is a Delaunay edge.
Figure 5.3: A is not a Delaunay triangulation. Flipping of a non-Delaunay edge e in A forms a Delaunay
triangulation in B.
5.1.4 Theorem. For a set of points P, two points p_1 and p_2 are connected by an edge in the Delaunay triangulation if, and only if, there is an empty circle passing through p_1 and p_2.
Section 5.2. The Delaunay Triangulation and E.M.S.T Page 20
Proof. Let p_1, p_2 and p_3 be the endpoints of a Delaunay triangle; then there exists a circle C that passes through p_1, p_2, p_3 and does not contain any interior point. Thus, C is an empty circle passing through p_1 and p_2.

Now we prove the other direction. Let p_1, p_2 and p_3 be the endpoints of a triangle t. By contradiction, assume that there are empty circles passing through each edge of t, and that t is a non-Delaunay triangle; this implies that the circumcircle of t contains an interior point p_i. Without any loss of generality, the edge that separates p_i from the inside of t is (p_1, p_2) (see Figure 5.4). Hence, any circle that passes through p_1 and p_2 must contain p_3 or p_i as an interior point, but this contradicts our assumption.
Figure 5.4: p_i lies inside the circumcircle of triangle t; the edge (p_1, p_2) is not a Delaunay edge.
5.1.5 The Relation Between D.T and Voronoi Diagram. A Voronoi diagram of a set of points in the plane is a division of the plane into regions, one for each point in the set, where the region of a point p contains the part of the plane that is closer to p than to any other point of the set. Each region is known as a Voronoi cell and is denoted by vo(p). The boundaries between the cells are the perpendicular bisectors of the segments joining the points. Voronoi vertices are the points created at the intersections of these boundaries, and Voronoi edges are the boundaries between two Voronoi cells.

The Delaunay triangulation is the dual graph of the Voronoi diagram, and it is a planar graph. The Delaunay triangulation has a vertex for each Voronoi cell, and an edge for each Voronoi edge. Figure 5.5 shows an example of a Delaunay triangulation and a Voronoi diagram of a set of points and the dual relation between them.
Generally, the construction of the Voronoi diagram requires O(n log n) running time with certain algorithms such as Fortune's algorithm [Zim05]. Then, the Delaunay triangulation can be generated from the Voronoi diagram by using the dual relation. The Delaunay triangulation can also be constructed directly in O(n log n) by using more technical algorithms, such as divide-and-conquer algorithms. These are based on recursively drawing a line to divide the set of points into two sets, computing the Delaunay triangulation of each set, and merging the two triangulations. The merge operation can be done in time O(n), so the total running time is O(n log n) [Zim05].
5.2 The Delaunay Triangulation and E.M.S.T
The Delaunay triangulation has an interesting property: the E.M.S.T of a set of n points is a subgraph of the Delaunay triangulation. This property can increase the efficiency of finding an E.M.S.T, by applying minimum spanning tree algorithms to the Delaunay triangulation, which has O(n) edges, instead of the complete graph, which has O(n²) edges. The proof of this property is shown in the following theorem.
Section 5.2. The Delaunay Triangulation and E.M.S.T Page 21
Figure 5.5: A shows a Delaunay triangulation of a set of 6 vertices, and B shows the corresponding
Voronoi diagram.
5.2.1 Theorem. A Euclidean minimum spanning tree of a set of points P is a subgraph of any Delaunay
triangulation of P.
Proof. Let T be an E.M.S.T of P with total weight w(T). Assume that p_1 and p_2 are two points in P, and that there is an edge e in T that connects these two points. By contradiction, we assume that e is not a Delaunay edge; thus every circle passing through p_1 and p_2 contains an interior point. Choose the circle C with diameter e, and let p_i be an interior point of C (as shown in Figure 5.6). Removing the edge e from T divides T into two connected components; one of them contains p_1 and the other contains p_2. Assume that p_i lies with p_1 in the same connected component. Then adding an edge e′ that connects p_2 with p_i to T generates a spanning tree T′ (as shown in Figure 5.6). Since the length of e is greater than the length of e′ (because p_i lies inside the circle with diameter e), we get w(T′) < w(T), which contradicts the fact that T is an E.M.S.T of P.

[Figure 5.6: the point p_i lies inside the circle C with diameter e = (p_1, p_2); graphs T and T′.]