
Chapter 1

Review
1.1 CONCEPT OF ALGORITHM

A common man's belief is that a computer can do anything and everything that he imagines. It is very difficult to make people realize that it is not really the computer but the man behind the computer who does everything.
In the modern internet world, one feels that just by entering what one wants to search for into the computer, one can get the desired information. One believes that this is done by the computer. A common man seldom understands that a man-made procedure called search has done the entire job and that the only support provided by the computer is execution speed and organized storage of information.
In the above instance, a designer of the information system should know what one frequently searches for. He should make a structured organization of all those details to store in the memory of the computer. Based on the requirement, the right information is brought out. This is accomplished through a set of instructions created by the designer of the information system to search for the right information matching the requirement of the user. This set of instructions is termed a program. It should be evident by now that it is not the computer that generates the program automatically, but the designer of the information system who has created it.
Thus, the program is what executes, through the medium of the computer, to perform all the activities desired by a user. This implies that programming a computer is more important than the computer itself while solving a problem using a computer, and this part of programming has got to be done by the man behind the computer. Even at this stage, one should not quickly jump to the conclusion that coding is programming. Coding is perhaps the last stage in the process of programming. Programming involves various activities from the stage of conceiving the problem up to the stage of creating a model to solve the problem. The formal representation of this model as a sequence of instructions is called an algorithm, and the coded algorithm, in a specific computer language, is called a program.

One can now see that the focus has shifted from the computer to computer programming and then to creating an algorithm. This is algorithm design, the heart of problem solving.

1.2 CHARACTERISTICS OF AN ALGORITHM


Let us try to present the scenario of a man brushing his own teeth (natural denture) as an algorithm as
follows:
Step 1. Take the brush
Step 2. Apply the paste
Step 3. Start brushing
Step 4. Rinse
Step 5. Wash
Step 6. Stop
If one goes through these 6 steps without being aware of the statement of the problem, he could possibly feel that this is the algorithm for cleaning a toilet. This is because of several ambiguities in comprehending every step. Step 1 may imply a tooth brush, paint brush, toilet brush, etc. Such an ambiguity arises from the instruction of the above algorithmic step. Thus every step has to be made unambiguous. An unambiguous step is called a definite instruction. Even if step 2 is rewritten as 'apply the tooth paste' to eliminate ambiguities, conflicts such as where to apply the tooth paste and where the source of the tooth paste is still need to be resolved. Hence, unless the act of applying the toothpaste is fully specified, the step cannot be carried out. Although unambiguous, such unrealizable steps cannot be included as algorithmic instructions, as they are not effective. The definiteness and effectiveness of an instruction imply the successful termination of that instruction. However, the above two may not be sufficient to guarantee the termination of the algorithm. Therefore, while designing an algorithm, care should be taken to provide a proper termination for the algorithm.
Thus, every algorithm should have the following five characteristic features:
1) Input
2) Output
3) Definiteness
4) Effectiveness
5) Termination
Therefore, an algorithm can be defined as a sequence of definite and effective instructions which terminates with the production of correct output from the given input.

In other words, viewed a little more formally, an algorithm is a step-by-step formalization of a mapping function that maps an input set onto an output set.
The problem of writing down the correct algorithm for the above problem of brushing the teeth is left
to the reader.

1.3 HOW TO DEVISE THE ALGORITHMS


The process of devising an algorithm is both an art and a science. This is one part that cannot be automated fully. Given a problem description, one has to think of converting this into a series of steps which, when executed in a given sequence, solve the problem. To do this, one has to be familiar with the problem domain and also the computer domain. This aspect may never be taught fully and, most often, given a problem description, how a person proceeds to convert it into an algorithm becomes a matter of his "style" – no firm rules are applicable here.
For the purpose of clarity in understanding, let us consider the following example.
Problem: Finding the largest value among n>=1 numbers.
Input: the value of n and n numbers
Output: the largest value
Steps :
1. Let the value of the first number be the largest value, denoted by BIG
2. Let R denote the number of remaining numbers. R=n-1
3. If R != 0 then it is implied that the list is still not exhausted. Therefore look at the next number,
called NEW.
4. Now R becomes R-1
5. If NEW is greater than BIG then replace BIG by the value of NEW
6. Repeat steps 3 to 5 until R becomes zero.
7. Print BIG
8. Stop
End of algorithm
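For readers who would like to try this on a machine, one possible C rendering of the above steps is sketched below; the sample array and the loop index i are assumptions of this illustration and are not part of the algorithm itself.

#include <stdio.h>

int main(void)
{
    int a[] = {12, 45, 7, 98, 3, 56};          /* sample input numbers      */
    int n = sizeof(a) / sizeof(a[0]);          /* the value of n            */
    int BIG = a[0];                            /* step 1: first is largest  */
    int R = n - 1;                             /* step 2: remaining numbers */
    int i = 1;

    while (R != 0) {                           /* steps 3 and 6             */
        int NEW = a[i++];                      /* look at the next number   */
        R = R - 1;                             /* step 4                    */
        if (NEW > BIG)                         /* step 5                    */
            BIG = NEW;
    }
    printf("BIG = %d\n", BIG);                 /* step 7                    */
    return 0;                                  /* step 8                    */
}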

1.4 HOW TO VALIDATE THE ALGORITHMS


Once an algorithm has been devised, it becomes necessary to show that it works, i.e., it computes the correct answer for all possible, legal inputs. One simple way is to code it into a program. However, converting an algorithm into a program is a time-consuming process. Hence, it is essential to be reasonably sure about the effectiveness of the algorithm before it is coded. This process, at the algorithm level, is called "validation". Several mathematical and other empirical methods of validation are available. Providing the validation of an algorithm is a fairly complex process and, most often, a complete theoretical validation, though desirable, may not be provided. Alternately, algorithm segments which have been proved elsewhere may be used, and the overall working algorithm may be empirically validated for several test cases. Such methods, although they suffice in most cases, may often lead to the presence of unidentified bugs or side effects later on.

1.5 HOW TO TEST THE ALGORITHMS


If there is more than one possible way of solving a problem, then one may think of more than one algorithm for the same problem. Hence, it is necessary to know in what domains these algorithms are applicable. The data domain is an important aspect to be known in the field of algorithms. Once we have more than one algorithm for a given problem, how do we choose the best among them? The solution is to devise some data sets and determine a performance profile for each of the algorithms. A best-case data set can be obtained by having all distinct data in the set.
The ultimate test of an algorithm is that the programs based on the algorithm should run satisfactorily. Testing a program really involves two phases: a) debugging and b) profiling. Debugging is the process of executing programs with sample data sets to determine if the results obtained are satisfactory. When unsatisfactory results are generated, suitable changes are made in the program to get the desired results. On the other hand, profiling or performance measurement is the process of executing a correct program on different data sets to measure the time and space that it takes to compute the results.
However, it is pointed out that "debugging can only indicate the presence of errors but not their absence", i.e., a program that yields unsatisfactory results with a sample data set is definitely faulty, but just because a program produces the desired results with one or more data sets, it cannot be proved that the program is ideal. Even after it produces satisfactory results with, say, 10000 data sets, its results may be faulty with the 10001st set. In order to actually prove that a program is perfect, a process called "proving" is taken up. Here, the program is analytically proved to be correct and, in such cases, it is bound to yield perfect results for all possible sets of data.

1.6 ALGORITHMIC NOTATIONS


In this section we present the pseudo code that we use throughout the book to describe algorithms.

The pseudo code used resembles PASCAL and C language control structures. Hence, it is expected that the reader is aware of PASCAL/C. In any case, the reader should preferably know C in order to practically test the algorithms in this course work. However, for the sake of completeness, we present below the commonly employed control constructs used in the algorithms.
1. A conditional statement has the following form
If < condition> then
Block 1
Else
Block 2
If end.
This pseudo code executes Block 1 if the condition is true; otherwise, Block 2 is executed.
2. The two types of loop structures are counter-based and condition-based, and they are as follows:
For variable = value1 to value2 do
Block
For end
Here the block is executed for all the values of the variable from value1 to value2.
There are two types of conditional looping: while type and repeat type.
While (condition) do
Block
While end.
Here block gets executed as long as the condition is true.
Repeat
Block
Until<condition>
Here block is executed as long as condition is false. It may be observed that the block is executed at
least once in repeat type.

SUMMARY
In this chapter, the concept of an algorithm is presented and the properties of algorithms are also given. The concept of designing an algorithm is explained with a specific example. The concepts of validating and testing the devised algorithms are also presented.
EXERCISE
1. __________ is the process of executing a correct program on data sets and measuring the time and space it
takes to compute the results.
2. Define an algorithm. What are its properties?
3. What is debugging and what is profiling?
4. One of the properties of an algorithm is beauty (true / false)
Chapter 2
Elementary Data Structures

2.1 FUNDAMENTALS

A data structure is a method of organizing data with a sort of relationship, with the intention of enhancing the qualities of the associated algorithms. In this chapter, we review only the non-primitive data structures. The non-primitive data structures can be broadly classified into two types: linear and non-linear data structures. Under linear data structures, Arrays, Stacks and Queues are reviewed, and under non-linear data structures, graphs and trees are reviewed.
2.2 LINEAR DATA STRUCTURES
A data structure in which every data element has exactly two neighbours (adjacent elements), except for two elements which have exactly one neighbour each, is called a linear data structure. Otherwise it is called a non-linear data structure.
2.2.1 Array and its representation
An array is a finite ordered list of data elements of the same type. In order to create an array, it is required to reserve an adequate number of memory locations. The allocated memory should be contiguous in nature. The size of the array is finite and fixed as a constant. Some of the important operations related to arrays are:
Creation () → A[n], array created (reservation of an adequate number of memory locations), memory reserved;
Write (A, i, e) → updated array with ‘e’ at the ith position;
Compare (i, j, relational operator) → Boolean;
Read (A, i) → ‘e’, the element at the ith position;
Search (A, e) → Boolean;
REPRESENTATION OF A SINGLE DIMENSIONAL ARRAY
Index:      1     2     3     4     5     6   ...
Value:      3     7    -8    10    15     5   ...   50    20
Address: 1000  1002  1004  1006  1008  1010   ...
Any array is associated with a lower bound l and an upper bound u in general. When the array is created, it starts at some base address B. Since the computer is byte addressable, each address can hold a byte. If an integer takes 2 bytes for representation, and if B=1000 and l=1, then
the 1st element is at location (1000 + 0)
the 2nd element is at location (1000 + 2)
the 3rd element is at location (1000 + 4)
..
..
..
the ith element is at location (1000 + (i-1)*2)

In general, if w is the size of the data element, then the ith element is at location (B + (i - l)w), i.e., A[i] = B + (i - l)w.
REPRESENTATION OF A TWO DIMENSIONAL ARRAY
The two dimensional array A[l1..u1] [l2..u2] may be interpreted as n = u1-l1+1 rows and m = u2 -l2+1
columns i.e., each row consists of u2-l2+1 elements.

Column indices:  l2   l2+1   l2+2   l2+3   l2+4   ...   u2
Row indices:     l1, l1+1, l1+2, ..., u1

Although we have an n × m matrix representation, the allocation is not in the form of a matrix but is contiguous in nature, and will be as shown below. In case l1 = 1 and l2 = 1, the indices start at [1, 1].

(1,1) (1,2) (1,3) ... (1,m) (2,1) (2,2) (2,3) ... (2,m) ... (n,1) (n,2) ... (n,m)

Given the indices (i, j), the address can be computed using A[i, j] = B + (i-1) m + (j-1).
In general, if li <= i <= ui and lj <= j <= uj, then A[i, j] = B + { (i - li)(uj - lj + 1) + (j - lj) } w.
For higher n-dimensional arrays, the address of the element A(i1, i2, i3, ..., in), where l1 <= i1 <= u1, l2 <= i2 <= u2, ..., ln <= in <= un, can be computed as
A(i1, i2, i3, ..., in) = B + { (i1 - l1)(u2 - l2 + 1)(u3 - l3 + 1)...(un - ln + 1)
                             + (i2 - l2)(u3 - l3 + 1)(u4 - l4 + 1)...(un - ln + 1)
                             + (i3 - l3)(u4 - l4 + 1)(u5 - l5 + 1)...(un - ln + 1)
                             + ...
                             + (in-1 - ln-1)(un - ln + 1) + (in - ln) } w
The algorithm thus designed to compute the row major address is as follows
Algorithm: Address Computation (Row Major)
Input: (1) n, dimension
(2) l1 , l2, l3 , … ln n lower limits

(3) u1, u2, u3 , … un n upper limits


(4) w, word size

(5) i1, i2, i3, …., in values of subscripts


(6) B, the base address
Output: ‘A’ address of the element at (i1, i2, i3 , …., in )
Method:
A = B + w * [ sum over j = 1 to n of (ij - lj) * Pj ]
where Pj = product over k = j+1 to n of (uk - lk + 1), and Pn = 1
Algorithm ends.
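A small C sketch of this computation is given below, assuming the limits and subscripts are supplied in zero-based C arrays; the function name row_major_address is purely illustrative.

#include <stdio.h>

/* Row-major address of A(i1, i2, ..., in).
 * l[], u[], idx[] hold the n lower limits, upper limits and subscripts. */
long row_major_address(int n, const int l[], const int u[],
                       const int idx[], int w, long B)
{
    long A = B;
    for (int j = 0; j < n; j++) {
        long P = 1;                      /* P_j = product of (u_k - l_k + 1) for k > j */
        for (int k = j + 1; k < n; k++)
            P *= (u[k] - l[k] + 1);
        A += (long)(idx[j] - l[j]) * P * w;
    }
    return A;
}

int main(void)
{
    int l[] = {1, 1}, u[] = {3, 4}, idx[] = {2, 3};   /* A[1..3][1..4], element (2,3) */
    printf("%ld\n", row_major_address(2, l, u, idx, 2, 1000));  /* prints 1012 */
    return 0;
}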
2.2.2 Stacks
A stack is an ordered list in which all insertions and deletions are made at one end, called the top. A stack is a linear data structure which works based on the last-in first-out (LIFO) strategy. It is a linear data structure which is open for operations at only one end (both insertions and deletions are defined at only one end). One natural example of a stack, which arises in computer programming, is the processing of procedure calls and their terminations. Stacks are generally used to remember things in reverse. They find a major use in backtracking approaches.
Some of the functions related to a stack are:
Create ( ) → S;
Insertion (S, e) → updated S;
Deletion (S) → S, e;
Isfull (S) → Boolean;
Isempty (S) → Boolean;
Top (S) → e;
Destroy (S)
It has to be noted that, with respect to a stack, the insertion and deletion operations are in general called PUSH and POP operations respectively. Following are the algorithms for some functions of a stack.
Algorithm: Create
Output: S, Stack created
Method:
Declare S[SIZE] //Array of size=SIZE
Declare and Initialize T=0 //Top pointer to remember the number of elements
Algorithm ends
Algorithm: Isempty
Input: S, stack
Output: Boolean

Method:
If (T==0)
Return (yes)
Else
Return (no)
If end
Algorithm ends
Algorithm: Isfull
Input: S, stack
Output: Boolean
Method:
If (T==SIZE)
Return (yes)
Else
Return (no)
If end
Algorithm ends
Algorithm: Push
Input: (1) S, stack; (2) e, element to be inserted; (3) SIZE, size of the stack;
(4) T, the top pointer
Output: (1) S, updated; (2) T, updated
Method:
If (Isfull(S)) then
Print (‘stack overflow’)
Else
T=T+1;
S[T] =e
If end
Algorithm ends
Algorithm: Pop
Input: (1) S, stack;
Output: (1) S, updated; (2) T, updated (3) ‘e’ element popped
Method:
If (Isempty(S)) then
Print (‘stack is empty’)
Else
e = S[T]
T=T-1;
If end
Algorithm ends
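The stack operations above can be tested with a C sketch such as the following; the fixed array size, the sentinel value returned on underflow, and the function names are assumptions of this illustration.

#include <stdio.h>
#define SIZE 100

int S[SIZE];        /* the stack; S[1..T] hold the elements              */
int T = 0;          /* top pointer, 0 means the stack is empty           */

int isempty(void) { return T == 0; }
int isfull(void)  { return T == SIZE - 1; }

void push(int e)
{
    if (isfull())
        printf("stack overflow\n");
    else
        S[++T] = e;          /* T = T + 1; S[T] = e */
}

int pop(void)
{
    if (isempty()) {
        printf("stack is empty\n");
        return -1;           /* sentinel for an empty stack (assumption of this sketch) */
    }
    return S[T--];           /* e = S[T]; T = T - 1 */
}

int main(void)
{
    push(10); push(20); push(30);
    printf("%d ", pop());    /* 30 */
    printf("%d ", pop());    /* 20 */
    printf("%d\n", pop());   /* 10 */
    return 0;
}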
2.2.3 Queues
A queue is an ordered list in which all insertions take place at one end, called the rear end, while all deletions take place at the other end, called the front end. A queue is a linear data structure which works based on the first-in first-out (FIFO) strategy. Like stacks, queues also arise quite naturally in the computer solution of many problems. Perhaps the most common occurrence of a queue in computer applications is for the scheduling of jobs. A minimal set of useful operations on a queue includes the following.
Create ( ) → Q;
Insertion (Q, e) → updated Q;
Deletion (Q) → Q, e;
Isfull (Q) → Boolean;
Isempty (Q) → Boolean;
Front (Q) → e;
Back (Q);
Destroy (Q);
Following are the algorithms for some functions of a queue.
Algorithm: Create
Output: Q, Queue created

Method:
Declare Q[SIZE] //Array with size=SIZE
Declare and Initialize F=0, R=0
//Front and Rear pointers to keep track of the front element and the rear element respectively
Algorithm ends
Algorithm: Isempty
Input: Q, Queue
Output: Boolean
Method:
If (F==0)
Return (yes)
Else
Return (no)
If end
Algorithm ends
Algorithm: Isfull
Input: Q, Queue
Output: Boolean
Method:
If (R==SIZE)
Return (yes)
Else
Return (no)
If end
Algorithm ends
Algorithm: Front
Input: Q, Queue
Output: element in the front
Method:
If (Isempty (Q))
Print ‘no front element’
Else
Return (Q[F])
If end
Algorithm ends
Algorithm: Rear
Input: Q, Queue
Output: element in the rear
Method:
If (Isempty (Q))
Print ‘no back element’
Else
Return (Q[R])
If end
Algorithm ends
Algorithm: Insertion
Input: (1) Q, Queue; (2) e, element to be inserted; (3) SIZE, size of the Queue;
(4) F, the front pointer; (5) R, the rear pointer
Output: (1) Q, updated; (2) F, updated; (3) R, updated
Method:
If (Isfull (Q)) then
Print (‘overflow’)
Else
R=R+1;
Q[R]=e
If (F==0)
F=1;
If end
If end
Algorithm ends
Algorithm: Deletion
Input: (1) Q, Queue; (2) SIZE, size of the Queue; (3) F, the front pointer; (4) R, the rear pointer
Output: (1) Q, updated; (2) F, updated; (3) R, updated; (4) e, element if deleted;
Method:
If (Isempty (Q)) then
Print (‘Queue is empty’)
Else
e = Q[F]
If (F==R)
F=R=0;
Else
F=F+1;
If end
If end
Algorithm ends

However, the linear queue makes poor utilization of memory, i.e., 'Isfull (Q) = yes' does not necessarily imply that there are n elements in the queue. This can be overcome by using an alternate queue called the circular queue.

2.2.4 Circular Queue


A circular queue uses the same conventions as the linear queue. Front will always point one position counterclockwise from the first element in the queue. In order to add an element, it is necessary to move rear one position clockwise. Similarly, it is necessary to move front one position clockwise each time a deletion is made. Nevertheless, the algorithms for Create (), Isfull (), Isempty (), Front () and Rear () are the same as those of the linear queue. The algorithms for the other functions are
Algorithm: Insertion
Input: (1) CQ, Circular Queue; (2) e, element to be inserted; (3) SIZE, size of the Circular Queue;
(4) F, the front pointer; (5) R, the rear pointer
Output: (1) CQ, updated; (2) F, updated; (3) R, updated
Method:
If (Isfull (CQ)) then
Print (‘overflow’)
Else
R=R mod SIZE + 1;
CQ[R] =e
If (Isempty (CQ))
F=1;
If end
If end
Algorithm ends
Algorithm: Deletion
Input: (1) CQ, Circular Queue; (2) SIZE, size of the CQ; (3) F, the front pointer; (4) R, the rear
pointer
Output: (1) CQ, updated; (2) F, updated; (3) R, updated; (4) e, element if deleted;

Method:
If (Isempty (CQ)) then
Print (‘Queue is empty’)
Else
e = CQ[F]
If (F==R)
F=R=0;
Else
F=F mod SIZE +1;
If end
If end
Algorithm ends
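A C sketch of a circular queue is given below. Note that, instead of the F = 0 convention used in the pseudo code, this illustration keeps an explicit element count, which is an assumption made here for simplicity.

#include <stdio.h>
#define SIZE 4

int CQ[SIZE];
int F = 0;        /* index of the front element             */
int count = 0;    /* number of elements currently in CQ     */

int isempty(void) { return count == 0; }
int isfull(void)  { return count == SIZE; }

void insert(int e)
{
    if (isfull()) { printf("overflow\n"); return; }
    CQ[(F + count) % SIZE] = e;    /* rear position, one past the last element */
    count++;
}

int delete_front(void)
{
    if (isempty()) { printf("queue is empty\n"); return -1; }
    int e = CQ[F];
    F = (F + 1) % SIZE;            /* front moves one position clockwise */
    count--;
    return e;
}

int main(void)
{
    insert(1); insert(2); insert(3); insert(4);
    insert(5);                          /* reports overflow: the queue is full      */
    printf("%d\n", delete_front());     /* 1                                        */
    insert(5);                          /* wraps around and succeeds now            */
    while (!isempty())
        printf("%d\n", delete_front()); /* 2 3 4 5                                  */
    return 0;
}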

2.3 NON-LINEAR DATA STRUCTURES


A data structure which is not a linear data structure is said to be a non-linear data structure. That is, a data structure is said to be non-linear if data elements are allowed to have more than two adjacent elements. This means that a data element can have any number of relations with other data elements.

- The number of elements adjacent to a given element is called the 'arity' or the degree of the element, i.e., degree = number of relations the element has with others.

- There is no upper limit placed on this number.

Some general examples are: friendship among classmates; family structure; a university with every component being a sub-component and the relations among them being non-linear; the nervous system of the human body, etc.
A few specific to Computer Science are: graphs and trees.

2.3.1 Introduction to Graph Theory


A graph G = (V, E) consists of a set of objects V = {v1 , v2, …} called vertices, and another set E = {e1 ,
e2, …} whose elements are called edges. Each edge ek in E is identified with an unordered pair (vi , vj) of
vertices. The vertices vi , vj associated with

Fig. 2.1

edge ek are called the end vertices of ek. The most common representation of graph is by means of a
diagram, in which the vertices are represented as points and each edge as a line segment joining its end
vertices.
In Fig. 2.1, edge e1, which has the same vertex as both its end vertices, is called a self-loop. There may be more than one edge associated with a given pair of vertices, for example e4 and e5 in Fig. 2.1. Such edges are referred to as parallel edges.
A graph that has neither self-loops nor parallel edges is called a simple graph; otherwise it is called a general graph. It should also be noted that, in drawing a graph, it is immaterial whether the lines are drawn straight or curved, long or short: what is important is the incidence between the edges and vertices.
Because of its inherent simplicity, graph theory has a very wide range of applications in engineering,
physical, social, and biological sciences, linguistics, and in numerous other areas. A graph can be used to
represent almost any physical situation involving discrete objects and a relationship among them.
2.3.1.1 Finite and Infinite Graphs
Although in the definition of a graph neither the vertex set V nor the edge set E need be finite, in most
of the theory and almost all applications these sets are finite. A graph with a finite number of vertices as
well as a finite number of edges is called a finite graph; otherwise, it is an infinite graph.
2.3.1.2 Incidence and Degree
When a vertex vi is an end vertex of some edge ej , vi and ej are said to be incident with (on or to) each
other. In Fig. 2.1, for example, edges e2, e6 , and e7 are incident with vertex v4 . Two nonparallel edges are
said to be adjacent if they are incident on a common vertex. For example, e2 and e7 in Fig. 2.1 are
adjacent. Similarly, two vertices are said to be adjacent if they are the end vertices of the same edge. In Fig. 2.1, v4 and v5 are adjacent, but v1 and v4 are not.

The number of edges incident on a vertex vi, with self-loops counted twice, is called the degree, d(vi), of vertex vi. In Fig. 2.1, for example, d(v1) = d(v3) = d(v4) = 3, d(v2) = 4, and d(v5) = 1. Since each edge contributes two degrees, the sum of the degrees of all vertices in G is twice the number of edges in G.

2.3.1.3 Isolated Vertex, Pendant Vertex and Null Graph


A vertex having no incident edge is called an isolated vertex. In other words, isolated vertices are vertices with zero degree. Vertices v4 and v7 in Fig. 2.2, for example, are isolated vertices. A vertex of degree one is called a pendant vertex or an end vertex. Vertex v3 in Fig. 2.2 is a pendant vertex. Two adjacent edges are said to be in series if their common vertex is of degree two. In Fig. 2.2, the two edges incident on v1 are in series.

Fig. 2.2 Graph containing isolated vertices, series edges and a pendant vertex.

Fig. 2.3 Null graph of six vertices


In the definition of a graph G = (V, E), it is possible for the edge set E to be empty. Such a graph,
without any edges, is called a null graph. In other words, every vertex in a null graph is an isolated vertex.
A null graph of six vertices is shown in Fig. 2-3. Although the edge set E may be empty, the vertex set V
must not be empty; otherwise, there is no graph. In other words, by definition, a graph must have at least
one vertex.
2.3.1.4 Walk, Path and Connected Graph
A “walk” is a sequence of alternating vertices and edges, starting with a vertex and ending with a
vertex with any number of revisiting vertices and retracing of edges. If a walk has the restriction of no
repetition of vertices and no edge is retraced it is called a “path”. If there is a walk to every vertex from
any other vertex of the graph then it is called a “connected” graph.
2.3.2 Matrix Representation of Graphs
Although a pictorial representation of a graph is very convenient for a visual study, other representations
are better for computer processing. A matrix is a convenient and useful way of representing a graph to
a computer. Matrices lend themselves easily to mechanical manipulations. Besides, many known results
of matrix algebra can be readily applied to study the structural properties of graphs from an algebraic
point of view. In many applications of graph theory, such as in electrical network analysis and operations research, matrices also turn out to be the natural way of expressing the problem.
2.3.2.1 Adjacency matrix Representation
Since the edges are the relationship between two vertices, the graph can be represented by a matrix.
Consider a 2D matrix X of size |V| × |V|, where |V| is the number of vertices in G. The entries in the matrix are either zero or one, i.e.,

X[i, j] = 1, if (vi, vj) belongs to E
        = 0, otherwise

Fig. 2.4

Thus the matrix, where each column and row corresponds to a vertex in the graph G shown in Fig. 2.4, is
V1 V2 V3 V4 V5 V6
V1 0 1 0 0 1 0
V2 1 0 1 1 1 0
V3 0 1 0 0 0 0
V4 0 1 0 0 1 1 (2)
V5 1 1 0 1 0 0
V6 0 0 0 1 (2) 0 0

This matrix X uniquely represents the graph. It is indeed possible to reconstruct the graph back from the matrix. A non-zero entry in the matrix indicates that the vertex corresponding to the row and the vertex corresponding to the column are adjacent. Since two vertices are adjacent if there is an edge connecting them, and the matrix represents exactly this, the above matrix X is called an adjacency matrix.
The adjacency matrix is a symmetric matrix. If a diagonal element is non-zero then the graph has a self-loop, and the respective row index gives the vertex label with the self-loop. The sum of the ith row elements gives the degree of the vertex Vi. While calculating the row sum of the adjacency matrix of a non-simple graph, an additional weightage of 1 has to be given if the diagonal cell is non-zero (so that a self-loop is counted twice) and an additional weightage for every parallel edge.
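As a small illustration, the following C sketch stores the adjacency matrix of the graph of Fig. 2.4 and computes the degree of each vertex from its row sum; recording the parallel edges between v4 and v6 as a count of 2, and adding the diagonal once more so that a self-loop would be counted twice, are assumptions of this sketch.

#include <stdio.h>
#define V 6

/* Adjacency matrix of the graph in Fig. 2.4; parallel edges stored as a count of 2. */
int X[V][V] = {
    {0,1,0,0,1,0},
    {1,0,1,1,1,0},
    {0,1,0,0,0,0},
    {0,1,0,0,1,2},
    {1,1,0,1,0,0},
    {0,0,0,2,0,0},
};

int main(void)
{
    for (int i = 0; i < V; i++) {
        int d = 0;
        for (int j = 0; j < V; j++)
            d += X[i][j];           /* row sum counts incident edges                */
        d += X[i][i];               /* a self-loop contributes two to the degree    */
        printf("d(v%d) = %d\n", i + 1, d);
    }
    return 0;
}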
2.3.2.2 Incidence Matrix
Let G be a graph with n vertices, e edges, and no self-loops. Define an n by e matrix A = [aij], whose n rows correspond to the n vertices and the e columns correspond to the e edges, as follows:
The matrix element

Aij = 1, if the jth edge ej is incident on the ith vertex vi, and
    = 0, otherwise.

     a  b  c  d  e  f  g  h
v1   0  0  0  1  0  1  0  0
v2   0  0  0  0  1  1  1  1
v3   0  0  0  0  0  0  0  1
v4   1  1  1  0  1  0  0  0
v5   0  0  1  1  0  0  1  0
v6   1  1  0  0  0  0  0  0

Fig. 2.5 Incidence matrix of the graph in Fig. 2.4.


Such a matrix A is called the vertex-edge incidence matrix, or simply incidence matrix. Matrix A
for a graph G is sometimes also written as A(G). A graph and its incidence matrix are shown in Fig. 2.4
and Fig. 2.5 respectively. The incidence matrix contains only two elements, 0 and 1. Such a matrix is
called a binary matrix or a (0, 1)-matrix.
The following observations about the incidence matrix A can readily be made:
1. Since every edge is incident on exactly two vertices, each column of A has exactly two 1's.
2. The number of 1's in each row equals the degree of the corresponding vertex.
3. A row with all 0's, therefore, represents an isolated vertex.

4. Parallel edges in a graph produce identical columns in its incidence matrix, for example, columns
1 and 2 in Fig. 2.5.
2.3.3 Trees
The concept of a tree is probably the most important in graph theory, especially for those interested in
applications of graphs.
A tree is a connected graph without any circuits. The graph in Fig. 2.6, for instance, is a tree. It follows immediately from the definition that a tree has to be a simple graph, that is, one having neither a self-loop nor parallel edges (because they both form circuits).

Fig. 2.6 Tree


Trees appear in numerous instances. The genealogy of a family is often represented by means of a
tree. A river with its tributaries and sub-tributaries can also be represented by a tree. The sorting of mail
according to zip code and the sorting of punched cards are done according to a tree (called decision tree
or sorting tree).
2.3.3.1 Some Properties of Tree
1. There is one and only one path between every pair of vertices in a tree, T.
2. A tree with n vertices has n-1 edges.
3. Any connected graph with n vertices and n-1 edges is a tree.
4. A graph is a tree if and only if it is minimally connected.
Therefore a graph G with n vertices is called a tree if
1. G is connected and is circuitless, or
2. G is connected and has n-1 edges, or
3. G is circuitless and has n-1 edges, or
4. There is exactly one path between every pair of vertices in G, or
5. G is a minimally connected graph.

SUMMARY
This chapter devotes itself to presenting an overview of the very fundamental data structures essential for the design of algorithms. One of the basic techniques for improving algorithms is to structure the data in such a way that the resulting operations can be carried out efficiently. Though the chapter does not present all the data structures, we have selected several which occur more frequently in this book. This chapter also presents some terminology used in graph theory. The notions of arrays, stacks, queues, graphs and trees have been exposed.
EXERCISE
1. Give at least 5 real life examples where we use stack operations
2. Give at least 5 real life applications where queue is used.
3. Name 10 situations that can be represented by means of graphs. Explain what each vertex and edge represent
4. Draw a connected graph that becomes disconnected when any edge is removed from it
5. Draw all trees of n labeled vertices for n=1,2,3,4 and 5
6. Sketch all binary trees with six pendent edges
7. Write adjacency and incidence matrix for all the graphs developed
Chapter 3
Some Simple Algorithms

An algorithm is a blueprint that a program follows to achieve the end result. The end result thus depends on the proper understanding of the logic constructs rather than the programming constructs specific to the programming language. In this chapter we present some simple problems, discuss the issues in solving them, and finally design the algorithms for the same.

3.1 ADDITION OF TWO NUMBERS


In a programming situation, if we need to add two numbers, we require two memory locations to take the input and one more memory location to store the result: three memory locations in total.
Consider the following example:
3 + 6 = 9

a   b   c
Here 'a', 'b' and 'c' are the memory locations, from now on called variables. When we generalize this, it appears as
c = a + b
To put it simply, we have two inputs and one output. To store one output and two inputs we need three variables or memory locations. The algorithm to add two numbers is now given below.


Algorithm : Add_two_numbers
Input : a, b, two numbers to be added
Output : c updated
Method
c= a + b
Display c
Algorithm ends

3.2 INPUT THREE NUMBERS AND OUTPUT THEM IN ASCENDING ORDER


The problem of sorting is a major computer science problem. The problem of sorting is dealt with in depth in the next chapter; here only the problem of sorting three numbers is presented.
To sort three numbers in ascending order, one has to follow the procedure described here. Find the
smallest among the three numbers, that becomes the first element in the sorted list, and find the smallest
among the remaining two elements and that becomes the second element and remaining becomes the last
element.
Consider the following example: 3 , 1, 7 is the input
The smallest amongst these three is 1, therefore 1 becomes the first element in the sorted list. Amongst
3 and 7, 3 is smaller, hence 3 is the second element and 7 is left out and it becomes the last element. Hence
the sorted list becomes 1, 3, 7.
Algorithm : Sorting_3_Elements
Input : a, b, c, the three numbers to be sorted
Output : The Ascending order sequence of the 3 elements
Method:
small = a // Assign ‘a’ to small
k1 = b
k2 = c
if (small > b )

small = b
k1 = a
k2 = c
end_if
if (small > c)
small = c
k1 = a
k2 = b
end_if
Display small
If (k1 > k2 )
Display k2
Display k1
Else
Display k1
Display k2
end_if
Algorithm ends
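The same logic can be tried in C as sketched below; the function name sort3 and the sample calls are assumptions of this illustration.

#include <stdio.h>

/* Print three numbers in ascending order, following the algorithm above. */
void sort3(int a, int b, int c)
{
    int small = a, k1 = b, k2 = c;   /* smallest so far and the two remaining values */

    if (small > b) { small = b; k1 = a; k2 = c; }
    if (small > c) { small = c; k1 = a; k2 = b; }

    printf("%d ", small);
    if (k1 > k2)
        printf("%d %d\n", k2, k1);
    else
        printf("%d %d\n", k1, k2);
}

int main(void)
{
    sort3(3, 1, 7);   /* prints 1 3 7 */
    sort3(7, 3, 1);   /* prints 1 3 7 */
    return 0;
}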

3.3 TO FIND THE QUADRANT OF A GIVEN CO-ORDINATE POSITION


The problem of finding the quadrant in which a given co-ordinate position (x, y) lies is a computer graphics problem. In a Cartesian system, if both the X and Y co-ordinates are positive then the point is said to be in the First Quadrant of the Cartesian plane. If X and Y are both negative then it is said to be in the Third Quadrant. If X is negative and Y is positive then it is said to be in the Second Quadrant, and if X is positive and Y is negative then it is said to be in the Fourth Quadrant of the Cartesian plane. A diagram of the Cartesian plane is given in Fig. 3.1 below to make the facts clearer.

Fig. 3.1 Cartesian plane


The co-ordinate position thus entered has to fall in one of those quadrants, based on the signs of the X and Y co-ordinates.
Algorithm : Quadrant_Finder
Input : x, X co-ordinate
y, Y co-ordinate
Output : Corresponding quadrant
Method
If( x >=0)
If(y>=0)
Display ‘I –Quadrant’
Else
Display ‘IV-Quadrant’
end_if
else
If(y>=0)
Display ‘II –Quadrant’
Else

Display ‘III-Quadrant’
end_if
end_if
Algorithm ends

3.4 TO FIND THE ROOTS OF A QUADRATIC EQUATION


The general form of the quadratic equation is ax^2 + bx + c = 0.

R1 = (-b + sqrt(b^2 - 4ac)) / 2a

R2 = (-b - sqrt(b^2 - 4ac)) / 2a

And these are the two roots of the equation.
The term (b^2 - 4ac) in the solution of the roots is called the discriminant. Basically the roots can be real or imaginary in nature. If the roots are real in nature then they can be either real and distinct, or both of them can be equal. If they are imaginary, the roots exist as complex conjugates.
The problem of identifying the nature of the roots is solved by checking the nature of the discriminant. The nature of the discriminant reflects the nature of the solution of the quadratic equation.
If the discriminant is negative then the roots will be imaginary in nature, i.e., if (b^2 - 4ac) = some -ve value.
Then sqrt(b^2 - 4ac) is complex in nature, because the square root of any negative number always results in an imaginary value.

Now the roots R1 and R2 have imaginary parts and hence they are imaginary in nature.

The other possibility is that the roots are real. For the roots to be real and equal, the discriminant (b^2 - 4ac) = 0. If it is equal to zero then the discriminant vanishes and hence the roots are real and equal, which are R1 = R2 = -b/2a.
In case (b^2 - 4ac) > 0, the roots are real and distinct. The algorithm to find the solution of a quadratic equation is given below.
Algorithm: Quadratic_solver
Input : a,b,c the co-efficients of the Quadratic Equation
Output : The two roots of the Equation
Method
disc = ((b*b) –(4*a*c))
if (disc = 0)
display ‘roots are real and equal’
r1 = -b/2a
r2 = -b/2a
display r1
display r2
else
if(disc>0)
display ' roots are real and distinct'

r1 = (-b+sqrt(disc))/2a
r2 = (-b-sqrt(disc))/2a
display r1
display r2
else
display ‘roots are complex’

display ‘real part’,-b/2a



display ‘imaginary part’, sqrt(absolute_value_of(disc))


display 'the two roots exist as conjugates'
end_if
end_if
Algorithm ends
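A C sketch of the above algorithm is given below; unlike the pseudo code, the imaginary part printed here is sqrt(|disc|)/2a, which is the exact imaginary part of the conjugate roots. Compile with the math library (e.g., -lm).

#include <stdio.h>
#include <math.h>

/* Roots of ax^2 + bx + c = 0 (a is assumed to be non-zero in this sketch). */
void quadratic_solver(double a, double b, double c)
{
    double disc = b * b - 4.0 * a * c;

    if (disc == 0) {
        printf("roots are real and equal: %g\n", -b / (2.0 * a));
    } else if (disc > 0) {
        printf("roots are real and distinct: %g and %g\n",
               (-b + sqrt(disc)) / (2.0 * a),
               (-b - sqrt(disc)) / (2.0 * a));
    } else {
        printf("roots are complex conjugates: %g +/- %gi\n",
               -b / (2.0 * a), sqrt(-disc) / (2.0 * a));
    }
}

int main(void)
{
    quadratic_solver(1, -5, 6);   /* 3 and 2       */
    quadratic_solver(1, 2, 1);    /* -1 (repeated) */
    quadratic_solver(1, 0, 1);    /* 0 +/- 1i      */
    return 0;
}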

3.5 CHECKING FOR PRIME


Prime number checking has been an interesting problem in computer science for a very long time. A prime number is divisible only by 1 and by itself, and not by any other number. Prime number checking is technically called primality testing.
First Approach
Here we keep dividing the number by every value from 2 to half the value of that number. For example, if 47 is the number under consideration for primality testing, then we would divide 47 by every number from 2 to 47/2 (23.5, approx. 24) and check whether any number in this interval divides 47 without leaving a remainder. If so, then the given number is not prime. In the case of 47, no number between 2 and 24 divides it exactly, and hence it is declared a prime number.
Algorithm: Primality_Testing (First approach)
Input: n , number
flag, test condition
Output: flag updated
Method
flag = 0
for(i=2 to n/2 in steps of +1 and flag = 0)
if( n % i = 0) // n mod i
flag = 1
end-if
end-for

if(flag = 0)

display ‘Number is prime’


else
display ‘Number is not prime’
end_if
Algorithm ends
Second Approach
It is proved in number theory that instead of setting the upper limit of the divisors to n/2, we can simply set it to the square root of the number under consideration, and the test works exactly as well as the previous algorithm. Note that this algorithm achieves the same result with fewer operations because we reduce the size of the interval.
Algorithm: Primality_Testing (Second approach)
Input : n , number
flag, test condition
Output : flag updated
Method
flag = 0
for(i=2 to square_root(n) in steps of +1 and flag = 0)
if( n % i = 0) // n mod i
flag = 1
end_if
end-for

if(flag = 0)
display ‘Number is prime’
else
display ‘Number is not prime’
end_if
Algorithm ends
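The second approach can be tried in C as sketched below; treating numbers below 2 as non-prime is an assumption added in this illustration.

#include <stdio.h>

/* Primality test using divisors from 2 up to the square root of n (second approach). */
int is_prime(int n)
{
    if (n < 2)
        return 0;                       /* 0 and 1 are not prime          */
    for (int i = 2; (long)i * i <= n; i++)
        if (n % i == 0)
            return 0;                   /* a divisor exists: not prime    */
    return 1;
}

int main(void)
{
    printf("%d\n", is_prime(47));   /* 1: prime   */
    printf("%d\n", is_prime(51));   /* 0: 3 * 17  */
    return 0;
}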

3.6 FACTORIAL OF A NUMBER


Finding the factorial of a given number is another interesting problem; mathematically it is represented as n!.
For example: 5! = 5*4*3*2*1.
Do not forget that 1! = 1 and 0! = 1.
We can now generalize the factorial of a given number, which is anything other than zero or one, as the product of all the numbers ranging from the given number down to 1,
i.e. n! = n * (n – 1) * (n – 2) * ... * 1
Algorithm : Factorial
Input : n
Output : Factorial of n
Method
fact = 1
for i = n to 1 in steps of –1 do
fact = fact*i
end_for

display ‘factorial = ‘,fact


Algorithm ends
In the above algorithm we have implemented the logic of the equation
n! = n * (n – 1) * (n – 2 ) * . . . *1.
The same can be achieved by the following algorithm which follows incremental steps rather than
decremental steps of the given algorithm.
Algorithm : Factorial
Input : n
Output : Factorial of n

Method
fact = 1
for i = 1 to n in steps of 1 do
fact = fact*i
end_for
display ‘factorial = ‘,fact
Algorithm ends

3.7 TO GENERATE FIBONACCI SERIES


The Fibonacci Series is as follows 0 , 1 , 1 , 2 , 3 , 5 , 8 , …
The property of this series is that any given element in the series, from the third element onward, is the sum of its first and second predecessors. For example, consider 8. Its first and second predecessors are 5 and 3. Therefore, 5 + 3 = 8. Consider 3. Its first and second predecessors are 2 and 1, and 2 + 1 = 3. The property of the Fibonacci series holds.
The following is the algorithm to generate the Fibonacci series up to a given number of elements.
Algorithm : Fibonacci_Series
Input : n, the number of elements in the series
Output : fibonacci series upto the nth element
Method
a = -1
b= 1
for(i=1 to n in steps of +1 do)
c=a+b
display c
a=b
b=c
end_for
Algorithm ends
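A C sketch of the series generator is given below; the seed values -1 and 1 mirror the algorithm above so that the first term printed is 0.

#include <stdio.h>

/* Print the first n Fibonacci numbers, mirroring the algorithm above. */
void fibonacci_series(int n)
{
    int a = -1, b = 1;                   /* seeds chosen so that the first term is 0 */
    for (int i = 1; i <= n; i++) {
        int c = a + b;
        printf("%d ", c);
        a = b;
        b = c;
    }
    printf("\n");
}

int main(void)
{
    fibonacci_series(8);   /* 0 1 1 2 3 5 8 13 */
    return 0;
}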

3.8 SUM OF N NUMBERS AND AVERAGE


Consider the addition of five numbers 12 , 15 , 10 , 5 and 1.
Initially add 12 and 15 => 12 +15 = 27
Subsequently add 10 to 27 => 10 + 27 = 37
Subsequently add 5 to 37 => 37 + 5 = 42
And Finally add 1 to 42 => 42 + 1 = 43
Which is nothing but 12 + 15 + 10 + 5 + 1 = 43.
This logic of adding two successive elements and iterating the same process over the remaining elements is described in the algorithm given below.
The resulting sum divided by the number of elements yields the average of the domain of input set.
Algorithm : Sum_and_Average
Input : n , number of elements
a(n) , array of n elements
Output : Sum and Average of ‘n’ array elements
Method
Display ‘Enter the number of elements ‘
Accept ‘n’
Display ‘ Enter the elements one by one’
For (i = 1 to n in steps of +1 do)
Accept a(i)
end_for
sum = 0
For (i = 1 to n in steps of +1 do)
sum = sum + a(i)
end_for
Display ‘Sum = ’,sum
Display ‘Average =’,sum/n
Algorithm ends

3.9 TO ADD TWO MATRICES


Consider

| a11 a12 a13 |   | b11 b12 b13 |   | a11+b11  a12+b12  a13+b13 |
| a21 a22 a23 | + | b21 b22 b23 | = | a21+b21  a22+b22  a23+b23 |
| a31 a32 a33 |   | b31 b32 b33 |   | a31+b31  a32+b32  a33+b33 |

The above example is of matrix addition. Matrix addition is possible iff the orders of the two matrices are the same. Respective matrix elements are added together into a third matrix and the result is thus obtained. The procedure is described below.
The two matrices have to be read initially into two-dimensional arrays, and then the respective row and column elements of the matrices have to be added to get a third matrix, which leads to the realization of matrix addition.
Algorithm : Matrix Addition
Input : n , order of the matrices
a(n,n),b(n,n) the two input matrices
Output : c(n,n) the resultant Sum matrix
Method
Accept ‘n’
for(i = 1 to n in steps + 1 do)
for(j= 1 to n in steps of + 1do)
accept a(i,j)
end_for
end_for

for(i = 1 to n in steps + 1 do)


for(j= 1 to n in steps of + 1do)
accept b(i,j)
end_for
end_for

for(i = 1 to n in steps + 1 do)


for(j= 1 to n in steps of + 1do)
c(i,j) = a(i,j) + b(i,j)
end_for
end_for

for(i = 1 to n in steps + 1 do)


for(j= 1 to n in steps of + 1do)
display c(i,j)
end_for
end_for
Algorithm ends
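A C sketch of matrix addition is given below; for brevity the two matrices are fixed in the program rather than accepted from the user, which is an assumption of this illustration.

#include <stdio.h>
#define N 2   /* order of the matrices used in this sketch */

int main(void)
{
    int a[N][N] = {{1, 2}, {3, 4}};
    int b[N][N] = {{5, 6}, {7, 8}};
    int c[N][N];

    /* c(i,j) = a(i,j) + b(i,j) */
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            c[i][j] = a[i][j] + b[i][j];

    for (int i = 0; i < N; i++) {
        for (int j = 0; j < N; j++)
            printf("%d ", c[i][j]);      /* prints: 6 8 / 10 12 */
        printf("\n");
    }
    return 0;
}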
SUMMARY
In this chapter some simple problems, the problem-solving concepts, and the concept of designing algorithms for the problems have been presented. For the ease of students, some simple problems are considered and the algorithms are developed. The students are expected to implement all the above explained algorithms and experience the way they work.
EXERCISE
1. Design and Develop algorithms for multiplying n integers.
Hint: Follow the algorithm to add n numbers given in the text
2. Design and develop algorithm for finding the middle element in the three numbers.
3. Develop algorithm to find the number of Permutations and Combinations for a given n and r
4. Design an algorithm to generate all prime numbers within the limits l1 and l2.
5. Design an algorithm to find the reverse of a number.
6. A number is said to be a palindrome if the reverse of the number is the same as the original. Design an algorithm to check whether a number is a palindrome or not.
7. Design an algorithm to check whether a given string is a palindrome or not.
8. Implement all the devised algorithms and also the algorithms discussed in the chapter.
Chapter 4
Searching and Sorting

4.1 SEARCHING

Let us assume that we have a sequential file and we wish to retrieve an element matching a key 'k'. Then we have to search the entire file from the beginning till the end to check whether the element matching k is present in the file or not.
There are a number of complex searching algorithms to serve the purpose of searching. The linear search and binary search methods are relatively straightforward methods of searching.
4.1.1 Sequential search
In this method, we start to search from the beginning of the list and examine each element till the end
of the list. If the desired element is found we stop the search and return the index of that element. If the
item is not found and the list is exhausted the search returns a zero value.

In the worst case the item is not found or the search item is the last (nth) element. For both situations we must examine all n elements of the array.
The algorithm for sequential search is as follows,
Algorithm : sequential search
Input : A, vector of n elements
K, search element
Output : j –index of k


Method : i=1
While(i<=n)
{
if(A[i]=k)
{
write(“search successful”)
write(k is at location i)
exit();
}
else
i++
if end
while end
write (search unsuccessful);
algorithm ends.
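A C sketch of sequential search is given below; returning the 1-based position (or 0 when the key is absent) mirrors the algorithm, while the sample data set is taken from Exercise 2 below.

#include <stdio.h>

/* Return the (1-based) position of k in A[0..n-1], or 0 if not found. */
int sequential_search(const int A[], int n, int k)
{
    for (int i = 0; i < n; i++)
        if (A[i] == k)
            return i + 1;     /* search successful   */
    return 0;                 /* search unsuccessful */
}

int main(void)
{
    int A[] = {10, 30, 45, 54, 56, 78, 213, 415, 500};
    int n = sizeof(A) / sizeof(A[0]);
    printf("%d\n", sequential_search(A, n, 30));   /* 2 */
    printf("%d\n", sequential_search(A, n, 150));  /* 0 */
    return 0;
}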
4.1.2 Binary Search
Binary search method is also relatively simple method. For this method it is necessary to have the
vector in an alphabetical or numerically increasing order. A search for a particular item with X resembles
the search for a word in the dictionary. The approximate mid entry is located and its key value is examined.
If the mid value is greater than X, then the list is chopped off at the (mid-1)th location. Now the list gets reduced to half the original list. The middle entry of the left-reduced list is examined in a similar manner. This procedure is repeated until the item is found or the list has no more elements. On the other hand, if the mid value is lesser than X, then the list is chopped off at the (mid+1)th location. The middle entry of the right-reduced list is examined and the procedure is continued until the desired key is found or the search
interval is exhausted.
The algorithm for binary search is as follows,
Algorithm : binary search
Input : A, vector of n elements
K, search element

Output : low –index of k


Method : low=1,high=n
While(low<=high-1)
{
mid=(low+high)/2
if(k<=a[mid])
high=mid
else
low=mid+1
if end
}
while end
if(k=A[low])
{
write(“search successful”)
write(k is at location low)
exit();
}
else
write (search unsuccessful);
if end;
Algorithm ends.
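A C sketch of binary search in the same deferred-test style is given below; zero-based indexing and the 1-based return value are assumptions of this illustration.

#include <stdio.h>

/* Binary search on the sorted array A[0..n-1]; returns the 1-based
 * position of k, or 0 if k is absent. */
int binary_search(const int A[], int n, int k)
{
    int low = 0, high = n - 1;
    while (low < high) {
        int mid = (low + high) / 2;
        if (k <= A[mid])
            high = mid;       /* k lies in the left-reduced list  */
        else
            low = mid + 1;    /* k lies in the right-reduced list */
    }
    if (n > 0 && A[low] == k)
        return low + 1;       /* search successful   */
    return 0;                 /* search unsuccessful */
}

int main(void)
{
    int A[] = {10, 30, 45, 54, 56, 78, 213, 415, 500};
    int n = sizeof(A) / sizeof(A[0]);
    printf("%d\n", binary_search(A, n, 54));   /* 4 */
    printf("%d\n", binary_search(A, n, 700));  /* 0 */
    return 0;
}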

4.2 SORTING
One of the major applications in computer science is the sorting of information in a table. Sorting
algorithms arrange items in a set according to a predefined ordering relation. The most common types of
data are string information and numerical information. The ordering relation for numeric data simply

involves arranging items in sequence from smallest to largest and from largest to smallest, which is called
ascending and descending order respectively.
For example, the items in a set arranged in non-decreasing order are {7,11,13,16,16,19,23}, and the same items arranged in descending order are of the form {23,19,16,16,13,11,7}.
Similarly, for string information, {a, abacus, above, be, become, beyond} is in ascending order and {beyond, become, be, above, abacus, a} is in descending order.
There are numerous methods available for sorting information, but no single one of them is best for all applications. The performance of the methods depends on parameters like the size of the data set, the degree of relative order already present in the data, etc.
4.2.1 Insertion Sorting
The first class of sorting algorithms that we consider comprises algorithms that sort by insertion. An algorithm that sorts by insertion takes the initial, unsorted sequence S = {s1, s2, ..., sn} and computes a series of sorted sequences S'0, S'1, ..., S'n, as follows:
1. The first sequence in the series, S'0, is the empty sequence, i.e., S'0 = {}.
2. Given a sequence S'i in the series, for 0 <= i < n, the next sequence in the series, S'(i+1), is obtained by inserting the (i+1)th element of the unsorted sequence, s(i+1), into the correct position in S'i.
Each sequence S'i, 0 <= i <= n, contains the first i elements of the unsorted sequence S. Therefore, the final sequence in the series, S'n, is the sorted sequence we seek, i.e., S'n = S'.
Fig. 4.1 illustrates the insertion sorting algorithm. The figure shows the progression of the insertion sorting algorithm as it sorts an array of ten integers. The array is sorted in place, i.e., the initial unsorted sequence, S, and the series of sorted sequences, S'0, S'1, ..., S'n, occupy the same array.
In the ith step, the element at position i in the array is inserted into the sorted sequence S'i, which occupies array positions 0 to (i-1). After this is done, array positions 0 to i contain the i+1 elements of S'(i+1). Array positions (i+1) to (n-1) contain the remaining n-i-1 elements of the unsorted sequence S.

As shown in Fig. 4.1, the first step (i=0) is trivial—inserting an element into the empty list involves no
work. Altogether, n-1 non-trivial insertions are required to sort a list of n elements.

Fig. 4.1 Insertion sort


Algorithm : Insertion Sort
Input : n, Size of the input domain
a[1..n], array of n elements
Output : a[1..n] sorted

Method
for j= 2 to n in steps of 1 do
item = a[j]
i = j-1
while((i>=1) and (item<a[i])) do
a[i+1] = a[i]
i = i-1
while end
a[i+1] = item
for end
Algorithm ends
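The algorithm translates almost directly into C, as sketched below on the data set of Exercise 5; zero-based array indexing is the only change made in this illustration.

#include <stdio.h>

/* Insertion sort on a[0..n-1], in place. */
void insertion_sort(int a[], int n)
{
    for (int j = 1; j < n; j++) {
        int item = a[j];
        int i = j - 1;
        /* shift larger elements of the sorted part one place to the right */
        while (i >= 0 && item < a[i]) {
            a[i + 1] = a[i];
            i--;
        }
        a[i + 1] = item;          /* insert item at its correct position */
    }
}

int main(void)
{
    int a[] = {13, 45, 12, 9, 1, 10, 40};
    int n = sizeof(a) / sizeof(a[0]);
    insertion_sort(a, n);
    for (int i = 0; i < n; i++)
        printf("%d ", a[i]);      /* 1 9 10 12 13 40 45 */
    printf("\n");
    return 0;
}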
4.2.2 Selection Sorting
Such algorithms construct the sorted sequence one element at a time by adding elements to the sorted
sequence in order. At each step, the next element to be added to the sorted sequence is selected from the
remaining elements.
Because the elements are added to the sorted sequence in order, they are always added at one end.
This is what makes selection sorting different from insertion sorting. In insertion sorting elements are
added to the sorted sequence in an arbitrary order. Therefore, the position in the sorted sequence at which
each subsequent element is inserted is arbitrary.
Both selection sorts described in this section sort the arrays in place. Consequently, the sorts are
implemented by exchanging array elements. Nevertheless, selection differs from exchange sorting because
at each step we select the next element of the sorted sequence from the remaining elements and then we
move it into its final position in the array by exchanging it with whatever happens to be occupying that
position.
Straight Selection Sorting
The simplest of the selection sorts is called straight selection . Fig.4.2 illustrates how straight selection
works. In the version shown, the sorted list is constructed from the right (i.e., from the largest to the
smallest element values).

At each step of the algorithm, a linear search of the unsorted elements is made in order to determine
the position of the largest remaining element. That element is then moved into the correct position of the
array by swapping it with the element which currently occupies that position.

Fig. 4.2 Selection sort


For example, in the first step shown in Fig 4.2, a linear search of the entire array reveals that 9 is the
largest element. Since 9 is the largest element, it belongs in the last array position. To move it there, we
swap it with the 4 that initially occupies that position. The second step of the algorithm identifies 6 as the largest remaining element and moves it next to the 9. Each subsequent step of the algorithm moves one
element into its final position. Therefore, the algorithm is done after n-1 such steps.

Algorithm : Selection Sort


Input : n, Size of the input domain
a[1..n], array of n elements
Output: a[1..n] sorted
Method:
for i = 1 to n in steps of 1 do
j=i
for k = i+1 to n in steps of 1 do
if(a[k]< a[j]) then j = k
for end
Interchange a[i] and a[j]
For end
Algorithm ends.
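A C sketch of straight selection sort is given below; it follows the pseudo code above in selecting the smallest remaining element (the text of Fig. 4.2 describes the mirror-image version that selects the largest).

#include <stdio.h>

/* Straight selection sort on a[0..n-1]: at each step the smallest
 * remaining element is moved to the front of the unsorted part. */
void selection_sort(int a[], int n)
{
    for (int i = 0; i < n - 1; i++) {
        int j = i;
        for (int k = i + 1; k < n; k++)
            if (a[k] < a[j])
                j = k;                         /* remember the smallest remaining element */
        int t = a[i]; a[i] = a[j]; a[j] = t;   /* interchange a[i] and a[j]               */
    }
}

int main(void)
{
    int a[] = {9, 6, 2, 4, 1};
    int n = sizeof(a) / sizeof(a[0]);
    selection_sort(a, n);
    for (int i = 0; i < n; i++)
        printf("%d ", a[i]);      /* 1 2 4 6 9 */
    printf("\n");
    return 0;
}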
Two more sorting algorithms are discussed in the next chapter.
SUMMARY
In this chapter, two techniques for checking whether an element is present in a list of elements are presented. Linear search is best employed when the data searching operation is used minimally. But when data searching on the same data set has to be done several times, it is better to sort that data set and apply binary search. Binary search reduces the effort in searching compared with linear search. The chapter also presents some sorting techniques in sequel to searching. Selection sort is the simplest of all sorting algorithms; it selects the largest (or smallest) remaining element at each iteration. Insertion sort builds a sorted list by inserting elements into an already sorted sublist.
EXERCISE
1. What are the serious shortcomings of the binary search method and the sequential search method?
2. Consider a data set of nine elements {10, 30, 45, 54, 56, 78, 213, 415, 500} and trace the linear search algorithm
to find whether the keys 30, 150, 700 are present in the data set or not.
3. Trace the Binary search algorithm on the same data set and same key elements of problem 2.
4. Try to know more sorting techniques and make a comparative study of them.
5. Hand Simulate Insertion Sort on the data set { 13 , 45 , 12, 9 , 1, 10, 40}
6. Implement all the algorithms designed in the chapter.
Chapter 5
Recursion

5.1 WHAT IS RECURSION?

We look at the concept of recursion, which is one of the very powerful programming concepts supported by most of the languages. At the same time, it is also a fact that most beginners are confused by the way it works and are unable to use it effectively. Also, some languages may not support recursion. In such cases, it may become necessary to rewrite recursive functions into non-recursive ones. All these and many other aspects are dealt with in this chapter.
Recursion is an offshoot of the concept of subprograms. A subprogram, as we know, is the concept of writing separate modules which can be called from other points in the same program or from other programs. Then came a concept wherein any subprogram can call any other subprogram: the control goes to the called subprogram, which performs the assigned tasks and comes back to the caller program.
Then comes the question – can a program call itself? Theoretically it is possible. If so, where do we use this normally? A subprogram is called to perform a function which the caller subprogram cannot perform itself. But if the caller program calls itself, what purpose does it serve? The answer is that the subprogram no doubt calls itself, but with a different value of the parameter. In fact, in most cases, the calling continues until some specific value of the parameter is reached.
We now understand that recursion is a process of defining a process, a problem, or an object in terms of itself. Recursion is one of the applications of stacks. Recursive mechanisms are extremely powerful but, even more importantly, many times they can express an otherwise complex process very clearly. Any program can be written using recursion; of course, the recursive program in that case could be tougher to understand. Hence, recursion can be used when the problem itself can be defined recursively.
The general procedure for any recursive algorithm is as follows:
1. Save the parameters, local variables and return addresses.


2. If the termination criterion is reached, perform the final computation and go to step 3; otherwise, perform the partial computations and go back to step 1 (i.e., initiate a recursive call).
3. Restore the most recently saved parameters, local variables and return address and go to the
latest return address.

5.2 WHY DO WE NEED RECURSION?


When iteration can be used easily and is supported by most programming languages, why do we need recursion at all? The answer is that iteration has certain demerits, as made clear below:
1. Mathematical functions such as factorial and Fibonacci series generation can be implemented more easily using recursion than iteration.
2. In iterative techniques, looping of statements is very much necessary.

5.3 WHEN TO USE RECURSION?


Recursion can be used for repetitive computations in which each action is stated in terms of a previous result. There are two conditions that must be satisfied by any recursive procedure:
1. Each time a function calls itself it should get nearer to the solution.
2. There must be a decision criterion for stopping the process.
In making the decision about whether to write an algorithm in recursive or non-recursive form, it is
always advisable to consider a tree structure for the problem. If the structure is simple then use non-
recursive form. If the tree appears quite bushy, with little duplication of tasks, then recursion is suitable.
Recursion is a top down approach to problem solving. It divides the problem into pieces or selects out
one key step, postponing the rest. Whereas, iteration is more of a bottom up approach. It begins with what
is known and from this constructs the solution step by step.
Let us now look at some basic examples which are often devised using recursion.

5.4 FACTORIAL OF A POSITIVE INTEGER


The factorial of a number 'n' = n * (n-1) * (n-2) * … * 3 * 2 * 1. An iterative way of obtaining the
factorial of a given number is to put a 'for' loop to repeat the multiplication n times: we start with 1, then
evaluate 1 * 2, then that product * 3, and so on up to n. The factorial of a number can also be obtained recursively.
Suppose we are asked to evaluate N!. If we somehow know (N-1)! then we can evaluate N! = N*(N-
1)!. But how do we get (N-1)!? The same logic can be employed to evaluate (N-1)! = (N-1) * (N-2)!.
The problem of finding the factorial of a given number can be recursively defined as follows (where n is a positive integer):

Factorial(n) = 1,                      if n = 1
             = n * Factorial(n - 1),   if n > 1
Thus, the algorithm developed to compute the factorial of a given number is


Algorithm: Factorial
Input: n, the integer value whose factorial is to be computed
Output: factorial of n
Method:
If (n==1) then
Return (1)
Else
Return (n * Factorial(n-1))
If end
Algorithm ends
The students are advised to try various values of n to actually see how the method works.
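For readers who wish to try this on a machine, a minimal C rendering of the above algorithm might look as follows (the function name fact and the use of long for the result are our own choices, not part of the original algorithm):

    #include <stdio.h>

    /* Recursive factorial, following the algorithm above: n == 1 is the
       termination criterion, otherwise the result is n * fact(n-1). */
    long fact(int n)
    {
        if (n == 1)
            return 1;
        else
            return (long)n * fact(n - 1);
    }

    int main(void)
    {
        int n;
        for (n = 1; n <= 10; n++)
            printf("%d! = %ld\n", n, fact(n));
        return 0;
    }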

5.5 FINDING THE NTH FIBONACCI NUMBER


As already introduced in Chapter 3, a Fibonacci series is a sequence of integers 0, 1, 1, 2, 3, 5, … i.e., the Fibonacci sequence starts from 0, 1 and after that each new term is the sum of the previous two terms. We shall look here at finding the nth Fibonacci number in the series. For instance, the first Fibonacci number is 0 and the second is 1. Thus, if the 6th Fibonacci number is asked for, we are expected to produce the number 5. In general, the kth Fibonacci number can be obtained by summing up the (k-1)th and (k-2)th Fibonacci numbers. This process can be applied recursively, and finally one obtains the kth Fibonacci number. In the algorithm below, positions are counted from 0, so that Fibonacci(0) = 0 and Fibonacci(1) = 1.
Thus, the following is the recursive algorithm designed to find the nth Fibonacci number.
Algorithm: Fibonacci
Input: n, the position at which the Fibonacci number has to be computed
Output: nth Fibonacci number


Method:
If (n==0)
Return (0)
Else
If (n == 1)
Return (1)
Else
Return (Fibonacci (n-1) + Fibonacci (n-2))
If end
If end
Algorithm ends
Again, the correctness can be checked for various input values.
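The algorithm translates into C roughly as below; the function name fib is our own, and positions are counted from 0 exactly as in the algorithm:

    #include <stdio.h>

    /* Recursive Fibonacci as per the algorithm:
       fib(0) = 0, fib(1) = 1, otherwise fib(n-1) + fib(n-2). */
    int fib(int n)
    {
        if (n == 0)
            return 0;
        else if (n == 1)
            return 1;
        else
            return fib(n - 1) + fib(n - 2);
    }

    int main(void)
    {
        int i;
        for (i = 0; i < 10; i++)
            printf("%d ", fib(i));   /* prints 0 1 1 2 3 5 8 13 21 34 */
        printf("\n");
        return 0;
    }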
We now take up a third example. Though the sum of the first n integers is normally not computed this way, it is nevertheless a very useful illustration of recursion.

5.6 SUM OF FIRST N INTEGERS


The sum of the integers from 1 to n is the sum of the integers from 1 to n-1, plus n. The sum of the integers from 1 to n-1 is, in turn, the sum from 1 to n-2, plus n-1, and so on. Eventually, we know that the sum of the first positive integer is simply 1. Therefore, we can define a terminating condition for this smallest subset of the problem. The recursive algorithm to achieve this is as follows
Algorithm : SumPosInt
Input : n, the upper limit
Output : Sum of first n positive integers
Method:
if (n <= 0) // We only want positive integers
return 0;
else
if (n == 1) // Our terminating condition
return 1;
else
return (n + SumPosInt(n - 1)); // recursive step
if end
if end
Algorithm ends
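A direct C rendering of SumPosInt might look as follows (the function name sum_pos_int is our own choice):

    #include <stdio.h>

    /* Sum of the first n positive integers, computed recursively.
       n <= 0 is rejected with 0; n == 1 is the terminating condition. */
    int sum_pos_int(int n)
    {
        if (n <= 0)
            return 0;                        /* we only want positive integers */
        else if (n == 1)
            return 1;                        /* terminating condition */
        else
            return n + sum_pos_int(n - 1);   /* recursive step */
    }

    int main(void)
    {
        printf("%d\n", sum_pos_int(100));    /* prints 5050 */
        return 0;
    }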

5.7 BINARY SEARCH


Binary search, as explained earlier, is a process of searching for the presence or absence of a key element in a sorted list. The approximate mid entry is located and its key value is examined. If the mid value is greater than the key K, the search continues in the left half of the list, i.e., up to the (mid-1)th location; otherwise it continues in the right half. Either way, the list gets reduced to half of the original list, and the middle entry of the reduced list is examined in a similar manner. Since the same process repeats on smaller and smaller lists, one can naturally think of using a recursive algorithm to solve the problem.
The recursive algorithm for binary search is as follows,
Algorithm : binary search
Input : A, vector of n elements
K, search element
Low, the lower limit
High, the upper limit //initially low=1 and high=n the number of elements
Output : the position of the K
Method : if (low <= high)
mid=(low+high)/2
if( a[mid] == K)
return(mid)
else
if (a[mid] < K)
Binary search(A, K, mid+1, high)
else
Binary search(A, K, low, mid-1)
if end
if end
else
return(0)
if end
Algorithm ends.
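A C sketch of the recursive binary search is given below. It uses 0-based indexing and returns -1 when the key is absent (the pseudocode above returns 0, which works only because it assumes 1-based positions); the name bin_search is our own:

    #include <stdio.h>

    /* Recursive binary search on the sorted array a[low..high].
       Returns the index of k, or -1 if k is not present. */
    int bin_search(const int a[], int k, int low, int high)
    {
        if (low > high)
            return -1;
        int mid = (low + high) / 2;
        if (a[mid] == k)
            return mid;
        else if (a[mid] < k)
            return bin_search(a, k, mid + 1, high);   /* search right half */
        else
            return bin_search(a, k, low, mid - 1);    /* search left half */
    }

    int main(void)
    {
        int a[] = {3, 5, 8, 14, 27, 32};
        int n = sizeof(a) / sizeof(a[0]);
        printf("%d\n", bin_search(a, 14, 0, n - 1));  /* prints 3 */
        printf("%d\n", bin_search(a, 4, 0, n - 1));   /* prints -1 */
        return 0;
    }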

5.8 MAXIMUM AND MINIMUM IN THE GIVEN LIST OF N ELEMENTS

Here the problem is to find out the maximum and minimum values in a given list of n data elements. The recursive algorithm designed to serve this purpose is as follows.
Algorithm: Max-Min
Input: p, q, the lower and upper limits of the dataset
max, min, two variables to return the maximum and minimum values in the list
Output: the maximum and minimum values in the data set
Method:
If (p = q) Then
max = a(p)
min = a(q)
Else
If (p = q-1) Then
If a(p) > a(q) Then
max = a(p)
min = a(q)
Else
max = a(q)
min = a(p)
If End
Else
m = (p+q)/2
Max-Min(p, m, max1, min1)
Max-Min(m+1, q, max2, min2)
max = large(max1, max2)
min = small(min1, min2)
If End
If End
Algorithm Ends.
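A possible C rendering of Max-Min is sketched below; it is 0-based and returns the two results through pointer parameters, standing in for the reference parameters max and min of the pseudocode:

    #include <stdio.h>

    /* Recursive maximum and minimum of a[p..q]; results are written
       through the pointers max and min. */
    void max_min(const int a[], int p, int q, int *max, int *min)
    {
        if (p == q) {                       /* one element */
            *max = *min = a[p];
        } else if (p == q - 1) {            /* two elements */
            if (a[p] > a[q]) { *max = a[p]; *min = a[q]; }
            else             { *max = a[q]; *min = a[p]; }
        } else {                            /* split, solve both halves, combine */
            int max1, min1, max2, min2;
            int m = (p + q) / 2;
            max_min(a, p, m, &max1, &min1);
            max_min(a, m + 1, q, &max2, &min2);
            *max = (max1 > max2) ? max1 : max2;
            *min = (min1 < min2) ? min1 : min2;
        }
    }

    int main(void)
    {
        int a[] = {22, 13, -5, -8, 15, 60, 17, 31, 47};
        int mx, mn;
        max_min(a, 0, 8, &mx, &mn);
        printf("max = %d, min = %d\n", mx, mn);   /* max = 60, min = -8 */
        return 0;
    }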

5.9 MERGE SORT


Sorting as stated in Chapter 4, is a process of arranging a set of given numbers in some order. The
basic concept of merge sort is like this. Consider a series of n numbers, say A(1), A(2) ……A(n/2) and
A(n/2 + 1), A(n/2 + 2) ……. A(n). Suppose we individually sort the first set and also the second set. To
get the final sorted list, we merge the two sets into one common set.
We first look into the concept of arranging two individually sorted series of numbers into a common
series using an example:
Let the first set be A = {3, 5, 8, 14, 27, 32}. Let the second set be B = {2, 6, 9, 15, 18, 30}.
The two lists need not be equal in length. For example the first list can have 8 elements and the second
5. Now we want to merge these two lists to form a common list C. Look at the elements A(1) and B(1),
A(1) is 3, B(1) is 2. Since B(1) < A(1), B(1) will be the first element of C i.e., C(1)=2. Now compare
A(1) = 3 with B(2) = 6. Since A(1) is smaller than B(2), A(1) will become the second element of C, i.e., C[ ] = {2, 3}.
Similarly compare A(2) with B(2), since A(2) is smaller, it will be the third element and so on. Finally,
C is built up as C[ ]= {2, 3, 5, 6, 8, 9, 14, 15, 18, 27, 30, 32}.
However the main problem remains. In the above example, we presume that both A & B are originally
sorted; only then can they be merged. But how do we sort them in the first place? To do this and show the
consequent merging process, we look at the following example. Consider the series A= (7 5 15 6 4). Now
divide A into 2 parts (7, 5, 15) and (6, 4). Divide (7, 5, 15) again as ((7, 5) and (15)) and (6, 4) as ((6) (4)).
Again (7, 5) is divided and hence ((7, 5) and (15)) becomes (((7) and (5)) and (15)).
Now, since every list has only one element, we cannot divide further. We start merging them, taking two lists at a time. When we merge (7) and (5) as per the example above, we get (5, 7); merging this with (15) gives (5, 7, 15). On the other side, merging (6) and (4) gives (4, 6). Finally, merging (5, 7, 15) with (4, 6), we get (4, 5, 6, 7, 15). This is the sorted list.
You are now expected to take different sets of examples and see that the method always works.
We design two algorithms in the following. The main algorithm is a recursive algorithm (somewhat similar to the binary search algorithm that we saw earlier) which at times calls the other algorithm, MERGE. The algorithm MERGE does the merging operation as discussed earlier.
Algorithm: MERGESORT
Input: low, high, the lower and upper limits of the list to be sorted
A, the list of elements
Output: A, Sorted list
Method:
If (low<high)
mid = (low + high)/2
MERGESORT(low, mid)
MERGESORT(mid+1, high)
MERGE(A, low, mid, high)
If end
Algorithm ends
You may recall that this algorithm runs on lines parallel to the binary search algorithm. Each time it divides the list (low, high) into two lists (low, mid) and (mid+1, high). It then calls MERGE to combine the two sorted halves.
Algorithm: Merge
Input: low, mid, high, limits of two lists to be merged i.e., A(low, mid) and A(mid+1, high)
A, the list of elements
Output: B, the merged and sorted list
Method:
h = low, i = low, j = mid + 1;
While ((h <= mid) and (j <= high)) do
If (A(h) <= A(j))
B(i) = A(h);
h = h+1;
Else
B(i) = A(j);
j = j+1;
If end
i = i+1;
While end
If (h > mid)
For k = j to high
B(i) = A(k);
i = i+1;
For end
Else
For k = h to mid
B(i) = A(k);
i = i+1;
For end
If end
Algorithm ends
The first portion of the algorithm works exactly like the explanation given earlier, except that instead of using two lists A and B to fill another array C, we use the elements of the same array, A[low, mid] and A[mid+1, high], to write into another array B.
Now, it is not necessary that both the lists from which we keep picking elements to write into B get exhausted simultaneously. If the first list gets exhausted earlier, then the remaining elements of the second list are directly written into B without any further comparisons being needed, and vice versa. This aspect is taken care of by the second half of the algorithm.
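As a sketch of how the two algorithms fit together, a C rendering might look as follows; it is 0-based, uses a single auxiliary array B, and copies the merged run back into A at the end of merge (a detail the pseudocode leaves implicit, since it writes the result into B):

    #include <stdio.h>

    #define MAXN 100
    static int B[MAXN];                      /* auxiliary array used by merge() */

    /* Merge the two sorted halves a[low..mid] and a[mid+1..high]. */
    static void merge(int a[], int low, int mid, int high)
    {
        int h = low, j = mid + 1, i = low, k;
        while (h <= mid && j <= high) {
            if (a[h] <= a[j])
                B[i++] = a[h++];
            else
                B[i++] = a[j++];
        }
        if (h > mid)                         /* first half exhausted */
            for (k = j; k <= high; k++) B[i++] = a[k];
        else                                 /* second half exhausted */
            for (k = h; k <= mid; k++) B[i++] = a[k];
        for (k = low; k <= high; k++)        /* copy the merged run back */
            a[k] = B[k];
    }

    void merge_sort(int a[], int low, int high)
    {
        if (low < high) {
            int mid = (low + high) / 2;
            merge_sort(a, low, mid);
            merge_sort(a, mid + 1, high);
            merge(a, low, mid, high);
        }
    }

    int main(void)
    {
        int a[] = {7, 5, 15, 6, 4};
        int i;
        merge_sort(a, 0, 4);
        for (i = 0; i < 5; i++) printf("%d ", a[i]);   /* 4 5 6 7 15 */
        printf("\n");
        return 0;
    }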

5.10 QUICKSORT
This is another method of sorting that uses a different methodology to arrive at the same sorted result. It "partitions" the list into two parts (similar to merge sort), but not necessarily at the centre; instead, an arbitrarily chosen "pivot" element is moved to its correct place so that all elements to the left of the pivot are smaller than the pivot itself and all those to the right of it are greater. Consider the following example.
75 80 85 90 95 70 65 60 55
To facilitate ordering, we add a very large element, say 1000 at the end. We keep in mind that this is
what we have added and is not a part of the list.
75 80 85 90 95 70 65 60 55 1000
A(1) A(2) A(3) A(4) A(5) A(6) A(7) A(8) A(9) A(10)
Now consider the first element. We want to move this element 75 to its correct position in the list. At
the end of the operation, all elements to the left of 75 should be less than 75 and those to the right should
be greater than 75. This we do as follows:
Start from A(2) and keep moving forward until an element which is greater than 75 is obtained.
Simultaneously start from A(10) and keep moving backward until an element smaller than 75 is obtained.
A(1) A(2) A(3) A(4) A(5) A(6) A(7) A(8) A(9) A(10)
75 80 85 90 95 70 65 60 55 1000
Now A(2) is larger than A(1) and A(9) is less than A(1). So interchange them and continue the
process.
75 55 85 90 95 70 65 60 80 1000
Again A(3) is larger than A(1) and A(8) is less than A(1), so interchange them.
75 55 60 90 95 70 65 85 80 1000
Similarly A(4) is larger than A(1) and A(7) is less than A(1), interchange them
75 55 60 65 95 70 90 85 80 1000
In the next stage A(5) is larger than A(1) and A(6) is lesser than A(1), after interchanging we have
75 55 60 65 70 95 90 85 80 1000
In the next stage A(6) is larger than A(1) and A(5) is lesser than A(1), we can see that the pointers
have crossed each other, hence Interchange A(1) and A(5).
70 55 60 65 75 95 90 85 80 1000
We have completed one series of operations. Note that 75 is at its proper place: all elements to its left are smaller and all those to its right are greater.
Next we repeat the same sequence of operations from A(1) to A(4) and also between A(6) to A(10).
This we keep repeating till single element lists are arrived at.
Now we suggest a detailed algorithm to do the same. As before, two algorithms are written. The
main algorithm, called QuickSort, repeatedly calls itself with fewer and fewer elements. The sequence of operations explained above is done by another algorithm called PARTITION.
Algorithm: QuickSort
Input: p, q, the lower and upper limits of the list of elements A to be sorted
Output: A, the sorted list
Method:
If (p < q)
j = q+1;
PARTITION(p, j) // on return, j holds the final position of the partitioning element
QuickSort(p, j-1)
QuickSort(j+1, q)
If end
Algorithm ends
Algorithm: PARTITION
Input: m, the position of the element whose actual position in the sorted list has to be found
p, one position beyond the upper limit of the list
Output: the final position of the element initially at position m (returned through the parameter p)
Method:
v = A(m);
i = m;
Repeat
Repeat
i = i+1
Until (A(i) >= v);
Repeat
p=p-1
Until (A(p) <= v);
If (i < p)
INTERCHANGE(A(i), A(p))
If end
Until (i >= p)
A(m) = A(p) ;
A(p) = v;
Algorithm ends
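A hedged C sketch of QuickSort and PARTITION is given below. It is 0-based and, instead of appending a very large sentinel value such as 1000, it guards the forward scan with an explicit bound check; the function names are our own:

    #include <stdio.h>

    static void swap(int *x, int *y) { int t = *x; *x = *y; *y = t; }

    /* Partition a[p..q] around the pivot a[p] and return the pivot's
       final position. */
    static int partition(int a[], int p, int q)
    {
        int v = a[p], i = p, j = q + 1;
        for (;;) {
            do { i++; } while (i <= q && a[i] < v);   /* scan forward  */
            do { j--; } while (a[j] > v);             /* scan backward */
            if (i >= j) break;                        /* pointers crossed */
            swap(&a[i], &a[j]);
        }
        swap(&a[p], &a[j]);                           /* place the pivot */
        return j;
    }

    void quick_sort(int a[], int p, int q)
    {
        if (p < q) {
            int j = partition(a, p, q);
            quick_sort(a, p, j - 1);
            quick_sort(a, j + 1, q);
        }
    }

    int main(void)
    {
        int a[] = {75, 80, 85, 90, 95, 70, 65, 60, 55};
        int i, n = 9;
        quick_sort(a, 0, n - 1);
        for (i = 0; i < n; i++) printf("%d ", a[i]);  /* 55 60 65 70 75 80 85 90 95 */
        printf("\n");
        return 0;
    }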

5.11 DEMERITS OF RECURSION


Now, are you thinking that recursion is the best programming technique to be followed? Well, you are
wrong. Recursion has some demerits too, which often make it a not-so-favoured solution to a problem.
Some of the demerits of recursive algorithms are listed below:
1. Many programming languages do not support recursion; in such languages recursive mathematical functions have to be implemented using iterative methods.
2. Even though mathematical functions can be easily implemented using recursion, it is always at the cost of execution time and memory space (see the iterative sketch after this list for a contrast).
Fig. 5.1 Time Space tree of the algorithm Fibonacci


For example, the recursion tree for generating 6 numbers of a Fibonacci series is given in Fig. 5.1. A Fibonacci series is of the form 0, 1, 1, 2, 3, 5, 8, 13, … etc., where each number from the third onwards is the sum of the preceding two. It can be noticed from Fig. 5.1 that f(n-2) is computed twice, f(n-3) thrice and f(n-4) five times.
3. A recursive procedure can be called from within or outside itself, and to ensure its proper functioning it has to save, in some order, the return addresses so that a return to the proper location results when the return to a calling statement is made.
4. Recursive programs need considerably more storage and take more time than their iterative counterparts.
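To illustrate the cost mentioned in point 2, the following small C sketch computes Fibonacci numbers iteratively; each term is computed exactly once, whereas the recursive Fibonacci of Section 5.5 recomputes the same terms many times, as the tree in Fig. 5.1 shows (the function name fib_iter is our own):

    #include <stdio.h>

    /* Iterative Fibonacci: each term is computed exactly once, using two
       variables to remember the previous two terms. */
    int fib_iter(int n)
    {
        int a = 0, b = 1, i;
        if (n == 0) return 0;
        for (i = 2; i <= n; i++) {
            int next = a + b;
            a = b;
            b = next;
        }
        return b;
    }

    int main(void)
    {
        printf("%d\n", fib_iter(6));   /* prints 8: the term at position 6, counting from 0 */
        return 0;
    }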
SUMMARY
In this chapter, the concepts of recursion are introduced. Some simple problems which were discussed in the earlier chapters are reconsidered and recursive algorithms are designed to achieve the same outcome. Two sorting algorithms, namely quick sort and merge sort, are presented along with their recursive algorithms.
EXERCISE
1. Trace out the algorithm Merge Sort on the data set {1,5,2,19,4,17, 45, 12, 6}
2. Trace out the algorithm Quick Sort on the data set {12 , 1, 5,7,19,15, 8, 9, 10}
3. Implement all the algorithms designed in this chapter.
4. Trace out the algorithm MaxMin on a data set consisting of at least 8 elements.
5. List out the merits and demerits of Recursion.
6. The data structure used by recursive algorithms is _____________
7. When is it appropriate to use recursion?
Chapter 6
Representation and Traversal of a
Binary Tree

6.1 BINARY TREE

As already seen in Chapter 2, a tree is a minimally connected graph without a circuit. For a graph with n vertices to be minimally connected there have to be n-1 edges. A tree is, therefore, a connected graph with n vertices and (n-1) edges. It can also be said to be a graph with n vertices and (n-1) edges without a circuit.
Sometimes a specific node is given importance; the tree is then called a “rooted” tree. Such a tree is a collection of vertices and edges in which one vertex has been identified as the root and the remaining vertices are partitioned into k classes (k >= 0), where each class is itself a rooted tree. Each such rooted tree class is called a sub-tree of the given tree.

Fig. 6.1 (a) A complete binary tree (b) A full binary tree (c) A binary tree showing levels

In the above definition, if k is restricted to a maximum of 2, the rooted tree is called a “binary tree”. If k = 0, it implies that there are no children; a node without any children is called a “leaf” node. The number of levels in the tree is called the “depth” of the tree. A “complete” binary tree is one which allows sequencing of the nodes such that all previous levels are maximally accommodated before the next level is occupied, i.e., the siblings are accommodated before the children of any one of them. A binary tree which is maximally accommodated, with all leaves at the same level, is called a “full” binary tree. A full binary tree is always complete, but a complete binary tree need not be full. Fig. 6.1 (a) and (b) show examples of complete and full binary trees.
The maximum number of vertices at each level in a binary tree can be found out as follows:

At level 0: 2^0 vertices
At level 1: 2^1 vertices
At level 2: 2^2 vertices
...
At level i: 2^i vertices


Therefore, the maximum number of vertices in a binary tree of depth 'l' is:

2^0 + 2^1 + 2^2 + ... + 2^l  (i.e., the sum of 2^k for k = 0 to l)  =  2^(l+1) - 1

A binary tree indicating the levels is shown in Fig. 6.1 (c).

6.2 REPRESENTATION OF A BINARY TREE


Binary Tree can be represented using the two methods:
- Static allocation, and
- Dynamic allocation
In static allocation, we have two ways of representing the binary tree. One is through the use of
Adjacency matrices and the other through the use of Single dimensional array representation.
6.2.1 Adjacency Matrix Representation
A two dimensional array can be used to store the adjacency relations very easily and can be used to
represent a binary tree. In this representation, to represent a binary tree with n vertices we use an n×n
matrix. Here the row indices correspond to the parent nodes and the column indices to the child nodes, i.e., a row corresponding to the vertex vi having the entries 'L' and 'R' indicates that vi has as its left child the vertex corresponding to the column with the 'L' entry, and as its right child the vertex corresponding to the column with the 'R' entry. The column with no entry corresponds to the root node; all other columns have exactly one entry. Each row may have 0, 1 or 2 entries: no entry in a row means that the node is a leaf, one entry means that the node has only one child, and two entries denote that the node has both children. 'L' is used to represent a left child entry and 'R' a right child entry.

Fig. 6.2 A Binary tree


The adjacency matrix representation for the binary tree shown in Fig. 6.2 is given below.
     A  B  C  D  E  F  G  H  I  J  K
A    .  L  R  .  .  .  .  .  .  .  .
B    .  .  .  L  R  .  .  .  .  .  .
C    .  .  .  .  .  L  .  .  .  .  .
D    .  .  .  .  .  .  .  .  .  .  .
E    .  .  .  .  .  .  L  R  .  .  .
F    .  .  .  .  .  .  .  .  R  .  .
G    .  .  .  .  .  .  .  .  .  .  .
H    .  .  .  .  .  .  .  .  .  .  .
I    .  .  .  .  .  .  .  .  .  L  R
J    .  .  .  .  .  .  .  .  .  .  .
K    .  .  .  .  .  .  .  .  .  .  .
(a dot denotes an empty entry)
Now, let us see the space utilization of this method of binary tree representation. Let 'n' be the number of vertices. There is space allocated for an n x n matrix, i.e., we have n^2 locations allocated, but we have only n-1 entries in the matrix. Therefore, the percentage of space utilization is given as follows:
Percentage utilization = ((n - 1) / n^2) × 100

The space utilization decreases as n increases. For large ‘n’, the percentage of utilization becomes
negligible. This is, therefore, not the most efficient method of representing a binary tree.
6.2.2 Single Dimensional Array Representation
Since the two dimensional array is a sparse matrix, we can consider the prospect of mapping it onto a
single dimensional array for better space utilization. In this representation, we have to note the following
points:

- The left child of the ith node is placed at the (2i)th position.
- The right child of the ith node is placed at the (2i+1)th position.
- The parent of the ith node is at the (i/2)th position (integer division) in the array.
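A minimal sketch of these index computations in C, assuming (as the text does) that positions are counted from 1; the function names are our own:

    #include <stdio.h>

    /* Index arithmetic for a binary tree stored in a 1-based array:
       left child at 2i, right child at 2i+1, parent at i/2. */
    int left_child(int i)  { return 2 * i; }
    int right_child(int i) { return 2 * i + 1; }
    int parent(int i)      { return i / 2; }     /* integer division */

    int main(void)
    {
        printf("%d %d %d\n", left_child(5), right_child(5), parent(11));
        /* prints 10 11 5 */
        return 0;
    }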


Thus the single dimensional array representation for the binary tree shown in Fig.6.2 turns out to be as
follows.

Position:  1  2  3  4  5  6  10  11  13  26  27   (of the 31 possible positions; all others are empty)
Node:      A  B  C  D  E  F  G   H   I   J   K

If l is the depth of the binary tree, then the number of possible nodes in the binary tree is 2^(l+1) - 1. Hence it is necessary to have 2^(l+1) - 1 locations allocated to represent the binary tree.
If ‘n’ is the number of nodes, then the percentage of utilization is
(n / (2^(l+1) - 1)) × 100

For a complete and full binary tree, there is 100% utilization, and there is maximum wastage if the binary tree is right skewed or left skewed, where only l+1 spaces are utilized out of the 2^(l+1) - 1 spaces possible. That is, the worst possible utilization in the single dimensional array representation is

((l + 1) / (2^(l+1) - 1)) × 100

An important observation to be made here is that the organization of the data in the binary tree decides
the space utilization of the representation used.
The other alternative way of representing the binary tree is based on Linked Allocation (dynamic
allocation method).
6.2.3 Linked Representation


Here, the ‘structure’ for each node becomes regular and in each node structure, there is a pointer to
a left child and another to the right child and space for the data of that node. This is pictorially represented
below:
Lchild Data Rchild

Fig. 6.3 Linked representation of a binary tree


A binary tree represented using the linked representation is shown in Fig. 6.3. In this representation,
ordering of nodes is not mandatory and we require a header to point to the root node of the binary tree. If
there are ‘n’ nodes to be represented, only ‘n’ node-structures will be allocated. This means that there
will be 2n link fields. Out of 2n link fields only (n-1) will be actually used to point to the next nodes and the
remaining are wasted.
Therefore the percentage of utilization is given as:
((n - 1) / 2n) × 100 ≈ 50%
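In C, the node structure sketched above is typically declared along the following lines; the type name node, the field names and the helper make_node are our own illustrative choices:

    #include <stdio.h>
    #include <stdlib.h>

    /* One node of the linked representation: a left-child link, the data,
       and a right-child link. */
    struct node {
        struct node *lchild;
        int          data;
        struct node *rchild;
    };

    /* Allocate a leaf node holding the given data. */
    struct node *make_node(int data)
    {
        struct node *p = malloc(sizeof *p);
        if (p != NULL) {
            p->data = data;
            p->lchild = p->rchild = NULL;
        }
        return p;
    }

    int main(void)
    {
        struct node *root = make_node(42);
        root->lchild = make_node(7);              /* attach a left child */
        printf("%d %d\n", root->data, root->lchild->data);
        return 0;
    }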
6.3 TRAVERSAL OF A BINARY TREE
Traversal is the most important operation done on a binary tree. Traversal is the process of visiting all
the vertices of the tree in a systematic order. Systematic means that every time the tree is traversed it
should yield the same result.
There are three major methods of traversals and the rest are reversals of them. They are:
1. Pre-order traversal
2. In-order traversal
3. Post-order traversal
Pre-Order Traversal
In this traversal, the nodes are visited in the order of root, left child and then right child. i.e.,
1 Visit the root node first.
2 Go to the first left sub-tree.
3 After the completion of the left sub-tree, Go to the right sub-tree.
Here, the leaf nodes represent the stopping criteria. We go to the sibling sub-tree after the traversal of
a sub-tree. The pre-order traversal sequence for the binary tree shown in Fig. 6.3 is: A B D E G H C F I J K
In-Order Traversal
In this traversal, the nodes are visited in the order of left child, root and then right child. i.e., the left
sub-tree is traversed first, then the root is visited and then the right sub-tree is traversed. Here also, the
leaf nodes denote the stopping criteria. The in-order traversal sequence for the above considered example
(Fig. 6.3) is: D B G E H A F J I K C
Post-Order Traversal
In this traversal, the nodes are visited in the order of left child, right child and then root, i.e., the left sub-tree is traversed first, then the right sub-tree, and the root is visited last. The post-order
traversal for the above example (Fig. 6.3) is: D G H E B J K I F C A
6.3.1 Traversal of a Binary Tree Represented in an Adjacency Matrix
Traversing a binary tree given in the adjacency matrix representation firstly requires finding the root node. It then entails traversing the left subtree and then the right subtree in a specific order. In order to remember the nodes already visited, it is necessary to maintain a stack data structure. Thus, the following are the steps involved in traversing the binary tree given in an adjacency matrix representation.
1 Locate the root (the column sum is zero for the root)
2 Display
3 Push it onto the stack
4 Scan the row in search of 'L' for the left child information
5 Pop from the stack
6 Scan the row in search of 'R' for the right child information
7 Check whether the stack IsEmpty()
8 Stop
Sequencing the above stated steps helps us in arriving at preorder, inorder and postorder traversal
sequences.
Preorder Traversal
Fig.6.4 shows flow graph of preorder traversal.


Fig. 6.4 Preorder flow graph


Note that before popping, we need to check whether the stack is empty or not.
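As a rough illustration only (it is not an exact transcription of the flow graph in Fig. 6.4), the following C sketch performs a preorder traversal driven by an adjacency matrix and an explicit stack. Here m[i][j] holds 'L' or 'R' when vertex j is the left or right child of vertex i, and 0 otherwise; the small 4-vertex tree built in main is our own example:

    #include <stdio.h>

    #define N 4                              /* number of vertices in this example */

    static char m[N][N];                     /* adjacency matrix with 'L'/'R' entries */
    static char name[N] = {'A', 'B', 'C', 'D'};

    /* Return the child of v marked with tag ('L' or 'R'), or -1 if absent. */
    static int child(int v, char tag)
    {
        for (int j = 0; j < N; j++)
            if (m[v][j] == tag) return j;
        return -1;
    }

    /* The root is the vertex whose column carries no entry at all. */
    static int find_root(void)
    {
        for (int j = 0; j < N; j++) {
            int has_parent = 0;
            for (int i = 0; i < N; i++)
                if (m[i][j] != 0) has_parent = 1;
            if (!has_parent) return j;
        }
        return -1;
    }

    /* Iterative preorder: visit a node, then push its right child before its
       left child so that the left subtree is traversed first. */
    static void preorder(void)
    {
        int stack[N], top = -1;
        int root = find_root();
        if (root < 0) return;
        stack[++top] = root;
        while (top >= 0) {
            int v = stack[top--];
            printf("%c ", name[v]);
            int r = child(v, 'R'), l = child(v, 'L');
            if (r != -1) stack[++top] = r;
            if (l != -1) stack[++top] = l;
        }
        printf("\n");
    }

    int main(void)
    {
        m[0][1] = 'L'; m[0][2] = 'R';        /* A has left child B, right child C */
        m[1][3] = 'L';                       /* B has left child D */
        preorder();                          /* prints A B D C */
        return 0;
    }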
Inorder Traversal
The in-order traversal will also have the same steps explained above. Only the flow graph sequence
changes, and is given in Fig. 6.5.
Fig. 6.5 Inorder flow graph


Designing a flow graph for post order traversal is left as an exercise to the students.
6.3.2 Binary Tree Traversal from One Dimensional Array Representation
Preorder Traversal
Algorithm: Preorder Traversal
Input: A[], one dimensional array representing the binary tree
i, the root address //initially i=1
Output: Preorder sequence
Method:
If (A[i] != 0)
Display(A[i])
Preorder Traversal (A, 2i)
Preorder Traversal (A, 2i +1)
If end
Algorithm ends
Inorder Traversal
Algorithm: Inorder Traversal
Input: A[], one dimensional array representing the binary tree
i, the root address //initially i=1
Output: Inorder sequence
Method:
If (A[i] != 0)
Inorder Traversal (A, 2i)
Display(A[i])
Inorder Traversal (A, 2i +1)
If end
Algorithm ends
Postorder Traversal
Algorithm: Postorder Traversal
Input: A[], one dimensional array representing the binary tree
i, the root address //initially i=1
Output: Postorder sequence
Method:
If (A[i] != 0)
Postorder Traversal (A, 2i)
Postorder Traversal (A, 2i +1)
Display(A[i])
If end
Algorithm ends
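A compact C sketch of these three traversals over the one dimensional array representation is given below; the array name a, the SIZE constant and the bounds check i < SIZE (which the pseudocode leaves implicit) are our own additions. Position 1 holds the root and a 0 entry marks an empty slot, as the algorithms above assume:

    #include <stdio.h>

    #define SIZE 16
    static char a[SIZE];                  /* a[1] is the root; 0 means empty */

    void preorder(int i)
    {
        if (i < SIZE && a[i] != 0) {
            printf("%c ", a[i]);
            preorder(2 * i);
            preorder(2 * i + 1);
        }
    }

    void inorder(int i)
    {
        if (i < SIZE && a[i] != 0) {
            inorder(2 * i);
            printf("%c ", a[i]);
            inorder(2 * i + 1);
        }
    }

    void postorder(int i)
    {
        if (i < SIZE && a[i] != 0) {
            postorder(2 * i);
            postorder(2 * i + 1);
            printf("%c ", a[i]);
        }
    }

    int main(void)
    {
        a[1] = 'A'; a[2] = 'B'; a[3] = 'C'; a[4] = 'D'; a[5] = 'E';
        preorder(1);  printf("\n");       /* A B D E C */
        inorder(1);   printf("\n");       /* D B E A C */
        postorder(1); printf("\n");       /* D E B C A */
        return 0;
    }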
We shall now look into the tree traversals in linked representation
6.3.3 Binary Tree in Linked Representation


As we already studied, every node has a structure which has links to the left and right children. The
algorithms for traversing the binary tree in linked representation are given below.
Algorithm: Pre-order Traversal
Input: bt, address of the root node
Output: Preorder sequence
Method:
if(bt != NULL)
{
Display([bt].data)
Pre-order Traversal([bt].Lchild)
Pre-order Traversal([bt].Rchild)
}
Algorithm ends.

Algorithm: In-order Traversal


Input: bt, address of the root node
Output: Inorder sequence
Method:
if(bt != NULL)
{
In-order Traversal([bt].Lchild)
Display([bt].data)
In-order Traversal([bt].Rchild)
}
Algorithm ends.
Algorithm: Post-order Traversal


Input: bt, address of the root node
Output: Postorder sequence
Method:
if(bt != NULL)
{
Post-order Traversal([bt].Lchild)
Post-order Traversal([bt].Rchild)
Display([bt].data)
}
Algorithm ends.
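The linked-representation traversals carry over to C almost directly. A minimal, self-contained sketch is given below; the node structure, the helper make and the sample tree A(B(D, E), C) are our own illustrative choices:

    #include <stdio.h>
    #include <stdlib.h>

    struct node {
        struct node *lchild;
        char         data;
        struct node *rchild;
    };

    void preorder(const struct node *bt)
    {
        if (bt != NULL) {
            printf("%c ", bt->data);
            preorder(bt->lchild);
            preorder(bt->rchild);
        }
    }

    void inorder(const struct node *bt)
    {
        if (bt != NULL) {
            inorder(bt->lchild);
            printf("%c ", bt->data);
            inorder(bt->rchild);
        }
    }

    void postorder(const struct node *bt)
    {
        if (bt != NULL) {
            postorder(bt->lchild);
            postorder(bt->rchild);
            printf("%c ", bt->data);
        }
    }

    /* Build a node with the given data and children. */
    static struct node *make(char d, struct node *l, struct node *r)
    {
        struct node *p = malloc(sizeof *p);
        if (p == NULL) exit(1);              /* out of memory */
        p->data = d; p->lchild = l; p->rchild = r;
        return p;
    }

    int main(void)
    {
        struct node *root = make('A',
                                 make('B', make('D', NULL, NULL),
                                           make('E', NULL, NULL)),
                                 make('C', NULL, NULL));
        preorder(root);  printf("\n");       /* A B D E C */
        inorder(root);   printf("\n");       /* D B E A C */
        postorder(root); printf("\n");       /* D E B C A */
        return 0;
    }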
SUMMARY
In this chapter, binary trees are introduced. Trees form the core of non-linear data structures, and the binary tree helps in storing data more efficiently, although access is not as simple as with arrays. The various ways of representing a binary tree (the single dimensional array, the two dimensional array and the linked representation) are discussed. The algorithms for traversing a binary tree in these representations are also presented.
EXERCISE
1. What is a binary tree?
2. Differentiate complete and full binary trees.
3. What is the maximum number of nodes in a binary tree of level 7, 8 and 9?
4. What is the wastage of memory for a binary tree with 16 nodes represented in a 1D array, a 2D array and a linked representation?
5. For at least 5 binary trees of your choice, of different depths greater than or equal to 6, obtain the preorder, postorder and inorder sequences.
