
In the modern world, data and the information derived from it are essential, and various implementations are being made to store it in different ways. Data is simply a collection of facts and figures, or you can say data is a value or a set of values in a particular format. A data item refers to a single set of values. Data items are further categorized into sub-items, which are groups of items that are not treated as plain elementary items. For example, the name of a student may be divided into three sub-items: first name, middle name and last name. The ID assigned to a student, however, would normally be considered a single item.

In the example mentioned above, ID, Age, Gender, First, Middle, Last, Street, Area, etc. are elementary data items, whereas (Name, Address) are group data items.

What is Data Structure?


In computer terms, a data structure is a specific way to store and organize data in a computer's memory so that the data can be used efficiently later. Data may be arranged in many different ways; the logical or mathematical model chosen for a particular organization of data is termed a data structure. The suitability of a specific data model depends on two factors:

 Firstly, it must be rich enough in structure to reflect the actual relationships of the data with real-world objects.
 Secondly, the formation should be simple enough that anyone can efficiently process the data whenever necessary.
Categories of Data Structure
Data structures can be subdivided into two major types:

 Linear Data Structure


 Non-linear Data Structure

Linear Data Structure


A data structure is said to be linear if its elements form a sequence, i.e., a specific order. There are two techniques for representing such a linear structure within memory.

 The first way is to provide the linear relationship among all the elements through sequential (contiguous) memory locations. Such linear structures are termed arrays.
 The second technique is to provide the linear relationship among all the elements through pointers or links. Such linear structures are termed linked lists.

The common examples of the linear data structure are:

 Arrays
 Queues
 Stacks
 Linked lists

Nonlinear Data Structure


This structure is mostly used for representing data that contains a hierarchical relationship among its elements.

Examples of non-linear data structures are listed below:

 Graphs
 Trees (for example, family trees)
 Tables of contents

Tree: In this case, the data often contains a hierarchical relationship among its elements. The data structure that reflects this relationship is termed a rooted tree graph, or simply a tree.
Graph: In this case, the data sometimes holds relationships between pairs of elements which do not necessarily follow a hierarchical structure. Such a data structure is termed a graph.

Data Structures Environment Setup


To work with data structures and implement their various concepts in the upcoming chapters, you need a compiler to run the concepts in the form of programs. Here, all the data structure programs will be shown using C++, so you need to have a local compiler installed on your PC or laptop.

Local Compiler Setup

If you want to install a C++ compiler on your PC to practice the data structure concepts, you have many choices. As a first choice, you can use a text editor such as vi/vim, gedit or Emacs for Linux users. For Windows, the text editors would be Notepad or Notepad++. The names and versions of text editors vary across operating systems.

The files you create with your text editor are source files and contain the program's source code. Since you will be using C++ here, the source files will have the extension ".cpp".

As a second choice, you can install a C++ compiler. There are various C++ compilers available in the market, such as:

For Windows
 Turbo C++
 Borland C++
 Dev C++
 Intel C++
 Visual C++
For Linux
 Open64
 GNU Compiler Collection
 Intel C++ Compiler PE

For Mac
 Apple C++
 Sun Studio
 Cygwin (GNU C++)
 Digital Mars C++

The source code that you write in the editor and save as a source file is in human-readable form; this will be your data structure program. That code then needs to be "compiled" into machine language so that the CPU can actually execute the program as written.

Any one of the C++ compilers listed above will be required for compiling your source code into the final executable program and creating the ".exe" file. Basic knowledge of a programming language is required before approaching the concepts of data structures.

The most frequently used and freely available compiler is the GNU C/C++ compiler; alternatively, you can use compilers from Intel or Oracle (Solaris Studio) if you prefer and your system supports them.

Steps to install Turbo C++ Compiler

Step 1: Download the Turbo C++ compiler from the link given below: https://www.turboc.codeplex.com/downloads/get/1489512

Step 2: If you already have an existing "Turbo C++" version installed on your computer, then first of all uninstall that existing compiler.

Step 3: Extract the compiler you have just downloaded, i.e., the "Turbo C++ 3.2.zip" file.
Step 4: Run the "setup.exe" file.

Step 5: Select the location where you want to keep the compiler.

Steps to Install Compiler on UNIX/Linux


Step 1: Check whether GCC is already installed or not by using the command given below:

$ gcc -v

Step 2: In case you find that GCC is not installed on your Linux system, you need to install it yourself using the instructions available at http://www.gcc.gnu.org/install/

Fundamental Elements of Data Structure


The present-generation digital computer was designed as a device to ease and speed up complex and time-consuming computations. In the majority of applications or programs, its ability to store and access huge amounts of information plays the dominant part and is considered its primary feature, while its ability to compute, that is, to calculate or to carry out arithmetic, has in many cases become almost secondary.

In most cases, the vast amount of information to be processed in some sense represents a part of reality. The information that is available to the computer consists of a specific set of data about the real problem, selected because it is considered applicable to the problem at hand. The data represents an abstraction of reality in the sense that certain properties and characteristics of the real objects are ignored because they are peripheral and irrelevant to the particular problem. An abstraction is thereby also a simplification of facts.

In this chapter, you will learn about the fundamental elements of the data structure.
What are the Characteristics of Data Types in a Data Structure?
 The data type determines the set of values to which a constant belongs, and which may be assumed by a variable or an expression within a program, or which may be produced by an operator or a function.
 The type of a value denoted by a constant, a variable or an expression may be derived from its form or its declaration, without the need to execute the computational process.
 Each operator and function expects arguments of a fixed type, which is represented by assigning a data type to those specific sets of arguments, and yields a result of a fixed type. If an operator accepts arguments of several types (for example, '+' is used for the addition of both integers and real numbers), then the type of the result can be determined from specific language rules.
Types of Data Structure

In this case, you will be studying the concepts of data structures using C++. Data types are mainly categorized into the following major types:

 Built-in data type: These data types are predefined and have a fixed set of rules for declaration. In other words, when a data type has built-in support in a particular programming language, it is called a built-in data type. Examples of such data types are:
o Integer type
o Boolean type
o Character type
o Floating type
 Derived data type: These data types can be implemented independently within a language. They are built by combining primary/built-in data types and then associating operations on them. Examples of such data types are:
o Array
o Stack
o Queue
o List
You might be familiar with these basic data types if you have studied C or C++. For dealing with the various concepts of data structures, you can use any programming language, but it is recommended to use either C or C++ for better implementation practice.
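As a quick illustration, the snippet below (a minimal sketch; the variable names are just examples) declares a few built-in types and one derived type, an array, in C++:

#include <iostream>
using namespace std;

int main()
{
    // Built-in data types
    int rollNo = 27;        // integer type
    char grade = 'A';       // character type
    float marks = 88.5f;    // floating type
    bool passed = true;     // boolean type

    // Derived data type: an array built from the built-in type int
    int scores[3] = { 70, 82, 91 };

    cout << rollNo << " " << grade << " " << marks << " "
         << passed << " " << scores[0] << endl;
    return 0;
}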
Basic Operations of Data Structures
Some specific operations process all the data in a data structure. The specific data structure that is chosen largely depends on how frequently each of these operations needs to be carried out on the data structure. The names of such operations are listed below:

 Traversing
 Searching
 Insertion
 Deletion
 Sorting
 Merging
Greedy Algorithm
All data structures are combined, and these concepts are used to form specific algorithms. All algorithms are designed with a motive to achieve the best solution for a particular problem. In the greedy algorithm technique, choices are made from the given result domain. Being greedy, the candidate that looks like it will supply the optimum solution is chosen next.

The greedy method is used to find a locally optimal result, which may finally lead to a globally optimized answer. Usually, however, greedy algorithms do not give globally optimized solutions.

A game like chess can be won only by thinking ahead: a player who focuses entirely on immediate benefit is easy to defeat. But in some other games, such as Scrabble, it is possible to do quite well by simply making whichever move seems best at the moment and not worrying too much about future consequences or cost.

This kind of myopic behavior is easy and convenient, which makes it an attractive algorithmic strategy. Greedy algorithms build up a solution piece by piece, always choosing the next piece that offers the most obvious and immediate benefit. Although this kind of approach can be disastrous for some computational tasks, there are many for which it works very well. The first example that you are going to study is the minimum spanning tree.
Minimum Spanning Trees
Suppose you are asked to create a networked collection of computers by connecting selected pairs of them. This translates into a graph problem wherein the nodes are computers, undirected edges are potential links, and the objective is to pick enough of these edges so that the nodes are connected. And that is not all; each link has a maintenance cost, reflected in that edge's weight. What is the cheapest possible network?

So what you can do is draw a graph having six vertices/nodes named A, B, C, D, E, and F and assign each edge a value (its weight). So it will be an undirected weighted graph.

One immediate observation is that the optimal set of edges cannot contain a cycle, because removing an edge from such a cycle would lessen the cost without compromising connectivity:

Removing a cycle edge will not disconnect a graph. Hence, the answer must be connected and acyclic: undirected graphs of this type are termed trees. The particular tree you want is the one with the least total weight, which is called the minimum spanning tree. Here is its proper symbolic definition:
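In the usual textbook formulation:

Input: an undirected graph G = (V, E) with edge weights w_e
Output: a tree T = (V, E'), with E' a subset of E, that minimizes weight(T) = the sum of w_e over all edges e in E'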
Most of the networking algorithms used today utilize the concept of a greedy approach.
Here is a list of few of such problems:

1. Travelling Salesman Problem


2. Prim's Minimal Spanning Tree Algorithm
3. Kruskal's Minimal Spanning Tree Algorithm
4. Dijkstra's Shortest Path Algorithm
5. Graph - Map Coloring
6. Graph - Vertex Cover
7. Knapsack Problem
8. Job Scheduling Problem

Furthermore, there are lots of related problems that use the concept of the greedy
algorithm for finding the most favorable solution.
Data Structure and Arrays
You have seen so far that data structures use algorithms and need storage for storing values. For storing these values, programmers can use the fundamental data types such as char, int, float and double. As you know, these data types are useful for declaring variables, constants or a return type for a function; they are limited by the fact that they can store only one value of a specific form at a time. For many applications, circumstances arise where programmers need a single name to store multiple values. For processing such large amounts of data, programmers need more powerful data types that facilitate efficient storage, access and manipulation of data items. Using C++, you can implement the concept of arrays.

What are arrays?

The array is a fixed-size, sequenced collection of variables belonging to the same data type. The array occupies adjacent memory locations to store values. Since the array provides a convenient structure for representing data, it falls under the category of data structures in C and C++. The syntax for declaring an array is:

data_type array_name [array_size];

Following are the essential terminologies used for understanding the concepts of
Arrays:

Element: Every item stored in an array is termed as an element

Index: Each memory location of an element in an array is denoted by a numerical index, which is used for identifying the element.
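For instance, a minimal sketch in C++ (the array name marks is just an example) declaring an array of five integers and reading one element by its index could look like this:

#include <iostream>
using namespace std;

int main()
{
    // data_type array_name [array_size];
    int marks[5] = { 56, 78, 91, 64, 89 };  // five elements, indices 0..4

    // Index 2 identifies the third element of the array
    cout << "Element at index 2 = " << marks[2] << endl;
    return 0;
}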
Why Do You Need Arrays for Building a Specific Data Structure?
When a program works with many variables which hold comparable forms of data, organizational and managerial difficulties quickly arise. If you are not using arrays, then the number of variables used will keep increasing. Using an array, the number of variables is reduced, i.e., you can use a single name for multiple values; you just need to deal with their index values (starting from 0 up to n - 1).

So if the total runs of each player were stored in separate variables, using arrays you can bring them all into one array having a single name, like: plrscore[11];

Collecting Input Data in Arrays

Arrays are particularly helpful for collecting input data that arrives in random order. An excellent example is vote counting: you can write a program which tallies the votes of four candidates in an election. (For ease, call the candidates Cand 0, Cand 1, Cand 2, and Cand 3.) Votes arrive one at a time, where a vote for candidate i is denoted by the number i. So, according to this example, two votes for Cand 3 followed by one vote for Cand 0 would appear as:

3 3 0

and so on...

i.e., the structure will be:

int votes[4];
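A minimal vote-tallying sketch along these lines (the exact input handling here is an assumption, not taken from the text) might look like this:

#include <iostream>
using namespace std;

int main()
{
    int votes[4] = { 0, 0, 0, 0 };  // one counter per candidate, Cand 0..Cand 3
    int v;

    // Each incoming number i is one vote for candidate i
    while (cin >> v) {
        if (v >= 0 && v < 4)
            votes[v]++;             // tally the vote
    }

    for (int i = 0; i < 4; i++)
        cout << "Cand " << i << ": " << votes[i] << " votes" << endl;
    return 0;
}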

Basic Operations
There are some specific operations that can be performed on, or are supported by, arrays. These are:

 Traversing: It prints all the array elements one after another.
 Inserting: It adds an element at a given index.
 Deleting: It deletes the element at a given index.
 Searching: It searches for an element(s) by a given index or by value.
 Updating: It updates the element at a given index.
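As an illustration of two of these operations, the sketch below (names and sizes are only examples) traverses an array and then inserts a value at a given index by shifting the later elements one place to the right:

#include <iostream>
using namespace std;

int main()
{
    int a[10] = { 2, 4, 6, 8, 10 };
    int n = 5;                       // number of elements currently stored

    // Traversing: print all the elements one after another
    for (int i = 0; i < n; i++)
        cout << a[i] << " ";
    cout << endl;

    // Inserting value 5 at index 2: shift elements a[2..n-1] one place right
    int index = 2, value = 5;
    for (int i = n; i > index; i--)
        a[i] = a[i - 1];
    a[index] = value;
    n++;

    for (int i = 0; i < n; i++)
        cout << a[i] << " ";
    cout << endl;
    return 0;
}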

Linked List
The linked list, or one-way list, is a linear collection of data elements, termed nodes, where the linear order is maintained using pointers.

Each node is divided into two parts:

 The first part holds the information (value) of the node
 The second part contains the address of the next node (the link / next-pointer field) in the list

From a high-level standpoint, a linked list can be viewed as a series of nodes where each node has at least a single pointer to the next node, and, in the case of the last node, a null pointer is used to represent that there are no further nodes in the linked list. In this data structure, you will be implementing linked lists which always maintain head and tail pointers, so that inserting a value at either the head or the tail of the list is a constant-time operation. Randomly inserting a value at an arbitrary position is not covered by this and remains a linear-time operation. As such, linked lists in this data structure have the following characteristics:

 Insertion is O(1)
 Deletion is O(n)
 Searching is O(n)

Linked lists have a few key properties that often make them very efficient to use:

 the list is dynamic and hence can be resized based on the requirement
 insertion at the head or tail is O(1)
Singly Linked List

Singly linked lists are one of the most primitive data structures you will learn about in this tutorial. Each node of a singly linked list consists of a value and a reference to the next node (if any) in the list.
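Such a node can be sketched in C++ as follows (the struct and field names are illustrative, not prescribed by the text):

struct Node {
    int value;     // the information part of the node
    Node* next;    // the link / next-pointer field; NULL for the last node
};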
Insertion of Values in Linked List
In general, when people talk about insertion with respect to linked lists of any form, they implicitly refer to adding a node at the tail of the list.

Adding a node to a singly linked list has only two cases:

 head = ∅, in which case the node we are adding is now both the head and tail of the list; or
 we simply need to append our node onto the end of the list, updating the tail reference appropriately

Algorithm for Inserting Values into a Linked List

 algorithm Add(value)
 Pre: value is the value to be added to the list
 Post: value has been placed at the tail of the list
 n <- node(value)
 if head = ∅
 head <- n
 tail <- n
 else
 tail.Next <- n
 tail <- n
 end if
 end Add
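A possible C++ rendering of this Add algorithm, assuming the illustrative Node struct sketched earlier and head/tail pointers kept by the caller, is shown below:

void Add(Node*& head, Node*& tail, int value)
{
    Node* n = new Node;   // n <- node(value)
    n->value = value;
    n->next = NULL;

    if (head == NULL) {   // empty list: the new node is both head and tail
        head = n;
        tail = n;
    }
    else {                // append after the current tail
        tail->next = n;
        tail = n;
    }
}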

Searching in Linked Lists


Searching a linked list is straightforward: we simply traverse the list, checking the value we are looking for against the value of each node in the linked list.

 algorithm Contains(head, value)
 Pre: head is the head node in the list
 value is the value to search for
 Post: the item is either in the linked list, true; otherwise false
 n <- head
 while n ≠ ∅ and n.Value ≠ value
 n <- n.Next
 end while
 if n = ∅
 return false
 end if
 return true
 end Contains
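The same traversal can be written in C++ like this (again assuming the illustrative Node struct):

bool Contains(Node* head, int value)
{
    Node* n = head;
    while (n != NULL && n->value != value)
        n = n->next;      // move to the next node
    return n != NULL;     // NULL means the value was not found
}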

Deletion in Linked Lists


Deleting a node from a linked list is straightforward, but there are several cases that you need to account for:

 the list is empty; or


 the node to remove is the only node in the linked list; or
 we are removing the head node; or
 we are removing the tail node; or
 the node to remove is somewhere in between the head and tail; or
 the item to remove doesn't exist in the linked list

Algorithm for Deletion

 algorithm Remove(head, value)
 Pre: head is the head node in the list
 value is the value to remove from the list
 Post: value is removed from the list, true; otherwise false
 if head = ∅
 return false
 end if
 n <- head
 if n.Value = value
 if head = tail
 head <- ∅
 tail <- ∅
 else
 head <- head.Next
 end if
 return true
 end if
 while n.Next ≠ ∅ and n.Next.Value ≠ value
 n <- n.Next
 end while
 if n.Next ≠ ∅
 if n.Next = tail
 tail <- n
 end if
 // this is only case 5 if the conditional on line 25 was false
 n.Next <- n.Next.Next
 return true
 end if
 return false
 end Remove
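One way to express this Remove algorithm in C++ (still using the illustrative Node struct and head/tail pointers) is sketched below:

bool Remove(Node*& head, Node*& tail, int value)
{
    if (head == NULL)                 // case: the list is empty
        return false;

    if (head->value == value) {       // case: removing the head node
        Node* old = head;
        if (head == tail) {           // it is also the only node
            head = NULL;
            tail = NULL;
        }
        else {
            head = head->next;
        }
        delete old;
        return true;
    }

    Node* n = head;
    while (n->next != NULL && n->next->value != value)
        n = n->next;

    if (n->next != NULL) {            // value found at n->next
        if (n->next == tail)          // case: removing the tail node
            tail = n;
        Node* old = n->next;
        n->next = n->next->next;      // unlink the node
        delete old;
        return true;
    }
    return false;                     // value not present in the list
}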

Polynomials Using Linked List and Arrays


Polynomials and sparse matrices are two important applications of arrays and linked lists. A polynomial is composed of different terms, each of which holds a coefficient and an exponent. This tutorial chapter covers the representation of polynomials using linked lists and arrays.

What is a Polynomial?

A polynomial p(x) is an expression in a variable x of the form (ax^n + bx^(n-1) + .... + jx + k), where a, b, c, ...., k are real numbers and 'n' is a non-negative integer, which is called the degree of the polynomial.

An essential characteristic of the polynomial is that each term in the polynomial expression consists of two parts:

 one is the coefficient
 the other is the exponent

Example:
10x^2 + 26x; here 10 and 26 are the coefficients and 2 and 1 are their exponents.

Points to keep in mind while working with polynomials:

 The sign of each coefficient and exponent is stored within the coefficient and the exponent itself
 Multiple terms having an equal exponent are possible
 The storage allocation for each term in the polynomial must be done in ascending or descending order of their exponent

Representation of Polynomial
A polynomial can be represented in various ways. These are:

 By the use of arrays
 By the use of linked lists

Representation of Polynomials Using Arrays


There may arise situations where you need to evaluate several polynomial expressions and perform basic arithmetic operations like addition and subtraction on them. For this, you need a way to represent those polynomials. A simple way is to represent a polynomial of degree 'n' by storing the coefficients of its n+1 terms. So every element will consist of two values:

 Coefficient and
 Exponent

Polynomial addition can be implemented with the following C++ code, where each term (coefficient and power) is kept in its own node:

#include <iostream>
#include <iomanip>

using namespace std;

struct poly {
    int coeff;
    int pow_val;
    poly* next;
};

class add {
    poly *poly1, *poly2, *poly3;

public:
    add() { poly1 = poly2 = poly3 = NULL; }
    void addpoly();
    void display();
};

void add::addpoly()
{
    int i, p;
    poly *newl = NULL, *end = NULL;

    cout << "Enter highest power for x\n";
    cin >> p;

    // Read first poly
    cout << "\nFirst Polynomial\n";
    for (i = p; i >= 0; i--) {
        newl = new poly;
        newl->pow_val = i;
        cout << "Enter Co-efficient for degree " << i << ":: ";
        cin >> newl->coeff;
        newl->next = NULL;
        if (poly1 == NULL)
            poly1 = newl;
        else
            end->next = newl;
        end = newl;
    }

    // Read second poly
    cout << "\n\nSecond Polynomial\n";
    end = NULL;
    for (i = p; i >= 0; i--) {
        newl = new poly;
        newl->pow_val = i;
        cout << "Enter Co-efficient for degree " << i << ":: ";
        cin >> newl->coeff;
        newl->next = NULL;
        if (poly2 == NULL)
            poly2 = newl;
        else
            end->next = newl;
        end = newl;
    }

    // Addition logic: both lists hold the same degrees p..0 in the same order
    poly *p1 = poly1, *p2 = poly2;
    end = NULL;
    while (p1 != NULL && p2 != NULL) {
        if (p1->pow_val == p2->pow_val) {
            newl = new poly;
            newl->pow_val = p1->pow_val;
            newl->coeff = p1->coeff + p2->coeff;
            newl->next = NULL;
            if (poly3 == NULL)
                poly3 = newl;
            else
                end->next = newl;
            end = newl;
            p1 = p1->next;
            p2 = p2->next;
        }
    }
}

void add::display()
{
    poly* t = poly3;
    cout << "\n\nAnswer after addition is : ";
    while (t != NULL) {
        cout.setf(ios::showpos);
        cout << t->coeff;
        cout.unsetf(ios::showpos);
        cout << "X" << t->pow_val;
        t = t->next;
    }
    cout << endl;
}

int main()
{
    add obj;
    obj.addpoly();
    obj.display();
    return 0;
}
Output:

Representation of Polynomial Using Linked Lists


A polynomial can be thought of as an ordered list of non-zero terms. Each non-zero term is a two-tuple which holds two pieces of information:

 The exponent part


 The coefficient part

Program for the addition of Polynomials


#include <iostream>

using namespace std;

class polyll {
private:
    struct polynode {
        float coeff;
        int exp;
        polynode* link;
    } * p;

public:
    polyll();
    void poly_append(float c, int e);
    void display_poly();
    void poly_add(polyll& l1, polyll& l2);
    ~polyll();
};

polyll::polyll()
{
    p = NULL;
}

void polyll::poly_append(float c, int e)
{
    polynode* temp = p;
    if (temp == NULL) {
        temp = new polynode;
        p = temp;
    }
    else {
        while (temp->link != NULL)
            temp = temp->link;
        temp->link = new polynode;
        temp = temp->link;
    }
    temp->coeff = c;
    temp->exp = e;
    temp->link = NULL;
}

void polyll::display_poly()
{
    polynode* temp = p;
    int f = 0;
    cout << endl;
    while (temp != NULL) {
        if (f != 0) {
            if (temp->coeff > 0)
                cout << " + ";
            else
                cout << " ";
        }
        if (temp->exp != 0)
            cout << temp->coeff << "x^" << temp->exp;
        else
            cout << temp->coeff;
        temp = temp->link;
        f = 1;
    }
}

void polyll::poly_add(polyll& l1, polyll& l2)
{
    polynode* z = NULL;

    if (l1.p == NULL && l2.p == NULL)
        return;

    polynode *temp1, *temp2;
    temp1 = l1.p;
    temp2 = l2.p;

    // Walk both term lists (kept in descending order of exponent)
    while (temp1 != NULL && temp2 != NULL) {
        if (p == NULL) {
            p = new polynode;
            z = p;
        }
        else {
            z->link = new polynode;
            z = z->link;
        }
        if (temp1->exp < temp2->exp) {
            z->coeff = temp2->coeff;
            z->exp = temp2->exp;
            temp2 = temp2->link;
        }
        else if (temp1->exp > temp2->exp) {
            z->coeff = temp1->coeff;
            z->exp = temp1->exp;
            temp1 = temp1->link;
        }
        else {
            z->coeff = temp1->coeff + temp2->coeff;
            z->exp = temp1->exp;
            temp1 = temp1->link;
            temp2 = temp2->link;
        }
    }

    // Copy over whatever terms remain in either polynomial
    while (temp1 != NULL) {
        if (p == NULL) {
            p = new polynode;
            z = p;
        }
        else {
            z->link = new polynode;
            z = z->link;
        }
        z->coeff = temp1->coeff;
        z->exp = temp1->exp;
        temp1 = temp1->link;
    }
    while (temp2 != NULL) {
        if (p == NULL) {
            p = new polynode;
            z = p;
        }
        else {
            z->link = new polynode;
            z = z->link;
        }
        z->coeff = temp2->coeff;
        z->exp = temp2->exp;
        temp2 = temp2->link;
    }
    z->link = NULL;
}

polyll::~polyll()
{
    polynode* q;
    while (p != NULL) {
        q = p->link;
        delete p;
        p = q;
    }
}

int main()
{
    polyll p1;
    p1.poly_append(1.4, 5);
    p1.poly_append(1.5, 4);
    p1.poly_append(1.7, 2);
    p1.poly_append(1.8, 1);
    p1.poly_append(1.9, 0);

    cout << "\nFirst polynomial:";
    p1.display_poly();

    polyll p2;
    p2.poly_append(1.5, 6);
    p2.poly_append(2.5, 5);
    p2.poly_append(-3.5, 4);
    p2.poly_append(4.5, 3);
    p2.poly_append(6.5, 1);

    cout << "\nSecond polynomial:";
    p2.display_poly();

    polyll p3;
    p3.poly_add(p1, p2);

    cout << "\nResultant polynomial: ";
    p3.display_poly();

    return 0;
}

Output:

Principles of Program Analysis


This chapter starts with the basic information and fundamental knowledge required to solve various problems. Algorithm design is one of the primary steps in solving problems. Algorithms are sets of steps or instructions designed to solve a specific problem.

What is the Software Development Life Cycle?

Developing good software is a tedious process which keeps going, i.e., the software stays under development for a long time before it takes its final shape. This process is often termed the Software Development Life Cycle (SDLC). Here, the output of one stage becomes the input of the next stage.

The various steps involved in the Software Development Life Cycle are as follows:

 Analyze the problem with precision


 Create a prototype and experiment with it until all requirements are finalized
 Design an algorithm for the task using the tools of the data structure
 Verify the algorithmic steps
 Analyze the algorithm for checking its requirements
 Code the algorithm to any suitable programming language
 Test and evaluate the code
 Refine and repeat the preceding steps until the software is complete
 Optimize the code to improve performance
 Maintain the application that you have designed so that it meets the upcoming needs of clients and users
Program Design

Program design is an important stage of software development. This phase takes the help of algorithms and different concepts of data structures to solve the proposed problem(s).

Different Approaches to Designing Algorithms

Complex code can be subdivided into smaller units called modules. The advantage of modularity is that it allows the principle of separation of concerns to be applied in two phases:

 while dealing with the details of each module in isolation
 while dealing with the overall characteristics of all modules and their relationships

Modularity enhances design clarity, which in turn eases implementation and readability. Debugging, testing, documenting and maintenance of the product also become easier due to modularity.

What Do You Mean by Complexity of Algorithm?


Complexity is an essential concept in data structures. When complexity is discussed in relation to computers, it is called computational complexity. It can be defined as the characterization of the time and space requirements for solving a problem using some specific algorithm. These requirements are expressed in terms of one single parameter which represents the size of the problem.

Example:

Let 'n' denote the size of the problem. Then the time required by a specific algorithm for solving the problem can be expressed as:

f : R -> R, where f is the function and f(n) is the maximum amount of time needed.
Thus you can conclude that analysis of any program requires two vital concepts:
 Time Complexity
 Space Complexity

Time Complexity of a program can be defined as the amount of time the computer takes
to run a program to its completion. On the other hand, the space complexity of an
algorithm can be defined as the memory that it needs to run that algorithm to its
completion.

Big-O Notation and Algorithm Analysis
In this chapter, you will learn about the different algorithmic approaches that are usually followed while programming or designing an algorithm. Then you will get the basic idea of what Big-O notation is and how it is used. Finally, there will be a brief list of the different types of algorithmic analysis that are performed using these types of complexity.

Various Approaches to Algorithmic Design

Any system can have components which have components of their own. Indeed, a system is a hierarchy of components, and the highest-level component corresponds to the total system. There are usually two approaches to designing such a hierarchy:

1. Top-down approach
2. Bottom-up approach

Now let's discuss both of them:

Top-down approach
The top-down approach starts by identifying the major components of the system or program, decomposing them into their lower-level components, and iterating until the desired level of modular complexity is achieved. The top-down method takes the form of stepwise refinement of instructions. Thus the top-down approach starts from an abstract design, and each step is refined into a more concrete level until the final refined stage is reached.
Bottom-up approach

The bottom-up design starts with designing the most basic or primitive components and proceeds to higher-level components. It works with layers of abstraction. Starting from the bottom, the operations that provide a layer of abstraction are implemented. These operations are then used to implement more powerful operations and still higher layers of abstraction, until the final stage is reached.

What is Big-O Notation?


When solving a computer-related problem, there will frequently be more than just one solution. These individual solutions will often be in the shape of different algorithms or instructions having different logic, and you will normally want to compare the algorithms to see which one is more efficient. This is where Big-O analysis helps: it gives program developers a basis for computing and measuring the efficiency of a specific algorithm.

If f(n) represents the computing time of some algorithm and g(n) represents a known standard function like n, n^2 or n log n, then we write:

f(n) is O(g(n))

which means that, up to a constant factor, f(n) grows no faster than g(n), the function of the biggest order.

So what does Big-O do? It helps to determine the time as well as the space complexity of an algorithm. Using Big-O notation, the time taken by the algorithm and the space required to run the algorithm can be estimated. Some common computing times of algorithms, in order of performance, are as follows:

 O(1)
 O(log n)
 O(n)
 O(n log n)
 O(n^2)
 O(n^3)
 O(2^n)

Thus algorithms can be rated, according to their computational complexity, in the above order of performance.
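As a small illustration (the function names here are made up for this example), the first function below touches each element once and is O(n), while the second compares every pair of elements and is O(n^2):

#include <vector>
using namespace std;

// O(n): one pass over the data
long long sumAll(const vector<int>& a)
{
    long long s = 0;
    for (size_t i = 0; i < a.size(); i++)
        s += a[i];
    return s;
}

// O(n^2): a nested pass, counting pairs with equal values
long long countEqualPairs(const vector<int>& a)
{
    long long c = 0;
    for (size_t i = 0; i < a.size(); i++)
        for (size_t j = i + 1; j < a.size(); j++)
            if (a[i] == a[j])
                c++;
    return c;
}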

Algorithm Analysis
In the last chapter, you studied time and space complexity. This complexity is used to analyze algorithms in data structures. There are various ways of solving a problem, and there exist different algorithms which can be designed to solve it.

Consequently, the analysis of algorithms focuses on the computation of space and time complexity. Here are the various types of time complexity which can be analyzed for an algorithm:

 Best case time complexity: The best case time complexity of an algorithm is a
measure of the minimum time that the algorithm will require for an input of size 'n.' The
running time of many algorithms varies not only for the inputs of different sizes but also
for the different inputs of the same size.
 Worst case time complexity: The worst case time complexity of an algorithm is a measure of the maximum time that the algorithm will require for an input of size 'n.' Therefore, if various sorting algorithms are considered and, say, 'n' input data items are supplied in reverse order to a sorting algorithm, then the algorithm will require n^2 operations to perform the sort, which corresponds to the worst case time complexity of the algorithm.
 Average case time complexity: The time that the algorithm requires to execute on typical input data of size 'n' is known as the average case time complexity.

Concepts of Stack in Data Structure

In this chapter, you will explore one of the most important data structures, used in many fields of programming and data handling: the stack. It falls under the category of abstract data types and serves as a concrete and valuable tool for problem-solving. In this chapter, you will study the various operations and the working technique of the stack data structure.
What is Stack?

A stack is a linear data structure in which all insertion and deletion of data, or you can say of its values, is done at one end only, rather than in the middle. Stacks can be implemented using linear arrays.

The stack is mostly used in converting and evaluating expressions in Polish notations,
i.e.:

 Infix
 Prefix
 Postfix

Arrays and linked lists allow programmers to insert and delete elements at any place within the list: at the beginning, at the end, or even in the middle. However, in computer programming and development, there may arise situations where insertion and deletion are required only at one end, either the beginning or the end, of the list. The stack is a linear data structure in which all insertions and deletions of values are done at the same end, which is called the top of the stack. Consider the real-life example of a stack of plates or a pile of books. Items in this form of data structure can be removed or added from the top only, which means the last item to be added to the stack is the first item to be removed. So you can say that the stack follows the Last In First Out (LIFO) principle.

Stack as Abstract Data Type

A stack of elements of any particular type is a finite sequence of elements of that type together with the following operations:

 Initialize the stack to be empty


 Determine whether the stack is empty or not
 Check whether the stack is full or not
 If the stack is not full, add or insert a new node at the top of the stack. This operation is termed a Push operation
 If the stack is not empty, then retrieve the node at its top
 If the stack is not empty, then delete the node at its top. This operation is called a Pop operation

Representation of Stack using Arrays


The stack can be represented in memory with the use of arrays. To do this job, you need to maintain a linear array STACK and a pointer variable top which points to the top element of the stack.

Example:

#include <iostream>
#include <stdlib.h>

using namespace std;

class stack {
    int stk[5];
    int top;

public:
    stack()
    {
        top = -1;
    }
    void push(int x)
    {
        if (top >= 4) { // the array can hold only 5 elements (indices 0..4)
            cout << "stack overflow";
            return;
        }
        stk[++top] = x;
        cout << "inserted " << x;
    }
    void pop()
    {
        if (top < 0) {
            cout << "stack underflow";
            return;
        }
        cout << "deleted " << stk[top--];
    }
    void display()
    {
        if (top < 0) {
            cout << " stack empty";
            return;
        }
        for (int i = top; i >= 0; i--)
            cout << stk[i] << " ";
    }
};

int main()
{
    int ch;
    stack st;
    while (1) {
        cout << "\n1.push 2.pop 3.display 4.exit\nEnter ur choice: ";
        cin >> ch;
        switch (ch) {
        case 1:
            cout << "enter the element: ";
            cin >> ch;
            st.push(ch);
            break;
        case 2:
            st.pop();
            break;
        case 3:
            st.display();
            break;
        case 4:
            exit(0);
        }
    }
}

Output:

Concepts of Queue in Data Structure

The queue is a linear data structure used to represent a linear list. It allows insertion of an element at one end and deletion of an element at the other end.

In this chapter, you will be given an introduction to the basic concepts of queues along with the various types of queues, which will be discussed by simulating real-world situations.
What is a Queue?
A queue is a linear list of elements in which deletion of an element can take place only at one end, called the front, and insertion can take place only at the other end, called the rear. The terms front and rear are frequently used while describing queues in a linked list. In this chapter, you will deal with queues implemented as arrays.

In a queue, the first element to be inserted will be the first element to be deleted or removed from the list. So the queue is said to follow the FIFO (First In First Out) principle. A real-life example of a queue is a line of people waiting to accomplish a particular task, where the first person in the line is the first person to be served.

Other examples can also be found within a computer system: the queue of tasks lined up for the line printer, for accessing disk storage, or for the use of the CPU in a time-sharing system. Basically, queues are used where multiple tasks are kept waiting, or where one task may create other tasks which must be executed in turn by keeping them in a queue.

Queue as an ADT (Abstract Data Type)


The definition of an abstract data type clearly says that, for a data structure to be abstract, it should have the two characteristics mentioned below:

 First, there should be a particular way in which components are related to each other
 Second, a statement of the operations that can be performed on elements of the abstract data type must be specified
Thus for defining a Queue as an abstract data type, these are the following criteria:

 Initialize a queue to be empty


 Check whether a queue is empty or not
 Check whether a queue is full or not
 Insert a new element after the last element in a queue, if the queue is not full
 Retrieve the first element of the queue, if it is not empty
 Delete the first element in a queue, if it is not empty

Representation of Queue as an Array

A queue is a linear data structure that can be represented using arrays. Here is a program showing the implementation of a queue using an array.
Example:

#include <iostream>
#include <stdlib.h>

using namespace std;

class queuearr {
    int queue1[5];
    int rear, front;

public:
    queuearr()
    {
        rear = -1;
        front = -1;
    }
    void insert(int x)
    {
        if (rear >= 4) { // the array can hold only 5 elements (indices 0..4)
            cout << "queue over flow";
            front = rear = -1;
            return;
        }
        queue1[++rear] = x;
        cout << "inserted " << x;
    }
    void delet()
    {
        if (front == rear) {
            cout << "queue under flow";
            return;
        }
        cout << "deleted " << queue1[++front];
    }
    void display()
    {
        if (rear == front) {
            cout << " queue empty";
            return;
        }
        for (int i = front + 1; i <= rear; i++)
            cout << queue1[i] << " ";
    }
};

int main()
{
    int ch;
    queuearr qu;
    while (1) {
        cout << "\n1.insert 2.delet 3.display 4.exit\nEnter ur choice: ";
        cin >> ch;
        switch (ch) {
        case 1:
            cout << "enter the element: ";
            cin >> ch;
            qu.insert(ch);
            break;
        case 2:
            qu.delet();
            break;
        case 3:
            qu.display();
            break;
        case 4:
            exit(0);
        }
    }
}

Output:

Searching Techniques

This chapter explores various searching techniques. The process of identifying or finding a particular record is called searching. You often spend time searching for a desired item; if the data is kept properly in sorted order, then searching becomes very easy and efficient. In this chapter, you will get to know the basic concepts of searching as used in data structures and in programming.

What is Searching?

Searching is an operation or a technique that helps find the place of a given element or value in a list. A search is said to be successful or unsuccessful depending upon whether the element being searched for is found or not. Some of the standard searching techniques followed in data structures are listed below:
 Linear Search or Sequential Search
 Binary Search

What is Linear Search?


This is the simplest method of searching. In this technique, the element to be found is searched for sequentially in the list. This method can be performed on a sorted or an unsorted list (usually arrays). In the case of a sorted list, searching starts from the 0th element and continues until the element is found, or until an element whose value is greater than the value being searched for is reached (assuming the list is sorted in ascending order).

In contrast, searching an unsorted list also begins from the 0th element and continues until the element is found or the end of the list is reached.

Example:
The list given below is a list of elements in an unsorted array. The array contains ten elements. Suppose the element to be searched for is 46; then 46 is compared with all the elements starting from the 0th element, and the searching process ends either where 46 is found or where the list ends.

The performance of the linear search can be measured by counting the comparisons done to find an element. The number of comparisons is O(n).

Algorithm for Linear Search

It is a simple algorithm that searches for a specific item in a list. It loops over the elements, O(n), until a match occurs or the end of the array is reached.

 algorithm Seqnl_Search(list, item)
 Pre: list ≠ ∅
 Post: return the index of the item if found, otherwise -1
 index <- 0
 while index < list.Cnt and list[index] != item // Cnt: counter variable
 index <- index + 1
 end while
 if index < list.Cnt and list[index] = item
 return index
 end if
 return -1
 end Seqnl_Search
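A direct C++ version of this sequential search (a sketch; the array-plus-length interface is an assumption) could be:

// Returns the index of item in a[0..n-1], or -1 if it is not present
int seqnl_search(const int a[], int n, int item)
{
    for (int index = 0; index < n; index++) {
        if (a[index] == item)
            return index;
    }
    return -1;
}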

What is Binary Search?


Binary search is a very fast and efficient searching technique. It requires the list to be in sorted order. In this method, to search for an element you compare it with the element presently at the center of the list. If it matches, the search is successful; otherwise the list is divided into two halves: one from the 0th element up to the middle (center) element, which is the first half, and another from the center element to the last element, which is the second half, where all values are greater than the center element.

The searching mechanism then proceeds in either of the two halves depending upon whether the target element is greater or smaller than the central element. If the element is smaller than the central element, then searching is done in the first half; otherwise, searching is done in the second half.

Algorithm for Binary Search


 algorithm Binary_Search(list, item)
 Set L to 0 and R to n - 1
 if L > R, then Binary_Search terminates as unsuccessful
 else
 Set m (the position of the middle element) to the floor of (L + R) / 2
 if A[m] < T, set L to m + 1 and go to step 3
 if A[m] > T, set R to m - 1 and go to step 3
 Now, A[m] = T,
 the search is done; return m
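The same steps translate into C++ roughly as follows (a sketch assuming an ascending-sorted array a[0..n-1]):

// Returns the position of item in the sorted array a[0..n-1], or -1 if absent
int binary_search(const int a[], int n, int item)
{
    int L = 0, R = n - 1;
    while (L <= R) {
        int m = (L + R) / 2;      // middle position
        if (a[m] < item)
            L = m + 1;            // search the right half
        else if (a[m] > item)
            R = m - 1;            // search the left half
        else
            return m;             // found
    }
    return -1;                    // unsuccessful search
}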

Sorting Techniques

In this chapter you will be dealing with the various sorting techniques and the algorithms used to manipulate data structures and their storage. Sorting can be implemented in different ways: by selection, by insertion, or by merging. Various types and forms of sorting methods are explored in this tutorial.

What is Sorting?

Sorting refers to the operation or technique of arranging and rearranging sets of data in some specific order. A collection of records is called a list, where every record has one or more fields. A field which contains a unique value for each record is termed the key field. For example, a phone directory can be thought of as a list where each record has three fields: the 'name' of the person, the 'address' of that person, and their 'phone number'. Being unique, the phone number can work as a key to locate any record in the list.

Sorting is the operation performed to arrange the records of a table or list in some order according to a specific ordering criterion. Sorting is performed according to some key value of each record.

The records are sorted either numerically or alphanumerically. The records are then arranged in ascending or descending order depending on the numerical value of the key. Here is an example: the sorting of a list of marks obtained by students in a particular subject of a class.

Categories of Sorting
The techniques of sorting can be divided into two categories. These are:

 Internal Sorting
 External Sorting

Internal Sorting: If all the data that is to be sorted fits in main memory at the same time, an internal sorting method is performed.

External Sorting: When the data that is to be sorted cannot be accommodated in main memory at the same time and some of it has to be kept in auxiliary storage such as a hard disk, floppy disk, or magnetic tape, then external sorting methods are performed.

The Complexity of Sorting Algorithms

The complexity of a sorting algorithm describes the running time of the algorithm as a function of the number 'n' of items to be sorted. The choice of which sorting method is suitable for a problem depends on several characteristics of that problem. The most noteworthy of these considerations are:

 The length of time spent by the programmer in programming a specific sorting program
 The amount of machine time necessary for running the program
 The amount of memory necessary for running the program

The Efficiency of Sorting Techniques


To get the amount of time required to sort an array of 'n' elements by a particular method, the normal approach is to analyze the method to find the number of comparisons (or exchanges) it requires. Most of the sorting techniques are data sensitive, and so their metrics depend on the order in which the elements appear in the input array.

Various sorting techniques are analyzed for the following cases:

 Best case
 Worst case
 Average case
Hence, the result of these cases is often a formula giving the average time required for a particular sort of size 'n.' Most of the sorting methods have time requirements that range from O(n log n) to O(n^2).

Types of Sorting Techniques


 Bubble Sort
 Selection Sort
 Merge Sort
 Insertion Sort
 Quick Sort
 Heap Sort

Bubble Sort Algorithm

The Bubble Sort algorithm is used to arrange N elements in ascending order; to do that, you begin with the 0th element and compare it with the 1st element. If the 0th element is found to be greater than the 1st element, then a swapping operation is performed, i.e., the two values get interchanged. In this way, all the elements of the array get compared.

Below given figure shows how Bubble Sort works:


Algorithm for Bubble Sort
1. algorithm Bubble_Sort(list)
2. Pre: list ≠ ∅
3. Post: list is sorted in ascending order for all values
4. for i <- 0 to list.Count - 1
5. for j <- 0 to list.Count - 1
6. if list[i] < list[j]
7. Swap(list[i], list[j])
8. end if
9. end for
10. end for
11. return list
12. end Bubble_Sort
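A compact C++ sketch of bubble sort in its usual adjacent-swap form (which also sorts in ascending order, though it compares neighbouring elements rather than the arbitrary pairs used in the pseudocode above) is:

#include <iostream>
using namespace std;

void bubble_sort(int a[], int n)
{
    for (int i = 0; i < n - 1; i++) {
        for (int j = 0; j < n - 1 - i; j++) {
            if (a[j] > a[j + 1]) {      // neighbours out of order: swap them
                int tmp = a[j];
                a[j] = a[j + 1];
                a[j + 1] = tmp;
            }
        }
    }
}

int main()
{
    int a[6] = { 5, 1, 4, 2, 8, 3 };
    bubble_sort(a, 6);
    for (int i = 0; i < 6; i++)
        cout << a[i] << " ";
    cout << endl;
    return 0;
}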

Selection Sort Algorithm

Selection sort is a straightforward method of sorting values. In this method, to sort the data in ascending order, the 0th element is compared with all the other elements. If the 0th element is found to be greater than the compared element, the two values get interchanged. In this way, after the first iteration the smallest element is placed at the 0th position. The technique is repeated until the full array is sorted.

The below-given figure shows how Selection Sort works:

Example:

#include <iostream>
using namespace std;

int main()
{
    int a[100], i, n, p, k, min, loc, tmp;

    cout << "\n------------ SELECTION SORT ------------ \n\n";
    cout << "Enter No. of Elements:";
    cin >> n;

    cout << "\nEnter Elements:\n";
    for (i = 1; i <= n; i++) {
        cin >> a[i];
    }

    for (p = 1; p <= n - 1; p++) { // Loop for Pass
        min = a[p]; // Element Selection
        loc = p;
        for (k = p + 1; k <= n; k++) { // Finding Min Value
            if (min > a[k]) {
                min = a[k];
                loc = k;
            }
        }
        tmp = a[p];      // swap the minimum with the first unsorted element
        a[p] = a[loc];
        a[loc] = tmp;
    }

    cout << "\nAfter Sorting: \n";
    for (i = 1; i <= n; i++) {
        cout << a[i] << endl;
    }
    return 0;
}

Output:

Merge Sort Algorithm

Merge sort is another sorting technique; it has an algorithm with a reasonably proficient space-time complexity of O(n log n), and it is quite straightforward to apply. This algorithm is based on splitting a list into two comparably sized lists (left and right), sorting each of them, and then merging the two sorted lists back together as one.
Merge sort can be implemented in two ways, both having similar logic:

 Top down implementation


 Bottom up implementation

Below given figure shows how Merge Sort works:

Algorithm for Merge Sort

1. algorithm Merge_Sort(list)
2. Pre: list ≠ ∅
3. Post: list has been sorted into values of ascending order
4. if list.Count = 1 // when already sorted
5. return list
6. end if
7. m <- list.Count / 2
8. left <- list(m)
9. right <- list(list.Count - m)
10. for i <- 0 to left.Count - 1
11. left[i] <- list[i]
12. end for
13. for i <- 0 to right.Count - 1
14. right[i] <- list[m + i]
15. end for
16. left <- Merge_Sort(left)
17. right <- Merge_Sort(right)
18. return MergeOrdered(left, right)
19. end Merge_Sort
It is to be noted that merge sort uses the divide and conquer technique, which was invented by John von Neumann in the year 1945.

Example:

#include <iostream>

using namespace std;

void merge(int*, int, int, int);

void mergesort(int* a, int low, int high)
{
    int mid;
    if (low < high) {
        mid = (low + high) / 2;
        mergesort(a, low, mid);
        mergesort(a, mid + 1, high);
        merge(a, low, high, mid);
    }
    return;
}

// Merge step: combine the two sorted halves a[low..mid] and a[mid+1..high]
void merge(int* a, int low, int high, int mid)
{
    int i, j, k, c[50];
    i = low;
    k = low;
    j = mid + 1;
    while (i <= mid && j <= high) {
        if (a[i] < a[j]) {
            c[k] = a[i];
            k++;
            i++;
        }
        else {
            c[k] = a[j];
            k++;
            j++;
        }
    }
    while (i <= mid) {
        c[k] = a[i];
        k++;
        i++;
    }
    while (j <= high) {
        c[k] = a[j];
        k++;
        j++;
    }
    for (i = low; i < k; i++)
        a[i] = c[i];
}

// from main the mergesort function gets called
int main()
{
    int a[30], i, b[30];

    cout << "enter 5 elements:\n";
    for (i = 0; i < 5; i++) {
        cin >> a[i];
    }
    mergesort(a, 0, 4);

    cout << "sorted array\n";
    for (i = 0; i < 5; i++)
        cout << a[i] << "\t";

    cout << "\nenter 5 more elements:\n";
    for (i = 0; i < 5; i++) {
        cin >> b[i];
    }
    mergesort(b, 0, 4);

    cout << "sorted array:\n";
    for (i = 0; i < 5; i++)
        cout << b[i] << "\t";

    return 0;
}

Output:
Quick sort algorithm

Quick sort is one of the most famous sorting algorithms, based on the divide and conquer strategy, which results in an O(n log n) complexity. The algorithm starts by picking a single item, called the pivot, and moving all smaller items before it, while all greater elements come in the later portion of the list. This is the main quick sort operation, named partition, which is recursively repeated on the lesser and greater sublists until their size is one or zero, in which case the list is wholly sorted. Choosing an appropriate pivot, for example the central element, is essential for avoiding the severely degraded performance of O(n^2).

Algorithm for Quick Sort


1. algorithm Quick_Sort(list)
2. Pre: list ≠ ∅
3. Post: the list has been sorted in ascending order
4. if list.Count = 1 // list already sorted
5. return list
6. end if
7. pivot <- Median_Value(list)
8. for i <- 0 to list.Count - 1
9. if list[i] = pivot
10. equal.Insert(list[i])
11. end if
12. if list[i] < pivot
13. less.Insert(list[i])
14. end if
15. if list[i] > pivot
16. greater.Insert(list[i])
17. end if
18. end for
19. return Concatenate(Quick_Sort(less), equal, Quick_Sort(greater))
20. end Quick_sort
Example:

#include <iostream>
#include <stdio.h>

using namespace std;

template <class t1>
void swap(t1 a[], int x, int y);

template <class t>
void quick_sort(t a[], int low, int high)
{
    t k;
    int i, j, flag = 1;
    if (low < high) {
        k = a[low];          // the first element is chosen as the pivot
        i = low + 1;
        j = high;
        while (flag) {
            while ((a[i] <= k) && (i < j))
                i++;
            while (a[j] > k)
                j--;
            if (i < j)
                swap(a, i, j);
            else
                flag = 0;
        }
        swap(a, low, j);     // place the pivot in its final position
        quick_sort(a, low, j - 1);
        quick_sort(a, j + 1, high);
    }
}

template <class t1>
void swap(t1 a[], int x, int y)
{
    t1 temp;
    temp = a[x];
    a[x] = a[y];
    a[y] = temp;
}

int main()
{
    int i, n, a[20];

    cout << "Enter the number of elements to be sort::";
    cin >> n;
    cout << "Enter elements:\n";
    for (i = 0; i < n; i++)
        cin >> a[i];

    quick_sort(a, 0, n - 1);

    cout << "The sorted elements are:\n";
    for (i = 0; i < n; i++)
        cout << a[i] << endl;
    return 0;
}
Output:

Insertion Sort Algorithm

Insertion sort is, to some extent, an interesting algorithm, though it has an expensive runtime characteristic of O(n^2). This algorithm can best be thought of as a sorting scheme comparable to sorting a hand of playing cards, i.e., you take one card and then look at the rest with the intent of building up an ordered set of cards in your hand.

Algorithm for Insertion sort


1. algorithm Insertion_sort(list)
2. Pre: list ≠ ∅
3. Post: list has been sorted into values of ascending order
4. unsorted <- 1
5. while unsorted < list.Count
6. hold <- list[unsorted]
7. i <- unsorted - 1
8. while i >= 0 and hold < list[i]
9. list[i + 1] <- list[i]
10. i <- i - 1
11. end while
12. list[i + 1] <- hold
13. unsorted <- unsorted + 1
14. end while
15. return list
16. end Insertion_sort

Example:

#include <iostream>

using namespace std;

int main()
{
    int a[16], i, j, k, tmp;

    cout << "enter 6 elements\n";
    for (i = 0; i < 6; i++) {
        cin >> a[i];
    }

    for (i = 1; i < 6; i++) {
        for (j = i; j >= 1; j--) {
            if (a[j] < a[j - 1]) {    // shift the smaller value left
                tmp = a[j];
                a[j] = a[j - 1];
                a[j - 1] = tmp;
            }
            else
                break;
        }
    }

    cout << "sorted array\n" << endl;
    for (k = 0; k < 6; k++) {
        cout << a[k] << "\t";
    }
    return 0;
}

Output:

Binary trees

This chapter explores one of the most important non-linear data structures, i.e., trees.
Various kinds of trees are available with different features. You will start learning with
the most important tree structure, i.e., the binary tree which is a finite set of elements
that is either empty or further divided into sub-trees.

There are two ways to represent binary trees. These are:

 Using arrays
 Using Linked lists

The Non-Linear Data structure


The data structures that you have learned so far were merely linear: strings, arrays, lists, stacks, and queues. One of the most important non-linear data structures is the tree. Trees are mainly used to represent data containing a hierarchical relationship between elements, for example records, family trees, and tables of contents. A tree may be defined as a finite set 'T' of one or more nodes such that there is one node designated as the root of the tree and the other nodes are divided into n >= 0 disjoint sets T1, T2, T3, T4, ..., Tn, which are called the subtrees or children of the root.
What is a Binary Tree?
A binary tree is a special type of tree in which every node or vertex has either no child node, one child node, or two child nodes. A binary tree is an important class of tree data structure in which a node can have at most two children.

The child node on the left in a binary tree is termed the 'left child node', and the node on the right is termed the 'right child node.'

In the figure mentioned below, the root node 8 has two children, 3 and 10; these two child nodes in turn act as parent nodes: 1 and 6 for the left parent node 3, and 14 for the right parent node 10. Similarly, 6 and 14 have child nodes.

A binary tree may also be defined as follows:

 A binary tree is either an empty tree,
 or it consists of a node called the root node, a left subtree and a right subtree, both of which act as binary trees once again.
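In C++, such a node can be sketched as a struct with a value and two child pointers (the names are illustrative):

struct TreeNode {
    int data;          // value stored in the node
    TreeNode* left;    // root of the left subtree, or NULL if empty
    TreeNode* right;   // root of the right subtree, or NULL if empty
};

TreeNode* newNode(int value)
{
    TreeNode* n = new TreeNode;
    n->data = value;
    n->left = NULL;
    n->right = NULL;
    return n;
}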

Applications of Binary Tree


Binary trees are used to represent a non-linear data structure. There are various forms of binary trees. Binary trees play a vital role in software applications. One of the most important applications of the binary tree is in searching algorithms.

A general tree is defined as a nonempty finite set T of elements, called nodes, such that:

 The tree contains a root element
 The remaining elements of the tree form an ordered collection of zero or more disjoint trees T1, T2, T3, T4, ..., Tn, which are called subtrees.

Types of Binary Trees


 A full binary tree which is also called as proper binary tree or 2-tree is a tree in which
all the node other than the leaves has exact two children.

 A complete binary tree is a binary tree in which at every level, except possibly the last,
has to be filled and all nodes are as far left as possible.
 A binary tree can be converted into an extended binary tree by adding new nodes to
its leaf nodes and to the nodes that have only one child. These new nodes are added
in such a way that all the nodes in the resultant tree have either zero or two children. It
is also called 2 - tree.
 The threaded Binary tree is the tree which is represented using pointers the empty
subtrees are set to NULL, i.e. 'left' pointer of the node whose left child is empty subtree
is normally set to NULL. These large numbers of pointer sets are used in different
ways.
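For example, the 'full' property above can be checked directly from its definition, every node having either zero or two children. This is only an illustrative sketch, reusing the hypothetical Node struct from the earlier binary tree sketch.

// A binary tree is full (a 2-tree) when every node has 0 or 2 children.
bool isFull(const Node* root)
{
    if (root == nullptr) return true;                      // an empty tree counts as full
    bool hasLeft  = (root->left  != nullptr);
    bool hasRight = (root->right != nullptr);
    if (hasLeft != hasRight) return false;                 // exactly one child: not full
    return isFull(root->left) && isFull(root->right);      // recurse into both subtrees
}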

AVL Trees

The tree is one of the most important data structures for efficiently performing
operations such as insertion, deletion and searching of values. However, when working with
a large volume of data, constructing a well-balanced tree that holds all the data up front is not
feasible. Thus only the useful data is stored as a tree, and the actual volume of data being
used continually changes through the insertion of new data and the deletion of existing
data. In some cases the NULL links of a binary tree are converted into special links
called threads, which make it possible to perform traversals, insertions and deletions
without using either a stack or recursion. In this chapter, you will learn about the height-
balanced tree, which is also known as the AVL tree.
What is the AVL Tree?

An AVL tree is a binary search tree in which the difference between the heights of the left and
right subtrees of any node is at most one. The technique of balancing the height
of binary trees was developed by Adelson-Velskii and Landis, and the tree is therefore given the
short name AVL tree; it is also called a Balanced Binary Tree.

An AVL tree can be defined as follows:

Let T be a non-empty binary tree with TL and TR as its left and right subtrees. The tree is
height balanced if:

 TL and TR are height balanced, and


 |hL - hR| <= 1, where hL and hR are the heights of TL and TR.

The balance factor of a node in a binary tree can have the value +1, 0 or -1, depending on
whether the height of its left subtree is greater than, equal to, or less than the height of the
right subtree.
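Written out directly from this definition (a naive sketch, reusing the hypothetical Node struct from the binary tree chapter), the balance factor is the height of the left subtree minus the height of the right subtree. Practical AVL implementations usually store the height or balance factor in each node instead of recomputing it, as the insertion sketch later in this chapter does.

#include <algorithm>

// Height of a subtree; an empty subtree has height 0 in this sketch.
int height(const Node* n)
{
    if (n == nullptr) return 0;
    return 1 + std::max(height(n->left), height(n->right));
}

// Balance factor of a node: restricted to +1, 0 or -1 in an AVL tree.
int balanceFactor(const Node* n)
{
    return height(n->left) - height(n->right);
}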

Advantages of AVL tree


Since AVL trees are height-balanced trees, operations like insertion and deletion have
low time complexity. Let us consider an example:

If you insert the keys 1, 2, 3, 4, 5, 6, 7 in that order, the ordinary binary tree degenerates
into a chain, as shown in the second figure.
To insert a node with a new key Q into that binary tree, the algorithm requires seven
comparisons, but if you insert the same key into the AVL tree of the first figure above, you
can see that the algorithm requires only three comparisons.

Representation of AVL Trees

struct AVLNode
{
    int data;                       // value stored at this node
    struct AVLNode *left, *right;   // links to the left and right subtrees
    int balfactor;                  // balance factor of this node
};

Algorithm for different Operations on AVL

For Insertion:
Step 1: First, insert the new element into the tree using the standard BST (Binary Search Tree)
insertion logic.

Step 2: After inserting the element, check the balance factor of each node along the path back
to the root.

Step 3: If the balance factor of every node is 0, +1 or -1, the tree is still balanced and the
algorithm proceeds to the next operation.

Step 4: If the balance factor of any node is anything other than these three values, the tree is
said to be imbalanced; perform the suitable rotation to make it balanced, and then the
algorithm proceeds to the next operation (a C++ sketch of the whole procedure follows
these steps).
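The following is a hedged C++ sketch of these steps, not the only possible implementation: it stores a height in each node rather than the bare balance factor, and the names insert, update, rotateLeft and rotateRight are illustrative rather than taken from the text. The four rotation cases at the end correspond to Step 4.

#include <algorithm>

struct AVLNode {
    int data;
    AVLNode *left, *right;
    int height;                                   // height of the subtree rooted here
    AVLNode(int d) : data(d), left(nullptr), right(nullptr), height(1) {}
};

int height(AVLNode* n)        { return n ? n->height : 0; }
int balanceFactor(AVLNode* n) { return n ? height(n->left) - height(n->right) : 0; }
void update(AVLNode* n)       { n->height = 1 + std::max(height(n->left), height(n->right)); }

AVLNode* rotateRight(AVLNode* y)                  // used for the Left-Left case
{
    AVLNode* x = y->left;
    y->left = x->right;
    x->right = y;
    update(y);
    update(x);
    return x;                                     // x becomes the new subtree root
}

AVLNode* rotateLeft(AVLNode* x)                   // used for the Right-Right case
{
    AVLNode* y = x->right;
    x->right = y->left;
    y->left = x;
    update(x);
    update(y);
    return y;
}

AVLNode* insert(AVLNode* node, int key)
{
    // Step 1: ordinary BST insertion.
    if (node == nullptr) return new AVLNode(key);
    if (key < node->data)       node->left = insert(node->left, key);
    else if (key > node->data)  node->right = insert(node->right, key);
    else return node;                             // duplicate keys are ignored here

    // Steps 2-3: recompute the height and check the balance factor.
    update(node);
    int bf = balanceFactor(node);

    // Step 4: rebalance with the suitable rotation when |bf| > 1.
    if (bf > 1 && key < node->left->data)   return rotateRight(node);   // Left-Left
    if (bf < -1 && key > node->right->data) return rotateLeft(node);    // Right-Right
    if (bf > 1 && key > node->left->data) {                             // Left-Right
        node->left = rotateLeft(node->left);
        return rotateRight(node);
    }
    if (bf < -1 && key < node->right->data) {                           // Right-Left
        node->right = rotateRight(node->right);
        return rotateLeft(node);
    }
    return node;                                  // already balanced
}

For example, inserting the keys 1 through 7 in order with repeated calls to insert keeps the tree perfectly balanced at height 3, instead of letting it degenerate into the chain discussed earlier.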

For Deletion:
Step 1: First, find the node where the key k is stored.

Step 2: Second, delete the contents of that node (suppose the node is x).

Step 3: Claim: deleting a node in an AVL tree can be reduced to deleting a leaf. There
are three possible cases:

 When x has no children, simply delete x.


 When x has one child, let x' be the child of x.
 Notice that x' cannot have a child of its own, since the subtrees of T can differ in height by at most one:
o then replace the contents of x with the contents of x'
o then delete x' (now a leaf)
 When x has two children,
o then find x's in-order successor z (which has no left child)
o then replace x's contents with z's contents, and
o delete z
In all three cases, you end up removing a leaf. Afterwards, rebalance the nodes on the path back
to the root using the rotations described for insertion; a C++ sketch of this case analysis follows.
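Below is a hedged C++ sketch of this case analysis. It reuses the AVLNode struct and the height, update, balanceFactor, rotateLeft and rotateRight helpers from the insertion sketch above (all illustrative names), folds the three cases into the usual recursive form, and rebalances on the way back up exactly as Step 4 of insertion does.

// Rebalance one node after a deletion, using the same four rotation
// cases as insertion (helpers reused from the insertion sketch above).
AVLNode* rebalance(AVLNode* node)
{
    update(node);
    int bf = balanceFactor(node);
    if (bf > 1) {                                         // left-heavy
        if (balanceFactor(node->left) < 0)                // Left-Right case
            node->left = rotateLeft(node->left);
        return rotateRight(node);                         // Left-Left case
    }
    if (bf < -1) {                                        // right-heavy
        if (balanceFactor(node->right) > 0)               // Right-Left case
            node->right = rotateRight(node->right);
        return rotateLeft(node);                          // Right-Right case
    }
    return node;
}

AVLNode* remove(AVLNode* node, int key)
{
    if (node == nullptr) return nullptr;                  // key k not found
    if (key < node->data)      node->left  = remove(node->left, key);
    else if (key > node->data) node->right = remove(node->right, key);
    else {
        // Cases 1 and 2: x has no child or one child -- splice x out.
        if (node->left == nullptr || node->right == nullptr) {
            AVLNode* child = node->left ? node->left : node->right;
            delete node;
            return child;                                 // nullptr when x was a leaf
        }
        // Case 3 (two children): copy the in-order successor z into x,
        // then delete z from the right subtree; z has no left child.
        AVLNode* z = node->right;
        while (z->left != nullptr) z = z->left;
        node->data = z->data;
        node->right = remove(node->right, z->data);
    }
    return rebalance(node);                               // restore the AVL property
}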

Forests and Orchards

In a computer application, you usually do not need to study trees in such generality, and
when you do, for emphasis you call them free trees. The trees you create are almost
always tied down by having one particular vertex singled out as the root, and for emphasis
you can call such trees rooted trees.

In a rooted tree, there is still no way to tell left from right or, when one vertex has several
children, to tell which is first, second and so on. If for no other reason, the constraint of
sequential execution of instructions (not to mention the sequential organization of
storage) usually imposes an order on the children of each vertex. Hence you can define
an ordered tree to be a rooted tree in which the children of each vertex are assigned an
order.

In this chapter you will learn about the basic concepts of forests and how orchards are
formed in data structures.

What are Forests and Orchards?


From your work so far with binary trees, you have seen the benefits of using them.
Binary trees can also be implemented using recursion, which reduces a problem to
a smaller one.

There is a standard term for an arbitrary set of trees: a forest. In other
words, you can view a general tree as the root of a forest, and a forest as an ordered
combination of zero or more general trees. This mutually recursive definition of general
trees and forests allows programmers to talk about trees whose nodes may have more
than two children. The children of a node (the trees of the forest that it roots) are
sequenced: the first, the second, and so on. There is no concept of left and right in
general trees, except that you typically draw the tree with the children ordered from
left to right.
When you use the term forest, you will generally assume that the trees are rooted.
The phrase ordered forest is sometimes used for an ordered set of ordered trees, but
you will adopt the equally descriptive term orchard. Note that you can obtain a
forest or an orchard by removing the root from a rooted tree, and, conversely, you can start
with a forest or an orchard, attach a new vertex on top and add branches from the new
vertex to the roots of all the trees in the forest or orchard to obtain a rooted tree.

What is Rotation?
Rotation is the transformation from an orchard to a binary tree. In the binary tree [v, f(O1),
f(O2)], the left link from v goes to the root of the binary tree f(O1), which is in fact the
first child of v in the ordered tree {v, O1}. In geometrical terms, the transformation
reduces to the rules explained below:

 Draw the orchard so that the first child of each vertex is immediately below the vertex.
 Draw a vertical link from each vertex to its first child.
 Draw a horizontal link from each vertex to its next sibling.
 Remove the remaining original links.
 Rotate the diagram by 45 degrees clockwise, so that the vertical links appear as left links
and the horizontal links appear as right links (a small code sketch of this idea follows the list).
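In code, this transformation is usually carried out with the first-child / next-sibling representation: the vertical link becomes the left pointer and the horizontal link becomes the right pointer. The sketch below is only an illustration; the node types and field names are hypothetical rather than taken from the text.

// General (ordered) tree node: a vertical link to its first child and a
// horizontal link to its next sibling within the orchard.
struct TreeNode {
    int data;
    TreeNode* firstChild;
    TreeNode* nextSibling;
};

// Binary tree node produced by the rotation.
struct BinNode {
    int data;
    BinNode* left;      // was the vertical "first child" link
    BinNode* right;     // was the horizontal "next sibling" link
};

// Rotate an orchard into a binary tree: the first child of a vertex becomes
// its left child, and its next sibling becomes its right child.
BinNode* rotate(const TreeNode* t)
{
    if (t == nullptr) return nullptr;
    return new BinNode{t->data, rotate(t->firstChild), rotate(t->nextSibling)};
}

Calling rotate on the first tree of an orchard produces exactly the binary tree that the five geometric rules above describe.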
