
Chapter 3

3. Simple Sorting and Searching Algorithms

Searching

Searching is a process of looking for a specific element in a list of items, or determining that the item is not in the list. There are two simple searching algorithms:

    Sequential Search, and
    Binary Search

3.1.1 Linear Search (Sequential Search)

A linear search looks down a list, one item at a time, without jumping. In complexity terms this is an O(n) search: the time taken to search the list grows at the same rate as the list does. Linear search is a very simple search algorithm. In this type of search, a sequential search is made over all items one by one. Every item is checked; if a match is found then that particular item is returned, otherwise the search continues till the end of the data collection.

Algorithm of linear search

Loop through the array starting at the first element until the value of the target matches one of the array elements. If a match is not found, return -1. Time is proportional to the size of the input (n), and we call this time complexity O(n).

Algorithm

Linear Search (Array A, Value x)
Step 1: Set i to 1
Step 2: If i > n then go to step 7
Step 3: If A[i] = x then go to step 6
Step 4: Set i to i + 1
Step 5: Go to step 2
Step 6: Print "Element x found at index i" and go to step 8
Step 7: Print "Element not found"
Step 8: Exit

Pseudocode

procedure linear_search (list, value)
    for each item in the list
        if item == value
            return the item's location
        end if
    end for
end procedure

Example Implementation:

/* returns the index of key in list[0..n-1], or -1 if it is absent */
int Linear_Search(int list[], int n, int key)
{
    int index = 0;
    int found = 0;
    do {
        if (key == list[index])
            found = 1;
        else
            index++;
    } while (found == 0 && index < n);
    if (found == 0)
        index = -1;
    return index;
}

3.1.2 Binary Search

A binary search starts with the middle of a sorted list, and sees whether that is greater than or less than the value you are looking for, which determines whether the value is in the first or second half of the list. It then jumps to the halfway point of that sub-list and compares again, and so on. This is pretty much how humans typically look up a word in a dictionary (although we use better heuristics, obviously - if you're looking for "cat" you
obviously - if you're looking for "cat" you
don't start off at "M"). In complexity terms
this is an O(log n) search - the number of
search operations grows more slowly than How Binary Search Works?
the list does, because you're halving the
"search space" with each operation .This For a binary search to work, it is mandatory
search algorithm works on the principle of for the target array to be sorted. We shall
divide and conquer. This searching Learn the process of binary search with a
algorithm works only for an ordered list. pictorial example. The following is our
The basic idea is: sorted
Locate midpoint of array to array and let us assume that we need to
search search the location of value 31 using
Determine if target is in lower half or binary
upper half of an array. Search.
If in lower half, make this half the
array to search
If in the upper half, make this half
First, we shall determine half of the array by
the array to search
using this formula -
mid = low + (high - low) / 2
Loop back to step 1 until the size of the Here it is, 0 + (9 - 0 ) / 2 = 4 (integer value
array to search is one, and this element does of 4.5). So, 4 is the mid of the array.
not match, in which case return 1.
The computational time for this
algorithm is proportional to log2 n
Therefore the time complexity is O(log n)
Now we compare the value stored at
location 4, with the value being searched,
Pseudo code i.e. 31. We find that the value at location 4 is
27, which is not a match. As the value is
Procedure binary search
greater than 27 and we have a sorted array,
A sorted array
N size of array so we also know that the target value must
X value to be searched be in the upper portion of the array.
Set lower Bound = 1
Set upper Bound = n
While x not found
If upper Bound < lower Bound
EXIT: x does not exist.
Set midpoint = lower Bound + (upper We change our low to mid + 1 and find the
Bound - lower Bound) / 2 new mid value again.
If A[midpoint] < x low = mid + 1
Set lower Bound = midpoint + 1 mid = low + (high - low) / 2
If A[midpoint] > x Our new mid is 7 now. We compare the
Set upper Bound = midpoint - 1 value stored at location 7 with our target
If A[midpoint] = x value 31.
EXIT: x found at location midpoint
End while
End procedure
The pseudo code of binary search algorithms
should look like this
The value stored at location 7 is not a match; rather, it is less than what we are looking for. So, the value must be in the lower part from this location.

Hence, we calculate the mid again. This time it is 5.

We compare the value stored at location 5 with our target value. We find that it is a match.

We conclude that the target value 31 is stored at location 5. Binary search halves the searchable items and thus reduces the number of comparisons to be made to a very small number.

As an example, suppose you were looking for U in an A-Z list of letters (index 0-25; we're looking for the value at index 20).

A linear search would ask:
list[0] == 'U'? No.
list[1] == 'U'? No.
list[2] == 'U'? No.
list[3] == 'U'? No.
list[4] == 'U'? No.
list[5] == 'U'? No.
... list[20] == 'U'? Yes. Finished.

The binary search would ask:
Compare list[12] ('M') with 'U': Smaller, look further on. (Range = 13-25)
Compare list[19] ('T') with 'U': Smaller, look further on. (Range = 20-25)
Compare list[22] ('W') with 'U': Bigger, look earlier. (Range = 20-21)
Compare list[20] ('U') with 'U': Found it! Finished.
Comparing the two:

    Binary search requires the input data to be sorted; linear search doesn't.
    Binary search requires an ordering comparison; linear search only requires equality comparisons.
    Binary search has complexity O(log n); linear search has complexity O(n), as discussed earlier.
    Binary search requires random access to the data; linear search only requires sequential access (this can be very important - it means a linear search can stream data of arbitrary size).

Example Implementation:

/* returns the index of key in the sorted array list[0..n-1], or -1 */
int Binary_Search(int list[], int n, int key)
{
    int left = 0, right = n - 1, found = 0, mid = 0, index;
    do
    {
        mid = (left + right) / 2;
        if (key == list[mid])
            found = 1;
        else
        {
            if (key < list[mid])
                right = mid - 1;
            else
                left = mid + 1;
        }
    } while (found == 0 && left <= right);
    if (found == 0)
        index = -1;
    else
        index = mid;
    return index;
}

Interpolation Search

Interpolation search is an improved variant of binary search. This search algorithm works on the probing position of the required value. For this algorithm to work properly, the data collection should be in a sorted form and uniformly distributed.

Binary search has a huge advantage of time complexity over linear search: linear search has worst-case complexity of O(n), whereas binary search has O(log n).

There are cases where the location of the target data may be known in advance. For example, in the case of a telephone directory, if we want to search for the telephone number of Morpheus, linear search and even binary search will seem slow, as we can directly jump to the memory space where the names starting with 'M' are stored.

Positioning in Binary Search

In binary search, if the desired data is not found, then the rest of the list is divided in two parts, lower and higher, and the search is carried out in either of them.

Interpolation search finds a particular item by computing the probe position. Initially, the probe position is the position of the middle-most item of the collection. If a match occurs, then the index of the item is returned. To split the list into two parts, we use the following method -

    mid = Lo + ((Hi - Lo) / (A[Hi] - A[Lo])) * (X - A[Lo])

where
    A    = list
    Lo   = lowest index of the list
    Hi   = highest index of the list
    A[n] = value stored at index n in the list

If the middle item is smaller than the target item, then the probe position is calculated again in the sub-array to the right of the middle item; otherwise, the item is searched for in the sub-array to the left of the middle item. This process continues on the sub-array until the size of the sub-array reduces to zero.

The runtime complexity of the interpolation search algorithm is O(log (log n)), as compared to the O(log n) of binary search, in favorable situations.

Algorithm

Even when the data is sorted, binary search does not take advantage of the probable position of the desired data; interpolation search does, through position probing. As interpolation search is an improvement on the existing binary search algorithm, we mention only the steps to search for the target data value's index, using position probing:

Step 1 - Start searching data from the middle of the list.
Step 2 - If it is a match, return the index of the item, and exit.
Step 3 - If it is not a match, probe the position.
Step 4 - Divide the list using the probing formula and find the new middle.
Step 5 - If the data is greater than the middle, search in the higher sub-list.
Step 6 - If the data is smaller than the middle, search in the lower sub-list.
Step 7 - Repeat until there is a match.

Pseudo Code

A : array list
N : size of A
X : target value

Procedure Interpolation_Search()
    Set Lo = 0
    Set Mid = -1
    Set Hi = N-1
    While X does not match
        if Lo equals Hi OR A[Lo] equals A[Hi]
            EXIT: Failure, Target not found
        end if
        Set Mid = Lo + ((Hi - Lo) / (A[Hi] - A[Lo])) * (X - A[Lo])
        if A[Mid] = X
            EXIT: Success, Target found at Mid
        else
            if A[Mid] < X
                Set Lo to Mid+1
            else if A[Mid] > X
                Set Hi to Mid-1
            end if
        end if
    End While
End Procedure

Divide and Conquer

In the divide and conquer approach, the problem in hand is divided into smaller sub-problems, and then each sub-problem is solved independently. When we keep on dividing the sub-problems into even smaller sub-problems, we may eventually reach a stage where no more division is possible. Those "atomic" smallest possible sub-problems (fractions) are solved, and the solutions of all the sub-problems are finally merged in order to obtain the solution of the original problem.

Broadly, we can understand the divide-and-conquer approach as a three-step process.

Divide/Break

This step involves breaking the problem into smaller sub-problems. Sub-problems should represent a part of the original problem. This step generally takes a recursive approach to divide the problem until no sub-problem is further divisible. At this stage, sub-problems become atomic in nature but still represent some part of the actual problem.

Conquer/Solve

This step receives a lot of smaller sub-problems to be solved. Generally, at this level, the problems are considered 'solved' on their own.

Merge/Combine

When the smaller sub-problems are solved, this stage recursively combines them until they formulate a solution of the original problem. This algorithmic approach works recursively, and the conquer and merge steps work so closely that they appear as one.

Examples

The following computer algorithms are based on the divide-and-conquer programming approach:

    Merge Sort
    Quick Sort
    Binary Search
There are various ways available to solve any computer problem, but the ones mentioned are good examples of the divide and conquer approach.

3.2. Sorting Algorithms

Sorting is one of the most important operations performed by computers. Sorting is a process of reordering a list of items in either increasing or decreasing order. The following are simple sorting algorithms used to sort small-sized lists.

Have an array you need to put in order? Keeping business records and want to sort them by ID number or the last name of the client? Then you'll need a sorting algorithm. To understand the more complex and efficient sorting algorithms, it's important to first understand the simpler, but slower, algorithms.

In-place Sorting and Not-in-place Sorting

Sorting algorithms may require some extra space for comparison and temporary storage of a few data elements. If an algorithm does not require any extra space, sorting is said to happen in-place, for example within the array itself. This is called in-place sorting. Bubble sort is an example of in-place sorting.

However, in some sorting algorithms, the program requires space which is more than or equal to the number of elements being sorted. Sorting which uses equal or more space is called not-in-place sorting. Merge sort is an example of not-in-place sorting.

Stable and Not Stable Sorting

If a sorting algorithm, after sorting the contents, does not change the sequence of similar content in which it appears, it is called stable sorting.
If a sorting algorithm, after sorting the contents, changes the sequence of similar content in which it appears, it is called unstable sorting. The stability of an algorithm matters when we wish to maintain the sequence of the original elements, as in a tuple, for example.

Sorting refers to arranging data in a particular format. A sorting algorithm specifies the way to arrange data in a particular order. The most common orders are numerical or lexicographical order.

The importance of sorting lies in the fact that data searching can be optimized to a very high level if data is stored in a sorted manner. Sorting is also used to represent data in more readable formats. Following are some examples of sorting in real-life scenarios:

Telephone Directory - The telephone directory stores the telephone numbers of people sorted by their names, so that the names can be searched easily.

Dictionary - The dictionary stores words in alphabetical order so that searching for any word becomes easy.
Adaptive and Non-Adaptive Sorting Algorithms

A sorting algorithm is said to be adaptive if it takes advantage of already 'sorted' elements in the list that is to be sorted. That is, while sorting, if the source list has some elements already sorted, adaptive algorithms will take this into account and will try not to re-order them.

A non-adaptive algorithm is one which does not take into account the elements which are already sorted. It tries to force every single element to be re-ordered to confirm its sortedness.

Important Terms

Some terms are generally coined while discussing sorting techniques; here is a brief introduction to them.

Increasing Order

A sequence of values is said to be in increasing order if each successive element is greater than the previous one. For example, 1, 3, 4, 6, 8, 9 is in increasing order, as every next element is greater than the previous element.

Decreasing Order

A sequence of values is said to be in decreasing order if each successive element is less than the current one. For example, 9, 8, 6, 4, 3, 1 is in decreasing order, as every next element is less than the previous element.

Non-Increasing Order

A sequence of values is said to be in non-increasing order if each successive element is less than or equal to its previous element in the sequence. This order occurs when the sequence contains duplicate values. For example, 9, 8, 6, 3, 3, 1 is in non-increasing order, as every next element is less than or equal to (in the case of 3) but not greater than the previous element.

Non-Decreasing Order

A sequence of values is said to be in non-decreasing order if each successive element is greater than or equal to its previous element in the sequence. This order occurs when the sequence contains duplicate values. For example, 1, 3, 3, 6, 8, 9 is in non-decreasing order, as every next element is greater than or equal to (in the case of 3) but not less than the previous one.

In this article, you'll learn about bubble sort, including a modified bubble sort that's slightly more efficient; insertion sort; and selection sort. Any of these sorting algorithms is good enough for most small tasks, though if you were going to process a large amount of data, you would want to choose one of the algorithms listed under advanced sorting.

    Insertion Sort
    Selection Sort
    Bubble Sort

3.2.1. Insertion Sort

The insertion sort works just like its name suggests - it inserts each item into its proper place in the final list. The simplest implementation of this requires two list structures: the source list and the list into which sorted items are inserted. To save memory, most implementations use an in-place sort that works by moving the current item past the already sorted items and repeatedly swapping it with the preceding item until it is in place.

It's the most instinctive type of sorting algorithm. The approach is the same approach that you use for sorting a set of cards in your hand. While playing cards, you pick up a card, start at the beginning of your hand and find the place to insert the new card, insert it and move all the others up one place.
This is an in-place comparison-based sorting algorithm. Here, a sub-list is maintained which is always sorted. For example, the lower part of an array is maintained to be sorted. An element which is to be inserted into this sorted sub-list has to find its appropriate place and then be inserted there. Hence the name, insertion sort.

The array is searched sequentially and unsorted items are moved and inserted into the sorted sub-list (in the same array). This algorithm is not suitable for large data sets, as its average and worst case complexity are O(n2), where n is the number of items.

Insertion sort does exactly what you would expect: it inserts each element of the array into its proper position, leaving progressively larger stretches of the array sorted. What this means in practice is that the sort iterates down the array; the part of the array already covered is in order; then, the current element of the array is inserted into the proper position at the head of that part, and the rest of the elements are moved down, using the space just vacated by the inserted element as the final space.

Here is an example: for sorting the array 52314. First, 2 is inserted before 5, resulting in 25314. Then, 3 is inserted between 2 and 5, resulting in 23514. Next, 1 is inserted at the start, 12354. Finally, 4 is inserted between 3 and 5, giving 12345.

Basic Idea:

Find the location for an element, move all others up, and insert the element. The process involved in insertion sort is as follows:

    The left-most value can be said to be sorted relative to itself. Thus, we don't need to do anything.
    Check to see if the second value is smaller than the first one. If it is, swap these two values. The first two values are now relatively sorted.
    Next, we need to insert the third value into the relatively sorted portion so that after insertion, the portion will still be relatively sorted. Remove the third value first. Slide the second value to make room for insertion. Insert the value in the appropriate position.
    Now the first three are relatively sorted.
    Do the same for the remaining items in the list.

Algorithm

Now that we have a bigger picture of how this sorting technique works, we can derive simple steps by which we can achieve insertion sort.

Step 1 - If it is the first element, it is already sorted; return.
Step 2 - Pick the next element.
Step 3 - Compare with all elements in the sorted sub-list.
Step 4 - Shift all the elements in the sorted sub-list that are greater than the value to be sorted.
Step 5 - Insert the value.
Step 6 - Repeat until the list is sorted.

Pseudocode

procedure insertionSort( A : array of items )
    int holePosition
    int valueToInsert

    for i = 1 to length(A) inclusive do:

        /* select value to be inserted */
        valueToInsert = A[i]
        holePosition = i

        /* locate hole position for the element to be inserted */
        while holePosition > 0 and A[holePosition-1] > valueToInsert do:
            A[holePosition] = A[holePosition-1]
            holePosition = holePosition - 1
        end while

        /* insert the number at hole position */
        A[holePosition] = valueToInsert

    end for
end procedure
How Insertion Sort Works?

We take an unsorted array for our example.

Insertion sort compares the first two elements. It finds that both 14 and 33 are already in ascending order. For now, 14 is in the sorted sub-list.

Insertion sort moves ahead and compares 33 with 27, and finds that 33 is not in the correct position. It swaps 33 with 27. It also checks with all the elements of the sorted sub-list. Here we see that the sorted sub-list has only one element, 14, and 27 is greater than 14. Hence, the sorted sub-list remains sorted after swapping.

By now we have 14 and 27 in the sorted sub-list. Next, it compares 33 with 10. These values are not in a sorted order, so we swap them. However, swapping makes 27 and 10 unsorted; hence, we swap them too. Again we find 14 and 10 in an unsorted order, so we swap them again. By the end of the third iteration, we have a sorted sub-list of 4 items.

This process goes on until all the unsorted values are covered by the sorted sub-list.

Implementation

/* sorts list[0..n-1] in ascending order */
void insertion_sort(int list[], int n)
{
    int key;
    for (int i = 1; i < n; i++) {
        key = list[i];
        int j = i - 1;
        /* work backwards through the array finding where key should go */
        while (j >= 0 && list[j] > key) {
            list[j + 1] = list[j];   /* shift greater elements up */
            j--;
        }
        list[j + 1] = key;           /* insert the value at the hole */
    }
}
Analysis

How many comparisons? 1+2+3+...+(n-1) = O(n2)
How many swaps? 1+2+3+...+(n-1) = O(n2)
How much space? In-place algorithm.

3.2.2. Selection Sort

Selection sort is the most conceptually simple of all the sorting algorithms. It works by selecting the smallest (or largest, if you want to sort from big to small) element of the array and placing it at the head of the array. Then the process is repeated for the remainder of the array; the next smallest element is selected and put into the next slot, and so on down the line.

Because a selection sort looks at progressively smaller parts of the array each time (as it knows to ignore the front of the array because it is already in order), a selection sort is slightly faster than bubble sort, and can be better than a modified bubble sort.

Selection sort is a simple sorting algorithm. It is an in-place comparison-based algorithm in which the list is divided into two parts, the sorted part at the left end and the unsorted part at the right end. Initially, the sorted part is empty and the unsorted part is the entire list. The smallest element is selected from the unsorted array and swapped with the leftmost element, and that element becomes a part of the sorted array. This process continues, moving the unsorted array boundary one element to the right.

Basic Idea:

    Loop through the array from i = 0 to n-1.
    Select the smallest element in the array from i to n-1.
    Swap this value with the value at position i.

Now, let us learn some programming aspects of selection sort.

Algorithm

Step 1 - Set MIN to location 0.
Step 2 - Search for the minimum element in the list.
Step 3 - Swap it with the value at location MIN.
Step 4 - Increment MIN to point to the next element.
Step 5 - Repeat until the list is sorted.

Pseudo Code

procedure selection sort
    list : array of items
    n : size of list

    for i = 1 to n - 1
        /* set current element as minimum */
        min = i

        /* check the element to be minimum */
        for j = i+1 to n
            if list[j] < list[min] then
                min = j;
            end if
        end for

        /* swap the minimum element with the current element */
        if min != i then
            swap list[min] and list[i]
        end if
    end for
end procedure
This algorithm is not suitable for large data sets, as its average and worst case complexities are O(n2), where n is the number of items.

How Selection Sort Works?

Consider the following depicted array as an example.

For the first position in the sorted list, the whole list is scanned sequentially. At the first position, where 14 is stored presently, we search the whole list and find that 10 is the lowest value. So we replace 14 with 10. After one iteration, 10, which happens to be the minimum value in the list, appears in the first position of the sorted list.

For the second position, where 33 is residing, we start scanning the rest of the list in a linear manner. We find that 14 is the second lowest value in the list and that it should appear at the second place. We swap these values.

After two iterations, the two least values are positioned at the beginning in a sorted manner. The same process is applied to the rest of the items in the array. Following is a pictorial depiction of the entire sorting process.

Here is the code for a simple selection sort:

for (int x = 0; x < n; x++)
{
    int index_of_min = x;
    for (int y = x; y < n; y++)
    {
        if (array[y] < array[index_of_min])
        {
            index_of_min = y;
        }
    }
    int temp = array[x];
    array[x] = array[index_of_min];
    array[index_of_min] = temp;
}

The first loop goes from 0 to n, and the second loop goes from x to n, so it goes from 0 to n, then from 1 to n, then from 2 to n, and so on. The multiplication works out so that the efficiency is n*(n/2), though the order is still O(n2).

Implementation:

/* sorts list[0..n-1] in ascending order */
void selection_sort(int list[], int n)
{
    int i, j, smallest, loc;
    for (i = 0; i < n; i++) {
        smallest = list[i];
        loc = i;
        for (j = i+1; j < n; j++) {
            if (list[j] < smallest) {
                smallest = list[j];
                loc = j;
            }
        }
        list[loc] = list[i];
        list[i] = smallest;
    }
}

Analysis

How many comparisons? (n-1)+(n-2)+...+1 = O(n2)
How many swaps? n = O(n)
How much space? In-place algorithm.

3.2.3. Bubble Sort

Bubble sort is a simple sorting algorithm.
This sorting algorithm is a comparison-based algorithm in which each pair of adjacent elements is compared, and the elements are swapped if they are not in order. This algorithm is not suitable for large data sets, as its average and worst case complexity are O(n2), where n is the number of items. Bubble sort is the simplest algorithm to implement and the slowest algorithm on very large inputs.

The bubble sort works by iterating down an array to be sorted from the first element to the last, comparing each pair of elements and switching their positions if necessary. This process is repeated as many times as necessary, until the array is sorted. Since the worst case scenario is that the array is in reverse order, and that the first element in the sorted array is the last element in the starting array, the most exchanges that will be necessary is equal to the length of the array. Here is a simple example:

Given an array 23154, a bubble sort would lead to the following sequence of partially sorted arrays: 21354, 21345, 12345. First the 1 and 3 would be compared and switched, then the 4 and 5. On the next pass, the 1 and 2 would switch, and the array would be in order.

Algorithm

We assume list is an array of n elements. We further assume that the swap function swaps the values of the given array elements.

begin BubbleSort(list)
    for all elements of list
        if list[i] > list[i+1]
            swap(list[i], list[i+1])
        end if
    end for
    return list
end BubbleSort

Pseudocode

We observe in the algorithm that bubble sort compares each pair of array elements unless the whole array is completely sorted in ascending order. This may cause a few complexity issues: what if the array needs no more swapping, as all the elements are already ascending?

To ease out this issue, we use one flag variable, swapped, which will help us see whether any swap has happened or not. If no swap has occurred, i.e. the array requires no more processing to be sorted, it will come out of the loop. The pseudocode of the BubbleSort algorithm can be written as follows:

procedure bubbleSort( list : array of items )
    loop = list.count;
    for i = 0 to loop-1 do:
        swapped = false
        for j = 0 to loop-2 do:
            /* compare the adjacent elements */
            if list[j] > list[j+1] then
                /* swap them */
                swap( list[j], list[j+1] )
                swapped = true
            end if
        end for
        /* if no number was swapped, that means the
           array is sorted now; break the loop */
        if (not swapped) then
            break
        end if
    end for
    return list
end procedure

The basic code for bubble sort looks like this, for sorting an integer array:

for (int x = 0; x < n; x++)
{
    for (int y = 0; y < n - 1; y++)
    {
        if (array[y] > array[y+1])
        {
            int temp = array[y+1];
            array[y+1] = array[y];
            array[y] = temp;
        }
    }
}
Notice that this will always loop n times from 0 to n, so the order of this algorithm is O(n2). This is both the best and worst case scenario, because the code contains no way of determining whether the array is already in order.

A better version of bubble sort, known as modified bubble sort, includes a flag that is set if an exchange is made after an entire pass over the array. If no exchange is made, then it should be clear that the array is already in order, because no two elements needed to be switched. In that case, the sort should end. The new best case order for this algorithm is O(n), as if the array is already sorted, then no exchanges are made. You can figure out the code yourself! It only requires a few changes to the original bubble sort.

Basic Idea:

Loop through the array from i = 0 to n and swap adjacent elements if they are out of order.

Implementation:

One more issue we did not address in our original algorithm and its improvised pseudocode is that, after every iteration, the highest value settles down at the end of the array. Hence, the next iteration need not include already sorted elements. For this purpose, in our implementation, we restrict the inner loop to avoid already sorted values.

/* sorts list[0..n-1] in ascending order */
void bubble_sort(int list[], int n)
{
    int i, j, temp;
    for (i = n-1; i > 0; i--) {
        for (j = 0; j < i; j++) {
            if (list[j] > list[j+1]) {
                /* swap adjacent elements */
                temp = list[j];
                list[j] = list[j+1];
                list[j+1] = temp;
            }
        }
    }
}

Analysis of Bubble Sort

How many comparisons? (n-1)+(n-2)+...+1 = O(n2)
How many swaps? (n-1)+(n-2)+...+1 = O(n2)
Space? In-place algorithm.

General Comments

Each of these algorithms requires n-1 passes: each pass places one item in its correct place. The ith pass makes either i or n-i comparisons and moves. So the total work is

    1 + 2 + ... + (n-1) = n(n-1)/2

or O(n2). Thus these algorithms are only suitable for small problems where their simple code makes them faster than the more complex code of the O(n log n) algorithms. As a rule of thumb, expect to find an O(n log n) algorithm faster for n > 10 - but the exact value depends very much on individual machines!

Empirically it is known that insertion sort is over twice as fast as the bubble sort and is just as easy to implement as the selection sort. If you really want to use the selection sort for some reason, try to avoid sorting lists of more than 1000 items with it, or repetitively sorting lists of more than a couple hundred items.

How Bubble Sort Works?

We take an unsorted array for our example. Bubble sort takes O(n2) time, so we're keeping it short and precise.

Bubble sort starts with the very first two elements, comparing them to check which one is greater.
In this case, value 33 is greater than 14, so they are already in sorted locations. Next, we compare 33 with 27. We find that 27 is smaller than 33, and these two values must be swapped. The new array should look like this -

Next we compare 33 and 35. We find that both are already in sorted positions.

Then we move to the next two values, 35 and 10. We know that 10 is smaller than 35; hence they are not sorted. We swap them. After one iteration, the array should look like this -

To be precise, we are now showing how the array should look after each iteration. After the second iteration, it should look like this -

Notice that after each iteration, at least one value moves to the end. And when no swap is required, bubble sort learns that the array is completely sorted.

Merge Sort Algorithm

Merge sort is a sorting technique based on the divide and conquer technique. With a worst-case time complexity of O(n log n), it is one of the most respected algorithms. Merge sort first divides the array into equal halves and then combines them in a sorted manner.

How Merge Sort Works?

To understand merge sort, we take an unsorted array as the following -

We know that merge sort first divides the whole array iteratively into equal halves until the atomic values are reached. We see here that an array of 8 items is divided into two arrays of size 4. This does not change the sequence of appearance of the items in the original. Now we divide these two arrays into halves. We further divide these arrays until we reach atomic values which can no longer be divided.

Now, we combine them in exactly the same manner as they were broken down. Please note the color codes given to these lists. We first compare the element of each list and then combine them into another list in a sorted manner. We see that 14 and 33 are in sorted positions. We compare 27 and 10, and in the target list of 2 values we put 10 first, followed by 27. We change the order of 19 and 35, whereas 42 and 44 are placed sequentially.
In the next iteration of the combining phase, we compare lists of two data values, and merge them into a list of four data values, placing all in a sorted order:

10 14 27 33 | 19 35 42 44

After the final merging, the list should look like this:

10 14 19 27 33 35 42 44

Now we should learn some programming aspects of merge sorting.

Algorithm
Merge sort keeps on dividing the list into equal halves until it can no more be divided. By definition, if there is only one element in the list, it is sorted. Then, merge sort combines the smaller sorted lists, keeping the new list sorted too.

Step 1 If there is only one element in the list, it is already sorted; return.
Step 2 Divide the list recursively into two halves until it can no more be divided.
Step 3 Merge the smaller lists into a new list in sorted order.

Pseudocode
We shall now see the pseudocode for the merge sort functions. As our algorithm points out, there are two main functions: divide and merge. Merge sort works with recursion, and we shall see our implementation in the same way.

procedure mergesort( var a as array )
   if ( n == 1 ) return a

   var l1 as array = a[0] ... a[n/2]
   var l2 as array = a[n/2+1] ... a[n]

   l1 = mergesort( l1 )
   l2 = mergesort( l2 )

   return merge( l1, l2 )
end procedure

procedure merge( var a as array, var b as array )
   var c as array

   while ( a and b have elements )
      if ( a[0] > b[0] )
         add b[0] to the end of c
         remove b[0] from b
      else
         add a[0] to the end of c
         remove a[0] from a
      end if
   end while

   while ( a has elements )
      add a[0] to the end of c
      remove a[0] from a
   end while

   while ( b has elements )
      add b[0] to the end of c
      remove b[0] from b
   end while

   return c
end procedure

Quick Sort
Quick sort is a highly efficient sorting algorithm and is based on partitioning of
an array of data into smaller arrays. A large array is partitioned into two arrays, one of which holds values smaller than the specified value, say pivot, based on which the partition is made, and another array holds values greater than
the pivot value.

Quick sort partitions an array and then calls itself recursively twice to sort the two resulting subarrays. This algorithm is quite efficient for large-sized data sets, as its average and worst-case complexity are O(n log n), where n is the number of items.

Partition in Quick Sort
The pivot value divides the list into two parts. And recursively, we find the pivot for each sub-list until all lists contain only one element.

Quick Sort Pivot Algorithm
Based on our understanding of partitioning in quick sort, we will now try to write an algorithm for it, which is as follows.

Step 1 Choose the highest index value as pivot.
Step 2 Take two variables to point left and right of the list, excluding the pivot.
Step 3 Left points to the low index.
Step 4 Right points to the high index.
Step 5 While the value at left is less than the pivot, move right.
Step 6 While the value at right is greater than the pivot, move left.
Step 7 If both step 5 and step 6 do not match, swap left and right.
Step 8 If left >= right, the point where they met is the new pivot.

Quick Sort Pivot Pseudocode
The pseudocode for the above algorithm can be derived as -

function partitionFunc(left, right, pivot)
   leftPointer = left - 1
   rightPointer = right

   while True do
      while A[++leftPointer] < pivot do
         //do-nothing
      end while

      while rightPointer > 0 && A[--rightPointer] > pivot do
         //do-nothing
      end while

      if leftPointer >= rightPointer
         break
      else
         swap leftPointer, rightPointer
      end if
   end while

   swap leftPointer, right
   return leftPointer
end function

Quick Sort Algorithm
Using the pivot algorithm recursively, we end up with smaller and smaller possible partitions. Each partition is then processed for quick sort.

We define the recursive algorithm for quicksort as follows -

Step 1 Make the right-most index value the pivot.
Step 2 Partition the array using the pivot value.
Step 3 Quicksort the left partition recursively.
Step 4 Quicksort the right partition recursively.
Quick Sort Pseudocode
To get more into it, let us see the pseudocode for the quick sort algorithm -

procedure quickSort(left, right)
   if right - left <= 0
      return
   else
      pivot = A[right]
      partition = partitionFunc(left, right, pivot)
      quickSort(left, partition - 1)
      quickSort(partition + 1, right)
   end if
end procedure
END