
Complexity of an Algorithm

C#ODE Studio || codstudio.wordpress.com

Algorithms
A finite set of steps that specify a sequence of operations to be carried out in order to solve a specific problem is called an algorithm.

Properties of algorithms:
1. Finiteness - the algorithm must terminate in a finite number of steps.
2. Absence of ambiguity - each step must be clear and unambiguous.
3. Feasibility - each step must be simple enough that it can be easily translated into the required language.
4. Input - zero or more values are supplied externally to the algorithm.
5. Output - at least one value is produced.


Conventions Used for Algorithms


Identifying number - each algorithm is assigned an identification number.
Comments - each step may contain a comment in brackets which indicates the main purpose of the step.
Assignment statement - assignment statements use the colon-equals notation, e.g. Set max := DATA[1].
Input/Output - data may be input from the user by means of a read statement:
    Read: variable names
Similarly, messages placed in quotation marks, and data in variables, may be output by means of a write statement:
    Write: messages or variable names
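As an illustrative sketch (not part of the original notation), here is roughly how the conventions above might be rendered in C++; the array DATA, its size N and the variable max come from the Set max := DATA[1] example, and the 100-element upper bound is our own assumption:

#include <iostream>

int main() {
    int N;
    std::cout << "How many values? ";
    std::cin >> N;                       // Read: N

    int DATA[100];                       // fixed-size array; assumes N <= 100
    for (int k = 0; k < N; ++k)
        std::cin >> DATA[k];             // Read: DATA[k]

    int max = DATA[0];                   // Set max := DATA[1] (C++ arrays start at index 0)
    for (int k = 1; k < N; ++k)
        if (DATA[k] > max)
            max = DATA[k];               // Set max := DATA[k]

    std::cout << "Largest value: " << max << '\n';   // Write: 'Largest value:', max
    return 0;
}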

Selection logic or conditional flow

Single alternative:
    If condition, then:
    [End of If structure]

Double alternative:
    If condition, then:
    Else:
    [End of If structure]

Multiple alternatives:
    If condition1, then:
    Else if condition2, then:
    Else if condition3, then:
    Else:
    [End of If structure]
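For instance (an illustrative sketch of ours, not from the original slides), the multiple-alternative structure maps directly onto C++ if / else if / else:

#include <iostream>

// Classify an integer as negative, zero, or positive.
int main() {
    int x;
    std::cin >> x;                 // Read: x
    if (x < 0)                     // If condition1, then:
        std::cout << "negative\n";
    else if (x == 0)               // Else if condition2, then:
        std::cout << "zero\n";
    else                           // Else:
        std::cout << "positive\n";
    return 0;                      // [End of If structure]
}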

Iteration logic

Repeat-For loop:
    Repeat for k = r to s by t:
    [End of loop]

Repeat-While loop:
    Repeat while condition:
    [End of loop]
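As another illustrative sketch (ours), the two loop forms correspond to the C++ for and while statements; the bounds r, s, t and the loop condition below are placeholder examples:

#include <iostream>

int main() {
    int r = 1, s = 10, t = 2;         // example bounds and step

    // Repeat for k = r to s by t:
    for (int k = r; k <= s; k += t)
        std::cout << k << ' ';        // loop body
    std::cout << '\n';                // [End of loop]

    // Repeat while condition:
    int n = 10;
    while (n > 0)                     // condition
        n = n / 2;                    // loop body
                                      // [End of loop]
    return 0;
}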


Algorithm complexity

An algorithm is a sequence of steps to solve a problem. There can be more than one algorithm to solve a particular problem, and some of these solutions may be more efficient than others. The efficiency of an algorithm is determined in terms of the utilization of two resources: time of execution and memory. This efficiency analysis of an algorithm is called complexity analysis, and it is a very important and widely studied subject in computer science.

Performance requirements are usually more critical than memory requirements, so in general algorithms are analyzed on the basis of performance, i.e. running-time efficiency. Specifically, complexity analysis is used to determine how the resource requirements of an algorithm grow in relation to the size of the input. The input can be any type of data; the analyst has to decide which property of the input should be measured, and the best choice is the property that most significantly affects the efficiency factor we are trying to analyze. Most commonly, we measure one of the following:
- the number of additions, multiplications, etc. (for numerical algorithms)
- the number of comparisons (for searching and sorting)
- the number of data moves, i.e. assignment statements
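As a hedged illustration of measuring comparisons (the snippet and its counter are our own example, not from the slides), a simple bubble sort can be instrumented to count the comparisons it performs; for an array of n elements it makes about n(n-1)/2 of them:

#include <cstddef>
#include <iostream>

// Bubble sort that counts the comparisons it performs.
long bubbleSortCountingComparisons(int a[], std::size_t n) {
    long comparisons = 0;
    for (std::size_t i = 0; i + 1 < n; ++i) {
        for (std::size_t j = 0; j + 1 < n - i; ++j) {
            ++comparisons;                 // one comparison per inner-loop pass
            if (a[j] > a[j + 1]) {
                int tmp = a[j];            // data moves (assignment statements)
                a[j] = a[j + 1];
                a[j + 1] = tmp;
            }
        }
    }
    return comparisons;
}

int main() {
    int data[] = {5, 1, 4, 2, 8};
    std::cout << bubbleSortCountingComparisons(data, 5) << " comparisons\n";  // prints 10 = 5*4/2
    return 0;
}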

Based on the type of resource variation studied, there are two types of complexity: time complexity and space complexity.

Space complexity - the space complexity of an algorithm is the amount of memory it needs to run to completion. The space needed by a program consists of the following components:
Instruction space - space needed to store the executable version of the program. This space is fixed.
Data space - space needed to store all constants and variable values. It has further components: the space required by constants and simple variables, which is fixed; the space needed by fixed-size structured variables such as arrays and structures; and dynamically allocated space, which usually varies.

Environment stack space - space needed to store the information required to resume suspended functions. Each time a function is invoked, the following information is saved on the environment stack:
1. the return address, i.e. the point from which execution has to resume after completion of the called function
2. the values of all local variables and the values of the formal parameters of the function being invoked

Time complexity - the time complexity of an algorithm is the amount of time it needs to run to completion. To measure time complexity, key operations are identified in the program and counted until the program completes its execution. Execution of any one of the following operations takes time 1:
1. assignment operations
2. single I/O operations
3. single Boolean operations and numeric comparisons
4. single arithmetic operations
5. function returns
6. array indexing operations and pointer dereferences
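To make the environment-stack component concrete (an illustrative sketch of ours, not from the slides), a recursive function keeps one frame per pending call, so computing factorial(n) recursively needs stack space proportional to n, while each call performs only a constant number of the unit-cost operations listed above:

// Recursive factorial: each pending invocation keeps a frame on the
// environment stack holding its return address and its parameter n,
// so the recursion uses O(n) stack space.
long factorial(int n) {
    if (n <= 1)                     // one comparison (unit cost)
        return 1;                   // one return (unit cost)
    return n * factorial(n - 1);    // one subtraction, one multiplication,
                                    // one call and one return per level
}

For example, factorial(5) has five nested invocations pending at its deepest point, so five frames are live on the environment stack at once.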


The running time of a selection statement (if, switch) is the time for the condition evaluation plus the maximum of the running times of the individual clauses in the selection. The running time of a loop is the number of iterations multiplied by the sum of the time for the loop body and the time for the loop check and update operations, plus the time for the loop setup; always assume that the loop executes the maximum number of iterations possible. The running time of a function call is 1 for setup, plus the time for any parameter calculations, plus the time required for the execution of the function body.
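Applying these rules to a concrete loop (the cost accounting in the comments is our own, and the exact totals depend on which operations one chooses to count), summing an n-element array costs a constant number of unit operations per iteration, so the running time grows linearly with n:

#include <cstddef>

// Sum the n elements of a[]; cost accounting per the rules above.
int sumArray(const int a[], std::size_t n) {
    int total = 0;                         // 1 assignment (setup)
    for (std::size_t i = 0; i < n; ++i) {  // 1 assignment setup, n+1 checks, n updates
        total = total + a[i];              // per iteration: 1 index, 1 addition, 1 assignment
    }
    return total;                          // 1 return
}
// Total is roughly 5n + 4 unit operations, i.e. the running time grows as O(n).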


Expressing Space and time complexity: Big O notation


It is very difficult to analyze in practice the exact variation of the time requirements of an algorithm with the size of its input. A better approach is to express the space/time complexity as a function f(n), where n is the input size for a given instance of the problem being solved: efficiency(algorithm A) = a function f of some property of A's input. We have to decide which property of the input we are going to measure; the best choice is the property that most significantly affects the efficiency factor we are trying to analyze. For example, the time taken to sort a list is invariably a function of the length of the list, so the speed of the algorithm varies with the number of items n. The most important notation used for expressing this function f(n) is Big O notation.


Big O notation is a characterization scheme that allows us to measure properties of an algorithm, such as its performance and/or memory requirements, in a general fashion. Big O notation uses the dominant term of the function and omits lower-order terms and the coefficient of the dominant term. Apart from n (the size of the input), the efficiency measure depends on three cases which decide the number of operations to be performed:
Best case - performance under ideal conditions.
Worst case - performance under the most unfavorable conditions.
Average case - performance under the most probable conditions.
Big O notation analyzes each algorithm's performance in the worst case, and it is the rate of increase of f(n) that is examined as a measure of efficiency.
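To illustrate the three cases (our own sketch, not from the slides), linear search finds a key with 1 comparison in the best case (key at the front), n comparisons in the worst case (key at the end or absent), and about n/2 comparisons on average when the key is present and equally likely to be anywhere:

#include <cstddef>

// Return the index of key in a[0..n-1], or -1 if it is not present.
// Best case: 1 comparison; worst case: n comparisons; average case: about n/2.
int linearSearch(const int a[], std::size_t n, int key) {
    for (std::size_t i = 0; i < n; ++i)
        if (a[i] == key)
            return static_cast<int>(i);
    return -1;
}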


Rate of growth: Big O notation


Suppose F is an algorithm and suppose n is the size of the input data. Clearly, the complexity f(n) of F increases as n increases. The rate of increase of f(n) is examined by comparing f(n) with some standard functions such as log₂n, n, n·log₂n, n², n³ and 2ⁿ. One way to compare f(n) with these standard functions is to use the functional O notation, defined as follows.

Definition: Suppose f(n) and g(n) are functions defined on the positive integers with the property that f(n) is bounded by some multiple of g(n) for almost all n; that is, suppose there exist a positive integer n₀ and a positive number M such that for all n > n₀ we have

    |f(n)| ≤ M·|g(n)|.

Then we write f(n) = O(g(n)), read as "f(n) (the time taken, or number of operations) is of the order of g(n)".

If g(n) = n, then f(n) is linearly proportional to n; for g(n) = n², f(n) is proportional to n². Thus if an array is being sorted using an algorithm with g(n) = n², it will take 100 times as long to sort an array that is 10 times the size of another array.
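A small worked example of the definition (ours, not from the slides): for f(n) = n² + 2n we may take g(n) = n², M = 3 and n₀ = 1, because

    |f(n)| = n² + 2n ≤ n² + 2n² = 3n² = 3·|g(n)|   for all n ≥ 1,

so f(n) = O(n²).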

Based on Big O notation, algorithms can be categorized as:
Constant-time algorithms: O(1)
Logarithmic-time algorithms: O(log n)
Linear-time algorithms: O(n)
Polynomial (e.g. quadratic) time algorithms: O(nᵏ)
Exponential-time algorithms: O(kⁿ)
It can be seen that the logarithmic function log n grows most slowly, whereas kⁿ grows most rapidly, and the polynomial function nᵏ grows in between the two extremes. Big O notation concerns only the dominant term; low-order terms and constant coefficients are ignored. Thus if f(n) = n² + 2n, the complexity is written as O(n²), the 2n term being dropped. The complexities of some well-known searching and sorting algorithms are:
Linear search: O(n)
Binary search: O(log n)
Bubble sort: O(n²)
Merge sort: O(n log n)
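As a sketch of why binary search is O(log n) (our own example, assuming the array is already sorted), each probe halves the portion of the array that can still contain the key, so at most about log₂n + 1 probes are needed:

#include <cstddef>

// Binary search in a sorted array a[0..n-1]; returns the index of key, or -1.
// Each iteration halves the search interval, so at most ~log2(n) + 1 iterations run.
int binarySearch(const int a[], std::size_t n, int key) {
    std::size_t lo = 0, hi = n;              // search interval [lo, hi)
    while (lo < hi) {
        std::size_t mid = lo + (hi - lo) / 2;
        if (a[mid] == key)
            return static_cast<int>(mid);
        else if (a[mid] < key)
            lo = mid + 1;                    // key can only be in the right half
        else
            hi = mid;                        // key can only be in the left half
    }
    return -1;                               // key not present
}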

RUNNING TIME
Let f(n) be the function that gives the time an algorithm takes for a problem of size n. The exact formula for f(n) is often difficult to obtain, and we usually only need an approximation: what kind of function is f? Is it constant, linear, quadratic, ...? Our main concern is large n; other factors may dominate for small n. For example, 100n is larger than n² at n = 10, but at n = 1000 the quadratic term is already ten times larger, and the gap keeps growing with n.


[Figure: Rate of growth of f(n) with n - time taken (10² to 10⁸) plotted against input size n (1 to 10,000) for the standard functions log n, n, n log n, n², n³ and 2ⁿ.]


Other Asymptotic notations for Complexity Analysis


The Big O notation defines an upper-bound function g(n) for f(n), which represents the time/space complexity of the algorithm for an input characteristic n. The other such notations are:
Omega notation (Ω) - used when the function g(n) defines a lower bound for the function f(n). It is defined by |f(n)| ≥ M·|g(n)| for all n > n₀.
Theta notation (Θ) - used when the function f(n) is bounded both from above and below by the function g(n). It is defined by c₁·|g(n)| ≤ |f(n)| ≤ c₂·|g(n)|, where c₁ and c₂ are two positive constants.
Little-oh notation (o) - according to this notation, f(n) = o(g(n)) iff f(n) = O(g(n)) and f(n) ≠ Ω(g(n)).
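A small worked example (ours, not from the slides): f(n) = n² + 2n is Θ(n²), because

    n² ≤ n² + 2n ≤ 3n²   for all n ≥ 1,

so the Theta definition holds with g(n) = n², c₁ = 1 and c₂ = 3.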


Time-space tradeoff - the best algorithm to solve a given problem is one that requires less space in memory and takes less time to complete its execution. In practice, however, it is not always possible to achieve both objectives, and we may have to sacrifice one at the cost of the other; this is known as the time-space tradeoff among algorithms. Thus if space is our constraint, we choose an algorithm that requires less space at the cost of more execution time; on the other hand, if time is our constraint, as in real-time systems, we choose an algorithm that takes less time to complete its execution at the cost of more space.



