
Subject: DAA

Oral Questions

1. What is analysis of algorithms?


Algorithm analysis is an important part of a broader computational
complexity theory, which provides theoretical estimates for the resources
needed by any algorithm which solves a given computational problem.
These estimates provide an insight into reasonable directions of search for
efficient algorithms.
2. How is analysis done?

A complete analysis of the running time of an algorithm involves the following steps:
1. Implement the algorithm completely.
2. Determine the time required for each basic operation.
3. Identify unknown quantities that can be used to describe the frequency of execution of the basic operations.
4. Develop a realistic model for the input to the program.
5. Analyze the unknown quantities, assuming the modelled input.
6. Calculate the total running time by multiplying the time by the frequency for each operation, then adding all the products (a sketch of these last steps follows below).
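As an illustration of steps 2 and 6, here is a minimal sketch. The per-operation costs and function name are assumptions for illustration, not measurements: it estimates the running time of a simple summation loop by multiplying each basic operation's cost by its execution frequency and adding the products.

```python
# Hypothetical per-operation costs in seconds; real values would be measured.
def estimated_running_time(n, cost_add=1e-9, cost_cmp=1e-9):
    """Estimate the time of a loop like: for i in range(n): total += a[i]"""
    frequency = {
        "loop comparison": n + 1,   # the loop test runs n+1 times
        "accumulate add":  n,       # the addition runs n times
    }
    cost = {
        "loop comparison": cost_cmp,
        "accumulate add":  cost_add,
    }
    # total time = sum over operations of (time per operation) * (frequency)
    return sum(cost[op] * frequency[op] for op in frequency)

print(estimated_running_time(10**6))  # ~2e-3 seconds under these assumptions
```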
3. What is time complexity?


In computer science, the time complexity of an algorithm quantifies the amount of time taken by an algorithm to run as a function of the length of the string representing the input. The time complexity of an algorithm is commonly expressed using big O notation, which excludes coefficients and lower-order terms.
4. What is space complexity?
Space complexity is a measure of the amount of working storage an
algorithm needs. That means how much memory, in the worst case, is
needed at any point in the algorithm. As with time complexity, we're
mostly concerned with how the space needs grow, in big-Oh terms, as the
size N of the input problem grows.
5. What are asymptotic notations?
The main idea of asymptotic analysis is to have a measure of the efficiency of algorithms that doesn't depend on machine-specific constants and doesn't require algorithms to be implemented and the time taken by programs to be compared. Asymptotic notations are mathematical tools to represent the time complexity of algorithms for asymptotic analysis. The following three asymptotic notations are most commonly used to represent the time complexity of algorithms.
1) Θ (Theta) Notation: The theta notation bounds a function from above and below, so it defines exact asymptotic behavior.
A simple way to get the Theta notation of an expression is to drop low-order terms and ignore leading constants. For example, consider the following expression:
3n^3 + 6n^2 + 6000 = Θ(n^3)
Dropping lower-order terms is always fine because there will always be an n0 after which Θ(n^3) beats Θ(n^2) irrespective of the constants involved.
For a given function g(n), Θ(g(n)) denotes the following set of functions:
Θ(g(n)) = {f(n): there exist positive constants c1, c2 and n0 such that
0 <= c1*g(n) <= f(n) <= c2*g(n) for all n >= n0}
The above definition means that if f(n) is theta of g(n), then the value of f(n) is always between c1*g(n) and c2*g(n) for large values of n (n >= n0). The definition of theta also requires that f(n) must be non-negative for values of n greater than n0.
2) Big O Notation: The Big O notation defines an upper bound of an algorithm; it bounds a function only from above. For example, consider the
case of Insertion Sort. It takes linear time in best case and quadratic time in
worst case. We can safely say that the time complexity of Insertion sort is
O(n^2). Note that O(n^2) also covers linear time.
If we use Θ notation to represent the time complexity of Insertion Sort, we have to use two statements for best and worst cases:
1. The worst case time complexity of Insertion Sort is Θ(n^2).
2. The best case time complexity of Insertion Sort is Θ(n).
The Big O notation is useful when we only have an upper bound on the time complexity of an algorithm. Many times we can easily find an upper bound simply by looking at the algorithm.
O(g(n)) = { f(n): there exist positive constants c and n0 such that
0 <= f(n) <= cg(n) for all n >= n0}
3) Ω (Omega) Notation: Just as Big O notation provides an asymptotic upper bound on a function, Ω notation provides an asymptotic lower bound.
Ω notation can be useful when we have a lower bound on the time complexity of an algorithm. Since the best case performance of an algorithm is generally not useful, the Omega notation is the least used of the three.
For a given function g(n), Ω(g(n)) denotes the following set of functions:
Ω(g(n)) = {f(n): there exist positive constants c and n0 such that
0 <= c*g(n) <= f(n) for all n >= n0}


Let us consider the same Insertion Sort example here. The time complexity of Insertion Sort can be written as Ω(n), but this is not very useful information about Insertion Sort, as we are generally interested in the worst case and sometimes in the average case.
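As a concrete numeric illustration of the Θ definition (a spot check, not a proof), the following sketch verifies the bound claimed above for f(n) = 3n^3 + 6n^2 + 6000, using the assumed witnesses c1 = 3, c2 = 4 and n0 = 21:

```python
# Numerically check c1*g(n) <= f(n) <= c2*g(n) for g(n) = n^3.
def f(n):
    return 3 * n**3 + 6 * n**2 + 6000

c1, c2, n0 = 3, 4, 21          # assumed witnesses for the Theta definition
for n in range(n0, 1000):
    assert c1 * n**3 <= f(n) <= c2 * n**3, n   # both bounds hold for n >= n0
print("c1*g(n) <= f(n) <= c2*g(n) holds for all tested n >= 21")
```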
6. What is amortized analysis?


In computer science, amortized analysis is a method for analyzing a given algorithm's time complexity, or how much of a resource, especially time or memory in the context of computer programs, it takes to execute. Rather than bounding every operation by its individual worst case, it averages the cost of operations over a whole sequence, which can give a much tighter bound.
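A standard illustration (a minimal sketch, not tied to any particular source) is appending to a dynamic array that doubles its capacity when full: a single append that triggers a resize costs O(n) copies, but n appends cost less than 3n in total, so each append is amortized O(1):

```python
# Count the total element writes/copies for n appends to a doubling array.
def total_append_cost(n):
    capacity, size, cost = 1, 0, 0
    for _ in range(n):
        if size == capacity:     # full: copy everything into a 2x buffer
            cost += size         # `size` element copies
            capacity *= 2
        cost += 1                # the write of the new element itself
        size += 1
    return cost

n = 10**6
print(total_append_cost(n) / n)  # ~3.0, i.e. amortized O(1) per append
```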
7. When is the divide and conquer strategy used?

Solving difficult problems


Divide and conquer is a powerful tool for solving conceptually difficult problems: all it requires is a way of breaking the problem into sub-problems, of solving the trivial cases, and of combining sub-problem solutions into a solution to the original problem. Similarly, decrease and conquer only requires reducing the problem to a single smaller problem.

Algorithm efficiency
The divide-and-conquer paradigm often helps in the discovery of efficient
algorithms.

Parallelism
Divide and conquer algorithms are naturally adapted for execution in multi-processor machines, especially shared-memory systems where the communication of data between processors does not need to be planned in advance, because distinct sub-problems can be executed on different processors.

Memory access
Divide-and-conquer algorithms naturally tend to make efficient use of
memory caches. The reason is that once a sub-problem is small enough, it
and all its sub-problems can, in principle, be solved within the cache,
without accessing the slower main memory. An algorithm designed to
exploit the cache in this way is called cache-oblivious, because it does not
contain the cache size(s) as an explicit parameter. Moreover, D&C algorithms can be designed for important problems (e.g., sorting, FFTs, and matrix multiplication) to be optimal cache-oblivious algorithms; they use the cache in a provably optimal way, in an asymptotic sense, regardless of the cache size.

Roundoff control
In computations with rounded arithmetic, e.g. with floating point numbers,
a divide-and-conquer algorithm may yield more accurate results than a
superficially equivalent iterative method. For example, one can add N
numbers either by a simple loop that adds each datum to a single variable,
or by a D&C algorithm called pairwise summation that breaks the data set
into two halves, recursively computes the sum of each half, and then adds
the two sums. While the second method performs the same number of
additions as the first, and pays the overhead of the recursive calls, it is
usually more accurate.
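A minimal sketch of pairwise summation as described above (the function name and base-case cutoff are illustrative choices): split the data in half, recursively sum each half, then add the two partial sums.

```python
# Pairwise (cascade) summation: same n-1 additions as a loop, less roundoff.
def pairwise_sum(data, lo=0, hi=None):
    if hi is None:
        hi = len(data)
    if hi - lo <= 2:                      # trivial case: one or two numbers
        return sum(data[lo:hi])
    mid = (lo + hi) // 2                  # divide
    return pairwise_sum(data, lo, mid) + pairwise_sum(data, mid, hi)  # conquer + combine

values = [0.1] * 1000
print(pairwise_sum(values))               # typically closer to 100.0 than a naive loop
```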

8. What is the time complexity of binary search, quicksort, and merge sort?


Binary search: O(log n)
Quicksort: O(n log n) on average (O(n^2) in the worst case)
Merge sort: O(n log n)
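For reference, a minimal iterative binary search sketch; each comparison halves the remaining interval, which is where the O(log n) bound comes from:

```python
def binary_search(arr, target):
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid                # found: return its index
        elif arr[mid] < target:
            lo = mid + 1              # discard the left half
        else:
            hi = mid - 1              # discard the right half
    return -1                         # not present

print(binary_search([2, 5, 8, 12, 23, 38, 56, 72, 91], 23))  # -> 4
```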
9. What are the advantages and disadvantages of the divide and conquer strategy?
Advantages of the divide & conquer technique:
For solving conceptually difficult problems like the Tower of Hanoi, divide & conquer is a powerful tool.
It results in efficient algorithms.
Divide & conquer algorithms are adapted for execution in multi-processor machines.
It results in algorithms that use memory caches efficiently.
Limitations of the divide & conquer technique:
Recursion is slow.
A very simple problem may be more complicated to solve this way than with an iterative approach. Example: adding n numbers.
10. What is dynamic programming?
In mathematics, computer science, economics, and bioinformatics,
dynamic programming is a method for solving a complex problem by
breaking it down into a collection of simpler subproblems. It is applicable to
problems exhibiting the properties of overlapping subproblems and optimal
substructure.
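A minimal sketch of both properties (and of memoization, asked about in Q14), using Fibonacci numbers as the standard illustration: the naive recursion recomputes the same subproblems exponentially often, while caching each result makes the computation linear.

```python
# Memoized (top-down DP) Fibonacci: each fib(k) is computed once and cached.
def fib(n, cache={0: 0, 1: 1}):
    if n not in cache:
        cache[n] = fib(n - 1) + fib(n - 2)  # reuse stored subproblem results
    return cache[n]

print(fib(90))  # 2880067194370816120, computed in O(n) instead of O(2^n)
```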
11. What is the principle of optimality?
Principle of Optimality: An optimal policy has the property that whatever
the initial state and initial decision are, the remaining decisions must
constitute an optimal policy with regard to the state resulting from the first
decision.
12. What are the different characteristics of the dynamic programming technique?
1. The problem can be divided into stages with a decision required at each stage.
In the capital budgeting problem the stages were the allocations to a single plant. The decision was how much to spend. In the shortest path problem, they were defined by the structure of the graph. The decision was where to go next.
2. Each stage has a number of states associated with it.
The states for the capital budgeting problem corresponded to the amount spent at that point in time. The state for the shortest path problem was the node reached.
3. The decision at one stage transforms one state into a state in the next stage.
The decision of how much to spend gave a total amount spent for the next stage. The decision of where to go next defined where you arrived in the next stage.
4. Given the current state, the optimal decision for each of the remaining states does not depend on the previous states or decisions.
In the budgeting problem, it is not necessary to know how the money was spent in previous stages, only how much was spent. In the path problem, it was not necessary to know how you got to a node, only that you did.
5. There exists a recursive relationship that identifies the optimal decision for stage j, given that stage j+1 has already been solved.
6. The final stage must be solvable by itself.
13. What is the difference between the divide and conquer & dynamic programming techniques?
Divide & Conquer
1. The divide-and-conquer paradigm involves three steps at each level of
the recursion:
Divide the problem into a number of sub problems.
Conquer the sub problems by solving them recursively. If the sub problem
sizes are small enough, however, just solve the sub problems in a
straightforward manner.
Combine the solutions to the sub problems into the solution for the
original problem.
2. They call themselves recursively one or more times to deal with closely
related sub problems.
3. D&C does more work on the sub-problems and hence has more time
consumption.
4. In D&C the sub problems are independent of each other.
5. Example: Merge Sort, Binary Search

Dynamic Programming
1. The development of a dynamic-programming algorithm can be broken into a sequence of four steps:
a. Characterize the structure of an optimal solution.
b. Recursively define the value of an optimal solution.
c. Compute the value of an optimal solution in a bottom-up fashion.
d. Construct an optimal solution from computed information.
2. Dynamic programming is typically non-recursive: subproblems are solved bottom-up (although a top-down memoized formulation also exists).
3. DP solves each subproblem only once and then stores the result in a table.
4. In DP the sub-problems are not independent.
5. Example: Matrix chain multiplication
14. What is memoization?
Memoization is an optimization technique that stores the results of expensive function calls and returns the cached result when the same inputs occur again (the memoized Fibonacci sketch under Q10 is an example).
15. What is the greedy method?
A greedy algorithm is an algorithm that follows the problem solving
heuristic of making the locally optimal choice at each stage with the hope
of finding a global optimum.
16. What is greedy choice?
The greedy choice property states that a globally optimal solution can be arrived at by making a locally optimal (greedy) choice: the choice made at each step may depend on choices made so far, but not on the results of future choices or subproblem solutions.
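A minimal sketch of a greedy choice that does lead to a global optimum: activity selection, where always taking the compatible activity that finishes earliest is provably optimal (the data and function name below are illustrative):

```python
# Greedy activity selection: sort by finish time, take each compatible one.
def select_activities(intervals):
    chosen, last_finish = [], float("-inf")
    for start, finish in sorted(intervals, key=lambda iv: iv[1]):
        if start >= last_finish:          # compatible with what we already took
            chosen.append((start, finish))
            last_finish = finish
    return chosen

print(select_activities([(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (6, 10), (8, 11)]))
# -> [(1, 4), (5, 7), (8, 11)]
```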
17. How is the greedy method different from the dynamic programming method?
18. What is the 0/1 knapsack problem? How is it solved using greedy?
The knapsack problem or rucksack problem is a problem in
combinatorial optimization: Given a set of items, each with a mass and a
value, determine the number of each item to include in a collection so that
the total weight is less than or equal to a given limit and the total value is
as large as possible. It derives its name from the problem faced by
someone who is constrained by a fixed-size knapsack and must fill it with
the most valuable items.
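Strictly speaking, the greedy approach (taking items in decreasing value-to-weight ratio) is provably optimal only for the fractional knapsack; for the 0/1 variant it is just a heuristic, and an exact optimum needs dynamic programming or branch and bound (see Q22). A minimal sketch of the greedy fractional version, with illustrative data:

```python
# Fractional knapsack: greedily take items by value/weight ratio.
def fractional_knapsack(items, capacity):
    """items: list of (value, weight); returns the best achievable value."""
    total = 0.0
    for value, weight in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
        if capacity <= 0:
            break
        take = min(weight, capacity)          # take as much as still fits
        total += value * (take / weight)      # possibly a fraction of the item
        capacity -= take
    return total

print(fractional_knapsack([(60, 10), (100, 20), (120, 30)], 50))  # -> 240.0
```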
19. What is backtracking?
Backtracking is a general algorithm for finding all (or some) solutions to
some computational problems, notably constraint satisfaction problems,
that incrementally builds candidates to the solutions, and abandons each
partial candidate c ("backtracks") as soon as it determines that c cannot
possibly be completed to a valid solution.[1][2]
The classic textbook example of the use of backtracking is the eight queens
puzzle, that asks for all arrangements of eight chess queens on a standard
chessboard so that no queen attacks any other. In the common
backtracking approach, the partial candidates are arrangements of k
queens in the first k rows of the board, all in different rows and columns.
Any partial solution that contains two mutually attacking queens can be abandoned.
Backtracking can be applied only for problems which admit the concept of a
"partial candidate solution" and a relatively quick test of whether it can
possibly be completed to a valid solution. It is useless, for example, for
locating a given value in an unordered table. When it is applicable,
however, backtracking is often much faster than brute force enumeration of
all complete candidates, since it can eliminate a large number of
candidates with a single test.
20. What is the time complexity of the n-queens problem?
The complexity of the naive backtracking formulation is O(n^n), and here is the explanation.
Here n represents the number of queens and remains the same for every function call; k is the row number, and the function is called recursively until k reaches n. Each of the n rows offers up to n column choices, so the recurrence is roughly T(k) = n * T(k - 1) over n levels, which gives n^n. (In practice, pruning eliminates most of this search space.)
Note: the function has two parameters. n is not decreasing; it remains the same for every call. But the row index k, which counts how deep the recursion has gone, increases toward n so that the recursion can terminate.
21. Explain the 8-queens problem.
The eight queens puzzle is the problem of placing eight chess queens on an 8×8 chessboard so that no two queens threaten each other. Thus, a solution requires that no two queens share the same row, column, or diagonal. The eight queens puzzle is an example of the more general n-queens problem of placing n queens on an n×n chessboard, where solutions exist for all natural numbers n with the exception of n=2 and n=3.[1]
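A minimal backtracking sketch for n-queens: place one queen per row, trying each column, and abandon ("backtrack") any partial arrangement in which two queens attack each other. For n = 8 it finds the well-known 92 solutions.

```python
def solve_n_queens(n):
    solutions, cols = [], []          # cols[r] = column of the queen in row r

    def safe(row, col):
        # reject a column clash or a diagonal clash with any placed queen
        return all(c != col and abs(c - col) != row - r
                   for r, c in enumerate(cols))

    def place(row):
        if row == n:
            solutions.append(cols[:])                 # complete solution
            return
        for col in range(n):
            if safe(row, col):
                cols.append(col)                      # extend the partial candidate
                place(row + 1)
                cols.pop()                            # backtrack

    place(0)
    return solutions

print(len(solve_n_queens(8)))   # -> 92 solutions on the standard 8x8 board
```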
22. What is branch and bound?
Branch and bound (BB or B&B) is an algorithm design paradigm for discrete and combinatorial optimization problems, as well as general real-valued problems. A branch-and-bound algorithm consists of a systematic
enumeration of candidate solutions by means of state space search: the set
of candidate solutions is thought of as forming a rooted tree with the full set
at the root. The algorithm explores branches of this tree, which represent
subsets of the solution set. Before enumerating the candidate solutions of a
branch, the branch is checked against upper and lower estimated bounds
on the optimal solution, and is discarded if it cannot produce a better
solution than the best one found so far by the algorithm.
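A minimal branch-and-bound sketch for the 0/1 knapsack, where the fractional-knapsack relaxation supplies the upper bound; the item data is illustrative and the items are assumed pre-sorted by value/weight ratio:

```python
def knapsack_bb(items, capacity):
    """items: list of (value, weight), sorted by value/weight descending."""
    best = 0

    def bound(i, value, room):
        # Optimistic estimate: fill the remaining room fractionally.
        for v, w in items[i:]:
            if w <= room:
                value, room = value + v, room - w
            else:
                return value + v * room / w
        return value

    def branch(i, value, room):
        nonlocal best
        best = max(best, value)
        if i == len(items) or bound(i, value, room) <= best:
            return                                    # prune this branch
        if items[i][1] <= room:                       # branch 1: take item i
            branch(i + 1, value + items[i][0], room - items[i][1])
        branch(i + 1, value, room)                    # branch 2: skip item i

    branch(0, 0, capacity)
    return best

print(knapsack_bb([(60, 10), (100, 20), (120, 30)], 50))  # -> 220
```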
23. What are the LIFO and FIFO strategies?
In branch and bound, the LIFO strategy keeps live nodes on a stack (depth-first exploration), while the FIFO strategy keeps them in a queue (breadth-first exploration); a third common variant, least-cost search, expands the most promising live node first.
24. What are polynomial & non-polynomial problems?
Any P type problem can be solved in "polynomial time." (A polynomial is a
mathematical expression consisting of a sum of terms, each term including
a variable or variables raised to a power and multiplied by a coefficient.) A P type problem can be solved in time polynomial in the number of bits that it takes to describe the instance of the problem at hand. An example of a P type


problem is finding the way from point A to point B on a map. An NP type
problem requires vastly more time to solve than it takes to describe the
problem. An example of an NP type problem is breaking a 128-bit digital
cipher. The P versus NP question is important in communications, because
it may ultimately determine the effectiveness (or ineffectiveness) of digital
encryption methods.
A hard NP problem defies any brute-force approach at solution, because finding the correct solution could take trillions of years or longer even if all the supercomputers in the world were put to the task. The P versus NP question asks whether every problem whose solution can be quickly verified can also be quickly solved; the hypothesis that it can is called P equals NP. Most researchers believe that no such shortcut exists (P is not equal to NP). If it turns out that P equals NP, then it will become possible, in principle, to crack the key to any digital cipher regardless of its complexity, thus rendering such digital encryption methods worthless.
25. What are tractable and intractable problems?
Generally we think of problems that are solvable by polynomial time
algorithms as being tractable, and problems that require superpolynomial
time as being intractable.
Sometimes the line between what is an easy problem and what is a hard problem is a fine one. For example, consider finding the shortest path from vertex x to vertex y in a given weighted graph: this can be solved efficiently without much difficulty. However, if we ask for the longest path (without cycles) from x to y, we have a problem for which no one knows a solution better than an exhaustive search.
26. What is reducibility?
A reduction is a way of converting one problem to another problem, so that the solution to the second problem can be used to solve the first problem. Finding the area of a rectangle reduces to measuring its width and height; solving a set of linear equations reduces to inverting a matrix.
27. What are NP, NP-Hard & NP-Complete problems?
28. What is the 3-SAT problem?
The 3-SAT problem is a special case of the SAT problem, in which the Boolean expression must have a very strict form: it must be divided into clauses, such that every clause contains three literals.
For example:
(x1 ∨ x2 ∨ x3) ∧ (x4 ∨ x5 ∨ x6)
This Boolean expression is in 3-SAT form, with 2 clauses, each containing 3 literals. The question is the same: are there values of x1...x6 for which the given Boolean expression is TRUE?
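A tiny brute-force checker for 3-SAT (the encoding and the second clause's negated literal are illustrative choices); trying all 2^n assignments is exactly the exponential cost that makes SAT the canonical NP-complete problem:

```python
from itertools import product

# A clause is a triple of literals: +i means xi, -i means NOT xi.
def satisfiable(clauses, n_vars):
    for bits in product([False, True], repeat=n_vars):
        def lit(l):                      # value of a literal under `bits`
            return bits[abs(l) - 1] if l > 0 else not bits[abs(l) - 1]
        if all(any(lit(l) for l in clause) for clause in clauses):
            return bits                  # a satisfying assignment
    return None

# (x1 OR x2 OR x3) AND (x4 OR NOT x5 OR x6)
print(satisfiable([(1, 2, 3), (4, -5, 6)], 6))
```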


29. What is an approximation algorithm?
An approximation algorithm is a way of dealing with NP-completeness for optimization problems. This technique does not guarantee the best solution.
The goal of an approximation algorithm is to come as close as possible
to the optimum value in a reasonable amount of time which is at most
polynomial time.
30. What is a randomized algorithm?
A randomized algorithm is an algorithm that employs a degree of
randomness as part of its logic. The algorithm typically uses uniformly
random bits as an auxiliary input to guide its behavior, in the hope of
achieving good performance in the "average case" over all possible choices
of random bits.
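A minimal sketch: quicksort with a uniformly random pivot, where the randomization makes the O(n log n) expected running time hold for every input rather than only for "average" inputs:

```python
import random

def quicksort(arr):
    if len(arr) <= 1:
        return arr
    pivot = random.choice(arr)                       # the random auxiliary input
    less    = [x for x in arr if x < pivot]
    equal   = [x for x in arr if x == pivot]
    greater = [x for x in arr if x > pivot]
    return quicksort(less) + equal + quicksort(greater)

print(quicksort([9, 3, 7, 1, 8, 2, 5]))  # -> [1, 2, 3, 5, 7, 8, 9]
```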
31. What is parallel computing? How does it differ from sequential computing?
32. What is Amdahl's law?
Amdahl's law is a formula used to find the maximum improvement possible by improving a particular part of a system. In parallel computing, Amdahl's law is mainly used to predict the theoretical maximum speedup for program processing using multiple processors.
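The usual statement: if a fraction p of a program's work can be parallelized across s processors, the overall speedup is 1 / ((1 - p) + p/s), so the serial fraction (1 - p) caps the achievable speedup at 1 / (1 - p). A minimal sketch with illustrative numbers:

```python
# Amdahl's law: speedup = 1 / ((1 - p) + p / s)
def amdahl_speedup(p, s):
    return 1.0 / ((1.0 - p) + p / s)

print(amdahl_speedup(0.95, 8))       # ~5.9x with 8 processors
print(amdahl_speedup(0.95, 10**6))   # ~20x: the cap for a 95%-parallel program
```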
33. When is a parallel algorithm said to be optimal?
34. Explain Bully's algorithm.
35. What is a mathematical model? Why is it necessary?
36. What is the Data Science Project Life Cycle (DSPLC)?
The data science life-cycle thus looks somewhat like:
1. Data acquisition
2. Data preparation
3. Hypothesis and modeling
4. Evaluation and Interpretation
5. Deployment
6. Operations
7. Optimization

Fig. Data Science Project Life-cycle


37. How do you define IoT?
38. What are adaptive and dynamic algorithms in IoT?

39. What is cryptography?


40. What are the different algorithms used for cryptography?
41. How is TSP solved using the dynamic programming technique?
42. What is JSON? What are its uses?
JSON is short for JavaScript Object Notation, and is a way to store
information in an organized, easy-to-access manner. In a nutshell, it gives
us a human-readable collection of data that we can access in a really logical
manner.
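A minimal sketch of JSON in practice, using Python's standard json module (the record contents below are made up for illustration):

```python
import json

record = {"subject": "DAA", "questions": 42, "topics": ["greedy", "DP"]}
text = json.dumps(record)            # Python object -> JSON string
print(text)                          # {"subject": "DAA", "questions": 42, ...}

parsed = json.loads(text)            # JSON string -> Python object
print(parsed["topics"][0])           # -> greedy
```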
