Oral Questions: Analysis of Algorithms
For example, consider the expression
3n^3 + 6n^2 + 6000 = Θ(n^3)
Dropping lower order terms is always fine because there will always be an n0 after which Θ(n^3) beats Θ(n^2), irrespective of the constants involved.
1) Θ Notation: For a given function g(n), we denote by Θ(g(n)) the following set of functions:
Θ(g(n)) = { f(n): there exist positive constants c1, c2 and n0 such that
0 <= c1*g(n) <= f(n) <= c2*g(n) for all n >= n0 }
The above definition means that if f(n) is Θ(g(n)), then the value f(n) is
always between c1*g(n) and c2*g(n) for large values of n (n >= n0). The
definition of Θ also requires that f(n) must be non-negative for all values of
n greater than n0.
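As a sanity check on the definition, the sketch below verifies the Θ inequality numerically for f(n) = 3n^3 + 6n^2 + 6000 and g(n) = n^3. The witnesses c1, c2 and n0 are one arbitrary choice that works; they are not unique and are not from the original text.

```python
# Numeric sanity check of the Theta definition for
# f(n) = 3n^3 + 6n^2 + 6000 and g(n) = n^3.

def f(n):
    return 3 * n**3 + 6 * n**2 + 6000

def g(n):
    return n**3

c1, c2, n0 = 1, 10, 20  # assumed witness constants, chosen by hand

# Verify 0 <= c1*g(n) <= f(n) <= c2*g(n) over a sampled range n >= n0.
for n in range(n0, 5000):
    assert 0 <= c1 * g(n) <= f(n) <= c2 * g(n)

print("f(n) is Theta(n^3) over the sampled range")
```

Of course a finite loop cannot prove the inequality for all n, but since f(n)/g(n) = 3 + 6/n + 6000/n^3 is decreasing toward 3, the check is convincing.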
2) Big O Notation: The Big O notation defines an upper bound of an
algorithm; it bounds a function only from above. For example, consider the
case of Insertion Sort. It takes linear time in best case and quadratic time in
worst case. We can safely say that the time complexity of Insertion sort is
O(n^2). Note that O(n^2) also covers linear time.
If we use Θ notation to represent the time complexity of Insertion sort, we
have to use two statements for best and worst cases:
1. The worst case time complexity of Insertion Sort is Θ(n^2).
2. The best case time complexity of Insertion Sort is Θ(n).
The Big O notation is useful when we only have an upper bound on the time
complexity of an algorithm. Many times we can easily find an upper bound
simply by looking at the algorithm.
O(g(n)) = { f(n): there exist positive constants c and n0 such that
0 <= f(n) <= c*g(n) for all n >= n0 }
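Since Insertion Sort is the running example here, a minimal implementation may help. This is a standard textbook version, not taken from the original text:

```python
# Insertion sort: quadratic in the worst case, linear in the best case
# (an already-sorted input needs no shifting).

def insertion_sort(a):
    """Sort list a in place and return it."""
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        # Shift elements larger than key one slot to the right.
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a

print(insertion_sort([5, 2, 4, 6, 1, 3]))  # [1, 2, 3, 4, 5, 6]
```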
3) Ω Notation: Just as Big O notation provides an asymptotic upper bound
on a function, Ω notation provides an asymptotic lower bound.
Ω notation can be useful when we have a lower bound on the time complexity
of an algorithm. As discussed in the previous post, the best case
performance of an algorithm is generally not useful, so the Omega notation is
the least used notation among all three.
For a given function g(n), we denote by Ω(g(n)) the set of functions:
Ω(g(n)) = { f(n): there exist positive constants c and n0 such that
0 <= c*g(n) <= f(n) for all n >= n0 }
7.
Algorithm efficiency
The divide-and-conquer paradigm often helps in the discovery of efficient
algorithms.
Parallelism
Divide and conquer algorithms are naturally adapted for execution in multiprocessor machines, especially shared-memory systems where the
communication of data between processors does not need to be planned in
advance, because distinct sub-problems can be executed on different
processors.
Memory access
Divide-and-conquer algorithms naturally tend to make efficient use of
memory caches. The reason is that once a sub-problem is small enough, it
and all its sub-problems can, in principle, be solved within the cache,
without accessing the slower main memory. An algorithm designed to
exploit the cache in this way is called cache-oblivious, because it does not
contain the cache size(s) as an explicit parameter. Moreover, D&C
algorithms can be designed for important problems (e.g., sorting, FFTs,
and matrix multiplication) to be optimal cache-oblivious algorithms: they
use the cache in a provably optimal way, in an asymptotic sense,
regardless of the cache size.
Roundoff control
In computations with rounded arithmetic, e.g. with floating point numbers,
a divide-and-conquer algorithm may yield more accurate results than a
superficially equivalent iterative method. For example, one can add N
numbers either by a simple loop that adds each datum to a single variable,
or by a D&C algorithm called pairwise summation that breaks the data set
into two halves, recursively computes the sum of each half, and then adds
the two sums. While the second method performs the same number of
additions as the first, and pays the overhead of the recursive calls, it is
usually more accurate.
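The loop-versus-pairwise contrast above can be sketched directly. Both functions below perform the same number of additions; the data values are made up for illustration:

```python
# Pairwise summation (the D&C approach described above) versus a simple
# left-to-right loop.

def loop_sum(xs):
    total = 0.0
    for x in xs:
        total += x
    return total

def pairwise_sum(xs):
    n = len(xs)
    if n == 0:
        return 0.0
    if n == 1:
        return xs[0]
    mid = n // 2
    # Recursively sum each half, then add the two partial sums.
    return pairwise_sum(xs[:mid]) + pairwise_sum(xs[mid:])

data = [0.1] * 1000
# Both do 999 additions, but the pairwise tree keeps partial sums
# closer in magnitude, which limits roundoff error growth.
print(loop_sum(data), pairwise_sum(data))
```

In double precision the difference for this small instance is tiny, but the pairwise error grows like O(log n) while the loop's grows like O(n), which matters for large data sets or lower precision.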
8.
path problem, they were defined by the structure of the graph. The
decision was where to go next.
1. Each stage has a number of states associated with it.
The states for the capital budgeting problem corresponded to the
amount spent at that point in time. The state for the shortest path
problem was the node reached.
2. The decision at one stage transforms one state into a state in the
next stage.
The decision of how much to spend gave a total amount spent for the
next stage. The decision of where to go next defined where you
arrived in the next stage.
3. Given the current state, the optimal decision for each of the
remaining states does not depend on the previous states or
decisions.
In the budgeting problem, it is not necessary to know how the money
was spent in previous stages, only how much was spent. In the path
problem, it was not necessary to know how you got to a node, only
that you did.
4. There exists a recursive relationship that identifies the optimal
decision for stage j, given that stage j+1 has already been solved.
5. The final stage must be solvable by itself.
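Properties 3–5 above can be illustrated with a tiny backward recursion on a layered graph. The node names and edge costs below are invented for the sketch:

```python
# Backward recursion on a made-up layered graph: solve each stage using
# the already-solved next stage.

from functools import lru_cache

# edges[node] = list of (next_node, cost); 'T' is the terminal node.
edges = {
    'A': [('B', 2), ('C', 5)],
    'B': [('T', 4)],
    'C': [('T', 2)],
    'T': [],
}

@lru_cache(maxsize=None)
def best_cost(node):
    # Base case: the final stage is solvable by itself (property 5).
    if node == 'T':
        return 0
    # Recursive relationship (property 4): the optimal decision here
    # depends only on the current state, not on how we reached it
    # (property 3).
    return min(cost + best_cost(nxt) for nxt, cost in edges[node])

print(best_cost('A'))  # 6, via A -> B -> T
```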
13. What is the difference between divide and conquer & dynamic
programming technique
Divide & Conquer
1. The divide-and-conquer paradigm involves three steps at each level of
the recursion:
Divide the problem into a number of sub problems.
Conquer the sub problems by solving them recursively. If the sub problem
sizes are small enough, however, just solve the sub problems in a
straightforward manner.
Combine the solutions to the sub problems into the solution for the
original problem.
2. They call themselves recursively one or more times to deal with closely
related sub problems.
3. D&C does more work on the sub-problems and hence has more time
consumption.
4. In D&C the sub problems are independent of each other.
5. Example: Merge Sort, Binary Search
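Merge Sort, the first example named above, can be sketched as a direct instance of the divide/conquer/combine steps:

```python
# Merge Sort: divide the list in half, conquer each half recursively,
# and combine by merging the two sorted halves.

def merge_sort(a):
    if len(a) <= 1:               # small enough: solve directly
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])    # conquer the left half
    right = merge_sort(a[mid:])   # conquer the right half
    # Combine: merge the two sorted halves.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))  # [3, 9, 10, 27, 38, 43, 82]
```

Note that the two sub-problems are independent (point 4 above): each half is sorted without any reference to the other.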
Dynamic Programming
1. The development of a dynamic-programming algorithm can be broken
into a sequence of four steps:
a. Characterize the structure of an optimal solution.
b. Recursively define the value of an optimal solution.
c. Compute the value of an optimal solution in a bottom-up fashion.
d. Construct an optimal solution from computed information.
2. Dynamic Programming is typically implemented iteratively (bottom-up) rather than recursively.
3. DP solves each sub problem only once and then stores the result in a table.
4. In DP the sub-problems are not independent.
5. Example : Matrix chain multiplication
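The matrix chain multiplication example named above can be sketched bottom-up. The dimensions used are an illustrative instance, not from the original text:

```python
# Matrix chain multiplication: minimum number of scalar multiplications
# needed to multiply a chain of matrices, where matrix i has shape
# dims[i-1] x dims[i]. Bottom-up DP over chain lengths.

def matrix_chain_order(dims):
    n = len(dims) - 1                      # number of matrices
    # m[i][j] = min cost to multiply matrices i..j (1-indexed)
    m = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):         # increasing chain length
        for i in range(1, n - length + 2):
            j = i + length - 1
            # Try every split point k; each smaller chain was already
            # solved and stored in the table (solved only once).
            m[i][j] = min(
                m[i][k] + m[k + 1][j] + dims[i - 1] * dims[k] * dims[j]
                for k in range(i, j)
            )
    return m[1][n]

# Example: matrices of shapes 10x30, 30x5, 5x60.
print(matrix_chain_order([10, 30, 5, 60]))  # 4500
```

Here (A1 A2) A3 costs 10·30·5 + 10·5·60 = 4500 multiplications, while A1 (A2 A3) would cost 27000, so the parenthesization matters.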
14. What is memoization
Memoization is an optimization technique that stores the results of expensive
function calls and returns the cached result when the same inputs occur
again; it is the top-down counterpart of the bottom-up table filling used in
dynamic programming.
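Memoization means caching the results of function calls so that repeated sub-problems are computed only once. A minimal sketch using Python's functools.lru_cache:

```python
# Memoized Fibonacci: each fib(k) is computed once and cached, turning
# an exponential recursion into a linear one.

from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(40))  # 102334155
```

Without the cache, fib(40) makes hundreds of millions of redundant calls; with it, only 41 distinct ones.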
15. What is greedy method
A greedy algorithm is an algorithm that follows the problem solving
heuristic of making the locally optimal choice at each stage with the hope
of finding a global optimum.
16. What is greedy choice
The greedy-choice property says that a globally optimal solution can be
reached by making a locally optimal (greedy) choice at each step: we make
the choice that looks best at the moment, without the choice depending on
the solutions to subproblems.
17. How greedy is different from dynamic method
A greedy algorithm commits to a single locally optimal choice at each stage
and never reconsiders it, whereas dynamic programming evaluates every
choice for each subproblem, stores the results, and picks the best overall.
18. What is 0/1 knapsack problem how it is solved in greedy
The knapsack problem or rucksack problem is a problem in
combinatorial optimization: Given a set of items, each with a mass and a
value, determine the number of each item to include in a collection so that
the total weight is less than or equal to a given limit and the total value is
as large as possible. It derives its name from the problem faced by
someone who is constrained by a fixed-size knapsack and must fill it with
the most valuable items. A greedy strategy (taking items in decreasing
value-per-weight order) solves the fractional knapsack variant optimally;
for the 0/1 variant, where items cannot be split, this greedy rule can fail,
and dynamic programming is used instead.
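A greedy sketch for the fractional variant, where the value-per-weight rule is optimal, follows. The item values and weights are an illustrative instance, not from the original text:

```python
# Greedy fractional knapsack: take items in decreasing value-per-weight
# order, splitting the last item if it does not fit whole.
# (For the 0/1 variant this greedy rule is not guaranteed optimal.)

def fractional_knapsack(items, capacity):
    """items: list of (value, weight); returns max achievable value."""
    total = 0.0
    # Sort by value density, best first.
    for value, weight in sorted(items, key=lambda it: it[0] / it[1],
                                reverse=True):
        if capacity <= 0:
            break
        take = min(weight, capacity)   # whole item, or the fitting fraction
        total += value * (take / weight)
        capacity -= take
    return total

items = [(60, 10), (100, 20), (120, 30)]  # (value, weight) pairs
print(fractional_knapsack(items, 50))      # 240.0
```

With capacity 50 the greedy takes items 1 and 2 whole and two thirds of item 3, for 60 + 100 + 80 = 240; the best 0/1 answer on the same instance is only 220, showing why the variants differ.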
19. What is backtracking
Backtracking is a general algorithm for finding all (or some) solutions to
some computational problems, notably constraint satisfaction problems,
that incrementally builds candidates to the solutions, and abandons each
partial candidate c ("backtracks") as soon as it determines that c cannot
possibly be completed to a valid solution.[1][2]
The classic textbook example of the use of backtracking is the eight queens
puzzle, which asks for all arrangements of eight chess queens on a standard
chessboard so that no queen attacks any other. In the common
backtracking approach, the partial candidates are arrangements of k
queens in the first k rows of the board, all in different rows and columns.
Any partial solution that contains two mutually attacking queens can be
abandoned.
Backtracking can be applied only for problems which admit the concept of a
"partial candidate solution" and a relatively quick test of whether it can
possibly be completed to a valid solution. It is useless, for example, for
locating a given value in an unordered table. When it is applicable,
however, backtracking is often much faster than brute force enumeration of
all complete candidates, since it can eliminate a large number of
candidates with a single test.
20. What is the time complexity of n queen problem
The complexity is O(n^n), and here is the explanation.
Here n represents the number of queens and remains the same for every
function call. k is the row number, and the function is called recursively
until k reaches n. Thus if n=8, we have n rows and n queens.
T(n) = n*(c + T(n-1)), which expands to O(n^n): the recursion is at most n
levels deep and each level tries n columns.
Note: the function has two parameters. In the loop, n is not decreasing; for
every call it remains the same. But the number of rows left to fill decreases
with each recursive call, so the recursion terminates.
21. Explain 8 queen problem
The eight queens puzzle is the problem of placing eight chess queens on
an 8×8 chessboard so that no two queens threaten each other. Thus, a
solution requires that no two queens share the same row, column, or
diagonal. The eight queens puzzle is an example of the more general
n-queens problem of placing n queens on an n×n chessboard, where
solutions exist for all natural numbers n with the exception of n=2 and n=3.
[1]
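The row-by-row backtracking search described in question 19 can be sketched as a solution counter for the n-queens puzzle:

```python
# Backtracking n-queens: place one queen per row, pruning any column or
# diagonal that an earlier queen already attacks.

def count_solutions(n, row=0, cols=frozenset(),
                    diag1=frozenset(), diag2=frozenset()):
    if row == n:
        return 1          # all n queens placed: one complete solution
    total = 0
    for col in range(n):
        # Abandon partial candidates with two mutually attacking queens.
        if col in cols or (row - col) in diag1 or (row + col) in diag2:
            continue
        total += count_solutions(n, row + 1,
                                 cols | {col},
                                 diag1 | {row - col},
                                 diag2 | {row + col})
    return total

print(count_solutions(8))  # 92 solutions for the eight queens puzzle
```

The single membership test per column is what makes backtracking so much faster than enumerating all n^n placements.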
22. What is branch and Bound
Branch and bound (BB or B&B) is an algorithm design paradigm for
discrete and combinatorial optimization problems, as well as general
real-valued problems. A branch-and-bound algorithm consists of a systematic
enumeration of candidate solutions by means of state space search: the set
of candidate solutions is thought of as forming a rooted tree with the full set
at the root. The algorithm explores branches of this tree, which represent
subsets of the solution set. Before enumerating the candidate solutions of a
branch, the branch is checked against upper and lower estimated bounds
on the optimal solution, and is discarded if it cannot produce a better
solution than the best one found so far by the algorithm.
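A compact sketch of branch and bound on the 0/1 knapsack, using a fractional relaxation as the optimistic upper bound. The instance is made up for illustration:

```python
# Branch and bound for 0/1 knapsack: explore the tree of take/skip
# decisions, pruning any branch whose optimistic (fractional) bound
# cannot beat the best complete solution found so far.

def knapsack_bb(items, capacity):
    """items: list of (value, weight); returns best total value."""
    # Sort by value density so the fractional bound is easy to compute.
    items = sorted(items, key=lambda it: it[0] / it[1], reverse=True)
    best = 0

    def bound(i, value, cap):
        # Optimistic estimate: fill remaining capacity fractionally.
        for v, w in items[i:]:
            if w <= cap:
                value += v
                cap -= w
            else:
                return value + v * cap / w
        return value

    def explore(i, value, cap):
        nonlocal best
        best = max(best, value)
        if i == len(items) or bound(i, value, cap) <= best:
            return  # prune: this branch cannot improve on best
        v, w = items[i]
        if w <= cap:
            explore(i + 1, value + v, cap - w)  # branch: take item i
        explore(i + 1, value, cap)              # branch: skip item i

    explore(0, 0, capacity)
    return best

print(knapsack_bb([(60, 10), (100, 20), (120, 30)], 50))  # 220
```

The rooted tree from the definition above is the take/skip decision tree, and `bound` is the "upper estimated bound" that lets whole subtrees be discarded.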
23. What is LIFO and FIFO Strategy
These names describe how a branch-and-bound search stores its live nodes:
a FIFO (queue-based) strategy expands nodes in breadth-first order, while a
LIFO (stack-based) strategy expands them in depth-first order.
24. What is Polynomial problem & non polynomial problem
Any P-type problem can be solved in "polynomial time." (A polynomial is a
mathematical expression consisting of a sum of terms, each term including
a variable or variables raised to a power and multiplied by a coefficient.) A
P-type problem is polynomial in the number of bits that it takes to