Amortized analysis is a method of analyzing algorithms that considers the entire sequence of operations
of the program. It allows a worst-case bound on the performance of an algorithm to be established
irrespective of the inputs by looking at all of the operations. At the heart of the method is the idea that
while certain operations may be extremely costly in resources, they cannot occur frequently enough to
weigh down the entire program, because the less costly operations will far outnumber the costly ones
in the long run, "paying back" the program over a number of iterations. [1] It is particularly useful
because it guarantees worst-case performance while accounting for the entire sequence of operations
in an algorithm.
The basic idea is that a worst-case operation can alter the state in such a way that the worst case cannot
occur again for a long time, thus "amortizing" its cost.
There are generally three methods for performing amortized analysis: the aggregate method, the
accounting method, and the potential method. All of these give the same answers; the choice among
them is primarily circumstantial and a matter of individual preference. [2]

Aggregate analysis determines the upper bound T(n) on the total cost of a sequence
of n operations, then calculates the average cost to be T(n) / n. [2]
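
For instance, applying the aggregate method to the dynamic array analyzed under Examples below
(where the bound is derived): a sequence of n insertions takes T(n) < 3n steps in total, so the
amortized cost per insertion is T(n) / n < 3 = O(1).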

The accounting method determines the individual cost of each operation, combining its immediate
execution time and its influence on the running time of future operations. Usually, many short-running
operations accumulate a "debt" of unfavorable state in small increments, while rare long-running
operations decrease it drastically.[2]
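
As an illustrative sketch of the accounting method on the doubling dynamic array of the Examples
section (the charge of 3 credits per insertion and the function name are choices made here, not fixed
by the article), the following simulation verifies that the banked credit never goes negative, i.e. that
the small overcharges on cheap insertions cover the rare expensive copies:

def simulate(n):
    credit = 0             # banked credits; must never go negative
    capacity = 1
    size = 0
    for _ in range(n):
        credit += 3        # amortized charge for one insertion
        if size == capacity:
            credit -= size     # pay the actual cost of copying `size` elements
            capacity *= 2
        credit -= 1        # pay the actual cost of writing the new element
        size += 1
        assert credit >= 0, "a charge of 3 would have been too small"
    return credit

if __name__ == "__main__":
    print(simulate(1000))  # non-negative: 3 credits per insert suffice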

The potential method is like the accounting method, but overcharges operations early to
compensate for undercharges later.[2]
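
As a worked instance for the same dynamic array (this particular potential function is an assumption
of this sketch, though it is the standard choice), take Phi = 2*size - capacity. An insertion that does
not trigger a doubling costs 1 actual step and raises Phi by 2, for an amortized cost of 1 + 2 = 3. An
insertion into a full array of size k costs k + 1 actual steps (k copies plus one write), while Phi drops
from 2k - k = k to 2(k+1) - 2k = 2, so its amortized cost is (k + 1) + (2 - k) = 3. Every insertion
therefore has amortized cost O(1), matching the other two methods.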

Examples
As a simple example, consider an implementation of the dynamic array in which we double the size of
the array each time it fills up. An insertion may then require array reallocation, which in the worst case
costs O(n). However, a sequence of n insertions can always be done in O(n) time overall: the
reallocations copy 1 + 2 + 4 + ... (fewer than 2n) elements in total, and all other work is done in
constant time per insertion. The amortized time per operation is therefore O(n) / n = O(1).
Another way to see this is to think of a sequence of n operations of two kinds: a regular insertion,
which requires constant time c to perform (assume c = 1), and an array doubling, which requires O(j)
time, where j < n is the size of the array at the time of the doubling. The time to perform the sequence
is at most the time needed for n regular insertions plus the time needed for all of the array doublings
that would take place during the sequence. There are only as many array doublings in the sequence as
there are powers of 2 between 1 and n, that is, about lg(n) of them. Therefore the cost of a sequence
of n operations is strictly less than the following expression: [3]

n * c + sum_{j=0}^{lg(n)} 2^j = n + (2^{lg(n)+1} - 1) < n + 2n = 3n
The amortized time per operation is the worst-case time bound on a series of n operations divided by
n. The amortized time per operation is therefore O(3n) / n = O(n) / n = O(1).
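
This bound can be checked empirically. Below is a minimal sketch (an assumed implementation
written for this article, not prescribed by it) of a doubling dynamic array that counts element writes,
including the copies made during reallocation; the counter stays below 3n:

class DynamicArray:
    def __init__(self):
        self.capacity = 1
        self.size = 0
        self.data = [None] * self.capacity
        self.steps = 0                    # total element writes, copies included

    def append(self, x):
        if self.size == self.capacity:    # array is full: double and copy
            self.capacity *= 2
            new_data = [None] * self.capacity
            for i in range(self.size):
                new_data[i] = self.data[i]
                self.steps += 1           # one step per element copied
            self.data = new_data
        self.data[self.size] = x
        self.size += 1
        self.steps += 1                   # one step for the insertion itself

if __name__ == "__main__":
    a = DynamicArray()
    n = 10000
    for i in range(n):
        a.append(i)
    print(a.steps, "<", 3 * n)            # e.g. 26383 < 30000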
Comparison to other methods

Notice that average-case analysis and probabilistic analysis of probabilistic algorithms are not the
same thing as amortized analysis. In average-case analysis, we are averaging over all possible
inputs; in probabilistic analysis of probabilistic algorithms, we are averaging over all possible random
choices; in amortized analysis, we are averaging over a sequence of operations. Amortized analysis
assumes worst-case input and typically does not allow random choices.
An average-case analysis of an algorithm is problematic because the user depends on the assumption
that a given set of inputs will not trigger the worst-case scenario. A worst-case analysis, on the other
hand, is often overly pessimistic: for certain programs the probability of a worst-case operation
occurring many times in a sequence is 0, yet the worst-case bound charges every operation at the
worst-case cost.
Common use

In common usage, an "amortized algorithm" is one that an amortized analysis has shown to
perform well.

Online algorithms commonly use amortized analysis.

What is Amortized Complexity?


The complexity of a method or operation, as defined in Chapter 2 of the text, is
the actual complexity of the method/operation. The actual complexity of an operation
is determined by the step count for that operation, and the actual complexity of a
sequence of operations is determined by the step count for that sequence. The actual
complexity of a sequence of operations may be determined by adding together the
step counts for the individual operations in the sequence. Typically, determining the
step count for each operation in the sequence is quite difficult, and instead, we obtain
an upper bound on the step count for the sequence by adding together the worst-case
step count for each operation.
Example: Consider the method insert of Program 2.10. This method inserts an element into a sorted
array, and its step count ranges from a low of 4 to a high of 2n+4, where n is the number of elements
already in the array. Suppose we perform 5 insert operations beginning with n = 0. Further, suppose
that the actual step counts for these insert operations are 4, 4, 6, 10, and 8, respectively. The actual
step count for the sequence of insert operations is 4 + 4 + 6 + 10 + 8 = 32. If we did not know the
actual step count for the individual operations, we could obtain an upper bound on the actual step
count for the operation sequence using one of the following two approaches.
1. Since the worst-case step count for an insert operation is 2n+4, sum_{0 <= i <= 4} (2i+4)
= 4 + 6 + 8 + 10 + 12 = 40 is an upper bound on the step count for the
sequence of 5 inserts.
2. The maximum number of elements already in the array at the time an insert
operation begins is 4. Therefore, the worst-case step count of an insert
operation is 2*4+4 = 12, and 5*12 = 60 is an upper bound on the step
count for the sequence of 5 inserts.

In the preceding example, the upper bound obtained by the first approach is closer to
the actual step count for the operation sequence. We say that the count obtained by the
first approach is a tighter (i.e., closer to the real count) upper bound than that
obtained by the second approach.
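
To make the bookkeeping concrete, here is a minimal sketch of a sorted-array insert with an explicit
step counter. The counting convention below (chosen so the count ranges from 4 up to 2n+4,
matching the figures quoted above) is an assumption of this sketch; it is not a reproduction of the
text's Program 2.10:

def insert(a, n, x):
    # Insert x into the sorted prefix a[0:n]; return the new n and a step count.
    steps = 1                        # initialization of the scan index
    i = n - 1
    while i >= 0 and a[i] > x:       # shift larger elements one slot right
        a[i + 1] = a[i]
        i -= 1
        steps += 2                   # one comparison plus one move
    a[i + 1] = x
    steps += 3                       # final comparison, the store, and return
    return n + 1, steps

if __name__ == "__main__":
    a = [0] * 16                     # backing array with spare capacity
    n, total = 0, 0
    for x in [5, 7, 1, 9, 3]:
        n, s = insert(a, n, x)
        total += s
    print(total)                     # 30 here; compare with the 40 and 60 bounds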
When determining the complexity of a sequence of operations, we can, at times, obtain tighter
bounds using amortized complexity.
