Having made an asymptotic analysis of the running time of an algorithm, how can you verify that the implementation of the algorithm performs as predicted by the analysis? The only practical way to do this is to conduct an experiment: write the algorithm in the form of a computer program, compile and execute the program, and measure its actual running time for various values of the parameter, say n, used to characterize the size of the problem.
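For concreteness, here is a minimal sketch of such a timing experiment in C#. The method SortRandomArray is a hypothetical stand-in for whatever algorithm is under test; the particular problem sizes and the use of the Stopwatch class are assumptions made for illustration, not part of the text.

    // A minimal sketch of a timing experiment. SortRandomArray is a hypothetical
    // stand-in for the algorithm being measured; the problem sizes and the use of
    // Stopwatch are illustrative assumptions.
    using System;
    using System.Diagnostics;

    class TimingExperiment
    {
        // Hypothetical algorithm under test: sort an array of n random integers.
        static void SortRandomArray(int n, Random rng)
        {
            int[] a = new int[n];
            for (int i = 0; i < n; ++i)
                a[i] = rng.Next();
            Array.Sort(a);
        }

        static void Main()
        {
            Random rng = new Random(12345);
            foreach (int n in new[] { 1000, 2000, 4000, 8000, 16000 })
            {
                Stopwatch watch = Stopwatch.StartNew();
                SortRandomArray(n, rng);
                watch.Stop();
                Console.WriteLine("n = {0,6}  T(n) = {1:F3} ms",
                                  n, watch.Elapsed.TotalMilliseconds);
            }
        }
    }

Repeating the measurement several times for each value of n and keeping the smallest (or the average) result helps reduce the influence of other activity on the machine.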
However, several difficulties immediately arise in carrying out such an experiment.
Suppose you have conducted an experiment in which you measured the actual running time of a program, T(n), for a number of different values of n. Furthermore, suppose that on the basis of an analysis of the algorithm you have concluded that the worst-case running time of the program is O(f(n)). How do you tell from the measurements made that the program behaves as predicted?
One way to do this follows directly from the definition of big oh: T(n) = O(f(n)) means that there exist a constant c > 0 and an integer n0 >= 1 such that T(n) <= c f(n) for all n >= n0. This suggests that we should compute the ratio T(n)/f(n) for each value of n in the experiment and observe how the ratio behaves as n increases. If this ratio diverges, then f(n) is probably too small; if this ratio converges to zero, then f(n) is probably too big; and if the ratio converges to a constant, then the analysis is probably correct.
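As a concrete illustration, the following sketch applies this ratio test, assuming the conjectured bound is f(n) = n log n and using a small table of hypothetical measurements; neither the measured times nor the choice of f comes from the text.

    // A sketch of the ratio test described above, assuming a conjectured bound of
    // f(n) = n log n and a small table of hypothetical measurements; both the
    // measured times and the choice of f are placeholders.
    using System;

    class RatioTest
    {
        // The conjectured f(n); replace with whatever bound the analysis produced.
        static double F(double n)
        {
            return n * Math.Log(n);
        }

        static void Main()
        {
            int[] sizes = { 1000, 2000, 4000, 8000, 16000 };
            double[] measuredMs = { 0.21, 0.45, 0.97, 2.10, 4.40 };  // hypothetical T(n) in ms

            for (int i = 0; i < sizes.Length; ++i)
            {
                double ratio = measuredMs[i] / F(sizes[i]);
                Console.WriteLine("n = {0,6}  T(n)/f(n) = {1:E3}", sizes[i], ratio);
            }
            // If the printed ratios grow without bound, f(n) is probably too small;
            // if they tend to zero, f(n) is probably too big; if they level off at
            // a constant, the analysis is probably correct.
        }
    }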
What if f(n) turns out to be too large? There are several possibilities: