Data Structures and Algorithms with Object-Oriented Design Patterns in C#
In order to better understand the actual performance of the various sorting algorithms presented in this chapter, it is necessary to conduct some experiments. Only by conducting experiments is it possible to determine the relative performance of algorithms with the same asymptotic running time.
To measure the performance of a sorting algorithm, we need to provide it with some data to sort. To obtain the results presented here, random sequences of integers were sorted. That is, for each value of n, the RandomNumberGenerator class defined earlier was used to create a sequence of n integers. In all cases (except for the bucket sort) the random numbers are uniformly distributed over the full range of 32-bit integers used; for the bucket sort the numbers are uniformly distributed in the interval [0, 1024).
The figures that follow show the actual running times of the sorting algorithms presented in this chapter. These running times were measured on an Intel Pentium III with a 1 GHz clock and 256 MB of RAM, running the Windows ME operating system. The programs were compiled using the C# compiler (csc) provided with the Microsoft .NET beta SDK and run under the Microsoft common language runtime. The times shown are elapsed CPU times, measured in seconds.
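The measurement procedure just described can be sketched as follows. This is a minimal stand-in, not the book's actual harness: System.Random replaces the book's RandomNumberGenerator class, and Array.Sort stands in for the sort under test.

```csharp
using System;
using System.Diagnostics;

class SortTimer
{
    // Build a sequence of n random integers, as described in the text.
    // System.Random is an assumed stand-in for RandomNumberGenerator.
    public static int[] RandomSequence(int n, int seed)
    {
        Random rng = new Random(seed);
        int[] a = new int[n];
        for (int i = 0; i < n; i++)
            a[i] = rng.Next();   // uniform in [0, int.MaxValue)
        return a;
    }

    // Time a single run of the given sort on n random integers.
    public static double TimeSort(Action<int[]> sort, int n)
    {
        int[] data = RandomSequence(n, 1);
        Stopwatch sw = Stopwatch.StartNew();
        sort(data);
        sw.Stop();
        return sw.Elapsed.TotalSeconds;
    }

    static void Main()
    {
        foreach (int n in new[] { 100, 1000, 10000 })
            Console.WriteLine("n={0}: {1:F4} s", n, TimeSort(Array.Sort, n));
    }
}
```

In practice one would average several runs per value of n, since a single elapsed-time measurement is sensitive to other activity on the machine.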
The first figure shows the running times of the O(n^2) sorts for sequences of length n up to 20000. Notice that bubble sort has the worst performance and binary insertion sort has the best. The graph clearly shows that, as predicted, binary insertion is better than straight insertion. Notice too that all of the sorts require more than 25 seconds of execution time to sort an array of 20000 integers.
Figure: Actual running times of the O(n^2) sorts.
The performance of the O(n log n) sorts is shown in the second figure. In this case, the length of the sequence varies between n=100 and n=100000. The graph clearly shows that the O(n log n) algorithms are significantly faster than the O(n^2) ones. All three algorithms sort 100000 integers in under 2 seconds. Merge sort and quicksort display almost identical performance.
Figure: Actual running times of the O(n log n) sorts.
The third figure shows the actual running times for the bucket sort and radix sort algorithms. Both of these algorithms were shown to be O(n) sorts. The graph shows results for n between 100 and 500000. The universe used to test the bucket sort was {0, 1, ..., 1023}. That is, a total of m=1024 counters (buckets) were used. For the radix sort, 32-bit integers were sorted by using the radix R=256 and doing p=4 sorting passes.
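As a concrete illustration of the parameters quoted above (R=256, p=4), here is a minimal sketch of a least-significant-digit radix sort for 32-bit keys. It is not the book's implementation; for simplicity it treats the keys as unsigned (uint) values.

```csharp
using System;

class RadixSorter
{
    public const int R = 256;  // radix: one byte per pass
    public const int P = 4;    // passes: 4 bytes cover 32 bits

    public static void Sort(uint[] a)
    {
        uint[] buffer = new uint[a.Length];
        for (int pass = 0; pass < P; pass++)
        {
            int shift = 8 * pass;
            int[] count = new int[R];

            // Count the keys that fall in each of the R buckets.
            foreach (uint x in a)
                count[(x >> shift) & (R - 1)]++;

            // Convert counts to starting offsets (prefix sums).
            int offset = 0;
            for (int i = 0; i < R; i++)
            {
                int c = count[i];
                count[i] = offset;
                offset += c;
            }

            // Distribute: stable copy into buffer by the current byte.
            foreach (uint x in a)
                buffer[count[(x >> shift) & (R - 1)]++] = x;

            // Swap the roles of a and buffer for the next pass.
            uint[] t = a; a = buffer; buffer = t;
        }
        // After an even number of passes (P = 4), the sorted data ends
        // up back in the caller's array, so no final copy is needed.
    }
}
```

Each pass is a stable counting sort on one byte of the key, which is why p=4 passes with R=256 suffice for 32-bit integers.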
Figure: Actual running times of the O(n) sorts.
Clearly, the bucket sort has the better running time. For example, it sorts 500000 10-bit integers in under 2 seconds. Radix sort performs extremely well too. It sorts 500000 32-bit integers in just over 2 seconds, only slightly slower than the bucket sort.
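The bucket sort's advantage on 10-bit keys follows from its simplicity: with m=1024 counters it reduces to a single counting pass plus a rewrite of the array. The following sketch (not the book's implementation) shows the idea.

```csharp
using System;

class BucketSorter
{
    public const int M = 1024;  // number of buckets (10-bit keys)

    // Sorts an array whose elements are drawn from {0, 1, ..., M-1}.
    public static void Sort(int[] a)
    {
        int[] count = new int[M];

        // Tally how many times each key occurs.
        foreach (int x in a)
            count[x]++;

        // Rewrite the array in order, emitting each key as many
        // times as it was counted.
        int k = 0;
        for (int key = 0; key < M; key++)
            for (int c = count[key]; c > 0; c--)
                a[k++] = key;
    }
}
```

The running time is O(n + m); with m fixed at 1024 and n as large as 500000, the counting pass dominates, which is consistent with the near-linear behavior reported above.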