Bucket sort

Class: Sorting algorithm
Data structure: Array
Worst-case performance: O(n²)
Average performance: O(n + n²/k + k), where k is the number of buckets; O(n) when k ≈ n
Worst-case space complexity: O(n + k)

[Figure: Elements are distributed among bins]
[Figure: Then, elements are sorted within each bin]

Bucket sort, or bin sort, is a sorting algorithm that works by distributing the elements of an array into a number of buckets. Each bucket is then sorted individually, either using a different sorting algorithm, or by recursively applying the bucket sorting algorithm. It is a distribution sort, a generalization of pigeonhole sort that allows multiple keys per bucket, and is a cousin of radix sort in the most-to-least significant digit flavor. Bucket sort can be implemented with comparisons and therefore can also be considered a comparison sort algorithm. The computational complexity depends on the algorithm used to sort each bucket, the number of buckets to use, and whether the input is uniformly distributed.

Bucket sort works as follows:

  1. Set up an array of initially empty "buckets".
  2. Scatter: Go over the original array, putting each object in its bucket.
  3. Sort each non-empty bucket.
  4. Gather: Visit the buckets in order and put all elements back into the original array.

Pseudocode

function bucketSort(array, k) is
    buckets ← new array of k empty lists
    M ← 1 + the maximum key value in the array
    for i = 0 to length(array) - 1 do
        insert array[i] into buckets[floor(k × array[i] / M)]
    for i = 0 to k - 1 do
        nextSort(buckets[i])
    return the concatenation of buckets[0], ..., buckets[k - 1]

Let array denote the array to be sorted and k denote the number of buckets to use. The maximum key value can be computed in linear time by iterating over all the keys once. The floor function must be used to convert a floating-point number to an integer (and possibly a cast between datatypes as well). The function nextSort is a sorting function used to sort each bucket. Conventionally, insertion sort is used, but other algorithms, such as selection sort or merge sort, could be used as well. Using bucketSort itself as nextSort produces a relative of radix sort; in particular, the case k = 2 corresponds to quicksort (although potentially with poor pivot choices).
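
For concreteness, the following is a minimal Python rendering of the pseudocode above, with insertion sort playing the role of nextSort; the function names are illustrative, and the code assumes non-negative numeric keys as in the pseudocode.

def insertion_sort(bucket):
    # Sort a single bucket in place.
    for i in range(1, len(bucket)):
        key = bucket[i]
        j = i - 1
        while j >= 0 and bucket[j] > key:
            bucket[j + 1] = bucket[j]
            j -= 1
        bucket[j + 1] = key
    return bucket

def bucket_sort(array, k):
    # Assumes non-negative numeric keys, as in the pseudocode above.
    if not array:
        return []
    buckets = [[] for _ in range(k)]
    m = 1 + max(array)                      # M = 1 + maximum key value
    for x in array:
        buckets[int(k * x / m)].append(x)   # scatter
    for b in buckets:
        insertion_sort(b)                   # sort each bucket (nextSort)
    return [x for b in buckets for x in b]  # gather by concatenation

# Example: bucket_sort([0.42, 0.17, 0.93, 0.05], 4) -> [0.05, 0.17, 0.42, 0.93]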

Analysis

Worst-case analysis

When the input contains several keys that are close to each other (clustering), those elements are likely to be placed in the same bucket, which results in some buckets containing more elements than average. The worst-case scenario occurs when all the elements are placed in a single bucket. The overall performance would then be dominated by the algorithm used to sort each bucket, which is O(n²) for insertion sort and O(n log n) for comparison-based algorithms such as merge sort.
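
For instance, using the bucket-index rule from the pseudocode, a clustered input like the following (an illustrative example) lands entirely in one bucket, so the per-bucket sort does all the work:

# Hypothetical clustered input: all keys fall close together.
data = [0.90, 0.91, 0.92, 0.93, 0.94]
k = 5
m = 1 + max(data)                               # M = 1.94
indices = [int(k * x / m) for x in data]
print(indices)                                  # [2, 2, 2, 2, 2]: one bucket gets every element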

Average-case analysis

Consider the case that the input is uniformly distributed. The first step, which is to initialize the buckets and find the maximum key value in the array, can be done in O(n) time. If division and multiplication can be done in constant time, then scattering each element to its bucket also costs O(n). Assume insertion sort is used to sort each bucket; then the third step costs O(∑_i n_i²), where n_i is the length of the bucket indexed i. Since we are concerned with the average time, the expectation E[n_i²] has to be evaluated instead. Let X_ij be the random variable that is 1 if element j is placed in bucket i, and 0 otherwise. We have n_i = ∑_j X_ij. Therefore,

    E[n_i²] = E[(∑_j X_ij)²]
            = E[∑_j ∑_l X_ij X_il]
            = ∑_j E[X_ij²] + ∑_{j≠l} E[X_ij X_il]

The last line separates the summation into the case j = l and the case j ≠ l. Since the chance of an object being distributed to bucket i is 1/k, X_ij is 1 with probability 1/k and 0 otherwise, so

    E[X_ij²] = 1² · (1/k) + 0² · (1 − 1/k) = 1/k
    E[X_ij X_il] = (1/k) · (1/k) = 1/k²   (for j ≠ l, by independence)

With the summation, it would be

    ∑_j E[X_ij²] + ∑_{j≠l} E[X_ij X_il] = n · (1/k) + n(n − 1) · (1/k²) = (n² + nk − n) / k²

Finally, the complexity would be O(∑_i E[n_i²]) = O(k · (n² + nk − n)/k²) = O(n²/k + n).

The last step of bucket sort, which is concatenating all the sorted objects in each bucket, requires O(k) time. Therefore, the total complexity is O(n + n²/k + k). Note that if k is chosen to be k = Θ(n), then bucket sort runs in O(n) average time, given a uniformly distributed input.[1]
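
The expectation derived above can be checked empirically. The following small simulation (an illustrative sanity check, not part of the original analysis) estimates E[n_i²] for uniformly distributed keys and compares it with (n² + nk − n)/k²:

import random

def estimate_bucket_square(n, k, trials=10000):
    # Empirically estimate E[n_i^2]: drop n uniform keys into k buckets
    # and average the squared occupancy of one fixed bucket.
    total = 0
    for _ in range(trials):
        counts = [0] * k
        for _ in range(n):
            counts[random.randrange(k)] += 1
        total += counts[0] ** 2
    return total / trials

n, k = 100, 10
print(estimate_bucket_square(n, k))       # close to the analytic value
print((n * n + n * k - n) / k ** 2)       # E[n_i^2] from the derivation: 109.0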

Optimizations

A common optimization is to put the unsorted elements of the buckets back in the original array first, then run insertion sort over the complete array; because insertion sort's runtime is based on how far each element is from its final position, the number of comparisons remains relatively small, and the memory hierarchy is better exploited by storing the list contiguously in memory.[2]
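
A sketch of this optimization, under the same assumptions as the Python rendering above (non-negative numeric keys, illustrative names):

def bucket_sort_optimized(array, k):
    # Scatter into buckets, write them back unsorted, then run a single
    # insertion sort over the nearly-sorted array.
    if not array:
        return array
    buckets = [[] for _ in range(k)]
    m = 1 + max(array)
    for x in array:
        buckets[int(k * x / m)].append(x)
    i = 0
    for b in buckets:                # gather without sorting the buckets
        for x in b:
            array[i] = x
            i += 1
    # Insertion sort over the whole array; elements are already near their
    # final positions, so few comparisons and shifts are needed.
    for i in range(1, len(array)):
        key = array[i]
        j = i - 1
        while j >= 0 and array[j] > key:
            array[j + 1] = array[j]
            j -= 1
        array[j + 1] = key
    return array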

If the input distribution is known or can be estimated, buckets can often be chosen which contain constant density (rather than merely having constant size). This allows O(n) average time complexity even without uniformly distributed input.
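
For instance, if an estimated cumulative distribution function of the input is available, mapping each key through it yields buckets of roughly equal expected occupancy; the cdf parameter below is an assumed estimator, not part of the standard algorithm:

def bucket_index(x, k, cdf):
    # cdf(x) should return an estimate of P(key <= x) in [0, 1].
    # Keys are then spread so each bucket receives about n/k elements,
    # even when the raw key values are not uniformly distributed.
    return min(k - 1, int(k * cdf(x)))

# Example: keys known to lie in [0, 100) with an estimated linear CDF.
# index = bucket_index(x, k, lambda v: v / 100.0)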

Variants

Generic bucket sort

The most common variant of bucket sort operates on a list of n numeric inputs between zero and some maximum value M and divides the value range into n buckets each of size M/n. If each bucket is sorted using insertion sort, the sort can be shown to run in expected linear time (where the average is taken over all possible inputs).[3] However, the performance of this sort degrades with clustering; if many values occur close together, they will all fall into a single bucket and be sorted slowly. This performance degradation is avoided in the original bucket sort algorithm by assuming that the input is generated by a random process that distributes elements uniformly over the interval [0,1).[1]

ProxmapSort

Similar to generic bucket sort as described above, ProxmapSort works by dividing an array of keys into subarrays via the use of a "map key" function that preserves a partial ordering on the keys; as each key is added to its subarray, insertion sort is used to keep that subarray sorted, resulting in the entire array being in sorted order when ProxmapSort completes. ProxmapSort differs from bucket sorts in its use of the map key to place the data approximately where it belongs in sorted order, producing a "proxmap" — a proximity mapping — of the keys.
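
The following is a much-simplified illustration of this idea (assumed names; it keeps each subarray sorted as keys arrive rather than building the full in-place proxmap):

import bisect

def proxmap_style_sort(keys, k, map_key):
    # map_key must preserve partial order: a <= b implies map_key(a) <= map_key(b).
    subarrays = [[] for _ in range(k)]
    for key in keys:
        # Insert into the proper subarray, keeping it sorted as keys arrive
        # (an insertion step per incoming key).
        bisect.insort(subarrays[map_key(key)], key)
    # The concatenation of the subarrays is already in sorted order.
    return [x for sub in subarrays for x in sub]

# Example map key for floats in [0, 10): the integer part selects the subarray.
# proxmap_style_sort([6.7, 5.9, 8.4, 1.2, 7.3], 10, lambda x: int(x))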

Histogram sort

Another variant of bucket sort known as histogram sort or counting sort adds an initial pass that counts the number of elements that will fall into each bucket using a count array.[4] Using this information, the array values can be arranged into a sequence of buckets in-place by a sequence of exchanges, leaving no space overhead for bucket storage.
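
The counting pass can be sketched as follows; this version writes into a separate output array for clarity, whereas the in-place variant described above performs exchanges guided by the same offsets (names are illustrative):

def histogram_bucket_sort(array, k):
    # Counting pass: how many elements will fall into each bucket?
    if not array:
        return []
    m = 1 + max(array)
    counts = [0] * k
    for x in array:
        counts[int(k * x / m)] += 1
    # Prefix sums give each bucket's starting offset in the output.
    offsets = [0] * k
    for i in range(1, k):
        offsets[i] = offsets[i - 1] + counts[i - 1]
    # Placement pass: write each element directly into its bucket's region.
    out = [None] * len(array)
    for x in array:
        b = int(k * x / m)
        out[offsets[b]] = x
        offsets[b] += 1
    # Finally, sort each bucket's region (here with Python's built-in sort).
    end = 0
    for c in counts:
        start, end = end, end + c
        out[start:end] = sorted(out[start:end])
    return out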

Postman's sort

The Postman's sort is a variant of bucket sort that takes advantage of a hierarchical structure of elements, typically described by a set of attributes. This is the algorithm used by letter-sorting machines in post offices: mail is sorted first between domestic and international; then by state, province or territory; then by destination post office; then by routes, etc. Since keys are not compared against each other, sorting time is O(cn), where c depends on the size of the key and number of buckets. This is similar to a radix sort that works "top down," or "most significant digit first."[5]
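
A toy sketch of the top-down idea, distributing fixed-length uppercase strings one character (one "attribute") at a time; the alphabet and the fixed-length assumption are illustrative simplifications:

def postman_style_sort(words, pos=0):
    # Recursively distribute strings by the character at position pos,
    # most significant attribute first; no key is compared to another key.
    if len(words) <= 1 or pos >= len(words[0]):
        return list(words)
    buckets = [[] for _ in range(26)]            # one bucket per letter A-Z
    for w in words:
        buckets[ord(w[pos]) - ord('A')].append(w)
    result = []
    for b in buckets:
        result.extend(postman_style_sort(b, pos + 1))
    return result

# postman_style_sort(["DOG", "CAT", "COW", "ANT"]) -> ['ANT', 'CAT', 'COW', 'DOG']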

Shuffle sort

The shuffle sort[6] is a variant of bucket sort that begins by removing the first 1/8 of the n items to be sorted, sorts them recursively, and puts them in an array. This creates n/8 "buckets" to which the remaining 7/8 of the items are distributed. Each "bucket" is then sorted, and the "buckets" are concatenated into a sorted array.
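
A sketch of this scheme, under the assumption that the sorted sample values serve as bucket boundaries for distributing the remaining items (the cutoff of 16 is an arbitrary illustrative choice):

import bisect

def shuffle_sort(items):
    n = len(items)
    if n < 16:                                  # small inputs: sort directly
        return sorted(items)
    sample = shuffle_sort(items[:n // 8])       # sort the first 1/8 recursively
    # The sorted sample values delimit len(sample) + 1 buckets for the rest.
    buckets = [[] for _ in range(len(sample) + 1)]
    for x in items[n // 8:]:
        buckets[bisect.bisect_left(sample, x)].append(x)
    result = []
    for boundary, bucket in zip(sample + [None], buckets):
        result.extend(sorted(bucket))           # sort each bucket
        if boundary is not None:
            result.append(boundary)             # put the boundary value back
    return result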

Comparison with other sorting algorithms

Bucket sort can be seen as a generalization of counting sort; in fact, if each bucket has size 1 then bucket sort degenerates to counting sort. The variable bucket size of bucket sort allows it to use O(n) memory instead of O(M) memory, where M is the number of distinct values; in exchange, it gives up counting sort's O(n + M) worst-case behavior.

Bucket sort with two buckets is effectively a version of quicksort where the pivot value is always selected to be the middle value of the value range. While this choice is effective for uniformly distributed inputs, other means of choosing the pivot in quicksort such as randomly selected pivots make it more resistant to clustering in the input distribution.

The n-way mergesort algorithm also begins by distributing the list into n sublists and sorting each one; however, the sublists created by mergesort have overlapping value ranges and so cannot be recombined by simple concatenation as in bucket sort. Instead, they must be interleaved by a merge algorithm. However, this added expense is counterbalanced by the simpler scatter phase and the ability to ensure that each sublist is the same size, providing a good worst-case time bound.

Top-down radix sort can be seen as a special case of bucket sort where both the range of values and the number of buckets is constrained to be a power of two. Consequently, each bucket's size is also a power of two, and the procedure can be applied recursively. This approach can accelerate the scatter phase, since we only need to examine a prefix of the bit representation of each element to determine its bucket.
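
For example, with the number of buckets fixed at a power of two, the bucket index is just the key's top bits, so the scatter step needs no division (a small illustrative snippet; the bit widths are assumptions):

def bucket_of(key, value_bits=16, bucket_bits=4):
    # With 2**bucket_bits buckets over keys in [0, 2**value_bits),
    # the bucket index is simply the key's most significant bits.
    return key >> (value_bits - bucket_bits)

# bucket_of(0xBEEF) == 0xB, i.e. key 48879 falls into bucket 11 of 16.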

References

  1. ^ a b Thomas H. Cormen; Charles E. Leiserson; Ronald L. Rivest; Clifford Stein. Introduction to Algorithms. "Bucket sort runs in linear time on the average. Like counting sort, bucket sort is fast because it assumes something about the input. Whereas counting sort assumes that the input consists of integers in a small range, bucket sort assumes that the input is generated by a random process that distributes elements uniformly over the interval [0, 1). The idea of bucket sort is to divide the interval [0, 1) into n equal-sized subintervals, or buckets, and then distribute the n input numbers into the buckets. Since the inputs are uniformly distributed over [0, 1), we don't expect many numbers to fall into each bucket. To produce the output, we simply sort the numbers in each bucket and then go through the buckets in order, listing the elements in each."
  2. ^ Corwin, E. and Logar, A. "Sorting in linear time — variations on the bucket sort". Journal of Computing Sciences in Colleges, 20, 1, pp.197–202. October 2004.
  3. ^ Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms, Second Edition. MIT Press and McGraw-Hill, 2001. ISBN 0-262-03293-7. Section 8.4: Bucket sort, pp.174–177.
  4. ^ NIST's Dictionary of Algorithms and Data Structures: histogram sort
  5. ^ "Robert Ramey Software Development".
  6. ^ "A revolutionary new sort", John Cohen, Nov 26, 1997.

External links