We need to sort a set of numbers, in such a way that the sort takes
linear time in the number of elements if the list is already sorted, and
no worse than O(n log n) time otherwise. We are implementing this in
hardware, and need to minimise the number of memory accesses. Because of
this, an array is probably a better representation than a linked list,
but maybe not.
We could examine the list first and then do a merge sort if any elements
are out of order, but we would prefer not to do this.
In particular, we would rather like the algorithm to spend O(n log n)
time operating on any unsorted sub-lists, while still managing O(n) time
on the rest. I.e., any time that it takes in addition to the time it
takes on a sorted list should depend only on those clusters of elements
that are out of order.
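To make the rejected approach concrete, here is a minimal sketch of the
two-pass idea: scan once for an inversion, and only fall back to a full
merge sort if one is found. The function name is just for illustration;
the extra scan over already-sorted data is exactly the overhead we would
rather fold into the sort itself.

```c
#include <stddef.h>

/* Returns 1 if a[0..n) is in non-decreasing order, 0 otherwise.
   In the two-pass scheme, a full merge sort would only be invoked
   when this returns 0. */
static int is_sorted(const int *a, size_t n)
{
    for (size_t i = 1; i < n; i++)
        if (a[i - 1] > a[i])
            return 0;
    return 1;
}
```

The drawback is visible even in this sketch: the scan costs a full pass
of memory reads before any sorting work begins.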
This is for a multiple polygon rendering algorithm: the hardware
maintains a list of x-coordinates in order, to run-length encode the
pixel colour changes occurring on a single horizontal scan line, which
are dispatched to pixel rendering hardware. As the scan line moves down
vertically, the x-coordinates move by (different) small amounts.
Occasionally, a few move past the others. The sorting algorithm is
needed to maintain the sorted order of these x-coordinates. Because we don't
expect the sequence order to change much from one scan line to the next,
an algorithm that takes linear time when most of the elements are
already in order would be ideal.
Insertion sort is pretty fast when most of the elements are in order,
but takes O(n^2) time in the worst case (which can happen with our
polygon rendering). Merge sort is pretty good all round. Quick sort is
good almost all the time, but a good implementation is probably too
complicated to implement in the amount of hardware we have available.
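The insertion-sort behaviour described above is easy to see in code: the
inner loop does no shifting at all on already-sorted input (O(n) best
case), but can shift up to i elements per insertion on adversarial
input (O(n^2) worst case). A standard sketch:

```c
#include <stddef.h>

/* Classic insertion sort. On sorted input the while condition fails
   immediately for every i, so the total work is linear; on reversed
   input every element is shifted all the way left, giving O(n^2). */
static void insertion_sort(int *a, size_t n)
{
    for (size_t i = 1; i < n; i++) {
        int key = a[i];
        size_t j = i;
        while (j > 0 && a[j - 1] > key) {
            a[j] = a[j - 1];  /* shift larger elements right */
            j--;
        }
        a[j] = key;
    }
}
```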
We are thinking of some kind of hybrid of merge sort and insertion sort,
but no particular algorithm has suggested itself :-)
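One shape such a hybrid might take (a sketch, not a commitment to any
particular design) is a "natural" merge sort: scan for maximal
already-sorted runs, then merge adjacent runs. A fully sorted input is
a single run, so the scan alone finishes in O(n); with many runs there
are O(log n) merge passes over O(n) elements, giving O(n log n). The
buffer handling here is simplified for illustration and would need
rethinking for our hardware's memory-access constraints.

```c
#include <stddef.h>
#include <string.h>

/* Merge the sorted ranges a[lo..mid) and a[mid..hi) via scratch space. */
static void merge(int *a, int *tmp, size_t lo, size_t mid, size_t hi)
{
    size_t i = lo, j = mid, k = lo;
    while (i < mid && j < hi)
        tmp[k++] = (a[j] < a[i]) ? a[j++] : a[i++];
    while (i < mid) tmp[k++] = a[i++];
    while (j < hi)  tmp[k++] = a[j++];
    memcpy(a + lo, tmp + lo, (hi - lo) * sizeof *a);
}

/* Repeatedly find pairs of adjacent runs and merge them, until the
   whole array is one run. tmp must have room for n ints. */
static void natural_merge_sort(int *a, int *tmp, size_t n)
{
    if (n < 2) return;
    for (;;) {
        size_t lo = 0, runs = 0;
        while (lo < n) {
            size_t mid = lo + 1;          /* end of the first run */
            while (mid < n && a[mid - 1] <= a[mid]) mid++;
            if (mid == n) { runs++; break; }
            size_t hi = mid + 1;          /* end of the second run */
            while (hi < n && a[hi - 1] <= a[hi]) hi++;
            merge(a, tmp, lo, mid, hi);
            lo = hi;
            runs++;                        /* merged pair counts as one run */
        }
        if (runs <= 1) return;             /* single run: array is sorted */
    }
}
```

On our nearly-sorted scan-line data, the few elements that moved past
their neighbours create only a handful of short runs, so most of each
pass is a linear scan rather than a merge.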
Does anyone know of a suitable sorting algorithm?