module Msort (msortBy, msort) where

msortBy :: (a -> a -> Ordering) -> [a] -> [a]
msortBy orderOp =
    foldr merge [] . foldr mergeStack [] . runs
  where
    -- mergeStack :: [a] -> [[a]] -> [[a]]
    -- mergeStack "k" [ "" "ij" "" "abcdefgh" ] = [ "k" "ij" "" "abcdefgh" ]
    -- mergeStack "l" [ "k" "ij" "" "abcdefgh" ] = [ "" "" "ijkl" "abcdefgh" ]
    mergeStack x ([]:s) = x:s
    mergeStack x (y:s)  = []:mergeStack (merge x y) s
    mergeStack x []     = [x]
    -- merge :: [a] -> [a] -> [a]
    merge xx@(x:xs) yy@(y:ys)
      | orderOp x y /= GT = x:merge xs yy
      | otherwise         = y:merge xx ys
    merge x [] = x
    merge [] y = y
    -- runs :: Ord a => [a] -> [[a]]
    runs (x:xs) = collectRun x x (x:) xs
    runs []     = []
    -- collectRun :: Ord a => a -> a -> ([a] -> [a]) -> [a] -> [[a]]
    collectRun mn mx f (x:xs)
      | orderOp x mn == LT = collectRun x mx (\y -> x:(f y)) xs -- prepend
      | orderOp x mx /= LT = collectRun mn x (\y -> f (x:y)) xs -- append
    collectRun mn mx f x = f [] : runs x

msort :: Ord a => [a] -> [a]
msort = msortBy compare
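As a quick check of the interface, here is a small usage sketch. It repeats the gist's definitions so it runs on its own, and sorts with both the default ordering and a flipped comparator:

```haskell
module Main where

-- Definitions repeated from the gist so this sketch runs standalone.
msortBy :: (a -> a -> Ordering) -> [a] -> [a]
msortBy orderOp =
    foldr merge [] . foldr mergeStack [] . runs
  where
    mergeStack x ([]:s) = x:s
    mergeStack x (y:s)  = []:mergeStack (merge x y) s
    mergeStack x []     = [x]
    merge xx@(x:xs) yy@(y:ys)
      | orderOp x y /= GT = x:merge xs yy
      | otherwise         = y:merge xx ys
    merge x [] = x
    merge [] y = y
    runs (x:xs) = collectRun x x (x:) xs
    runs []     = []
    collectRun mn mx f (x:xs)
      | orderOp x mn == LT = collectRun x mx (\y -> x:(f y)) xs -- extend run at front
      | orderOp x mx /= LT = collectRun mn x (\y -> f (x:y)) xs -- extend run at back
    collectRun _ _ f xs = f [] : runs xs

msort :: Ord a => [a] -> [a]
msort = msortBy compare

main :: IO ()
main = do
  print (msort [3, 1, 4, 1, 5, 9, 2, 6 :: Int]) -- ascending
  print (msortBy (flip compare) "dagibecfjh")   -- descending via flipped comparator
```

Note that merge takes from the left list on ties (orderOp x y /= GT), so equal elements keep their original relative order.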
It turns out that this sort is basically a functional version of Timsort where:
- mergeStack maintains the stack as a series of merges whose lengths are 2^n, whereas Timsort's stackCollapse function maintains the lengths as something like a Fibonacci series
- Timsort creates runs when it doesn't find them
This sort works nicely in functional languages or on linked lists since it doesn't have to count lengths and needs only cons rather than expensive concats. There's no point in implementing a conventional imperative, array-based version since Timsort already exists.
An imperative implementation for linked lists is useful, though. It can be implemented to require no additional storage aside from a stack since the (mutable) links can be used to represent the intermediate merged lists. Here's a version in C++.
Modified to clarify the algorithm and emphasize the relation to Timsort.
Amusingly, the function still sorts if you replace
foldr merge [] . foldr mergeStack [] . runs
with
foldr merge [] . runs
But in that case you're just merging all the runs (effectively an insertion sort), which is O(n*m) where, again, m is the number of runs: best case O(n), worst case O(n^2). The "foldr mergeStack []" stage ensures that merges are scheduled as they are in conventional mergesort.
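The degenerate pipeline can be checked directly. This sketch repeats merge and runs from the gist (specialized to Ord, with naiveSort as my name for the cut-down pipeline) and confirms that merging the runs one at a time still sorts:

```haskell
module Main where

import Data.List (sort)

-- merge and runs repeated from the gist, specialized to Ord.
merge :: Ord a => [a] -> [a] -> [a]
merge xx@(x:xs) yy@(y:ys)
  | x <= y    = x : merge xs yy
  | otherwise = y : merge xx ys
merge x [] = x
merge [] y = y

runs :: Ord a => [a] -> [[a]]
runs (x:xs) = collectRun x x (x:) xs
runs []     = []

collectRun :: Ord a => a -> a -> ([a] -> [a]) -> [a] -> [[a]]
collectRun mn mx f (x:xs)
  | x < mn  = collectRun x mx (\y -> x:(f y)) xs -- descending run: prepend
  | x >= mx = collectRun mn x (\y -> f (x:y)) xs -- ascending run: append
collectRun _ _ f xs = f [] : runs xs

-- The degenerate sort: fold every run into a single accumulator, one merge at a time.
naiveSort :: Ord a => [a] -> [a]
naiveSort = foldr merge [] . runs

main :: IO ()
main = do
  let xs = "the quick brown fox jumps over the lazy dog"
  print (naiveSort xs == sort xs)
```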
Modified to collect both ascending and (reversed) descending runs.
But here's a funny thing. Timing the function on a big string
let big = take 10000000 $ cycle "the quick brown fox jumps over the lazy dog."
msort is actually faster when timed in GHCi without foldr mergeStack []
(basically just merging all the runs) and nearly as fast as the built-in sort. I have no explanation for that at all.
UPDATE: Explained below.
Moreover, while msort and sort are designed to sort already-sorted data quickly, in both cases sorting and then sorting the result again is unexpectedly much slower!
*Main> let big = take 10000000 $ cycle "the quick brown fox jumps over the lazy dog."
*Main> take 1 $ sort big
" "
(10.03 secs, 4,525,157,720 bytes)
*Main> take 1 $ sort $ sort big
" "
(128.88 secs, 61,507,910,736 bytes)
My sort fares worse:
*Main> take 1 $ msort big
" "
(20.07 secs, 7,257,693,600 bytes)
*Main> take 1 $ msort $ msort big
" "
(239.14 secs, 111,587,668,512 bytes)
For lack of a better theory, I chalk this up to some weird effect of laziness.
UPDATE: Explained below.
Yes, it was an effect of laziness. "take 1" did not force the sort function to sort the whole data set. It sufficed to compare the first element of each run and in that case, for all sorts, the cost was linear. It's amusing that non-strict evaluation is able to "discover" a cheaper way to find the least element while (partially) evaluating a sort function!
Taking the "maximum" of the sorted list forces complete evaluation and expected run times. Sorting the result of a sort adds only a trivial O(n) cost, as expected.
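One way to confirm this explanation is to count comparisons directly. The sketch below repeats msortBy from the gist and wires in a counting comparator backed by an IORef via unsafePerformIO; that counter (and the names count, partial, full) is my diagnostic hack for illustration, not part of the gist. Demanding only the head performs far fewer comparisons than demanding the fully sorted list:

```haskell
module Main where

import Data.IORef
import System.IO.Unsafe (unsafePerformIO)

-- msortBy repeated from the gist so this sketch runs standalone.
msortBy :: (a -> a -> Ordering) -> [a] -> [a]
msortBy orderOp =
    foldr merge [] . foldr mergeStack [] . runs
  where
    mergeStack x ([]:s) = x:s
    mergeStack x (y:s)  = []:mergeStack (merge x y) s
    mergeStack x []     = [x]
    merge xx@(x:xs) yy@(y:ys)
      | orderOp x y /= GT = x:merge xs yy
      | otherwise         = y:merge xx ys
    merge x [] = x
    merge [] y = y
    runs (x:xs) = collectRun x x (x:) xs
    runs []     = []
    collectRun mn mx f (x:xs)
      | orderOp x mn == LT = collectRun x mx (\y -> x:(f y)) xs
      | orderOp x mx /= LT = collectRun mn x (\y -> f (x:y)) xs
    collectRun _ _ f xs = f [] : runs xs

main :: IO ()
main = do
  count <- newIORef (0 :: Int)
  -- Impure diagnostic hack: bump the counter on every comparison.
  let cmp a b = unsafePerformIO $ do
        modifyIORef' count (+ 1)
        return (compare a b)
  let big = take 40000 $ cycle "the quick brown fox jumps over the lazy dog."
  print (head (msortBy cmp big))   -- demands only the least element
  partial <- readIORef count
  writeIORef count 0
  print (length (msortBy cmp big)) -- demands the whole sorted list
  full <- readIORef count
  print (partial < full)           -- the head costs far fewer comparisons
```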
You can uncomment the local type signatures if you enable {-# LANGUAGE ScopedTypeVariables #-} and use msortBy :: forall a . (a -> a -> Ordering) -> [a] -> [a].
A variation on mergesort that incrementally merges items as they become available.
The idea here is to consider the sequence of merges that occurs during a conventional mergesort and then to perform that series of merges as each new item becomes available. So rather than recursively halving the list and then merging, simply maintain a stack of partial merges. An empty stack element represents an available right branch of the merge tree. When a new item is added, the topmost non-empty elements of the stack are merged and replaced by empty elements; the merged list replaces the first empty element if there is one, or is put on the bottom of the stack otherwise. If you consider empty and non-empty elements as bits (0 for empty, 1 for non-empty), they encode the number of lists pushed onto the stack in binary.
For example, to sort
d a g i b e c f j h
push the items one at a time (shown here as single-element lists, top of the stack on the left, "" marking an empty slot):
push d -> ["d"]
push a -> ["", "ad"]
push g -> ["g", "ad"]
push i -> ["", "", "adgi"]
push b -> ["b", "", "adgi"]
push e -> ["", "be", "adgi"]
push c -> ["c", "be", "adgi"]
push f -> ["", "", "", "abcdefgi"]
push j -> ["j", "", "", "abcdefgi"]
push h -> ["", "hj", "", "abcdefgi"]
Then merge the remaining lists, bottom up, to get "abcdefghij".
Unlike traditional recursive approaches, merging can proceed as new items are added to the stack rather than waiting until all items have been acquired, and there is no need to split the list. Due to the incremental nature of the sort, it's easy to consolidate runs of ascending or (reversed) descending items, so the complexity is O(n log m), where m is the number of runs. The worst case is O(n log n) and the best case, for already-sorted data, is O(n).
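The stack discipline can also be watched in action. This sketch repeats merge and mergeStack from the gist (specialized to Char lists) and pushes one-element lists left to right for clarity; the gist itself pushes whole runs, and via foldr, from the right end. It prints the stack after each push and then the final bottom-up merge:

```haskell
module Main where

-- merge and mergeStack repeated from the gist, specialized to String.
merge :: String -> String -> String
merge xx@(x:xs) yy@(y:ys)
  | x <= y    = x : merge xs yy
  | otherwise = y : merge xx ys
merge x "" = x
merge "" y = y

mergeStack :: String -> [String] -> [String]
mergeStack x ("":s) = x : s
mergeStack x (y:s)  = "" : mergeStack (merge x y) s
mergeStack x []     = [x]

main :: IO ()
main = do
  -- Push single letters left to right, showing each intermediate stack.
  let stacks = scanl (flip mergeStack) [] (map (: []) "dagibecfjh")
  mapM_ print (tail stacks)
  -- Finally merge what remains on the stack, bottom up.
  putStrLn (foldr merge "" (last stacks))
```

The last stack printed matches the worked example above, and the final merge yields the sorted string.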