module Msort (msortBy, msort) where

msortBy :: (a -> a -> Ordering) -> [a] -> [a]
msortBy orderOp = foldr merge [] . foldr mergeStack [] . runs
  where
    -- mergeStack :: [a] -> [[a]] -> [[a]]
    -- mergeStack "k" ["", "ij", "", "abcdefgh"] = ["k", "ij", "", "abcdefgh"]
    -- mergeStack "l" ["k", "ij", "", "abcdefgh"] = ["", "", "ijkl", "abcdefgh"]
    mergeStack x ([]:s) = x : s
    mergeStack x (y:s)  = [] : mergeStack (merge x y) s
    mergeStack x []     = [x]

    -- merge :: [a] -> [a] -> [a]
    merge xx@(x:xs) yy@(y:ys)
      | orderOp x y /= GT = x : merge xs yy
      | otherwise         = y : merge xx ys
    merge x [] = x
    merge [] y = y

    -- runs :: [a] -> [[a]]
    runs (x:xs) = collectRun x x (x:) xs
    runs []     = []

    -- collectRun :: a -> a -> ([a] -> [a]) -> [a] -> [[a]]
    collectRun mn mx f (x:xs)
      | orderOp x mn == LT = collectRun x mx (\y -> x : f y) xs  -- prepend
      | orderOp x mx /= LT = collectRun mn x (\y -> f (x:y)) xs  -- append
    collectRun mn mx f x = f [] : runs x

msort :: Ord a => [a] -> [a]
msort = msortBy compare
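The examples in the mergeStack comments can be checked directly. A minimal standalone sketch (merge specialized to Ord here for brevity, instead of taking orderOp), showing that the stack behaves like a binary counter: pushing into an empty slot fills it, while pushing into a full slot merges and carries upward.

```haskell
-- merge and mergeStack copied from the gist, specialized to Ord.
merge :: Ord a => [a] -> [a] -> [a]
merge xx@(x:xs) yy@(y:ys)
  | x <= y    = x : merge xs yy
  | otherwise = y : merge xx ys
merge x [] = x
merge [] y = y

mergeStack :: Ord a => [a] -> [[a]] -> [[a]]
mergeStack x ([]:s) = x : s
mergeStack x (y:s)  = [] : mergeStack (merge x y) s
mergeStack x []     = [x]

main :: IO ()
main = do
  -- An empty slot at the bottom just gets filled ...
  print (mergeStack "k" ["", "ij", "", "abcdefgh"])
  -- ... a full slot merges and carries, like incrementing a binary counter.
  print (mergeStack "l" ["k", "ij", "", "abcdefgh"])
```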
But here's a funny thing. Timing the function in GHCi on a big string

let big = take 10000000 $ cycle "the quick brown fox jumps over the lazy dog."

msort is actually faster without the foldr mergeStack [] stage (basically just merging all the runs in one pass), and nearly as fast as the built-in sort. I have no explanation for that at all.
UPDATE: Explained below.
Moreover, while msort and sort are both designed to sort already-sorted data quickly, in both cases sorting and then sorting again is unexpectedly much slower:
*Main> let big = take 10000000 $ cycle "the quick brown fox jumps over the lazy dog."
*Main> take 1 $ sort big
" "
(10.03 secs, 4,525,157,720 bytes)
*Main> take 1 $ sort $ sort big
" "
(128.88 secs, 61,507,910,736 bytes)
My sort fares worse:
*Main> take 1 $ msort big
" "
(20.07 secs, 7,257,693,600 bytes)
*Main> take 1 $ msort $ msort big
" "
(239.14 secs, 111,587,668,512 bytes)
For lack of a better theory, I chalk this up to some weird effect of laziness.
UPDATE: Explained below.
Yes, it was an effect of laziness. "take 1" did not force the sort function to sort the whole data set; it sufficed to compare the first elements of the runs, and in that case, for all of the sorts, the cost was linear. It's amusing that non-strict evaluation is able to "discover" a cheaper way to find the least element while (partially) evaluating a sort function!
Taking the maximum of the sorted list forces complete evaluation and gives the expected run times. Sorting the result of a sort then adds only a trivial O(n) cost, as expected.
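One way to check this explanation is to count comparisons instead of timing anything. A rough sketch (the names countingCompare and comparisonsFor are mine, and the unsafePerformIO instrumentation is a hack that is only trustworthy when compiled without optimizations): demanding just the head of GHC's sortBy should cost far fewer comparisons than forcing the whole sorted list.

```haskell
import Control.Exception (evaluate)
import Data.IORef (IORef, modifyIORef', newIORef, readIORef)
import Data.List (sortBy)
import System.IO.Unsafe (unsafePerformIO)

-- A comparator instrumented to count how many comparisons are actually
-- demanded. unsafePerformIO is measurement-only; not rigorous benchmarking.
countingCompare :: IORef Int -> Int -> Int -> Ordering
countingCompare ref a b = unsafePerformIO $ do
  modifyIORef' ref (+ 1)
  return (compare a b)

-- Comparisons needed to force 'observe' applied to the sorted list.
comparisonsFor :: ([Int] -> Int) -> [Int] -> IO Int
comparisonsFor observe xs = do
  ref <- newIORef 0
  _ <- evaluate (observe (sortBy (countingCompare ref) xs))
  readIORef ref

main :: IO ()
main = do
  let xs = concat (replicate 50 [1 .. 100])  -- 5000 elements in 50 sorted runs
  headCount <- comparisonsFor head xs        -- forces only the least element
  fullCount <- comparisonsFor last xs        -- forces the entire sorted list
  putStrLn ("take 1:    " ++ show headCount ++ " comparisons")
  putStrLn ("full sort: " ++ show fullCount ++ " comparisons")
```

With run detection being linear and only the heads of the runs compared, the first figure stays near n while the second grows like n log n.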
You can uncomment the local type signatures if you enable {-# LANGUAGE ScopedTypeVariables #-} and change the top-level signature to msortBy :: forall a. (a -> a -> Ordering) -> [a] -> [a].
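Concretely, the suggested change would look like this (a sketch of the whole module with the local signatures uncommented; note the helpers need no Ord constraint, since they use orderOp rather than compare):

```haskell
{-# LANGUAGE ScopedTypeVariables #-}
module Msort (msortBy, msort) where

msortBy :: forall a. (a -> a -> Ordering) -> [a] -> [a]
msortBy orderOp = foldr merge [] . foldr mergeStack [] . runs
  where
    -- 'a' below refers to the 'a' bound by the forall above,
    -- so these local signatures now typecheck.
    mergeStack :: [a] -> [[a]] -> [[a]]
    mergeStack x ([]:s) = x : s
    mergeStack x (y:s)  = [] : mergeStack (merge x y) s
    mergeStack x []     = [x]

    merge :: [a] -> [a] -> [a]
    merge xx@(x:xs) yy@(y:ys)
      | orderOp x y /= GT = x : merge xs yy
      | otherwise         = y : merge xx ys
    merge x [] = x
    merge [] y = y

    runs :: [a] -> [[a]]
    runs (x:xs) = collectRun x x (x:) xs
    runs []     = []

    collectRun :: a -> a -> ([a] -> [a]) -> [a] -> [[a]]
    collectRun mn mx f (x:xs)
      | orderOp x mn == LT = collectRun x mx (\y -> x : f y) xs  -- prepend
      | orderOp x mx /= LT = collectRun mn x (\y -> f (x:y)) xs  -- append
    collectRun _ _ f x = f [] : runs x

msort :: Ord a => [a] -> [a]
msort = msortBy compare
```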
collectRun collects both ascending and descending runs; the descending ones are built in reverse (by prepending), so every run comes out ascending.
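To illustrate, here is the run collector pulled out on its own (specialized to compare for brevity). Note that because it tracks both a minimum and a maximum, it can extend a run at either end, so a descending prefix followed by an ascending tail can still form a single run.

```haskell
-- runs and collectRun copied from the gist, specialized to Ord.
runs :: Ord a => [a] -> [[a]]
runs (x:xs) = collectRun x x (x:) xs
runs []     = []

collectRun :: Ord a => a -> a -> ([a] -> [a]) -> [a] -> [[a]]
collectRun mn mx f (x:xs)
  | x < mn  = collectRun x mx (\y -> x : f y) xs  -- extend downward (prepend)
  | x >= mx = collectRun mn x (\y -> f (x:y)) xs  -- extend upward (append)
collectRun _ _ f x = f [] : runs x               -- neither fits: close the run

main :: IO ()
main = do
  print (runs "cba")   -- a purely descending run comes out ascending
  print (runs "badc")  -- 'c' fits neither end of "abd", so a new run starts
```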