This article explains it well: http://crypto.stanford.edu/~blynn/haskell/foldl.html. Here I just review it.
The rule of thumb is this:
- if you want short circuiting with foldr, use a combiner that is lazy in its right argument
- if you want short circuiting with foldl, use a combiner that is lazy in its left argument
- there's rarely a reason to use a short-circuiting foldl, since you can always replace it with a short-circuiting foldr
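To make the distinction concrete, here is a minimal sketch. (&&) is the standard lazy-in-the-right-argument combiner; lazyLeft is a hypothetical name for a combiner that is lazy in its left argument, mirroring the one used in the foldl example later.

```haskell
-- A combiner is "lazy on the right" when it can return without forcing
-- its second argument. (&&) pattern-matches only on its left argument,
-- so False && undefined is just False.
--
-- A "lazy on the left" combiner is the mirror image: it can return
-- without forcing its first (accumulated) argument.
lazyLeft :: Int -> Int -> Int
lazyLeft acc y = if y == 5 then 42 else acc  -- ignores acc when y == 5

main :: IO ()
main = do
  print (False && undefined)    -- short circuit: the right side is never forced
  print (lazyLeft undefined 5)  -- short circuit: the left side is never forced
```

Passing undefined is the usual way to demonstrate that an argument is genuinely never evaluated.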
Examples:
Here foldr works on an infinite list. The (&&) combiner is lazy in its right argument, so it short circuits on the first False it receives as its left argument. That is, it returns after one step into the infinite list of Falses.
> foldr1 (&&) $ repeat False
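In a compiled program rather than GHCi, the same short circuit can be checked like this; the fold inspects only the first element of the infinite list:

```haskell
main :: IO ()
main = print (foldr1 (&&) (repeat False))  -- returns False after one step
```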
Here foldl works on a finite list. It cannot work on an infinite list, because it must expand the entire expression before any combiner evaluates, building an ever-growing chain of thunks that never terminates. But on a finite list this still results in a short circuit.
> import Debug.Trace (traceShow)
> foldl1 (\x y -> traceShow y $ (\a b -> if y == 5 then 42 else a) x y) [1..5]
5
42
Remember that foldl1 f expands a list like:
f (f (f (f 1 2) 3) 4) 5
Writing f for the lazy-on-the-left combiner defined above, you can see that once the expression is fully expanded (and it fits in memory), the combiner only needs to evaluate once, on the outermost application f (...) 5, to give you back the 42.
We can easily turn it into an equivalent foldr form:
> foldr1 (flip f) [5,4..1]
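To check the equivalence concretely, here f names the lazy-on-the-left combiner from the foldl1 example above, minus the tracing:

```haskell
-- The lazy-on-the-left combiner from the foldl1 example, without traceShow.
f :: Int -> Int -> Int
f x y = if y == 5 then 42 else x

main :: IO ()
main = do
  print (foldl1 f [1..5])           -- 42
  print (foldr1 (flip f) [5,4..1])  -- 42, same result
```

In the foldr form, flip f is now lazy in its right argument, so the fold short circuits without forcing the rest of the list at all.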
This is really cool, because simulating break in functional languages usually involves returning the base case early in the middle of a recursive function. But you can't do that with fold, because you don't control the implementation of fold.
But as long as we can create a lazy on the left or lazy on the right function,
then we get the same capability, and we don't perform useless computation. The
operational semantics change, but the denotational semantics stay the same.
Some major use cases for this are the all and any predicate functions on Foldable structures.
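Both are foldr-style folds whose combiners are lazy in their right argument, which is why they short circuit even on infinite lists. A quick check, assuming GHC's Prelude:

```haskell
main :: IO ()
main = do
  print (any even [1..])    -- True: stops at 2, never touches the rest
  print (all (< 10) [1..])  -- False: stops at 10
```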