How to Think About Monads 2

How to Think about Monads starts with composition of pure functions and quickly transitions to composition of pure functions with side-effects. If you haven't read it yet, please do. This article continues where that one left off.

By the end of that article, we had created specialized composition functions that were equivalent to the Monadic bind.

From that article we can conclude that Monadic computations have the following properties:

  1. Functions are executed sequentially
  2. Optionally, control-flow can be short-circuited
  3. Optionally, Monads manage the side-effects (a.k.a. maintain a context)

The only property required of every Monadic computation is sequential execution. But Functional Composition already gives us that.

Is composition just a Poor Man's Monad?

Take the following composition:

f . g . h

The order of execution is h, then g and finally f. There is no way for f to execute before g, and it's not possible for g to execute before h, because each function is "waiting" for the output of the previous one.
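
For instance, here's a minimal sketch with hypothetical f, g and h (these definitions are illustrative, not from the original article):

f, g, h :: Int -> Int
f = (+ 1)        -- runs last
g = (* 2)        -- runs second
h = subtract 3   -- runs first

λ> (f . g . h) 10   -- f (g (h 10)) == f (g 7) == f 14
15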

If you only think of Monads as sequential computers, i.e. a Monad is something that runs its computation sequentially, then Functional Composition is sufficient to accomplish that.

But items 2 and 3 are supported only by Monads.

There's more to Monads than sequential computation

Let's look at the other features that Monads can provide.

Flow Control (short-circuit)

Short-circuiting is a common feature when failures happen in a Monadic computation.

Maybe is an example of a Monad that short-circuits the computation as soon as one of the operations fails, i.e. returns Nothing.

Below is the implementation for Functor, Applicative and Monad for Maybe. Notice how (>>=) doesn't call the function when the previous computation fails with a Nothing.

instance Functor Maybe where
  fmap _ Nothing = Nothing
  fmap f (Just x) = Just $ f x

instance Applicative Maybe where
  pure = Just
  Nothing <*> _ = Nothing
  Just ff <*> x = fmap ff x

instance Monad Maybe where
  Nothing >>= _ = Nothing 		-- short-circuit happens here
  Just xx >>= f = f xx

Here's an example of how a short-circuit would occur in some code. Assume that tryToGetX succeeds, i.e. returns Just x, and alwaysFails fails, i.e. returns Nothing.

m = do
  x <- tryToGetX
  y <- alwaysFails x
  tryToGetZ y

Since alwaysFails returns Nothing, tryToGetZ will never be called and the value m will be Nothing.

It's easier to see this when you don't use the do syntax. The following is the equivalent of the do block above:

m = tryToGetX >>= (\x -> alwaysFails x >>= (\y -> tryToGetZ y))

At any point, >>= can decide to NOT call the function, which means that a Monad can short-circuit the computation at its discretion.

In this case, the second >>= doesn't call (\y -> tryToGetZ y) since its left side, alwaysFails x, evaluates to Nothing.
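
To make this concrete, here's a minimal sketch with hypothetical definitions of tryToGetX, alwaysFails and tryToGetZ (the bodies are illustrative; only their success and failure behavior matters):

tryToGetX :: Maybe Int
tryToGetX = Just 1          -- succeeds

alwaysFails :: Int -> Maybe Int
alwaysFails _ = Nothing     -- always fails

tryToGetZ :: Int -> Maybe Int
tryToGetZ y = Just (y * 2)  -- never reached in the chain below

λ> tryToGetX >>= alwaysFails >>= tryToGetZ
Nothing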

Managing side-effects (a.k.a. maintaining a context)

Managing side-effects is a big part of what Monads do. Here are two different Monads, one whose context is a value and one whose context is a function.

Writer Monad

The Writer Monad is one that maintains a value context, e.g. a log of operations.

To see how this is accomplished we first start with the type definition:

newtype Writer w a = Writer { runWriter :: (a, w) }

Notice that (a, w) is a tuple of a computation of type a and of a log of type w. This is very similar to the side-effect functions used in How to Think about Monads.

An interesting thing to note is that runWriter has a misleading name that sounds like it runs a computation. But what it returns is just a value, i.e. the final computation and its log in a tuple.

Another thing to note about runWriter is that its signature is Writer w a -> (a, w). This is just like record accessor function signatures:

data SomeRecord =
  SomeRecord {
      a::Int
    , b::String
  }

λ> :t a
a :: SomeRecord -> Int

Starting with the Functor implementation:

instance Functor (Writer w) where
  fmap f (Writer (a, w)) = Writer (f a, w)

fmap maps over the fst of the tuple, i.e. the computation results.
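
For example, an illustrative run with a hypothetical log:

λ> runWriter $ fmap (+ 1) (Writer (1, "one\n"))
(2,"one\n")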

And now the Applicative:

instance Monoid w => Applicative (Writer w) where
  pure a = Writer (a, mempty)
  Writer (f, w) <*> Writer (a, w') = Writer (f a, w <> w')

Notice that we have a type constraint on the log. It must be a Monoid. To learn about monoids, you can read Monoids in Haskell, an Introduction.

pure is implemented by wrapping a value, a, in a Writer that has an empty log, mempty.

<*> is implemented by pattern matching the Writers to get at their contents. The function, f, is then applied to the value, a, and the value's log, w', is appended to the function's log, w, using the Monoid's append function, mappend, via the operator <>.

This produces (f a, w <> w') which gets wrapped in a new Writer.
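
For example, an illustrative run with hypothetical logs:

λ> runWriter $ Writer ((+ 1), "inc\n") <*> Writer (1, "one\n")
(2,"inc\none\n")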

And finally, the Monad implementation:

instance Monoid w => Monad (Writer w) where
  Writer (a, w) >>= f = let (a', w') = runWriter (f a)
                        in Writer (a', w <> w')

Once again, w is constrained to be a Monoid.

We pattern match the Writer to get to the value, a, so we can pass it to f. Since f returns a Writer we have to use runWriter to get at the tuple, (a', w').

Once we have all of the parts (the final computation, a', the original log, w, and the log produced by calling f a, viz. w'), we can create a new Writer with the updated context.

Revisiting the side-effect functions in How to Think about Monads:

f' :: Float -> (Float, String)
f' x = (x + 10, "added 10\n")

g' :: Float -> (Float, String)
g' y = (y * 100, "multiplied 100\n")

These can now be rewritten to return Writers:

f' :: Float -> Writer String Float
f' x = Writer (x + 10, "added 10\n")

g' :: Float -> Writer String Float
g' y = Writer (y * 100, "multiplied 100\n")

We can compose these side-effect functions using bind, >>=:

h' :: Writer String Float
h' = g' 10 >>= f'

λ> runWriter h'
(1010, "multiplied 100\nadded 10\n")

The functions f' and g' are very simple. But imagine a more complex function that logs that it's going to call f' before calling it, and then logs that it's going to call g' before calling it.

v :: Float -> Writer String Float
v x =
  let w1 = "calling f'\n"
      Writer (y, w2) = f' x
      w3 = "calling g'\n"
      Writer (z, w4) = g' y
  in
    Writer (z, w1 <> w2 <> w3 <> w4)

This code is painful since we have to manage the logging.

So we can make it easier by using >>= which will manage the logs for us:

v' :: Float -> Writer String Float
v' x =
  Writer ((), "calling f'\n") >>= const (f' x) >>=
    (\y -> Writer ((), "calling g'\n") >>= const (g' y))

Each time the bind function is called, the logs are concatenated. So now we don't have to manage this ourselves.

But the code looks messy. Let's fix this by using do syntax:

v'' :: Float -> Writer String Float
v'' x = do
  Writer ((), "calling f'\n")
  y <- f' x
  Writer ((), "calling g'\n")
  g' y

This is a big improvement, but having to write Writer ((), "calling f'\n") is messy. We can write a helper function to minimize the noise:

tell :: Monoid w => w -> Writer w ()
tell w = Writer ((), w)

Now our code looks much cleaner:

v''' :: Float -> Writer String Float
v''' x = do
  tell "calling f'\n"
  y <- f' x
  tell "calling g'\n"
  g' y
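
Running v''' in GHCi shows both the computation and the accumulated log:

λ> runWriter $ v''' 1
(1100.0,"calling f'\nadded 10\ncalling g'\nmultiplied 100\n")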

If you just saw the code for v''' you might think that context management is some kind of magic. But it's not.

By returning to the Monad implementation, we can see that context management is done on each call to bind, >>=, by returning a NEW Monad with the updated context:

instance Monoid w => Monad (Writer w) where
  Writer (a, w) >>= f = let (a', w') = runWriter (f a)
                        in Writer (a', w <> w') -- NEW monad with updated context

When we code using do syntax, it's more difficult to see the bind calls. So here's v''' rewritten using >>= instead:

v'''' :: Float -> Writer String Float
v'''' x =
  tell "calling f'\n" >>= 
    (\_ -> f' x >>= 
      (\y -> tell "calling g'\n" >>= 
        (\_ -> g' y)))

Since the >>= implementation calls runWriter (f a) first and then prepends w to that result, w', the final context is built in a right-associative way:

"calling f'\n" <> ("adding 10\n" <> ("calling g'\n" <> "multiplying 100\n"))

This works out nicely with <> since it's an associative binary operator by definition (see Monoids in Haskell, an Introduction).
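
For example, appending Strings in either grouping gives the same result:

λ> ("a" <> "b") <> "c" == "a" <> ("b" <> "c")
True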

Monads with independent side-effects will have a context that is most likely a Monoid (or at least a Semigroup).

For Monads where the side-effect is dependent on the previous context, like in the State Monad, the bind, >>=, must make sure to pass the previous value to the function involved in the bind.

This requires the Monad to contain a function instead of a value.

State Monad

The State Monad maintains a state that can be read and written to during the Monadic computation.

Here is the State definition:

newtype State s a = State { runState :: s -> (a, s) }

Unlike runWriter from the Writer Monad, runState is a function. Its signature is State s a -> s -> (a, s).

Wrapping a function complicates the implementation of the State Monad when compared to Monads that wrap values, e.g. Writer, Maybe, etc.

You may want to read Monads of Functions in Haskell if the following implementation seems confusing.

Here is the full State Monad implementation:

instance Functor (State s) where
  fmap f mx = State $ \s -> let (x', s') = runState mx s in (f x', s')

instance Applicative (State s) where
  pure x = State $ \s -> (x, s)
  mf <*> mx = State $ \s ->
    let (f, s') = runState mf s
        (x, s'') = runState mx s'
    in (f x, s'')

instance Monad (State s) where
  mx >>= f = State $ \s -> let (x, s') = runState mx s in runState (f x) s'

First, notice how all the implementations share a similar pattern. They all start with State $ \s ->. This is how the state, s, gets passed in.

Since the Monad contains a function, all evaluations are deferred, so runState is used to produce the computation and the new state. Contrast that with using pattern matching with Monads that contain values.

Since the implementations are more complicated, let's look at each part individually.

First the Functor:

instance Functor (State s) where
  fmap f mx = State $ \s -> let (x', s') = runState mx s in (f x', s')

runState is used to get to the results of mx. Once we have the results, we can apply f to the computation portion of the result, x'. The updated state, s', is returned as the new state.

Notice how all of this requires an initial state, s, hence the \s ->. Then it's all wrapped up in a State Monad.

Remember that the final result is deferred until runState is called on this new State Monad with an initial state.
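
For example, an illustrative run with a hypothetical State value:

λ> runState (fmap (+ 1) (State (\s -> (s * 2, s + 1)))) 10
(21,11)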

Next the Applicative:

instance Applicative (State s) where
  pure x = State $ \s -> (x, s)
  mf <*> mx = State $ \s ->
    let (f, s') = runState mf s
        (x, s'') = runState mx s'
    in (f x, s'')

pure simply creates a State Monad that wraps its parameter, x, along with the state, s, unchanged.

<*> has to evaluate both mf and mx by calling runState on each. The initial state, s, goes through the evaluation of mf first, and then the resulting state, s', is passed to the evaluation of mx, resulting in the final state, s''.

The final result, f x, comes from evaluating mf, viz. f, and applying it to the result of evaluating mx, viz. x.

Once again, the final result and final state tuple are wrapped in a State Monad.
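
For example, an illustrative run where the initial state threads through mf first and then mx:

λ> runState (State (\s -> ((+ s), s)) <*> State (\s -> (s, s + 1))) 10
(20,11)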

Finally, the Monad:

instance Monad (State s) where
  mx >>= f = State $ \s -> let (x, s') = runState mx s in runState (f x) s'

runState is used twice here. The first time is to get at the result of mx and the second time is to get at the results of f x, both of which are State Monads.

The initial state is passed to the first runState call resulting in s' which is then passed to the second runState call resulting in the final state.
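
For example, an illustrative chain where the first computation's state feeds the second:

λ> runState (State (\s -> (s, s + 1)) >>= \x -> State (\s -> (x + s, s * 2))) 10
(21,22)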

To fully understand the State Monad, let's write some code to use it.

First, let's write some helpers, like we did with tell, to help us access the state:

get :: State s s
get = State $ \s -> (s, s)

put :: s -> State s ()
put ns = State $ const ((), ns)

These functions may seem magical at first glance, as if they do nothing at all, but that's not true.

get takes the state that's passed in to the State Monad's function and "leaks" it to the computation.

put takes its state parameter, ns, and simply makes that the resulting state, ignoring the state that's normally passed in to the State Monad's function.

Assuming the state is of type Int, get and put can be used as follows:

do
  s <- get
  put $ s + 1

It would be nicer to have a single line like:

do
  modify (+ 1)

So let's write modify:

modify :: (s -> s) -> State s ()
modify f = get >>= (put . f)

Here we first get the state, pass it to f for modification, and then pass the result to put.
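
For example, running modify in GHCi should give:

λ> runState (modify (+ 1)) 5
((),6)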

Now let's use these in a real implementation:

data Statistics =
  Statistics {
      evenCount :: Int,
      oddCount :: Int
  } deriving Show

initialStats :: Statistics
initialStats = Statistics 0 0

isEven :: Int -> Bool
isEven = (== 0) . (`mod` 2)

isOdd :: Int -> Bool
isOdd = not . isEven

modifyStats :: Int -> State Statistics ()
modifyStats n =
  modify $ \s -> Statistics {
        evenCount = if isEven n then evenCount s + 1 else evenCount s
      , oddCount = if isOdd n then oddCount s + 1 else oddCount s
    }

fs :: Int -> State Statistics Int
fs n = do
  modifyStats n
  return $ n + 10

gs :: Int -> State Statistics Int
gs n = do
  modifyStats n
  return $ n * 100

When this is run, it produces:

λ> runState (fs 1 >>= gs) initialStats
(1100,Statistics {evenCount = 0, oddCount = 2})

Since do syntax can hide a lot of details, here's fs and gs using >>=:

fs' :: Int -> State Statistics Int
fs' n = modifyStats n >>= const (return $ n + 10)

gs' :: Int -> State Statistics Int
gs' n = modifyStats n >>= const (return $ n * 100)

Since modifyStats returns a result of (), we throw it out with const.

Or we can use >> which throws away the results for us:

fs'' :: Int -> State Statistics Int
fs'' n = modifyStats n >> return (n + 10)

gs'' :: Int -> State Statistics Int
gs'' n = modifyStats n >> return (n * 100)

Unlike the Writer Monad, the State Monad's state management is left-associative. To illustrate this, let's write an inefficient version of fs that uses get, put and >>=:

fs''' :: Int -> State Statistics Int
fs''' n =
  get >>=
    (\s ->
      put Statistics {
          evenCount = if isEven n then evenCount s + 1 else evenCount s
        , oddCount = if isOdd n then oddCount s + 1 else oddCount s
      } >>=
        (\_ -> return $ n + 10))
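
Running fs''' gives the same result as fs:

λ> runState (fs''' 1) initialStats
(11,Statistics {evenCount = 0, oddCount = 1})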

Looking again at the Monad implementation:

instance Monad (State s) where
  mx >>= f = State $ \s -> let (x, s') = runState mx s in runState (f x) s'

Since the >>= implementation calls runState mx s first (the left side of >>=) and then runState (f x) s' (the right side of >>=), the final context is built in a left-associative way:

((get's state --> put's state) --> (\_ -> return $ n + 10))

First, get will produce a state that will be passed to put, which will produce a state that will be passed to (\_ -> return $ n + 10).

Each function in the computation is dependent on the previous computation as far as state is concerned. This is in sharp contrast to the Writer Monad where each function was independent of the previous computation.

Conclusion

Hopefully, implementing the Maybe, Writer and State Monads and understanding how they accomplish flow control and side-effect management has removed some of the mystery of Monads.

What's interesting is that all the implementations are very simple, but provide great power and flexibility. And when coupled with some Monadic helper functions, i.e. functions of the form a -> m b, the Monad gains its own API.
