Learning Monads by Example

This is an updated version of a repository I released on GitHub some months ago while researching for my bachelor thesis. If you are an expert on the topic and see something that can be improved, or if anything feels off, don’t hesitate to leave a comment on this post or open an issue in the repository.


While writing my bachelor thesis, a heuristic search framework in Haskell, I ran into a roadblock I had long feared: I needed to understand how monads work. When performing a search, I needed (and the users of the framework would probably feel the same way) to collect intermediate data from the search, such as the number of nodes expanded or enqueued. After some quick research, everything pointed to monads. This is a quick example that helped me understand the concept without much of an introduction to Haskell’s idiosyncrasies or syntax, so I recommend some previous literature that can help you take off in this topic:

If you have already read those but are still not sure how to implement a monad in Haskell, maybe this little example will help you and save you a visit to some of the dead ends I had to face.

Once again, I am not an expert, so feel free to open an issue to give any kind of feedback: improvements to the explanation, corrections of concepts, better tips, more literature… Thank you, dear reader!

An example: Euclidean Algorithm

A pure approach

One of the easiest and prettiest examples to program in Haskell is the Euclidean Algorithm to find the Greatest Common Divisor of two given numbers. We can see the code for that function here:

simpleGCD :: (Integral a) => a -> a -> a
simpleGCD a b
  | a < b          = simpleGCD b a
  | a `mod` b == 0 = b
  | otherwise      = simpleGCD b (a `mod` b)

This function receives two integers and recursively applies the remainder operation until it reaches 0. The code is short and it works like a charm:

λ> simpleGCD 9282 12376
3094
(0.01 secs, 4128256 bytes)
λ> simpleGCD 9293 12376
1
(0.01 secs, 4129080 bytes)

However, what if we want to get a trace of the execution? In a language like Python we could simply print the steps in each iteration, but that implies side effects, and doing something like that is not so easy in a purely functional environment like the one Haskell offers. Another option is to create a type Log that contains both the value and a list of strings:

data Log a = Log { getValue :: a, getLogs :: [String] } deriving Show

In that list of strings we can record each of the steps performed in the recursive calls, but doing that by hand would create a huge mess and the code would be barely readable. What we can do instead is check whether it makes sense to implement Log as a monad.

Turning Log into a monad

In Haskell, for a type to be a monad it also has to be a Functor and an Applicative. Don’t worry though: all monads are by definition functors and applicatives, so if you really need to turn your data into a monad there will be no problem. Still, it is a healthy exercise to check whether your data makes sense as a monad.

To make Log an instance of the Functor class, we need to define its behavior under fmap:

instance Functor Log where
  fmap f (Log x logs) = Log (f x) logs
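As a quick check, assuming the derived Show instance from above, mapping a function over a Log in GHCi should give something like:

λ> fmap (*2) (Log 21 ["half of the answer"])
Log {getValue = 42, getLogs = ["half of the answer"]}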

With this instance, every time a function f :: a -> b is mapped over a Log a, it is applied to the value while the list of logs is left as it is. The next step is to make the type an instance of Applicative, which requires the following functions:

instance Applicative Log where
  pure x = Log x []
  Log f log <*> Log x log' = Log (f x) (log ++ log')

Here, pure turns a value into a Log by creating an object with that value and an empty list of logs. The <*> operator looks a little more obscure, but the idea is that if a Log contains a function, that function is applied to the value of another Log and both lists of logs are concatenated. This makes perfect sense when we think about partial application in Haskell.
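A small illustration in GHCi (the messages here are made up for the example) would look like:

λ> Log (+1) ["increment"] <*> Log 41 ["initial value"]
Log {getValue = 42, getLogs = ["increment","initial value"]}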

Finally, we are now able to make Log an instance of Monad. To do so, we must define:

instance Monad Log where
  return = pure
  (Log x log) >>= f = let Log y new = f x in Log y (log ++ new)

return wraps a new value in a Log, so we can reuse the definition of pure. The bind operator (>>=), on the other hand, defines the behavior we have been trying to achieve: every time we bind a Log to a function that takes its value and returns a new Log, we keep the value returned by the function and append the logs it produced to the old list of logs.
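A tiny example (again, the messages are made up) shows the effect:

λ> Log 3 ["starting with 3"] >>= \x -> Log (x * 2) ["doubled it"]
Log {getValue = 6, getLogs = ["starting with 3","doubled it"]}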

Although it may seem strange to work directly with >>=, notice that this is the real operator that a do-block uses under its syntactic sugar. Taking this into account, we can create a function that stores a new message in a dummy Log, to be appended to the rest by the end of the recursion:

saveLog :: String -> Log ()
saveLog str = Log () [str]

Although this function may seem unnecessary (and it kind of is), it provides a cleaner syntax, and it is actually replicated in the provided Writer monad as tell.
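To see why saveLog composes nicely inside a do-block, here is a rough hand-desugaring (a sketch of the idea, not the exact translation GHC performs; example is just an illustrative name):

-- The block
--   do saveLog "some step"
--      return 42
-- is rewritten by the compiler roughly into
example :: Log Int
example = saveLog "some step" >>= \_ -> return 42
-- which, with the definitions above, evaluates to Log 42 ["some step"]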

Using all these pieces, we can finally update the algorithm to log its operations! In each guard (except the swapping one) we just need to create a do-block where we log a message and then bind a value to that block. The final version of the function is:

logGCD :: (Integral a, Show a) => a -> a -> Log a
logGCD x y
  | x < y     = logGCD y x
  | y == 0    = do
      saveLog $ "Greatest Common Divisor found: " ++ show x
      return x
  | otherwise = do
      saveLog $ show x ++ " mod(" ++ show y ++ ") = " ++ show (x `mod` y)
      logGCD y (x `mod` y)

We can try the function with a small helper, stepsGCD, that prints all the logs in a readable way.
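stepsGCD itself is not shown in this post; a minimal sketch of such a helper, assuming it only needs to print the collected log lines, could be:

-- Print every collected log line, one per line.
stepsGCD :: (Integral a, Show a) => a -> a -> IO ()
stepsGCD x y = mapM_ putStrLn (getLogs (logGCD x y))

With that in place: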

λ> stepsGCD 9282 12376
12376 mod(9282) = 3094
9282 mod(3094) = 0
Greatest Common Divisor found: 3094
(0.01 secs, 2575200 bytes)
λ> stepsGCD 9293 12376
12376 mod(9293) = 3083
9293 mod(3083) = 44
3083 mod(44) = 3
44 mod(3) = 2
3 mod(2) = 1
2 mod(1) = 0
Greatest Common Divisor found: 1
(0.01 secs, 2575400 bytes)

We can see that the trace is now nicely printed, and we have only modified the GCD function about as much as we would have in an imperative language. The main advantage of monads is modularity: they help us isolate side effects while keeping the code pure.

The Monad laws

It is really important to understand that the Monad type class comes with some laws that your type has to fulfill.
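Written informally in Haskell notation, the three laws read:

return a >>= f     ≡  f a                       -- left identity
m >>= return       ≡  m                         -- right identity
(m >>= f) >>= g    ≡  m >>= (\x -> f x >>= g)   -- associativity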

The left-identity law basically says that return x >>= f should produce exactly the same result as f x. Trying it with our type and logGCD we get:

λ> return 3 >>= (logGCD 5)
Log {getValue = 1, getLogs = ["5 mod(3) = 2","3 mod(2) = 1","2 mod(1) = 0",
"Greatest Common Divisor found: 1"]}
λ> (logGCD 5) 3
Log {getValue = 1, getLogs = ["5 mod(3) = 2","3 mod(2) = 1","2 mod(1) = 0",
"Greatest Common Divisor found: 1"]}

The right-identity law, similar to the previous one, says that m >>= return should be equal to m.

λ> let m = logGCD 5 3
λ> m
Log {getValue = 1, getLogs = ["5 mod(3) = 2","3 mod(2) = 1","2 mod(1) = 0",
"Greatest Common Divisor found: 1"]}
λ> m >>= return
Log {getValue = 1, getLogs = ["5 mod(3) = 2","3 mod(2) = 1","2 mod(1) = 0",
"Greatest Common Divisor found: 1"]}

The associativity law states that a chain of monadic operations should not depend on how it is nested.

λ> return 14 >>= (logGCD 35) >>= (logGCD 21)
Log {getValue = 7, getLogs = ["35 mod(14) = 7","14 mod(7) = 0",
"Greatest Common Divisor found: 7","21 mod(7) = 0",
"Greatest Common Divisor found: 7"]}
λ> ( return 14 >>= (logGCD 35) ) >>= (logGCD 21)
Log {getValue = 7, getLogs = ["35 mod(14) = 7","14 mod(7) = 0",
"Greatest Common Divisor found: 7","21 mod(7) = 0",
"Greatest Common Divisor found: 7"]}
λ> return 14 >>= (\x -> ((logGCD 35) x) >>= (logGCD 21))
Log {getValue = 7, getLogs = ["35 mod(14) = 7","14 mod(7) = 0",
"Greatest Common Divisor found: 7","21 mod(7) = 0",
"Greatest Common Divisor found: 7"]}

Yes, thanks to the Monad instance, we can chain GCD operations to obtain combined logs. Super cool, isn’t it?

Do I really have to do all this?

Actually, no. This was just an example of how to build a super simple monadic type, but the pattern is so simple and common that Haskell provides its own implementation, Control.Monad.Writer. If you ever need it in the real world, don’t reinvent the wheel and just use the provided one. Take into account, however, that if you need high-performance computations, the overhead generated by Writer may be too high, and it may be worth checking other packages (such as fast-logger) or implementing your own monad.
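For reference, here is a rough sketch of how the logging GCD could look on top of the Writer monad from mtl (writerGCD is just an illustrative name, not something the library provides):

import Control.Monad.Writer

-- A Writer-based variant of logGCD: tell plays the role of saveLog.
writerGCD :: (Integral a, Show a) => a -> a -> Writer [String] a
writerGCD x y
  | x < y     = writerGCD y x
  | y == 0    = do
      tell ["Greatest Common Divisor found: " ++ show x]
      return x
  | otherwise = do
      tell [show x ++ " mod(" ++ show y ++ ") = " ++ show (x `mod` y)]
      writerGCD y (x `mod` y)

-- runWriter (writerGCD 9282 12376) yields the pair (3094, <log lines>)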

Also, even though I said it is a healthy exercise to think about whether your data also makes sense as a Functor and an Applicative, writing those instances by hand is not strictly necessary. If you are struggling with them, you can use the functions provided by Control.Monad to define the Functor and Applicative instances like this:

import Control.Monad (liftM, ap)

instance Functor Log where
  fmap = liftM

instance Applicative Log where
  pure  = return
  (<*>) = ap

And then just focus on defining the Monad instance (note that with this approach the Monad instance has to define return explicitly, since pure now delegates to it). There might be a more efficient way to write the instances, but this one works in all cases. Just be sure to check that your data really makes sense as a monad before implementing it as such.

Further reading

After understanding this, it is probably useful to read: