All of this has been extracted from https://typelevel.org/cats and https://www.scalawithcats.com/
Type classes are tools for polymorphism. We can use definitions of what we call type classes to define functions that are valid for a wider range of types `A`. With them we can create structures that guarantee required properties, properties that might not be present if we only played around with types and subtypes.
Therefore, the whole cats ecosystem is built upon the definition and usage of specific type classes.
A type `A` can form a Semigroup if it has an associative binary operation.
trait Semigroup[A] {
  def combine(x: A, y: A): A
}
With cats implicits we can use infix notation. For example:
import cats.implicits._
// import cats.implicits._
1 |+| 2
// res3: Int = 3
Associativity is the law Semigroups must satisfy. Therefore, we can rely on it to use either `foldLeft` or `foldRight` when summing lists of `Int` and get the same result.
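As a quick check, using the `|+|` syntax shown above, both folds agree because `combine` is associative:

```scala
import cats.implicits._

val xs = List(1, 2, 3, 4)

// Associativity means the grouping does not matter:
val fromLeft  = xs.foldLeft(0)(_ |+| _)  // (((0 |+| 1) |+| 2) |+| 3) |+| 4
val fromRight = xs.foldRight(0)(_ |+| _) // 1 |+| (2 |+| (3 |+| (4 |+| 0)))
// fromLeft == fromRight == 10
```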
A Monoid is an extension of a Semigroup that adds an `empty` value:
trait Semigroup[A] {
  def combine(x: A, y: A): A
}

trait Monoid[A] extends Semigroup[A] {
  def empty: A
}
Then, the `empty` value acts as the neutral element for the `combine` operation.
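A small sketch of the neutral-element behaviour, using the `Monoid` instances cats provides for `Int` and `String`:

```scala
import cats.Monoid
import cats.implicits._

val a = 2 |+| Monoid[Int].empty                     // 2, since empty is 0
val b = "Hi " |+| "there" |+| Monoid[String].empty  // "Hi there", since empty is ""
```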
With this new type class, supposing that we also had an inverse element, a Monoid would form a Group. Moreover, if the `combine` operation were also commutative, we would have a commutative (or abelian) group.
Now, not all Semigroups have an `empty` value available. In that case, we can lift into `Option` to get a Monoid, where `None` acts as `empty`.
For any `Semigroup[A]` there is a `Monoid[Option[A]]`.
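For example, cats derives `Monoid[Option[Int]]` from the Semigroup for `Int`, with `None` as the identity:

```scala
import cats.implicits._

val x = Option(1) |+| Option(2)       // Some(3): combines the inner values
val y = Option(1) |+| None            // Some(1): None acts as empty
val z = (None: Option[Int]) |+| None  // None
```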
A Functor is a type class that abstracts over type constructors that can be `map`ped over (e.g., `List`, `Option`, `Future`); it encapsulates sequencing computations.
trait Functor[F[_]] {
  def map[A, B](fa: F[A])(f: A => B): F[B]
}

// Example implementation for Option
implicit val functorForOption: Functor[Option] = new Functor[Option] {
  def map[A, B](fa: Option[A])(f: A => B): Option[B] = fa match {
    case None    => None
    case Some(a) => Some(f(a))
  }
}
We should think of `map` not as an iteration pattern, but as a way of sequencing computations on values while ignoring some complication dictated by the relevant data type:
- `Option`: the value may or may not be present;
- `Either`: there may be a value or an error;
- `List`: there may be zero or more values.
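A sketch of the same pure computation sequenced through each of these contexts, using the standard-library `map`:

```scala
// The same pure function, sequenced through different contexts:
val o = Option(3).map(_ + 1)                        // Some(4)
val e = (Right(3): Either[String, Int]).map(_ + 1)  // Right(4)
val l = List(1, 2, 3).map(_ + 1)                    // List(2, 3, 4)

// The "complication" is handled for us: absence short-circuits the mapping
val n = (None: Option[Int]).map(_ + 1)              // None
```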
OBS: There is a good read about Futures on Reddit.
A `Functor[F]` has to satisfy two laws:
- Composition: `fa.map(f).map(g) = fa.map(f.andThen(g))`
- Identity: `fa.map(x => x) = fa`
We can also see a `Functor[F]` as a way of lifting a pure function `A => B` into an effectful function `F[A] => F[B]`.
trait Functor[F[_]] {
  def map[A, B](fa: F[A])(f: A => B): F[B]

  def lift[A, B](f: A => B): F[A] => F[B] =
    fa => map(fa)(f)
}
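Using cats' own `Functor[Option].lift`, a pure `String => Int` becomes an `Option[String] => Option[Int]`:

```scala
import cats.Functor
import cats.implicits._

val length: String => Int = _.length

// Lift the pure function into the Option context:
val liftedLength: Option[String] => Option[Int] = Functor[Option].lift(length)

val some = liftedLength(Some("hello"))  // Some(5)
val none = liftedLength(None)           // None
```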
OBS: Notes on effect and effectful from Alvin Alexander: an effect (the main effect, NOT a side effect; we are talking about the main purpose here) is what a Monad handles. For example, the `Option` Monad models the effect of optionality, while the `Try` Monad models the effect of failure. An effectful function, then, is a function that returns `F[A]` instead of `A`.
The `F` in `Functor` is what we call an effect or computational context.
Functors compose, meaning that if `F` and `G` have Functor instances, so does the nested `F[G[_]]`, as in the case of `List[Option[A]]`.
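cats exposes this composition through the `compose` method on `Functor`; for instance, a `List` of `Option`s:

```scala
import cats.Functor
import cats.implicits._

val listOption = List(Some(1), None, Some(2))

// A Functor for the composed context List[Option[_]]:
val composed = Functor[List].compose[Option]

val result = composed.map(listOption)(_ + 1)  // List(Some(2), None, Some(3))
```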
Kinds are like types for types. They describe the number of “holes” in a type. We distinguish between regular types that have no holes and “type constructors” that have holes we can fill to produce types.
For example, `List` is a type constructor with one hole. We fill that hole by specifying a parameter to produce a regular type like `List[Int]` or `List[A]`. `List` is a type constructor; `List[A]` is a type. We can think of type constructors as functions that produce types, in the same way that functions are "value constructors".
In Scala we declare type constructors using underscores. Once we’ve declared them, however, we refer to them as simple identifiers:
// Declare F using underscores:
def myMethod[F[_]] = {
  // Reference F without underscores:
  val functor = Functor.apply[F]
  // ...
}
`Applicative` extends `Functor` with `ap` and `pure`:
import cats.Functor

trait Applicative[F[_]] extends Functor[F] {
  def ap[A, B](ff: F[A => B])(fa: F[A]): F[B]
  def pure[A](a: A): F[A]

  def map[A, B](fa: F[A])(f: A => B): F[B] = ap(pure(f))(fa)
}
`pure` wraps a value into the type constructor. For `Option` that would be `Some(_)`.
`ap` is a bit trickier. There is a motivating example in the docs; for intuition, consider a function `product` with the signature:
trait Applicative[F[_]] extends Functor[F] {
  def product[A, B](fa: F[A], fb: F[B]): F[(A, B)]
  def pure[A](a: A): F[A]
}
Then `ap` can be seen as equivalent in power to the combination of `map` and `product`.
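A minimal sketch of this equivalence, deriving `product` from `ap` and `ap` from `map` plus `product` (the trait here is local to the sketch, not cats' own `Applicative`):

```scala
// Local sketch of an Applicative, not the cats trait itself
trait SketchApplicative[F[_]] {
  def pure[A](a: A): F[A]
  def map[A, B](fa: F[A])(f: A => B): F[B]
  def ap[A, B](ff: F[A => B])(fa: F[A]): F[B]

  // product derived from ap and map:
  def product[A, B](fa: F[A], fb: F[B]): F[(A, B)] =
    ap(map(fa)(a => (b: B) => (a, b)))(fb)

  // ap derived from map and product:
  def apViaProduct[A, B](ff: F[A => B])(fa: F[A]): F[B] =
    map(product(ff, fa)) { case (f, a) => f(a) }
}

val optionInstance: SketchApplicative[Option] = new SketchApplicative[Option] {
  def pure[A](a: A): Option[A] = Some(a)
  def map[A, B](fa: Option[A])(f: A => B): Option[B] = fa.map(f)
  def ap[A, B](ff: Option[A => B])(fa: Option[A]): Option[B] =
    ff.flatMap(f => fa.map(f))
}

val p = optionInstance.product(Some(1), Some("a"))               // Some((1, "a"))
val v = optionInstance.apViaProduct(Some((_: Int) + 1))(Some(2)) // Some(3)
```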
`Applicative` must obey three laws:
- Associativity: `fa.product(fb).product(fc) ~ fa.product(fb.product(fc))`. With `map`, this can be made into an equality: `fa.product(fb).product(fc) = fa.product(fb.product(fc)).map { case (a, (b, c)) => ((a, b), c) }`
- Left identity: `pure(()).product(fa) ~ fa`. As an equality: `pure(()).product(fa).map(_._2) = fa`
- Right identity: `fa.product(pure(())) ~ fa`. As an equality: `fa.product(pure(())).map(_._1) = fa`
Like `Functor`, `Applicative`s compose: if `F` and `G` have `Applicative` instances, then so does `F[G[_]]`.
Whereas with `Functor` we can work with a single effect, with `Applicative` we can combine multiple independent effects using `product` and `map`.
Hence, `ap` is the missing piece that lets us compose two effectful values, something that cannot be done with `map` alone.
The straightforward way to use `product` and `map` (or just `ap`) is to compose n independent effects, where n is a fixed number. In fact, there are convenience methods named `apN`, `mapN`, and `tupleN` (replacing N with a number from 2 to 22) to make it even easier.
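For example, with cats' syntax on tuples:

```scala
import cats.implicits._

val sum    = (Option(1), Option(2)).mapN(_ + _)          // Some(3)
val pair   = (Option(1), Option("a")).tupled             // Some((1, "a"))
val failed = (Option(1), Option.empty[Int]).mapN(_ + _)  // None: one effect failed
```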
With the addition of `traverse`, we can compose any number of independent effects, statically known or otherwise:
import cats.implicits._
List(1, 2, 3).traverse(i => Some(i): Option[Int])
// res6: Option[List[Int]] = Some(List(1, 2, 3))
`Apply` is the version of `Applicative` where `pure` cannot be defined. Therefore, we can define `Applicative` as an extension of `Apply`:
trait Apply[F[_]] extends Functor[F] {
  def ap[A, B](ff: F[A => B])(fa: F[A]): F[B]
}

trait Applicative[F[_]] extends Apply[F] {
  def pure[A](a: A): F[A]
  def map[A, B](fa: F[A])(f: A => B): F[B] = ap(pure(f))(fa)
}
The syntax for `Applicative` can be found under `cats.implicits._`. It can be useful in examples like:
import cats.implicits._
import java.sql.Connection
val username: Option[String] = Some("username")
val password: Option[String] = Some("password")
val url: Option[String] = Some("some.login.url.here")
def attemptConnect(username: String, password: String, url: String): Option[Connection] = ???
(username, password, url).mapN(attemptConnect)
// res8: Option[Option[java.sql.Connection]] = ...
Note how `mapN` allows us to use `attemptConnect` directly even though our values are wrapped in `Option`.
We could also have written
Applicative[Option].map3(username, password, url)(attemptConnect)
since here we know statically that we have three elements.
An example implementation of `Traverse` for `List`, traversing with a general `Applicative` effect `G[_]`:
trait Traverse[F[_]] {
  def traverse[G[_]: Applicative, A, B](fa: F[A])(f: A => G[B]): G[F[B]]
}

// Example implementation for List
implicit val traverseForList: Traverse[List] = new Traverse[List] {
  def traverse[G[_]: Applicative, A, B](fa: List[A])(f: A => G[B]): G[List[B]] =
    fa.foldRight(Applicative[G].pure(List.empty[B])) { (a, acc) =>
      Applicative[G].map2(f(a), acc)(_ :: _)
    }
}
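Usage sketch: traversing a `List[String]` with a parsing function that returns `Option[Int]`:

```scala
import cats.implicits._

def parseInt(s: String): Option[Int] =
  scala.util.Try(s.toInt).toOption

// All parses succeed, so the Options combine into one:
val ok = List("1", "2", "3").traverse(parseInt)    // Some(List(1, 2, 3))

// A single failure collapses the whole traversal:
val ko = List("1", "two", "3").traverse(parseInt)  // None
```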
With `traverse` we could implement `map`, using
import cats.Id

trait Traverse[F[_]] extends Functor[F] {
  def traverse[G[_]: Applicative, A, B](fa: F[A])(f: A => G[B]): G[F[B]]

  // Id[B] = B, so traversing with the Id effect is just map
  def map[A, B](fa: F[A])(f: A => B): F[B] =
    traverse[Id, A, B](fa)(f)
}
Moreover, every Traverse is a Foldable; in fact, Traverse is strictly more powerful than Foldable, since we can implement `foldLeft` and `foldRight` in terms of `traverse`.
Sometimes you will be given a traversable that already holds effectful values, such as `List[Option[A]]`. Since the values themselves are effects, traversing with the `identity` function will turn the traversable "inside out":
import cats.implicits._
val list = List(Some(1), Some(2), None)
// list: List[Option[Int]] = List(Some(1), Some(2), None)
val traversed = list.traverse(identity)
// traversed: Option[List[Int]] = None
We can achieve the same with cats' `sequence`:
val sequenced = list.sequence
// sequenced: Option[List[Int]] = None
In general, `t.map(f).sequence` can be replaced by `t.traverse(f)`.
A monad is a mechanism for sequencing computations.
It is true that with Functor we can also sequence computations. However, with the addition of `flatMap`, Monads can handle "complications" in the sequencing: if we are working with `Option` and any computation "fails", we just get a `None` and the remaining steps do not run:
def parseInt(str: String): Option[Int] =
  scala.util.Try(str.toInt).toOption

def divide(a: Int, b: Int): Option[Int] =
  if(b == 0) None else Some(a / b)

def stringDivideBy(aStr: String, bStr: String): Option[Int] =
  for {
    aNum <- parseInt(aStr)
    bNum <- parseInt(bStr)
    ans  <- divide(aNum, bNum)
  } yield ans
stringDivideBy("6", "2")
// res0: Option[Int] = Some(3)
stringDivideBy("6", "0")
// res1: Option[Int] = None
stringDivideBy("6", "foo")
// res2: Option[Int] = None
At each step:
F[A] -> flatMap: (A => F[B]) -> F[B]
Therefore, we can keep chaining `flatMap` calls.
`Monad` extends the `Applicative` type class with `flatten`: it takes a value in a nested context `F[F[A]]` and "joins" the contexts together to return a single context `F[A]`. For example:
Option(Option(1)).flatten
// res0: Option[Int] = Some(1)
Option(None).flatten
// res1: Option[Nothing] = None
List(List(1),List(2,3)).flatten
// res2: List[Int] = List(1, 2, 3)
To form a `Monad`, we just need the `pure` function from its `Applicative` and a `flatMap` definition. `flatMap` can be seen as `map` followed by `flatten`. Conversely, `flatten` can be obtained by `flatMap`ping with the identity function `x => x`.
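A quick check of both directions with `Option` (standard library only):

```scala
val nested: Option[Option[Int]] = Option(Option(1))

// flatten via flatMap over the identity function:
val flattened = nested.flatMap(x => x)  // Some(1)

def twice(n: Int): Option[Int] = Option(n * 2)

// flatMap as map followed by flatten:
val viaMapFlatten = Option(3).map(twice).flatten  // Some(6)
val viaFlatMap    = Option(3).flatMap(twice)      // Some(6)
```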
The beauty of `flatMap` is that it allows us to use for-comprehensions. Therefore, when working with Monads, we know for-comprehensions are available.
On top of `pure` and `flatMap`, Cats has chosen to also require `tailRecM` when defining a Monad, which encodes stack-safe monadic recursion.
An example `Monad` implementation for `Option` would be:
import scala.annotation.tailrec
import cats.Monad

implicit val optionMonad: Monad[Option] = new Monad[Option] {
  // Define flatMap using Option's flatten method
  override def flatMap[A, B](fa: Option[A])(f: A => Option[B]): Option[B] =
    fa.map(f).flatten

  override def pure[A](a: A): Option[A] = Some(a)

  @tailrec
  def tailRecM[A, B](a: A)(f: A => Option[Either[A, B]]): Option[B] = f(a) match {
    case None              => None
    case Some(Left(nextA)) => tailRecM(nextA)(f) // continue the recursion
    case Some(Right(b))    => Some(b)            // recursion done
  }
}
Eval is a useful tool to enforce stack safety when working on very large computations and data structures. However, we must bear in mind that trampolining is not free. It avoids consuming stack by creating a chain of function objects on the heap. There are still limits on how deeply we can nest computations, but they are bounded by the size of the heap rather than the stack.
This will blow up the stack:
def factorial(n: BigInt): BigInt =
  if(n == 1) n else n * factorial(n - 1)
But with `Eval.defer`...
def factorial(n: BigInt): Eval[BigInt] =
  if(n == 1) {
    Eval.now(n)
  } else {
    Eval.defer(factorial(n - 1).map(_ * n))
  }
factorial(50000).value
// res: A very big value
With `Eval` we can use `now`, `always`, or `later` to represent `val`, `def`, or `lazy val` behaviors, respectively.
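A sketch of the three evaluation strategies; the `println`s mark when each body actually runs:

```scala
import cats.Eval

val eager = Eval.now { println("now: evaluated eagerly"); 1 }  // like val: runs immediately, memoized

val memo = Eval.later { println("later: evaluated once"); 2 }  // like lazy val: runs on first .value only
val m1 = memo.value  // prints, returns 2
val m2 = memo.value  // no print, returns 2 from the memoized result

val each = Eval.always { println("always: evaluated every time"); 3 }  // like def
val a1 = each.value  // prints, returns 3
val a2 = each.value  // prints again, returns 3
```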
cats.data.Reader is a monad that allows us to sequence operations that depend on some input. Instances of Reader wrap up functions of one argument, providing us with useful methods for composing them.
One common use for Readers is dependency injection. If we have a number of operations that all depend on some external configuration, we can chain them together using a Reader to produce one large operation that accepts the configuration as a parameter and runs our program in the order specified.
The power of Readers comes from their map and flatMap methods, which represent different kinds of function composition. We typically create a set of Readers that accept the same type of configuration, combine them with map and flatMap, and then call run to inject the config at the end.
Readers provide a tool for doing dependency injection. We write steps of our program as instances of Reader, chain them together with map and flatMap, and build a function that accepts the dependency as input.
https://scalafiddle.io/sf/LRxnHpz/2
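In the spirit of that fiddle, a sketch of Reader-based dependency injection; the `Db` case class and the login logic are illustrative (in the style of the Scala with Cats exercise), not a fixed API:

```scala
import cats.data.Reader

// Hypothetical configuration type, used only for illustration:
final case class Db(usernames: Map[Int, String], passwords: Map[String, String])

type DbReader[A] = Reader[Db, A]

def findUsername(userId: Int): DbReader[Option[String]] =
  Reader(db => db.usernames.get(userId))

def checkPassword(username: String, password: String): DbReader[Boolean] =
  Reader(db => db.passwords.get(username).contains(password))

// Chain the steps with flatMap; the Db is injected once at the end with run:
def checkLogin(userId: Int, password: String): DbReader[Boolean] =
  for {
    username <- findUsername(userId)
    ok       <- username
                  .map(u => checkPassword(u, password))
                  .getOrElse(Reader((_: Db) => false))
  } yield ok

val db = Db(Map(1 -> "ada"), Map("ada" -> "secret"))

val good = checkLogin(1, "secret").run(db)  // true
val bad  = checkLogin(2, "secret").run(db)  // false: unknown user
```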
There are many ways of implementing dependency injection in Scala, from simple techniques like methods with multiple parameter lists, through implicit parameters and type classes, to complex techniques like the cake pattern and DI frameworks.
Readers are most useful in situations where:
- we are constructing a batch program that can easily be represented by a function;
- we need to defer injection of a known parameter or set of parameters;
- we want to be able to test parts of the program in isolation.
By representing the steps of our program as Readers we can test them as easily as pure functions, plus we gain access to the map and flatMap combinators.
For more advanced problems where we have lots of dependencies, or where a program isn’t easily represented as a pure function, other dependency injection techniques tend to be more appropriate.