@gvolpe
Last active April 24, 2024 20:51
Dependency Injection in Functional Programming

Several DI frameworks and libraries exist in the Scala ecosystem. But the more functional code you write, the more you'll realize there's no need for any of them.

A few of their most commonly claimed benefits are the following:

  • Dependency Injection.
  • Life cycle management.
  • Dependency graph rewriting.

So I'm going to try to demystify each of these claims and show you a functional approach instead.

The goal of this document is to explain why the concept of DI in Scala has been carried over from the OOP world and is not needed at all in FP applications. It is not to discuss how good or bad any particular DI library might be, which is why I won't mention any specific library here.


Before getting started I would like to quote a Q&A from a Stack Overflow thread:

Question:

What is the idiomatic Haskell solution for dependency injection?

Answer:

I think the proper answer here is, and I will probably receive a few downvotes just for saying this: forget the term dependency injection. Just forget it. It's a trendy buzzword from the OO world, but nothing more.

Let's solve the real problem. Keep in mind that you are solving a problem, and that problem is the particular programming task at hand. Don't make your problem "implementing dependency injection".

DI library vs Functional Programming

Dependency Injection

The DI library approach

Most libraries provide some kind of DSL to define how dependencies are created, and perhaps some annotations. For example:

import mylibrary.di._

case class Config(host: Host, port: Port)

trait Repository
@Inject
class PostgreSQLRepository(config: Config) extends Repository
class InMemoryRepository extends Repository

@Inject
class A(repo: Repository)
@Inject
class B(repo: Repository)
@Inject
class C(a: A, b: B)

trait Service
class ProdService extends Service
class TestService extends Service

val repo: Repository = bind[Repository].to[PostgreSQLRepository].singleton

val a: A = bind[A]
val b: B = bind[B]
val c: C = bind[C]

val service: Service = bind[Service].to[ProdService]

The FP way

So, what exactly does "Dependency Injection" mean? In simple terms: providing an input in order to produce an output. Does that sound familiar?

f: A => B

A pure function from A to B is the FP equivalent of dependency injection in OOP. Given the previous example, we can define it as:

Constructor Arguments

val config: Config = ??? // E.g. read the config with `pureConfig`
val repo: Repository = new PostgreSQLRepository(config)

val a: A = new A(repo)
val b: B = new B(repo)
val c: C = new C(a, b)

val service: Service = new ProdService()

Just plain, basic constructor argument passing. We can think of constructors as plain functions:

val mkRepository: Config => Repository = config =>
  new PostgreSQLRepository(config)

val makeA: Repository => A = repo => new A(repo)
val makeB: Repository => B = repo => new B(repo)
val makeC: A => B => C = a => b => new C(a, b)
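To make the wiring concrete, here is a self-contained sketch (with minimal stand-ins for the classes above) showing that the whole dependency graph is itself just a function from `Config` to `C`:

```scala
// Minimal stand-ins for the types used earlier in the post.
case class Config(host: String, port: Int)
trait Repository
class PostgreSQLRepository(config: Config) extends Repository
class A(repo: Repository)
class B(repo: Repository)
class C(a: A, b: B)

// Constructors as plain functions.
val mkRepository: Config => Repository = config => new PostgreSQLRepository(config)
val makeA: Repository => A = repo => new A(repo)
val makeB: Repository => B = repo => new B(repo)
val makeC: A => B => C = a => b => new C(a, b)

// Wiring is ordinary function application; `repo` is created once and shared.
val wireUp: Config => C = config => {
  val repo = mkRepository(config)
  makeC(makeA(repo))(makeB(repo))
}

val c: C = wireUp(Config("localhost", 5432))
```

There is no container here: the compiler checks that every dependency is satisfied, and a missing one is a type error rather than a runtime injection failure.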

Implicits

Another option we have in Scala is to take advantage of the implicit mechanism. For example:

class A(implicit repo: Repository)
class B(implicit repo: Repository)
class C(a: A, b: B)

implicit val repo: Repository = new PostgreSQLRepository(config)

val a: A = new A() // or explicitly new A(repo)
val b: B = new B()
val c: C = new C(a, b)

In my experience the implicit approach only works well in certain cases; I'll show one of them in a later section.

Reader Monad

Last but not least we have ReaderT, called Kleisli in the Cats ecosystem. It has some benefits, but also the downside of leaking into all your type signatures unless it is used in an MTL style.

Eg. this:

class A(repo: Repository) {
  def foo: Id[String] = repo.getFoo
}

Becomes this:

class A() {
  def foo: Kleisli[Id, Repository, String] = Kleisli { repo =>
    repo.getFoo
  }
}
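As a sketch of how this Reader style wires the dependency in (assuming a minimal `Repository` with a `getFoo` method, and cats on the classpath), dependent computations compose without threading `repo` by hand, and the dependency is supplied once at the edge with `run`:

```scala
import cats.Id
import cats.data.Kleisli

trait Repository { def getFoo: String }

class A {
  // The dependency shows up in the type: Repository is the Kleisli environment.
  def foo: Kleisli[Id, Repository, String] = Kleisli(repo => repo.getFoo)

  // Dependent computations compose without passing `repo` around explicitly.
  def shoutedFoo: Kleisli[Id, Repository, String] = foo.map(_.toUpperCase)
}

// The dependency is supplied once, at the edge of the program:
val repo: Repository = new Repository { def getFoo = "foo" }
val result: String = new A().shoutedFoo.run(repo) // "FOO"
```

This is the "contamination" mentioned above: every method that needs the `Repository` now carries `Kleisli[F, Repository, *]` in its signature.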

Life Cycle Management

The DI library approach

A few libraries come with this feature, but none of them provide the right abstraction; more often than not, they are side-effectful.

For example, they provide a few methods to manage all the resources in your application:

val pool: ExecutorService = ???
val db: Repository = ???
val es: ElasticSearch = ???

def onStart() = {
  db.connect()
  es.connect()
}

def onStop() = {
  db.close()
  es.shutdown()
  pool.shutdown()
}

And this breaks the nice composability you gain when writing applications with fs2 or cats-effect, since extracting the inner instance means evaluating the effects.

The FP way

In Haskell, pure functional abstractions exist for this purpose:

  • MonadMask (bracket)
  • Managed
  • Resource

In Scala, Stream.bracket has been available in the fs2 library for a long time, and now we also have Bracket and Resource in the Cats Effect library, which are the right abstractions for managing resources.

NOTE: Bracket is the only primitive; Resource builds on top of Bracket.

For example, this is how you would safely manage a database connection:

import cats.effect._

def myProgram(connection: Connection): IO[Unit] = ???

val acquireDBConnection: IO[Connection] = IO(db.connect()) // `db` is some database handle
val releaseDBConnection: Connection => IO[Unit] = conn => IO(conn.close())

acquireDBConnection.bracket(myProgram)(releaseDBConnection)

It consists of three parts: acquire, use and release. And it's composable.
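The composability becomes clearer with Resource. Here is a sketch using cats-effect's `Resource.make` (the `Connection` and `Pool` handles are hypothetical stand-ins for real ones):

```scala
import cats.effect.{IO, Resource}

// Hypothetical handles standing in for a DB connection and a thread pool.
trait Connection { def close(): Unit }
trait Pool       { def shutdown(): Unit }

def mkConnection: Connection = new Connection { def close(): Unit = () }
def mkPool: Pool             = new Pool       { def shutdown(): Unit = () }

// Each resource pairs its acquisition with its release action...
val dbRes: Resource[IO, Connection] =
  Resource.make(IO(mkConnection))(conn => IO(conn.close()))
val poolRes: Resource[IO, Pool] =
  Resource.make(IO(mkPool))(pool => IO(pool.shutdown()))

// ...and resources compose: releases run in reverse order of acquisition,
// even if the program in between fails.
val app: Resource[IO, (Connection, Pool)] =
  for {
    conn <- dbRes
    pool <- poolRes
  } yield (conn, pool)

// app.use { case (conn, pool) => program(conn, pool) }
```

There is no `onStart`/`onStop` pair to keep in sync by hand: the life cycle is a value you can pass around and combine.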

Dependency Graph Rewriting

The DI library approach

Some libraries give you access to a DependencyGraph that contains all the dependencies. For example:

val prodDeps = DependencyGraph.instance()

Now if you want to have the same dependency graph and just change the Repository instance for an in-memory implementation (for testing purposes), some of these libraries will allow you to do the following:

val testDeps = prodDeps.bind[Repository].to[InMemoryRepository]

Which takes the existing dependency graph and rewrites the instance for Repository. This is quite convenient, since you don't need to rebuild your entire dependency graph for that.

However, in practice I've found that in a big project you might need to rewrite up to five dependencies per test suite, though normally you only need to rewrite one or two. And that's easy to implement, as I show in the next section.

The FP way

There's no pure abstraction for this feature in FP as far as I know, but there are alternatives.

My default choice is to define all the dependencies in a single place that I usually call Module, together with a Rewritable case class that represents the dependencies that can be rewritten in the dependency graph.

For this very simple case we might only need to change the Repository and/or the Service instances (for testing purposes), so we can define it as follows:

case class Rewritable(
  repo: Option[Repository] = None,
  service: Option[Service] = None
)

And a Module with an empty instance of Rewritable by default:

class Module(config: Config)(implicit D: Rewritable = Rewritable()) {
  val repo: Repository = D.repo.getOrElse(new PostgreSQLRepository(config))

  val a: A = new A(repo)
  val b: B = new B(repo)
  val c: C = new C(a, b)

  val service: Service = D.service.getOrElse(new ProdService())
}

Production Dependency Graph

val module: Module = new Module(config)

Test Dependency Graph

implicit val deps = Rewritable(
  repo    = Some(new InMemoryRepository),
  service = Some(new TestService)
)

val testModule = new Module(config)

That's it: we changed only the two instances needed for testing, and the rest of the dependencies remain the same.
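The technique can be checked end to end with a self-contained sketch (restating minimal versions of the classes above, without A, B and C):

```scala
case class Config(host: String, port: Int)
trait Repository
class PostgreSQLRepository(config: Config) extends Repository
class InMemoryRepository extends Repository
trait Service
class ProdService extends Service
class TestService extends Service

case class Rewritable(
  repo: Option[Repository] = None,
  service: Option[Service] = None
)

class Module(config: Config)(implicit D: Rewritable = Rewritable()) {
  val repo: Repository = D.repo.getOrElse(new PostgreSQLRepository(config))
  val service: Service = D.service.getOrElse(new ProdService())
}

val config = Config("localhost", 5432)

// Production graph: no implicit Rewritable in scope, so the default (empty) one is used.
val module = new Module(config)

// Test graph: only the instances we care about are swapped.
val testModule = {
  implicit val deps: Rewritable =
    Rewritable(repo = Some(new InMemoryRepository), service = Some(new TestService))
  new Module(config)
}
```

The default value on the implicit parameter is what makes this work: when no `Rewritable` is in implicit scope, `Rewritable()` is used and every dependency falls back to its production instance.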

FP for the win!

The examples above are quite simple. In a real-world application you might have hundreds of dependencies, and managing them becomes a non-trivial task.

One of the designs I've been working with successfully is MTL style, or tagless final, in which every component of your application is defined as an abstract algebra.

For example, let's consider an application that needs to manage users and also be able to get the current exchange rate for a given currency:

Algebras

trait UserRepository[F[_]] {
  def find(email: Email): F[Option[User]]
  def save(user: User): F[Unit]
}

trait ExchangeRateAlg[F[_]] {
  def rate(baseCurrency: Currency, foreignCurrency: Currency): F[Option[Rate]]
}

Programs

We will have a simple program that looks up a user and, if the user exists, requests the current exchange rate:

import cats.Monad
import cats.implicits._

class MyProgram[F[_]: Monad](repo: UserRepository[F], exchangeRate: ExchangeRateAlg[F]) {

  def exchangeRateForUser(email: Email, baseCurrency: Currency, foreignCurrency: Currency): F[Option[Rate]] =
    for {
      maybeUser <- repo.find(email)
      rate      <- maybeUser.fold((None: Option[Rate]).pure[F]) { _ =>
                     exchangeRate.rate(baseCurrency, foreignCurrency)
                   }
    } yield rate

}
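This is where the testing benefit of tagless final shows up: reusing the algebras and MyProgram above, the program can run with pure Id-based interpreters, no mocking framework required. A sketch, with hypothetical stand-in domain types:

```scala
import cats.Id
import cats.implicits._

// Stand-in domain types, for illustration only.
final case class Email(value: String)
final case class User(email: Email)
final case class Currency(code: String)
final case class Rate(value: BigDecimal)

// Test interpreters: pure and synchronous.
class InMemoryUserRepo extends UserRepository[Id] {
  private var users = Map(Email("jane@example.com") -> User(Email("jane@example.com")))
  def find(email: Email): Id[Option[User]] = users.get(email)
  def save(user: User): Id[Unit] = users += (user.email -> user)
}

class ConstExchangeRate extends ExchangeRateAlg[Id] {
  def rate(base: Currency, foreign: Currency): Id[Option[Rate]] =
    Some(Rate(BigDecimal("1.25")))
}

val program = new MyProgram[Id](new InMemoryUserRepo, new ConstExchangeRate)

// Known user: the rate is fetched.
program.exchangeRateForUser(Email("jane@example.com"), Currency("EUR"), Currency("USD"))
// Unknown user: short-circuits to None without hitting the rate service.
program.exchangeRateForUser(Email("nobody@example.com"), Currency("EUR"), Currency("USD"))
```

The same MyProgram runs unchanged in production with `F = IO` and real interpreters; only the instances passed to the constructor differ.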

Interpreters

We also want to have logging and metrics capabilities in the interpreters:

class PostgreSQLRepository[F[_]: Sync](config: Config)(
  implicit L: Log[F],
           M: Metrics[F]
) extends UserRepository[F] { ... }

class HttpExchangeRate[F[_]: Async](implicit L: Log[F]) extends ExchangeRateAlg[F] { ... }

Here implicits are the best choice IMO.

Application

And finally we assemble the entire application in a Module with our Rewritable instance:

case class Rewritable[F[_]](
  repo: Option[UserRepository[F]] = None,
  ex: Option[ExchangeRateAlg[F]] = None
)

class Module[F[_]: Effect](config: Config)(implicit D: Rewritable[F] = Rewritable[F]()) {
  implicit val log: Log[F] = ???     // e.g. build a default instance from Sync[F]
  implicit val metrics: Metrics[F] = ??? // e.g. build a default instance from Sync[F]

  val userRepo: UserRepository[F] = D.repo.getOrElse(new PostgreSQLRepository[F](config))
  val exchangeRate: ExchangeRateAlg[F] = D.ex.getOrElse(new HttpExchangeRate[F])

  val program = new MyProgram[F](userRepo, exchangeRate)
}

For example, for a test case we might just want to replace the UserRepository:

implicit val deps = Rewritable[Id](repo = Some(new UserInMemoryRepository[Id]))
val module = new Module[Id](config)

Conclusion

I believe that Functional Programming is the way forward since one can rely on immutability, referential transparency and great abstractions created by very smart people. And in return, you'll benefit from local reasoning and composability, among others. Now... the choice is completely YOURS!

@kostaskougios

From my experience in playframework and akka apps, I do tend to prefer DI with Guice. It helps a lot, especially for TDD, as code that didn't use DI tended to be hard to test.


gvolpe commented Jun 5, 2018

And I completely understand @kostaskougios, because those frameworks don't embrace functional programming; on the contrary, they are quite side-effectful :) And in this post I describe "Dependency Injection in Functional Programming".


neko-kai commented Jun 5, 2018

Note that your tagless final example will not work in Haskell itself due to the type class coherence requirement. To be able to choose between several implementations, each new implementation of an algebra would have to be accompanied by its own newtype monad transformer, and each such monad transformer would need to implement instances for every other mtl type class.

This is known as the n² instances problem and it makes mtl-style largely not viable in Haskell itself, with large projects instead adopting a pseudo-OOP style similar to Scala flavor of tagless final.

Also, note that using a Reader monad to hold your context, or using implicits or cake pattern instead of a reflection-based framework does not mean that you're not using the dependency injection pattern. I don't think the attack on the term, saying to 'forget it' is appropriate. If you're programming with first-class modules (aka objects) and parameterizing your modules with other modules – you're using dependency injection, there's no need to bash the term.


gvolpe commented Jun 6, 2018

Hi @Kaishh, first of all thanks for providing all those links, great reads :)

Now let me share my thoughts:

Note that your tagless final example will not work in Haskell itself due to type class coherence requirement. To be able to choose between several implementations, each new implmentation of an algebra would have to be accompanied by it's own newtype monad transformer and each such monad transformer would need to implement instances for every other mtl type class.

You're right. Tagless final doesn't play very well in either Haskell or PureScript due to type class coherence, which I think is great for type classes but on the other hand makes them less flexible. For example, Ord is defined as a type class, and this doesn't make sense because you don't really want type class coherence here; it would be better to pass it as an argument. Instead we have to work around it by defining newtypes. Same with Monoid and Show, to name a few.

But tagless final works perfectly in Scala and it is very performant at the same time. And it also works perfectly in Idris since it gets around type class coherence with Named Instances which I find very smart.

Also, note that using a Reader monad to hold your context, or using implicits or cake pattern instead of a reflection-based framework does not mean that you're not using the dependency injection pattern. I don't think the attack on the term, saying to 'forget it' is appropriate. If you're programming with first-class modules (aka objects) and parameterizing your modules with other modules – you're using dependency injection, there's no need to bash the term.

Again you're right, it is dependency injection indeed. And in pure functional programming languages you can talk about it because the language is very strict and it's easier to find your way throughout the type system. However, with Scala being a hybrid OOP/FP language with most of the users coming from the Java world, this is often confused with impure DI frameworks and we end up with many DI libraries for Scala which I consider unnecessary as I demonstrated in this post. So I think we are better off without the term dependency injection.


santiagopoli commented Jun 6, 2018

HI Gabi,

First of all, this was a great read! Even though, I have some things to address:

Although I mostly agree with the points exposed here, I think this post is more against DI frameworks than DI itself. While it's implied that these kinds of frameworks are necessary in OOP (I don't know if that was your intention, but it sounded like that) and not in FP, I think these frameworks can be harmful in both paradigms. In fact, I still don't know why some people have the urge to use DI containers in the first place.

In OOP, the best Dependency Injection is in the Constructor. Period. And given that the Constructor is in fact a function, it resembles what Dependency Injection looks like in FP. It's true, FP has other ways of doing DI, like the Reader Monad, but I think this advice can benefit any developer, regardless of the language or paradigm they are using.

Having said that, I think the functional approach to DI is more flexible: I, for example, have been using this approach in languages like JS, in which I can define my objects as a set of pure functions, constructed via another pure function.

Cheers!


gvolpe commented Jun 6, 2018

Hey @santiagopoli ! Didn't expect you around here :)

Although I mostly agree with the points exposed here, I think this post is more against DI frameworks than DI itself. While it's implied that these kinds of frameworks are necessary in OOP (I don't know if that was your intention, but it sounded like that) and not in FP, I think these frameworks can be harmful in both paradigms. In fact, I still don't know why some people have the urge to use DI containers in the first place.

Yes, it is more against DI frameworks in Scala rather than DI itself since, as @Kaishh mentioned, we still have dependency injection but I prefer to avoid the term especially in Scala being not a pure functional programming language.

In OOP, the best Dependency Injection is in the Constructor. Period. And given that the Constructor is in fact a function, it resembles what Dependency Injection looks like in FP. It's true, FP has other ways of doing DI, like the Reader Monad, but I think this advice can benefit any developer, regardless of the language or paradigm they are using.

I guess that in Java, DI frameworks make more sense than in Scala because it's very verbose but still it's easy enough to get around by just using constructors.

Saludos!


emilypi commented Jun 6, 2018

I like this post @gvolpe. It's basically the approach I use to construct services as well, and it provides a good separation between your logical and specialization concerns that is hard to realize otherwise.


gvolpe commented Jun 7, 2018

Thanks @emilypi ! Appreciate the feedback 😃


sujithjay commented Jun 22, 2018

Hi Gabriel,
Thank you for this post. As @kostaskougios mentions above, I have found that using DI has helped me write testable code in Play. However, I do not use Guice (or similar DI frameworks), but instead rely on the Cake pattern. Just trying to understand how that fits into your narrative.


gvolpe commented Jun 22, 2018

Hi @sujithjay,
The cake pattern seemed like a good idea at the time, but in the long term it brings more harm than good. I pretty much agree with the points made in this blog post.


rt83 commented Oct 28, 2018

Hi,

Thanks for a great description of how to do dependency injection in functional programming.

However, I must say that what you have described is very similar to doing manual DI in the OOP world, though the FP syntax might feel a little less verbose. For example, your piece of code

val mkRepository: Config => Repository = config =>
  new PostgreSQLRepository(config)

val makeA: Repository => A = repo => new A(repo)
val makeB: Repository => B = repo => new B(repo)
val makeC: A => B => C = a => b => new C(a, b)

essentially defines a set of bindings, which you can use later on like

val c : C = makeC(makeA(someRepo), makeB(someRepo))

If we accept that constructors are just a kind of function, then manual DI in OOP and your way of doing FP DI are not that different!

And I think it is pretty verbose, and it can sometimes be poorly organized and difficult to get every piece right when the number of dependencies grows larger and larger, especially for beginners. For this problem, I think DI frameworks are a great help.

I think the main benefit of DI frameworks is to provide their users with a set of high-level constructs which remove a lot of boilerplate in declaring bindings and using them. They are also very helpful in keeping your application well structured, with conventions such as modules, components, subcomponents, lazy, providers, etc. (e.g. in Dagger), or with the separation of binding declaration and binding usage into different places. This helps anyone, even someone with very little experience in dependency injection, to get things right.
