@aappddeevv
Last active Aug 17, 2017

Discussing the different versions of the cake pattern and its relation to dependency injection.

##Cake Pattern and Why Cake Patterns Sometimes Look Different

The cake pattern is typically described as a way to use small layers of functionality to create a larger, more complex program. This area of application design is concerned with scalability not in the pure performance sense, but in the sense of scaling the application up to more complex functionality and keeping it maintainable over time.

There have been many papers and blogs on this aspect, but a few especially helpful blogs include:

All of these describe the cake pattern, but they all look a little different, and a reader may be left wondering why there are so many ways to describe it.

The confusion is warranted. The Cake Pattern is really more of a design pattern than a specific, set-in-stone programming approach.

The Cake Pattern tries to wire an application together as early as possible in the development and execution process, and to detect issues up front. It is often compared to dependency injection (DI) with a framework like spring. Of course, many of the concepts in these areas overlap in terms of the job they perform, and you ultimately need to decide which approach best supports your application and users.

I previously wrote about the cake pattern here [].

##Application Development and Program Execution Guiding Principles

Generally, there are a few software engineering guiding principles in common use, and this is where the cake pattern starts to play a role. Some, none or all of these may be relevant for your application:

  • Decouple the modules in your application as much as possible, because they may need to be swapped out, fixed or enhanced over time to support program evolution, maintenance or user needs.
  • Detect errors in your application as early as possible both during development as well as when the application runs.
  • OO, inheritance and containment: these have issues scaling at times, and single-inheritance trees can introduce accidental complexity into your modules. Multiple inheritance solves a lot of problems but also creates new ones. Lately, containment has become an important tool for managing complexity versus inheritance, given the limitations of single inheritance. In fact, containment in java typically leads to "program to interfaces" guiding principles that can increase complexity. It also leads to a large number of bean property getters and setters on classes and interface specifications--methods solely designed to set a value that is not necessarily set in the constructor.
  • Complex interfaces, e.g. many parameters on every function call, generally increase coupling and accidental complexity. The classic case here is database connections. Having to pass one to every function that potentially makes a database call increases coupling. So various singleton/global variable/thread-local/injection techniques have been created to avoid having to pass, literally, every parameter to every function all the time (despite the claims of functional programming languages that this is bad).
  • Static type checking is good, but it can be taken too far: Generally, static type checking does help catch more bugs but it may increase other forms of complexity and cost e.g. trading off time to market versus maintenance. Again, the degree to which this is important is really based on your application.
  • Delay being specific; stay abstract for as long as possible, but no longer. Abstractions are good, but excessive abstraction is bad. There's a boundary there somewhere, but sometimes it's hard to define.
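
The database-connection case above can be sketched with scala implicits. This is a minimal, hypothetical sketch: `Connection`, `Queries` and the constant result are stand-ins, not a real database driver.

```scala
// Hypothetical connection type, standing in for a real database handle.
case class Connection(url: String)

object Queries {
  // Explicit threading: every caller in the chain must pass the connection.
  def countUsersExplicit(conn: Connection): Int = 42 // stands in for a real query

  // Implicit parameter: callers with an implicit Connection in scope
  // do not repeat it at each call site.
  def countUsers(implicit conn: Connection): Int = 42 // stands in for a real query
}

object Program {
  implicit val conn: Connection = Connection("jdbc:h2:mem:test")
  val n: Int = Queries.countUsers // the connection is supplied by the compiler
}
```

The call site stays uncluttered while the dependency remains statically checked.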

##Tools to Use

So when you develop an application using these guiding principles, you have to look at what levers you have available to align with those objectives. Since I focus on scala more these days, I'll note a few ways that scala tries to give you tools to help:

  • Abstract types (versus parameterized types): Abstract types allow you to specify a type in an enclosing trait or class. The abstract type allows you to delay specifying the type until later, in subclasses or in a scala module (an object), where you must lock in its definition. It's nice that you can actually have the abstract type specialized differently in different traits and then recombine those definitions back together--that is what the pre-cog article is about.
  • Self-types: A self-type allows you to tell a trait that the final type of an instance will really be a combination of itself and the traits listed in the self-type declaration. Hence it can count on the other traits' members, types and methods being available, and you can write code in that trait with that assumption. This is what the Bonér article uses to combine "components" together. Self-types allow you to make assumptions about what sub-trait/class will be available to your trait in order to resolve values that your trait's methods need. But yes, just using abstract members does force the creation of vals or defs downstream in the final object.
  • Mixins: Mixins allow you to mix traits into objects or classes. Traits can have data definitions and methods. That's actually key. Because mixins have methods, they can do more than represent data and those methods often have dependencies. Ideally every trait stands on its own and is independent, but that's never true in the real world.
  • Implicits: Implicits allow you to declare your need for values without having to pass those values through parameters at every call site.
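
A minimal sketch tying the first three tools together (all names here are hypothetical): an abstract type member delays naming a concrete type, a self-type lets one trait assume another will be present, and a mixin combines them in the final object.

```scala
// Abstract type member: the enclosing trait delays naming the concrete type.
trait HasId {
  type Id                       // locked down later, in the final object
  def show(id: Id): String
}

// A mixin trait whose self-type assumes HasId is also mixed in,
// so it can use Id and show without inheriting from HasId.
trait Labeler { self: HasId =>
  def label(id: Id): String = "id=" + show(id)
}

// The final object locks the abstract type down and satisfies the self-type.
object IntIds extends HasId with Labeler {
  type Id = Int
  def show(id: Int): String = id.toString
}
```

Each piece stays independent until the `object` definition forces everything to line up at compile time.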

You'll notice that some of these comments speak to the ability to resolve values that are not necessarily passed through constructor or method parameters. In fact, with implicits and self-types, you can write code that assumes values will be provided in an "environment" that is resolvable at compile time. In essence, these scala tools loosen up various parts of the programming model so that it is not so rigid. You can use these tools too much, though, and loosen up the system so far that it actually increases complexity.

But what's really at work when we use these tools? What's the main idea? Here are my thoughts.

##A Way of Thinking About The Cake Pattern in Scala

Clearly, you can program using DI approaches in scala, so it's assumed that you can go that route. However, the tools in scala give you some options. These options may be useful to your application. If they are not, or if they create other types of complexity, do not use them. But determining how much of the scala toolset to use to manage complexity is up to the application architects, designers and programmers.

So here are my thoughts on the cake pattern.

The cake pattern takes on different forms based on the specific approach to "keeping it general for as long as possible" versus locking it down when you really need an instance to work with. In scala, you have a few more choices than in java around how to abstract your domain and lock it down. Since both languages are Turing complete, you can solve problems in either, e.g. type parameters and abstract types may both solve a problem, but there are subtleties to both methods.

In scala, you can have abstract methods, values and type definitions. In java, you generally get only abstract methods. You can also use type parameters (generics), but that's a little different from abstract types as defined in scala, although both are very helpful of course. So with scala, you have three primary levers to build cake layers, and the key is that you can use them in different ways to create layers that are more fine grained.

You can obviously still create layers in java, e.g. with type parameters instead of abstract types, but the scala tools listed above can help you make finer slices when you need them while still keeping aligned with the guiding principles. That's why there are so many conversations about whether scala and java are different. With scala, you can stay more aligned with the guiding principles while solving a problem than with java, for some applications. This is not a universal truth, because some applications may never stray far from the guiding principles regardless of the language, features or frameworks used. Because this is a subjective judgment, no argument can ever be definitive about whether java or scala is better overall, all the time. For some problems it does not matter.

When observing or employing the cake pattern, you have to decide what levers are being used and why. In the pre-cog example, they do not use self-types; they use abstract types. The Config type is customized in each trait and then recombined in one final class. In that case, the cake pattern allows them to customize the same definition in different ways for each component that needs a configuration, and they do that while staying aligned to the static typing principle. In java, you can do this as well, but you typically create a marker interface, in this case Config, let each class subclass it, then create a final object that inherits from each of the specializations. But then you only have the choice of using methods (since you are using interfaces) and you cannot add any code to the specializations. So you can achieve something similar in java, of course.
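
A sketch of the abstract-type style, in the spirit of the pre-cog example but with hypothetical component names: each trait refines the abstract Config type with the fields it needs, and the final object recombines the refinements.

```scala
// Each component refines the abstract Config type with what it needs.
trait ConfigComponent {
  type Config
  def config: Config
}

trait HttpComponent extends ConfigComponent {
  type Config <: HttpConfig          // refinement: Config must carry a port
  trait HttpConfig { def port: Int }
}

trait DbComponent extends ConfigComponent {
  type Config <: DbConfig            // refinement: Config must carry a db url
  trait DbConfig { def dbUrl: String }
}

// The final object recombines the refinements: one Config satisfies both.
object Server extends HttpComponent with DbComponent {
  type Config = HttpConfig with DbConfig
  val config: Config = new HttpConfig with DbConfig {
    val port  = 8080
    val dbUrl = "jdbc:h2:mem:test"
  }
}
```

No self-types are needed here: the components interact only through the shared Config type, exactly the situation the article describes.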

In Bonér's article, he uses self-types. The self-type allows him to express a dependency from the service trait to the repository trait. The registry is guaranteed to have all the methods, types and values required, due to the self-type and the abstract member declarations. His example actually resembles DI in that the registry is like an application context where all the beans are defined. The components are not services (the services are the enclosed types) so much as containers for the value members (not abstract, but they could be) and the type definitions (a source-code convenience; he could have defined the classes outside the components).

The abstract value members force subclasses to declare and populate the required members, and the self-type lets the service component assume that a repository will be available without having to pass the repository instance through a constructor parameter or a method call on the trait. So the abstract members on the ComponentRegistry object force the two members to be populated, and the self-type plays two roles: it allows the service trait to assume the registry will be available in the final object, and to write its algorithm with that knowledge, and it forces the final object (in this case ComponentRegistry) to inherit from the UserRepositoryComponent trait as well.

In his example, the embedded classes UserRepository and UserService are not extended in a sub-trait or the final object instance, but they would normally be defined more robustly in a sub-trait of some sort. If the UserServiceComponent#UserService class did not need the UserRepository, he could have dropped the self-type, and the ComponentRegistry object would not have been forced to inherit from UserRepositoryComponent, since it would not be needed anyway.
So the value here is that the dependency can be declared without using inheritance in the upper levels of the trait hierarchy--it is only resolved at ComponentRegistry object creation time.
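
The shape of Bonér's cake, condensed into a runnable sketch (the method bodies here are hypothetical stubs; only the structure follows the article):

```scala
trait UserRepositoryComponent {
  val userRepository: UserRepository       // abstract member: provided downstream
  class UserRepository {
    def find(name: String): String = s"user:$name" // stub lookup
  }
}

trait UserServiceComponent { self: UserRepositoryComponent =>
  val userService: UserService             // abstract member: provided downstream
  class UserService {
    // The self-type guarantees userRepository exists in the final object.
    def greet(name: String): String = "hello " + userRepository.find(name)
  }
}

// The final object wires everything: inheriting from both traits satisfies
// the self-type, and the concrete vals satisfy the abstract members.
object ComponentRegistry extends UserRepositoryComponent with UserServiceComponent {
  val userRepository = new UserRepository
  val userService    = new UserService
}
```

If the self-type were removed from UserServiceComponent, this would fail to compile at `userRepository.find(name)`, which is exactly the early error detection the guiding principles ask for.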

You'll note that Bonér's article uses member values to declare that two instances must be provided (a service and a repository) when the final object is defined. This is a lot like declaring a java interface with two setters, requiring the final object to implement that interface, and "injecting" two instances at runtime to satisfy the dependencies. In contrast, the pre-cog cake has multiple method names in the trait definition, and pre-cog is a bit more focused on the "data" (the user and the config) in the application than on the behavior.

But notice that Bonér's article uses self-types while pre-cog's does not. Why? The pre-cog example did not need to use methods from other components the way Bonér's service class does: Bonér's service class needs a repository. You have only a few choices to satisfy that dependency. You could declare that the repository will be provided by calling a "set" method on the trait, declare it through a constructor (which you cannot do, because traits cannot have constructors), use implicits, or use inheritance, declaring that the service is a subclass of the repository component (which makes little sense). You could also declare another trait that acts as a "repository provider," subclass from that, and call its getter method to obtain a repository when the service needs one in its methods. But notice that would be more complex than the self-type. Using methods and values to declare that the service can access a repository is also fairly restricting. The self-type loosens the way you inform the compiler that a repository will be available "somewhere once a real object is created," and lets you avoid the other techniques that could ensure the trait knows a repository can be accessed. This feels like a small win--but it's a big win if you add it up over the hundreds and thousands of similar situations.

##Which cake tools/approach to use? The abstractions over types, values and methods

So when using the cake pattern, you have to decide how you want to keep things unspecified for as long as possible (the abstraction) until you need to lock them down. You can manage members, methods and types. Sometimes the self-type helps, as when Bonér was constructing a registry that provides "service" objects. Other times, using abstract types and path-dependent types like pre-cog can help. Scala gives you some choices and provides tools that minimize accidental complexity to some degree. Of course, if you use these tools and approaches too much, the code can become very complex. Just like with dependency injection: if you take DI too far, it becomes hard to figure out where your values are being set when it comes time to debug your program.

This is also why you can use different lenses to view the cake pattern.

  • The cake pattern helps keep your traits "orthogonal" focused on one activity while still allowing additional assumptions to be specified.
  • The cake pattern using self-types helps you avoid large-scale parameter passing in constructors, interfaces or inheritance trees.
  • The cake pattern helps you avoid some complexity issues versus some flavors of inheritance and composition approaches.
  • The cake pattern using abstract types helps you model covariant types.
  • The cake pattern helps you address the expression problem.
  • The cake pattern allows you to use finer slices of composition than you can get with DI frameworks or simple constructor/method parameter and global/thread variables.

One note: the pre-cog article, as mentioned above, is focused on the "type" aspects, the config type, and less so on the "data". The traits (other than the RestService) in pre-cog did not need to interact to perform work--only the RestService blended these together to perform work. Pre-cog's traits needed to interact only through the config object, to define a common configuration type.

In other words, if you look at the expression problem, which thinks about "expression" for both methods and data objects, the cake blogs from pre-cog and Bonér are actually focused on different types of cake baking and, interestingly enough, use two different ways to construct the cake. I suggest the book [] to read more about the expression problem.

Bonér had to use the repository in the service object, and the self-type was the best way to express this dependence while avoiding inheritance/interface complexity too high up in the trait hierarchy. He used self-types for abstracting over data, but he could have used inheritance.

Pre-cog was focused on extending the config object and then recombining it later when it was needed in the RestService. If the RestService needed to be blended into yet another layer, the "type" would need to be layered further as well.

##Relationship to a DI Container

So with a DI container, you get several features, but there is no free lunch. Given that spring is typically used with java, it allows you to compose java-based programs using "containment." DI containers give you fewer options around layering with types.

But to think about the cake pattern relative to DI, it helped me to first understand what problem you are trying to solve with either. In general, the following programming functions must execute (either during compilation or at runtime) when creating a set of objects to solve a problem. Neither the cake pattern nor DI removes the need to perform these functions, but they may let you more easily specify (the recipe) how to perform them:

  1. You must specify the declaration of a class (member values, methods and types).
  2. You need to specify the dependencies of a class.
  3. You must give an instance access to the proper set of dependencies so it can function properly. This is actually a key point. You can satisfy dependencies several different ways: DI injection, constructors, setters, implicits, etc. It turns out that how you want dependencies satisfied (including by the "human programmer" feature) dictates which tool you use to satisfy each one.
  4. The object must be created by something.
  5. The object must be released/finalized by something.
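
As a baseline, all five functions can be performed by hand in plain scala, with the programmer doing every step; cake and DI only move where each step is specified. A minimal sketch with hypothetical Repo/Service names:

```scala
// 1. Declaration of classes (members and methods).
class Repo { def get(id: Int): String = s"record-$id" }

// 2-3. The dependency is declared as a constructor parameter and
//      satisfied explicitly at construction time.
class Service(repo: Repo) {
  def describe(id: Int): String = repo.get(id)
}

object ManualWiring {
  val repo    = new Repo             // 4. Creation, done by hand.
  val service = new Service(repo)
  def shutdown(): Unit = ()          // 5. Release/finalize (nothing to do here).
}
```

Every alternative (spring, implicits, cake) is a different answer to who performs steps 2 through 5 and when errors in them are detected.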

Note this is all about abstracting out "data" dependencies.

How well you want these functions performed, while staying aligned to your guiding principles, dictates the "tool" you use to implement them.

With DI frameworks, you often shift how the above functions are implemented. For example, you may declare an object in XML, declare dependencies through its interface specifications and provide explicit or implicit wiring instructions in the XML, have the container provide values to the instance to satisfy dependencies and have the container create and destroy the object. Note that in the XML (or java based) spring configuration, spring will automatically inject values into your object based on the "name" or "type" so that you do not even have to specify which specific object will satisfy the dependency.

That's a lot like the cake pattern in scala, or even just using implicits (which is a flavor of DI, but more human powered than container powered). Implicit values in scala are found automatically and statically by the compiler, and you have to manually and very intentionally ensure that the right implicit is in scope for your needs. That's just like ensuring you define your object in XML and create it in the container so that the wiring occurs. You do have to be explicit, though, and use the keyword implicit in your code. And you still have to write the XML or the java code (with annotations) to specify which objects should be created, just as you have to write the scala source code. In Bonér's cake pattern, you still had to specify which objects in the registry needed to be created, similar to XML or java config in spring DI. And in the java case, you typically use interfaces to specify which "properties" are available on service objects (you are using method signatures in interfaces to specify what are really value dependencies) versus just using abstract member declarations in scala.
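
The implicits-as-DI analogy can be made concrete. In this hypothetical sketch, the implicit val plays the role of the bean definition and the implicit parameter plays the role of the injection point, with resolution by type at compile time:

```scala
// A hypothetical repository interface.
trait Lookup { def get(id: Int): String }

object LookupWiring {
  // The "bean definition": one implicit Lookup in scope.
  implicit val inMemoryLookup: Lookup = new Lookup {
    def get(id: Int): String = s"record-$id"
  }
}

object LookupService {
  // The "injection point": the compiler resolves the implicit Lookup,
  // much as a DI container resolves a dependency by type.
  def describe(id: Int)(implicit repo: Lookup): String = repo.get(id)
}

object LookupMain {
  import LookupWiring._                    // bring the "wiring" into scope
  val result: String = LookupService.describe(1)
}
```

Removing the import is the compile-time analogue of forgetting a bean definition: the program fails before it ever runs.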

In scala+cake, injection really means "ensuring that the dependencies you have specified are satisfied prior to using an object." That's also like spring DI, where injection means the container figures out the dependencies and ensures they are satisfied. In spring DI, the runtime container does the reasoning and the injecting; in scala+cake, the scala compiler does the "ensuring" and the programmer does the reasoning.

So there is no free lunch. The functions above must be satisfied in some way, whether you use DI or scala+cake. The cake pattern in scala gives you additional choices that can, if you wish, be more fine grained than a DI container model, but at the trade-off of having to "see" and write more source code versus XML or annotations.

Maybe for some, that's more complex, maybe not. The cake-pattern is not an exact match to a DI container in that you make different trade-offs on alignment to the guiding principles, complexity and how the functionality above is achieved. That's what a hands-on architect and/or designer are for and why you should hire good ones.

###But Why Use Self-Types?

Just like Bonér said, you have 3-4 different ways to handle dependency injection. The part of DI that forces you to declare members in your final object varies, and this also causes confusion. For example, you can use self-types or you can use abstract value members (and other techniques are available, like structural types). Here's a note that suggests that self-types are not necessary.

The key thought is that if you have 2-3 components, the hierarchy is mostly linear, and no component you are mixing in depends on the other services you are also mixing in, then abstract members are fine and probably just as readable.

However, if you are mixing several traits into another trait, and each trait may depend on the others' types (for example, all the layers depend on various refinements of the types), then the self-type makes this more composable when it comes time to declare the downstream traits (the sub-traits that are the "layers" in the cake). But in this case, you are layering abstractions of types, not data. In other words, you still, at instantiation time, have to make sure that your final object inherits from the proper set of traits. Only the abstract member mechanism forces the declaration of the required vals/defs. If you are not layering types and having interactions between those layers, then you can use either inheritance or self-types to express the data dependency, and it's a readability choice.
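
For the simple, linear case, an abstract member alone is enough and there is no self-type in sight. A hypothetical sketch:

```scala
trait Logging {
  def log(msg: String): Unit
}

// The component only needs the *final object* to provide a logger:
// an abstract member suffices; no self-type is required.
trait Worker {
  def logger: Logging                        // abstract member, supplied downstream
  def run(): String = { logger.log("running"); "done" }
}

object WorkerApp extends Worker {
  val messages = scala.collection.mutable.Buffer[String]()
  val logger: Logging = new Logging { def log(msg: String) = messages += msg }
}
```

The compiler still refuses to instantiate anything until `logger` is populated, so the early-error-detection principle is preserved without the extra machinery.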

You may not want to use self-types if you feel that the incremental readability gain is not large. However, think about using structural types from the get-go. You could use a structural type to declare that a service component trait "extends" from a structural type that declares that a val member must exist. So instead of using a hierarchy of named dependencies to declare the abstract member, we could have used the much less clear structural types. Then you might ask, "what's the point of even declaring a big, old fancy type with the abstract value member?" It's even more typing!
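
Here is what that structural-type variant might look like (a hypothetical sketch; note that scala 2 requires the reflectiveCalls feature import, and the structural member is accessed by reflection at runtime):

```scala
import scala.language.reflectiveCalls

// The dependency expressed as a bare structural type: the final object must
// simply have a member named `repo`. No named trait carries the intent.
trait NeedsRepo { self: { def repo: String } =>
  def describe: String = "using " + self.repo
}

object StructuralApp extends NeedsRepo {
  val repo: String = "in-memory"
}
```

It works, but the dependency has no name: every reader must parse the structural type each time, which is exactly the readability cost the article describes.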

So in scala, there are a few ways to expose the dependency, but there is a hierarchy of readability. The hierarchy of options shifts how you "state the intent that something downstream must declare something to make this upstream component work": from the crazily specific, has-to-be-typed-out-each-time structural type, to something a bit clearer in its intent and less repetitive, a trait with an abstract value member (which is like putting a "tag" on a structural type), to something clearer still about its intent, the self-type. So yes, I think self-types improve readability.

And now for the kicker. If you are using existential types within your component and your "val" definition is an existential type, there are specific situations where only the self-type allows you to compile. See this blog for details on existential types: Most programmers will not run into this situation with simple services that have simple dependencies on "data", and hence, to them, the inheritance approach will always seem much clearer than the self-type approach. An existential type is like a cake pattern where you are layering in types (one layer at a time, with parts of each layer dependent on different types in the other layers) and providing refinements at each layer. The only way to do this and stay type safe (so no casting!) is to use self-types. Composition of "services" in most implementations that you see quoted on the web is mostly about abstracting over data, and the abstraction is one of data dependency. That type of dependency is concisely expressed with either inheritance or self-types.

Self-types were created to help with layering over types. But it is a better practice, I think, to always use self-types, to stay consistent in case different layers abstract over data dependencies or types. So the message is: use self-types. At some point, you'll have learned enough about self-types that this consistency will help you when you must abstract and layer over types.

###Injecting by Type or by Name

In spring, you can inject and resolve the injection by bean name or by bean type. Most people use type injection. But if you have multiple objects of the same type in your bean container, you have to disambiguate which object should be used, in which case injection by name is often used.

Here's a great note that covers how to handle this scenario: and a great blog describing why you should sometimes use defs instead of vals in your layers:
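
The def-versus-val point deserves a quick illustration. In this hypothetical sketch, a trait computes a val during initialization from a member of another layer; declaring that member as a def sidesteps trait initialization-order problems, and a def can still be implemented downstream by a val or lazy val:

```scala
trait NameComponent {
  def name: String                   // a def is safe against init-order issues
}

trait GreetingComponent { self: NameComponent =>
  // If `name` were a strict val consumed here at construction time, trait
  // initialization order could observe it before it was assigned.
  // Because it is a def, this is safe.
  val greeting: String = "hello " + name
}

object Greeter extends NameComponent with GreetingComponent {
  def name = "world"                 // could also be overridden by a val or lazy val
}
```

Had NameComponent declared `val name` and Greeter initialized it as a strict val, `greeting` could have been computed as "hello null"; the def avoids that whole class of bug.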
