Giving Sisyphus a Hand: How to Improve OOP with Functional Principles

Yet another rant about object-oriented programming, but mainly a collection of ideas stolen from functional programming to improve it.

Disclaimer: This text is targeting C# developers, but many of the concepts directly translate to other OOP languages. Also, throughout this text I will always use the term object-oriented programming (OOP), even when I actually mean object-oriented design (OOD), object-oriented analysis (OOA), or object-oriented software engineering (OOSE).

TL;DR

Learning a functional programming language will make you a better OOP developer.

How OOP failed us

Object-oriented programming (OOP) promised to reduce the complexity of programs by introducing type hierarchies and a new way of encapsulation. But somehow we still have to learn and use numerous design patterns, laws, principles, and architectural styles, before our code is considered "clean" by the OOP overlords. Not only do these patterns and rules introduce additional complexity, they are also inconsistent and notoriously outdated.

OOP patterns might appear like computer science, but mostly they're actually not. Usually they are derived from personal experience rather than scientific research. That doesn't make them necessarily bad, but they don't carry any objective truth either. Yet they are sometimes presented and taught as if they were objectively true, which puts OOP closer to pseudoscience like diet schemes than to actual science.

The false promises of OOP often leave us developers with a nagging feeling of inadequacy. No matter how thoroughly we follow all the patterns and principles, somehow they still won't lead us into the light as promised. But instead of questioning the validity of the articles and books we read, we question our competence: "Did we really implement it the right way? Surely we must have done something wrong!"


"...until we sufficiently obscured the fact that we just want to save
some data from a web request into a database?"

To make matters worse, most OOP languages are not very restrictive. You can ignore all the fancy design patterns and fall back to a procedural programming style at a whim without the compiler warning you. This makes maintaining OOP software a Sisyphean task: that boulder you almost got up the hill of clean architecture is just one unnoticed feature merge away from rolling all the way back down...

You might argue that OOP languages gave us useful abstractions like smart pointers and garbage collection, but just like all the design patterns we are supposed to use, these are no more than band-aids. They are means to keep OOP barely working without addressing its fundamental issues.

"Garbage collection: the greatest hack ever imposed on any programmer; the final admission that we are terrible at dealing with side effects."
-- Robert C. Martin

It is about time we reconsider the OOP practices we learned and evaluate the usefulness of other programming paradigms in our contexts.

What OOP can learn from functional programming

What I'm about to present in this text is nothing new or revolutionary. Functional programming (FP) has been making its way into C# and Java for years. Let's have a look at some of the changes to the C# language, for instance:

  • LINQ
    Working declaratively with lists is one of the core features of FP.
  • delegates
    Kind of make functions first class objects of the language, as they are by default in FP languages. This allows for higher-order functions, which are a core design mechanic in FP.
  • method groups
    Reduce noise when working with higher-order functions.
  • async/await
    Is based on a very common abstraction from FP, called monad.
  • lambdas and the Func type
    Almost make functions first class objects of the language, but the need for explicit typing makes them a bit unwieldy sometimes.
  • type inference
    Still rather basic compared to FP languages like F# or Haskell, but eliminates much of the type noise.
  • value tuples
    Very common in FP and the syntax integration of ValueTuple makes working with tuples almost as easy as in FP.
  • pattern matching
    Also one of the core features in FP.
  • local functions
    Possible by default in FP, because functions are first class objects.
  • deconstructors and switch expressions
    Make working with tuples and other deconstructible types as easy as in FP.
  • nullable reference types
    Gets the type T? a bit closer to the Maybe<T>/Option<T> type from FP.

Having a closer look at FP is also worthwhile for OOP developers, because many of the patterns they have to learn are realized in FP with literally zero overhead.

So why does everything seem so much easier in FP languages? Where did OOP go wrong?

State is the enemy

Ever had a crash due to a null reference exception? Ever gotten a headache juggling all the variables and their current values in a 100+ line method? Ever had to fix a deadlock or a race condition? That's all state's fault! Or more precisely: the changes done to state.

Unfortunately, in OOP state changes can happen anytime and anywhere. It is possible that some minor private method unexpectedly writes to a database or sends a web request. Because objects are passed as references, all parts of a program can change them as they see fit. And since this is the modus operandi OOP defaults to, we don't even get a slap on the wrist by the compiler for doing so. It's no wonder that even greenfield OOP systems turn into a convoluted mess faster than you can recite the SOLID principles.

Acknowledging the problem

What programmers of all paradigms can agree on is that changing state is necessary eventually. Without state changes we would have no side effects and no way to receive inputs or generate outputs. There would be no UI and no I/O. There wouldn't even be a lousy "Hello world" program. The question is how state changes are accomplished with a programming language.

Imperative programs are basically a series of minute and well-timed state changes which eventually yield the desired output state. This model of computation goes all the way back to the Turing machine and is inherently stateful.

The close ties to such a stateful computational model are the reason why changing state is the very first thing you learn in any imperative programming language tutorial. After all you are learning to code in order to "do stuff", right? So it's obvious you first have to learn the basics, like how to change the value of a variable or how to print something to the console. Right?

The problem is that changing state is actually non-trivial from the standpoint of computational logic. By that I mean how well the program code allows one to reason about it. Let me give you an example. It's a little bit contrived, but bear with me.

Consider this harmless little C# function:

bool IsFoo(int a) {
    if (a < 0) a *= -1;
    return a < 3;
}

You will probably see that it checks whether a number is within the range (-3,3) and can be replaced with a simpler version:

bool IsFoo(int a) => a < 3 && a > -3;

Now have a look at this version of the same function:

bool IsFoo2(int a) {
    var b = a < 0 ? a * -1 : a;
    return b < 3;
}

It does the same but without changing the state of the variable a. This allows us to reduce it to the simpler version almost algebraically:

bool IsFoo2(int a) => (a < 0 ? a * -1 : a) < 3;
bool IsFoo2(int a) => a < 0 ? a * -1 < 3 : a < 3;
bool IsFoo2(int a) => (a < 0 && a * -1 < 3) || (!(a < 0) && a < 3);
bool IsFoo2(int a) => (a < 0 && a * -1 < 3) || (a >= 0 && a < 3);
bool IsFoo2(int a) => (a < 0 && a > -3) || (a >= 0 && a < 3);
bool IsFoo2(int a) => (a < 0 || a >= 0) && (a < 0 || a < 3) && (a > -3 || a >= 0) && (a > -3 || a < 3);
bool IsFoo2(int a) => (a < 0 || a < 3) && (a > -3 || a >= 0) && (a > -3 || a < 3);
bool IsFoo2(int a) => a < 3 && (a > -3 || a >= 0) && (a > -3 || a < 3);
bool IsFoo2(int a) => a < 3 && a > -3 && (a > -3 || a < 3);
bool IsFoo2(int a) => a < 3 && a > -3;

Disclaimer
Obviously, no one in their right mind would attempt to refactor IsFoo2() the way I just did. But that's beside the point I'm trying to make, so bear with me just a little longer 😊

That was only possible because IsFoo2() is stateless. Plus, we didn't even have to understand what the function does in order to do that! The absence of state changes gives us a whole new way of thinking about code. And not only us; automated optimizers and refactoring tools can benefit from that as well.

The same transformation is not possible with IsFoo(), because its return value a < 3 depends not on the value of a when it was passed into the function, but on the current state of a after all previous statements have been executed. This temporal coupling essentially forces us to understand the meaning of the whole function before we can consider refactoring it.

I hope you can see how stateful code complicates matters even though it often seems very simple to us. FP acknowledges that and went a very different way for that exact reason.

Isolating state changes

FP languages like Haskell are rooted in a different computational model called the lambda calculus which is declarative in nature. Functional programs only describe the desired output state and let the compiler and/or runtime work out the concrete imperative instructions.

This allowed the developers of Haskell to start by creating a language that is good at expressing computations while ignoring state changes altogether.

"Let there be... nothing."
-- the complete backlog of the first ever written Haskell program

Only after they were satisfied with that, they began to think about how to deal with state in Haskell. And as it turns out, they figured out a way to bake side effects into the type system. In Haskell state changes are strictly isolated from the majority of the code and only some functions are capable of changing state (they must return the right type). And that isolation goes even further, because actually no Haskell function can change system state by itself. A function can only request state changes and later the runtime will combine all of them into some sort of program within a program. That might sound very complicated, but I can assure you it's not harder to use than C#'s async/await feature.

Fun fact
The average Haskell book explains printing to the console only after you've read 41.1% of its contents (sample size: 7).

Generally speaking, FP drastically limits the ways developers can change state in order to prevent them from messing up their code. Let's talk about the different ways FP languages achieve that.

Pure functions

A function...

  1. that always returns the same output when given the same input
  2. and that has no side effects

...is called a pure function. A function that does not satisfy those constraints is called an impure function.

In order to obey the first constraint a function must depend only on its parameters. It cannot have any dependencies on instance members or global variables, and it cannot read additional inputs from a database, a web service, or even just the system clock. A more subtle implication is that the parameter values must not change midway through its execution. Another thread changing the state of a parameter object, for instance, might not only cause a race condition, but will also make the function in question impure.

The second constraint basically says that a pure function's only effect must be its return value. It is not allowed to write to a database or interact with any other external system. It is not even allowed to make changes to the input parameters.

Not-so-fun fact
Impurity is a highly contagious condition. Every function that calls an impure function will automatically become impure itself which then transitively makes its callers impure as well.
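
To illustrate the contagion with a small sketch (GetShippingCost() and GetTotalCost() are hypothetical functions made up for this example):

// Looks harmless, but the call to the impure DateTime.Now makes it impure...
decimal GetShippingCost(Order o)
    => DateTime.Now.DayOfWeek == DayOfWeek.Sunday ? 10m : 5m;

// ...which in turn makes this caller impure, and so on up the call chain.
decimal GetTotalCost(Order o)
    => GetTotalPrice(o) + GetShippingCost(o);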

Let's see some example functions. Can you tell the pure from the impure ones?

void SetTotalPrice(Order o) {
    o.TotalPrice = _db.OrderItems
        .Where(i => o.ItemIds.Contains(i.Id))
        .Select(i => i.Price)
        .Sum();
}
Is it pure?

🙀 No, it's impure, for multiple reasons. The most important being that we are accessing the instance member _db which is not part of the input. On top of that we might even be doing a query to a database!


void SetTotalPrice(Order o) {
    o.TotalPrice = o.Items.Select(i => i.Price).Sum();
}
Is it pure?

😿 No, it's still impure, because we are modifying the input. That's just rude!


decimal GetTotalPrice(Order o) => o.Items.Select(i => i.Price).Sum();
Is it pure?

😻 Yes, it's purr!


decimal GetTotalPrice(Order o) {
    var sum = 0m;

    foreach (var i in o.Items)
        sum += i.Price;

    return sum;
}
Is it pure?

😸 Yes, it's pure! Compared to the previous example the code is less functional, but it's still a pure function.


int GetDaysSinceCSharp8()
    => DateTime.Now.Subtract(new DateTime(2019, 9, 23)).Days;
Is it pure?

😿 No, it's impure, because it won't return the same result when called tomorrow.


bool IsInternationalDelivery(Order o)
    => o.OrderedBy.Account.Address.Country != "UG";
Is it pure?

😸 Yes, it's pure! Although it looks a bit unclean, it's still a pure function.


bool IsInternationalDelivery(Order o) {
    if (o.OrderedBy.Account.Address == null)
        throw new UnknownDeliveryAddressException();

    return o.OrderedBy.Account.Address.Country != "UG";
}
Is it pure?

🙀 No, it's impure, because exceptions are side effects. Pure functions have to capture failure in their return type (e.g. bool? instead of bool).


bool? IsInternationalDelivery(Order o) {
    if (o.OrderedBy.Account.Address == null) {
        Console.WriteLine("ERR: unknown delivery address");
        return null;
    }

    return o.OrderedBy.Account.Address.Country != "UG";
}
Is it pure?

😿 No, it's impure, because writing to the console is a side effect.


Making a function pure might seem quite limiting, but it's definitely worth it, because pure functions have some very useful properties. They are...

  • easier to understand, because they depend solely on their input. The scope developers have to keep in their head when reading them is minimized.
  • testable, because the only dependency is their input and the only effect is their output. The costs for both the "arrange" and "assert" steps in a unit test are minimized.
  • reusable, because they have no side effects. Does the function signature suit your needs? Great! Use it whenever and wherever you want without having to worry about introducing bugs.
  • scalable, because they don't share state with any other functions. This makes them easily parallelizable.

Pure functions are also referentially transparent, which is just a fancy way of saying that a call to a pure function can always be replaced by its return value. This implies that they are easier to understand, but is mainly useful for the compiler optimizations functional languages do.
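
As a small illustration of referential transparency (order is a hypothetical variable and GetTotalPrice() is assumed to be the pure function from above):

// Because GetTotalPrice() is pure, any call can be replaced by its result:
var total = GetTotalPrice(order) + GetTotalPrice(order);

// ...which is guaranteed to behave exactly like:
var price = GetTotalPrice(order);
var totalAgain = price + price;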

From my own experience I can tell that working with pure functions changes how you read code. Often it is enough to check the signature of a pure function in order to understand what it does. Also you will need to fire up your debugger far less, because most of the time you can debug pure functions "in your head".

Fun fact
Haskell developers have their own search engine (Hoogle) to search for a function by signature. This whole idea only works because the signatures of pure functions are so expressive.

Given the many beneficial traits of pure functions, it should be evident that the majority of a system's code base--especially its business logic--should be composed of pure functions. But how can we work effectively with pure functions in a language like C#, where impure functions are the norm?

Pure functions in C#

Unfortunately the C# compiler cannot guarantee that a function is pure (yet) so we have to rely on conventions and discipline. We can use C# language features and dependency injection (DI) to our advantage, though. Hence, I propose the following set of rules to ensure that a C# function adheres to the two purity laws as much as possible:

  1. Pure functions must be marked static.
  2. Impure functions must be non-static.
  3. Classes having impure methods must be injected via DI.
  4. Parameters of a pure function must not expose impure methods.
  5. Parameters of a pure function must be value types or immutable reference types.
  6. Pure functions must not throw exceptions.

The first three rules make sure a pure C# function cannot interact with impure dependencies, because it will have no access to instance members. However, calls to impure static members of the framework or 3rd party libraries (e.g. Console.WriteLine() or DateTime.Now) still need to be caught in reviews.
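
Here is a minimal sketch of what rules 1 to 3 look like in practice (OrderPricingService and IOrderRepo are hypothetical types):

// impure class: non-static and injected via DI
public class OrderPricingService {
    private readonly IOrderRepo _repo;

    public OrderPricingService(IOrderRepo repo) => _repo = repo;

    // impure: reads from the repository, therefore non-static
    public decimal GetTotalPrice(int orderId)
        => CalculateTotal(_repo.Load(orderId));

    // pure: static and dependent only on its parameter
    public static decimal CalculateTotal(Order o)
        => o.Items.Select(i => i.Price).Sum();
}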

Objection!
OOP evangelists might protest that static methods cannot be mocked easily and should therefore not be part of public interfaces. I counter that there is no need to mock a pure function, because it has no dependencies anyway.

With the fourth rule we prevent impure dependencies from being injected into a pure function as parameters. The fifth rule's purpose is to prohibit pure functions from modifying their inputs. Both are rather strict, because in practice it might be non-trivial to make your types immutable (more on that in the next section).

The sixth rule is obvious, because exceptions are side effects. Even though exceptions can pop up in unexpected places (Yes, I'm looking at you NullReferenceException!) we mainly have to look out for exceptions being used for control flow. A runtime exception that is thrown due to a bug does not make a function impure--it just makes it bugged.

To be clear: any of those rules can be relaxed if need be. They are designed to give developers (and the C# compiler) the possibility to tell pure from impure functions by only checking their signatures rather than reading their implementations. If you feel that you don't need that safety net, then go ahead and skip some of the rules. But be aware that this should be balanced with more intensive code reviews.

By the same token, I'd like to stress that you don't have to follow any of these rules to benefit from pure functions. Pure functions also help readers of your code on a smaller scale. So the next time you are working on a private method, just ask yourself if it could be written to be pure.

Immutability

Immutability means that once an object has been created it can never be changed (mutated) again. The only way to apply modifications to an immutable object is to create a modified copy of it. Immutability is the default in functional languages, whereas in imperative languages it's the other way around and mutability is usually the default.

Can we obey immutability in a language like C#? Well, kind of. Let's have a closer look.

For one, there are the obvious modifiers const and readonly. Unfortunately they are less helpful than they seem.

The modifier const works on variables and fields, but only if their value is known at compile time. It also prohibits the use of var. Hence const is almost useless for the purpose of immutability.
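
A few lines to illustrate:

const int MaxItems = 10;                 // fine: the value is known at compile time
const string Prefix = "ORD-";            // fine: string literals are compile-time constants
// const var count = 10;                 // error: const requires an explicit type
// const int seed = new Random().Next(); // error: the value is not known at compile time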

How about readonly then? It can be used on fields, but not on variables (for now) which is a huge bummer. One also has to be aware that a readonly List<T> can still be changed with Add(), Remove() etc. For whatever it's worth, we can make readonly structs though:

public readonly struct Point {
    public readonly double X;
    public readonly double Y;
    public Point(double x, double y) => (X, Y) = (x, y);
}

Immutable types

When designing immutable types, C# developers have to restrict themselves to properties without setters or readonly fields. The immutable alternatives for collections are IEnumerable<T>, IReadOnlyList<T> or the types from the System.Collections.Immutable namespace.

Fun fact
The Roslyn C# compiler makes heavy use of ImmutableArray<T> from System.Collections.Immutable.

Here's an example of an immutable Order class:

public class Order {
    public User OrderedBy { get; }
    public IEnumerable<Item> Items { get; }

    public Order(User orderedBy, IEnumerable<Item> items)
        => (OrderedBy, Items) = (orderedBy, items.ToArray());

    public Order AddItem(Item i)
        => new Order(OrderedBy, Items.Append(i));
}

Note how AddItem() creates a new order instead of changing this. Unfortunately the constructor has to copy the incoming items. Otherwise code like this could change the Items property from outside:

var items = new[] { new Item("foo"), new Item("bar") };
var order = new Order(user, items); // `order.Items` contains "foo" and "bar"
items[1] = new Item("qux"); // `order.Items` now contains "foo" and "qux"

All this copying of course comes with a performance cost. Can we not avoid copying the items when we can guarantee nobody will change them?

public class Order {
    public User OrderedBy { get; }
    public IEnumerable<Item> Items { get; }

    internal Order(User orderedBy, IEnumerable<Item> items)
        => (OrderedBy, Items) = (orderedBy, items);

    public static Order Create(User orderedBy, IEnumerable<Item> items)
        => new Order(orderedBy, items.ToArray());

    public Order AddItem(Item i)
        => new Order(OrderedBy, Items.Append(i));
}

The trick is simply to move the copying from the constructor to a public factory method and hide our constructor from external access. Now internal code can create Orders with almost no performance penalty and external code cannot modify Items from outside. Of course the types User and Item have to be immutable as well for this to make sense.

Fun fact
C# strings are immutable.

In functional languages one does not have to jump through that many hoops in order to create an immutable type. In F# an immutable Order type can be defined in a single line:

type Order = { orderedBy: User; items: Item list }

This is possible because we can be very sure that the types User, Item and list are immutable since immutability is the default in F#. In Haskell their immutability is even guaranteed by the compiler.

The C# team is currently considering adding a very similar feature to the language. They call it record types and it would allow us to create immutable types very easily:

public class Order(User orderedBy, IEnumerable<Item> items);

Yes, the class definition really ends with that semicolon. The declaration of the properties, the implementation of Equals(), and all other kinds of noisy technical details are generated by the compiler. Below you can see what the compiler-generated code for the above class definition might look like:

public class Order : IEquatable<Order> {
    public User OrderedBy { get; }
    public IEnumerable<Item> Items { get; }

    public Order(User orderedBy, IEnumerable<Item> items)
        => (OrderedBy, Items) = (orderedBy, items);

    public bool Equals(Order other)
        => Equals(OrderedBy, other.OrderedBy)
        && Equals(Items, other.Items);

    public override bool Equals(object other)
        => other is Order o && Equals(o);

    public override int GetHashCode()
        => (OrderedBy?.GetHashCode() * 17 + Items.GetHashCode())
            .GetValueOrDefault();

    public Order With(
        User OrderedBy = this.OrderedBy,
        IEnumerable<Item> Items = this.Items)
        => new Order(OrderedBy, Items);

    public void Deconstruct(
        out User OrderedBy,
        out IEnumerable<Item> Items)
        => (OrderedBy, Items) = (this.OrderedBy, this.Items);
}

Immutable variables

Immutability can also be applied when writing functions. As mentioned before this lacks compiler support in C# and is therefore a question of discipline and coding guidelines.

Consider this function that calculates the nth Fibonacci number:

int Fibonacci(int n) {
    int result;

    if (n < 1)
        result = 0;
    else if (n < 3)
        result = 1;
    else
        result = Fibonacci(n - 2) + Fibonacci(n - 1);

    return result;
}

Seems harmless, you might think. We even obeyed the famous "single entry, single exit" rule. Unfortunately this rule forces us to declare the variable result first and assign its value later, which is not the immutable way.

That's why "single entry, single exit" is outdated and the preferred strategy nowadays is to "exit early" (Robert C. Martin, Clean Code, 48f.) as in this revised Fibonacci() function:

int Fibonacci(int n) {
    if (n < 1) return 0;
    if (n < 3) return 1;
    return Fibonacci(n - 2) + Fibonacci(n - 1);
}

Turns out there really is no need for a mutable variable at all.

Below is another example. This time we are gradually building up a final result:

string PrintLetterGreeting(User user) {
    var greeting = "Dear";
    if (user.Title != null) greeting += $" {user.Title}";
    greeting += $" {user.FirstName}";
    greeting += $" {user.LastName}";
    return greeting;
}

Now compare with this revised version that does not mutate any variables:

string PrintLetterGreeting(User user) {
    var title = user.Title == null ? "" : $" {user.Title}";
    return $"Dear{title} {user.FirstName} {user.LastName}";
}

Note how much easier it is now to extract the generation of the value for title into a pure helper function. In the previous version this would require a non-trivial refactoring.
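
For illustration, the extraction could look like this (FormatTitle() is a hypothetical helper):

static string FormatTitle(User user)
    => user.Title == null ? "" : $" {user.Title}";

string PrintLetterGreeting(User user)
    => $"Dear{FormatTitle(user)} {user.FirstName} {user.LastName}";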

Why bother with immutability?

Making your types immutable and treating your variables as such seems to be quite a hassle in OOP languages. Why do I think it's still worth it?

Mutability introduces dependency on time

Since a mutable object might change its value some time after it has been initialized, its state actually depends not only on the initialization but potentially on every statement that has been executed afterwards. Thus functions that use mutable objects are harder to understand and harder to refactor.

Some of the above examples may seem like nitpicking, but once a function reaches a certain size such little mutations quickly increase complexity. Every mutable local object introduces more mutable state that readers have to keep track of.

Immutable objects are inherently thread-safe

If no other thread can change an object's state then there is no need for locking mechanisms like mutexes. This of course improves scalability.

But is immutability really worth the trouble?

While the benefits and feasibility of immutability on the level of variables are hopefully apparent by now, the same might not be true when it comes to immutable types. In order to get a better understanding of how immutable types work in practice, we should therefore try it with two examples.

The first one is rather simple: we are modifying an ImmutableList of Foo objects.

var originalFoos = ImmutableList.Create(
        new Foo(1), new Foo(2), new Foo(3));
var modifiedFoos = originalFoos.Add(new Foo(0));

On the surface there is nothing particularly surprising happening here. The first statement creates a list with three elements and the second statement adds a fourth. The main difference to conventional collection types like List<T> is that Add(T) never mutates the original list--it creates a new one from it and appends the additional element.

This works similarly to LINQ's IEnumerable where each extension method always creates a new IEnumerable from the previous one. However, compared to ImmutableList an IEnumerable has some major disadvantages:

  • IEnumerables can be mutated by changing the underlying collection. Hence, the receiver of an IEnumerable has no immutability guarantee (see the sketch after this list).
  • The receiver might even mutate the IEnumerable itself by casting it to the underlying collection type.
  • Enumerating an IEnumerable might trigger side effects and is not thread-safe.
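
A minimal sketch of the first two points:

var list = new List<int> { 1, 2, 3 };
IEnumerable<int> xs = list;  // hand out an immutable-looking IEnumerable

list.Add(4);                 // the underlying collection changes, and so does xs
((List<int>)xs).Add(5);      // the receiver can even cast and mutate it directly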

Persistent data structures

You might wonder how immutable collections can ever be efficient, when every modification creates a new collection from the old one. Who's gonna pay for all that copying of data in order to guarantee immutability? Well, the trick is that most immutable collections are implemented in such a way that copying is minimized by maximizing the amount of shared elements. Such immutable collections are called persistent data structures. But how does that work? We can shed some light on this by looking at a typical example from functional programming:

let xs = [1; 2; 3]
let xs' = 0 :: xs // `xs'` contains `[0; 1; 2; 3]`

Those two lines of F# first declare the list xs and then another one called xs' which has the same elements, but with an additional one in front (the operator :: prepends an element to a list). Because in F# (like many other functional languages) lists are implemented as linked lists internally, the list xs never had to be copied when xs' was created. The diagram below shows what's going on:

As you know, in a linked list each element points to another linked list that holds the elements following it. Therefore, to reference the whole list, xs only has to remember the first element, also known as the "head". When the 0 is prepended, that new element only has to keep a reference to the old head and becomes the new head--but only for xs'! The variable xs still considers 1 to be the head of the list. This is why an immutable linked list does not need to copy any data when elements are prepended or the head is removed. You are basically getting immutability for free! It's no surprise that the linked list is FP's favorite data structure. Take a minute to appreciate its simplicity and efficiency.
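
The same structural sharing can be sketched in C# with a minimal, hypothetical ConsList type (null checks omitted for brevity):

public sealed class ConsList<T> {
    public readonly T Head;
    public readonly ConsList<T> Tail;

    public ConsList(T head, ConsList<T> tail) => (Head, Tail) = (head, tail);

    // Prepending never copies: the new node simply points at the existing list.
    public ConsList<T> Prepend(T head) => new ConsList<T>(head, this);
}

var xs = new ConsList<int>(1, new ConsList<int>(2, new ConsList<int>(3, null)));
var ys = xs.Prepend(0); // ys is [0; 1; 2; 3], xs is still [1; 2; 3]--every node of xs is shared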

The ImmutableList type from above is implemented as a binary tree. I guess the reason for that is that C# developers would find it unfamiliarly constraining to only be able to efficiently add or remove elements at the front of the list. Compared to linked lists, binary trees are somewhat slower in that regard, but they are more versatile and still quite efficient when it comes to indexed access, for instance.

The funny thing about persistent data structures is that one might fear their performance costs due to their immutability, while they actually might improve performance, because of their efficient sharing of memory. This is especially true in multi-threaded scenarios where thread-safety often is achieved by elaborate locking mechanisms or by copying data over and over again.

It's all or nothing

The immutability of the collections only really makes sense when the elements are immutable themselves. One could say that when you want immutability, you'd better go all in. So let's try immutability with a more complex example. First, we'll define a couple of minimal classes to make up our immutable object graph:

// a junkyard has an address and some cars
public class Junkyard {
    public readonly string Address;
    public readonly ImmutableList<Car> Cars;
    public Junkyard(string address, ImmutableList<Car> cars)
        => (Address, Cars) = (address, cars);
}

// a car has a license plate and some tires
public class Car {
    public readonly string Plate;
    public readonly ImmutableList<Tire> Tires;
    public Car(string plate, ImmutableList<Tire> tires)
        => (Plate, Tires) = (plate, tires);
}

// a tire has an ID like "front left", "rear right", etc.
public class Tire {
    public readonly string Id;
    public Tire(string id) => Id = id;
}

Nothing too fancy here (quite literally). A Junkyard simply has a list of Cars which in turn have a list of Tires. So let's create an actual graph from those classes:

// a list of three junkyards
var junkyards = ImmutableList.Create(

    // a junkyard with two cars
    new Junkyard("1340 E Bay Ave", ImmutableList.Create(

        // a car without any tires
        new Car("HVU 9747", ImmutableList<Tire>.Empty),

        // a car with a full set of tires
        new Car("GWH 8379", ImmutableList.Create(
            new Tire("front left"),
            new Tire("front right"),
            new Tire("rear left"),
            new Tire("rear right"),
            new Tire("spare"))))),

    // a junkyard without any cars
    new Junkyard("360 Nostrand Ave", ImmutableList<Car>.Empty),

    // a junkyard with three cars
    new Junkyard("148-36 Liberty Ave", ImmutableList.Create(

        // a car with only one tire
        new Car("AZ 3381", ImmutableList.Create(
            new Tire("front right"))),

        // a car with only two tires
        new Car("2100952", ImmutableList.Create(
            new Tire("rear left"),
            new Tire("rear right"))),

        // a car with only three tires
        new Car("HMZ 2798", ImmutableList.Create(
            new Tire("front right"),
            new Tire("rear left"),
            new Tire("rear right"))))));

We can see that some junkyards in this graph have no cars and that some of the existing cars are missing tires. Now let's say we want to refit the cars so that each of them has at least four working tires. Any missing tires should be replaced with spare tires, i.e. new Tire("spare") (we have plenty of them lying around in each junkyard).

Working bottom-up, we first define a function that creates a specified number of spare tires:

ImmutableList<Tire> GetSpareTires(int count)
    => ImmutableList.CreateRange(Enumerable
        .Range(1, count)
        .Select(_ => new Tire("spare")));

Now let's use it to refit a car:

Car RestockTires(Car car) => new Car(
    car.Plate,
    car.Tires.AddRange(GetSpareTires(Math.Max(0, 4 - car.Tires.Count))));

RestockTires() takes a Car and recreates it while adding 0 to 4 new Tire("spare") elements depending on the number of tires the car already has.

Furthermore, we define a second function that recreates a Junkyard object while mapping its Cars with RestockTires():

Junkyard RefitCars(Junkyard jy) => new Junkyard(
    jy.Address,
    jy.Cars.Select(RestockTires).ToImmutableList());

Finally, we can use that function to refit the cars of all junkyards:

var modified = junkyards.Select(RefitCars).ToImmutableList();

Beyond a naive implementation

You probably have noticed that a lot of objects are being recreated even when they don't really have to be. While that is often the case in immutable scenarios and usually makes for simpler code, one can still try to reuse objects when performance is a concern. In our example we can add two simple checks:

Car RestockTires(Car car) => car.Tires.Count >= 4 ? car : new Car(
    car.Plate,
    car.Tires.AddRange(GetSpareTires(4 - car.Tires.Count)));

Junkyard RefitCars(Junkyard jy) => jy.Cars.All(c => c.Tires.Count >= 4)
    ? jy
    : new Junkyard(
        jy.Address,
        jy.Cars.Select(RestockTires).ToImmutableList());

Now cars with four or more tires will be reused. Junkyards where all cars have a sufficient number of tires and junkyards without any cars at all will be reused as well.

However, the line jy.Cars.Select(RestockTires).ToImmutableList() in RefitCars() still always recreates the whole list of cars. And even worse, because the original list gets "lost" in the Select(), no elements can be shared when the new list is created with ToImmutableList(). Fortunately we don't have to map with Select() at all, because ImmutableList has a Replace() method:

Junkyard RefitCars(Junkyard jy) => new Junkyard(jy.Address, jy.Cars
    .Where(c => c.Tires.Count < 4)
    .Aggregate(jy.Cars, (cs, c) => cs.Replace(c, RestockTires(c))));

This allows us to reuse the original list of cars and call Replace() on it for every car that is missing tires. Now both functions make sure that every modified list shares elements with its original.

There is one last problem, though. Repeatedly modifying an ImmutableList in a loop might cause serious GC pressure due to internal restructuring of the binary tree. Fortunately, each type from System.Collections.Immutable comes with its own builder that works similarly to the StringBuilder type. Builders mimic the interfaces of their target types, but are mutable and allow for efficient in-place modifications. When all modifications are done, their underlying data structure can be "frozen" to become immutable again.
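
The standard builder workflow looks roughly like this (reusing the Cars list from the junkyard example):

var builder = jy.Cars.ToBuilder();      // cheap: wraps the existing tree without copying
builder.RemoveAt(0);                    // efficient in-place modifications...
builder.Add(new Car("XYZ 123", ImmutableList<Tire>.Empty));
var frozenCars = builder.ToImmutable(); // ...then "freeze" back into an ImmutableList<Car>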

Unfortunately, however, the ImmutableList<T>.Builder lacks the Replace() method we need. I already created a pull request to add it. In case it gets merged, we can use the builder like so:

Junkyard RefitCars(Junkyard jy) => new Junkyard(jy.Address, jy.Cars
    .Where(c => c.Tires.Count < 4)
    .Aggregate(
        jy.Cars.ToBuilder(),
        (cs, c) => { cs.Replace(c, RestockTires(c)); return cs; })
    .ToImmutable());

The aggregator function looks a bit clumsy now, but it's very efficient.

The verdict

We have seen that designing for immutability has to be done with great care when the language has almost no built-in support for it. For contrast, a devil's advocate might show you their solution for the same problem, but with a mutable object graph:

junkyards
    .SelectMany(jy => jy.Cars)
    .Where(c => c.Tires.Count < 4)
    .ForEach(c => c.Tires.AddRange(GetSpareTires(4 - c.Tires.Count)));

Rightfully, they'd claim that this is less code, that it's easier to read, and that mutable objects are easier to work with in general. But the point of immutability kind of is to take away the easy ways of mutating objects and force developers to implement modifications in a controlled manner. The eventual benefits of that approach are hard to measure and even harder to convey to developers with an imperative or object-oriented background.

I think it all comes down to just trying it out yourself. In order to facilitate that a little bit I attached a LINQPad file with the code from the above immutable object graph example.

Separate data and behavior

The next important principle functional programs obey is that they don't mix data and behavior. Both live in strictly separated realms. This is a direct consequence of immutability and purity: types cannot have any hidden state and they can only be modified by transforming them to new types with pure functions.

Mixing data and behavior breaks encapsulation

Mixing data and behavior was a new way of encapsulation introduced by OOP. Initially, encapsulation simply ensured that client code does not have to deal with the implementation details of a set of public functions which process some public data type. OOP took it one step further and proclaimed that the functions and the data should be bundled together as objects. This might sound very convenient, but actually opened a can of worms, because the data now became tangled with all the dependencies of the functions as well.

A worst case example of this is the active record pattern where a type that needs to be persisted to a database has to expose a Save() method. This Save() method obviously depends on the database, but most likely the actual data tier implementation will be injected into the object during creation in order to retain testability. And this is where things are starting to get messy.

Since creating an object of such a type involves injecting all its dependencies, it is very cumbersome to work with it in an immutable fashion. It is much easier to just mutate the same object over and over again which makes it stateful. Every stateful type, however, potentially imposes hidden dependencies on client code (e.g. the methods need to be called in the right order) which breaks encapsulation.

Instead of strengthening encapsulation, OOP actually created a new breeding ground for stateful code and hidden dependencies.

Separation benefits

A neat side effect of separating data and behavior is that it greatly reduces the concepts developers have to work with. There are no entities, value objects, domain objects, identities, methods, services, managers, etc. etc... There are just functions and data. Granted, in languages like F# one should also discern between pure and impure functions, but it's still much simpler than in OOP.

In FP this separation often looks something like this:

module Order

open User
open Item

type Order = { orderedBy: User; items: Item list }

let addItem order item =
    let items' = item :: order.items
    { order with items = items' }

let removeItem order item =
    let items' = List.filter (fun i -> i <> item) order.items
    { order with items = items' }

The above F# code is a module called Order that declares a type of the same name and two functions with which one can change the items of an order. An F# module is similar to a namespace in C#. Note that the functions addItem and removeItem cannot access any internals of the Order type--not only because there aren't any, but also because they are not tangled with each other except for being part of the same module.

In C# we can achieve the same separation by moving all object methods to a static class:

public class Order {
    public User OrderedBy { get; }
    public IEnumerable<Item> Items { get; }

    internal Order(User orderedBy, IEnumerable<Item> items)
        => (OrderedBy, Items) = (orderedBy, items);

    public static Order Create(User orderedBy, IEnumerable<Item> items)
        => new Order(orderedBy, items.ToArray());
}

public static class Orders {
    public static Order AddItem(Order o, Item i)
        => new Order(o.OrderedBy, o.Items.Append(i));

    public static Order RemoveItem(Order o, Item i)
        => new Order(o.OrderedBy, o.Items.Where(x => x != i));
}

"Static helper classes? That's blasphemy!", I hear you say. Well maybe so, but now we made sure those functions can never access any private instance members of Order and secretly change its state. To be fair we are still using the internal constructor for performance reasons, but that could be alleviated by using ImmutableList instead of IEnumerable.

Note how this eliminates the need for a service or manager class with an additional interface. There is no reason to inject a class when it only provides pure functions. After all, injection is only needed to decouple client code from impure dependencies and make them mockable.

Emphasize data flow rather than control flow

In order to understand this point we first have to properly understand what control flow and data flow are.

Control flow

OOP programs (and imperative programs in general) typically consist of multiple statements. A path through such a program executes a series of those statements which in turn mutate system state until the desired output is generated. The control flow of the program determines in which order those statements are being executed.

This idea of control moving from one statement to the next and executing each of them closely resembles the Von Neumann architecture (which itself is a realization of the Turing machine). This of course is no coincidence, because that's how the majority of our computers work internally: an instruction pointer moves between program instructions which read from and write to different kinds of memory on execution. Imperative languages are just a high-level version of that computational model.

The problem is that this model of computation does not scale well with increasing solution complexity. For machines it might, but not for human minds. This is evidenced by the fact that we constantly have to resort to debugging in order to understand program failures.

Debugging indicates cognitive overload

Debugging allows us to inspect what we actually care about when analyzing an imperative program: the data. Not the next instruction or the execution order--just the data. Of course, we always have to be aware of control flow, but that's an undesirable side effect of the fact that imperative languages only offer a weak abstraction over the internal workings of today's computers. What we really want to understand while debugging is what part of the data transformation our program implements is not working as intended.

OOP never really improved that situation, because it only introduced new ways of grouping and organizing statements. But just as in all imperative languages, its core mechanic is still the ordered execution of statements that mutate system state, and hence its focus is on expressing control flow.

Data flow

In pure functional languages like Haskell there are no statements, only expressions. An expression returns a value when it is evaluated. This return value either depends on other expressions or it is constant. Complex expressions can be named to ease readability and thus become functions with zero or more parameters.

The beauty of this model is not only its simplicity, but also that it inherently expresses data flow, because functions can only depend on their input values and only communicate outputs via their return values. Also, thanks to pure functions, FP developers almost never have to bother themselves with control flow. The compiler or runtime can decide when something needs to be executed, because there are no hidden dependencies. This lets developers focus on the business logic they are trying to build and pushes them to model their processes as chains of functions that feed values into each other.

The obvious way to chain functions is to nest their calls like so:

int Foo(int x) => Bar(Baz(Qux(x)));

However, with all those parentheses this quickly becomes unwieldy. Functional languages offer more practical ways to combine functions. In Haskell the composition operator . is widely used. F# also has composition operators (<< and >>), but F# developers usually prefer the pipe operators <| and |>. These operators lead to two distinct styles of combining functions that I'll just call the "composition style" and the "pipeline style".

The composition style

Here is what function composition looks like in Haskell:

f :: Integral a => [a] -> [a] -- optional type signature
let f xs = (take 3 . reverse . map (^2) . filter even) xs

f [1..10] -- returns `[100,64,36]`

The signature tells us that f is a function that transforms a list of as and that those as have to be Integrals. When you read the function definition from right to left, you see that the list xs first gets filtered such that only the even integers remain, then each integer is squared, and finally the list is reversed so the last three integers can be returned. Not very useful, but that's beside the point.

The point is that we use . to build a new function from other functions and then apply it to xs. You can also omit xs entirely, as it's not essential to the definition of f anyway:

let f = take 3 . reverse . map (^2) . filter even

Theoretically one could implement some sort of composition operator in any OOP language that has a type for functions (e.g. C#'s Func or Java's Function), but due to technical limitations the results won't be very readable in practice:

Func<A, C> Compose<A, B, C>(Func<A, B> f1, Func<B, C> f2)
    => x => f2(f1(x));

bool IsEven(int x) => x % 2 == 0;
var h = Compose<string, int, bool>(int.Parse, IsEven);

h("123"); // returns `False`

The pipeline style

We have seen above that function composition is a concept too alien for OOP languages and cannot be used to our advantage. In contrast, F#'s pipe operators are based on function application which is implemented in virtually all programming languages. So let's have a closer look:

let f xs = xs |> List.filter (fun x -> x % 2 = 0)
              |> List.map (fun x -> x * x)
              |> List.rev
              |> List.take 3

f [1..10] // returns `[100; 64; 36]`

This F# function does the same as the one from the Haskell example above, but uses |> to combine functions. The |> operator (called the pipe operator or pipe forward operator) is just an alternative notation for function application where the arguments are flipped. So instead of writing foo x, which applies the function foo to the argument x, we can also write x |> foo. This may seem useless on its own, but allows us to build very readable chains.

Fun fact
In FP even function application can be considered a function. It's an implicit infix operator with a function name on its left and an argument expression on its right side.

Let's have a look at a more practical example. A few years ago I wrote myself a tool to translate localized exception messages. The website unlocalize.com was down for a while back then and I wanted to get some experience with F# in a real-world scenario. In the end I was amazed how its core logic boiled down to two functions and how neatly the whole process could be expressed using the pipe operator:

// translates `exceptionMessage` from `sourceLang` to `targetLang`
let translate targetLang sourceLang exceptionMessage =
    // load all messages of `sourceLang`
    getMessageResources sourceLang
    // prepare them for matching
    |> Seq.map toMatchCandidate
    // find key of our input message
    |> findBestMatch exceptionMessage
    // use found key to load same message in `targetLang`
    |> getMessage targetLang

// finds best match for `target` among `candidates`
let findBestMatch target candidates =
    candidates
    // compute all match scores in parallel
    |> PSeq.map (getMatchScore target)
    |> Seq.sortByDescending score
    |> Seq.head
    // return key of best match
    |> id

The pipeline style in C#

Note how OOP's . notation works similarly to F#'s |> operator: the object x on the left side of x.Foo() actually is the first argument of Foo(). In fact every instance method has an implicit first argument that is referred to as this. C#'s extension methods make that first argument explicit and mark it with the this modifier. That's why the F# function f from earlier directly translates to C# with the LINQ extensions:

IEnumerable<int> F(IEnumerable<int> xs) => xs
    .Where(x => x % 2 == 0)
    .Select(x => x * x)
    .Reverse()
    .Take(3);

F(Enumerable.Range(1, 10)).ToArray(); // returns `new[] { 100, 64, 36 }`

Similar to the builder pattern, the LINQ extensions allow us to create method chains as long as the methods always return a specific type. But of course, we don't always have to restrict ourselves that way. We can create extension methods on any type in order to reshape a series of function call statements into a method chain expression. Say we have the following three functions:

B Foo(A a) { ... }
C Bar(B b) { ... }
D Qux(C c) { ... }

I omitted their actual implementations for brevity, but we can see from the signatures that they model some kind of process that transforms the type A into the type D:

var b = Foo(a);
var c = Bar(b);
var d = Qux(c);

This could be condensed to:

var d = Qux(Bar(Foo(a)));

But those nested parentheses don't make it very readable. How about we turn those functions into extension methods instead:

B Foo(this A a) { ... }
C Bar(this B b) { ... }
D Qux(this C c) { ... }

And then chain them like so:

var d = a.Foo().Bar().Qux();

Now that's much better! No nesting and no noisy temporary variables either.

Again, let's have a look at a more practical example. The following C# class is an implementation of the famous wordwrap kata:

static class TextHelper {
    public static string Wrap(this string text, int columns) => text?
        .GetWords()
        .BreakLongWords(columns)
        .CombineWordsIntoLines(columns)
        .PadLineEndings(columns)
        .Join(Environment.NewLine);

    private static IEnumerable<string> GetWords(this string text);
    private static IEnumerable<string> BreakLongWords(this IEnumerable<string> words, int maxLength);
    private static IEnumerable<string> CombineWordsIntoLines(this IEnumerable<string> words, int columns);
    private static IEnumerable<string> PadLineEndings(this IEnumerable<string> lines, int length);
    private static string Join(this IEnumerable<string> strs, string separator);
}

I left out most of the actual implementation, because the interesting point is how Wrap() defines our pipeline by chaining multiple extension methods. Each method call is a step of the overall process, taking the output of its predecessor as input and feeding its own output into the successor. I think this very cleanly models the problem as a series of independent transformations and expresses the flow of the data while also minimizing noise.

Note how extension methods also allow us to transform multiple chained statements into a single expression. Using statements always carries the risk of introducing subtle dependencies and side effects into the code, while expressions lend themselves to more stateless code, provided, of course, that the expressions themselves are pure and have no side effects.

All this is not a new concept. It works the same way as the well-known builder pattern, which also chains method calls. The builder pattern, however, is restricted to a specific builder type offering chainable methods which encapsulate object creation logic. I'm merely extending the same idea to all kinds of types and all kinds of logic.

Coming back to our Order functions from previous examples, it is easy to make them chainable as well. Just add this at the right places:

public static class Orders {
    public static Order AddItem(this Order o, Item i)
        => new Order(o.OrderedBy, o.Items.Append(i));

    public static Order RemoveItem(this Order o, Item i)
        => new Order(o.OrderedBy, o.Items.Where(x => x != i));
}
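
Now calls against an Order chain naturally (order, newItem, and oldItem are hypothetical variables):

var updated = order
    .AddItem(newItem)
    .RemoveItem(oldItem);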

The impure/pure/impure sandwich

The "impure/pure/impure sandwich" is a term coined by Mark Seemann. The underlying idea is that all impure code should live on the boundaries of a system while the core business logic is only comprised of pure functions.

You might have heard of OOP architecture patterns that promote a similar separation. The onion architecture, for instance, requires the core business logic to be decoupled from infrastructure concerns like the UI or the database. This is achieved via the dependency inversion principle and might look similar to this business service:

public class OrderService : IOrderService {
    private IOrderRepo repo;
    public OrderService(IOrderRepo repo) => this.repo = repo;
    public void AddItem(Order o, Item i) {
        o.Items.Add(i);
        repo.Update(o);
    }
}

The OrderService and the IOrderRepo interface will both be part of a business logic assembly. The implementation for IOrderRepo will live in an infrastructure assembly that depends on the business logic, but not the other way around. In OOP this might be considered clean, but it is not when applying FP standards, because AddItem() is impure: apart from modifying its input and depending on an instance member, AddItem() also triggers a write to the database.

The trick of the impure/pure/impure sandwich is to move the database update out of AddItem() and up to the caller. In FP it is the caller's responsibility (often the entry point of the application or some other kind of "glue" layer) to do the impure busy work like pre-fetching additional input data and storing results so that the business logic can do its work in a pure fashion.

Here is a simple example from the exception message translator mentioned earlier:

let translate targetLang sourceLang exceptionMessage =
    getMessageResources sourceLang    // impure: loads all source messages
    |> Seq.map toMatchCandidate       // pure: prepares them for matching
    |> findBestMatch exceptionMessage // pure: matching
    |> getMessage targetLang          // impure: loads matched target message

All the impure stuff is done at the beginning and at the end of the pipeline, but not in between. Or, as Mark Seemann so tastefully put it:

"It's like a sandwich, with the best parts in the middle, and some necessary stuff surrounding it."
-- Mark Seemann

Back to the roots

If this looks familiar, then that's probably because the impure/pure/impure sandwich is nothing else but the input-process-output (IPO) model applied to the structure of source code. IPO is the most basic concept for describing software and we were all taught it in school, training, or university. Yet many of our OOP programs tangle and interweave the input, process, and output steps as if we never knew any better. And OOP itself doesn't offer any means to enforce that separation either. I think this is because OOP architectures focus so much on control flow and dependency directions that developers forget the importance of data flow.

Let's see how the impure/pure/impure sandwich could look in C#. First we will define a static class that can load and store the Orders from previous examples:

public static class OrderStore {
    public static Order LoadOrder(string user, int orderId) { ... }
    public static void SaveOrder(this Order o) { ... }
}

In an entry point like a web API controller action that adds items to an order, we would use it like so:

using static OrderStore;

public class OrderItemsController : ControllerBase {
    [HttpPost("/api/users/{user}/orders/{orderId}/items")]
    public void AddItem(string user, int orderId, Item item) =>
        LoadOrder(user, orderId) // impure
        .AddItem(item)           // pure
        .SaveOrder();            // impure
}

This is all still very basic: the added item is not validated and when the order does not exist, the request will fail with an internal server error. We will see later how FP's either monad can solve those problems in a functional way. For now we only note how this controller action keeps pure and impure code separated.

Disclaimer
Attentive readers might have noticed that I'm breaking some of the rules I proposed in section "Pure functions in C#"; namely rule 2 ("Impure functions must be non-static") and rule 3 ("Classes having impure methods must be injected via DI").
Here I'm dropping them on purpose and in favor of the pipeline style, which only works with static extension methods. Remember, that while those rules maximize safety, they are not carved in stone and might be relaxed in teams accustomed to the kind of separation they promote.

Killing two stones with one bird

OOP aficionados will probably point out that the OrderStore dependency is not injected, but used statically instead. Although we lose the ability to unit test the controller this way, we also shed an additional interface, an additional instance with potential state, an additional field, and an additional registration in our DI container. I argue that, unless we really need to unit test the entry point, we should avoid decoupling dependencies that don't need to be decoupled. Remember that those dependencies are only meant to be called from the boundaries and entry points of our system, where they have to be wired up anyway. They are not allowed to be called deep within our business logic, where the decoupling would only be necessary in order to develop independently from the infrastructure.

So by moving impure code from the business logic as close as possible to the entry point, we actually fix two OOP fallacies with one functional principle: the business logic becomes pure and thus easier to understand, and we also get rid of the complexity introduced by decoupled dependencies.

When the sandwich won't cut it

Moving impure dependencies out of your business logic might not always work. Database reads that depend on conditions in your business logic, for instance, could give you a hard time. Often you can get away with pre-fetching the potentially superfluous data anyway, even if the business logic won't actually always need it. When faced with conditional database writes, it might be an option to make the decision inside the business logic, but return the result of that decision to the entry point. The entry point can then check whether the database write needs to be done or not.
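
Here is a minimal sketch of that second idea. The AddFreebie() rule, its "more than 10 items" condition, and the freebie item are made up for illustration; the point is that the pure business logic returns the new order together with its decision, and only the entry point performs the conditional write.

public static class Orders {
    // Pure: decides whether the freebie applies and returns the
    // (possibly unchanged) order plus the decision itself.
    public static (Order Order, bool HasChanged) AddFreebie(this Order o, Item freebie)
        => o.Items.Count() > 10
            ? (o.AddItem(freebie), true)
            : (o, false);
}

// Entry point: only the impure write depends on the decision.
[HttpPost("/api/users/{user}/orders/{orderId}/freebie")]
public void AddFreebie(string user, int orderId, Item freebie) {
    var (order, hasChanged) = LoadOrder(user, orderId).AddFreebie(freebie);
    if (hasChanged) order.SaveOrder();
}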

In some rare cases database reads depend on other database reads, or database writes happen so often that moving them up to the entry point is not an option. The good news is that we are still using C# and we can fall back to leaving parts of the business logic impure. The same is true for F#, by the way. However, I want to emphasize that this should be an exception and we should always strive for pure business logic.

How does Haskell deal with this, though? Haskell is a pure functional language where the compiler prevents you from calling impure functions inside pure functions. Haskell's secret is the IO monad which wraps impure code in a pure way. But more on that at a later time.

Pipelines and async functions

Dependencies like databases usually have asynchronous C# interfaces nowadays. In the real world our OrderStore probably would return a Task from most of its operations and might look something like this:

public static class OrderStore {
    public static async Task<Order> LoadOrder(string user, int orderId)  { ... }
    public static async Task SaveOrder(this Order o) { ... }
}

While C#'s async/await feature is fantastic, it tends to cause disarray in method chains. In order to deal with this, you either have to nest the calls (note the two awaits):

[HttpPost("/api/users/{user}/orders/{orderId}/items")]
public async Task AddItem(string user, int orderId, Item item) => await
    (await LoadOrder(user, orderId))
    .AddItem(item)
    .SaveOrder();

...or you have to split your method chain into multiple statements. Both options defeat the purpose of the pipeline style, so we'll need a better way.

Luckily we can easily create asynchronous overloads for the affected functions and effectively pass Task objects through our chain along with the data so that the calling code has to await only once:

public static class Orders {
    public static Order AddItem(this Order o, Item i)
        => new Order(o.OrderedBy, o.Items.Append(i));

    public static async Task<Order> AddItem(this Task<Order> o, Item i)
        => (await o).AddItem(i);
}
public static class OrderStore {
    public static async Task<Order> LoadOrder(string user, int orderId)  { ... }
    public static async Task SaveOrder(this Order o) { ... }
    public static async Task SaveOrder(this Task<Order> o)
        => await (await o).SaveOrder();
}

The additional code is minimal, but lets us work with Orders fluently in both synchronous and asynchronous contexts:

[HttpPost("/api/users/{user}/orders/{orderId}/items")]
public async Task AddItem(string user, int orderId, Item item) => await
    LoadOrder(user, orderId)
    .AddItem(item)
    .SaveOrder();

Pattern matching

Pattern matching is not a functional principle, but a very powerful feature that many functional languages offer. In F# for example it is used in match expressions to implement branching logic by mapping a value to an expression:

let rec factorial n =
    match n with
    | 0 -> 1
    | _ -> n * factorial (n-1)

This recursive function factorial takes its parameter n and matches it against two possible cases. Each case starts with the pipe symbol | followed by a pattern which in turn is followed by -> and the result expression. In the above example n is first compared to the constant pattern 0. When it doesn't match, it is compared to the next one which is the wildcard pattern _ that always matches.

This looks pretty similar to C#'s switch statement:

int Factorial(int n) {
    switch (n) {
        case 0:  return 1;
        default: return n * Factorial(n - 1);
    }
}

The main difference is that a switch statement does not return a value itself. It can only execute the statements of the matched case, which makes it a bit inflexible in comparison. To fix this C# 8.0 introduced the switch expression:

int Factorial(int n) => n switch {
    0 => 1,
    _ => n * Factorial(n - 1)
};

The syntax looks a bit odd, but it can be used in any place where an expression is expected.
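
For illustration (assuming some int n is in scope), the whole switch expression can for example be passed directly as a method argument, which a switch statement could not:

// the switch expression itself is the argument to WriteLine
Console.WriteLine(n switch {
    0 => "zero",
    1 => "one",
    _ => "many"
});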

I imagine that you are not too impressed yet. Especially considering that the above Factorial() function could be rewritten to a one-liner which also handles negative inputs correctly:

int Factorial(int n) => n <= 0 ? 1 : n * Factorial(n - 1);

But wasn't pattern matching supposed to be powerful? Hold onto your hats, because you haven't seen the true power of patterns yet!

Deconstructing patterns

So far I have only shown you the constant pattern 0 and the wildcard pattern _. But patterns can also deconstruct a value. This has nothing to do with destructors. Instead, you can think of deconstructing a value with a pattern as a way to check with which constructor the value was created. In case the pattern matches, any constructor arguments are bound to the variable names declared in the pattern.

For instance, the constructor for an empty F# list is []. The prepend operator :: is also a list constructor that creates a new list from a new head element and a tail list:

let xs' = []
let x = 0
let xs = x :: xs' // same as `let xs = [0]`

Knowing these constructors we can write patterns in F# that check whether a given list is empty or deconstruct it into its head and tail:

let headOrDefault def xs =
    match xs with
    | []     -> def
    | x::xs' -> x

The function headOrDefault does exactly that: when xs is empty, the first pattern [] will match and the default value def is returned. But when xs has one or more elements then the first element will be bound to x while the rest of the list will be bound to xs'. However, since we only care about the list head, we should use the wildcard pattern in place of xs' to signal the intent that we are discarding it:

let headOrDefault def xs =
    match xs with
    | []   -> def
    | x::_ -> x

Patterns are recursive

The way we used the wildcard pattern _ as part of another pattern in the last F# example above suggests that there is still more to patterns. And of course there is! As a matter of fact patterns can be nested. This allows us, for example, to also extract two elements from the front of a list:

let first2 xs =
    match xs with
    | []      -> failwith "empty list"
    | [_]     -> failwith "only one element in list"
    | x::y::_ -> (x, y)

In the above F# function first2 the patterns of the first two match cases handle lists with no or one element respectively. The pattern of the third case first deconstructs a list with two or more elements into its head and tail using ::, but also deconstructs that tail again, so that we can get to its head as well.

We can also merge the first two cases, because they essentially have the same outcome:

let first2 xs =
    match xs with
    | [] | [_] -> failwith "not enough elements"
    | x::y::_  -> (x, y)

Patterns can deconstruct custom types

Patterns can not only deconstruct the well-known standard types, but also custom ones. In the F# example below we define the type Point2D which can either be a Cartesian point with Cartesian coordinates or a Polar point specified with polar coordinates:

type CartesianCoord = { X: float; Y: float }

type PolarCoord = { Radius: float; Angle: float }

type Point2D =
    | Cartesian of CartesianCoord
    | Polar of PolarCoord

By the way: the types CartesianCoord and PolarCoord are called records. Records are like C# classes with immutable properties. The type Point2D is a discriminated union (DU). DUs are a bit like class hierarchies that cannot be extended. So Point2D could be considered an abstract base class extended by only Cartesian and Polar.
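
For comparison, here is a rough (and only approximate) C# analogy using C# 9.0 records; unlike the F# DU, this hierarchy is not actually closed, since nothing prevents someone from adding a third subtype:

// sketch only: C# records mimic F# records, nested records mimic the DU cases
public record CartesianCoord(double X, double Y);
public record PolarCoord(double Radius, double Angle);

public abstract record Point2D {
    public record Cartesian(CartesianCoord Coord) : Point2D;
    public record Polar(PolarCoord Coord) : Point2D;
}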

Anyway, let's define an F# function that takes a Point2D and uses pattern matching to convert it into a PolarCoord:

// val toPolar : pt:Point2D -> PolarCoord
let toPolar pt =
    match pt with
    | Cartesian { X=x; Y=y } ->
        let r = sqrt (x*x + y*y)
        let a = atan2 y x
        { Radius=r; Angle=a }
    | Polar coord -> coord

You can see in the first match case that we not only test for the actual shape of the Point2D, but when it matches we also immediately deconstruct it to get the X and Y of its CartesianCoord. The second match case does not need to deconstruct the Polar, because we can just return its PolarCoord directly.

Testing toPolar confirms that it works as expected:

let p1 = Cartesian { X=3.0; Y=4.0 }
let p2 = Polar { Radius=3.8; Angle=2.7 }
printfn "p1: %A" (toPolar p1)
printfn "p2: %A" (toPolar p2)
p1: {Radius = 5.0; Angle = 0.927295218;}
p2: {Radius = 3.8; Angle = 2.7;}

Patterns in C#

Because pattern matching is such a powerful tool, the C# language team added it in C# 7.0 and has continuously improved it since then.

For instance, we can easily deconstruct tuples. The example below implements the boolean operator XOR and looks just like its truth table:

bool XOR(bool a, bool b) => (a, b) switch {
    (false, false) => false,
    (false, true ) => true,
    (true , false) => true,
    (true , true ) => false
};

What's really neat is that we can even deconstruct objects to access their properties and fields:

public class Point2D { public double X, Y; }
string Describe(Point2D pt) => pt switch {
    { X: 0, Y: 0 } => "Point is at origin.",
    { X: 0, Y: var y } => $"Point is on y axis at {y}.",
    { X: var x, Y: 0 } => $"Point is on x axis at {x}.",
    { X: var x, Y: var y } => $"Point is at ({x}, {y})."
};

We can also do type checks with patterns. This lets us reimplement the toPolar example from above in C#:

public abstract class Point2D {
    public class Cartesian : Point2D { public double X, Y; }
    public class Polar : Point2D { public double Radius, Angle; }
}
public Point2D.Polar ToPolar(Point2D pt) => pt switch {
    Point2D.Cartesian { X: var x, Y: var y } => new Point2D.Polar {
        Radius = Math.Sqrt(x * x + y * y),
        Angle = Math.Atan2(y, x)
    },
    Point2D.Polar p => p,
    _ => throw new NotSupportedException()
};

Note how the first match case uses a nested pattern that tests pt's type and deconstructs it in one go. However, because the compiler cannot know whether there are more implementations of Point2D somewhere else, we have to add a default case. This might change in the future when C# gets discriminated unions too.

Custom deconstructors

Since C# 7.0 we can also provide custom deconstructors for our own types:

public abstract class Point2D {
    public class Cartesian : Point2D {
        public double X, Y;
        public void Deconstruct(out double x, out double y)
            => (x, y) = (X, Y);
    }
    public class Polar : Point2D {
        public double Radius, Angle;
        public void Deconstruct(out double r, out double a)
            => (r, a) = (Radius, Angle);
    }
}

With the above deconstructors our patterns become a bit more concise:

public Point2D.Polar ToPolar(Point2D pt) => pt switch {
    Point2D.Cartesian(var x, var y) => new Point2D.Polar {
        Radius = Math.Sqrt(x * x + y * y),
        Angle  = Math.Atan2(y, x)
    },
    Point2D.Polar p => p,
    _ => throw new NotSupportedException()
};

But we can also implement all kinds of useful helpers. For instance, we can extend IEnumerable<T> with a deconstructor that splits it into its head and tail:

public static class DeconstructExt {
    public static void Deconstruct<T>(
        this IEnumerable<T> source,
        out T head,
        out IEnumerable<T> tail)
        => (head, tail) = (source.First(), source.Skip(1));
}

The usage of our deconstructor looks like this:

var xs = new[] { 1, 2, 3 };
var (head, tail) = xs;
Debug.Assert(head == 1);
Debug.Assert(tail.Count() == 2);

Of course, we can nest that pattern to get not only the first, but also the second element:

var xs = new[] { 1, 2, 3 };
var (x, (y, tail)) = xs;
Debug.Assert(x == 1);
Debug.Assert(y == 2);
Debug.Assert(tail.Count() == 1);

Or how about a deconstructor that splits a decimal into its integer and fractional parts?

public static class DecimalDeconstructExt {
    public static void Deconstruct(
        this decimal n,
        out int integer,
        out decimal fraction) {
        integer = (int)n;
        fraction = Math.Abs(n - integer);
    }
}

// ...

var (intgr, fract) = -2.3m;
Debug.Assert(intgr == -2);
Debug.Assert(fract == 0.3m);

I think custom deconstructors are an overlooked C# feature that can provide much convenience not only in switch expressions, but when working with variables in general.

Pattern matching in combination with switch expressions allows for concise branching logic and lends itself towards a more expression-based style.

Make null explicit

One of the most annoying things about C# probably is the NullReferenceException. It pops up in unexpected places and provides virtually no information as to what object turned out to be null-valued. Although newer versions of Visual Studio will tell you exactly that during debugging, this information is not part of the exception message (and probably never will be for security reasons). So good luck finding the culprit when you are only given the stacktrace of a production crash report.

The root cause for this misery is that potentially any reference type can be null-valued. Yet there is no immediate way of knowing whether an object variable could be set to null intentionally. For instance, the method Order FindOrder(int id) might return null when it can't find the specified order, but it could just as well never return null and throw an exception instead. On top of that it might even accidentally return null due to a bug! You can never know for certain until you dig through its implementation.

The consequences of this ambiguity are either an overly defensive programming style that double checks every reference for null and litters the code with if branches, or an overly relaxed "let it crash" attitude that leads to tons of exceptions in production.

"I call it my billion-dollar mistake. It was the invention of the null reference in 1965. At that time, I was designing the first comprehensive type system for references in an object oriented language (ALGOL W). My goal was to ensure that all use of references should be absolutely safe, with checking performed automatically by the compiler. But I couldn't resist the temptation to put in a null reference, simply because it was so easy to implement. This has led to innumerable errors, vulnerabilities, and system crashes, which have probably caused a billion dollars of pain and damage in the last forty years." -- Tony Hoare

Functional programmers were always too lazy to bother with such shenanigans and thus banned null entirely.

Be a maybe!

Ok, they didn't ban it entirely. They just make their functions more honest by encoding failure cases in the return type. For example, instead of returning an Order directly, FindOrder() could also return a MaybeOrder that looks like this:

public class MaybeOrder {
    public Order Order { get; }
    public bool HasOrder => Order != null;
    private MaybeOrder(Order o) => Order = o;
    public static MaybeOrder Some(Order o) => new MaybeOrder(o);
    public static MaybeOrder None() => new MaybeOrder(null);
}

Now callers immediately know they might not get a valid result just by reading the type name and can easily check whether FindOrder() was successful or not with the HasOrder property. The private constructor along with the two static helper functions Some(Order) and None() ensure that we cannot create invalid MaybeOrders.
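
A hypothetical call site (assuming FindOrder() now returns a MaybeOrder) could look like this:

var maybeOrder = FindOrder(42); // hypothetical order id

if (maybeOrder.HasOrder)
    Console.WriteLine($"Order was placed by {maybeOrder.Order.OrderedBy}.");
else
    Console.WriteLine("No such order.");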

Obviously this is still not an ideal solution, because it clutters the code base with lots of Maybe* types. Haskell generalizes the idea with a generic type called Maybe. The F# equivalent is called Option.

A Maybe/Option is a box that either contains a value or doesn't, and you can't know until you peek inside. It's like the Schrödinger's cat paradox except that it won't make you commit potential felinicide.

In order to create an F# Option you use one of its two constructors:

let something = Some "Thing"
let nothing = None

In the above F# example something will have the type string option and nothing will be a 'a option (its generic type 'a is not known yet and will be inferred later when it's used). Note that both Option<T> and 'a option mean the same thing and that I will use them interchangeably. The former is simply the C#/CLR notation while the latter is the F# notation.

It is not hard to implement a type similar to F#'s Option in C#, so let's just go ahead and do that.

Be a sharp maybe!

At its core our Option type simply wraps a value of type T that might or might not be present. So at a bare minimum it must have a field or property for that value and another field/property to check if the value actually is there. Here's what that might look like:

public readonly struct Option<T> {
    internal T Value { get; }
    public bool IsSome { get; }
    public bool IsNone => !IsSome;

    public Option(T value) {
        Value = value ?? throw new ArgumentNullException(nameof(value));
        IsSome = true;
    }

    public override string ToString() => IsSome ? $"Some({Value})" : "None";
}

I opted for a readonly struct, because our Option should be immutable and not cause unnecessary heap allocations. But of course you can implement it as a class, if that suits you better. The OOP way would probably be to create an abstract Option base class and derive the concrete classes Some and None from it. But consider that such a class Option can be three possible things: a Some, a None, and null, which obviously makes handling it rather awkward. A struct Option, however, can never be null. Its default is the "none" case, because IsSome is default-initialized to false.

In order to create our Option we either use the constructor that takes a value or the parameterless default constructor that structs always have.

var something = new Option<string>("Thing");
var nothing = new Option<string>();
Option<int> alsoNothing = default;

We can improve the creation for the "some" case by giving our Option an implicit conversion from T to Option<T>:

public readonly struct Option<T> {
    // ...
    public static implicit operator Option<T>(T value) => new Option<T>(value);
}

This allows us to simplify the creation when T can be inferred by the compiler:

Option<string> something = "Thing";

// ...

public static class IEnumerableExtensions {
    // Replacement for `FirstOrDefault` that returns an empty
    // `Option` instead of `default` when `xs` is empty.
    public static Option<T> FirstOrNone<T>(this IEnumerable<T> xs)
        => xs.Any() ? xs.First() : new Option<T>();
}

We can also make the "none" case more explicit, by introducing some static helper functions to create Options:

public static class Option {
    public static Option<T> Some<T>(T value) => value;
    public static Option<T> None<T>() => default; // same as `new Option<T>()`
}

// ...

var something = Option.Some("Thing");
var nothing = Option.None<string>();

Note that they are not defined on the Option struct directly, but rather on a separate static class. That's why we could omit specifying T during the construction of something above.

"But how do we get a value out of an Option?", I hear you ask. Well, thanks for paying attention :-) You probably noticed that the Option.Value property is internal, so how are clients supposed to access it? The answer is, not directly, because that would be unsafe.

Consider the following Option value:

var nothing = Option.None<int>();

Because int is a value type it will default to 0 in the "none" case. To ensure clients don't end up with unexpected values when they forget to check IsSome, we will extend our Option type with other, safer means to unwrap the internal value.

Anyway, armed with this type we can now improve the signature of the method from our initial example: Option<Order> FindOrder(int id). You might fear that you are now forced to litter your code with lots of if conditions that check IsSome/IsNone. But fortunately there are functional solutions for working with Maybe-like types that we are going to learn about soon. First we have to understand how to work with F#'s Option type, though.
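
Just to sketch what such a signature change could look like (assuming a hypothetical in-memory orders collection and an Id property that the earlier Order examples didn't have), FindOrder() could build on the FirstOrNone() helper from above:

// sketch: returns an empty Option instead of null when nothing matches
public Option<Order> FindOrder(int id)
    => orders.Where(o => o.Id == id).FirstOrNone();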

How to un-schrödinger the cat

So how can we check the content of an F# Option? For one, we could just force it open and try retrieving the content using its Value property:

let something = Some "foo"
printf "%s" something.Value // prints "foo"

let nothing = None
printf "%s" nothing.Value   // throws a NullReferenceException

However, this might crash with an ugly NullReferenceException. The smarter approach is to always also handle the case when the Option is empty. To that end we can either use an if structure with one of the two Option properties IsSome or IsNone, or we can deconstruct the Option using pattern matching. If you haven't noticed it by now: in FP, the concept which uses the most fancy words is always the idiomatic one, so let's try pattern matching:

let present = Some "PlayStation"
let reaction = match present with
               | Some "N64" -> "Nintendo Sixty-FOOOUR! OH MY GOOOD!!!1!"
               | Some p     -> sprintf "I got a %s!" p
               | None       -> "It's empty :'("

The match expression has three cases: one for a Some that contains "N64", one for all other Some values, and a third for the None. All together they exhaustively map an Option to a message with the additional special case of getting way too excited.

In C# we can achieve the same for our Option type using a switch expression. But first we have to extend it with a function to safely retrieve its value.

There are multiple ways we could do this, but the one that works best with switch expressions is to turn our Option into a nullable value:

public static class ToNullableExt {
    public static T? ToNullable<T>(this in Option<T> opt)
        where T: class
        => opt.IsSome ? opt.Value : null;
}

Note
The above function requires the C# 8.0 feature "nullable reference types" to be enabled. But you can leave it disabled if you remove the ? suffix from the return type in the signature.

ToNullable() will simply check if the given Option contains a value and return it. When the Option is empty, ToNullable() will return null. However, due to C#'s awkward dichotomy between value and reference types, we have to implement this twice.

Note the type parameter constraint where T: class. This tells the C# compiler that this function is only available for Options that contain a reference type. We need a second function for Options that contain value types like int by constraining T to structs:

public static class ToNullableExt2 {
    public static T? ToNullable<T>(this in Option<T> opt)
        where T: struct // this is the only visible difference
        => opt.IsSome ? opt.Value : null;
}

The reason for this is that the return type of ToNullable() will actually be different in each case. Even though they both look the same (T?), they will be desugared to T (reference types) and Nullable<T> (value types) by the compiler. If we only defined one function without any type parameter constraints then ToNullable() wouldn't be able to wrap value types with Nullable<T>. This in turn would cause ToNullable() to return the value type's default for an empty Option instead of null (e.g. an empty Option<int> would result in 0).

And yet one more oddity: we have to place the second function into a separate extension class, because both variants only differ by their (desugared) return type. The C# compiler wouldn't be able to tell them apart if we placed them in the same class.

But with all that C# shenanigans out of the way, we can finally switch over our Option:

var present = Option.Some("PlayStation");
var reaction = present.ToNullable() switch {
    "N64"    => "Nintendo Sixty-FOOOUR! OH MY GOOOD!!!1!",
    string p => $"I got a {p}!",
    null     => "It's empty :'("
};

Note:
When pattern-matching a nullable type T? it's important to use T x instead of var x as the pattern for the "value case", because the former won't match null, but the latter will.
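
A small hypothetical example of the difference:

string? maybe = null;

// Safe: `string p` does not match null, so the null arm handles it.
var a = maybe switch {
    string p => $"got {p}",
    null     => "empty"
};

// Risky: `var p` matches null too, so `p.Length` throws at runtime here.
var b = maybe switch {
    var p => $"length: {p.Length}"
};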

Using a switch expression is really only needed if you want to match constant patterns like the "N64" case above. For simpler scenarios we can provide another extension method that takes two lambdas representing the "some" and "none" case:

public static class SwitchExt {
    public static B Switch<A, B>(this in Option<A> opt, Func<A, B> some, Func<B> none)
        => opt.IsSome ? some(opt.Value) : none();
}

Its usage looks as follows:

var present = Option.None<string>();
var reaction = present.Switch(
    some: p  => $"I got a {p}!",
    none: () => "It's empty :'(");

Note
Both ToNullable() and Switch() use the in modifier on the input Option. This is just a slight performance optimization that tells the compiler to provide the parameter as a readonly reference instead of copying it (which is the default for all value types). Since Option is a readonly struct this can safely be done without causing defensive copies.

Yet another way is to define an extension method on Option that returns its content or a specified fallback value in case it's empty:

public static class GetValueOrExt {
    public static T GetValueOr<T>(this in Option<T> opt, T fallback)
        => opt.IsSome ? opt.Value : fallback;

    public static T GetValueOr<T>(this in Option<T> opt, Func<T> fallback)
        => opt.IsSome ? opt.Value : fallback();
}

The additional overload taking a lambda is meant to be used in case the computation of the fallback value is expensive, but it can also be used to throw exceptions in case the Option is empty:

int one = Option.Some(1).GetValueOr(42);

string foo = Option.None<string>().GetValueOr("foo");

int lazyFallback = Option.None<int>().GetValueOr(() => DoExpensiveCalculation());

int throws = Option.None<int>().GetValueOr(() => throw new Exception("oof."));

Chaining the processing of Options

A nice property of Options is that they can carry failure through multiple processing steps. When all steps can handle Option inputs and return Options we only have to handle a failure once at the end.

For example, imagine that family Who from Whoville prepared Christmas presents under their Christmas tree, but at night the Grinch came and messed with them. Some presents he stole and some he even replaced with empty boxes! In F# we can use a map from a name to an Option to represent the set of presents:

// val presents : Map<string, string option>
let presents = Map.ofList [("Boo Who", Some "candy bar");
                           ("Cindy Lou Who", None); // Cindy's present is empty :(
                           ("Betty Lou Who", Some "toy car")]
                           // Danny's & Annie's presents were stolen :(

An F# Map<K, V> is pretty much the same as a C# Dictionary<K, V>, except that it's immutable. In order to retrieve a value from it we can use the function Map.tryFind which takes a key and a map and returns an Option<V> that will be empty in case the map didn't contain the provided key. Note that, since our map presents already contains Options, the result of tryFind will be an Option<Option<string>>, i.e. an Option wrapped inside another Option.

Now, if we wanted to simulate the reaction of any of the Who kids with code, we could implement a function that takes a name and returns a reaction message:

// val reactionOf : who:string -> string
let reactionOf who =
    // val x : string option option
    let x = Map.tryFind who presents
    match x with
    | None     -> "I got nothing :'("
    | Some box -> match box with
                  | None         -> "I got nothing :'("
                  | Some present -> sprintf "I got a %s :)" present

The function first looks for a box with the given name and if it finds one, it opens the box to see what's inside. But notice how we have to handle both failure cases (no box at all or an empty box) separately although the Who kids sure don't care about such specifics. All that matters is they might not get a present!

Knowing this we can simply turn a missing box into an empty box and let the last step produce the failure:

let reactionOf who =
    let box = match Map.tryFind who presents with
              | Some box -> box
              | None     -> None
    match box with
    | Some present -> sprintf "I got a %s :)" present
    | None         -> "I got nothing :'("

This change may seem trivial, but we will later see how this approach helps to organize code more clearly. For now, we just notice how we got rid of duplication and nesting.

A more realistic scenario

Unpacking presents is fun and all (unless you're in the Who family), but it's hardly a realistic scenario. Let's check how the Option type holds up in more complex programs.

Imagine we were writing a little command line tool that searches the contents of a directory for a specific word. It has three parameters:

Usage:
  --word STRING       The word to search for (mandatory, non-empty, alphanumeric)
  --recursive BOOL    Toggle recursive search (optional, default: true)
  --first INT         Only return the top n results (optional, default: return all)

Luckily we already have a basic arguments parser that turns an argument string like...

> .\our-search-tool.exe --word test --recursive false --first 3

...into a Map<string, string> that uses the argument names as keys and maps them to the raw argument strings. Our job is now to pre-process these arguments so we can pass them to our actual search function:

// val search : string -> bool -> int option -> string list
let search word recursive first = ...

Let's first create a shell of our program with comments explaining what we want to implement:

// `args` was actually provided by our arguments parser.
let args = Map.ofList [("word", "test"); ("recursive", "false"); ("first", "3")]

// removes non-word characters
let sanitize s = Regex.Replace(s, @"\W", "")

let word =
    // Try find `"word"` in `args` (fail if missing).
    // Remove special characters with `sanitize` (never fails).
    // Fail if empty, otherwise we're done.

let recursive =
    // Try find `"recursive"` in `args` (don't fail if missing).
    // Parse as `bool` (can fail, but should be suppressed).
    // Return result or default `true` in case previous steps failed.

let first =
    // Try find `"first"` in `args` (don't fail if missing).
    // Parse as `int` (can fail, but should be suppressed).
    // Return result as option and `None` in case previous steps failed.

// Initiate actual search with the arguments `word` (a `string`),
// `recursive` (a `bool`), and `first` (an `int option`).
search word recursive first

Now let's implement the first part, where we want to get the value of word. Translating those comments directly we might come up with the following code:

let word =
    let wordArg = match Map.tryFind "word" args with
                  | Some s -> s
                  | None   -> failwith "Argument '--word' is required."
    let sanitizedWord = sanitize wordArg
    match sanitizedWord with
    | "" -> failwith "Argument '--word' cannot be empty (special characters are ignored)."
    | _  -> sanitizedWord

However, we learned earlier that, when working with Options, it is helpful to forward failures. I still haven't told you why, but we will get there soon. So let's restructure the implementation a bit:

let word =
    let wordArg = Map.tryFind "word" args
    let sanitizedWord = match wordArg with
                        | Some s -> Some (sanitize s)
                        | None   -> None
    match sanitizedWord with
    | Some "" | None -> failwith "Argument '--word' is required (non-empty, alphanumeric)"
    | Some s         -> s

You might be thinking that this implementation doesn't look elegant at all and that you probably could come up with a simpler structure using regular if expressions. You are absolutely right! The unpacking and repacking of Options is awkward and obfuscates the interesting parts, i.e. Map.tryFind "word" args, sanitize s, Some "", and failwith "...".

Of course, I wouldn't be writing all this if there wasn't a nicer, more functional solution to the above problem. And indeed, F#'s Option module comes with lots of helper functions that do all the unpacking and repacking of Options for us. The ones we need are Option.map, Option.filter, and Option.getOrFail:

let word =
    Map.tryFind "word" args
    |> Option.map sanitize
    |> Option.filter (fun s -> s <> "")
    |> Option.getOrFail "Argument '--word' is required (non-empty, alphanumeric)"

Note
The function Option.getOrFail is from the nuget package FSharpX.Extras.

Now you see why I insisted on forwarding Nones instead of handling them on the spot. It allows us to use the pipeline style! This sure looks a lot better, but how does it work? Let's understand each of those new functions in detail by implementing them for our C# Option.

Filtering Option (F#)

We start with Option.filter, because it's the simplest. It has the following signature:

val filter : predicate:('a -> bool) -> o:'a option -> 'a option

Option.filter takes a bool-returning function named predicate and a 'a option named o. When called, it first unpacks the 'a option and checks if it is None. If so, it just returns it directly. Otherwise it tests whether the 'a value satisfies the predicate and turns the value into a None in case it doesn't.

In our case Option.filter checks whether the --word argument has any word characters left after sanitization. If not, the argument will be invalidated by turning it into a None. Of course, if there was no --word argument to begin with, it will just forward the None it was given.

I think that's pretty handy! So how can we use it in C#?

Filtering Option (C#)

In C#, Filter() is an extension method on Option<T> that takes a Func<T, bool> to filter out unwanted values:

public static class FilterExt {
    public static Option<T> Filter<T>(this in Option<T> opt, Func<T, bool> predicate)
        => opt.IsSome && predicate(opt.Value) ? opt : Option.None<T>();
}

We use it like this:

Option<string> numStr = GetUserInput();

Option<string> filteredNumStr = numStr.Filter(s => s.All(char.IsDigit));

int num = filteredNumStr.Switch(
    some: s  => int.Parse(s),
    none: () => throw new Exception("No input given or input was not numeric"));

Not too spectacular on its own, but quite useful when building a method chain to process an Option:

int num = GetUserInput()
    .Filter(s => s.All(char.IsDigit))
    .Switch(
        some: s  => int.Parse(s),
        none: () => throw new Exception("No input given or input was not numeric"));

Mapping Option (F#)

Next up is Option.map which has signature that's only a little more complicated:

val map : mapping:('a -> 'b) -> o:'a option -> 'b option

It takes a mapping function from 'a to 'b and an 'a option. After unpacking the 'a option it will apply the mapping function to the contained 'a value and create a new 'b option from the mapping result. In case the 'a option actually was a None it will just return a None of type 'b option.

In our case Option.map unpacks the result of Map.tryFind, and if the word argument was found, it will be mapped with the function sanitize. In case the --word argument was not provided, Option.map will simply return None.

Again, let's see how we can make use of it in C#.

Mapping Option (C#)

Our C# implementation of Option.map looks similar to Filter(), except that two generic types are needed, because the output type of the mapping can be different from the input type:

public static class MapExt {
    public static Option<B> Map<A, B>(this in Option<A> opt, Func<A, B> mapping)
        => opt.IsSome ? mapping(opt.Value) : Option.None<B>();
}

Map() can return the B result of mapping() directly without using Option.Some(), due to the implicit conversion from T to Option<T> we defined earlier.

Once we have it, we can build compact processing chains that handle "none" cases implicitly:

int num = GetUserInput()
    .Filter(s => s.All(char.IsDigit))
    .Map(int.Parse)
    .Switch(
        some: n  => n,
        none: () => throw new Exception("No input given or input was not numeric"));

Notice how we moved int.Parse() from the Switch() into a separate step using Map(). This allows us to rewrite the above to a slightly shorter version using ToNullable():

int num = GetUserInput()
    .Filter(s => s.All(char.IsDigit))
    .Map(int.Parse)
    .ToNullable()
    ?? throw new Exception("No input given or input was not numeric");

Alternatively, we can use GetValueOr() to simplify it even more:

int num = GetUserInput()
    .Filter(s => s.All(char.IsDigit))
    .Map(int.Parse)
    .GetValueOr(() => throw new Exception("No input given or input was not numeric"));

But back to our little CLI search program. We still have work to do there.

The second argument of our search program

So we implemented the processing of the first argument and can place that into our program shell:

let args = Map.ofList [("word", "test"); ("recursive", "false"); ("first", "3")]
let sanitize s = Regex.Replace(s, @"\W", "")

let word =
    Map.tryFind "word" args
    |> Option.map sanitize
    |> Option.filter (fun s -> s <> "")
    |> Option.getOrFail "Argument '--word' is required (non-empty, alphanumeric)"

let recursive =
    // Try find `"recursive"` in `args` (don't fail if missing).
    // Parse as `bool` (can fail, but should be suppressed).
    // Return result or default `true` in case previous steps failed.

let first =
    // Try find `"first"` in `args` (don't fail if missing).
    // Parse as `int` (can fail, but should be suppressed).
    // Return result as option and `None` in case previous steps failed.

search word recursive first

For the second argument recursive we need to parse a string into a bool without throwing an exception. The nuget package FSharpX.Extras offers the function Boolean.parse exactly for that. It has the following signature:

val parse : string -> bool option

As you can see it takes a string and returns an Option of bool, meaning the result will be None in case the string couldn't be parsed. Your first instinct might be to use it with Option.map:

let recursive =
    Map.tryFind "recursive" args
    |> Option.map Boolean.parse
    // Return result or default `true` in case previous steps failed.

But when you look closely, you can see how the signature of the expected mapping function for Option.map does not exactly line up with the signature Boolean.parse:

val map : mapping:('a -> 'b) -> o:'a option -> 'b option

When we replace the generic parameter 'a with string and 'b with the return type of Boolean.parse (bool option) we end up with the following concrete signature:

val map : mapping:(string -> bool option) -> o:string option -> bool option option

It returns an Option of Option of bool. That seems a bit awkward, because we'd have to unpack that result twice.

Binding Option (F#)

Luckily there is another function similar to Option.map. It's called Option.bind:

val bind : binder:('a -> 'b option) -> o:'a option -> 'b option

It expects an Option-returning function called binder and an Option. It returns, however, not a nested Option, but a "flattened" Option that we have to unpack only once. Because of its similarity to map and its flattening behavior, bind is sometimes also referred to as flatMap. Anyway, let's put it in place and finish processing of the second argument:

let recursive =
    Map.tryFind "recursive" args
    |> Option.bind Boolean.parse
    |> Option.getOrElse true

We use Option.getOrElse for the last step (also from FSharpX.Extras), which will return the value inside the Option if there is one, or the specified default value otherwise.

Binding Option (C#)

The implementation of bind for C#'s Option looks almost exactly the same as Map(), except that the function passed to it (binder) has to return an Option<B> instead of just B:

public static class BindExt {
    public static Option<B> Bind<A, B>(this in Option<A> opt, Func<A, Option<B>> binder)
        => opt.IsSome ? binder(opt.Value) : Option.None<B>();
}

Building on the earlier C# example for Filter() and Map()...

int num = GetUserInput()
    .Filter(s => s.All(char.IsDigit))
    .Map(int.Parse)
    .GetValueOr(() => throw new Exception("No input given or input was not numeric"));

...we see that we can replace both of them with Bind() if we write ourselves a little helper function that tries to parse strings:

Option<int> ParseInt(string s) => int.TryParse(s, out var i) ? i : Option.None<int>();

int num = GetUserInput()
    .Bind(ParseInt)
    .GetValueOr(() => throw new Exception("No input given or input was not numeric"));

Completing our search program

Now that we have learned about Option.bind, we know how to implement the last step of the argument processing, where we need to prepare the --first argument for our search function:

let args = Map.ofList [("word", "test"); ("recursive", "false"); ("first", "3")]
let sanitize s = Regex.Replace(s, @"\W", "")

let word =
    Map.tryFind "word" args
    |> Option.map sanitize
    |> Option.filter (fun s -> s <> "")
    |> Option.getOrFail "Argument '--word' is required (non-empty, alphanumeric)"

let recursive =
    Map.tryFind "recursive" args
    |> Option.bind Boolean.parse
    |> Option.getOrElse true

let first =
    Map.tryFind "first" args
    |> Option.bind Int32.parse

search word recursive first

Note
The function Int32.parse is from the nuget package FSharpX.Extras.

And there we have it: a very functional solution to our problem without any match or if expressions obscuring the actual application logic. All the matching and unpacking of Options is done for us by the functions of the Option module.

Since we've already implemented most of these functions in C#, we now could port our F# program. Of course, we have to implement a C# version of Map.tryFind first:

public static class DictionaryExt {
    public static Option<V> TryGetValue<K, V>(this IDictionary<K, V> dict, K key)
        => dict.TryGetValue(key, out var v) ? v : Option.None<V>();
}

The same goes for Int32.parse and Boolean.parse. But after investing this little extra effort, the resulting code becomes very readable and compact:

var args = new Dictionary<string, string> {
    ["word"] = "test", ["recursive"] = "false", ["first"] = "3"
};

string Sanitize(string s) => Regex.Replace(s, @"\W", "");
Option<int> ParseInt(string s) => int.TryParse(s, out var i) ? i : Option.None<int>();
Option<bool> ParseBool(string s) => bool.TryParse(s, out var b) ? b : Option.None<bool>();

var word = args
    .TryGetValue("word")
    .Map(Sanitize)
    .Filter(s => s != "")
    .GetValueOr(() => throw new Exception("Argument '--word' is required (non-empty, alphanumeric)"));

var recursive = args
    .TryGetValue("recursive")
    .Bind(ParseBool)
    .GetValueOr(true);

var first = args
    .TryGetValue("first")
    .Bind(ParseInt);

Search(word, recursive, first);

Now imagine you had to implement the same logic, but without Option and our Option helper functions. I don't know about you, but trying to sort out the required ifs and nesting levels in my head makes it hurt. Sure it's possible, but do you think it will be as succinct and expressive as the presented solution?

Drawbacks of our solution

One obvious problem for readers who don't know about Option or its extension methods is that they won't immediately understand their meaning. Especially Bind() has a rather unintuitive name. The upside is that those methods' implementations are so trivial that they can be easily learned and understood. And if you're still having trouble understanding Bind(): just remember that you need it when you want to use Map(), but your mapping function returns Option<T> instead of T.
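
To illustrate that rule of thumb once more with the ParseInt() helper from above (a hypothetical snippet): mapping with an Option-returning function nests the Options, while binding flattens them.

Option<string> input = GetUserInput();

Option<Option<int>> nested = input.Map(ParseInt); // two layers to unwrap
Option<int> flat = input.Bind(ParseInt);          // just one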

Another problem is that we lose more and more error information the longer our Option pipeline becomes. When the end of the pipeline for the word argument of our search tool has been reached, we no longer know whether the user failed to provide that argument, or if it was provided but empty, or if it only contained special characters. All those possible failure cases are condensed into one generic exception message: Argument '--word' is required (non-empty, alphanumeric). I argue that this is sufficient in many cases, because the benefits of being able to structure the implementation in terms of an Option pipeline outweigh the costs of not having dedicated error handling for each possible failure. Still, there are a lot of cases where maybe-like types are not sufficient. But for those we have another functional recipe called Either, which I'll introduce in the next section.

We have seen that a basic maybe type is easy to implement. But a bit more work is needed to make it production-ready. For instance, you'll probably want Option to implement IEquatable<Option<T>>. And, because async methods are ubiquitous in dotnet, you'll also want a way to integrate Option with those. Luckily for you, I already did that and published the result as a GitHub Gist that you can copy & paste into your project. It's not a nuget package in order to give you full control over the implementation. So in case you don't like some of the names (e.g. Option should be Maybe, Bind() should be Then(), etc.), you can go ahead and easily fix them.
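
As a taste of what the async integration could look like (a sketch under my own naming assumptions, not necessarily the Gist's exact API), an overload of Map() for Task<Option<A>> lets Options flow through asynchronous pipelines like the ones we built for OrderStore:

public static class AsyncMapExt {
    // Awaits the task first and then maps the contained Option,
    // so callers only have to await once at the end of the chain.
    public static async Task<Option<B>> Map<A, B>(
        this Task<Option<A>> opt, Func<A, B> mapping)
        => (await opt).Map(mapping);
}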

Option vs nullable reference types

In an earlier version of this section I showed how to implement the "maybe concept" not as a new type, but rather on top of C# 8.0's nullable reference types (NRTs). That approach had some advantages, but also some annoying disadvantages I briefly want to explain.

In case you missed the introduction of NRTs, I'll start this comparison with a quick recap on NRTs.

WTF r NRTs?

We all know how we can make value types like int and bool nullable using Nullable<T> (or T? for short). However, Nullable<T> does not work with reference types, because they were always nullable. With C# 8.0 we can now make reference types non-nullable by default unless we explicitly mark them as nullable by appending the suffix ? to the type name. We only have to opt in to NRTs by using the directive #nullable enable or the project property <Nullable>enable</Nullable> (and yes, it's confusing that the feature is not called "non-nullable reference types" instead).

After opting in, the compiler will warn us when we try to dereference an NRT without checking it first:

string? x = null;

Console.WriteLine(x.Length); // CS8602 Dereference of a possibly null reference.

if (x != null)
    Console.WriteLine(x.Length); // no warning

We will also get a warning when we try to assign null to a non-NRT:

string x = null; // CS8600 Converting null literal [...] to non-nullable type.

That's NRTs in a nutshell, but you definitely should read up on them (e.g. here), because they have the potential to liberate us from the dreaded NullReferenceException in many scenarios.

Nullable may be trouble

The main problem with NRTs, even though they improve code quality a lot, is that they are not a real type. They are just syntactic sugar that is removed during compilation and replaced with a Nullable attribute.

For instance, a type like...

class Foo {
    string? Bar;
    string Qux;
}

...will be turned into something like:

class Foo {
    [System.Runtime.CompilerServices.Nullable(2)]
    string Bar;

    [System.Runtime.CompilerServices.Nullable(1)]
    string Qux;
}

The C# team did this to preserve backwards compatibility and make migrating existing codebases to NRTs much, much easier.

Those attributes are enough for the compiler to provide warnings in case client code dereferences Foo.Bar without checking it for null first, but they make things confusing when you define an extension method on string? (or T? where T: class in general).

For instance, let's implement Option.Filter() for NRTs:

public static class NullableExt {
    // turns values that don't satisfy `predicate` into `null`
    public static T? Filter<T>(this T? x, Func<T, bool> predicate)
        where T: class
        => x is not null && predicate(x) ? x : null;
}

And when we use it in a nullable context:

// compiles as expected
var x = ((string?)"foo").Filter(x => x.Length > 0);

// also compiles, but doesn't make much sense
var y = ((string)"foo").Filter(x => x.Length > 0);

Because T? is not a real type when T is a reference type, that method will be available for values of type string? and string. After all, to the compiler they are exactly the same.

For users, however, this is confusing, because it unnecessarily bloats the API surface of every reference type.

For value types like int this won't happen, because int? is turned into Nullable<int>, i.e. a different type, but we have to implement Filter() twice: once with where T: class and once more with where T: struct. Although users won't notice this, the implementation still gets rather messy once you get to Map(): it has to be implemented four times using two different extension classes.
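
To give an idea of the mess, here is a sketch of roughly what those four Map() variants end up looking like (not the package's actual code): they differ only in their constraints, and the conflicting pairs have to be split across two extension classes because constraints alone don't distinguish overloads.

public static class NullableMapExt1 {
    // reference type in, reference type out
    public static B? Map<A, B>(this A? x, Func<A, B> f)
        where A : class where B : class
        => x is null ? null : f(x);

    // value type in, reference type out
    public static B? Map<A, B>(this A? x, Func<A, B> f)
        where A : struct where B : class
        => x is null ? null : f(x.Value);
}

public static class NullableMapExt2 {
    // reference type in, value type out
    public static B? Map<A, B>(this A? x, Func<A, B> f)
        where A : class where B : struct
        => x is null ? (B?)null : f(x);

    // value type in, value type out
    public static B? Map<A, B>(this A? x, Func<A, B> f)
        where A : struct where B : struct
        => x is null ? (B?)null : f(x.Value);
}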

Summary

The nuget package I created for the NRT-based implementation is still available, but I consider it an experiment now. I suggest using a dedicated maybe-like type instead. If you're still not sure which approach is more suited for you, then maybe the bullet lists below will help.

Using NRTs as a maybe-like type:

  • ✔ Succinct: string? is much more readable than Option<string>,
  • ✔ Easy adoption: it's as easy as migrating to NRTs,
  • ❌ API surface bloat: even non-nullable reference types will expose the maybe operators,
  • ❌ Brittle: too easy to use incorrectly,
  • ❌ Messy implementation: pray you never need to change it.

Using a dedicated maybe-like type:

  • ❌ Noisy: Option<string> is less readable than string?,
  • ❌ "Big bang" adoption: making a method return Option affects all call sites,
  • ✔ Clean API surface: only Options will expose maybe operators,
  • ✔ Solid: much harder to use incorrectly,
  • ✔ Clean implementation: (almost) no dirty tricks needed.

🚧 Everything below is unfinished work in progress

Skip to Sources and further materials.

Functional error handling

  • handling failures is one of the most important things when programming robust systems
  • common kinds of error handling:
    • abort (sounds simple, but we still have to consider proper logging to ease debugging of unexpected failures)
    • retry
    • fallback

Return error codes

  • one option: functions return error codes and write their actual results to references provided by the caller via the function's parameters (C style)
namespace QuxLib {
    public enum QuxError { None, DeviceNotFound, UnsupportedFeature, OutdatedFirmware }

    public static class QuxLib {
        // might return `QuxError.DeviceNotFound`
        public static QuxError Connect(out Qux qux) { /* ... */ }

        // might return `QuxError.UnsupportedFeature`
        public static QuxError AccessFooRegulator(Qux qux, out Foo foo) { /* ... */ }

        // can never fail
        public static void AccessFuuRegulator(Qux qux, out Fuu fuu) { /* ... */ }

        // might return `QuxError.OutdatedFirmware`
        public static QuxError ReadFooSchedule(Foo foo, out Plan plan) { /* ... */ }

        // might return `QuxError.OutdatedFirmware`
        public static QuxError WriteFooSchedule(Foo foo, Plan plan) { /* ... */ }

        public static void PrintError(QuxError err, string? msg = null) { /* ... */ }
    }

    public class Foo { /* ... */ }
    public class Fuu : Foo { /* ... */  }
    public class Plan { public TimeSpan Interval; /* ... */  }
}
using System.Threading;
using QuxLib;

var err = QuxError.None;

// retry
Qux qux;
while ((err = QuxLib.Connect(out qux)) != QuxError.None) {
    QuxLib.PrintError(err, "Did you forget to plug in your Qux device? Retrying in 5 seconds...");
    Thread.Sleep(5000);
}

// fallback
if ((err = QuxLib.AccessFooRegulator(qux, out var foo)) != QuxError.None) {
    QuxLib.PrintError(err, "Using Fuu fallback.");
    QuxLib.AccessFuuRegulator(qux, out var fuu);
    foo = fuu;
}

// abort
if ((err = QuxLib.ReadFooSchedule(foo, out var bar)) != QuxError.None) {
    QuxLib.PrintError(err);
    return 1;
}

bar.Interval = TimeSpan.FromHours(1);

// abort
if ((err = QuxLib.WriteFooSchedule(foo, bar)) != QuxError.None) {
    QuxLib.PrintError(err);
    return 1;
}

Console.WriteLine("Done!");
return 0;
  • advantages:
    • explicit: function signature declares possible failure
    • cohesive: errors are handled where they happen
    • flexible: special cases like retry and fallback cause only small increase in complexity
  • disadvantages:
    • boilerplate code for error handling obscures business logic (C: boilerplate typically wrapped with a macro)
    • not very functional, because pure functions must not mutate their parameters

Throw exceptions

  • using exceptions (OOP style)
public class DeviceNotFoundException : Exception { }
public class UnsupportedFeatureException : Exception { }
public class OutdatedFirmwareException : Exception { }

public static class QuxLib {
    // might throw `DeviceNotFoundException`
    public static Qux Connect() { /* ... */ }

    public static void PrintError(Exception ex, string? msg = null) { /* ... */ }
}

public class Qux {
    // might throw `UnsupportedFeatureException`
    public Foo AccessFooRegulator() { /* ... */ }

    // never throws
    public Fuu AccessFuuRegulator() { /* ... */ }
}

public class Foo {
    // might throw `OutdatedFirmwareException`
    public Plan ReadSchedule() { /* ... */ }

    // might throw `OutdatedFirmwareException`
    public void WriteSchedule(Plan plan) { /* ... */ }
}

public class Fuu : Foo { }

public class Plan { public TimeSpan Interval; /* ... */ }
using System.Threading;
using QuxLib;

// retry
Qux qux = null!;
var connected = false;
while (!connected) {
    try
    {
        qux = QuxLib.Connect();
        connected = true;
    } catch (DeviceNotFoundException ex) {
        QuxLib.PrintError(ex, "Did you forget to plug in your Qux device? Retrying in 5 seconds...");
        Thread.Sleep(5000);
    }
}

// fallback
Foo foo;
try {
    foo = qux.AccessFooRegulator();
} catch (UnsupportedFeatureException ex) {
    QuxLib.PrintError(ex, "Using Fuu fallback.");
    foo = qux.AccessFuuRegulator();
}

try {
    var plan = foo.ReadSchedule();         // implicit abort
    plan.Interval = TimeSpan.FromHours(1);
    foo.WriteSchedule(plan);               // implicit abort
    Console.WriteLine("Done!");
    return 0;
} catch (OutdatedFirmwareException ex) {
    QuxLib.PrintError(ex);
    return 1;
}
using System.Threading;
using QuxLib;

try {
    var qux = QuxLib.Connect();            // implicit abort
    var foo = qux.AccessFooRegulator();    // implicit abort
    var plan = foo.ReadSchedule();         // implicit abort
    plan.Interval = TimeSpan.FromHours(1);
    foo.WriteSchedule(plan);               // implicit abort
    Console.WriteLine("Done!");
    return 0;
} catch (DeviceNotFoundException ex) {
    QuxLib.PrintError(ex, "Did you forget to plug in your Qux device?");
    return 1;
} catch (UnsupportedFeatureException ex) {
    QuxLib.PrintError(ex, "No Foo available. Use Fuu fallback instead.");
    return 1;
} catch (OutdatedFirmwareException ex) {
    QuxLib.PrintError(ex);
    return 1;
}
  • advantage:
    • boilerplate-free business logic
      • but only when there is no special failure handling (e.g. retries, fallbacks), because exceptions are not made to steer control flow
  • disadvantages:
    • lack of cohesion: errors might be handled further down or further up the call chain; you won't know until you check the implementation
    • not explicit: function signature does not tell us anything about the possible exceptions
      • Java's checked exceptions tried to alleviate that
    • also not functional: exceptions are hidden return values
  • exceptions are glorified GOTO statements
  • exceptions are slow

At least the try/catch boilerplate for the retry and fallback cases can be extracted into small reusable helpers:
public static class Ex {
    public static T Retry<T, TEx>(Func<T> operation, Action<TEx> beforeRetry)
        where TEx : Exception {
        while (true) {
            try
            {
                return operation();
            } catch (TEx ex) {
                beforeRetry(ex);
            }
        }
    }

    public static T Fallback<T, TEx>(Func<T> operation, Func<TEx, T> fallback)
        where TEx : Exception {
        try {
            return operation();
        } catch (TEx ex) {
            return fallback(ex);
        }
    }
}
using System.Threading;
using QuxLib;

// retry
var qux = Ex.Retry<Qux, DeviceNotFoundException>(
    QuxLib.Connect,
    beforeRetry: ex => {
        QuxLib.PrintError(ex, "Did you forget to plug in your Qux device? Retrying in 5 seconds...");
        Thread.Sleep(5000);
    });

// fallback
var foo = Ex.Fallback<Foo, UnsupportedFeatureException>(
    qux.AccessFooRegulator,
    fallback: ex => {
        QuxLib.PrintError(ex, "Using Fuu fallback.");
        return qux.AccessFuuRegulator();
    });

try {
    var plan = foo.ReadSchedule();         // implicit abort
    plan.Interval = TimeSpan.FromHours(1);
    foo.WriteSchedule(plan);               // implicit abort
} catch (OutdatedFirmwareException ex) {
    QuxLib.PrintError(ex);
    return 1;
}

Console.WriteLine("Done!");
return 0;

A compromise

  • Either/Result
public enum QuxError { DeviceNotFound, UnsupportedFeature, OutdatedFirmware }

public static class QuxLib {
    // might return `Result` with `QuxError.DeviceNotFound`
    public static Result<Qux, QuxError> Connect() { /* ... */ }

    // might return `Result` with `QuxError.UnsupportedFeature`
    public static Result<Foo, QuxError> AccessFooRegulator(Qux qux)  { /* ... */ }

    // can never fail
    public static Fuu AccessFuuRegulator(Qux qux)  { /* ... */ }

    // might return `Result` with `QuxError.OutdatedFirmware`
    public static Result<Plan, QuxError> ReadFooSchedule(Foo foo)  { /* ... */ }

    // might return `Result` with `QuxError.OutdatedFirmware`
    public static Result<Unit, QuxError> WriteFooSchedule(Foo foo, Plan plan)  { /* ... */ }

    public static void PrintError(QuxError err, string? msg = null)  { /* ... */ }
}

public record Qux();

public record Foo();

public record Fuu() : Foo;

public record Plan(TimeSpan Interval);
using System;
using System.Threading;
using SomeLib;

return Result
    .Retry(
        QuxLib.Connect,
        beforeRetry: err => {
            QuxLib.PrintError(err, "Did you forget to plug in your Qux device? Retrying in 5 seconds...");
            Thread.Sleep(TimeSpan.FromSeconds(5));
        })
    .Bind(qux => QuxLib.AccessFooRegulator(qux)
        .Else(err => {
            QuxLib.PrintError(err, "Using Fuu fallback.");
            return QuxLib.AccessFuuRegulator(qux);
        }))
    .Bind(foo => QuxLib.ReadFooSchedule(foo)
        .Map(plan => plan with { Interval = TimeSpan.FromHours(1) })
        .Bind(newPlan => QuxLib.WriteFooSchedule(foo, newPlan)))
    .Switch(
        value: _ => {
            Console.WriteLine("Done!");
            return 0;
        },
        error: e => {
            QuxLib.PrintError(e);
            return 1;
        });
|                | Error codes | Result<V, E> | Exceptions |
| -------------- | ----------- | ------------ | ---------- |
| Expressiveness | 🟡 signature declares success and failure, but readers have to scan the parameters to identify the success type | 🟢 signature declares success and failure in the return type | 🔴 signature only declares the return type for success |
| Boilerplate    | 🔴 lots of ifs | 🟢/🟡 ifs are hidden, but glue functions and lambdas obscure a bit | 🟢/🟡 business logic is very clean, but try/catch constructs are noisy |
| Cohesion       | 🟢 | 🟢 | 🔴 |
| Delegation     | 🔴 | 🟢 | 🟢 |
| Flexibility    | 🟢 | 🟠 | 🔴 |
| Functional     | 🟠 functions are impure, because they mutate their parameters | 🟢 errors are ordinary return values, keeping functions pure | 🔴 exceptions are not part of the return type, making them side effects |
| Debugging      | 🟢 | 🔴 | 🟠 |
| Traceability   | 🟠 | 🔴 | 🟢 |
| Average        | 🟢 | 🟠 | 🔴 |
  • combines the best of both approaches
    • errors are explicit, because they are part of the function signature
    • removes the boilerplate error checking
  • error handling can be done locally or be delegated

  • build Result type with Bind(), Map(), MapError(), ... (a minimal sketch follows at the end of this list)

  • railway oriented error handling: https://fsharpforfunandprofit.com/rop/
  • when not to use it:
    • exceptions still have their place
    • even Haskell has exceptions (e.g. head [] throws "Exception: Prelude.head: empty list")
    • distinction:
      • unrecoverable failures (exceptions) vs. recoverable failures (Result)
      • exceptions conflate the two
      • problem: why should the libraries you are using get to decide which failures you can recover from?
      • example: the ORM lib you are using considers a broken DB connection unrecoverable and throws an exception, but your program can actually recover from it with a couple of retries
      • possible conclusion: try to avoid exceptions as much as reasonably possible (especially in lib code)?
    • https://fsharpforfunandprofit.com/posts/against-railway-oriented-programming/
      • better distinction: unexpected failures (exceptions) vs. expected failures (Result)
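
A minimal sketch of what such a Result type could look like (just enough to support the Map(), Bind(), MapError(), Else(), and Switch() calls from the example above; the static Retry() helper and the Unit type are left out, and real libraries offer a lot more):

public readonly struct Result<V, E> {
    private readonly V value;
    private readonly E error;

    private Result(V value, E error, bool isOk) {
        this.value = value;
        this.error = error;
        IsOk = isOk;
    }

    public bool IsOk { get; }

    public static Result<V, E> Ok(V value) => new Result<V, E>(value, default!, true);
    public static Result<V, E> Err(E error) => new Result<V, E>(default!, error, false);

    // transform the success value, pass errors through unchanged
    public Result<V2, E> Map<V2>(Func<V, V2> f)
        => IsOk ? Result<V2, E>.Ok(f(value)) : Result<V2, E>.Err(error);

    // chain another operation that might fail itself
    public Result<V2, E> Bind<V2>(Func<V, Result<V2, E>> f)
        => IsOk ? f(value) : Result<V2, E>.Err(error);

    // transform the error, pass the success value through unchanged
    public Result<V, E2> MapError<E2>(Func<E, E2> f)
        => IsOk ? Result<V, E2>.Ok(value) : Result<V, E2>.Err(f(error));

    // recover from an error by producing a fallback value
    public Result<V, E> Else(Func<E, V> fallback)
        => IsOk ? this : Ok(fallback(error));

    // resolve the result by handling both cases
    public T Switch<T>(Func<V, T> value, Func<E, T> error)
        => IsOk ? value(this.value) : error(this.error);
}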

Algebraic abstractions

While we implemented the extension functions Map(), Filter(), and Bind() on T? you might have noticed something interesting. Doesn't working with T?s now feel oddly similar to working with LINQ and IEnumerable<T>s?

"Well, of course it does," you say, "because the type T? is isomorphic to the set of empty and singleton lists, dummy!"

Iso-what-now?

Yes, it's true. We can convert a T? into a list with a single element or no element, depending on whether it contained a value or not. We can also convert a list into a T? when it contains zero or one elements. In other words: there exists a lossless and invertible conversion between the set of all T?s and the set of all empty/singleton lists.

The image below visualizes this relation between the two types T? and T[]. Here, T[] works as a concrete example for the generic term "list", but List<T> or IEnumerable<T> of course could do the job as well. The blue and green arrows between the values represent the conversion functions. Actually defining those is left as an exercise for the reader. (I always wanted to write that. So cool.)

Functional programmers call two types having such a conversion an isomorphism, because... well because they want to use the same slang the cool kids from the math club use. I'm just kidding. Actually, purely functional programming (especially Haskell) has some of its roots in category theory and the terminology carried over. Anyway, "morphism" is just a fancy name for "conversion" or "mapping" derived from the Greek word "morph" which means "shape". And "iso" means "equal" in the same language. So saying that two types are isomorphic basically means that they are of "equal shape" and that we can convert back and forth between the two without losing any information.
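
For reference, one possible way to define the two conversions could look like this (using reference types to keep the nullable annotations simple):

public static class Iso {
    // T? -> T[]: a value becomes a singleton array, null becomes an empty array
    public static T[] ToArray<T>(this T? x) where T : class
        => x is null ? Array.Empty<T>() : new[] { x };

    // T[] -> T?: an empty array becomes null, a singleton array becomes its only element
    // (arrays with more than one element have no T? counterpart)
    public static T? ToNullable<T>(this T[] xs) where T : class
        => xs.Length switch {
            0 => null,
            1 => xs[0],
            _ => throw new ArgumentException("More than one element; not isomorphic.", nameof(xs)),
        };
}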

Although this is indeed interesting to know and might become useful in some scenarios, it is not the whole story. There is another, more powerful abstraction at play here, called functor.

Functors

Let's have a look at the extension methods T?.Map(), T?.Filter(), and T?.Bind() again. When comparing their signatures with those of the LINQ extensions you will find that there exists a LINQ equivalent for each of them:

| T? | IEnumerable<T> (shortened to T[]) |
| -- | --------------------------------- |
| T2? Map<T1, T2>(T1?, Func<T1, T2>) | T2[] Select(T1[], Func<T1, T2>) |
| T? Filter(T?, Func<T, bool>) | T[] Where(T[], Func<T, bool>) |
| T2? Bind(T1?, Func<T1, T2?>) | T2[] SelectMany(T1[], Func<T1, T2[]>) |

A closer look at Map() and Select() reveals some striking similarities:

In purple we see some kind of "container type" the function is defined for: IEnumerable<T> and T?. The "contained types" are highlighted in blue (input T1) and green (output T2). And the mapping function which is applied to the contained values is marked yellow. Both Map() and Select() only differ in their name and the container type they work on. Since both functions are pure, can we not conclude that both functions have the same meaning? Both can only produce a result using their parameters, so what much else could they do than to apply the yellow mapping function to the contents of the purple container type? For now, I'm ignoring any intentionally useless implementations (e.g. always returning Enumerable.Empty<T2>() or (T2?)null respectively), but we will get to those later.

So what is this common meaning I claim both IEnumerable<T>.Select() and T?.Map() to have? It's actually fairly simple: they allow a generic type to be transformed with a mapping function while preserving its structure. Map() and Select() only change the T contents of a T? or IEnumerable<T>, but not the "container type" itself. This can be exploited to apply transformations to the "container type" without ever "unpacking" it manually.

We have seen this in action in the previous section where we learned about F#'s Option module and implemented our helper functions on T?. Every time we needed to transform a T? or Option<T> we just used T?.Map() and Option.map, rather than unpacking their values and applying the transformation to the values directly. But you have also used this many times already with LINQ, because IEnumerable<T>.Select() works the same way conceptually. Just think of iterating over a list with a loop as unpacking it. The actual transformation is then part of the body of the loop along with some additional code needed to "re-package" the list either by building a new list or by modifying the original one in place. The two C# snippets below illustrate that similarity:

var x = (int?)13;

var y1 =              // re-pack (implicit conversion to `int?`)
    x switch {        // unpack
        null => null,
        _    => x + 1 // transform content
    };

// map content
var y2 = x.Map(x => x + 1);
var xs = new List<int>() { 1, 2, 3 };

var ys1 = new List<int>(); // re-pack
foreach (var x in xs) {    // unpack
    var y1 = x + 1;        // transform contents
    ys1.Add(y1);           // re-pack
}

// map contents
var ys2 = xs.Select(x => x + 1);

Any generic type MyType<T> exposing a method like Map()/Select() is a functor. It may seem like a geeky exercise to assign a name to such an abstract concept, but we've seen with LINQ's Select() and our T?.Map() extension that functors are incredibly good at hiding boilerplate code and emphasizing business logic. And having a name for this concept makes it much easier to intuitively understand types that use it, as well as to spot opportunities to employ it in your code.

For a few functors more

Another example of a type we can turn into a functor is Task<T>. Like T? it might contain a value, but the question we ask Task<T> is when that value arrives rather than if it was created with a value or not. So to resolve a T? we use branching and to resolve a Task<T> we use await.

Making T? a functor allowed us to chain transformations of its value without resolving it. We effectively postponed the question if there even is a value to transform and delegated it to someone higher up the call chain.

Similarly, making Task<T> a functor will allow us to delay awaiting its value when we just want to apply some transformations to it. For an example have a look at the below method:

// Retrieves the initials of a user's name.
public async Task<string> GetInitials(int userId, bool lowerCase) {
    var user = await LoadUser(userId); // returns a `Task<User>`
    var initials = $"{user.FirstName[0]}{user.LastName[0]}";
    return lowerCase ? initials.ToLower() : initials;
}

GetInitials() will asynchronously load a user by its id. After that it will first take the starting characters of both the user's FirstName and LastName to combine them into a string. Then, depending on the lowerCase argument, the resulting string might be lower-cased or returned directly.

So GetInitials() does two transformations. The first one takes a User object and turns it into a string and the second one maps from string to string again. But in order for any of this to happen the result of LoadUser() has to be awaited inside GetInitials() first.

But watch what happens when we make Task<T> a functor by giving it a Map() function:

public static class TaskExt {
    // Transforms the value inside a task when it arrives.
    public static async Task<T2> Map<T1, T2>(
        this Task<T1> task,
        Func<T1, T2> mapping)
        => mapping(await task);
}

This allows us to extend the work contained inside the Task<User> with our transformations fluently:

public Task<string> GetInitials(int userId, bool lowerCase) =>
    LoadUser(userId)
    .Map(u => $"{u.FirstName[0]}{u.LastName[0]}")
    .Map(s => lowerCase ? s.ToLower() : s);

Notice how we don't need to await inside GetInitials() anymore. Map() is doing that for us, but another way to frame it is that we delegated the await to the caller of GetInitials().
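
A hypothetical caller might then look like this (the user id 42 is made up):

// the caller decides when (and whether) to await the transformed task
var initials = await GetInitials(42, lowerCase: true);

// ...or keeps composing without awaiting, pushing the await even further up the chain
var greetingTask = GetInitials(42, lowerCase: true)
    .Map(i => $"Hello, {i}!");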

However, compared to T?.Map(), we're not saving that much. The T? functor allows us to avoid lots of boilerplate branching in our code while the Task<T> functor allows us to avoid a tiny await. On top of that, depending on what's happening in the transformations, eliding async/await might not even be such a good idea. That doesn't mean that the Task<T> functor is useless--it's just less useful than the T? functor.

That's because even when we keep the await, we are still left with a function that has a cleaner structure. We avoid noisy temporary variables and express the flow of the data in pipeline style.

public async Task<string> GetInitials(int userId, bool lowerCase) => await
    LoadUser(userId)
    .Map(u => $"{u.FirstName[0]}{u.LastName[0]}")
    .Map(s => lowerCase ? s.ToLower() : s);

There are many other generally useful functors out there and if you look close enough you might even spot some among the domain types of the software you are currently working on.

Mark Seemann wrote a great article series on functors with lots of C# examples. I really recommend reading it to get a good understanding of their power and versatility.

Functors in Haskell

"Wait... what? Haskell?!" 😱
Yes, there will be quite a bit of Haskell in this section. But don't worry, I'll explain all you need to know. Also I think it's worthwhile to have a brief look at what a proper language integration of something like functors looks like. Especially considering that C# might gain similar capabilities sometime in the future.

In the realm of OOP languages like C# (and even in F#) functors are merely an idea. A very good idea, yes, but you still have to learn about it elsewhere and memorize its usage. Kind of like a design pattern, it is more of a convention than something that can be formalized with the language's syntax.

In Haskell, however, functors have a concrete representation. It's the typeclass Functor and its definition looks like this:

class Functor f where
    fmap :: (a -> b) -> f a -> f b -- a generalized mapping function
    (<$) :: a -> f b -> f a        -- just ignore this one

The Functor typeclass declares two function signatures. The most important one being fmap ("functor map") which is a generalized name for operations like our T?.Map() or IEnumerable<T>.Select(). Its declaration states that it takes two parameters. The first parameter is a function (a -> b) that turns inputs of type a into outputs of type b. In C# such a parameter is expressed with the type Func<A, B>. The second parameter f a is a Functor with contents of type a and the return value f b is a new Functor with contents of type b. Types like f a cannot be expressed in C#, because in this definition both f and a are generic. Such "generic generics" are referred to as higher-kinded polymorphism and C# does not support them (yet).

Every type that wants to be an instance of Functor has to implement fmap. The Functor typeclass also declares the operator <$, but we will just ignore it, as it currently has no use for us and comes with a default implementation anyway.

You might wonder what typeclasses are and your OOP intuition probably tells you they are just functional lingo for interfaces. While this is not entirely wrong, it is by no means the whole picture. Typeclasses are much more than interfaces, but for the purpose of my explanations this analogy is sufficient and I will leave it at that for now. Instead let's focus on how Functor is being used.

Note
I'm using the Haskell interactive console (GHCi) to execute the following examples. If you don't have it installed, you can use an online version like the one at tryhaskell.org instead.

In Haskell, applying a transformation to a list is easy. Let's say we want to increment each element in a list of numbers:

GHCi> map (+1) [1,2,3]
[2,3,4]

(+1) is a convenient shorthand for \x -> x + 1. But instead of explaining every detail of the above Haskell expression, I'll just show you an equivalent expression executed in Visual Studio's C# interactive console:

C#> new[] { 1, 2, 3 }.Select(x => x + 1)
Enumerable.WhereSelectArrayIterator<int, int> { 2, 3, 4 }

The Haskell list type already is an instance of the Functor typeclass, so we can also use fmap to achieve the same:

GHCi> fmap (+1) [1,2,3]
[2,3,4]

Comparing the signatures of map and fmap we can see why this works:

map  ::              (a -> b) -> [a] -> [b]
fmap :: Functor f => (a -> b) -> f a -> f b

Both map and fmap take a mapping function (a -> b) as their first parameter. Because they are generic functions, the input and output types of the mapping function are denoted with the generic types a and b. The second parameter is the container that is to be mapped. For map it's simply a generic list [a]. But for fmap it's some generic container denoted with f a. It can't be just any generic container, though. The container type must be an instance of the Functor typeclass which is enforced on f by the type constraint Functor f =>. The return type of both functions is again the container type but this time with the generic type b (the return type of the mapping function).

Haskell's Maybe type is the equivalent of Option in F#. But unlike Haskell lists, the type Maybe doesn't have its own map function. Fortunately it is a functor, hence we can use fmap on Maybe values:

GHCi> maybe1 = Just 1  -- a `Maybe Int` with a value
GHCi> maybe2 = Nothing -- a generic `Maybe a` without a value
GHCi> fmap (+1) maybe1
Just 2
GHCi> fmap (+1) maybe2
Nothing

Since we seem to like incrementing so much, let's create a little helper function for this:

GHCi> increment x = fmap (+1) x -- a function to increment `Maybe Int`s
GHCi> increment maybe1
Just 2
GHCi> increment maybe2
Nothing

Interestingly, our function increment is actually not restricted to Maybe Ints alone. Its signature tells us that we can use it with all types that are Functor instances and whose generic type is an instance of another typeclass called Num:

GHCi> :type increment
increment :: (Functor f, Num b) => f b -> f b

The Haskell compiler inferred all this information solely from the way we are using the parameter x inside increment. Because we are using fmap on x, the compiler knows its type must be an instance of Functor and because we want to use the + operator on the contained value, it inferred that its generic type must be an instance of Num.

Oh snap! Does this mean we can use increment on number lists as well? You betcha!

GHCi> increment [1,2,3]
[2,3,4]
GHCi> increment [1.1, 2.2, 3.3]
[2.1,3.2,4.3]

I don't know about you, but I think this is pretty amazing. Without any extra effort we created a highly generalized function to increment all functors of numbers!

Creating an instance of Functor

Let's crank it up a bit and make a custom type an instance of Functor. Say, we really love F#'s Option type and want to use it in Haskell instead of Maybe. Defining a new type is usually very easy in functional languages. In this case it's just one line of code:

data Option a = Some a | None deriving Show

Option is a generic sum type that either has a value (Some a) or (|) no value (None). The deriving Show part makes sure Haskell can print it. Think of it as overriding ToString() in C#, but without having to implement it ourselves, because the compiler is smart enough to do that on its own for such a simple type.
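
For comparison, a rough C# counterpart of that one-liner could look like the sketch below (just one of several possible encodings). At least the deriving Show part comes for free here as well, since records generate ToString() for us:

public abstract record Option<T> {
    public sealed record Some(T Value) : Option<T>;
    public sealed record None : Option<T>;
}

// e.g. new Option<int>.Some(1) prints as "Some { Value = 1 }"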

Option is still missing a way to map over it. We could implement a function that does that specifically for Options. One way to cover both the Some and None case is to split the definition of this function into one definition for each.

mapOption f (Some x) = Some (f x)
mapOption _ None     = None

Note
If you want to try this in GHCi, you will have to wrap both lines with :{ and :} in order to enter and exit multiline mode:

GHCi> :{
GHCi| mapOption f (Some x) = Some (f x)
GHCi| mapOption _ None = None
GHCi| :}
GHCi>

It sure does what we expect:

GHCi> mapOption (+1) (Some 1)
Some 2
GHCi> mapOption (+1) None
None

But just as map only works with lists, mapOption only works with Options. So instead of polluting the namespace with such a horribly limited and specialized function, let's just scrap it and make Option an instance of Functor:

instance Functor Option where
    fmap f (Some x) = Some (f x)
    fmap _ None     = None

To that end we first declare that Option is now an instance of Functor and then provide an implementation for fmap. Unsurprisingly it looks the same as mapOption. After all, both are supposed to do the exact same thing.

Now that you have seen how to create an instance of Functor we can scrap that again, because Haskell actually can derive the implementation for us! We just have to activate the language extension DeriveFunctor by typing :set -XDeriveFunctor into GHCi:

GHCi> :set -XDeriveFunctor
GHCi> data Option a = Some a | None deriving (Show, Functor)

Testing our Option type with fmap shows that the derived implementation indeed does what it should:

GHCi> fmap (+1) (Some 1)
Some 2
GHCi> fmap (+1) None
None

Congratulations, you created your first functor and you can now map over it like any other functor using fmap! But here's the kicker: there is also an operator for fmap. It's called <$> or just "fmap operator". With <$>, expressions using fmap can be reshaped like this:

GHCi> fmap (+1) (Some 1)
Some 2
GHCi> (+1) <$> Some 1
Some 2

This does not look very impressive on its own, but <$> allows you to easily compose transformations:

GHCi> (*2) <$> (+1) <$> Some 1
Some 4

Using the extension functions from the previous section, we can write an equivalent C# expression:

((int?)1).Map(x => x + 1).Map(x => x * 2)

Without <$> we would have to use one of the following alternatives:

GHCi> fmap (*2) (fmap (+1) (Some 1))
Some 4
GHCi> (fmap (*2) . fmap (+1)) (Some 1)
Some 4
GHCi> fmap ((*2) . (+1)) (Some 1)
Some 4
GHCi> fmap (\x -> 2*(x+1)) (Some 1)
Some 4
GHCi> incrementAndDouble x = 2*(x+1)
GHCi> fmap incrementAndDouble (Some 1)
Some 4
GHCi> incremented = fmap (+1) (Some 1)
GHCi> doubled = fmap (*2) incremented

None of which even come close to <$> in terms of readability. Except for the last two examples maybe, but they require defining a one-shot function with limited reusability or noisy temporary variables.

Note
The operator . is the function composition operator defined like f . g ≡ \x -> f (g x). Don't worry if you don't fully understand how it works. I used it in the above list of alternatives merely to give you a complete picture.

Sheesh... that was a lot of Haskell. But I hope you are at least a little bit as amazed as I was when learning how convenient and easy working with functors in Haskell is. Now let's get back to our beloved C#.

C# knows functors too (kind of)

Interestingly, C# actually has functors built into it as well via its query syntax. We know that we can use LINQ's query syntax to map over a list:

var xs = new[] { 1, 2, 3 };
var ys = from x in xs
         select x * x;

You probably also know that the same can be done using LINQ's method syntax:

var xs = new[] { 1, 2, 3 };
var ys = xs.Select(x => x * x);

What you might not know, however, is that the former gets desugared into the latter by the compiler without checking if xs actually is an IEnumerable<T>. When the compiler translates a query expression like from x in xs select f(x) to xs.Select(x => f(x)), it only checks if xs exposes a method named Select() that accepts a lambda as its only argument. This means that we can use the query syntax on any type that we give a Select() method.

For instance, let's rename the functor extension of Task<T> we defined earlier from Map() to Select():

public static class TaskExt {
    public static async Task<T2> Select<T1, T2>(
        this Task<T1> task,
        Func<T1, T2> mapping)
        => mapping(await task);
}

Now we should be able to build query expressions with Tasks! As an example we try it on our GetInitials() function by adding dots to the returned initials:

var dottedInitials = await
    from initials in GetInitials(0, true)
    select $"{initials[0]}.{initials[1]}.";

Perhaps unsurprisingly this indeed works. It's up to you if you think the above is more readable than:

var dottedInitials = await GetInitials(0, true)
    .Select(initials => $"{initials[0]}.{initials[1]}.");

I personally don't think so, but the bigger disadvantage of the query syntax is that it only allows one select per expression. So if you have multiple mapping steps you'll have to cram all of them together into one.
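
To illustrate, here's a hypothetical second mapping step that wraps the dotted initials in brackets. With method syntax the two steps stay separate, while the query syntax forces us to merge them into the single select:

// method syntax: two separate mapping steps
var bracketed1 = await GetInitials(0, lowerCase: true)
    .Select(initials => $"{initials[0]}.{initials[1]}.")
    .Select(dotted => $"[{dotted}]");

// query syntax: only one select per expression, so both steps are crammed into it
var bracketed2 = await
    from initials in GetInitials(0, lowerCase: true)
    select $"[{initials[0]}.{initials[1]}.]";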

The interesting thing to note about from .. in .. select is that it acts like a general interface to functors. Similar to Haskell's fmap, which we can use with any type that implements the Functor typeclass, we can build query expressions with any type that exposes Select(). From the outside from x in xs select f(x) seems to be the same as fmap (\x -> f x) xs or fmap f xs.

Generalizing C#'s functors

So C# clearly has some support for functors hard-coded into the compiler, but can we also generalize this the same way Haskell generalizes it with the Functor typeclass? Such a generalization is useful when we want to write C# functions that accept any kind of functor.

For example, earlier we defined a Haskell function that can increment any functors of numbers:

increment :: (Functor f, Num b) => f b -> f b
increment x = fmap (+1) x

It doesn't matter whether we give increment a Maybe b or a [b], the contents of both will be incremented, because both are functors and thus work with fmap.

We can try and translate the Haskell signature of increment into C#. For simplicity, we ignore the Num typeclass and just replace it with int:

F Increment<F>(F functor) where F: Functor<int>

Easy enough. We now almost have an Increment() function that receives and returns a Functor<int>. The actual implementation will come later, let's deal with Functor<T> first. Making it an interface seems like the most appropriate choice:

public interface Functor<T> {
    Functor<TResult> Select<TResult>(Func<T, TResult> mapping);
}

Well, that wasn't too hard either. We gave it a Select() function so we can use it inside query expressions. Now it's time to create a functor instance by implementing Functor<T> on a type:

public class Foo<T> : Functor<T> {
    public T Content { get; set; }
    public Functor<TResult> Select<TResult>(Func<T, TResult> mapping)
        => new Foo<TResult> { Content = mapping(Content) };
}

Our functor Foo isn't particularly useful--it only wraps a plain value without adding any semantics, but according to our definition we should be able to use it with Increment(). Of course, we have to add an implementation to Increment() first.

Our Haskell implementation of increment was increment x = fmap (+1) x. In C#, we are trying to use from .. in .. select as a replacement for fmap, so we write:

public F Increment<F>(F functor)
    where F : Functor<int>
    => (F)from x in functor select x + 1;

And now we can test our C# implementation of increment:

var containsOne = new Foo<int> { Content = 1 };
var containsTwo = Increment(containsOne);
Debug.Assert(containsTwo.Content == 2);

Holy typeclass, Batman! That actually compiled and worked! Well, it kind of did. You might have noticed that I cheated a bit and snuck the type cast (F) into the implementation of Increment().

Higher-kinded type fu

The problem is that our declaration of Functor<T>.Select() has a major drawback: its return type is always Functor<TResult> which means that when we call Select() on a concrete functor implementation like Foo<T>, we won't get a Foo<T> back. That's rather impractical, as you can see below:

var containsOne = new Foo<int> { Content = 1 };
var containsTwo = from x in containsOne select x + 1;
Debug.Assert(containsTwo.Content == 2); // CS1061: 'Functor<int>' does not
                                        // contain a definition for 'Content'

What we'd really need is a functor interface that "remembers" the type it was implemented for. In the below pseudo code that type is represented with the generic parameter F:

public interface Functor<F<T>> {
    F<TResult> Select<TResult>(Func<T, TResult> mapping);
}

Notice how Select() now returns F<TResult> instead of Functor<TResult>. Functor implementations could then look like this:

public class Foo<T> : Functor<Foo<T>> {
    Foo<TResult> Select<TResult>(Func<T, TResult> mapping)
        => new Foo<TResult> { Content = mapping(Content) };
}

Unfortunately, neither Functor<F<T>> nor Foo<T> : Functor<Foo<T>> is valid C#, because C# does not support nesting declarations of generic type parameters as in F<T>. Such higher-kinded types (a.k.a. higher-kinded polymorphism) are not supported by C#. They are really powerful though, so we might get them some time in the far future.

What we could do is create an extra "shadow interface" for each type that implements Functor<T>:

public interface FooFunctor<T> : Functor<T> {
    new Foo<TResult> Select<TResult>(Func<T, TResult> mapping);
}

Notice how it shadows the original declaration of Functor<T>.Select() with the new modifier. Our Foo<T> type would then implement FooFunctor<T> instead of Functor<T>, but would have to provide an implementation for both versions of Select():

public class Foo<T> : FooFunctor<T> {
    public T Content { get; set; }

    public Foo<TResult> Select<TResult>(Func<T, TResult> mapping)
        => new Foo<TResult> { Content = mapping(Content) };

    Functor<TResult> Functor<T>.Select<TResult>(Func<T, TResult> mapping)
        => Select(mapping);
}

With this setup Increment() would still have to cast to (F) as before, but our previously invalid example would now compile and work:

var containsOne = new Foo<int> { Content = 1 };
var containsTwo = from x in containsOne select x + 1;
Debug.Assert(containsTwo.Content == 2); // no compiler error here anymore

Even though this works as it should, it is obvious that this solution is far from elegant. We are bending C# to our needs to the point where our intentions become obfuscated.

There are other, more serious attempts at introducing the functor typeclass (and many others) to C#. Most notable is probably the package LanguageExt.Core where the functor interface looks like this:

public interface Functor<FA, FB, A, B> : Typeclass {
    FB Map(FA ma, Func<A, B> f);
}

But implementing this interface for our beloved Foo<T> also seems hacky to me, because we'd still have to create an additional type awkwardly sitting between Functor<T> and Foo<T>:

public struct FooFunctor<A, B> : Functor<Foo<A>, Foo<B>, A, B> {
    public static readonly FooFunctor<A, B> Inst = default(FooFunctor<A, B>);

    public Foo<B> Map(Foo<A> ma, Func<A, B> f)
        => new Foo<B> { Content = f(ma.Content) };
}

I think it's safe to say that, currently, typeclasses and higher-kinded types just don't go well with C# and we should wait for the C# language team to add them. Typeclasses might be implemented with a new language feature named concept which is considered a candidate for C# 10. Higher-kinded types are also under evaluation but will probably take much longer to arrive. But don't forget that the lack of both doesn't render functors irrelevant in C#. We can still implement and use functors, we just can't generalize over them with functions like Increment().

Judging from this poor outcome you might be wondering why you even read this section in the first place. I think it was worth it nonetheless, because we learned about some internal details of the C# compiler and also gained a better understanding of typeclasses and higher-kinded types along the way.

The functor laws

Remember earlier when we compared LINQ's Select() with our T?.Map()? I claimed that both must have the same conceptual meaning (i.e. "functor map"), because their signatures are structurally the same. Back then I avoided clarifying how I can infer such a claim from their signatures alone when I really can't know what their implementations look like. And the truth is I can't.

We could, for instance, implement T?.Map() like so:

public static T2? Map<T1, T2>(this T1? x, Func<T1, T2> mapping)
    where T1 : class
    where T2 : class
    => null;

This Map() certainly looks like a functor from the outside, but it does not behave like one. In order for a type to become a functor it not only has to implement a structure-preserving map, it also has to obey the following two laws:

  1. Functors must preserve identity morphisms.
  2. Functors must preserve composition of morphisms.

At first they might sound very intimidating, but they are really quite simple.

  • explain why I delayed going into them

Functors preserve identity

The first law is the simplest, so let's start with that one. In our context "morphism" simply means "pure function". So functors must preserve identity functions? But what are identity functions? Really there is just one identity function and it's one of the easiest functions one can think of:

public T Id<T>(T x) => x;

Wow! So concise, so sleek, so... useless. That's right. The identity function Id() simply returns its only argument unaltered:

var x = 13;
Debug.Assert(Id(x) == 13);

But how do functors "preserve" the identity function? Funnily enough, they do that by doing nothing:

int? a = 13;
int? b = a.Map(Id);
Debug.Assert(a == b);

var xs = new[] { 1, 2, 3 };
var ys = xs.Select(Id);
Debug.Assert(xs.SequenceEqual(ys));

In fact, this law ensures that the functor is doing nothing else but applying the provided mapping function to its contents. If it were doing something else, that would be noticeable when it is used with the identity function.

Another way to look at it is that mapping a functor with the identity function should be exactly the same as applying the identity function to that functor directly:

var xs = new[] { 1, 2, 3 };

var ys = xs.Select(Id);
var zs = Id(xs);

Debug.Assert(ys.SequenceEqual(zs));
  • counter example: a functor not preserving identity
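
For instance, here's a hypothetical Map() for lists of ints that has the right shape but secretly does more than just mapping. Using it with Id() makes the law violation visible immediately:

public static class Dubious {
    // looks like a functor map, but also drops all odd numbers while mapping
    public static IEnumerable<T> Map<T>(this IEnumerable<int> xs, Func<int, T> mapping)
        => xs.Where(x => x % 2 == 0).Select(mapping);
}

var xs = new[] { 1, 2, 3 };
var ys = xs.Map(Id);

Debug.Assert(xs.SequenceEqual(ys)); // fails: ys is { 2 } instead of { 1, 2, 3 }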

Functors preserve composition
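
The second law is only slightly more involved: mapping a functor with two functions one after another must give the same result as mapping it once with their composition. A quick sketch with IEnumerable<T>:

Func<int, int> f = x => x + 1;
Func<int, int> g = x => x * 2;

var xs = new[] { 1, 2, 3 };

var ys = xs.Select(f).Select(g);  // map twice
var zs = xs.Select(x => g(f(x))); // map once with the composed function

Debug.Assert(ys.SequenceEqual(zs));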

Functional design patterns

What we've seen in action above is actually how design patterns work in functional languages. In OOP a design pattern is a set of best practices and conventions aimed at solving an often encountered problem in a reliable way.

Monads

  • C# knows monads (kind of)
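
Just like the Select() trick from earlier, the compiler desugars query expressions with multiple from clauses into calls to a method named SelectMany(), i.e. the monadic Bind(). A sketch, assuming a hypothetical SelectMany() overload for nullable value types like the one below:

public static class NullableExt {
    // the overload the compiler looks for when desugaring two from clauses
    public static TResult? SelectMany<T1, T2, TResult>(
        this T1? source,
        Func<T1, T2?> bind,
        Func<T1, T2, TResult> project)
        where T1 : struct where T2 : struct where TResult : struct
        => source is T1 x && bind(x) is T2 y ? project(x, y) : (TResult?)null;
}

int? a = 1;
int? b = 2;

var sum = from x in a
          from y in b
          select x + y; // (int?)3; becomes null as soon as a or b is null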

Monoids

WIP

Sources and further materials

<Query Kind="Program">
  <NuGetReference>System.Collections.Immutable</NuGetReference>
  <Namespace>System.Collections.Immutable</Namespace>
  <Namespace>System.Linq</Namespace>
</Query>

void Main() {
    var junkyards = ImmutableList.Create(
        new Junkyard("1340 E Bay Ave", ImmutableList.Create(
            new Car("HVU 9747", ImmutableList<Tire>.Empty),
            new Car("GWH 8379", ImmutableList.Create(
                new Tire("front left"),
                new Tire("front right"),
                new Tire("rear left"),
                new Tire("rear right"),
                new Tire("spare"))))),
        new Junkyard("360 Nostrand Ave", ImmutableList<Car>.Empty),
        new Junkyard("148-36 Liberty Ave", ImmutableList.Create(
            new Car("AZ 3381", ImmutableList.Create(
                new Tire("front right"))),
            new Car("2100952", ImmutableList.Create(
                new Tire("rear left"),
                new Tire("rear right"))),
            new Car("HMZ 2798", ImmutableList.Create(
                new Tire("front right"),
                new Tire("rear left"),
                new Tire("rear right")))))).Dump("original");

    var modified1 = ModifyNaively(junkyards)
        .Dump("ModifyNaively()");
    var modified2 = ModifyAndMinimizeReconstruction(junkyards)
        .Dump("ModifyAndMinimizeReconstruction()");
    var modified3 = ModifyAndMaximizeSharing(junkyards)
        .Dump("ModifyAndMaximizeSharing()");
    var modified4 = ModifyWithEfficientBuilder(junkyards)
        .Dump("ModifyWithEfficientBuilder()");
}

public ImmutableList<Junkyard> ModifyNaively(ImmutableList<Junkyard> junkyards) {
    Car RestockTires(Car car) => new Car(
        car.Plate,
        car.Tires.AddRange(GetSpareTires(Math.Max(0, 4 - car.Tires.Count))));

    Junkyard RefitCars(Junkyard jy) => new Junkyard(
        jy.Address,
        jy.Cars.Select(RestockTires).ToImmutableList());

    return junkyards.Select(RefitCars).ToImmutableList();
}

public ImmutableList<Junkyard> ModifyAndMinimizeReconstruction(ImmutableList<Junkyard> junkyards) {
    Car RestockTires(Car car) => car.Tires.Count >= 4 ? car : new Car(
        car.Plate,
        car.Tires.AddRange(GetSpareTires(4 - car.Tires.Count)));

    Junkyard RefitCars(Junkyard jy) => jy.Cars.All(c => c.Tires.Count >= 4)
        ? jy
        : new Junkyard(jy.Address, jy.Cars.Select(RestockTires).ToImmutableList());

    return junkyards.Select(RefitCars).ToImmutableList();
}

public ImmutableList<Junkyard> ModifyAndMaximizeSharing(ImmutableList<Junkyard> junkyards) {
    bool IsMissingTires(Car c) => c.Tires.Count < 4;

    Car RestockTires(Car car) => new Car(
        car.Plate,
        car.Tires.AddRange(GetSpareTires(4 - car.Tires.Count)));

    Junkyard RefitCars(Junkyard jy) => new Junkyard(jy.Address, jy.Cars
        .Where(IsMissingTires)
        .Aggregate(jy.Cars, (cs, c) => cs.Replace(c, RestockTires(c))));

    return junkyards
        .Where(jy => jy.Cars.Any(IsMissingTires))
        .Aggregate(junkyards, (jys, jy) => jys.Replace(jy, RefitCars(jy)));
}

public ImmutableList<Junkyard> ModifyWithEfficientBuilder(ImmutableList<Junkyard> junkyards) {
    bool IsMissingTires(Car c) => c.Tires.Count < 4;

    Car RestockTires(Car car) {
        var ws = car.Tires.ToBuilder();
        ws.AddRange(GetSpareTires(4 - car.Tires.Count));
        return new Car(car.Plate, ws.ToImmutable());
    }

    Junkyard RefitCars(Junkyard jy) => new Junkyard(jy.Address, jy.Cars
        .Where(IsMissingTires)
        .Aggregate(jy.Cars.ToBuilder(), (cs, c) => { cs.Replace(c, RestockTires(c)); return cs; })
        .ToImmutable());

    return junkyards
        .Where(jy => jy.Cars.Any(IsMissingTires))
        .Aggregate(junkyards.ToBuilder(), (jys, jy) => { jys.Replace(jy, RefitCars(jy)); return jys; })
        .ToImmutable();
}

public ImmutableList<Tire> GetSpareTires(int count) => ImmutableList.CreateRange(Enumerable
    .Range(1, count)
    .Select(_ => new Tire("spare")));

public class Junkyard {
    public readonly string Address;
    public readonly ImmutableList<Car> Cars;

    public Junkyard(string address, ImmutableList<Car> cars) => (Address, Cars) = (address, cars);
}

public class Car {
    public readonly string Plate;
    public readonly ImmutableList<Tire> Tires;

    public Car(string plate, ImmutableList<Tire> tires) => (Plate, Tires) = (plate, tires);
}

public class Tire {
    public readonly string Id;

    public Tire(string id) => Id = id;
}

public static class BuilderExt {
    // For some reason the `Builder` class is missing the `Replace()` method. This is just a
    // substitute implementation (see https://github.com/dotnet/corefx/issues/41959).
    public static void Replace<T>(this ImmutableList<T>.Builder b, T oldItem, T newItem) {
        var idx = b.IndexOf(oldItem);
        if (idx < 0) throw new InvalidOperationException($"Old item '{oldItem}' not found.");
        b.RemoveAt(idx);
        b.Insert(idx, newItem);
    }
}

@thibautde commented:

Thank you for this article, I really enjoyed reading it! All code examples are appreciated, especially F#/C# equivalences for each function which help understand each concept.
Do you plan to write on Either?
Also I’m going to try your Nullable.Extensions package which seems interesting.

@bert2 (author) commented May 8, 2020:

Hi @thibautde, thanks for the kind feedback. Really appreciate it!

The Either is one of the things I have planned to write about as well. I think it will be a summary of Scott Wlaschin's railway oriented approach, an introduction to F#'s Either and an implementation example in C#. I hope I'll find the time to do it soon :)
