Thoughts on OOP.

Note: this was written as part of a letter in response to a good friend who asked for my opinion on OOP. It's mostly an off-the-cuff work in progress:

What a question to try and tackle. I have so much that I want to say, but I'm going to try to distill my thinking down to what I believe are the most relevant points. This might get long, because I'm also going to use it as a recap of my own understanding of the topic 🧠.

Mutable State

By far the most dangerous thing about OOP is the use of mutable state. You've probably noticed in your programming career, especially after making the jump to the front-end, that mutable state makes everything harder. Every guarantee breaks down. When you have an object with a mutable API, and multiple things holding a reference to that object or its state, there's no good way to coordinate updates to it without breaking consumers, especially in a multi-threaded context.

Each object is in charge of managing its own state / data. It's far too easy for a programmer to call a function on an object and break another part of the code in a non-obvious way. The biggest crime is when objects expose functions that change state internally and give no way of indicating this to the consumer. This usually means that even for classes with a well-crafted public API, programmers often have to dig through and understand the internals of the class anyway, because of the lack of guarantees.

Modern systems are asynchronous and multi-threaded, so this problem just balloons in complexity. One thread can read the state of an object and use that value, and then the object gets changed out from underneath it at any time by a call from some other part of the system. There are no common semantics for how to handle this; it's just a mess.

When you embrace immutable data a lot of these problems go away. Data can be shared. It doesn't matter if hundreds of parts of the system are looking at a value because it will never change. Each part of the system is free to derive new versions of the data at will.
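A small TypeScript sketch of the contrast (the account shape here is made up purely for illustration):

// Mutable: any caller can change the object underneath every other holder of the reference.
class MutableAccount {
  constructor(public balance: number) {}
  withdraw(amount: number) { this.balance -= amount }
}

// Immutable: "updates" derive new values, so anything already holding `before` is unaffected.
type Account = Readonly<{ balance: number }>

const withdraw = (account: Account, amount: number): Account => ({
  balance: account.balance - amount,
})

const before: Account = { balance: 100 }
const after = withdraw(before, 30)   // `before` is still { balance: 100 }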

For more on this I would recommend Keynote: The Value of Values - Rich Hickey

Conflation of State and Behaviour

This leads into the conflation of state and behaviour. The fact that factory classes exist is the epitome of this problem. Classes and objects are inherently nouns, but in programming we need to do lots of things. We have lots of actions (verbs) which we have to shoehorn into this model.

In OOP, actions are always done to classes. Data structures and behaviour are always bound together. Data on its own is easy to reason about; data doesn't do anything by itself. If you want to reason about the data within a class, though, you really have to trawl through a lot of the internals to get the context, especially since most of the data within a class is usually there to model some kind of state. Combine this with mutable state and you lose all hope, since any function you call on a class might change its internal state, making subsequent calls to other methods do something completely different.

If you want to take two classes and produce a third type of class, where does that functionality live? Is it a member of one of the two classes? Do you make a factory?

type CFactory =
    static member Make (thing: A) (otherThing: B) : C = ...

In the functional world you'd just have a function which takes two chunks of data, and creates a third chunk of data.
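Sticking with A, B and C from the snippet above (their shapes below are invented just to make the sketch compile), the functional version is simply:

type A = { width: number }
type B = { height: number }
type C = { area: number }

// Two chunks of data in, a third chunk of data out. No factory, no class to find a home for.
const makeC = (a: A, b: B): C => ({ area: a.width * b.height })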

I was watching a panel on the future of programming languages. Someone asked, "What features do we need in tomorrow's programming languages?" Rich Hickey quickly grabs the mic, thinks for a second and then says: "Functions... and... data", then puts the mic down again.

Concretion and mapping

This 5 minute video really hits home on this problem: https://www.youtube.com/watch?v=aSEQfqNYNAc (Yes I love Rich Hickey)

When we create a class with data, every single time we are required to implement completely unique semantics for how to access the data stored within it and how to change its internal state. There's so much replication in an OOP code base because you can't point a general-purpose data-manipulation function at a class. (For example, if you have a map in Clojure, there are a hundred functions for modifying, removing, merging, filtering and more.) Each class, however, is a unique snowflake: even if the data inside it is effectively a map, we have to learn the particular ways we're allowed to access it. We re-implement the logic for adding and removing things from collections, maps and many other data structures. We implement methods over and over that give us different views into the internal state of the thing.
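A rough TypeScript illustration of the contrast, with plain objects standing in for Clojure maps (the user record is made up for the example):

const user = { name: "Ada", email: "ada@example.com", admin: false }

const promoted = { ...user, admin: true }       // generic "merge" via spread
const { email, ...withoutEmail } = user         // generic "remove a key" via rest
const fields = Object.keys(user)                // generic inspection

// A class holding the same data hides it behind a bespoke API: you have to learn
// (and usually re-implement) its own getters, setters, copy methods and so on,
// for every single class.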

With nominal static typing, even if two classes have EXACTLY the same internal data, you can't just use one in place of the other: if you need an A but have a B with the same data inside it, you're going to have to map that B into an A before you can use it.

One of the best things about TypeScript is its structural typing: if B satisfies the structure of A, then it can be used in place of A. Even if B has additional data, it doesn't matter; it can still fulfil the contract.

For example:

type A = { foo: string }
type B = { foo: string, bar: string }

const myFunc = (a: A) => { return a }

const a: A = { foo: "Hello" }
const b: B = { foo: "Hello", bar: "There" }

myFunc(a) // Compiles 🙂
myFunc(b) // Also compiles 😎

This is so liberating because structural typing in TS gives you a lot of the benefits of static typing while also letting you leverage the dynamic, data-centric nature of JS.

myFunc doesn't care that b has extra data; the contract is fulfilled, it has everything it needs, so it 'just works'.

Semantics of Composition—(lack thereof)

When you compose classes, a lot of guarantees go out the window. You know how some things are copied by value and others by reference? How do you make a deep copy of a class? (Especially hard in C++.) Now, how do you make a copy of a class hierarchy: classes that have classes as members, which have classes as members, and so on?

You have to read and understand the entire hierarchy of code before you can duplicate it or make a copy. Object composition gives you zero compositional semantics, because there's no common way this should work.

In Clojure, if I have a map with a nested map inside that, and another nested map inside that, I don't have to care about or understand the internal structure to make a copy. It just works.
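A loose TypeScript analogue, treating readonly plain objects as stand-ins for Clojure's immutable maps (the config shape is invented for the example):

type Config = Readonly<{
  name: string
  server: Readonly<{ port: number; tls: Readonly<{ enabled: boolean }> }>
}>

const original: Config = { name: "app", server: { port: 80, tls: { enabled: false } } }

// "Copying" is just sharing the value, because nothing can mutate it.
const copy = original

// Deriving a new version only touches the path that changes; every unchanged branch is shared.
const updated: Config = { ...original, server: { ...original.server, tls: { enabled: true } } }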

Multiple inheritance

That situation you described with the data harvesters is just chaos. Even in the OOP community, multiple inheritance, and inheritance in general, is falling out of favour. I don't see any benefit in inheritance. It puts you in a mindset where you start trying to create the perfect taxonomy of categories, like the canonical animal -> mammal -> elephant example that floats around universities and the internet.

Our brains slice up reality into buckets that we call concepts. However, concepts are never truly representative of reality. Concepts are lossy approximations. In the words of Richard Feynman's father, "You can know the name of a bird in 100 different languages but at the end of the day you'll still know nothing at all whatsoever about the bird".

Taxonomies are perfect in theory, but they never quite work out when they come up against the real world. Even in our attempts to classify everything in the animal kingdom there are endless special cases. I'm sure there's an elegant way to link this to Gödel's Incompleteness Theorem; the gist is that any system of logic we build is flawed and incomplete. I tend to believe this is due to the nature of our brains and conceptual existence.

The best we can do, imo, is model the data and keep massaging it closer to the domain / real world iteratively over time. The idea that we can create the perfect super-class to inherit from in every subsequent situation is absurd to me; it's always going to be flawed or result in a lot of special cases.

Bonus Round

Why are we still teaching SOLID? Ask anyone to explain the Liskov substitution principle and there's a good chance they'll have to look it up.

Some of the things in SOLID are so out of date. Take the open/closed principle for example:

"Software entities should be open for extension but closed for modification"

Bertrand Meyer proposed the open/closed principle. A significant issue at the time (the late 1980s) was that compilation units were not extensible.

Open: the module can be extended, e.g. by adding a new public function to its interface.

Closed: once other modules depend on it, the module shouldn't be modified, since changes would impact existing consumers.

Back in the day

Modifying code in a module forced consumers to re-compile (difficult, time-consuming, and a source of breaking changes).

  • How can you make a module that is both open and closed under these conditions?
    • Classical OO Inheritance was the answer (sketched after this list):
      • Extend an existing class by creating a sub-class
      • Override old methods (usually stubbing out to the underlying super-class)
      • Implement new ones
    • Existing consumers unaffected.
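A minimal TypeScript sketch of that classical move (the Logger class is purely illustrative):

// The shipped module is never modified ("closed").
class Logger {
  log(message: string) { console.log(message) }
}

// New behaviour arrives by extension ("open"): a sub-class overrides a method,
// delegates to the super-class where the old behaviour is still wanted, and adds new ones.
class TimestampedLogger extends Logger {
  log(message: string) { super.log(`${new Date().toISOString()} ${message}`) }
  logError(message: string) { this.log(`ERROR: ${message}`) }
}

// Existing consumers keep using Logger, untouched and un-recompiled.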

In TypeScript / Clojure / F# / (sometimes) C#, and other modern module-based languages

We typically think: "Module A" depends on -> "Module B"

In reality some functions in "Module A" call functions in "Module B"

As long as behaviours don't change in "Module B", "Module A" doesn't care if "Module B" gets new functions or changes in some other way.

This clarifies Meyer’s original problem:

A module needs to change its functionality

It must not break its promises to existing clients.

We need to catalog what changes are permissible within those constraints. We can call these extensions, since they don’t remove or break existing promises.

Open/closed, the way it's taught, doesn't make sense in a modern context at all. If you're relying on functions from a module, it's no problem if that module gets new functions or expands function contracts in non-breaking ways. Recompilation is cheap, and libraries ship changes and new versions all the time.
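For instance (file names are hypothetical), a sketch of why a new export is a non-event for consumers:

// moduleB.ts: gains a new export in a later version; nothing existing changes.
export const add = (a: number, b: number) => a + b
export const subtract = (a: number, b: number) => a - b   // new in v2, non-breaking

// moduleA.ts: only ever depended on `add`, so it neither notices nor cares.
import { add } from "./moduleB"
export const total = (xs: number[]) => xs.reduce(add, 0)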

Closing

I hope there are a few new ideas in here and that I've given you a lot to think about; this is a summary of the most interesting and compelling arguments I've come across against OOP. Sometimes I do think an object is actually the right choice: it's rare, but occasionally classes can be very useful. The idea that everything should be a class, though, is absurd.

In closing I leave you with two quotes: the first from RE: What's so cool about Scheme?, and the second from Joe Armstrong, creator of Erlang.

The venerable master Qc Na was walking with his student, Anton. Hoping to prompt the master into a discussion, Anton said "Master, I have heard that objects are a very good thing - is this true?"

Qc Na looked pityingly at his student and replied, "Foolish pupil - objects are merely a poor man's closures."

Chastised, Anton took his leave from his master and returned to his cell, intent on studying closures.

He carefully read the entire "Lambda: The Ultimate..." series of papers and its cousins, and implemented a small Scheme interpreter with a closure-based object system. He learned much, and looked forward to informing his master of his progress.

On his next walk with Qc Na, Anton attempted to impress his master by saying "Master, I have diligently studied the matter, and now understand that objects are truly a poor man's closures."

Qc Na responded by hitting Anton with his stick, saying "When will you learn? Closures are a poor man's object." At that moment, Anton became enlightened. :)


Why OO was popular?

  • Reason 1 - It was thought to be easy to learn.
  • Reason 2 - It was thought to make code reuse easier.
  • Reason 3 - It was hyped.
  • Reason 4 - It created a new software industry.

I see no evidence of 1 and 2. Reasons 3 and 4 seem to be the driving force behind the technology. If a language technology is so bad that it creates a new industry to solve problems of its own making then it must be a good idea for the guys who want to make money.

This is the real driving force behind OOPs.
