@sevas
Created July 17, 2014 23:09
https://www.quora.com/Object-Oriented-Programming/Was-object-oriented-programming-a-failure/answer/Michael-O-Church
No, not a failure. It's the worst kind of success.
Alexander Stepanov's complaint is blistering and accurate. If you read Types and Programming Languages, you get a sense for just how much complexity objects add to your world. OOP, as commonly envisioned, doesn't play well with static or dynamic typing.
Is OOP a failure? Well, what is it? I've heard OOP given about 12 definitions, all credible in some core way, but many conflicting. Like "Scrum", it's too all over the place to justify a closed-form, final opinion. It's either highly beneficial or loathsome depending on which interpretation one uses. There's good OOP and bad OOP. This should be no surprise: in the anti-intellectual world of mainstream business software, it's mostly bad OOP. (For "Scrum", there's the same sad story.)
Separation of implementation and interface is a clear win. That's not limited to OO languages, of course. Haskell has type classes, Clojure has protocols, and OCaml has (if you're brave) functors. Nonetheless, I'm going to score that as a clear Good Idea that OOP championed early on.
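A minimal C++ sketch of that idea (the Logger and ConsoleLogger names are illustrative, not from the answer): callers depend only on an abstract interface, and the implementation behind it can change freely.

```cpp
#include <iostream>
#include <string>

// Callers depend only on this abstract interface...
struct Logger {
    virtual ~Logger() = default;
    virtual void log(const std::string& msg) = 0;
};

// ...while the implementation can vary freely behind it.
struct ConsoleLogger : Logger {
    void log(const std::string& msg) override {
        std::cout << "[log] " << msg << "\n";
    }
};

void run_job(Logger& logger) {
    logger.log("job started");  // no knowledge of the concrete type
}

int main() {
    ConsoleLogger logger;
    run_job(logger);
}
```

The same shape exists as a type class in Haskell or a protocol in Clojure; only the mechanism differs.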
Here, Alan Kay's inspiration was the biological cell. Alan Kay is one of the best software designers alive, and has been extremely critical of modern OOP. Now, the cell: it's an intricate, convoluted machine, almost on the verge of collapsing under the weight of its own complexity. In a larger organism, cells communicate through a simpler interface: chemical signals (hormones) and electrical activations. If they coupled more tightly, the organism wouldn't be viable. Kay was not saying, "you should go out and create enormously complex systems". OOP, to him, was about how to manage complexity when it emerged. In this way, OOP and FP were actually orthogonal (and could support one another) rather than in conflict. It was still desirable that objects do one thing and do it well; but interfaces were intended to underscore that "one thing" when the demands on the implementation made it hard to tell what that was.
OOP and FP (and, in reality, all higher-level languages) both exist to answer the question, "How do we prevent software entropy?" See, Alan Turing's result on the Halting Problem isn't about termination or about machines and tapes. It's the first of many theorems establishing the same thing: we can't reason, in any way whatsoever, about arbitrary code. It's mathematically impossible. Obvious solution: "don't write arbitrary code." (Most code that a person would write to solve a problem is in a low-entropy region where reasoning about code is possible.) Equally obviously, no one sets out to write "arbitrary code". Generally, we don't go very far at all into that chaotic space of "all code", and that's good. However, as the number of hands that have passed over code increases, it drifts further into that high-entropy/"arbitrary code" space. FP and OOP are two toolsets designed to keep it from getting there too fast. FP enforces simplicity by forcing people to think about state and mutability, encouraging code that can be decomposed into "do one thing" components-- mostly mathematical functions. OOP tries to make software look like "the real world" as an average person understands it (CheckingAccount extends Account extends HasBalance extends Object). The problem is that it encourages people to program before they think, and it allows software to be created that mostly works even though no one knows why it does. OOP places high demands on the creators of the machinery (in effect, a new DSL) that will be built to solve a problem. Because of the high demands OOP places on human care of the software, the historical solution has been to have elite programmers (architects!) design and peons implement; that never worked out, for a number of reasons: it's hard to separate capability from political success, the best programmers don't want to be fucking around with lines and boxes and UML, and business requirements are still a constant source of increasing complexity (with outdated or unwanted requirements never retracted).
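A hedged sketch of the contrast this paragraph draws, reusing the answer's own CheckingAccount/Account/HasBalance chain against an FP-flavored alternative (the Balance struct and deposit function are illustrative additions):

```cpp
#include <iostream>

// OOP style: meaning spread across an inheritance chain.
struct HasBalance { double balance = 0.0; };
struct Account : HasBalance {};
struct CheckingAccount : Account {};

// FP style: plain data plus one small, pure "do one thing" function.
struct Balance { double amount; };

Balance deposit(Balance b, double amount) {
    return Balance{b.amount + amount};  // no hidden state is mutated
}

int main() {
    CheckingAccount acct;
    acct.balance += 50.0;                      // mutation buried in the hierarchy
    Balance b = deposit(Balance{0.0}, 50.0);   // explicit data flow
    std::cout << acct.balance << " " << b.amount << "\n";
}
```

Both compute the same thing; the FP version makes the state change visible in the function's type, while the OOP version hides it behind the hierarchy.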
What went wrong? People rushed to use the complex stuff (see: inheritance, especially multiple) when it wasn't necessary, and often with a poor understanding of the fundamentals. Bureaucratic entropy and requirement creep (it is rare that requirements are subtracted, even if the original stakeholders lose interest) became codified in ill-conceived software systems. Worst of all, over-complex systems became a great way for careerist engineers (and architects!) to gain "production experience" with the latest buzzwords and "design patterns". With all the C++/Java corner-cases and OO nightmares that come up in interview questions, it's actually quite reasonable that a number of less-skilled developers would get the idea that they need to start doing some of that stuff (so they can answer those questions!) to advance into the big leagues.
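For a concrete instance of "the complex stuff", here is a minimal C++ sketch of the multiple-inheritance diamond (the class names are illustrative): it works, but only because of a subtlety (virtual inheritance) that is easy to get wrong.

```cpp
#include <iostream>

struct Device {
    virtual ~Device() = default;
    virtual const char* name() const { return "device"; }
};

// Without `virtual` here, Printer and Scanner would each embed their own
// Device subobject, and Copier would ambiguously contain two of them.
struct Printer : virtual Device {};
struct Scanner : virtual Device {};

struct Copier : Printer, Scanner {};

int main() {
    Copier c;
    std::cout << c.name() << "\n";  // unambiguous only thanks to virtual inheritance
}
```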
Is OOP a failure? I would generally say "no". I also strongly dislike most of what gets passed off as OOP. I'd say that it endured a fate worse than failure: it evolved into a caricature of itself, a set of bad programming practices antithetical to what it was originally invented to do.