This document has moved!

It's now here, in The Programmer's Compendium. The content is the same as before, but being part of the compendium means that it's actively maintained.

dangoor commented Aug 23, 2016

I love the clear approach you took to breaking down the terms people use and providing examples to clarify how that breakdown works in practice.

One other trend that I think is hard to ignore: optional static typing. There's Python's typing module, for one. Then there's TypeScript, which RedMonk called one of the fastest-growing languages of recent years. It's a superset of JavaScript with switches to control how strict it is at compile time (noImplicitAny, strictNullChecks) and a type system designed to accommodate the common weird patterns that JavaScripters have developed over time. There's also Facebook's Flow type checker for JavaScript, and Facebook's Hack superset of PHP.

This supports your conclusion of a trend toward more powerful systems over time, since even the dynamic language people are adding static types, albeit optionally.

dangoor commented Aug 23, 2016

To specifically address the "Adding static types to dynamic languages" section in relation to my comment above: while some dynamic language features, such as eval, make retrofitting static types difficult to impossible, most code that I've seen in dynamic languages is not that dynamic.

Tools like TypeScript and Flow are designed specifically to allow evolution from a dynamically typed system to a statically typed one. For example, you start by declaring variables as any type, which gives you the same dynamic behavior you had before. Then you incrementally go back and narrow those hastily thrown in anys to the actual data types you expect to see. It's not an easy process, but it is at least an evolution rather than a rewrite.

djc commented Aug 23, 2016

I think the term for what @dangoor is talking about is "gradual typing", which I think also evokes the right imagery.


I've been changing this continuously, and there's now a short section on gradual typing. It replaced the section on "can we do type inference in dynamic languages?" but maintains the brief discussion of eval.


Flow is also able to make static inferences from dynamic type checks, so the following can be checked effectively. I don't know enough about gradual type systems to know how widespread this is, but I think Racket and Clojure's systems can also do this:

var a = eval(some_network_thing())
if (typeof a !== "number") a = 0
// at this point the type of a is known to be number.

Damn, I used to think strong typing was a thing and now you've convinced me otherwise =/

Anyway, this is great!

Possible typo? -> "for a programs to be valid"

Question: where would you put the type systems of some dynamic languages in that list at the end? They might not be able to prevent errors at compile time, but they can say things about values very similar to what type systems like Java's or C#'s can say about variables/expressions. And even if they are not enforced at compile time, they can be, as the type validation of IDEs like PyCharm shows, even without adding type hints.


Small typo

Languages with both eval and a type system have [to] abandon type safety whenever eval is used.

tel commented Aug 23, 2016

Line by line:

  • "A type is a collection of possible values" is not quite true. It arises because sometimes programs evaluate to a value and type safety says that a program's type cannot change under evaluation. But, truly, a type classifies the program itself. For instance, the type getChar : [Teletype] Char classifies the action of getting a character by its side effect.
  • "We can imagine any type we like" is true, but (a) doesn't suggest what it takes to imagine a type and (b) doesn't suggest the damage in imagining a type with bad properties. To clarify (a): if we could only imagine types by listing their inhabitants then we'd lose many interesting types. To clarify (b): consider a type Nope a b which can be introduced by both NopeLeft :: a -> Nope a b and NopeRight :: b -> Nope a b and eliminated by nope :: (a -> b -> r) -> Nope a b -> r. This is a satisfactory way to define a type, but it lets us write coerce.
  • "Statically typed languages constrain variables' types" is a minor quibble of mine: really, we usually give types to names, not variables. It's not always exactly clear what variables are.
  • "Dynamically typed languages tag values with types" might confuse someone into thinking that what static type systems call "types" are the same as what dynamic type systems do. You can do a little mixing with the "types are bags of values" idea, but it will fall apart.
  • "Static langauges evaluate types" is a bit funny—"evaluation" of types might be confusing. It's true in many more sophisticated type systems, but typically types have no evaluation semantics; they are merely checked or inferred (or both).
  • "Every expression in a statically typed language has a definite type that can be determined without executing the code." This is a very accurate picture! (Well, modulo uncheckable, ill-typed expressions.)
  • "Other statically typed languages can infer types automatically." Notably, some languages can infer all types, some can infer most types, some can infer types so long as you don't use an advanced feature.
  • "Dynamically typed languages don't require type declarations" doesn't seem entirely true. I don't have to ascribe functions, but when I build a value I have to deliberately tag it most of the time.
  • "Most dynamic languages will error at runtime when types are used in incorrect ways" It might be worth clarifying what "used in incorrect ways" means here.
  • Discussion of "strong" and "weak" is great!
  • "the type Any... : it's the type that can have any value" Or, perhaps more concretely, it's the type of a value for which we know nothing at all
  • "but they can never provide the absolute guarantees of a truly static language." Well, they could! It's much more a matter of the stubbornness or agility of the community along with the momentum of libraries which lack type information and possibly are ill-typed entirely.
  • "Go's type system isn't very powerful." Worse, as you're about to note, it has easy-to-access coercion!

Anyway, sorry for all the notes, but I really think these sorts of writeups are very important! Thanks for putting it together! 👍

tel commented Aug 23, 2016

@glenjamin The way Flow does that is by statically analyzing the shape of the check that will occur dynamically and inferring what information it reveals about the types. This means that it has a sophisticated sort of static inference system. It's something the Flow team has intentionally obfuscated by making it seem like the dynamic types and the static types are the same.

twotwotwo commented Aug 24, 2016 edited

I like the cleanup in "Diversity..." since the first time I saw this. A similar clarification might be to change "can't express the simple idea 'a list of ints'" to "can't express our simple MyList idea." With MyList it's very explicit that you mean a generic, statically checked, custom type. Plain "a list of ints" could be read as also covering a concrete type for listing (only) ints, a builtin, etc.

Could imagine switching examples from a list to something that lacks a special-cased builtin (sorted set? threadsafe map?) but dunno if worth it.

kevinburke commented Aug 24, 2016 edited

It was a little confusing reading lines like this:

Compare all of this power with Go, which can't express the simple idea "a list of ints"


For example, suppose that we write our own list type. It would be nice to tell the compiler that xs is a list of integers. We want it to warn us if we accidentally try to insert a string into xs, or if we accidentally try to put an element of xs into a variable of type string.

You can do something like this in Go:

package main

import "fmt"

type IntList []int

// You can put custom functions on types
func (il *IntList) First() int {
    return (*il)[0]
}

func main() {
    a := IntList{3, 4, 5}
    a = append(a, 7)
    a = append(a, "a string")
    var x string
    x = a[2]
}

This will trigger two errors at compile time:

$ go run main.go
# command-line-arguments
./main.go:15: cannot use "a string" (type string) as type int in append
./main.go:19: cannot use a[2] (type int) as type string in assignment

I am probably misunderstanding, but it seems like that code snippet / compiler behavior contradicts what you've written.

It makes more sense later on in the essay, in this context:

We can't define a MyList type that can be used as a "list of ints", "list of strings", etc. Instead, it will be a "list of unspecified values".

But higher up, it's missing that context.

apg commented Aug 24, 2016

@kevinburke I agree that the List type example could be more clear. I recognized the point to mean "you can't define a single implementation of List that can contain one of int, string, struct foo, float, in a type safe way."

This is definitely true, and a limitation of Go's type system. You can get around it with a lot of extra typing. One could simply build a wrapper around container/list with methods that take an int, and type assert an interface{} value coming out as the target type, int. This would be type safe, possible, and easy to build a code gen tool for (with the caveat that some of container/list's methods expose Elements, making it actually *not* safe... but I digress). However, some languages (Java, C#, Haskell, Scala, etc.) can do this automatically at compile time, which is the important point.


@kevinburke, I've just modified a bunch of the Go stuff. It was already changed in my local copy, but I forgot to update it here as well. @apg's comment reflects what I was getting at, and the text itself should reflect that better now as well.


very good job, I have enjoyed reading :) greetings.

alex commented Aug 24, 2016

@garybernhardt, this seems to be in conflict with some material from Codemash a few years back, can you clarify?


hi Alex


@garybernhardt when you say

A type is a collection of possible values

does that mean it is a mathematical set? I'm not sure if you use "collection" because it isn't actually a set, or simply because it is a more familiar term. (Apologies if that sounds like a criticism posed as a question, it's not meant to be. I honestly don't know the answer here.)

Also, is it possible for a value to be in more than one 'type'? E.g.: if we have two made-up types: LessThan5 = {1, 2, 3, 4} and LessThan3 = {1, 2}, are the 1 and 2 the same value? Or must they be different, because any given value can only be of one type? (I'm not sure if this question has any practical bearing on things, it is mostly an academic question I think.)

(Also I enjoyed the read, thanks!)


@bennoleslie They're sets, but I say collection because it's not a mathematical term.

Whether the 1 and 2 in those two types are the same value depends on the type system. That example corresponds most closely to sum types, and a lot of languages (the ML family, Haskell) require constructors on them. They have to be valid identifiers, so you might say data LessThan5 = One | Two | Three | Four. If you then try to say data LessThan3 = One | Two, you'll get compile errors: "Multiple declarations of ‘One’", etc. I should probably add a sentence clarifying that completely arbitrary types like HighFive aren't usually possible due to the constraints of actual type systems. (There used to be one but it disappeared due to other changes.)


@garybernhardt Thanks for the comment. I just tried that exact example in ghci, and in that case there is no error but the type of 'One' ends up being LessThan3.

Interestingly the type of 1 (in Haskell at least) is a very interesting Num a => a, which as far as I understand, means that the value inhabits any type with a typeclass of 'Num'. I don't understand what kind of magic is going on there, and if it would be possible to do the same magic with a user-defined type/typeclass.

The above leads to some interesting (to me) behaviour. Specifically I can do something like:

> let x = 10000000000000000000000
> x::Integer
> x::Int

I'm not sure if this is good, bad or otherwise, but I at least found it interesting, and would love to know if I can implement that in a user-defined type/typeclass.



For your strong/weak table & type-system section you might want to add Ada.

tel commented Aug 24, 2016

@bennoleslie From a theoretical point of view, it's possible for 2 to be part of both LessThan5 and LessThan3. Sometimes, practically, this is frowned upon, though. As an example, such a thing would wreak havoc on inference: what type does the expression 2 have now? There are two valid ones and it's unclear how to make the choice.

Typically, to have good inference it's important to have what's known as a "principal type property", which is to say that any program expression can be given a unique "most general type". If there were a single type to which both LessThan5 and LessThan3 somehow belonged, we'd be in the clear. One solution to this is to introduce subtyping, in which case LessThan5 is a supertype of LessThan3 and it's acceptable to say that 2 has the principal type LessThan3.

There's a lot more to say in this domain. At least academically.


@bennoleslie Types are not really sets in the mathematical sense. Type theory was specifically designed as a way to get around the paradoxes in set theory, for example Russell's Paradox.

eduardoleon commented Aug 24, 2016 edited

What types denote varies from one type system to another. For example, in the simply typed lambda calculus, types denote sets. In a language with general recursion, types as collections of expressions denote domains. In homotopy type theory, types denote higher groupoids.


This hit the front page of HackerNews


The Idris example seems to need further explanation:

In Idris, we can say "the add function takes two integers and returns an integer, but its first argument must be smaller than its second argument":

add : (x : Nat) -> (y : Nat) -> {auto smaller : LT x y} -> Nat

add x y = x + y

That's all well and good, if you know the values of x and y at compile time. Consider a program that reads x and y from STDIN. The user could provide an x that is equal to or larger than y (or could provide only one value, or values that are not numbers). I see no way to deal with that except to throw a runtime error. Is that what would happen?


Scala supports dependent typing, and is used quite extensively in many projects.


Excellent material.
Typo: "respectively: type classes, traits, traits, and protocols"


@adrianvoica That duplicate "traits" is correct: Scala and Rust both have them.

AndrasKovacs commented Aug 24, 2016 edited

@twblalock in any programming language, we usually verify statically unknown data with runtime checks, and do some error throwing/handling if our data is not OK. We do the same in Idris/Agda, except we have runtime checks that return proofs of properties. If the data is OK, the checks return proofs of OK-ness, and we pass the proofs along to functions that require them. If the data is not OK, the checks return proofs of not-OK-ness.

For example in Idris one would generally use a comparison function with a type like the following:

compare : (x : Nat) -> (y : Nat) -> LT x y \/ LT y x \/ (x = y)  

where the return type is a tagged union with three possible cases, each returning a proof. This function itself can be considered verified, since its return type specifies all relevant properties of the semantics (given that the definitions of LT x y and x = y also have the correct meaning). In contrast the non-verified comparison function has type:

data Ordering = LT | GT | EQ

compare : Nat -> Nat -> Ordering

It's non-verified because it has implementations with the wrong semantics:

compare x y = LT  -- ignores the arguments

In contrast, all implementations of the verified compare must actually compare the arguments. This can be generalized to many more properties; at the extreme end we have things like the verified CompCert C compiler which has all the non-concurrent ANSI C syntax and semantics specified in types.

Note that it's usually possible to specify erasable or "irrelevant" proofs that don't appear in generated code, therefore ensuring that our verified code has the same runtime performance as unverified code. In some cases it's necessary or more convenient to use proofs that appear at runtime though. With dependent typing it's common to use data structures that store "normal" data but also act as proofs because they have certain structural properties or invariants.


Union types (as in TypeScript) are a good solution (type foo = String | Int) to constrain different-root types at compile time.
Type class types (as in Delphi / Object Pascal) are wonderful (foo = class of TBar) to constrain to a precise point of a class hierarchy.


When using dynamic languages, even a simple type error like "a" + 1 can occur in production.

It's actually a feature that I sometimes rely on, so the nonsensical marker might be a bit too much. For generating IDs or class names (replace 1 with i), this is just handy, because they can't start with an integer.


Small typo here

Static langauges check


Any blanket statement of the form "static languages are better at x than dynamic languages" is almost certainly nonsense.

How about this one:

static languages are better at avoiding type errors than dynamic languages

Type errors are real and cause many, many preventable bugs and runtime crashes. Statically typed languages decimate that entire category of problem.

How about this other one:

static languages are better at communicating programmer intent than dynamic languages

Code is distinctly less ambiguous and more readable when the language requires the programmer to explicitly annotate type parameters, class member variables and return values – or deterministically infers them for him or her.

And this one:

static languages are better at providing auto-complete functionality in IDEs than dynamic languages

Static languages let IDEs infer more things about your code, allowing them to make more helpful code-completion suggestions.

So I'm struggling to find arguments for dynamic languages. "Rapid prototyping" and "easier programming" are favourites, but aren't those short-term productivity gains offset by the long-term productivity losses I mention above?

Summate commented Aug 24, 2016

A type is a collection of possible values.

I knew immediately that this was written from the perspective of a functional programmer. Object-oriented programming (or really, any sort of imperative programming) destroys this notion because types in OO represent an internal state and some operations upon that internal state. You might argue that the internal state is "a collection of possible values" and you can argue that operations are also "possible values" in and of themselves, but it's obtuse to do so in both cases. It's obtuse because you have values of a type rather than types expressing the values themselves. This only makes sense for things like enumerations, which do enumerate all possible values. Your definition is either imprecise or precise in a context that isn't as general as you're aiming at with the rest of your writing.

FWIW, I think it's very difficult to describe what a type is without a proper context and no single definition fits neatly into what we think of as types in the programming sense - or at least a single definition that isn't so general/imprecise as to be meaningless. I also think that this is generally a good piece of writing. As I was reading through it, the first sentence was my only problem with it.


So I'm struggling to find arguments for dynamic languages.

It produces less code. If you are developing at scale (50k+ lines), then what you say is true: lack of types can produce a class of bugs. However, in the case of JavaScript it's pretty good to not have types. Typical use cases are not really complex apps, but small functionality on a single page. How convenient is it to just write:

notifications.innerHTML = "Messages (" + messages.length + ")";

instead of casting everything into the right type, adding God knows how many lines?

Also, with component-based design (React, Polymer, ...) your code lives in a small bubble and communication is done by events, so you can have a more or less good idea of what your types are.

peterholak commented Aug 24, 2016 edited

Why is everyone always focusing on primitive types (i.e. number, string, etc.) when comparing statically and dynamically typed code?

I get that it makes for very short examples, but when trying to showcase the kinds of errors that static typing helps discover, the "a" + 1 example feels very contrived, and therefore easy to dismiss.

In my experience, most real-world errors that a type system would catch are more along the lines of

  • "Tried to use a non-existing property of an object, because I have some objects in my program that have that property and other similar objects that don't" (often resulting in the infamous undefined is not a function error, before Chrome got a better error message for that).

    This can happen easily when you pass an object to a function, which then passes it to another function, which passes it to another function etc. and the different functions can have different expectations of what objects they accept. It can be really difficult for humans to track this in their heads when the program gets to a non-trivial size - especially when you then need to actually change one of the already existing functions some time later.

  • "A function returned null and I didn't check for that". (if the type system differentiates between nullable and non-nullable types, or at least has a Maybe/Optional/etc. wrapper)


Sorry for the rant, it's a nice summary otherwise. You could also mention the difference between nominal (e.g. Java, C#, Swift, etc.) and structural (e.g. Go, TypeScript, etc.) typing.


In Haskell those types actually support more powerful functionality (like foldMap and traverse which are my main go to examples), which is often much more of a compelling reason to have a static type system than "Make sure this is an Int".

hugows commented Aug 24, 2016 edited

What I find amusing is: why is so much useful, popular software being written in C++ and Go, and almost none in Haskell?

If Haskell gets the best programmers using the best language, why aren't they dominating all industries?

I also love playing with all kinds of programming languages but don't see anyone delivering fun or useful stuff like the users of "lowish" languages: C/C++/Java/Go/Python...

Maybe it's about fundamental shortcomings of the languages (OCaml & Haskell, until Rust came along), or maybe everyone involved in those communities wants to hack on the language instead of developing their own products.

I hope Rust is the answer to this in a few years - coming from Mozilla with the explicit requirements of simplifying development of a big popular piece of software.

This discussion reminds me of the famous (anti?) lisp rant by Ron Garret:

ssebastianj commented Aug 24, 2016 edited

What I find amusing is: why is so much useful, popular software being written in C++ and Go, and almost none in Haskell?

@hugows Well, Pandoc is a quite known, useful and popular piece of software mainly written in Haskell.

yogthos commented Aug 24, 2016 edited

@garybernhardt this is a very handwavy statement:

Dynamic language advocates point out that dynamic languages seem to be easier to program in.

There is a tangible reason dynamic languages are easier to program in: a static type system requires you to formally prove any statement you write. Proving something can often be much more difficult than simply stating it; Fermat's Last Theorem is a great example of this.

Baking a proof into the solution leads to incidental complexity. Once you run into limits of what the type system can easily express then you end up having to write increasingly more convoluted code to satisfy it.

This results in code that’s harder to understand because it compounds the complexity of the problem being solved with the complexity of expressing it using the type system. Effectively, any statement we make in our program has to be accompanied by a proof of correctness to make it possible for the compiler to verify it. The requirement of proving that the code is self-consistent is often at odds with making it simple.

While the resulting code would be provably correct, it would be harder for the developer to reason about its intent. Therefore, it's difficult to say whether it's correct in any meaningful sense.

Ultimately, a human needs to be able to understand what the code is doing and why. The more complexity is layered on top of the original problem the more difficult it becomes to tell what purpose the code serves.


I immediately did a search for "duck" and was disappointed our web-footed friend was not present in your essay!

lpar commented Aug 24, 2016 edited

As well as duck typing, it'd be good to put in something about primitive types, reference types, composite types, union types.

A great section would be one with links to actual scientific studies on effectiveness of type systems, like the ones here:

In the benefits and drawbacks area, I'd mention that heterogeneous data structures are usually much easier in dynamically typed languages.

stereobooster commented Aug 24, 2016 edited

UPD: after discussion, I came to the conclusion that there is more than one significant difference between Go and Haskell: generics, "sum types", and non-nullable types by default (instead, you can construct "nullable" types when you need them).

Note on section: Concrete examples of type system power differences

There are interfaces in Go which can serve (in some way) as generics:

type geometry interface {
    area() float64
    perim() float64
}

func scale(g geometry) geometry {
    // ...
}

From my POV the outstanding difference between Go's and Haskell's type systems is that Haskell has "sum types":

data IntOrNothing = AnInteger Int
                    | Nothing

stereobooster commented Aug 24, 2016 edited

Note on section: Gradual typing

Can we add static types to a dynamic language? In some cases, we can; in others, it's difficult or impossible.

What is the case for impossible? Any citation on that?

I imagine a language with "sum type" support which can provide typecasting for any result, including eval:

data IntOrNothing = AnInteger Int
                    | Nothing

evalToIntOrNothing :: String -> IntOrNothing

I believe this is how Elm treats code written in JS. cc @rtfeldman


Note that we don't have to assume that a type system with types is necessarily type-theoretic.

Here's the Idris example in Monte, an untyped dynamic language comparable to E in terms of type systems:

def add(x :Int, y :(Int >= x)) :Int:
    return x + y

(You might remember this as the language with the "rubby syntax" from PyCon posters.)

$ monte repl
▲> def add(x :Int, y :(Int >= x)) :Int { return x + y }
Result: <add>
▲> add(1, 2)
Result: 3
▲> add(2, 1)
TRACE: ["In <[2, ∞) <IntGuard> region>.coerce(1, throw):"]
TRACE: ["In\"1 is not in <[2, ∞) <IntGuard> region>\"):"]
Error: 1 is not in <[2, ∞) <IntGuard> region>
tel commented Aug 24, 2016

👍 @eduardoleon

@twblalock In order to produce a value of LT a b for unknown a and b, you have to go through a process that might fail. Dependent types will ensure that that process is run, in all code paths, before attempting to call add.

@Summate Even for FP, thinking of types as "a set of values" is a very, very weak and generally low-quality intuition.

@MostAwesomeDude Where can I find more information on Monte? What do you mean by it not being "type theoretic"?

stereobooster commented Aug 24, 2016 edited

Note on section: Strong and weak typing

Sometimes, "strong" means "doesn't convert between data types implicitly". For example, JavaScript allows us to say "a" + 1, which we might call "weak typing". But almost all languages provide some level of implicit conversion, allowing automatic integer-to-float conversion like 1 + 1.1.

The important difference here is that there are different types of conversion. I made up these terms myself; I suppose there are official names for them (in category theory or similar):

Symmetric, defined for all values conversion. Example: Integer to Float. Any Integer can be converted to Float and there are no two Integer values which can be converted to the same Float value.

1 -> 1.0
1.0 -> 1

Asymmetric, defined for all values conversion. Example: Float to Integer. Any Float can be converted to Integer, but there is no one-to-one correspondence. For example we can take round-down as type conversion

1.0 -> 1
1.1 -> 1
1 -> 1.0

Not defined for all values conversion. Example: String to Integer. Some strings can be converted to Integer, but some can not.

StringToInt :: String -> IntOrNothing

"1" -> 1
"a" -> Nothing

So comparing Float-to-Integer and String-to-Integer conversion is not correct. It's the same way Zeno said that all sums of infinite sequences tend to infinity: there are different types of infinite sequences, divergent and convergent (and convergent has subtypes too).

                             implicit           explicit
  symmetric                  1 + 1.0 -> 2.0
  asymmetric                                    Math.floor(1.0)
  not defined for all values WAT                parseInt("") -> NaN

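The three kinds of conversion above can be observed directly in JavaScript/TypeScript:

```typescript
// Defined for all values, one-to-one: int-to-float is implicit.
console.log(1 + 1.5); // 2.5

// Defined for all values, but many-to-one: float-to-int is explicit,
// and two different floats can map to the same integer.
console.log(Math.floor(1.9)); // 1
console.log(Math.floor(1.1)); // 1

// Not defined for all values: string-to-int can fail.
console.log(parseInt("1", 10)); // 1
console.log(parseInt("a", 10)); // NaN
```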
tel commented Aug 24, 2016

@stereobooster These ideas come from basic abstract algebra, though they show up in category theory. Normally "function" implies that all values in its domain have values in its range. If that's not true, you might say a function is "partial". A function for which no two domain values have the same range value is called "injective". There's nothing particularly symmetric about it though.

In particular, abstract algebra and category theory have a lot to say about pairs of functions such that one is injective and the other "reverses" that. toFloat :: Integer -> Float and round :: Float -> Integer form this relation, e.g. The most important thing to note is that round is a one-sided inverse of toFloat, or, that for all integers x, round(toFloat(x)) = x, but it's not the case that for all Floats y, toFloat(round(y)) = y.

When you've got a situation like this, you call the pair a "section" and "retraction".

rafaeldff commented Aug 24, 2016 edited

Guy Steele provides an excellent discussion of the need for a name describing "run-time types", possible words to name the concept and how it relates to "static" types. They took the blog offline, but the internet archive came to the rescue:


@garybernhardt Scala and Rust traits have nothing in common other than the name. Rust traits are type classes with another name.

@stereobooster Go's interfaces can't even express something as simple as Haskell's Num type class, because you can't constrain the two summands to be implemented as the same struct.

@rafaeldff What you call “run-time types” is more appropriately called “classes”. By the way, this doesn't contradict what happens in Java, because reflection makes class information available at runtime. See here for the full story.

stereobooster commented Aug 24, 2016 edited

@eduardoleon can you be more specific on that? Provide an example of what exact operation is not possible.

Go's interfaces can't even express something as simple as Haskell's Num type class, because you can't constrain the two summands to be implemented as the same struct.

eduardoleon commented Aug 24, 2016 edited

@stereobooster You can define a function that adds two integers, another that adds two floats, another that adds two complex numbers, etc. But you can't give them all a common interface in Go.


@eduardoleon can you be more specific. I can define following construct. See details.

package main

import "fmt"

type num_type interface {
    real_part() float64
    complex_part() float64
}

type int_type struct {
    val int
}

type float_type struct {
    val float64
}

type complex_type struct {
    real    float64
    complex float64
}

func (i int_type) real_part() float64 {
    return float64(i.val)
}

func (i int_type) complex_part() float64 {
    return 0
}

func (i float_type) real_part() float64 {
    return i.val
}

func (i float_type) complex_part() float64 {
    return 0
}

func (i complex_type) real_part() float64 {
    return i.real
}

func (i complex_type) complex_part() float64 {
    return i.complex
}

func sum_num_types(a num_type, b num_type) num_type {
    return complex_type{real: a.real_part() + b.real_part(), complex: a.complex_part() + b.complex_part()}
}

func main() {
    fmt.Println(sum_num_types(int_type{1}, float_type{1}).real_part())
}

Let's get back on track. My point was that generics are not the main difference between Go and Haskell. In Go you can construct something visually similar to generics, but you definitely cannot get "sum types" in Go. Are you disputing that statement? Do you consider generics the "main" difference between those type systems? Or are you pointing out that I'm misinterpreting what generics are and how they compare to Go interfaces (I suppose they are not the same, but I don't have a formal proof)? If it's the latter, can you provide a specific example or a formal proof?


eduardoleon commented Aug 24, 2016 edited

What you're doing is something different. You created an interface that makes integers and floats usable as complex numbers. But that's not what I originally asked for. I asked for a common interface for the following operations:

  • Add two integers, giving an integer.
  • Add two floats, giving a float.
  • Add two complex numbers, giving a complex number.

Go simply can't do this.

I propose the following benchmark: create a single function that takes a list of numbers, all of the same type (say, all integers, all floats, or all complex numbers), and returns their sum, having the same type. It should use your interface.

In Haskell:

sum :: Num a => [a] -> a
sum = foldl' (+) 0
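For reference, the same definition (renamed sum' here to avoid clashing with the Prelude, and with the Data.List import it needs) really does return the element type of its input:

```haskell
import Data.List (foldl')

-- One polymorphic definition; the result type follows the input type.
sum' :: Num a => [a] -> a
sum' = foldl' (+) 0

main :: IO ()
main = do
  print (sum' [1, 2, 3 :: Int])     -- 6, an Int
  print (sum' [1.5, 2.5 :: Double]) -- 4.0, a Double
```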

In Go you can construct something visually similar to generics,

No, you can't. And I don't care much how code “looks”, I care about what code means.

but you definitely can not get "sum types" in Go. Are you arguing with that statement?

Not arguing with this. This much is correct.

Do you consider generics the "main" difference in those type systems?

Both sum types and parametric polymorphism. I consider them basic necessities, since they allow me to communicate things I need to communicate on a daily basis. OTOH, I can live without type classes, higher-kinded or higher-ranked types, effect segregation, etc.
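For concreteness, a minimal Haskell sketch of both features (my own toy example, not code from the thread):

```haskell
-- Sum type: a Shape is exactly one of these alternatives, and the
-- compiler can check that a match handles every case.
data Shape = Circle Double | Rect Double Double

area :: Shape -> Double
area (Circle r) = pi * r * r
area (Rect w h) = w * h

-- Parametric polymorphism: one definition for every element type.
firstOr :: a -> [a] -> a
firstOr def []    = def
firstOr _   (x:_) = x

main :: IO ()
main = do
  print (area (Rect 2 3))         -- 6.0
  print (firstOr 0 ([] :: [Int])) -- 0
  print (firstOr "?" ["a", "b"])  -- "a"
```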


all of the same type (say, all integers, all floats, or all complex numbers), and returns their sum, having the same type

I see you mean

Prelude> :t sum [1,2,3]
sum [1,2,3] :: Num b => b
Prelude> :t sum [1,2,3.0]
sum [1,2,3.0] :: Fractional b => b

That is indeed not possible in Go. Point taken. Thanks. The most you can do is the following (I actually don't know Go, so correct me if this is wrong):

func sum_num_types_arr(arr []num_type) num_type {
    var res num_type
    res = int_type{0}
    for i := 0; i < len(arr); i++ {
        res = sum_num_types(res, arr[i])
    }
    return res
}

func main() {
    val := []num_type{int_type{1}, float_type{1}}
    fmt.Println(sum_num_types_arr(val).real_part())
}

Languages with both eval and a type system have to abandon type safety whenever eval is used.

I'd amend this. In Haskell we have Data.Typeable which lets you write:

eval :: Typeable a => String -> Either Error a

Your value must be an instance of Typeable, which makes its type statically known, while also permitting the eval function to reject the evaluated value if its typeOf doesn't match that of a.
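The same runtime check is visible in the standard Data.Dynamic module, which is built on Typeable; a minimal sketch (not an eval, just the type-tag check itself):

```haskell
import Data.Dynamic (toDyn, fromDynamic)

-- toDyn pairs a value with its runtime type representation;
-- fromDynamic succeeds only when the requested type matches that tag.
main :: IO ()
main = do
  let d = toDyn (42 :: Int)
  print (fromDynamic d :: Maybe Int)    -- Just 42
  print (fromDynamic d :: Maybe String) -- Nothing
```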






no popular type system can actually express our HighFive type above

In case anybody is interested, you can express this in Flow.


@yogthos “Baking a proof into the solution leads to incidental complexity.”

The programmer has a responsibility to produce the proof, whether it's baked in the code or written on a separate piece of paper. I know which is easier to keep in sync, manage using version control, etc.

That being said, the usability of dependent types still leaves a lot to be desired.

yogthos commented Aug 25, 2016


You only have to prove that your code runs for the cases that actually exist in the application. Static typing requires you to provide a proof for all cases, including those that are unreachable.

STRML commented Aug 25, 2016

In case anybody is interested, you can express this in Flow.

I was surprised to find this wasn't the case in TS. I had always thought it was, since it does support string literal unions, just not mixed unions of literals.

Flow has some nice features and is improving rapidly. It makes for a very interesting study of what is possible when typing a highly dynamic language long after its creation. I've been pleasantly surprised at the very complex types we can express.


Another point, which I haven't thought about before: objects in most OOP languages are "dynamically" typed, i.e. they carry type information at runtime in order to dispatch method calls. Unless you count something like Rust as OOP; it depends on how you define OOP.

@garybernhardt the best thing about this article is that it doesn't try to be too strict with its definitions, so it doesn't mislead and generally holds true at a large scale. I like your writing style. Thanks

eduardoleon commented Aug 25, 2016 edited

@yogthos No idea how other people use static typing, but at least the way I use it, one of the main points is not to have unreachable cases in the first place. If I have unreachable control flow paths, it means that I have to refactor my code. Designing your program so that it expresses exactly what you mean and nothing else is very good mental exercise and discipline.


@yogthos (cc @eduardoleon)

I would take issue with that in the sense that "cases that actually exist in the application" is an assertion, which doesn't have a grounding in proof. You may write test cases to show that a finite number of things cannot happen, but you are not giving a general solution, only a specific one. At some point, your safety will rest on a human assertion "this cannot/will not happen". Humans have proven to be quite unreliable at that without tools - proofs being a very useful one.

eduardoleon commented Aug 25, 2016 edited

@kolektiv Agreed. That being said, I find it ugly to have to prove that certain paths are unreachable. If you're doing things right, unreachable paths simply don't exist in your code.

Another way to put it is that I like my programs to be proofs in minimal logic, not because I don't believe in the validity of the principle of explosion (of course I do!), but rather because I find it beautiful to prove things without relying on mathematical sledgehammers.



  • So someone else here mentioned pandoc as one piece of software that's out there.
  • Facebook use Haskell and have posted publicly about their use of it.
  • A bunch of banks and financial institutions use it.
  • git-annex is written in Haskell.

One thing that works somewhat against its visibility is that, by my estimation, a fair chunk of those using it professionally are building service backends and websites, which aren't particularly great for publicity. As opposed to, say, something like Docker, where the publicity for Go is really public and loud.


@eduardoleon - very much agree with you. One of the goals for me in a type system is to make undesirable or invalid states unrepresentable. The smallest algebra I can implicitly define which represents my desired program and nothing else is the right thing. Thanks for the mention of minimal logic, not something I've looked at in depth before. One for my reading list.
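A small Haskell illustration of that goal (a hand-rolled non-empty list, similar in spirit to the standard Data.List.NonEmpty):

```haskell
-- A list that cannot be empty: there is no nullary constructor,
-- so "empty list" is simply not a representable state.
data NonEmptyList a = Cons a [a]

-- Total function: no error branch for the empty case is needed,
-- because that case cannot be constructed in the first place.
safeHead :: NonEmptyList a -> a
safeHead (Cons x _) = x

main :: IO ()
main = print (safeHead (Cons 1 [2, 3 :: Int])) -- 1
```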

tel commented Aug 25, 2016

@stereobooster: You can write OO without runtime tagging, but you lose reflection. Haskell also has runtime tagging: that's essentially what that Typeable snippet above is doing.

While program expressions are best typed at their most general, at runtime values can be tagged with their most specific types. It's possible (but not obvious or easy) to have the runtime types be similar or even a subset of the compile time types which creates a strong linkage.


@tiborsaas None of those arguments are really about static vs dynamic types.

  1. [Short code] I have found no evidence that static types necessarily cause more code, especially when you have automatic type inference, so you don't actually have to write any type signatures yourself. In some cases you actually need less code, since the types carry some of the information needed.
    For example: QuickCheck (an automatic property tester) is a pain to use in Erlang, but it's fully automatic in Haskell, since it can figure out what kind of input to use from the types.
    Furthermore, you'll usually need more unit tests to compensate for the lack of types, and those are usually a lot less compact than the corresponding type annotations.
  2. [Javascript example] You can just as easily have automatic coercion (a.k.a. "weak types") in a statically typed language as in a dynamic one. That is a completely different question.
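To illustrate the inference point: in Haskell, for instance, signatures are optional, and GHC infers the most general type on its own (my toy example):

```haskell
-- No signature written; GHC infers addPair :: Num a => (a, a) -> a,
-- so the definition stays as short as its dynamic counterpart.
addPair (x, y) = x + y

main :: IO ()
main = print (addPair (2, 3 :: Int)) -- 5
```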

See also, the blog post about common fallacies in the dynamic vs static types debate, that was quoted in the article:


@yogthos No one is forcing you to prove anything just because you are using a static language. If anything, the compiler relieves you of the burden of proof, by doing the heavy lifting for you.

If you want to, you can encode many complex properties in the types and have the compiler verify them for you. This is obviously more difficult the more complex properties you try to enforce.

If you don't want to do that, just write code similar/identical to what you would have written in a dynamically typed language and get some simple proofs (compatible types, no missing methods, etc.) for free.


Thanks. Good clean article.


no popular type system can actually express our HighFive type above

data HighFive = Hi | Five

I call bullshit. Unless you have a very interesting definition of “popular” or “types”.

garybernhardt commented Aug 30, 2016 edited

@Profpatsch The constructor Five is not the integer 5. Likewise, the constructor Hi is not the string "hi".


@Profpatsch I would've just given him the type Bool.

@garybernhardt Any singleton is as good as any other. They're isomorphic.

garybernhardt commented Aug 31, 2016 edited

@eduardoleon Integers are a set of values containing 5 and others. Strings are a set of values containing "hi" and others. HighFive is the set of values {5, "hi"}, and each of those values also exists within one of the Integer and String sets. A value taken from HighFive is exactly equal to the same value taken from Integer or String, can be compared against it, etc.

This is not an idea that's directly expressible in Haskell; or, as far as I know, in any actual type system. People have told me that it's expressible in TypeScript, but I'm skeptical.

This example is clearly confusing. But, interestingly, the only people who have objected to it have shown that they know an ML-descended language. I suspect that people with no formalities to fall back on pass right over it without a problem. There are sets of values. Some sets intersect with other sets. It's not a big deal.

Of course, all of this is arguing the color that we should paint a yak at the bottom of a rabbit hole. This is the introductory paragraph of a 3,500-word article; a paragraph that contains no details about any formal mathematical system or software system.


@garybernhardt It should be add instead of f in this Haskell example:

f :: Num a => a -> a -> a
add x y = x + y

@EvgenyOrekhov Nice catch, thanks!


There is the Complexity Zoo. Is anybody aware of a similar resource for type systems?


Is there a list of languages by type-system power like the one at the end of this gist, but with more languages? There is the Comparison of programming languages by type system on Wikipedia, though it doesn't use the categorization from the gist and can't really be sorted.
