Martin Odersky
Higher Kinds Revisited

Generics and Subtyping

The combination of generics and subtyping poses some deep and difficult questions for language design. I have been struggling with these questions for a good part of the last two years. This is an attempt to summarize what I have learned. I first describe how generics are expressed in the current Scala compiler, nsc, then contrast this with how they are expressed in Scala's foundation DOT and the new Scala compiler dotty. This brings us to the question of how higher-kinded types should be encoded in DOT and Scala. I'll explore the design space and motivate a simple solution.

The Problem

If we combine generics and subtyping, we inevitably face the problem that we want to express a generic type where the type argument is an unknown type that can range over a set of possible types. The prototypical case is where the argument ranges over all subtypes or supertypes of some type bound, as in List[_ <: Fruit].

Such partially undetermined types come up when we want to express variance. We would like to express, say, that List[Apple] is a subtype of List[Fruit] since Apple is a subtype of Fruit. An equivalent way to express this is to say that the type List[Fruit] includes lists where the elements are of an arbitrary subtype of Fruit. By that reasoning, List[Apple] is a special case of List[Fruit]. We can also express this notion directly using the wildcard type List[_ <: Fruit]. Definition-site variance can be regarded as a user-friendly notation that expands into use-site variance expressions using such wildcards.
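
For concreteness, here is a minimal sketch in current Scala of both notions of variance. The class names Fruit and Apple and the helper firstFruit are hypothetical, used only for illustration.

class Fruit
class Apple extends Fruit

// Declaration-site variance: List is covariant in its element type,
// so List[Apple] is a subtype of List[Fruit].
val apples: List[Apple] = List(new Apple)
val fruit: List[Fruit]  = apples

// Use-site variance: the wildcard ranges over all subtypes of Fruit.
def firstFruit(xs: List[_ <: Fruit]): Fruit = xs.head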

The problem is how to model a wildcard type such as List[_ <: Fruit]. Igarashi and Viroli's original interpretation was as an existential type exists(T <: Fruit).List[T], which is written List[T] forSome { type T <: Fruit } in current Scala. However, existential types usually come with explicit open and pack constructs, which are absent in this setting. Moreover, the actual subtyping rules as implemented e.g. in Java are more general than what can be expressed with existential types alone. The precise model is not fully understood and the issues look complicated. Tate, Leung and Lerner have explored some possible explanations in their PLDI paper, but their treatment raises about as many questions as it answers.

A New Simple Theory of Types

DOT and dotty solve the problem by a radical reduction. Type parameters and type arguments are not primitive, but are seen as syntactic sugar for type members and type refinements. For instance, if List is declared like this:

trait List[type Elem] { ... }

then this would be expanded to a parameterless trait with a type member Elem:

trait List { type Elem; ... }

An application such as List[String] is expanded to List { type Elem = String }. If List were declared as covariant (using [type +Elem]), the type application is instead expanded to a refinement with an upper bound, List { type Elem <: String }. Analogously, applications of contravariant types lead to refinements with lower bounds. The example used a named parameter [type Elem], which is a new construct in dotty. If we had written instead

class List[Elem] { ... }

this would expand to

class List { type List$$Elem; private type Elem = List$$Elem; ... }

The type parameter of List would then get the compiler-generated name List$$Elem in order not to clash with other member definitions of the trait. The parameter is accessible under its original name Elem via the private type alias in the body of List.

The dotty scheme has two immediate benefits. First, we only need to explain one concept instead of two. Second, the interaction between the two concepts, which was so difficult before, now becomes trivial. Indeed, a type like List[_ <: Fruit] is simply List { type Elem <: Fruit }; that is, wildcard parameters translate directly to refinements with the same type bounds.

Beyond that there is a more important, albeit less tangible benefit: DOT and dotty have a simple view of what types are. In that view, a type name introduces a new type, like an uninterpreted symbol of a theory. Types are related by constraints, which introduce subtyping relations between types. Type equality can be expressed by mutual subtyping. By contrast, the traditional view of types sees type names as abbreviations for the types they stand for. There is usually a set of primitive types over which parameterized types are built. So generics in this view are generative; they generate instances of non-generic type structures. This makes sense intuitively, but is not enough. We also need a way to represent abstract types, and the best known way of doing this is that of SML, DOT, and dotty: use opaque type names with constraints. Mixing the generative paradigm with the type abstraction paradigm inevitably leads to more complexity than sticking with one paradigm, as dotty does.

Generics of a Higher Kind

So far we have seen that the DOT/dotty approach is better founded and simpler than the previous scheme used by nsc. But things get murkier once we try to include higher-kinded types. If we start with type applications, then a good way to model higher-kinded types is in terms of type lambdas. This is a somewhat more principled version of what is done in nsc. A type constructor such as List is expressed as a type lambda [X] -> List[X]. There can be several parameters like X and they can have variances attached. So the covariant list constructor would be expressed as [+X] -> List[X]. A higher-kinded type parameter such as [CC[+X] <: Iterable[X]] is represented as a type parameter with a type lambda as upper bound. So the previous expression is equivalent to [CC <: ([+X] -> Iterable[X])]. There is a standard beta reduction rule for applications of type lambdas: the type ([X] -> S)[T] simplifies to [X := T]S. The subtyping rules for higher-kinded types are fairly standard; I omit them here in the interest of brevity.
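
As a concrete illustration of such a bounded higher-kinded parameter in current (nsc) Scala, here is a small sketch; the method name firstTwo is hypothetical.

import scala.language.higherKinds

// CC ranges over unary covariant type constructors that are subtypes of Iterable.
def firstTwo[CC[+X] <: Iterable[X], A](xs: CC[A]): List[A] = xs.take(2).toList

firstTwo(Vector(1, 2, 3))   // CC is instantiated to Vector, A to Int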

A tricky aspect concerns type inference. One important part of type inference in Scala infers method type parameters from term arguments. Say you have the function

def f[CC[+X], T](x: CC[T]) = ...

and the application f(xs) where xs: List[Int]. Type inference will determine that CC should be [+X] -> List[X] and T should be Int. However, we could also have xs of a type that has several unrelated parameterized supertypes. In this case, there is no best instantiation of CC; the compiler will then pick one of the possible supertypes in a fairly arbitrary way.
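
For instance, a minimal sketch in current Scala of the straightforward case (the identity body is only there to make the snippet self-contained):

import scala.language.higherKinds

def f[CC[+X], T](x: CC[T]): CC[T] = x

val xs: List[Int] = List(1, 2, 3)
val ys = f(xs)   // inferred: CC = [+X] -> List[X], T = Int, so ys: List[Int]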

This modeling assumes type lambdas and type applications as primitive constructs. If these constructs are absent, as is the case in dotty, higher-kinded types need to be re-evaluated. Do we have to give up on them, or are there ways to emulate them? I have explored two schemes that do this in detail. One is based on type projections, the other on named parameters. In the following, I describe the two schemes and compare them.

Type Projection Encoding

The type projection approach was originally suggested by Adriaan Moors and has been worked out in detail in the current dotty compiler. It uses the following basic encodings.

  • A type lambda [X >: S <: U] -> T is encoded as the refined type

     Lambda$I { type $hk0 >: S <: U; type $Apply = [X := this.$hk0]T }
    

    This makes use of a family of synthetic base traits Lambda$..., one for each vector of variances of possible higher-kinded parameters. A suffix of I indicates a non-variant parameter, P (positive) a covariant parameter, and N (negative) a contravariant parameter. Each base trait defines parameters $hk_i with the given variances and an abstract type member $Apply. For instance, the base trait

     trait Lambda$NP[type -$hk0, type +$hk1] { type $Apply }
    

    is used for binary type lambdas where the first type parameter is contravariant and the second is covariant.

  • An application of a non-variant higher kinded type C to an argument T is encoded as

     C { type $hk0 = T } # $Apply
    

    Covariant and contravariant hk-type applications lead to projections with upper and lower bounds instead.

  • Beta reduction is supported by dereferencing the type projection. Indeed,

     Lambda$I { type $hk0; type $Apply = T } { type $hk0 = A } # $Apply
    

    reduces to

     [this.$hk0 := A]T
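
Putting these encodings together, here is a sketch of how a concrete constructor would be represented; it simply instantiates the rules above. The non-variant lambda [X] -> List[X] is encoded as

Lambda$I { type $hk0; type $Apply = List[this.$hk0] }

and its application to Int as

Lambda$I { type $hk0; type $Apply = List[this.$hk0] } { type $hk0 = Int } # $Apply

which beta-reduces to List[Int] by the dereferencing rule.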
    

Ideally, an encoding of higher-kinded types into type members and refinements would be sufficiently expressive; an encoded term should be type-checkable in the base calculus without special provisions that account for the fact that types were originally higher-kinded. Unfortunately, there are a number of areas where higher-kinded types do shine through. To make, e.g. the standard Scala collections compile, all of the following tweaks are necessary:

  1. $Apply refinements are covariant. If T <: U then

    S { type $Apply = T }  <:  S { type $Apply = U }
    

    This subtyping relationship does not hold for ordinary type refinements. It would hold for upper bound refinements, of course. But we cannot model $Apply refinements as upper bound refinements because that would lose beta reduction.

  2. Encodings of type lambdas distribute over intersections and unions. For instance,

    Lambda$I { ... type $Apply = T } & Lambda$P { ... type $Apply = U }
    

needs to be normalized to

    Lambda$I { ... type $Apply = T & U }

  3. A subtyping test of the encoded versions of

    A[T1, ..., Tn] <: C[U1, ..., Un]
    

where A is an inferable higher-kinded type parameter and C is a class or a trait leads to the instantiation A = C.

  4. A subtyping test of the encoded versions of

    T <: A[U1, ..., Un]
    

where A is an inferable higher-kinded type parameter is handled by looking in linearization order for some base type of T that is of the form C[T1, ..., Tn] where C is a class or trait whose parameter variances match those of A. Then infer A = C for the first such class or trait C that is found.

  5. A subtype test

    [X1, ..., Xn] -> T  <:  C
    

of an encoded type lambda and a class constructor C is rewritten to T <: C[X1, ..., Xn]. Analogously, the subtype test

    C <: [X1, ..., Xn] -> T

is rewritten to C[X1, ..., Xn] <: T.

  6. In certain situations, a subtype test of encoded versions of

     [X] -> T  <:  [+X] -> U
    

might still succeed, even though normally non-variant lambdas are not subtypes of covariant lambdas. This is specifically the case if the two types appear as arguments of another type application that only expects a non-variant higher-kinded type.

Another shortcoming, depending on one's point of view, is that the encoding uses type projections, which are an extremely powerful but also controversial feature of Scala. Type projections can encode the SK combinator calculus and therefore make Scala's type system Turing complete. This means there are reasonable programs for which the Scala compiler will not terminate or will crash with a stack overflow. Also, type projections have a somewhat uncertain status in Scala's type theory. The most reasonable explanation I know regards the type projection T#A as equivalent to the existential type

 x.A forSome { val x: T }

But this raises the question of what the correct modeling of such existential types in DOT is. For instance, what are the correct subtyping rules? Questions like these have not yet been settled conclusively; several obvious attempts have turned out to be unsound.

To summarize, a representation of higher-kinded types in terms of refinements and type projections is feasible, but has shortcomings. First, the abstraction leaks internally, that is, encodings of higher-kinded types still have to be treated specially, notably in the subtype checker. Second, the abstraction relies on a very big hammer -- type projections -- which render the type system Turing complete. In the next section, I discuss a more modest and simpler encoding.

Named Parameter Encoding

One alternative to the projection encoding is not to have higher-kinded types at all, but to rely only on named arguments.

There is one concession for brevity. We allow named type arguments, analogous to named value arguments. A named type argument

C[A = T]

instantiates the type field A of C with T. It is required that A is an abstract type member of C. The named application is expanded as usual: to C { type A = T } if A is invariant, to C { type A <: T } if A is covariant and to C { type A >: T } if A is contravariant.
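
For illustration, here is a small sketch in the proposed syntax (not valid in current Scala), assuming a trait with a named covariant parameter:

trait List[type +Elem] { ... }

List[Elem = String]    // expands to: List { type Elem <: String }, since Elem is covariant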

There can be several named type arguments. Positional and named arguments cannot be mixed in a type application.

Besides conciseness the main benefit of the named argument syntax over type refinements is that the mechanics of translating declaration-site to use-site variance need not be repeated by the programmer.

To get an idea of the changes needed in expressing higher-kinded functionality, here is a snippet from the collection strawman.

Current version:

trait Iterable[+A] extends CanIterate[A] with FromIterator[Iterable]

trait FromIterator[+C[X] <: Iterable[X]] {
  def fromIterator[B](it: Iterator[B]): C[B]
}

New proposed version:

trait Iterable[type +Elem] extends CanIterate[Elem] with FromIterator[Iterable]

trait FromIterator[+C <: Iterable] {
  def fromIterator[B](it: Iterator[B]): C[Elem = B]
}

There are three differences between these versions.

  1. In the new version, Iterable has a named parameter, Elem. This is arguably good. Type parameters, like everything else, should be given meaningful names, in particular if they are pervasive.

  2. In the new version, C is a simple parameter (C <: Iterable), where before it was itself parameterized and passed the parameter to the bound (C[X] <: Iterable[X]). This is also good because it makes parameter declarations much more readable than before.

  3. In the new version, the higher-kinded application in the result type of fromIterator has to be explicitly named: C[Elem = B] instead of just C[B]. This is more verbose, and might become mildly annoying when there are many such applications to write. On the other hand, it is at least as clear as the original version.
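
As a usage sketch in the proposed syntax (the object name ListCompanion is hypothetical, and the snippet assumes List extends Iterable in the strawman), an implementation of FromIterator could look like this:

object ListCompanion extends FromIterator[List] {
  def fromIterator[B](it: Iterator[B]): List[Elem = B] = it.toList
}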

One important difference to the previous higher-kinded type scheme is that now the names of higher-kinded type parameters matter. This means it is no longer possible to have a type parameter that can range over all constructors of given arity and variances, no matter what the parameter names are. Higher-kinded abstractions need cooperation from the libraries they range over. For instance, to make higher kinded abstractions over collections pervasive, there needs to be a named parameter like Elem in the root class, which represents the element type of each collection. Enforcing such naming discipline can be useful in its own right because it helps understandability. Also, see the section on type adapters for how this restriction can be mitigated.

Partial Type Applications

Partial type applications fall out naturally from named arguments.

For instance, consider the trait

trait Map[Key, +Value] extends Iterable[(Key, Value)] { ... }

The types

Map[Key = Int]
Map[Value = String]

can both be seen as unary type constructors, the first in its remaining Value parameter and the second in its remaining Key parameter. Generally, a refined type keeps all type parameters of its parent that are not mentioned in a refinement.
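
A small sketch of this (proposed syntax; the alias names are hypothetical):

type IntKeyed = Map[Key = Int]      // still a unary constructor: Value remains a parameter

IntKeyed[Value = String]            // the same type as Map[Key = Int, Value = String]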

Type Inference

It looks like type inference for the named parameter encoding can be simpler and more robust than type inference for traditional higher-kinded types. Here's the previous example again, encoded in the new scheme:

def f[CC <: Iterable, T](x: CC[Elem = T]) = ...

Given an xs: List[Int] and an application of f(xs), the inferencer would have to solve the constraint:

List[Elem = Int] <: CC[Elem = T]

for variables CC and T. By the existing rules we decompose this into the constraints

List <: CC
Int <: T

Unless there are more constraints, this then leads to the desired instantiations CC = List, T = Int. What's more, since all constraints are over named types, there is never a question which of several possible parameterized base types to pick. So the ambiguities of old-style higher-kinded type inference never arise. One also does not need to synthesize type lambdas to get to a particular type arity, so higher-order unification is not required.

Type Aliases

Aliased type constructors are a special case of higher-kinded types. A typical example is code like this in the scala package object:

type List[+X] = scala.collection.immutable.List[X]

This syntax would no longer be allowed. However, I think it is reasonable to let an alias (but not an abstract type!) inherit the type parameters from its right-hand side. So instead of the above, one should be able to write

type List = scala.collection.immutable.List

and still use application syntax like List[String] as a shorthand for List[Elem = String]. One can also obtain more interesting parameterized aliases by combining with partial type applications or more general refinements:

type IntMap = Map[Key = Int]
type Transform = Map { type Value = this.Key }

There are aliases which are not encodable. Some representative examples are

type Rep[T] = T
type RMap[X, Y] = Map[Y, X]
type LL[X] = List[List[X]]

A possible extension of the system is to allow fully parameterized aliases (but not parameterized abstract types) and to expand them when a type application is encountered. So fully parameterized aliases always have to be applied immediately. This is a minor extension. It does allow us to express some natural abbreviations, but at the price of a minor inconsistency since parameterized aliases are not full-blown types but rather templates that generate types.

In summary, the named parameter encoding is very simple and well founded in theory. It adds no new concepts to the conceptual treatment of generics in DOT and dotty. It can also be implemented straightforwardly, and tends to lead to types that are simpler and less cluttered than traditional higher-kinded types. However, it is less expressive than the type projection encoding, and in particular requires all higher-kinded type applications to refer to a particular parameter name.

Type Adapters

The restriction that all higher-kinded type parameters must be named fields makes some universal abstractions such as Functor or Monad less useful. Let's say we have classes

 class Option[type +Value] { ... }
 class List[type +Elem] { ... }

We would like to write a trait Monad that applies to both Option and List, even though their parameters have different names. To start, here's a Monad that works well for Lists.

 type Constructor = { type Elem }
 trait Monad[M <: Constructor] {
   def unit[A]: M[Elem = A]
   def bind[A, B](x: M[Elem = A], f: A => M[Elem = B]): M[Elem = B]
 }

 object ListMonad extends Monad[List] {
   def unit[A] = Nil
   def bind[A, B](x: List[A], f: A => List[B]) = x.flatMap(f)
 }
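
A hypothetical use of this instance, still in the proposed syntax and therefore only a sketch:

 val ys = ListMonad.bind(List(1, 2, 3), (x: Int) => List(x, x * 10))
 // ys == List(1, 10, 2, 20, 3, 30)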

We now describe a possible extension of the named parameter encoding that allows the Monad trait to be also instantiated with Option.

The idea is to use a type adapter which in effect renames the parameter of Option from Value to Elem:

 type OptionM = Option & { type Elem; type Value = Elem }

OptionM now has the abstract type member Elem, so we can write an implementation of Monad for it:

 object OptionMonad extends Monad[OptionM] {
   def unit[A]: OptionM[Elem = A] = None
   def bind[A, B](x: OptionM[Elem = A], f: A => OptionM[Elem = B]): OptionM[Elem = B] = x.flatMap(f)
 }

However, with current Scala typing rules, this implementation would not typecheck. Indeed, the type of unit's right-hand side None is Option[A], which expands to:

 Option & { type Value = A }

On the other hand, the declared type of unit expands to:

 Option & { type Elem = A; type Value = Elem }

So the declared type has a type member, Elem, which the actual type lacks. To make this work, we'd need to change Scala's typing rules so that they allow the following conversion:

If an expression E has a fully defined type T that does not define a member A and the expected type U has a type alias type A = S as member, then E also has type T & { type A = S }.

That is, type aliases can be automatically added to types of values if the expected type requires them. This looks quite compatible with the way things are currently modeled in DOT. Nevertheless, it would be good to verify the soundness of this rule formally.

Other open questions are whether we want to make type adapters also work for standard parameters (i.e. not introduced with type) and whether we want to have special syntax for type adapters. One possible candidate is to re-use the syntax [type <parameter>] in a type. E.g.

Option[type Elem]

would be equivalent to the above-mentioned adapter:

Option & { type Elem; type Value = Elem }

This could also be used to make classes without named parameters (e.g. those coming from Java) available as instances of higher-kinded types. Finally, it could be used to change the original variance of a type. E.g.

Option[type Value]

would be a non-variant Option type. It remains to be worked out whether it is worth adding this or another syntax to achieve these objectives.

What to Pick?

There are three candidates to choose from.

  1. The status quo, i.e. the previous nsc scheme which has type application and type membership as separate concepts and uses type lambdas for higher-kinded types.
  2. The type projection encoding.
  3. The named parameter encoding.

The first two alternatives are quite similar. They cover about the same user-level language, and differ mostly in the areas where precise explanations become difficult. Alternative (1) introduces complexity explaining wildcard arguments, whereas alternative (2) introduces complexity because the original abstractions shine through in the encoded programs. So the choice between (1) and (2) essentially boils down to a matter of implementation. By contrast, the third alternative is very different from the first two. It implies a different, somewhat more restricted user-level language but has much simpler foundations.

Which of the three alternatives is preferable depends on what we want Scala to be. If we want maximal expressiveness and power even at the price of complicated or unclear foundations, we will choose one of the first two. Another reason to pick the first alternative is if we see Scala as a language which is primarily about categorical abstractions and type-level programming. In that case, encodings would only be a distraction, and the status quo works well enough.

I don't think either of these goals should be our main priority. Scala is not primarily about maximal expressive power, and neither is it primarily about type-level programming and categorical abstractions. The ability to do these things is a testament to the flexibility of the language that allows one to venture into these areas, among others.

For me the primary aim of Scala is still the same as when I created it: To explore a particular fusion of ideas coming from functional programming on the one hand, and object and module systems on the other hand, and to make this fusion simple, powerful and practical. DOT is a very important step forward in this development because it shows how compact a formal foundation for this fusion can be. I believe it is important we capitalize on these foundations, keeping the overall goal of simplicity at the top of our agenda. By this reasoning, alternative (3) is the one that makes most sense to me. A welcome side benefit is that the named parameter encoding also seems to lead to code that is often simpler to understand than traditional higher-kinded types.

A Migration Strategy

The discussion in the previous section did not really pay attention to the fact that Scala is already a well-established language. True, higher-kinded types need to be enabled explicitly with a scala.language import and the user is warned in the doc comments that they might change in the future. But there is a lot of code out there that relies on them. So, any changes to the way higher-kinded types are expressed need to come with a migration strategy.

Fortunately, we have techniques in place to help migration. We can support both parameter encoding schemes initially. The named parameter syntax is very convenient whether we have traditional higher-kinded types or not, so adding it looks like a net win (and it also gives us partial type applications, a much demanded feature).

At least initially, old-style higher-kinded types could also be supported through the type projection encoding in dotty. This encoding is already fully implemented, so having it help migration is a good use of the considerable investment that went into it. It's an open and secondary question whether such support would be enabled by an import from language.higherKinds, as before, or would be lumped into the language.Scala2 import that turns on Scala 2.x language features in dotty.

Acknowledgments

Thanks to Nada Amin, Guillaume Martres, Dmitry Petrashko, Sandro Stucki, and Vlad Ureche for useful discussions around this topic. Guillaume Martres provided very valuable help in the implementation of the type projection encoding.

odersky (Author) commented Jan 7, 2016

@sstucki There's a little bit of explanation in the section "A New Simple Theory of Types" but maybe not enough.

For example, why is the Elem parameter of Iterable prefixed by type while the covariant parameter C in FromIterator is not?

Because we want to refer to Elem by its name later. As the section explains, the type parameter name of C is mangled.

Why do we have to write C[Elem = B] but not CanIterate[Elem = Elem]

Because CanIterate is a class and C is not. Named parameters are optional for classes. For hk-types they are the only way to parameterize.

sstucki commented Jan 7, 2016

@odersky, I see.

The first point makes sense to me: the analogy between type and val for class parameters is nice (I had missed it the first time around). Still, for pedagogy's sake I'd maybe present a "fully named" example first before introducing unnamed parameters. I'm also not quite sure if and how this translates to DOT, since we don't have a notion of class at that level.

As for HK types and their instantiation, I'm not sure I like the named parameter syntax. It seems to give the wrong intuition: since it's the same syntax as for classes (and for Scala 2 HK-type application), I would expect instantiation to work without the named parameter. Since these new HK types are really just names with constraints (and not typelevel-functions), why not just use the refinement syntax? That makes it clear that we're not instantiating a class, but refining a constraint.

I'm also still a bit suspicious about the treatment of variance annotations (@smarter has already pointed out some issues). Are variance annotations of named parameters just used during elaboration or are they part of the type system? Is there some special treatment for variance annotations of class parameters?

Ichoran commented Jan 8, 2016

@odersky

I used to think this, too. But what we observe more and more is this sequence of steps: (1) people use Turing completeness (or other levers) to create cool type-level computations and advertise them widely. (2) Others use the cool constructs in their applications. (3) They then notice really slow compile times and blame the Scala compiler.

This is the point where I'd really focus in and ask: is it slow because something inherently computationally intensive is being asked for, or because the type-programming language is low-level and not optimized for those computations? My read on a lot of this stuff is that it is not that the type computations are too powerful (admitting cool stuff at great effort), but that the type computations are too weak (forcing people to do things like use a Church numeral encoding in types which for large values can slow computations down by a factor of a billion).

From what I've seen, most of the cool type computation stuff doesn't actually try anything too unreasonable; it's just very, very hard to tell the type system what is desired. And right now for e.g. type inference it's pretty much all-or-none; either the compiler gets it perfectly, or you have to manually insert gargantuan type signatures. If there were better ways to help the compiler along, the search space could be much smaller and thus the resolution of the typing much faster.

(4) Their coworkers say Enough! and the whole project migrates to another language. So, somewhat predictable compile times are still an important good, even if we know how to handle the worst case of non-termination. Can we obtain predictable compile times by compiler limits and settings? I have severe doubts about that. Say you have a setting about maximal type checking time per source line. You write a library and test it OK. Your client uses it in a larger project where type checking has to work harder, and compiler aborts. After a quick consultation of the available options, the setting is set to infinity. That's what will happen.

You're probably right. Almost all the time the compiler has to act reasonably by default.

I still think the solution to "people use it but it's too slow" ought to be "make it fast" not "make it impossible".

Of course it's often a tradeoff since a simpler system will typically be faster in those cases it can cover.

odersky (Author) commented Jan 8, 2016

Of course it's often a tradeoff since a simpler system will typically be faster in those cases it can cover.

Exactly. The typical typechecker optimizations are by special-casing the simple case. If we have to prioritize the hard case instead, the simple case risks becoming slower.

There's the other concern whether Scala wants to be a language that is good at supporting this kind of stuff. I have my doubts about that, too. That's a complex issue which needs a separate discussion, though.

SethTisue commented

But what we observe more and more is this sequence of steps: ... Their coworkers say Enough! and the whole project migrates to another language

If that's the real reason, I think the blog post should say so. (As I read the draft, I had precisely Rex's reaction to the material about Turing completeness. I think it will be a common reaction.)

Related: any blog post you write (as opposed to a scala-internals post, say) will be read widely by an audience with a highly varying level of sophistication about language design, type theory, etc. What you have written so far is clear, but still requires a pretty high level of sophistication to follow. So I would suggest beginning with a couple paragraphs where you summarize what is to follow, and its implications, for those unable (or just unwilling to take the time) to grasp the details. If you don't simply state the takeaways yourself, other people will state them for you and you might not like their version.

Ichoran commented Jan 9, 2016

@odersky - Right now, putting lots of information into types is the primary way to make the compiler help me write correct programs. I also have my doubts about whether types are where to put all one's bets in that regard, but as long as that remains the situation, I expect that the type system will be under considerable pressure from expert users.

I would also like to point out that if the complex system is a generalization of the simple one, you can usually recover most of the original speed by having an optimized path for the simple case. (Examples: JIT compilation of non-final method calls, anything with a Bloom filter, etc. etc..) It does make the underlying implementation less elegant.

Finally, the type adapters as specified seem to be making the programmer do extra work for very little benefit. I don't care whether Option happened to name its type parameter Value or Elem or Wrapped or whatever, and I don't care whether a collection happened to name its parameter Value or Values or Elem or Elems or whatever. And because this name is so significant it's necessary to have a number of special cases devoted to manually managing the case where the type parameters happen to have a different name. (What about the reciprocal case where they happen to have the same name but you need to keep them distinct?)

I would think that in most cases an automatic adaptor would make everyone's life a lot easier. E.g. def to[Coll](cbf: CanBuildFrom[Nothing, A, Coll[_ = A]]).
