Implement a better typechecker for Dhall. Hopefully still straightforward to implement and with good errors.
I think it will essentially be a reification of the typechecker into data structures such that data still only flows one way, but acts like it flows both ways via substitution. Very tricky.
If I get it implemented, then I can look at performance.
Why is the impredicative universe usually the lowest? Maybe because there is no highest.
- “I was just giving an example to answer your question of whether it is consistent to have a (univalent) predicative universe inside an impredicative (univalent) one. If you now want a bigger one, we can look for that, too : – ) Anyway, I think it’s better to have the impredicative universe at the “top”, rather than at the bottom of a hierarchy of predicative universes.” https://homotopytypetheory.org/2018/11/26/impredicative-encodings-part-3/#comment-126560
- Notes on Universes in Type Theory by Zhaohui Luo
- On Universes in Type Theory by Erik Palmgren
- Is Impredicativity Implicitly Implicit? by Stefan Monnier and Nathaniel Bos
- The axiom of unique choice together with classical logic (e.g. excluded middle) is inconsistent in the variant of the Calculus of Inductive Constructions where `Set` is impredicative.
- Universe Hierarchies by Conor McBride
- Dependently typed lambda calculus with a lifting operator by Damien Rouhling
- Algebraic Type Theory and Universe Hierarchies by Jon Sterling
- A New Paradox in Type Theory by Thierry Coquand
- A Simplification of Girard's Paradox by Antonius Hurkens
- Paradoxes of impredicativity by András Kovács
- System F impredicativity and set-theoretic impredicativity are identical, by Cody Roux
- Impredicativity of Prop as sets by Andrej Bauer
- Looking for languages that combine algebraic effects with parallel execution
- Program Adverbs and Tlön Embeddings by Yao Li, Stephanie Weirich (via chessai on Element)
What things would be awkward to do in a hypothetical "strict" Haskell variant, that are now not awkward to do?
Call-by-value Termination in the Untyped lambda-calculus by Neil D. Jones, Nina Bohr
The 'logic' of the contradiction with `Type : Type` is that you can create a term of any type, including the 'empty' type, by 'cheating': never returning. This is essentially the only way to cheat, because of the subject reduction and subformula properties.
Deeper Shallow Embeddings by Jacob Prinz, G. A. Kavvos, Leonidas Lampropoulos
(7 May 2022 via lyxia on Element)
Implementing a Modal Dependent Type Theory by Daniel Gratzer, Jonathan Sterling, Lars Birkedal
(4 May 2022 via boarders on Element)
Implementing Anti–Unification Modulo Equational Theory by Jochen Burghardt and Birgit Heinz
(18 Feb 2022 via totbwf on Element)
Mini Class on Normalization in Type Theory by Aaron Stump
(06 Jan 2022 via boarders on Element)
Dershowitz-Manna ordering on multisets to show type preservation
Proof-relevant unification: Dependent pattern matching with only the axioms of your type theory by Jesper Cockx and Dominique Devriese
Verifying the Unification Algorithm in LCF by Lawrence C. Paulson
A categorical unification algorithm by D. E. Rydeheard and R. M. Burstall
Unification and Anti-Unification in the Calculus of Constructions by Frank Pfenning
Algorithms for Equality and Unification in the Presence of Notational Definitions by Frank Pfenning and Carsten Schürmann
Generalized Universe Hierarchies and First-Class Universe Levels by András Kovács
Type inference for record concatenation and multiple inheritance by Mitchell Wand
(18 Dec 2021 via Gabriella Gonzalez)
Row types.
A Unifying Theory of Dependent Types: the Schematic Approach by Zhaohui Luo
Une Théorie des Constructions Inductives by Benjamin Werner
(22 Nov 2021 via cody on cstheory SE)
No need to bother the developers! Impredicative Set is quite nasty, and in particular, it conflicts with some rather natural choice principles and the so-called "informative excluded middle" `forall P : Type, {P} + {~P}`, as this + impredicative set implies proof irrelevance (and `nat` is not proof irrelevant). See e.g. coq.inria.fr/library/Coq.Logic.ClassicalUniqueChoice.html and coq.inria.fr/library/Coq.Logic.Berardi.html. In fact it is not even clear that Coq + impredicative set is consistent without even any additional axioms! Werner's PhD work seems to suggest that it is, but for the full theory with universes this is a very open problem.
Diagnosing Type Errors with Class by Danfeng Zhang, Andrew C. Myers, Dimitrios Vytiniotis, Simon Peyton Jones
(18 Nov 2021 via @boarders on matrix)
this thing with tracking all unification states is nice, iirc there are some papers where you track the unification graph (e.g. https://www.microsoft.com/en-us/research/wp-content/uploads/2016/08/tc_diagnost.pdf) with the purpose of trying to give better errors
Finding Counterexamples from Parsing Conflicts by Chinawat Isradisaikul and Andrew C. Myers
Type checking through unification by Francesco Mazzoli and Andreas Abel
We propose an algorithm that encodes a type-checking problem entirely in the form of unification constraints, reducing the complexity of the type-checking code by taking advantage of higher order unification, which is already part of the implementation of many dependently typed languages.
Types Are Not Sets by James H. Morris, Jr.
A Typed Foundation for Directional Logic Programming by Uday S. Reddy
Information Aware Type Systems and Telescopic Constraint Trees (Slides) by Philippa Cowderoy
(05 Mar 2021 on Types Zulip)
Using information effects to encode unification semantics
Calculating Correct Compilers by Patrick Bahr and Graham Hutton
Encode type indices as parameters via Fording: “You can get any index you want so long as it's <blank>”.
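The Fording trick can be sketched with Haskell GADTs (the `Vec`/`VecF` names here are made up for illustration): instead of letting each constructor target a specific index, every constructor targets an arbitrary index `m` and carries an equality constraint pinning it down.

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

data Nat = Z | S Nat

-- Indexed form: each constructor picks its own return index.
data Vec (n :: Nat) a where
  VNil  :: Vec 'Z a
  VCons :: a -> Vec n a -> Vec ('S n) a

-- Forded form: every constructor returns an arbitrary index m, plus an
-- equality constraint: "any index you want, so long as it's 'S n".
data VecF (m :: Nat) a where
  FNil  :: (m ~ 'Z) => VecF m a
  FCons :: (m ~ 'S n) => a -> VecF n a -> VecF m a

-- The forded type supports the same programs as the indexed one.
vlen :: VecF m a -> Int
vlen FNil         = 0
vlen (FCons _ xs) = 1 + vlen xs
```

The payoff is that all constructors share one uniform return type, which is what makes the encoding convenient for elaborators that only know how to unify against a parameter.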
There Is Such A Thing As A Declarative Language, and It’s The World’s Best DSL and Referential Transparency by Robert Harper
(12 Aug 2021 via Robert Harper on Slack.)
The semantics of variables, as opposed to assignables, as opposed to references, and math is the world’s best DSL – why else does everyone do semantics in math?
⩧ ≅ ≡: Leibniz Equality is Isomorphic to Martin-Löf Identity, Parametrically by Andreas Abel, Jesper Cockx, Dominique Devriese, Amin Timany, Philip Wadler
Incremental lambda calculus by Paolo G. Giarrusso
Bindlib by Christophe Raffalli and Rodolphe Lepigre
(23 Aug 2021, from Rodolphe Lepigre on Types Zulip)
Rodolphe Lepigre:
Bindlib is basically based on HOAS, but in such a way that the closures representing binders never contain "unexpected" computations. More precisely, performing substitution just amounts to a function call that either just returns (i.e., it is just a constant function whose body is a value) if the variable does not occur, or does an array access (where the array and the index are already values) otherwise. So this is obviously pretty efficient, at least in OCaml, where closures have a small overhead. However, to maintain this invariant, you have to pay a cost at term construction: terms are built under a type constructor `'a box`, and lifting a term of type `'a` to `'a box` is linear in the size of the term, plus a log factor in the number of free variables (if I remember correctly). For most applications this is fine, but if you need to work under binders, that means that you first deconstruct them (by substituting a fresh variable), do the work on the body, and reconstruct the binder by lifting to the `box` type, binding, and `unbox`-ing. So one has to be careful with that, but it is generally fine for type-checking. If you want to do complicated term rewriting under binders, then you have to be really careful (and pull some tricks) to still be efficient. Another thing I should mention is that performing substitution may mess up variable names (i.e., if you print terms naively, you'll have some "visual capture"), so terms need to be traversed (by `box`-ing and then `unbox`-ing) to make sure names do not clash. That is another trick that is used for efficiency: you do not pay a cost at computation time, only at printing time (which you do far less often).
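The substitution-as-function-call idea can be sketched with plain HOAS in Haskell. This is only the general idea, not Bindlib itself: it does not enforce Bindlib's invariant that binder closures hide no unexpected computation, and all names are made up.

```haskell
-- Binders are closures, so substitution is literally function application.
data Term = Var String | Lam (Term -> Term) | App Term Term

-- Weak-head normalisation: a beta step is just a function call.
whnf :: Term -> Term
whnf (App f a) = case whnf f of
  Lam body -> whnf (body a)
  f'       -> App f' a
whnf t = t

-- Naive printer with a fresh-name counter. As the quote notes, the cost of
-- keeping names from clashing is paid here, at printing time.
pretty :: Int -> Term -> String
pretty _ (Var x)   = x
pretty i (Lam f)   = let x = "x" ++ show i
                     in "\\" ++ x ++ ". " ++ pretty (i + 1) (f (Var x))
pretty i (App f a) = "(" ++ pretty i f ++ " " ++ pretty i a ++ ")"
```

For example, `pretty 0 (whnf (App (Lam (\x -> x)) (Var "y")))` reduces the redex by calling the closure and prints `y`.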
Everybodyʼs Got To Be Somewhere by Conor McBride
(23 Aug 2021, via Bob Atkey on Types Zulip)
Introduces a co-de-Bruijn representation of variables, to avoid traversals while substituting &c. Havenʼt read thoroughly yet …
A Simpler Lambda Calculus by Barry Jay
(23 Aug 2021, via Nate Faubion on PureScript Discord)
Not immediately relevant, but definitely interesting. Alternate lambda calculus, with indices, substitutions, and terms all in the same syntax kind. Simpler meta-theory around substitutions/variables, normal forms exist, and has a translation to combinators.
(23 Aug 2021)
Mechanism for memoizing free variables. I guess they track which subterms have which free variable. Not quite sure of the tradeoffs with just having each term remember its free variables.
The Essence of Principal Typings by J.B. Wells
(06 Feb 2021, from mdl on Slack)
Two more general definitions of principal typings.
The intrinsic topology of Martin-Löf universes by Martı́n Hötzel Escardó and Thomas Streicher
(06 Feb 2021)
Sequences in `U` converge to anything! An extensional nontrivial map `U -> 2` gives WLPO (Weak Limited Principle of Omniscience, i.e. `dec_eq` for the non-decreasing sequences of bools).
One possible reaction to Rice’s Theorem for the universe is that this is to be expected: after all, there are no elimination rules for the universe, as it is not inductively defined. But our arguments show that, even if there were, Rice’s Theorem for the universe would still hold, which justifies the lack of elimination rules, at least if one wishes to retain the compatibility of type theory with extensionality.
***A tutorial implementation of dynamic pattern unification by Adam Gundry and Conor McBride
(23 Jan 2021, by mention of Joseph Eremondi on Slack.)
To be read …
***What are principal typings and what are they good for? by Trevor Jim
(Discovered 05 Feb 2021, by mention of Robert Harper via email.)
Principal typings generate their context! Which is exactly what Iʼve been looking for! This is great.
So the idea is that we can generate a principal typing for a given expression that details the minimum requirements on its context and gives the most general type possible for the expression.
This makes things so much more modular, and now we can typecheck before we resolve imports. Also good for incremental computation, better errors, etc.
The other problem solved by this view of principal typing is the variance issue: I want typechecking to be covariant, so I can map over the resultant types, but having to provide a context would ruin that (contravariance) and require me to stick with one type representation.
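A toy version of the idea, for simple types (my own sketch, not Jim's formulation, and the names are invented): inference returns not just a type but also the generated context of requirements on free variables, and application merges the two sub-contexts by unifying their shared entries.

```haskell
import qualified Data.Map.Strict as M

data Ty = TV Int | Ty :-> Ty deriving (Eq, Show)
infixr 5 :->

data Tm = V String | Tm :@ Tm | Lam String Tm

type Subst = M.Map Int Ty

applyS :: Subst -> Ty -> Ty
applyS s (TV n)    = M.findWithDefault (TV n) n s
applyS s (a :-> b) = applyS s a :-> applyS s b

composeS :: Subst -> Subst -> Subst
composeS s2 s1 = M.map (applyS s2) s1 `M.union` s2

unify :: Ty -> Ty -> Subst
unify (TV n) t | t == TV n = M.empty
               | otherwise = M.singleton n t  -- occurs check omitted in sketch
unify t (TV n) = unify (TV n) t
unify (a :-> b) (c :-> d) =
  let s1 = unify a c in composeS (unify (applyS s1 b) (applyS s1 d)) s1

-- infer returns (generated context of requirements, inferred type); the Int
-- is a fresh-variable counter threaded by hand to stay dependency-free.
infer :: Tm -> Int -> ((M.Map String Ty, Ty), Int)
infer (V x) n = ((M.singleton x (TV n), TV n), n + 1)
infer (Lam x b) n =
  let ((ctx, t), n') = infer b n
      (arg, n'') = case M.lookup x ctx of
        Just t' -> (t', n')
        Nothing -> (TV n', n' + 1)  -- x unused: its type stays free
  in ((M.delete x ctx, arg :-> t), n'')
infer (f :@ e) n =
  let ((c1, t1), n')  = infer f n
      ((c2, t2), n'') = infer e n'
      r  = TV n''
      s0 = unify t1 (t2 :-> r)
      -- requirements that both subterms impose on the same variable must agree:
      s  = M.foldr (\(a, b) acc -> composeS (unify (applyS acc a) (applyS acc b)) acc)
                   s0 (M.intersectionWith (,) c1 c2)
  in ((M.map (applyS s) (M.union c1 c2), applyS s r), n'' + 1)
```

Inferring the open term `f x` yields the context `{f : a -> b, x : a}` and type `b`: the context is an output, not an input, which is exactly the covariance wanted above.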
*Type Inference for Records in a Natural Extension of ML by Didier Rémy
(04 Feb 2021 via Robert Harper.)
The introductory paper on row types. Still need to finish reading.
- Does this catch contradictions early enough, or will it let us run with an inconsistent type? (This might be a problem for partial evaluation of partially-typed terms.)
The simple essence of algebraic subtyping: principal type inference with subtyping made easy (functional pearl) by Lionel Parreaux
(25 Jan 2021 via Asad Saeeduddin.)
Figure out when constraints are inconsistent and would result in non-terminating normalization.
Ideally all the operations would still be syntactic but fast through smart memoïzation, in which not only are results cached, but laws are applied, such as typechecking commuting with context changes or whatnot, and then additionally things could be incrementally computed.
I need to understand these better.
Also will be important.
Does Dhall have extensionality for functions `Bool -> Bool`? Maybe if we desugar `&&` and `||` and `==` to `if`s?
The produced context will be variables in pretty much a random order, so it will work up to permutation, provided the dependencies are met. Of course, we will have to detect if dependencies are circular!
But we would get something like 4 forms for the context of an `fmap` operation:
forall (a : Type) -> forall (b : Type) -> (a -> b) -> List a -> List b
forall (b : Type) -> forall (a : Type) -> (a -> b) -> List a -> List b
forall (a : Type) -> forall (b : Type) -> List a -> (a -> b) -> List b
forall (b : Type) -> forall (a : Type) -> List a -> (a -> b) -> List b
Assuming that we have no typecase.
(Note: this implies parametricity, more or less, since a non-parametric `forall (a : Type) -> Bool` would give a typecase violation using a nontrivial `Bool -> Type`, and many other parametricity violations can be reduced to that form, except in the case of incompletely specified types like `Text` and `Double` in Dhall, which have no substantial maps out of them.)
It also means that all functions `Type -> Type` are either judgmentally-constant or judgmentally-injective (and this can be generalized for each parameter, of course).
(Proof: Iʼm pretty sure this just comes down to having a normalization function via which judgmental equality is defined.)
So this means that we need much less effort to figure out functors, but thereʼs still some work to be done.
Obviously if we know `f x` for arbitrary `x` then `f = \x -> f x` (eta rule … which Dhall actually doesnʼt have yet).
If `f Int = Product Int Int`, then `f` has four possibilities:
- `f x = Product Int Int` (constant)
- `f x = Product x Int`
- `f x = Product Int x`
- `f x = Product x x`
We could represent this with union types, sorta: `f x = Product (x \/ Int) (x \/ Int)`, but it's not really like that: we canʼt arbitrarily pick `x` or `Int` to fill it, we just donʼt know what fills it yet? Idk.
It reads more like a specification than an actual thing, but maybe thatʼs just my bias against union types.
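The candidates for `f` can be enumerated mechanically: every occurrence of `Int` in the right-hand side independently either stays put or becomes the bound variable, so `k` occurrences give `2^k` candidates. A throwaway sketch (`TVarX` standing in for the bound `x`):

```haskell
data Ty = TInt | TVarX | TProd Ty Ty deriving (Eq, Show)

-- All ways to abstract occurrences of TInt as the bound variable x.
-- Each occurrence independently stays TInt or becomes TVarX.
abstractions :: Ty -> [Ty]
abstractions TInt        = [TVarX, TInt]
abstractions TVarX       = [TVarX]
abstractions (TProd a b) = TProd <$> abstractions a <*> abstractions b
```

On `Product Int Int` this produces exactly the four bodies listed above.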
Uhm we can play this game with higher orders:
If `g List = List (List Int)` then `g` could be:
- `g f = List (List Int)` (constant)
- `g f = List (f Int)`
- `g f = f (List Int)`
- `g f = f (f Int)`
And again, knowing where `g Optional` goes (for example) determines it.
`(a y x x = x) => a = \xyz. y \/ z`
`(a x x y = x) => a = \xyz. x \/ y`
Both constraints together leave only `a = \xyz. y`.
I think the pattern fragment algorithm solved this by showing & recording that `a` was constant on certain arguments, but my way means we can reuse more parts of unification, instead of diving into the constraint-solving algorithms just yet.
Dhall currently has `a1, a2 : A : k => (if (b : Bool) then a1 else a2) : A : k` for any universe `k`, which means that you can construct non-constant functions `T : Bool -> Type`, but you canʼt construct any terms of type `forall (b : Bool) -> T b`!
So a naïve solution would be to have `a1 : A1 : k1, a2 : A2 : k2 => (if (b : Bool) then a1 else a2) : (if (b : Bool) then A1 else A2)` (and propagate the `if` into the type).
Since we have the rule `if b then a else a ==> a`, this would work out.
Except that it wouldnʼt kind check!
If `A1` and `A2` belong to different universes (which seems like something we would want to be able to do, in analogy with say records), we would get that `(if (b : Bool) then A1 else A2) : (if (b : Bool) then k1 else k2)`, and that would not normalize to a single universe `k` like we want.
(Recall the general property of type inference: if `a : A` then `a : A : k` for some universe `k`, and so on up until you run out of universes.)
So, either we need to force the things to live in the same universe, which sounds boring, or we need some way of typing so they unify? I guess it would be weird to rely on unification during normalization, hmph.
Suppose `op T 0 = T`, `op T (n+1) = T -> op T n`.
When can we solve for `n`?
Note that `T` will be distinct from `T -> _`, i.e. we need a non-surjective function. (If it is constant, then we just get information for `isZero n`.)
Non-surjective functions are headed by a type constructor. So non-constant, non-identity.
`(op Natural n = Natural) => n = 0`
`(op Natural n = Natural -> Natural) => n = 1`
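A sketch of the inversion (the `solveN` name is made up): since the arrow constructor is non-surjective, `T` never equals `T -> _`, so we can peel arrows off the target while the domain matches `T` and count how many we removed.

```haskell
data Ty = TNat | Ty :-> Ty deriving (Eq, Show)
infixr 5 :->

-- op t n = t -> t -> ... -> t, with n arrows.
op :: Ty -> Int -> Ty
op t 0 = t
op t n = t :-> op t (n - 1)

-- Invert op: find n with op t n == target. Sound only because t is
-- distinct from t :-> _ (here t = TNat is not headed by an arrow).
solveN :: Ty -> Ty -> Maybe Int
solveN t target
  | target == t               = Just 0
  | a :-> b <- target, a == t = succ <$> solveN t b
  | otherwise                 = Nothing
```

So `solveN TNat TNat` gives `Just 0` and `solveN TNat (TNat :-> TNat)` gives `Just 1`, matching the two constraints above.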
Is it possible to have cumulative universes? Or do we need explicit universe lifting operations?
One big obstacle is the variance. But maybe unification variables will help? We just instantiate every constructor with a universe variable and solve it to fit sometime?
We would still have the issue that you canʼt instantiate the “same” thing to different universes in different places, sigh …
Need to keep track of the different kinds of variables, and how they work.
I think we will end up with a theorem that all unconstrained unification variables can be universally quantified.
Perhaps we will have to pass into the algorithm whether a particular variable is constant (i.e. `let`-bound) or arbitrary (`lambda`- or `forall`-bound).
It makes sense, it just feels weird …
Or maybe we just substitute in before typechecking … (Preferably memoïzed)
Or … maybe we just record all the constraints whole-hog, and solve them once the variable is introduced (lol that sounds weird).
What I would call a closed term: a term with no free variables. Ground confluence (aka ground Church-Rosser) means that rewrites on ground terms end up being confluent, though not for open terms.
The leftmost term after all the lambda introductions – i.e. the function thing that triggers β-reduction if it is a lambda itself.
Used in the terms “head normal form” and “weak head normal form”.
Reference: Linear β-reduction.
`H ::= □ | λx.H | Ht` (i.e. `□` in `λx̄.□t̄`)
“evaluates redexes as soon as they are created” (Gundry & McBride)
“metavariables must be applied to spines of distinct bound variables” (Gundry & McBride)
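That side condition is easy to state as a check (a sketch with invented names; `Bound` marks a bound-variable argument, anything else falls outside the fragment):

```haskell
import Data.List (nub)

-- An argument in a metavariable application M a1 ... an, abstracted to
-- just what the pattern condition inspects.
data Arg = Bound Int | Other deriving (Eq, Show)

-- Miller's pattern condition: every argument is a bound variable, and no
-- bound variable occurs twice. Under this restriction, flex-rigid
-- equations have most general solutions.
isPattern :: [Arg] -> Bool
isPattern args = all bound args && vars == nub vars
  where
    bound (Bound _) = True
    bound Other     = False
    vars = [i | Bound i <- args]
```

The `a y x x` and `a x x y` equations above fail exactly this check: a repeated bound variable means several projections solve the equation, so no most general solution exists.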
- Andreas Abel
- Thorsten Altenkirch
- Bob Atkey
- Nils Anders Danielsson
- Jana Dunfield
- Conal Elliott
- Martín Escardó
- Robert Harper
- András Kovács
- Conor McBride
- Jens Palsberg
- Frank Pfenning
- Jon Sterling
- J.B. Wells
- Benjamin Werner
- typechecking
- Unsound optimization with `error`
- Derived Foldable and Traversable instances become extremely inefficient due to eta-expansion
- GHC fails to quantify over a quantifiable equality constraint
- HasField with QuantifiedConstraints
- type-checking of `coerce` depends on which newtype constructors are imported
- stimes for Endo
- DmdAnal: `lubBoxity = unboxedWins` means that `topDmd` isn't actually top of the lattice
- Alternative "field-like" superclass syntax
- Skolems can escape in arrow notation, resulting in Core Lint errors
- can newtype be extended to permit GADT-like declarations (no)
- Types with different forall placements don't unify with QL ImpredicativeTypes
- How to require functional dependencies in kind signature?
- Universe Polymorphism: Minimization
- univMinim.ml
- Runaway universe
- universe inconsistency (minimization related)
- Anomaly. Universe undefined. Program and Mono-/Polymorphic interaction.
- The graph of constraints over which universe polymorphism quantifies may contain simplifiable equations over universe variables
- [setoid_rewrite] failure for inductive relation with type parameter
- Universe with Prop <= i not minimized to i := Set, resulting in costly, unconstrained universes
- Stack overflow with universe polymorphism
- invalid deduction of an equality between two universe levels
- https://fasterthanli.me/articles/lies-we-tell-ourselves-to-keep-using-golang
I remember fondly the time an audience member asked the Go team "why did you choose to ignore any research about type systems since the 1970s"?
- https://lobste.rs/s/deni6q/lies_we_tell_ourselves_keep_using_golang#c_e8gvb9
I think most people don’t realize how radical the “Bell Labs gang”’s conceptualization of simplicity is. It is rooted in a deep suspicion of any kind of abstraction and a defeatist attitude when it comes to building them: [...]
- https://news.ycombinator.com/item?id=26546477
- https://news.ycombinator.com/item?id=16830153
- Term Rewriting and All That
- Computational Logic - Essays in Honor of Alan Robinson
- https://uniformal.github.io/doc/tutorials/prototyping/
- http://www.call-with-current-continuation.org/fleng/fleng.html
- Compiling Functional Languages by Johan Nordlander at Chalmers, links to various optimization techniques