Basic setup for formalizing elaboration

First attempts

Elaboration is, broadly speaking, taking user-written syntax and converting it to "core" syntax by filling in missing information and at the same time validating that the input is well-formed.

I haven't been fully happy with formalizations that I previously used, so I make a new attempt here. I focus on a minimal dependently typed case.

  • We assume that the core syntax is a minimal MLTT, given as the initial model. In other words, only well-formed syntactic objects exist, and the syntax is quotiented by conversion, so that convertible objects cannot be distinguished.
  • The surface syntax is standard. The only elaboration feature is that we can write _ anywhere, to be filled in by unification (a sketch of such a presyntax follows below).
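For concreteness, the surface syntax could be an ordinary inductive type along these lines (a minimal sketch; the constructor names and the Name type for surface identifiers are illustrative assumptions, not a definitive definition):

data PreTm : Set where
  var  : Name → PreTm            -- named variable
  lam  : Name → PreTm → PreTm    -- λ x. t
  app  : PreTm → PreTm → PreTm   -- t u
  hole : PreTm                   -- written _ in the surface language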

Importantly, I don't want to deal with setoids or old-school typing and conversion relations; I only want to have quotiented algebraic core syntax.

At the very least, we want a partial function from presyntax to core syntax:

check : PreTm → (A : Ty ∙) → Maybe (Tm ∙ A)

Checking maps a preterm and an expected core type to a Maybe of a core term. The ∙ denotes the empty context. For simplicity, I ignore open contexts and the resolution of names into formal variables.

This checking can be implemented as a constant Nothing function, and the implementation can also return "wrong" output that doesn't match the input, e.g. by swapping Boolean true and false everywhere. So we need more specification.
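For instance, this degenerate implementation is accepted by the above type (a sketch):

check t A = nothing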

How to specify an elaboration relation between presyntax and core syntax?

Sometimes in the literature, an erasure map is used for this. It transforms core syntax back to presyntax, and we might say that elaboration is correct if erasure is its inverse in some sense. In our case though, erasure is not quite workable, because we can only define functions out of the core syntax that preserve conversion, i.e. convertible core terms must be mapped to equal preterms. Hence, erasure could only be defined by first computing normal forms and then erasing those, or perhaps as a function returning propositionally truncated preterms. Neither option is appropriate for us, since we want to relate the output to the syntax exactly as the user wrote it.

Instead, let's define an inductive relation Rel, which says that core term u is related to preterm t when u has the same structure as t, except that holes are related to any core term (i.e. holes get filled). Again, I ignore the question of resolving named variables to formal variables. Next attempt:

check : (t : PreTm)(A : Ty ∙) → Dec ((u : Tm ∙ A) × Rel u t)

Dec A means A + ¬A, and (a : A) × B denotes a sigma type.
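For intuition, a fragment of Rel might look as follows (a schematic sketch: contexts, types and implicit arguments are suppressed, and the constructor names are hypothetical):

rel-lam  : Rel u t → Rel (lam u) (lam t)
rel-app  : Rel u t → Rel u' t' → Rel (app u u') (app t t')
rel-hole : Rel u hole    -- a hole is related to any core term u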

This looks better, but it doesn't capture a lot of behavior that we're used to. In particular, Agda strives to only succeed when there's a unique elaboration output, and to throw "yellow" errors when there are multiple possibilities. So:

check : (t : PreTm)(A : Ty ∙) → Dec (isContr ((u : Tm ∙ A) × Rel u t))

isContr A means unique inhabitation of A.
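Spelled out, a standard definition is:

isContr A = (a : A) × ((a' : A) → a = a')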

This is still not perfect. Problem: the uniqueness of the output is specified up to conversion, which is a bit too liberal and doesn't match how existing compilers and proof assistants work. Take this surface program:

let x : Bool = _ in true

The possible fillers of the hole, up to conversion, are true and false, because all closed Bool terms are convertible to these. In Agda, we get an error here. But our "check" function succeeds and returns some term that's convertible to true!

The hole-filling ambiguity is invisible to conversion because the definition of x is not used in the overall output: let x : Bool = true in true and let x : Bool = false in true are both convertible to true.

This specification is also extremely inefficient to implement. We would need to check if an unsolved metavariable can be eliminated from a term by conversion, e.g. by removing dead let-bindings or reducing constant function applications. In the general case, we would need to do a full beta-normalization of the output.

So this is not really what we want. Intuitively, elaboration should strictly preserve everything that the user wrote. We need an additional "strict" notion of equality.

Moving to two-level type theory

A common way to have a "weak" and a "strict" equality is to work in a two-level type theory. Basic setup:

  • The inner level has the core syntax as a quotient inductive type.
  • The meta level has the surface syntax as an ordinary inductive type.

Now,

  • Inner equality of core terms means conversion. We write _=₀_ for inner equality.
  • Outer equality of core terms should be strict equality, or at least imply it. We write _=₁_ for outer equality.

The setup is a bit subtle. More precisely, outer equality of inner things means definitional equality, in the empty context. If the inner theory is an observational type theory (OTT), we get canonicity for quotient inductive-inductive types, so that every closed inhabitant is a canonical constructor. This means that definitional equality coincides with an intuitive "strict" equality.

We also have QIITs in cubical type theories, but there we get many extra closed inhabitants (formal compositions and coercions): many definitionally distinct terms could still be viewed as having the same shape. So both OTT and CTT might be workable, but OTT looks nicer here.

We want to express that

  • The elaboration output has strictly the same shape as the input.
  • Holes have unique fillers in the output, up to conversion.

The elaboration relation Rel is now necessarily an outer relation, but otherwise it's the same as before. Lambdas are related to lambdas, applications to applications, and so on, and holes are related to any core term. Since Rel is outer, its indexing by core terms is up to strict equality.

Finally:

check : (t : PreTm)(A : Ty ∙) → Dec (isContr (((u : Tm ∙ A) × Rel u t) / (λ(u,_)(u',_). u =₀ u')))

Some notes:

  • isContr is necessarily defined using outer equality.
  • ((u : Tm ∙ A) × Rel u t) / (λ(u,_)(u',_). u =₀ u') is a quotient of the sigma type by inner equality of the first projections.

Why is this the correct thing?

First, (u : Tm ∙ A) × Rel u t expresses that u strictly has the same shape as t and also fills the holes. isContr ((u : Tm ∙ A) × Rel u t) is not quite sensible, because an outer equality in (u : Tm ∙ A) × Rel u t is only provable if the core terms are strictly the same, and we want to identify the convertible hole fillers.

An outer equality in ((u : Tm ∙ A) × Rel u t) / (λ(u,_)(u',_). u =₀ u') allows exactly the hole fillers to vary up to conversion, because Rel forces the core terms to have strictly the same shape everywhere else.

Alternatively, we can write the previous type in an equivalent form without the quotient: contractibility of the quotient unfolds to the existence of a related u, together with uniqueness of u, up to inner equality, among all related terms:

check : (t : PreTm)(A : Ty ∙) → Dec ((u : Tm ∙ A) × Rel u t × (∀ u' → Rel u' t → u =₀ u'))
@andrejbauer

The notion of conservativity from Anja Petković Komel's PhD thesis seems somewhat relevant, as well as the elaboration theorem in chapter 10. The work tries to cover more general ground, but I think it should specialize to the present situation. Note that two-level type theory is still just a special case of finitary type theories: the syntactic equality arises as judgemental equality if we don't posit any additional equality rules, apart from the general ones (Lemma 10.3.6 in Anja's thesis).

@marklemay

Since this setup requires holes to be filled uniquely up to definitional equality, does that mean holes can't really be filled under binders? For example, if you don't have eta in your equality, \ (x : Bool) -> _ of type Bool -> Unit won't have a unique hole filling.

But even with eta, uniqueness seems like it will have issues with the undecidability/intractability of general equality. Or perhaps I'm misunderstanding.

@AndrasKovacs (Author) commented Aug 27, 2024

\(x : Bool) -> _ of type Bool -> Unit for sure only has a unique filler if there is eta for Unit; I don't see an issue with that. This is how Agda works in this case: if you declare Unit with eta, elaboration succeeds, otherwise it fails.
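For reference, a small illustration in Agda (records have eta by default, data types do not):

open import Agda.Builtin.Bool

record Unit : Set where
  constructor tt

f : Bool → Unit
f x = _    -- solved: by eta, every value of Unit is definitionally tt

data Unit' : Set where
  tt' : Unit'

g : Bool → Unit'
g x = _    -- unsolved metavariable: Agda reports a yellow error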

It is very easy to get an undecidable specification: for example, as soon as we need any kind of unification for hole filling, we run into the fact that full higher-order unification is undecidable.

The current formalization is more like a starting point. For something like bidirectional checking it looks feasible to use it as it is, but for more elaboration features we'd need to make the spec more complicated and somehow restrict ourselves to decidable unification.

@marklemay commented Aug 27, 2024

I was more concerned with cases like case x of true -> tt | false -> tt =?= tt.

There will be a lot of things that make terms non-unique under the usual notion of definitional equality, like add 0 x =?= add x 0.

However, as a starting point this seems reasonable.
