Example of Andromeda ML-programming
(* In Andromeda everything the user writes is "meta-level programming". *)
(* ML-level natural numbers *)
mltype rec mlnat =
| zero
| succ of mlnat
end
(* We can use the meta-level numbers to define a program which computes n-fold iterations
of f. Here we give an explicit type of iterate because the ML-type inference cannot tell
whether f is an ML-function or an object-level term. *)
let rec iterate f x n : judgment → judgment → mlnat → judgment =
match n with
| zero ⇒ x
| succ ?n ⇒ f (iterate f x n)
end
constant A : Type
constant a : A
constant g : A → A
do iterate g a (succ (succ (succ zero)))
(* output: ⊢ g (g (g a)) : A *)
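The meta-level iteration above can be transcribed into an ordinary functional language. The following Python sketch (a hypothetical illustration, not Andromeda code) models mlnat with tagged tuples and lets plain strings stand in for judgments, so f builds syntax rather than computing a number:

```python
# Hypothetical Python transcription of the meta-level iterate above.
# "zero"/"succ" play the role of mlnat; strings stand in for judgments.
def zero():
    return ("zero",)

def succ(n):
    return ("succ", n)

def iterate(f, x, n):
    # Structural recursion on the meta-level natural n, as in the match above.
    if n[0] == "zero":
        return x
    return f(iterate(f, x, n[1]))

# apply_g plays the role of the object-level constant g : A → A.
apply_g = lambda t: f"g ({t})"
print(iterate(apply_g, "a", succ(succ(succ(zero())))))  # prints: g (g (g (a)))
```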
(* Defining iteration of functions at object-level is a different business. We first
axiomatize the natural numbers. *)
constant nat : Type
constant O : nat
constant S : nat → nat
constant nat_rect : Π (P : nat → Type) (s0 : P O) (s : Π (n : nat), P n → P (S n)) (m : nat), P m
constant nat_iota_O :
Π (P : nat → Type) (s0 : P O) (s : Π (n : nat), P n → P (S n)),
nat_rect P s0 s O ≡ s0
constant nat_iota_S :
Π (P : nat → Type) (s0 : P O) (s : Π (n : nat), P n → P (S n)) (m : nat),
nat_rect P s0 s (S m) ≡ s m (nat_rect P s0 s m)
(* We use the standard library to get correct normalization of nat_rect *)
now reducing = add_reducing nat_rect [lazy, lazy, lazy, eager]
now betas = add_betas [nat_iota_O, nat_iota_S]
(* Iteration at object-level is defined using nat_rect, as it is
just the non-dependent version of induction. *)
constant iter : ∏ (X : Type), (X → X) → X → nat → X
constant iter_def :
∏ (X : Type) (f : X → X) (x : X) (n : nat),
iter X f x n ≡ nat_rect (λ _, X) x (λ _ y, f y) n
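To see how iter_def together with the two iota rules computes, here is a hypothetical Python model of the same recursion, with the dependent motive P dropped and unary naturals modelled as ints:

```python
# Hypothetical model of nat_rect's computation rules.
def nat_rect(s0, s, m):
    if m == 0:
        return s0                            # nat_iota_O: nat_rect P s0 s O ≡ s0
    return s(m - 1, nat_rect(s0, s, m - 1))  # nat_iota_S

# iter_def: iter X f x n ≡ nat_rect (λ _, X) x (λ _ y, f y) n
def iter_(f, x, n):
    return nat_rect(x, lambda _n, y: f(y), n)

print(iter_(lambda t: f"g ({t})", "a", 3))  # prints: g (g (g (a)))
```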
(* We tell the standard library about the definition of iter *)
now betas = add_beta iter_def
(* An object-level iteration of g can be done as follows. *)
do iter A g a (S (S (S O)))
(* output: ⊢ iter A g a (S (S (S O))) : A *)
(* We can use the whnf function from the standard library to normalize a term. By default
it performs weak head-normalization. But here we *locally* tell it to aggressively
normalize arguments of g. *)
do
now reducing = add_reducing g [eager] in
whnf (iter A g a (S (S (S O))))
(* output: ⊢ refl (g (g (g a))) : iter A g a (S (S (S O))) ≡ g (g (g a)) *)
(* We can also pattern-match on terms at ML-level. *)
do
match g (g (g a)) with
| ⊢ (?h (?h ?x)) ⇒ h x
end
(* output ⊢ g (g a) : A *)
(* We can match anywhere, also under a λ-expression. Because match is ML-level it computes
immediately, i.e., it actually matches against the bound variable x. *)
do
(λ x : A,
match x with
| ⊢ ?f ?y ⇒ y
| ⊢ ?z ⇒ g z
end) (g a)
(* output ⊢ (λ (x : A), g x) (g a) : A *)
(* Now suppose we want to construct λ h, h (h (h a)) using iteration. Since λ-expressions
are still just ML-level programs which compute judgments, we can do it as follows using
the ML-level iteration. *)
do λ h : A → A, iterate h a (succ (succ (succ zero)))
(* output: ⊢ λ (h : A → A), h (h (h a)) : (A → A) → A *)
(* If we use object-level iteration we get a different term which
is still equal to the one we wanted. *)
do λ h : A → A, iter A h a (S (S (S O)))
(* output: ⊢ λ (h : A → A), iter A h a (S (S (S O))) : (A → A) → A *)
(* The standard library is smart enough to see that the two terms
are equal. (It has funext built in.) *)
do
let u = λ (h : A → A), iterate h a (succ (succ (succ zero)))
and v = λ (h : A → A), iter A h a (S (S (S O))) in
refl u : u ≡ v
(* output:
⊢ refl (λ (h : A → A), h (h (h a)))
: (λ (h : A → A), h (h (h a))) ≡
(λ (h : A → A), iter A h a (S (S (S O))))
*)
(* We could use object-level iteration and normalization to compute
the desired term as follows. *)
do
λ h : A → A,
now reducing = add_reducing h [eager] in
match whnf (iter A h a (S (S (S O)))) with
| ⊢ _ : _ ≡ ?e ⇒ e
end
(* output: ⊢ λ (h : A → A), h (h (h a)) : (A → A) → A *)
mikeshulman Aug 4, 2016

Very neat, I think I am starting to get the idea.

At first I was confused by what "we give an explicit type of f" meant, since I didn't see a type of f anywhere. But I guess you mean the type judgment → judgment → mlnat → judgment of iterate? (I expected "type of f" to mean an object-level type.)

It's going to take me a little while to get used to the overloading of syntax for meta-level and object-level operations. For instance, in that type of iterate the symbol → refers to an ML function-type, whereas in constant g : A → A the same symbol refers to an object-language function-type; right? Do you just learn which contexts are meta and which are object? And similarly it appears that something of type judgment can be "applied" to another judgment to produce an object-level function application, using the same juxtaposition syntax that applies ML functions to their arguments?

What is the difference between the symbols == in nat_iota and ≡ in iter_def?

It seems kind of tedious if every time we want to define something at object-level we have to say constant thing : ... and then constant thing_def : ... and then now betas = add_betas thing_def. Is there any shorthand?

Finally, exactly what kind of thing is the variable that gets bound by a λ? Inside the scope of the λ, is it just an ML variable of type judgment? And is the λ-binding just an ordinary ML variable binding like an ML lambda-abstraction? How does matching on terms deal with alpha-equivalence? Are there any named variables at object-level, or are all the names meta-level with only de Bruijn happening at object-level? (The random mixing of object-level and meta-level names is something that I've always found very confusing about Ltac, so I'm hoping that Andromeda is more consistent.)

mikeshulman Aug 4, 2016

Actually, how does Andromeda handle object-level variable contexts in general? Is it HOAS like in LF?

andrejbauer Aug 4, 2016

Thanks for the comments. This is exactly the kind of input we need. And you get to influence our design decisions!

Oops, I meant "type of iterate", not "type of f". I will fix that, thanks.

There are no object-level operations. The user never gets to manipulate expressions, only entire judgments. The output is the only place where you see object-level expressions, as parts of judgments. Even if you evaluate something like Type, that still requires a little bit of evaluation to get you to ⊢ Type : Type.

Yes, a judgment can be applied to a judgment to produce a judgment, by the elimination rule for Π. The ML-level types simply see all object-level things as completely opaque. ML-level type inference happens before evaluation, and so it cannot do anything intelligent about the dependently typed object level. So, application is in fact overloaded in AML.

Regarding definitions: we might provide some sort of syntactic sugar for definitions, but we do not want to bake in a notion of
definition such as those in Coq and Agda and Lean. Consider:

  • By having both x and x_def you can refer to the definition. It is not all bundled together in a thing called x.

  • In our system you can provide several definitions (whether they are consistent is your problem) and then conveniently use the one that is most appropriate at any moment. In contrast, Agda/Coq/Lean always have a preferred definition and you sometimes need to work around that.

  • Nobody forces you to define x in terms of x ≡ e for some expression e. You might define x in some other way, say with an arbitrary equation:

    constant x : A
    constant x_def : f(x) ≡ g(x)
    

    Then, we can prove that this is well-defined later by showing that Σ (x : A), f(x) ≡ g(x) is contractible. In fact, given any
    sequence of constant declarations c1 : A1, ..., cn : An we can prove that they implement a thing by showing that
    the corresponding dependent sum Σ (c1 : A1) ..., An is contractible. Thus we bring in the usual way of doing mathematics
    where it is allowed to have "tentative" definitions that are proved sensible. You do not have to dance a little dance around
    a proof assistant with a baked in notion of definitions.
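A sketch of the simplest instance, in standard type-theoretic notation rather than Andromeda syntax: a pair constant c : A, constant c_def : c ≡ e is harmless because the corresponding dependent sum is a singleton,

```latex
% The type of possible definitions of c is contractible:
\Sigma\, (x : A),\; x \equiv e \;\simeq\; \mathbf{1},
\qquad \text{with centre of contraction } (e,\ \mathsf{refl}\, e).
```

which recovers the usual fact that such definitional extensions can be eliminated.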

So, I think the price to pay by writing x and x_def is perhaps worth the extra flexibility. In any case, we should have more syntactic sugar, but we do not want to add it until we know what people really want.

You can tell that → refers to an ML function-type there because it appears in a typing annotation to a let-statement, which is part of AML. I hope that's not too confusing.

And it looks like I forgot to eliminate some ASCII. We allow UTF8 and ASCII syntax (I will fix that also):

  • == is the same as ≡
  • -> is the same as →
  • lambda is the same as λ
  • ∏ and Π are the same as forall
  • ⇒ is the same as =>
  • ⊢ is the same as |-

Finally, you ask about variables. Internally, variables bound by λ and Π are de Bruijn indices, and variables in a typing context are atomic names (this is known as "locally nameless"). There is no HOAS, PHOAS or other fancy stuff; the variables are actual strings (I am lying a bit). In particular, two different contexts may share the same variable. This is necessary, because variables in inference rules are shared by the premises.

You can generate any number of fresh variables of a given type A by writing assume x : A in ... This will create a new judgment xᵢ : A ⊢ xᵢ : A and bind it to x in ..., where xᵢ is guaranteed to be fresh. Since we use context joins, such a freshly generated variable will only appear where it is used, i.e., we're not "polluting" a "global context" with potentially unused variables. (This is one of the main reasons why each judgment has its own context and the contexts get joined, as opposed to some sort of a global "background" "invisible" context.)
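The locally-nameless representation described above can be sketched in a few lines of Python (a hypothetical illustration; the constructor names are not Andromeda's):

```python
# Hypothetical sketch of a locally nameless term representation:
# bound variables are de Bruijn indices, free variables are named atoms,
# so alpha-equivalent terms are syntactically equal.
from dataclasses import dataclass

@dataclass(frozen=True)
class Bound:          # de Bruijn index pointing at an enclosing binder
    index: int

@dataclass(frozen=True)
class Free:           # atom drawn from the typing context
    name: str

@dataclass(frozen=True)
class Lam:
    body: object

@dataclass(frozen=True)
class App:
    fn: object
    arg: object

def open_(t, atom, k=0):
    """Replace bound index k with a free atom (used when entering a binder)."""
    if isinstance(t, Bound):
        return Free(atom) if t.index == k else t
    if isinstance(t, Free):
        return t
    if isinstance(t, Lam):
        return Lam(open_(t.body, atom, k + 1))
    return App(open_(t.fn, atom, k), open_(t.arg, atom, k))

# λ x. x y, with y free; entering the binder replaces index 0 by a fresh atom.
t = Lam(App(Bound(0), Free("y")))
print(open_(t.body, "x0"))  # prints: App(fn=Free(name='x0'), arg=Free(name='y'))
```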

mikeshulman Aug 6, 2016

Thanks! This business of joining contexts will take some getting used to, but I think I sort of see how it works.

Regarding definitions, remarks like "whether they are consistent is your problem" make me a bit queasy. I realize the point here is to allow the user to implement an arbitrary theory which might or might not be consistent, but it seems to me that we need some way to clearly separate the activity of "specifying a theory", when we allow ourselves to make postulates and don't expect the system to check their consistency, from the activity of "working inside that theory", when we do want the system to verify that we aren't introducing any new postulates by accident. Coq, for instance, separates "Axiom" from "Definition" for this reason, and I can "Print Assumptions" to see which axioms a theorem depends on. But if every "Definition" is actually just a pair of "Axioms", how can I separate out the ones that are "real" axioms? The point of having only a single definition for each term is that we can prove a meta-theorem ("delta-reduction"?) that any such definition can be eliminated; for more questionable kinds of "definitions" there is no such meta-theorem.

An unrelated question: I gather that every "judgment" is the same sort of judgment, namely a typing judgment for some term in some context. Is it possible for the user to extend Andromeda with other kinds of judgments? For instance, various people and I have been thinking about type theories with a judgment for "cofibrations"; could that be implemented in Andromeda?

Also: is it possible to "subscribe" to a gist so that I get email notifications when someone posts to it, the way I do when I comment on a github issue? It's annoying to have to keep coming back here and reloading the page.

andrejbauer Aug 6, 2016

You can tell which tuples of constants are harmless by showing that their types are contractible, as I mentioned earlier. That's what it takes for them to be eliminated. And this is a semantic condition, not a built-in syntactic approximation of it. In general, we abhor syntactic conditions because they are necessarily limiting.

You should in general use constant for postulates and assume for things you want tracked. (I realize now that the above example does not have any assumes.)

Whether we can have different kinds of judgments is a very interesting question. It would be great if we could, but then I think we'd move much more in the direction of LF. I am not saying that's bad, just that it requires a bit of thought.

Do you see the gist comments in your GitHub notifications? I don't. It might not be possible to follow them, which is a bummer. In that case, it would be better to use Andromeda issues for discussions like this one.

mikeshulman Aug 6, 2016

No, I don't see anything in my github notifications.

It's true that syntactic conditions are more limited than semantic ones, but the point of a syntactic condition is that it's something you can check. The ultimate semantic condition is "P is true", but of course you can't check that directly; you have to give a proof, which is a syntactic thing whose correctness can be verified. I would be really unhappy if I had to prove every time I make a definition that it was well-defined; it's precisely the limitedness of syntactic conditions that allows us to prove metatheorems about them once and for all, like the admissibility of definitional extensions.

Is there an actual difference in meaning between constant and assume?

Finally, I think I still don't understand how variable binding works. Can you go through the funky handler example Gaetan posted on your blog

try lambda x : A, raise (Error x)
with Error e => e
end

and explain exactly what happens at every step and what the types of everything is?
