Last active December 24, 2015 23:24
theory SumT
imports Main
begin

(* http://isabelle.in.tum.de/exercises/lists/sum-tail/sol.pdf *)

fun sumTAux :: "nat list ⇒ nat ⇒ nat" where
  "sumTAux [] s = s"
| "sumTAux (n#ns) s = sumTAux ns (n+s)"

(* From the answers, this works immediately: *)
lemma sumtaux_add1: "∀ a b. sumTAux ns (a+b) = a + sumTAux ns b"
  by (induct ns) auto

(* Why does this have a more complicated proof? *)
lemma sumtaux_add2: "∀ a. sumTAux ns a = a + sumTAux ns 0"
  apply (induct ns)
   apply simp
  by (metis add.assoc add.commute sumTAux.simps(2))

(* Sledgehammer can't progress beyond `simp`, and `auto` hangs. *)
(* Why does the ∀ make a difference? *)
lemma sumtaux_add3: "sumTAux ns a = a + sumTAux ns 0"
  apply (induct ns)
   apply simp
  sorry

(* Same problem *)
lemma sumtaux_add4: "sumTAux ns (a+b) = a + sumTAux ns b"
  apply (induct ns)
   apply simp
  sorry

end
Initial question: irclog.perlgeek.de/isabelle/2015-12-23
(Note: My comment is longer than your post. Whether it's useful or not, I may delete it someday.)
A summary would be the following, where my claims are partly speculative, based on some knowledge and some experience:

- A harder proof can come from `simp` or `auto` following a very different line of `simp` rules. To answer any "why" would require following the `simp` trace, and there can be hundreds or thousands of rewrites in a `simp` trace.
- If the tools find a harder proof for `lemma_1` than for `lemma_2`, it does not mean a simpler proof doesn't exist for `lemma_1`. It only means the magic tools didn't give you a simpler proof. Their magic is only so strong. To understand the "why", well, remember, this is supposed to be the summary.
- It is hard to prove `LHS = RHS` when the `LHS` and `RHS` are "hopelessly expanded".
- The `\<forall>` operator is a first-order-ish logic operator (polymorphic here, but also a standard op for 1st-order logic). I assume it lends itself to first-order provers, because definitions and theorems integrate it with the other first-order connectives, and `sledgehammer` somehow knows how to work well with all of that.
- Binding `a` with `\<forall>` means there's only one free variable, `ns`, to apply induction to. This keeps the formula simple enough for Sledgehammer to be successful.
- With two free variables, `ns` and `a`, if you don't also induct on `a::nat`, then Sledgehammer can't work its magic. But if you do induct on `a` (in a separate `apply`), you end up with a formula that's hopelessly expanded.
- The difference between `\<And>`, the meta-logic quantifier, and `\<forall>`, the object-logic quantifier, is the hardest part to explain, but only because `\<And>` is hard to explain. The `\<forall>` operator is "easy" in the sense that if you work hard enough, you can see that it and its properties are purely a result of how it is defined in HOL.thy, and how axioms are used to give it its properties.

Here's some source I messed around with.
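As an aside (an untested sketch, not part of the gist): Isabelle's `arbitrary:` modifier to the `induct` method generalizes a free variable over the induction without quantifying it at the object level with `\<forall>`, which sidesteps the whole problem above. Something along these lines should work, with `algebra_simps` supplying the associativity/commutativity reasoning:

```
lemma sumtaux_add': "sumTAux ns a = a + sumTAux ns 0"
  by (induct ns arbitrary: a) (auto simp: algebra_simps)
```

The induction hypothesis then arrives meta-quantified over `a` (with `\<And>`), which is exactly the "free variable" strengthening the `\<forall>` version achieves by hand.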
Accept the gift of workarounds
The person with more knowledge showed up on IRC, so I'll bring this to a quicker end.
It's important to ask "why", but when you're working on your own stuff, rather than just doing exercises, you need the biggest toolbox possible to draw from.
My lesson from your question is what I do for the proof of [eq.1a]. I'm glad to know that binding a variable with `\<forall>` helped Sledgehammer out.

Recently, I replaced some complicated proofs with much simpler proofs, just by intelligently using a structured proof, with old-fashioned thinking, where I actually understood the proof steps I was implementing. That's the basis of my comment in my summary: magic tools can let us move very fast, but sometimes old-fashioned thinking is required.
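To make that concrete, here is what a structured (Isar) proof of the lemma from the gist might look like. This is a hedged, untested sketch; the name `sumtaux_structured` is made up, and the calculational steps lean on `algebra_simps` for the associativity/commutativity shuffling:

```
lemma sumtaux_structured: "sumTAux ns a = a + sumTAux ns 0"
proof (induct ns arbitrary: a)
  case Nil
  then show ?case by simp
next
  case (Cons n ns)
  have "sumTAux (n # ns) a = sumTAux ns (n + a)" by simp
  also have "... = (n + a) + sumTAux ns 0" by (rule Cons.hyps)
  also have "... = a + (n + sumTAux ns 0)" by (simp add: algebra_simps)
  also have "... = a + sumTAux (n # ns) 0" by (simp add: Cons.hyps)
  finally show ?case .
qed
```

Each `also have` step is small enough to see why it holds, which is the "old-fashioned thinking" point: no step depends on a prover finding a metis chain for you.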
Some specifics of `\<forall>`

Here, I make a short comment to support my claim that `\<forall>` is easier than `\<And>`. Essentially, the 3 meta-logic ops are these:

- `==`, meta-equality, which works like you expect: reflexive, symmetric, and transitive.
- `==>`, meta-implication, which works like you expect, and is used to define object-logic operators and theorems. It is implication, but if I define `-->` with it, it's not a circular definition.
- `\<And>`, the meta-universal-quantifier, the one without a one-liner explanation. One short answer is that it's related to schematic variables (produced after the proof), and to free variables in a formula, which are implicitly quantified with, guess what, yep, `\<And>`. In my experiments defining a full set of meta-logic connectives, I didn't get any help from Sledgehammer for the first few theorems using those connectives, which is as far as I got. In general, variables should be left free rather than explicitly quantified, and never quantified with `\<forall>` without a good reason, because you'll have to "unwrap" them in a step-by-step proof.

Anyway, that's just to give you the idea that in HOL.thy, `\<forall>` is the result of being defined with meta-logic operators. The signature is at HOL.thy#l88, and `All_def`, in with some other basics, is at HOL.thy#l187. Quantifiers are harder to completely figure out, because of all the binder notation.