@RoyTinker
Created October 19, 2018 03:55
Pasted closed captions from this talk: https://www.youtube.com/watch?v=egK4xhuWsVY
Host: It's my distinct pleasure and privilege to host Yuri for a talk on a topic I have discussed with him since I was a student: models of computation. Through the years we have interacted in various ways, on both the fundamentals of models of computation and on applications of abstract state machines. This is also very likely the last talk Yuri will give here as a resident, as he is moving to Michigan early next week, but I have no doubt that there will be many opportunities to interact with Yuri in the future. I hear that there is a birthday party coming up in a couple of years, and it was always a pleasure to participate in these events. So, without more ado, the floor is yours.

Yuri Gurevich: Thank you very much.
Let's see, how do I move that... So, in 2008, Nachum Dershowitz, my very learned and scholarly friend, and I wrote this paper where we proved Church's thesis and Turing's thesis (there are slight differences between them, but it's a technicality) for classical algorithms. Now, what do I mean by classical algorithms? The quantum people have hijacked the word "classical"; to them it means non-quantum. But our meaning was classical in the sense of the algorithms from time immemorial until, certainly, the late 1950s or the 1960s. These are the algorithms that Church and Turing had in mind when they put forward their theses. Another name for these classical algorithms is traditional algorithms, or sequential algorithms. I will mostly call them sequential, because the first sort of non-classical, non-traditional algorithms were parallel, and so the traditional ones became known as sequential.
Now, you can generalize and generalize and generalize, and then it is not at all obvious why the thesis should even be true. I had published this argument, implicitly, years ago. But here is the prompt for today's talk; let me read it. It says: the Dershowitz-Gurevich paper "says nothing about probabilistic or quantum computation. It does write down a set of axioms about computation and prove the Church-Turing thesis assuming those axioms. However, we are left with justifying these axioms. Neither probabilistic nor quantum computation is covered by these axioms. They admit this themselves for probabilistic computation, and do not mention quantum computation at all. So it's quite clear to me that these axioms are actually false in the real world, whatever it means for mathematical axioms to be false in the real world, even though the Church-Turing thesis is probably true." Now, there is a problem with scholarly papers: it takes time to read the whole thing. He definitely didn't read it; he probably didn't even read the abstract, where it says that this is about classical algorithms. Of course, he is a great hero of quantum computing, but that's how it is.
So let me say what Church's thesis actually is. (This discussion, by the way, was quite civil.) There is another picture here, of Stephen Kleene. "Kleene" means "small" in Dutch; "klein" would be the German form. That is the meaning of his name. He was a student of Church, and he actually made Church's thesis what it is: he straightened it out and called it a thesis. Church himself called it a conjecture, and spoke only about total algorithms; the better version is for partial algorithms. If somebody is interested in the history and all the subtleties, I refer you to our paper with Nachum Dershowitz. Here is the thesis as I formulate it; there are many forms in which you can formulate it, all essentially equivalent. In fact, none of them spoke about algorithms. They spoke about calculability, but there is something that makes a function calculable, and that is an algorithm. So: if an algorithm computes a partial function from strings to strings, then that function is Turing computable.
How important is the thesis? My personal view: not very. The importance is mostly historical. But that's how people are; anything historical seems to attract a lot of interest. Those were much more naive times: "decidable" seemed to mean really decidable, and the first undecidable problems seemed truly undecidable. By now we know it doesn't work that way. Decidable problems may be practically undecidable. A good example is the first-order theory of the real field, which was proved decidable by Tarski in 1948. It's a wonderful result, except that we can't use it much. There are people here who have tried; it would be a huge boon to software verification if there were a good decision procedure.
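To give a taste of what Tarski's procedure does (a standard textbook example, not from the talk): it eliminates quantifiers, replacing a quantified formula over the real field by an equivalent quantifier-free one, for instance

```latex
\exists x\,\bigl(x^2 + b\,x + c = 0\bigr) \;\Longleftrightarrow\; b^2 - 4c \ge 0
```

Repeating this for every quantifier decides any first-order sentence about the reals. The catch is that known elimination procedures can blow up doubly exponentially in the number of variables, which is one reason the result is so hard to use in practice.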
Also, undecidable problems may in reality be decidable, because undecidability speaks about the worst case. With a good probability distribution they may be decidable on average, or typically decidable. The classic first undecidable problem was the halting problem, and there is a tool called Terminator, developed at Microsoft Research Cambridge (UK), which worked with C programs: you bring it a C program, and it tells you whether the program will halt or not. It worked pretty well, and it probably can be improved. The whole business of decidability rests on a ridiculous assumption: that we have unlimited resources. Where else in the world do you make such assumptions? Why physicists are interested in it is beyond me; physicists are strange creatures, they tend to make huge pronouncements where there are little reservations here and there which need to be made, but are not. Anyway, for all practical purposes complexity theory has replaced this decidable/undecidable business. Complexity theory has its own problem (it is asymptotic), but that's a different story.
There is something much more real: software specifications. I came to computing in 1982, when I moved, culturally, from mathematics to computer science. I wanted to understand what they actually do there, so I volunteered to teach programming. My chairman said no. I asked why not. He said: we have enough people to teach programming; we don't have anybody to teach theory. Those were the times; there were not enough theorists in computer science. I said: but if I always stick to theory, I will never learn much about computing. He thought a little and said: okay, but you teach it only once. So I taught, essentially, Pascal. And then I discovered that while I had my little Macintosh, the University of Michigan had a huge mainframe; they actually competed with IBM, and there were seven or eight universities around the globe that adopted this Michigan Terminal System. In any case, as far as I was concerned, there were just two compilers, or rather one interpreter and one compiler, for Pascal, and they disagreed about what Pascal was. You know, for a mathematician, to teach something without knowing what it is? Nobody knew what Pascal was; there was no canon, there was no definition. So that aroused my interest in all these things, especially specifications.

Then I discovered that everybody among the theorists liked the declarative approach. I knew these people, Dana Scott and others, and I knew they were not good programmers. So how can they teach this population of programmers and engineers, the people who actually built all those systems, and tell them: no, no, you are doing it all wrong, everything should be declarative, clean, high-level? My impression was that executable specifications are the way to go, and I was actually laughed at by theorists for it. There were companies built around specification languages like Z: you invite them, you say, here is my software, I am the domain expert; they come, take a few months, they write a book, and they bring you the book. Now, nobody in your company can read this book. In the meantime the software, which is like an organism, does not stay in place; it develops. Whether the book ever reflected the right thing, we don't know. It is often useful to read it, because you see the way they understood your system, and they are often very clever, good people. But you cannot test against the book. What you want is a specification which you can run, and compare.

[Audience] What is the meaning of "run"? Is a Prolog program executable? At what level of complexity do you call it executable?

Just that it has a state, the state evolves, and at the end you have a normal run. Anything which runs; "operational" and "executable" are synonyms for me.
Okay. So this is the famous V diagram of software engineering. You start from a certain problem; you refine and refine and refine, going down in levels of abstraction; then you code; and then you test, going up: component tests, subsystem tests, and so on. Now suppose you discover, say at the subsystem tests, that something is wrong, and in the meantime a lot of people have done a lot of work before you discovered it. The obvious idea, which was around a lot, was to test in the horizontal direction as well, on the same level of abstraction. In order to test in this way, from my point of view, all these levels should be algorithms: operational, executable programs, high-level and maybe not very efficient, but nevertheless executable. So I started to think about how to approach this in general, and there is a theoretical problem here: can you have a more general kind of Turing machine? Of course the Turing machine is ridiculously low-level; it works with single bits, one tape, and so on. Can you have a very advanced, versatile computation model which is operational, an algorithm you can run, on any level of abstraction, so that you can simulate computation on any level of abstraction? Abstract state machines were developed to meet this challenge. My first thought was that it cannot possibly work, but I said: nevertheless, let's try. First I worked with sequential algorithms, in fact the only algorithms I knew at the time. I won't explain abstract state machines again here, so that I can get on with my story, but if somebody wants, I will be happy to explain what they are.
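For readers who want the flavor anyway, here is a minimal sketch (mine, not from the talk; the notation is illustrative, not Gurevich's): a sequential ASM program is a finite set of guarded updates to locations of an abstract state, all fired simultaneously at each step. Euclid's algorithm serves as the program.

```python
# A toy sequential ASM: the state is a dict of locations; the program is
# a list of (guard, updates) rules.  At each step, every rule whose guard
# holds contributes its updates, and all updates fire simultaneously
# (all right-hand sides are evaluated in the OLD state).  A real ASM
# would also flag clashing updates as inconsistent; this sketch doesn't.

def asm_step(state, rules):
    updates = {}
    for guard, upd in rules:
        if guard(state):
            updates.update({loc: f(state) for loc, f in upd.items()})
    if not updates:
        return None  # no rule fires: the run has terminated
    new_state = dict(state)
    new_state.update(updates)
    return new_state

# Euclid's algorithm as an ASM over the abstract structure of numbers.
euclid_rules = [
    (lambda s: s["a"] > s["b"], {"a": lambda s: s["a"] - s["b"]}),
    (lambda s: s["b"] > s["a"], {"b": lambda s: s["b"] - s["a"]}),
]

state = {"a": 12, "b": 18}
while True:
    nxt = asm_step(state, euclid_rules)
    if nxt is None:
        break
    state = nxt
# the run halts with a == b == gcd(12, 18)
```

The point of the formalism is that the state can be any mathematical structure, so the same machinery works at any level of abstraction.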
There were some people who liked the ideas; an ASM community has been formed, here in Europe and in other places. Then we pushed the thesis to parallel algorithms, to interactive algorithms, to a number of classes of algorithms, each time once the class was well understood. So this is the general thesis, from this paper; it appeared in an Oxford University Press volume which is not readily available, so when I was thinking about this talk I made an arXiv version. The more general thesis says: whenever you have a reasonably understood (okay, this is a thesis) class, or species, of algorithms, you can extend the sequential thesis to this class. Now, what are sequential algorithms?
It's probably what you think they are. But here is a very influential book by Hartley Rogers, who taught logic at MIT: the theory of recursive functions, the book in the area. On page two or so he states what he thinks algorithms informally are. In particular, they are deterministic, with no resort to random methods or devices. It is explicitly acknowledged that we do not allow probabilistic devices in our algorithms; that is by definition. Now, there is often the impression that Turing captured sequential algorithms, and that's not true. Let me show you some very classical algorithms. Here is the Euclidean algorithm; that's the whole algorithm. But Euclid had two versions of it: one for natural numbers and another for lengths, or, as we would say today, for non-negative (or positive) real numbers; they did not have a notion of real number, they had a notion of length. The version for numbers always converges; the version for lengths may not converge, depending on whether the lengths a and b are commensurable. And only one of these two algorithms can actually be executed by a Turing machine, because a Turing machine cannot deal with arbitrary lengths.
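The two variants can be sketched as follows (a sketch of mine, not from the talk); the loop is identical, only the domain differs:

```python
def euclid_gcd(a: int, b: int) -> int:
    # Euclid's version for natural numbers: always terminates, because
    # a strictly decreasing sequence of positive integers is finite.
    while a != b:
        if a > b:
            a -= b
        else:
            b -= a
    return a

# The version for "lengths" runs the same loop over positive reals.
# For commensurable lengths (say 1.5 and 2.5) it terminates; for
# incommensurable ones (say 1 and sqrt(2)) the two values never become
# equal, so the loop runs forever -- and no Turing machine can even
# hold an arbitrary length on its tape.
```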
Here is a ruler-and-compass algorithm; the picture comes courtesy of Wolfgang Reisig, who is a professor in Berlin. He is an heir of Petri (maybe you know Petri nets; Reisig wrote a book on Petri nets), and for years he argued with me that whatever we can do with abstract state machines, we can do with Petri nets. Then at a certain point he converted, and started to write in favor of ASMs. He told me one of the reasons for his conversion. When he took the standard course where they explain regular languages, context-free languages and Turing machines, the professor started with a ruler-and-compass algorithm as an example of an algorithm. Then he went on to finite automata, PDAs and Turing machines. Reisig said: we waited for him to return to it, and at the last lecture I raised my hand and asked, professor, what about that ruler-and-compass algorithm? The professor became very angry, because you cannot put these things on a Turing tape. The same goes for the bisection algorithm.

[Audience] Arguably Turing had the right idea, since it captures the fact that you can't do this on paper.

No? Consider the Greeks. When you look at the Greeks, they always speak about ruler and compass. Why didn't they realize that there are ellipses and so on? They did. But ruler and compass were their computer.

[Audience] And their computer had noise.

Right. And so if you think of the algorithm abstractly, mathematically, as dealing with real numbers...

[Audience] ...that doesn't model the device; any machine has noise.

Another dimension here is nondeterminism; there will be special slides about it. In fact, here there was a tiny bit of nondeterminism: there was a choice between this solution and that solution, but classical theory simply ignored it. So think of the bisection algorithm, or of Gaussian elimination. You can approximate them via arithmetization, but then you do not capture them faithfully, as they are.
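Bisection illustrates the point (again a sketch of mine, not from the talk): the algorithm is stated over the abstract ordered field of reals, using only comparison, the midpoint, and the sign of f; any Turing-machine version must first replace the reals by finite approximations.

```python
def bisect(f, lo, hi, eps=1e-9):
    # Assumes f(lo) and f(hi) have opposite signs.  Each step uses only
    # ordering, the midpoint, and the sign of f at the midpoint; nothing
    # here mentions a bit-level representation of the reals.
    while hi - lo > eps:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

root = bisect(lambda x: x * x - 2, 0.0, 2.0)  # approximates sqrt(2)
```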
So the problem of how to axiomatize these classical algorithms was there from the beginning. In his beautiful paper, Turing makes a lot of assumptions. He does not think in terms of axioms, but he keeps saying "without loss of generality". For example, the very first sentence of the key section says that we compute (I forget the exact wording) by pushing symbols around. So from the very beginning he restricted attention to symbolic algorithms, ruling out, for example, ruler and compass. And as the paper goes on he makes further assumptions, for instance that only a finite part of the brain is involved; kind of strange things, all justifiable, but obviously too many of them. I think the problem was difficult because they did not have the proper abstraction. They were (and this actually relates to your point) thinking about algorithms which actually compute, one way or another, something more or less real. But the simple axiomatization turns out to live on a much higher abstraction level, where you deal with abstract structures, say the abstract structure of the reals. That is why you can deal with ruler and compass: abstract states. So when I started to think about this problem, the first axiom was trivial, and the second came to me as sort of obvious, just from being a logician: any static mathematical reality is a kind of structure. Go to the University of Washington, catch some mathematician and ask him what he is working on: some kind of structures, topology, graphs, whatever, but some kind of structures. The third axiom was difficult; it took me something like 15 years. Kolmogorov thought about this. He said that steps should be of bounded complexity: each step, not the number of steps; there should be a bound on the complexity of a single step. That was the challenge. If you wish, at the end I can explain the axioms. But if you take these axioms, then every sequential algorithm is equivalent to an abstract state machine. So Shor says the axioms are probably wrong. I don't know. But in all these years, 18 years by now, there is a quite substantial ASM community, and they have tried all kinds of things; there are all sorts of people, from mathematicians to engineers, very applied people, computer scientists. And nobody ever came across an algorithm, whether very high level or very low level, which would not fit.
Let me give another, much older example. There was a school of mathematics called constructivism, and the constructivists said that only computable numbers exist. They started to develop an alternative mathematics, in which certain theorems become false. You say that a maximum always exists, say by the Weierstrass theorem; but this maximum may be attained at a non-computable real, and then in their world the theorem fails. An interesting case: the famous Hermann Weyl, a hero of the quantum world, was a constructivist in his young years, and he made a bet with Pólya. He predicted that within 20 years the Weierstrass theorem would be regarded as false. The bet was made at ETH in Zurich, and very famous people signed it. Here is the other half of the story. I was at ETH, visiting a colleague, and I saw something in German on his table with many famous signatures. I asked: what is that? He said, that is the bet, and he told me the story. I decided to write it up; I had a column in a journal at the time, so I wrote an article about it and asked somebody to make a good translation of the bet. After the 20 years (it was close to the war, I think 1938, or the war had already started) Hermann Weyl came to Pólya and said, let's just forget about it, or something like that. According to the bet, the loser was supposed to publish, in one of the best mathematical journals, an acknowledgment that he had lost. But that never happened.
So what was the problem? I have another article where I argue why constructivism is wrong: not merely another respectable approach to mathematics, but wrong. Why is it wrong? Because the constructivists were fascinated by recursiveness, by Turing computability, but the real world is different. They proved some beautiful theorems; they were there long before complexity theory and knew a lot. But they were not interested in feasibility; they were fascinated by that one level, the level of Rogers. In the beginning everybody was fascinated with working on this level, where everything is "effective" yet completely infeasible, in the sense that you cannot execute the steps in a feasible way; a single step might take two or three years.

[Audience] So what was Kolmogorov ruling out: making an infinite number of changes in one step, or a growing number?

Yes: an unbounded number of changes within a step, and across steps.

[Audience] So essentially ruling out analog alternatives; that was his goal?

Yes, thank you.
[Audience] But about axiom number three: the starting point for my comment was that axiom number three is a generalization of Rogers. What is the definition of bounded complexity? When I read the term "complexity", to me that usually depends on a model of computation.

Good; let me leave that until after I have reached the bottom line, because otherwise I will never finish. I am happy to talk about it afterwards. So: Nachum became one of the most active people in the abstract state machine community, and he was interested in proving the Church-Turing thesis. In order to prove the Church-Turing thesis, we had to go back, somehow, to the initial state. In our setting, initial states can be very rich; we allow arbitrary initial states. So, for example, an initial state may contain a solution to the halting problem, or whatever, and then of course there is no chance that you compute only Turing computable functions. So you need to cut this down somehow, and the cutting down is relatively easy; there are some technical details. We formulated arithmetical axioms (similarly, we could instead have formulated string-based axioms) and proved it.
Okay, now let me speak for a moment about numbers. What are numbers? Originally they were just 1, 2, 3. Then negative numbers appeared, and rationals; 0 actually came about very late, which is why we have the year 1 AD and the year 1 BC and no year 0. By now we have algebraic numbers, and reals, and complex numbers, and p-adic numbers; it is not about quantities anymore, and we keep going. It is quite an interesting story: the notion of number is evolving and widening, and there is not much you can say of the form "all numbers are such and such". This is just an analogy, but I see in it a familiar pattern, because the same has happened to algorithms. When Church and Turing wrote, the notion of algorithm was relatively robust: everybody would agree, more or less, on what the examples were. In a way, you knew it when you saw it; everybody knew what was an algorithm and what was not an algorithm. I do not know a single example from that time about which people would argue: no, this is not an algorithm. So it was quite a robust notion; not formalized, but robust. Now, where does it end? The notion of algorithm is evolving and widening and getting ever more liberal; surely new species of algorithms will be introduced. So not much can be said about algorithms in full generality. That was my original argument: why should this over-generalized version of the thesis, extended to all algorithms (not the Turing thesis itself), be true? It would say something about all algorithms, which seems very unlikely to me.
There were some contrarians very early on. I asked Andreas Blass whose example one of these was; he thought it was due to Kalmár, but I could not check it. It is one of the very early examples of something that looks algorithmic and yet obviously does not compute a Turing computable (recursive) function. Now to nondeterminism. From the traditional point of view, a nondeterministic algorithm is a contradiction in terms, because an algorithm is a recipe which you execute exactly. "When you come to a fork in the road, take it": what kind of algorithm is that? It's a joke, of course; I don't mean a physical fork in a physical road. But today nondeterministic algorithms are ubiquitous. The same with interactive algorithms. An algorithm was supposed to be isolated; that is why you could not, say, measure the rain, because that contradicts isolation. But by now interactive algorithms are very common.
Oops, excuse me. So how is nondeterminism resolved? By interaction. Imagine your algorithm comes to the fork in the road and "takes it": some operating system flips a coin for you, or does something; it is interaction, in this example with the operating system. But in general there are interacting algorithms everywhere. In AI there are learning algorithms; they often learn from people, so they interact with people, and they are definitely algorithms.
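Concretely (an illustration of mine, not from the talk): what a specification leaves as a nondeterministic choice, an implementation resolves by interacting with another agent, here the operating system's entropy source.

```python
import random

def take_the_fork(options):
    # The algorithm does not choose by itself; it delegates the choice
    # to the environment (the OS randomness source).  The apparent
    # nondeterminism is resolved by interaction with another agent.
    return random.SystemRandom().choice(options)

path = take_the_fork(["left", "right"])
```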
Now, one can argue that a learning algorithm is not an algorithm at all, because, you know, it may change its own program. Quantum mechanics is genuinely different. Before quantum mechanics, in computer science, nondeterminism was always resolved by interaction with another agent, an obvious agent: a computer, a person. In quantum mechanics you interact with God, with nature. When you measure (a measurement is a collection of linear operators), by fiat one of the indices is chosen. We have a couple of experts on quantum mechanics here who will hopefully keep us honest: nobody knows how; it is kind of mysterious how it happens, but that is how it happens.
So, interaction with nature. But if you interact with nature and with people, that opens up a lot of possibilities. For example, take that same example: what is wrong with it? It is an interactive algorithm; it interacts with nature.

[Audience] Again, it is playing against an adversary; you can imagine lots of processes that are not algorithms. A game is much more complicated, because when you have nondeterminism in a system you have to ask, as you say, who is providing the choices. Is it an adversary, is it cooperative, and so on. All I am saying is: you could define that to be an algorithm, something that interacts, or you can say, no, that is something else. It is not clear why I would.

The point you are making is actually the point I want to make: it is a matter of agreement whether we consider this an algorithm or not. Probably not yet. But it interacts with nature the way a measurement interacts with nature; an act of measurement is a kind of interaction. There it is mysterious, and something happens as you measure; here it is very much in the open. Now, you have to fix the details of what it means to measure the rain, and one can argue that it is not quite an algorithm: is it raining or not raining? One can try to fix those details; instead, I will move to something else. Consider a medical machine, like a CT scanner or an MRI scanner. It sucks you in and makes all kinds of measurements, and in the process it computes a function. The input is your ID, who you are, and the time when it was done; the output is whatever report the machine produces. The machine is completely automated, algorithmic. I am not making any claims here, and I am not trying to convince you of anything; all I want is to shake your belief that we know exactly what an algorithm is. So, do you want to consider this an algorithm?
Let me simplify a little, so that the machine produces just 0, 1 or -1. It can still make all kinds of tests beforehand, but now it plays chess. Suppose you are always White and the machine is Black; it plays, and then it produces +1, -1 or 0 depending on whether the machine won, lost, or the game was a draw. Here is one objection: it does not compute a function, because the result depends not only on your ID and the time but on whatever you do in the chess game, on your moves. Maybe; but your ID and the time completely determine the game, historically; that is the game, so they determine the result too. Also, we often do not require that all such additional data be counted as input. Consider a randomized algorithm: given two graphs, you want to determine whether they are isomorphic or not. You flip coins, but the input is just the two graphs. Another objection: it is not an algorithm because of the repeatability requirement, which says you should be able to take the same algorithm and run it again on the same data. Maybe; but that is legislation, a stipulation: you declare that every algorithm has this property. We never agreed on this; why should it be the case? The whole idea of sucking you into the machine was precisely so that you cannot play two games; I specifically tried to construct a scenario which you cannot repeat, definitely violating this requirement.
Okay, I am coming to the end. Is a chess-playing machine an algorithm or not? I think that, out of context, if you discuss this with somebody and say, here is this algorithmic machine, some people may accept it as an algorithm and some may not. But my point is this: whether or not you accept it as an algorithm (and I only constructed this example in the last few days, while preparing this talk), cleverer examples will appear. You study deep learning, right? So you have more interesting examples, about which I have no idea. Maybe learning algorithms are, from this point of view, the most interesting species; unfortunately I do not know much about them. So we have these two versions of the thesis, which are frequently confused. One is the original Church-Turing thesis, for traditional algorithms, the ones they had in mind when they spoke about all this; and that one has been proven. Well, Shor may still be right; but until now I have not seen any objection. At the very beginning, when we came out with this, I wrote practically to anybody I knew: please attack it, give me your critique; and people seemed to buy it. Now, if you take the generalized, over-generalized version, we still call it Church-Turing, but Church and Turing would not recognize it. You speak about a notion which is evolving, widening, an ever more liberal notion, and you want to claim something about it. Of course we can agree, we can stipulate, we can introduce new species of algorithms; but they do not necessarily compute functions. Let me also remark that in the beginning, the mathematicians who did computations spoke about functions. Today, computing functions is a tiny part of what algorithms do. Algorithms do not just compute functions; they perform tasks, they execute all kinds of things; very few of them actually compute functions. So, unless we stipulate it, I do not think that at any point in history the over-generalized thesis can be established in any way. Thank you.
[Audience] Are there any reasonable restrictions on the environments? You have restrictions on what the algorithm can do, in terms of finiteness: bounded work and bounded space per step.

Let me say two things. In full generality, no. But: Andreas Blass and I formalized so-called interactive algorithms, and our restriction was that agents may be spread around the world, but when you send me a message, it makes sense to me. Since my world is a mathematical structure, what you send me is an element of this world, or maybe a few elements. It is not about observability but about meaning; in a moment I will say what I am trying to rule out. And what exactly did we prove for this setting? That if you look at one agent who interacts with many agents, but every message he receives makes sense to him (he is an algorithm, and the message makes sense in his world: it is an element of his structure, or a representation of an element of his structure), then the thesis extends.
There was another thing: I defined distributed algorithms, at the time, I thought, in full generality, and the definition was used very successfully. For example, there was the ITU, the International Telecommunication Union. They had a specification language which they had developed themselves; then they revised it, and at a certain point the whole thing, their system, became so complicated that they needed help. They wanted their system specified precisely, in a way that would allow them to move forward, so they invited all the theorists in the world: please do it. There was one group, consisting of German professors; not that the ITU did not want competition, but at a certain point they were the only ones, quite early on, willing to participate. One issue, for example, was that in the language you could do a lot of things in zero time, even though it was supposed to be physical time; as they realized, nothing really happens in zero time, so what kind of zero time is it? Anyway, the group did it, and I went to Stockholm and gave a talk, to a community of people I had never met, on something I had no idea about: I spoke about the abstract-state-machine kind of foundation for what these German professors did. Now, there was not a single example in the world where the general definition did not work; but in principle it could not work, and it is pretty clear that there is no, there was no, axiomatization for general distributed systems. The reason is this: you can work on completely different levels of abstraction. That is how the real world is. You have a very clever algorithm; you proved it correct, you verified it; somebody comes from outside, a Russian hacker, and changes bits. On the level of your algorithm, nothing happens; but on the level of single bits he makes a hole and comes in. So, from the point of view of the mathematical world of agent number one, the incoming messages have no meaning. In software engineering they call this the leakage problem: information leaks down. So, once you realize that there is an infinite variety of abstraction levels: what I did not prove, but am pretty sure is true, is the following. If you consider one abstraction level (or maybe finitely many, but certainly one), so that everybody works on the same abstraction level, then you get distributed algorithms as they are written today. What people call distributed algorithms today is all reasonably understood, but there is an implicit assumption that we all work on the same level, so that when you send me a message, it makes sense to me, or at least it makes sense in my world. That is no longer full isolation.
[Audience] And to me, those ideas are more important to the idea of computing than this Church-Turing idea of computing a function. In other words, to me computing is taking the physical world and setting up its initial conditions and boundary conditions such that its behavior simulates an abstract system. In order to simulate, you have to have isolation; in other words, you cannot have information leakage from the outside world, so to speak, into your system. It has to be its own closed-off universe, or, a little more generally in the distributed case, you may have exchange, but the exchange should be highly disciplined.

Right.

[Audience] So to me, input/output, or communication, is secondary. But again, that input/output or communication has to be at the level of abstraction you are producing; it cannot carry information that leaks in from outside. So that idea of isolation and abstraction is, I think, more fundamental to computation than the idea of computing a function, which is, as you say, a tiny aspect of computing.

By the way, I waited eight years, because Shor wrote his remark somewhere around 2010, and I only recently discovered it by googling; well, probably by Binging, because I do not want Google to know much about me, and Microsoft already knows everything there is to know. I was looking for something else, and it suddenly popped up.
[Applause]