Created May 10, 2013 23:18
Questions and replies from Scott Aaronson's Ask Me Anything! Tenure Edition. The output includes a lot of HTML that could be parsed out, but it was good enough for me.
Reply to: wolfgang #1:
Question: What is the probability that we live in The Matrix?
Well, I’d estimate the probability that we live in the <b>“Movie Matrix”</b> (defined as a simulated world that keeps humans entertained while evil AIs extract electric power from them, which can be escaped by swallowing an appropriate pill, and destroyed via slow-motion kung-fu battles) as extremely close to 0. But I’d also estimate the probability that we live in the <b>“Unitary Matrix”</b> (defined as a universe that obeys the computable laws of quantum mechanics, which could in principle be efficiently rendered by a quantum computer, though whether it is or isn’t so rendered is probably an operationally meaningless question) as fairly close to 1.
Reply to: starc #10:
Question: Can you give a one paragraph exposition of Glauber dynamics?
No, absolutely not! Thanks for such an easy question. :-)
Reply to: Alex #8:
Question: Is it true that as a postdoc introducing talks, you used to interrupt and end presentations that were not interesting?
I have no recollection of such a thing! As a postdoc at Waterloo, I did run the weekly IQC lunch for a while, but we didn’t even really have formal talks there. We just had students, visitors, etc. speak for 5-10 minutes about open problems or interesting things they had found. And as “moderator,” sure, my role was both to ask questions and to figure out when a discussion had died down and it was time to move on to the next person’s show-and-tell. So, I assume that’s where the “legend” you refer to originated, but I seem to have turned into some sort of ogre in the retelling! :-)
Reply to: Barry McManus #3:
Question: What is the saddest thing?
First thoughts:
But on reflection, talking about the deaths of millions of innocent people, or the problem of evil, is <i>way</i> too abstract and impersonal to evoke anything close to the maximum amount of sadness. I could easily give you a “sadder thing” (the “saddest thing”? I don’t know) by relating—in graphic, novelistic detail—the true story of a single child somewhere being raped at gunpoint and then forced by soldiers to dig her own grave.
But I don’t want to do that, because it would be too sad.
Reply to: ppnl #20:
Question: Could you do a quantum version of Conway’s game of life? For example, start out with all cells in a 50% mixed state. As they evolve and interact you should generate arbitrarily complex linked states. But if you never look at any of the cells they will all be in the same state in a sense. Now look at one of the cells. What happens to all the others?
Yes, of course, why not? There’s an entire field devoted to <a href="http://en.wikipedia.org/wiki/Quantum_cellular_automata" rel="nofollow">quantum cellular automata</a>. On the other hand, one immediate difficulty is that Conway’s Game of Life is not a reversible CA. So it’s far from obvious what the “right quantum analogue” of GoL would be. My guess is that there are many <i>different</i> possible “quantum Games of Life” that could be suggested. To decide between them, we should probably just study them and see which one produces the most interesting behavior—i.e., the same thing Conway did to invent his otherwise-arbitrary rule for the <i>original</i> Game of Life!
As for what happens when you look at one of the cells: well, if it’s entangled (or even just classically correlated) with those other cells, then after we condition on the outcome of our measurement, the states of those other cells will change, of course. That’s just standard QM and probability theory.
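The non-reversibility of the classical rule, which is the crux of the difficulty above, can be checked directly. A pure-Python sketch (assuming the standard Life rule, on a small toroidal grid): two distinct configurations with the same successor, so the update map has no inverse and cannot be lifted to a unitary as-is.

```python
# One step of Conway's Game of Life on a small toroidal grid (pure Python sketch).
def step(grid):
    n = len(grid)
    nxt = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            live = sum(grid[(i + di) % n][(j + dj) % n]
                       for di in (-1, 0, 1) for dj in (-1, 0, 1)
                       if (di, dj) != (0, 0))
            # Standard rule: birth on 3 neighbors, survival on 2 or 3.
            nxt[i][j] = 1 if live == 3 or (grid[i][j] and live == 2) else 0
    return nxt

empty = [[0] * 4 for _ in range(4)]
lone = [[0] * 4 for _ in range(4)]
lone[1][1] = 1  # a single live cell dies of loneliness

# Two different states, one successor: the map is not injective,
# hence not reversible, hence not directly quantizable.
assert lone != empty
assert step(lone) == step(empty) == empty
```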
Reply to: Xiaotong #34:
Question: Maybe a stupid question, but I’m always wondering why people try to prove P!=NP even though people still cannot prove P!=PSPACE? Thanks
P≠NP is the “flagship” question of our field, and it’s a perfectly good choice of flagship—but the fact that it plays the flagship role doesn’t mean that complexity theorists currently spend much time trying to prove it. It’s just <b>too hopelessly far</b> beyond our current abilities—and so, for that matter, is the P vs. PSPACE question!
So even the few people who are currently doing serious, ambitious work on circuit lower bounds—such as Ryan Williams, Ketan Mulmuley, and Ran Raz—concentrate for the time being on <i>vastly</i> easier questions even than P vs. PSPACE. One example was the NEXP vs. ACC question, which Ryan actually <a href="http://www.cs.cmu.edu/~ryanw/acc-lbs.pdf" rel="nofollow">solved</a> in his 2010 breakthrough. Some examples of current “frontier” questions are EXP vs. ACC, NEXP vs. TC<sup>0</sup>, and derandomizing polynomial identity testing.
Reply to: Jr #38:
Question: How much General Relativity do you know?
Embarrassingly little. I know SR, and I’ve read at least two dozen expositions of GR (popular, technical, and everything in between, including Einstein’s originals), and I usually felt <i>while reading the expositions</i> that I more-or-less understood. But put me on a desert island with some blank paper, and could I retread Einstein’s path? Doubtful.
For me, the three main stumbling blocks have always been the following:
1. I <b>hate</b> tensor notation, with the raised and lowered indices, the mu’s and nu’s, the implicit summations, and the “artificial compression” of what are actually extremely complicated differential equations. (I once decided to expand out Einstein’s equation, “G=8πT,” directly in terms of the metric tensor. It took more than a page, but I did it, and I felt better after I did!) Now, I have enough experience (see upcoming answer about Dirac ket notation) to know that my revulsion is <b>purely</b> a function of my lack of familiarity, and that if I studied the subject more, I’d probably grow to love the convenience and power of tensor notation. But it still puts me off <i>today</i>! :-)
2. There are many topics in classical physics that are conceptually or historically <i>prior</i> to GR, and that I already don’t understand well enough. Those include gauge theory, Maxwell’s equations, the Lagrangian formulation of classical mechanics, celestial mechanics (i.e., how to solve Newton’s equations), and perturbation methods.
3. Most important of all, I’ve never been clear on what my <b>goal</b> should be in learning GR. What questions am I trying to answer about it? Something to do with computational complexity, maybe? And if I don’t <i>have</i> a definite goal, then why isn’t my current, handwavy understanding good enough? Now, you might say: isn’t satisfying my intellectual curiosity a worthy goal on its own? And I’d say, sure it is, in principle! But in actual fact, being the slovenly animal that I am, I’ve never been able to force myself to learn anything whatsoever—in math, CS, physics, you name it—without a very visible carrot dangling in front of me to help organize my thoughts. (Now that I have tenure, though, maybe I’ll search for such a carrot, just <i>because</i> of the excuse it would give me to finally learn GR!)
Reply to: Ken Schutte #37:
Question: I’ve read somewhere you criticize (but then, ultimately accept) Dirac (bra-ket) notation. As an outsider (computer scientist), I agree. Every other branch of math does just fine with standard notation for vector spaces, inner-products, conjugate transpose, etc. If you were inventing notation for QM from scratch, would you choose different notation? What?
No, I can’t suggest any clear improvement over ket notation (though I also haven’t thought about it much). In fact, I confess that I sometimes slip into ket notation even when doing linear algebra that has nothing to do with QM! :-)
There are at least three advantages of ket notation:
1. Provided we avoid the slip I mentioned above, ket notation makes it obvious at a glance <i>when we’re talking about a quantum state</i>—with all the specific facts and intuitions about quantum states that one can then “load into working memory.” (For example: this vector has to be normalized; its global phase is irrelevant; we can only learn about the vector’s components by applying a quantum measurement, not by observing them directly.)
2. Ket notation lets us <b>avoid subscripts</b> that we’d constantly be forced to use with conventional vector notation. E.g., instead of v<sub>0,x,y</sub>, whose meaning would then need to be defined every time, we can just write |0,x,y⟩.
3. The entire issue of primal vs. dual vectors is handled automatically. E.g., it’s obvious at a glance that ⟨ψ|φ⟩ is a number whereas |ψ⟩⟨φ| is a rank-1 matrix.
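Advantage 3 is visible even in plain Python. A minimal sketch (bare lists of complex numbers, no QM library assumed): the bra-ket is a single scalar, while the ket-bra is an n×n rank-1 matrix.

```python
# ⟨ψ|φ⟩ vs |ψ⟩⟨φ| with plain Python lists of complex numbers (sketch).
def bra_ket(psi, phi):
    """Inner product ⟨ψ|φ⟩: conjugate the bra components, sum the products."""
    return sum(a.conjugate() * b for a, b in zip(psi, phi))

def ket_bra(psi, phi):
    """Outer product |ψ⟩⟨φ|: an n×n matrix, always of rank 1."""
    return [[a * b.conjugate() for b in phi] for a in psi]

psi = [2 ** -0.5, 1j * 2 ** -0.5]   # (|0⟩ + i|1⟩)/√2
phi = [1, 0]                         # |0⟩

print(bra_ket(psi, phi))   # a single complex number
print(ket_bra(psi, phi))   # a 2×2 matrix: each row is a multiple of ⟨φ|
```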
Reply to: Vadim #48:
Question: If you weren’t in academia, is there anything else you could see yourself doing?
As far as I can remember, the evolution of my career goals was more-or-less as follows:
garbage man (age ~2) → eye doctor (age ~3) → spaceship designer (age ~5) → marine biologist (age ~8) → videogame designer (age ~10) → writer, novelist, children’s-rights advocate, and social reformer (age ~12) → software/Internet entrepreneur (age ~13) → AI researcher (age ~15) → theoretical computer scientist (age ~16)
Nowadays, I can still imagine having become some sort of (failed?) professional writer or software entrepreneur, had my teenage years gone a little differently.* But I’m very happy with where I ended up. Among other things, in this job I get to interact quite often with my friends in AI and software—and, of course, I get to write!
(* Or, of course, an AI researcher, but that would probably still be academia!)
Reply to: Anthony #49:
Question: What are the most interesting open questions relative to BosonSampling in your opinion?
Well, we gave a long list of them in our <a href="http://theoryofcomputing.org/articles/v009a004/v009a004.pdf" rel="nofollow">paper</a>!
The most obvious problems are to prove our #P-hardness conjecture and our anti-concentration conjecture. I’m optimistic that one or even both of these might be settled in the near future.
After that, perhaps the next problem is whether there’s a “traditional” decision problem that’s solvable by a BosonSampling device but that’s intractable classically. I’ve revised my opinion a bit, and now conjecture that the answer might be yes (but I’m really not sure).
After that, the next biggest problems all involve bridging the gap between our result and what the experimentalists can currently do. Some examples:<br>
- Can you still give formal evidence for the hardness of classical simulation if the inputs are Gaussian states, rather than single-photon Fock states?<br>
- What if the number of modes is only linear in the number of photons, rather than quadratic?<br>
- What if the real distribution has constant variation distance from the ideal distribution, rather than 1/poly distance?<br>
- What if a constant fraction of the photons get lost along the way?
Finally, there are questions that <i>I</i> really love, but that might be harder to motivate to non-complexity-theorists. These include: is there an oracle where, contrary to my and Alex’s conjectures, FBPP=FBQP but the polynomial hierarchy does <i>not</i> collapse? (Note that Fortnow and Rogers gave an oracle where P=BQP but the polynomial hierarchy doesn’t collapse. But even the intermediate case of PromiseBPP=PromiseBQP remains open.)
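For context on why the #P-hardness conjecture is the natural target: the output amplitudes of a BosonSampling device are permanents of submatrices of the interferometer’s unitary, and even the naive definition makes the cost visible. A pure-Python sketch (the O(n·n!) expansion, not a serious algorithm like Ryser’s):

```python
# Naive permanent: like the determinant, but with no alternating signs --
# and that sign difference is what destroys Gaussian elimination.
from itertools import permutations

def permanent(A):
    n = len(A)
    total = 0
    for sigma in permutations(range(n)):
        term = 1
        for i in range(n):
            term *= A[i][sigma[i]]
        total += term
    return total

# Sanity check: the permanent of the all-ones n-by-n matrix is n!.
ones = [[1] * 3 for _ in range(3)]
assert permanent(ones) == 6
```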
Reply to: Felipe #52:
Question: Scott, Can you sign my copy of QCSD? (I can drop by your office pretty much any time in the next 3 years, whenever’s most convenient for you..)
Of course!! Just let me know by email when you want to stop by, so I can make sure to be around.
Reply to: SomeGuy #47:
Question: What is the probability that the matrix we live in is totally-unimodular?
LOL! Well, for the unitary matrix describing our world to be totally-unimodular, it would need to be a permutation matrix times a +1/-1 diagonal matrix. So <i>really</i> you seem to be asking for the probability that our world is deterministic and classical! And I’d say that’s extremely close to 0. :-)
Reply to: Isaac Duhor #46:
Question: If the universe/multiverse are infinite, then there has to be infinite worlds, and so we would have infinite Earth-like planets, and even a planet very similar to this one, but without leap years, or kangaroos, or Santa Claus. Do you agree with this line of thinking, and if not, what are the arguments against it?
I hate to be the one to break this news to you, but not even <i>our</i> world contains the third item you listed… ;-)
Seriously: from the hypothesis that the universe is infinite, it doesn’t <i>logically</i> follow that all possible Earth-like planets have to exist somewhere. For example, maybe there’s a yet-undiscovered law of physics implying that every Earth-like planet must eventually contain at least one kangaroo! :-) But, as the preceding sentence already illustrates, it’s extremely <i>hard to imagine</i> how the laws of physics could be, so that you could get an infinite universe <i>without</i> every possible Earth-like planet arising somewhere in it (indeed, arising infinitely many times) with probability 1. This is similar to how, while no one has rigorously proved that the binary expansion of π must contain the complete works of Shakespeare and every other finite string, on statistical grounds it’s very hard to imagine how that <i>couldn’t</i> be the case.
(Note that, when it comes to the physical universe, if you accept that quantum mechanics applies always and everywhere, then you could alternatively use <i>that</i> to argue that every possible Earth-like planet arises with some nonzero probability amplitude, and therefore infinitely often with probability 1 in an infinite universe.)
For the reasons above, physicists and cosmologists who worry about such things <i>do</i> typically assume that an infinite universe would contain infinitely many observers nearly-identical to ourselves—thereby leading to all the infamous philosophical puzzles associated with the anthropic principle, the cosmological measure problem, the Boltzmann brain problem, etc. (Google those things if you don’t know what they are.)
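The “hard to imagine” intuition about π can be made semi-quantitative with a back-of-the-envelope calculation. This is a heuristic only: it treats the digits as i.i.d. uniform and the overlapping windows as independent, neither of which is proven for π.

```python
# Heuristic: chance that a fixed m-digit string appears NOWHERE
# among n random decimal digits, treating each window as independent.
import math

def miss_probability(m, n):
    windows = n - m + 1
    # log1p keeps this accurate even when 10**-m underflows
    # (in double precision, 1 - 1e-20 == 1.0 exactly).
    return math.exp(windows * math.log1p(-10.0 ** -m))

# A fixed 20-digit "message" is all but guaranteed within 10**22 digits:
p = miss_probability(20, 10 ** 22)
assert p < 1e-40   # roughly e**(-100)
```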
Reply to: Chan Li #60:
Question: Do you believe that 9/11 was an inside job? Official explanations of collapse of WTC #7 , crash of Flight 77 into Pentagon and crash of Flight 93 at Shanksville are simply unbelievable.
Not only do I emphatically reject that view, I don’t believe <font color="red"><b>any</b></font> conspiracy theory — as in none, zero, not one. I believe there have been <i>conspiracies</i>, of course—the conspiracy of the actual 9/11 hijackers was one straightforward example—but I regard conspiracy <i>theories</i> as one of the worst and ugliest <a href="http://xkcd.com/258/" rel="nofollow">known bugs in human reasoning</a>. Everyone should learn the telltale signs of conspiracy theories as part of their education, then run away from them like the plague.
For more, see my satirical essay <a href="http://www.scottaaronson.com/writings/boxcutter.html" rel="nofollow">“Occam’s Boxcutter”</a>, which I wrote back in 2002 (when the 9/11 conspiracy virus was already proliferating).
Reply to: Adam #59:
Question: Does the fine structure constant limit or enhance the power of quantum computers?
Almost certainly it has no effect on their power! <i>In principle</i>, if the decimal expansion of the fine structure constant were to encode, say, the solutions to the halting problem, then by measuring the constant to greater and greater precision, we could boost our computational power from BQP up to some fixed language in the class BQP/poly.
On the modern view, however, the fine structure constant has some completely different value at Planckian energies anyway. And it probably has the value it does at low energies because of some complicated sum of contributions arising from a more fundamental theory, not because its decimal expansion (or expansion in <i>any</i> base) encodes a message from God.
Reply to: Arrogant Undergrad #4:
Question: What do you think are a few of the most accessible open problems in complexity theory to someone with a limited background (presumably, they’ll be unimportant enough that real complexity theorists such as yourself haven’t bothered to solve them)? By limited, I mean having digested most of Sipser and Arora-Barak. Thanks Scott!
That’s an <i>incredibly</i> hard question to answer, but check out my <a href="http://www.scottaaronson.com/blog/?p=663" rel="nofollow">“Projects Aplenty”</a> post for a few of my ideas. If you want more, my suggestion is to can the “arrogance” and go talk to a professor at your own university! :-) (Or just scan the latest STOC/FOCS/CCC proceedings and <a href="http://eccc.hpi-web.de/" rel="nofollow">ECCC preprints</a>, looking for problems that strike your fancy.)
Reply to: itssummer #5:
Question: Razborov produced a superpolynomial monotone circuit lower bound for an NP-complete problem. Is an exponential lower bound provable in the same model with currently known techniques?
OK, I was just googling your question (you’re welcome :-)), and it <i>looks</i> like the best currently-known lower bound on monotone circuit size is basically that of Alon and Boppana: 2<sup>Ω(√k)</sup> for deciding whether a graph of size n has a k-clique, provided k≤n<sup>2/3</sup>. (Note that if k is constant, then Amano proved an essentially-optimal monotone lower bound of ~n<sup>k</sup>.) I suppose the main question is whether the 2<sup>Ω(√k)</sup> can be improved to 2<sup>Ω(k)</sup> for non-constant k by modifications of existing techniques, whether bold new ideas are needed, or maybe even whether the truth is 2<sup>o(k)</sup>. But given that I just now had to look up what the situation was, I can hardly be expected to have a strong intuition about this question! My advice is to go post your question over on <a href="http://cstheory.stackexchange.com" rel="nofollow">CS Theory StackExchange</a>.
Reply to: itssummer #6:
Question: In 2011 in your comment here http://www.scottaaronson.com/blog/?p=741#comment-26835 you gave a possible explanation for why Grover’s algorithm does not settle the question of whether quantum computing is superior. Since you were out of time then, I could not ask the question further. Here I go after almost 2 years. Could you comment more on the scaling part?
Oh alright. What I was trying to explain to you there was simply that the quadratic advantage of Grover’s algorithm, compared to the best classical algorithm, is only a proven fact in situations where you know for sure that brute-force search <b>is</b> the best classical algorithm! However, if you were trying to prove (let’s say) that a quantum computer can solve some problem in O(n<sup>2</sup>) time that a classical computer <b>can’t</b> solve in O(n<sup>2</sup>) time, then this would not be your actual situation. Yes, the quantum computer could run Grover’s algorithm to search a list of size n<sup>4</sup> in O(n<sup>2</sup>) time. But how would you prove that a classical algorithm <i>can’t</i> do the same? Since the size-n<sup>4</sup> list is much larger than the actual input (which, by definition, has size n), the list must have been generated by some algorithmic process, rather than being completely arbitrary. But that means that the classical algorithm might work in a much cleverer way than simply searching the list element-by-element! Indeed, ruling that possibility out would require a breakthrough on the scale of P≠NP.
I hope that makes it clearer. If not, then once again I’ll direct you to CS Theory StackExchange.
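The obstacle above can be made concrete with a toy (and entirely hypothetical) example: if the size-N “list” is produced by a simple generating process, say the sorted sequence f(i) = 3i + 7, then a classical algorithm can exploit that structure and crush element-by-element search, which is exactly the possibility one would need to rule out.

```python
def f(i):
    """Hypothetical generating process for the 'list': entry i is 3*i + 7 (sorted)."""
    return 3 * i + 7

def brute_force_search(N, target):
    """Element-by-element scan: the model in which Grover's speedup is provable."""
    evals = 0
    for i in range(N):
        evals += 1
        if f(i) == target:
            return i, evals
    return None, evals

def structured_search(N, target):
    """Binary search exploiting the fact that f is sorted: O(log N) evaluations."""
    evals, lo, hi = 0, 0, N - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        evals += 1
        v = f(mid)
        if v == target:
            return mid, evals
        if v < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return None, evals

N = 10 ** 6
idx, cost_bf = brute_force_search(N, f(N - 1))
_, cost_bin = structured_search(N, f(N - 1))
assert cost_bf == N and cost_bin < 25  # ~log2(10**6) ≈ 20 evaluations
```

Proving that <i>no</i> such shortcut exists for the lists arising from an actual computational problem is the part that would require a P≠NP-scale breakthrough.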
Reply to: me #7:
Question: Could you explain Fisher information in a way that makes it seem inevitable? Shannon information is beautiful. Kullback-Leibler divergence makes sense because of this explanation http://www.snl.salk.edu/~shlens/kl.pdf. But I haven’t yet seen the steps laid out on how Fisher information was derived (and so I don’t have confidence that it measures what it is supposed to).
Given that I just had to look up and remind myself what Fisher information <i>was</i>, I’m going to guess that the answer to this is no. :-)
Reply to: bitter #9:
Question: Hey Scott, how come you had the title of assoc prof before you had tenure? Is it common that the two promotions are treated separately, or were your circumstances exceptional?
No, at MIT it’s completely standard. First you become what’s called “Associate Without Tenure” (AWOT), and only ~3 years after that are you considered for tenure.
Reply to: Lukasz #12:
Question: apropos comment #1, how do you estimate the probability of us being a simulation run by someone else, i.e. we don’t have any ‘physical’ bodies? Since this is a clarification question I hope it’s OK that I’ll keep my right to ask a proper question.
If the simulation was <i>perfect</i>, then by hypothesis, the “fact” of our being in the simulation would have no observable consequences whatsoever. In which case, it’s not even clear what it means to discuss the probability of such a thing being true or false.
As I was saying before, in the Matrix movies they obviously had to assume that the simulation <i>wasn’t</i> perfect (since otherwise, it could’ve been any “ordinary” movie—say a romantic comedy—and the audience would’ve been none the wiser!). And I’d estimate the probability of our living in an <i>im</i>perfect simulation like the one depicted in those movies as exceedingly close to 0.
Reply to: Mathematician5 #14:
Question: Is the universe (or the entire history of the universe) really just a vector in a Hilbert space, and if not, where does quantum theory break down?
That’s an enormous question, and one that I’ve addressed from different angles many times in this blog’s history (try searching!).
The short answer is that the universe being “just a vector in Hilbert space” is certainly the view that the logic of quantum mechanics <i>militates</i> toward. Obviously the word “just” has to be interpreted with extreme care in the last sentence: it’s true only in the extreme-reductionist sense that, say, World War II was “just” the movement of a large number of quarks, gluons, and other subatomic particles. With that caveat in mind, though, the superposition principle is like a “universal acid”: introduce it <i>anywhere</i>, and it immediately wants to apply <i>everywhere</i> to maintain logical consistency. On the modern, “decoherence” view, even the measurement process is understood as just another example of unitary evolution of a system coupled to its environment. And in nearly a century, no one has yet come up with any elegant way to modify quantum mechanics in a way that <i>avoids</i> these conclusions.
On the other hand, taking the above seriously leads to the famous “measurement problem,” of explaining how it is that the Born Rule (and more broadly, our subjective experience) arises from unitary evolution—and to the never-ending debate about whether we’re forced to accept the reality of Many Worlds, and what exactly one even means by that. So I think it’s best to keep an open mind about the possibility that quantum mechanics might <i>someday</i> be superseded—even as we insist, vehemently, that any “claimant to the throne” reproduce all the existing successes of QM as a prerequisite to being taken seriously.
Reply to: wolfgang #70:
Question: Scott #68: >> “And I’d estimate the probability of our living in an imperfect simulation like the one depicted in those movies as exceedingly close to 0.” I know I have only one question, but I *have* to ask this follow-on question. You seem to say there are basically three possibilities:
R: the world is real as it is
P: we live in a perfect simulation
I: we live in a flawed simulation
You say we cannot distinguish R and P, so it follows that p(R) = p(P); you also think that p(I) ~ 0. So since p(R) + p(P) + p(I) = 1, it follows that p(P) > p(I). Why would you think a perfect simulation is more likely than an imperfect one?
You misunderstood me: in your notation, I was claiming that R=P, that there’s nothing meaningful we can even point to that would distinguish those possibilities. Thus, I think “p(P)” is large simply because I think p(R) is large! And I think p(I)~0 simply because p(I) represents the probability of Keanu Reeves materializing in my living room with a slow-motion kung-fu kick, or some similar event that in ordinary language would be called “supernatural.”
Incidentally, your dilemma reminds me of Homer Simpson’s, after he’d finally reached the guru on a remote mountaintop in India to whom he could ask 3 questions:
“Are you <i>really</i> the guru?”<br>
“Yes.”<br>
“Really?”<br>
“Yes.”<br>
“You?”<br>
“Yes.”
:-)
Reply to: Patrick #69:
Question: Just talking with a buddy about this quantum internet paper from Los Alamos: http://arxiv.org/pdf/1305.0305v1.pdf We got curious about something, probably pretty basic from your perspective, and I remembered that you were doing the AMA. So… how do you set the state of a qubit? Are people running this network setting the spin of one entangled photon to 1, so that the other entangled photon gets set to 0? If yes, how do you actually set the spin?
Or are they just measuring the qubit, but not setting it, and using it as a one-time pad to encrypt data and then send it conventionally (relying on the fact that the receivers will get the binary complement of their one-time pad)? More generally, I’m a little foggy on how you would configure or initialize the state of any quantum computing device to be able to ask it a question.
Do you just mean <i>in general</i>? (Remember the rules: no questions that require me to read and comment on a particular paper.)
Do you agree that it’s possible to prepare (say) a photon in a state of horizontal or vertical polarization? Or to put an electron in either its ground state or its first excited state? If so, then you know how to set the state of a qubit!
At a more philosophical level, it’s an interesting point that even state preparation ultimately relies on <i>measurement</i>: to set a qubit to |0⟩, we first measure it to see whether it’s in the state |1⟩, and <i>only if it is</i> do we then apply a NOT gate to change its state to |0⟩. Or to say it more generally, state preparation is inherently an irreversible, entropy-increasing process.
But that’s fine! After all, entropy-increasing processes are constantly taking place all around us, and initializing a <i>classical</i> computer’s memory to the all-0 state is an example of such a process as well.
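The measure-then-flip recipe above can be sketched as a toy simulation (pure Python; the “qubit” is just a 2-component amplitude vector, and measurement collapses it per the Born rule):

```python
import random

def measure(state):
    """Computational-basis measurement: outcome 0 with probability |amp0|^2,
    and the state collapses to the corresponding basis vector."""
    p0 = abs(state[0]) ** 2
    outcome = 0 if random.random() < p0 else 1
    return outcome, ([1, 0] if outcome == 0 else [0, 1])

def reset_to_zero(state):
    """The recipe from the text: measure, and apply NOT only if we saw |1⟩."""
    outcome, collapsed = measure(state)
    if outcome == 1:
        collapsed = [collapsed[1], collapsed[0]]  # NOT (Pauli-X) gate
    return collapsed

# Whatever the input state, the output is |0⟩ -- at the cost of an
# irreversible, entropy-increasing measurement step.
plus = [2 ** -0.5, 2 ** -0.5]
assert reset_to_zero(plus) == [1, 0]
```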
Reply to: davood #18:
Question: As a comment to your previous post you said “my tenure case heavily involved three things—algebrization, BosonSampling, and quantum money”. How d’ya know? Are these 3 items what you emphasized in your statement, or have you seen letters of others, or what?
No, I certainly didn’t see any letters of others! On the other hand, many times over the last year, I got requests from senior colleagues to explain various aspects of my work, in order that <i>they</i> could then explain those aspects to other people at the department, School of Engineering, and then Institute level. These requests were often surprisingly detailed (“in your quantum money scheme, how do you evaluate the low-degree polynomial on a superposition of subspace elements without <i>destroying</i> the superposition?”). So from them, I could piece together a rough picture of what must have been going on above my head.
Reply to: hull #19:
Question: Suppose you are engaged to lead a project to try to build, in a reasonable amount of time (20 years, 30 years?), a “real” q. computer (you define what “real” means, I suppose one that can be used to solve “efficiently” non trivial problems of practical interest). Suppose you are given an unlimited amount of resources to complete the task (unlimited money, top level scientists of your choice in all the fields you think are relevant for the job etc., anything available on the earth ..) What would you practically do?
Step 1: Build a new facility (“Los Alamos II”?) specifically devoted to the task.
Step 2: Hire David Wineland, Ray Laflamme, John Martinis, Rainer Blatt, Ike Chuang, Dave Cory, and all the world’s other leading QC experimentalists by offering them exorbitant salaries.
Step 3: Put them in a conference room and let them hash out several different routes to be pursued in parallel.
Step 4: Give them all the money they need to pursue whichever routes they decide on.
“It’s not exactly rocket science — just quantum information science!” :-)
Reply to: Carl #77:
Question: Do you believe God would be able to solve the halting problem, decide undecideable questions, etc.? (This is assuming you believe in a god but I only got one question.)
Well, if you simply <b>defined</b> omniscience to be one of God’s attributes (as, I guess, many philosophers and theologians would?), then obviously God <i>could</i> solve the halting problem, along with every other well-posed mathematical problem. Certainly there’s nothing <i>logically</i> contradictory about an entity that’s not itself a Turing machine being able to solve the halting problem for Turing machines. In fact, computability theorists reason about such hypothetical entities all the time! (They call them “oracles.”)
On the other hand, we <i>could</i> generate a variant of the ancient dilemma of God making a stone so heavy that not even He can lift it, by asking whether God can solve His <i>own</i> halting problem—and thereby run forever if He halts and halt if He runs forever.
On the third hand, I find it amusing to contemplate a God who <i>wouldn’t</i> be omniscient—just extremely, unimaginably scient—whose mind would encompass (say) the NP-complete problems, yet would be just as impotent as ours when faced with #P-complete problems and higher.
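The “God’s own halting problem” dilemma is just Turing’s diagonal argument. A sketch: any <i>concrete</i> candidate halting-decider (here an ordinary Python function, so necessarily fallible) is refuted by the program constructed from it.

```python
def make_diagonal(halts):
    """Given any claimed halting-decider, build the program it gets wrong."""
    def diagonal():
        if halts(diagonal):   # consult the decider on this very program...
            while True:       # ...and if it says "halts", loop forever;
                pass
        return                # ...if it says "loops", halt immediately.
    return diagonal

# Candidate 1: the (wrong) guess that everything halts.
claims_all_halt = lambda prog: True
d = make_diagonal(claims_all_halt)
assert claims_all_halt(d) is True
# By construction, d() would in fact run forever -- so we don't call it.

# Candidate 2: the (wrong) guess that nothing halts.
claims_none_halt = lambda prog: False
d2 = make_diagonal(claims_none_halt)
assert d2() is None  # d2 halts at once, contradicting its decider.
```

Whatever the candidate answers on its diagonal program, it is wrong, which is why a genuine halting oracle must live strictly outside the Turing machines it judges.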
Reply to: Jimmy #43:
Question: Who’s yer daddy?!
Uh, my father’s name is Steve Aaronson. He worked as a science writer and then as a PR executive at several companies. He also plays bass guitar and is an avid bicyclist.
Reply to: Sid #32:
Question: How long would you want to live if you could live as long as you wanted in good health?
In this thought experiment, do I have to fix a number ahead of time? I.e., will I be like some legendary figure who <i>can’t</i> die before his appointed date, even if he later decides that he wants to? If not, then I’ll simply say “forever,” with the option to kill myself after (say) a trillion years if things have gotten too boring by then.
Reply to: me #83:
Question: Given the assumption that large-scale quantum computers can be built, could there be a fundamental reason for the difficulty of building an n-qubit quantum computer to “increase exponentially in n”?
Sure, that’s a logical possibility. But on our current understanding of physics, it would be extremely mysterious—in my opinion, every bit as mysterious as a scalable QC just being flat-out impossible. One would demand to know: what is it about the laws of physics that <i>causes</i> the difficulty to increase exponentially with n? For example, is the issue that QC can only work in a “postselected” setting, where the number of postselection steps increases linearly with the size of the computation, so that the probability of all the postselections succeeding decreases exponentially? If so, then what new physical principle on top of QM prevents <i>non</i>-postselected QC from working?
Reply to: Jordan Ash #76:
Question: If quantum computing is equal parts computer science and physics, then why do most QI people assume we *require* physically different hardware to achieve scalable UQC? $0.02: If scalability is fundamentally about the intelligent manipulation of information, it shouldn’t matter whether we’re using trapped ions or an abacus to perform efficient UQC.
One answer is already implicit in your question. If you <i>don’t</i> have physically different hardware, then you’re not talking about something that’s “equal parts computer science and physics”: you’re just talking about computer science, as conventionally understood!
Of course, in principle it could turn out that BPP=BQP, in which case quantum computers would be much less useful than we thought—being needed only for polynomial speedups or for applications (e.g., quantum money) that inherently involve the processing of quantum information. But an algorithm to simulate arbitrary quantum computations in classical polynomial time would be so unlike anything we know, that the burden is on anyone who considers that a serious possibility to give evidence for it. (The <i>best</i> evidence, of course, would be to code up a simulator and use it to break RSA!)
Reply to: Bram Stolk #30:
Question: Could the outcome of the double slit experiment be explained by ‘lazy evaluation’ by the computer simulating our universe? The quantum eraser experiment shows that a slit is chosen only if we measure AND we keep the measurement result. ‘Lazy evaluation’ of our universe could be skipping the slit-choice.
There <i>is</i> something about quantum mechanics that’s strikingly reminiscent of “lazy evaluation” in CS. That is, the universe “cooks up a probabilistic answer to order” for whatever question you asked, but subject to that, it leaves anything that you <i>didn’t</i> ask about in whatever superposition of answers it was in before! (Or at least, that’s one way to summarize the rule for a partial measurement.)
However, I have two strong cautions:
First, if you consider (say) a quantum computer with millions of entangled particles, then “lazy evaluation” seems like a misnomer! <img src="http://www.scottaaronson.com/blog/wp-includes/images/smilies/icon_smile.gif" alt=":-)" class="wp-smiley"> This is still a form of evaluation that (in some sense) works astronomically harder than all the classical computers on earth, even if it doesn’t work any harder than it “needs” to work.
Second, I wouldn’t say that lazy evaluation “explains” quantum measurement—just that the two things are analogous to each other. Personally, I don’t see enough explanatory oomph here to go further than that.
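The partial-measurement rule mentioned above (the universe answers the question you asked, and leaves everything else in superposition) can be sketched numerically. This toy example is my own illustration, with made-up state and function names, not anything from the reply:

```python
import math
import random

# Two-qubit state (|00> + |01> + |10>)/sqrt(3), stored as a dictionary
# mapping basis states (q1, q2) to real amplitudes.
state = {
    (0, 0): 1 / math.sqrt(3),
    (0, 1): 1 / math.sqrt(3),
    (1, 0): 1 / math.sqrt(3),
}

def measure_first_qubit(state):
    """Measure only the first qubit: sample its outcome via the Born
    rule, then keep (and renormalize) the branches consistent with that
    outcome, leaving the second qubit's superposition otherwise intact."""
    p0 = sum(a * a for (q1, _), a in state.items() if q1 == 0)
    outcome = 0 if random.random() < p0 else 1
    norm = math.sqrt(p0 if outcome == 0 else 1 - p0)
    post = {k: a / norm for k, a in state.items() if k[0] == outcome}
    return outcome, post

outcome, post = measure_first_qubit(state)
print(outcome, post)
```

On outcome 0, the second qubit is left in (|0> + |1>)/sqrt(2); on outcome 1, it collapses to |0>. Nothing about the unmeasured qubit gets "evaluated" beyond what the question demanded.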
Reply to: Sabrina #89:
Question: What would you think we’ll achieve first: a breakthrough in genetic engineering so we can create a flying horse (like Pegasus), or a breakthrough in robotics and nanotechnology which allows us to build a flying mechanic horse?
The flying mechanical horse. Depending on what <i>exactly</i> the rules are (i.e., how horse-like does it have to be?), we can probably already build that today.
<b><font color="red">Addendum:</font></b> A colleague of mine has pointed out that we can easily get <i>either</i> type of horse today, as long as it only needs to “fly” for a very short time. <img src="http://www.scottaaronson.com/blog/wp-includes/images/smilies/icon_smile.gif" alt=":-)" class="wp-smiley">
Reply to: Everyone: I’m going to sleep now! Thanks for all the great questions, and I apologize if yours wasn’t answered yet—I’ll get to it, honest!
Reply to: PS #101:
Question: When was the last time you wrote a computer program? What did it do, and what language was it in?
A couple years ago, I wrote a quiz program to help me memorize Hebrew words, in QBASIC. If you’re willing to go back a decade, you’ll find a much larger number of programs, mostly written in C — and if you go back <i>two</i> decades, then a much larger number still, but again written in QBASIC. <img src="http://www.scottaaronson.com/blog/wp-includes/images/smilies/icon_smile.gif" alt=":-)" class="wp-smiley">
(Note: Here I’m not counting little 20-line programs that I write to do Monte Carlo simulations and confirm or disconfirm this or that conjecture. In those cases, I’m using the programming language basically as a glorified calculator.)
Reply to: Ashley #99:
Question: Well, how much do you sleep on average daily?
(Of course I asked this as you last commented you are going to sleep, but I would like to understand if intellectually productive people are usually those who have a lot of sleep too. If you find this nosy my apologies.)
The normal amount, ~8 hours (though with considerable variation). Now that the baby is around, it certainly isn’t in contiguous blocks, and often happens on an opportunistic basis.
Reply to: Rahul #102:
Question: What are the recent (or underrated) advances in Comp. Sci. (QC or not) that ( will ) translate into the largest applied / technological impact? To illustrate what I ask with examples from the past: Dijkstra’s shortest path algorithm or Fast Fourier Transforms or Gale–Shapley or Mersenne twister. Not sure whether these are the best examples (or even correctly classified as Comp. Sci.) but I’m looking for those algorithmic / theoretical advances which might end up having immediate applications or might solve some applied hurdle. Even ones that are already in use but relatively unknown / underrated are welcome. Thanks!
Two recent examples that spring to mind immediately are fully homomorphic encryption and the sparse Fourier transform (google them). I’d be extremely interested if other commenters want to suggest more examples!
Reply to: cgy4ever #103:
Question: What will you ask if you see Terry Tao post “Ask Me Anything!” on his blog?
(And you can only ask one question, like us.)
Well, I had the opportunity to meet Terry at UCLA, where I gave him some problems I’d been saving up for a while (especially the “Aaronson-Ambainis Conjecture” about bounded low-degree polynomials), and he generously spent a half hour thinking about them and gave me some potentially-useful leads. (I also asked him about whether he thought the value of BB(10) was independent of ZF set theory, and about how having a baby was affecting his career.) Since then, he’s also kindly answered several of my MathOverflow questions. So if Terry were to hold such an event, I’d probably figure that I’d already asked the poor dude enough for the time being. <img src="http://www.scottaaronson.com/blog/wp-includes/images/smilies/icon_smile.gif" alt=":-)" class="wp-smiley">
Reply to: Mike Lawler #42:
Question: In 10 years, how many digits do you think the largest known prime number will have? How about in 100 years?
You might enjoy Section 3.3 of <a href="http://www.scottaaronson.com/papers/philos.pdf" rel="nofollow">Why Philosophers Should Care About Computational Complexity</a> on the large conceptual difficulties in even <i>defining what we mean</i> by the largest “known” prime!
But suppose I make the assumption—which seems reasonable for the foreseeable future—that the “largest known prime” is extremely likely to be a Mersenne, 2<sup>k</sup>-1 (because of the availability of the Lucas-Lehmer test), and that to “know” it simply means to have written down k, and to have verified by computer that 2<sup>k</sup>-1 is prime.
In that case, I would generate my projection for 10 years from now by simply taking all the results of <a href="http://www.mersenne.org/" rel="nofollow">GIMPS</a> since 1996, plotting them on a log plot, and extrapolating the result to 2023! (I’d be amused if someone actually wants to do that, or has already done it.)
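For anyone tempted to take up that invitation, here is a minimal sketch of the log-plot extrapolation. The (year, exponent) pairs are GIMPS discoveries transcribed from memory; verify them against mersenne.org before relying on the numbers:

```python
import math

# GIMPS Mersenne prime exponents k (primes of the form 2^k - 1), with
# discovery years, transcribed from memory -- check mersenne.org.
gimps = [
    (1996, 1398269), (1997, 2976221), (1998, 3021377), (1999, 6972593),
    (2001, 13466917), (2003, 20996011), (2004, 24036583), (2005, 25964951),
    (2005, 30402457), (2006, 32582657), (2008, 37156667), (2008, 43112609),
    (2009, 42643801), (2013, 57885161),
]

# Least-squares fit of log10(k) against year: the straight line on the
# log plot that the answer describes.
xs = [year for year, _ in gimps]
ys = [math.log10(k) for _, k in gimps]
n = len(gimps)
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
intercept = ybar - slope * xbar

# Extrapolate the fitted line to 2023; the number of digits of 2^k - 1
# is roughly k * log10(2).
k_2023 = 10 ** (slope * 2023 + intercept)
print(f"predicted exponent in 2023: ~{k_2023:.2e}")
print(f"predicted digits: ~{k_2023 * math.log10(2):.2e}")
```

As with any naive extrapolation, this assumes the 1996-2013 trend simply continues, which it may well not.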
I’d be hesitant to predict <i>anything</i> for 100 years from now, since all my predictions about Mersenne primes would be hostage to my predictions about other issues, like whether humans will have uploaded themselves to a post-Singularity borg, or (more likely) whether civilization will have collapsed.
Reply to: Michael Bacon #110:
Question: Scott@78,I’ve thought about this some and I believe that it would be useful to throw in a few physics and CS “theorists” as well. And perhaps a couple of mathematicians for good measure. Also, I’d add some kind of generalist — perhaps someone with a penchant for jazz or Ikebana — just to sit in the conversations and make occasional comments. What do you think?
Well, theorists would <b>obviously</b> be needed for this project: at the very least, you’d want a large team thinking up better fault-tolerance schemes. But I’d put the experimentalists in charge of assembling that team.
Regarding mathematicians, musicians, and generalists: personally I couldn’t agree more! (What’s “Ikebana,” though?) On the other hand, the question asked how I’d go about <i>building a QC</i> given unlimited resources, not how I’d go about assembling the most interesting people to have a rollicking good time. <img src="http://www.scottaaronson.com/blog/wp-includes/images/smilies/icon_wink.gif" alt=";-)" class="wp-smiley">
Reply to: Nex #117:
Question: Do you believe in free will?
“It’s complicated.” <img src="http://www.scottaaronson.com/blog/wp-includes/images/smilies/icon_smile.gif" alt=":-)" class="wp-smiley"> As it happens, I’ve been working on and off for the past two years on a huge essay setting out my thoughts about free will and predictability—and the essay will be online in just a week or two! So, can you will yourself to wait for that?
Reply to: Mark #124:
Question: What is the nature of time?
I just <a href="http://www.closertotruth.com/video-profile/Setting-Time-Aright-Scott-Aaronson-/2169" rel="nofollow">posted a video</a> about that very topic last week! And, the nature of time being what it is, the fact that last week was before this week means that you can now watch it. <img src="http://www.scottaaronson.com/blog/wp-includes/images/smilies/icon_wink.gif" alt=";-)" class="wp-smiley">
(For more, check out Section 10 in <a href="http://www.scottaaronson.com/papers/philos.pdf" rel="nofollow">Why Philosophers Should Care About Computational Complexity</a>.)
Reply to: NotAnonymous #123:
Question: Is this AMA still going on? In reference to age ~10 of Scott #51: What videogame(s) would you design based on your research interests (broadly construed), and are all ideas on this blog public domain/free to use?
That’s 2 questions. Pick one or the other, dude. <img src="http://www.scottaaronson.com/blog/wp-includes/images/smilies/icon_smile.gif" alt=":-)" class="wp-smiley">
Reply to: Richard #119:
Question: How has your progress in learning Hebrew gone? For example, can you explain what is/was the hardest aspect and how you dealt with it?
I can speak at basically the level of a 3-year-old. I can order at a restaurant in Israel and be understood just fine, but the waiter will embarrass me by answering in English. Or (say) if Dana is arguing with her parents in rapid-fire Hebrew, I can pick up enough words and phrases to figure out what’s at issue.
If you were to plot my progress, you’d see a very gradual increase from the ages of 3 to 13, then a slow decline back to almost nothing (i.e., the alphabet, counting to 10, yes and no) after I left Hebrew day school, then a sharp increase when I started dating Dana and decided to practice with her, other Israelis I knew, Rosetta Stone, Pimsleur, my own QBASIC program, and anything else I could think of, and since then, my knowledge either remaining static or else slowly deteriorating again.
This might surprise people, but I’ve found that the hardest aspect of learning a foreign language is that <b>you have to memorize, like, <i>thousands and thousands</i> of words.</b> Coincidentally, I ran into exactly the same difficulty when I lived in Hong Kong at age 13, and tried to learn Mandarin (which was what the school offered—it didn’t offer Cantonese). My brain is <i>really</i> not well-adapted to learning languages, at least outside the brief childhood window of opportunity.
I’m still trying to find ways to deal with that problem (hence my QBASIC program). You’d think having an Israeli wife would help, but normally she just wants me to quickly understand the content of whatever she’s saying and therefore switches to English. Her parents provide better practice, since their English is only a little better than my Hebrew, forcing us to use a combination of languages. And there are also some other Hebrew learners at MIT who I practice with.
After the sheer memorization of words, my next biggest difficulty is the “combinatorial conjugation” (male/female, singular/plural, past/present/future), which follows rules that are “easy” if you’re a native Hebrew speaker but pretty random and arbitrary if you’re not.
Reply to: Nagesh #79:
Question: Hi Scott, thank you for this opportunity!! Due to the journey of my career, I came to realize that there is an increasing push towards finding biological and molecular correlates to the patterns in social sciences and psychology, using tools such as advanced imaging and genomics! With current social experimental designs, for the most part it is hard to distinguish if the individual differences in psychology and behavior arise from biology, or if the biological effects observed are due to behavior which is mostly caused by much higher-order constructs such as culture, parenting, or other such “brain washing” mechanisms, etc. What kind of breakthroughs do you think/feel are required in social sciences or psychology (or any other discipline) for minimizing the probability of self-destruction of the human species, a.k.a. the “world-peace problem”? Thank you!
Sincerely,
Nagesh
I’m not sure whether there are <b>any</b> plausible breakthroughs in social sciences or psychology that would contribute substantially to solving the “world-peace problem” (there might be, but I can’t think of them). Personally, I see the problem as fundamentally not scientific, but political. Namely, the sane, level-headed, democracy-respecting, scientific-tempered, solution-oriented people have to <b>win</b> over the screaming maniacs who believe anything is justified because they have God (or the Historical Dialectic, or <i>whatever</i>) on their side. And what makes the problem so intractable is precisely that, as their descriptions already indicate, the “screaming maniac” side will almost always be able to muster more boldness and zeal than the sane side.
Still, I agree that there <i>are</i> certain scientific advances—for example, the development of a superhuman AI—that could completely alter the outlook for this ancient conflict between the maniacs and the “saniacs” (though exactly <i>how</i> is hard to foresee).
Reply to: Bram #56:
Question: Well gee, coming up with a question feels like a huge responsibility, because it’s been so long since I spent time trying to think about deep complexity issues, or at least all the ones I’ve been thinking about lately require way too much backstory to be able to ask simply, and I don’t want to squander this opportunity to ask something deep. So at the risk of impinging on your no-Dwave requirement, I have a question which is of very much interest to me due to my interest in walksat, which is: can you think of a classical problem where it seems likely that an adiabatic quantum computer could produce a better than quadratic speedup over classical stochastic search, and if so can you give an intuition for why that should be the case?
It’s a very good question.
If you’re allowed to tailor your instance specifically to make (say) classical simulated annealing fail—even though someone who <i>knew</i> how the instance was designed could easily solve it classically—then <a href="http://arxiv.org/abs/quant-ph/0201031" rel="nofollow">this paper by Farhi, Goldstone, and Gutmann</a> answers your question. Specifically, they construct a set of fitness landscapes such that the adiabatic algorithm reaches the global minimum in polynomial time, whereas simulated annealing gets stuck in a local minimum and needs exponential time to get out of it.
Now, a harder question—one that I’ve been raising for years (see Section 4.2 of <a href="http://www.scottaaronson.com/papers/npcomplete.pdf" rel="nofollow">this paper</a>)—is whether you can design a set of fitness landscapes {f} such that the adiabatic algorithm reaches the global minimum in polynomial time, whereas <b>any</b> classical randomized algorithm provably requires exponentially many queries to f.
In trying to answer this harder question, an extremely natural idea is to use the <a href="http://arxiv.org/abs/quant-ph/0209131" rel="nofollow">Childs et al. conjoined-trees construction</a>—which is known to provide an exponential speedup by a quantum walk compared to any classical randomized query algorithm—and then somehow “embed” the conjoined trees into a fitness function f, in such a way that the adiabatic algorithm will simulate the quantum walk algorithm that we already know works for the conjoined-trees problem. And apparently there was recent progress toward doing this! But
(a) While I saw the preprint on the arXiv about a year ago, I can’t seem to find it now—does anyone want to help track it down? <img src="http://www.scottaaronson.com/blog/wp-includes/images/smilies/icon_smile.gif" alt=":-)" class="wp-smiley">
(b) The preprint in question didn’t quite achieve a separation using the adiabatic algorithm itself, but only using the “NON-adiabatic algorithm”—i.e., an algorithm that’s like the adiabatic algorithm, except that it transitions from the ground state up to the first excited state and then back down to the ground state.
Of course, even if problem (b) could be solved, it would remain an outstanding challenge to find a more natural, less contrived example where the adiabatic algorithm got a super-quadratic advantage. I regret that I don’t have a good intuition about this challenge. However, see this <a href="http://arxiv.org/abs/1302.5733" rel="nofollow">important paper by Freedman and Hastings</a> for some very recent results relevant to the challenge—indeed, the only reason I’m not writing more about their paper is that I haven’t yet read and understood it!
Ultimately, of course, the question of whether the adiabatic algorithm can ever get a superquadratic advantage (or even, for that matter, a <i>quadratic</i> advantage) “in practice” might need to be settled experimentally.
Reply to: Gugg #107:
Question: “[I]f you want a universe with certain very generic properties, you seem forced to one of three choices: (1) determinism, (2) classical probabilities, or (3) quantum mechanics.” (You in QCSD.) If one understands the 0-norm to be the intersection of the 1 and 2-norms, shouldn’t the trilemma really be the following (for any universe with very generic properties)?
1. Determinism, classical probabilities and quantum mechanics are all true.
2. Classical probabilities is true; the other two are false.
3. Quantum mechanics is true; the other two are false.
Thanks, that’s a very interesting and amusing way to put it! But I suppose it ultimately boils down to a matter of definition. Many people would interpret “Classical probabilities is true” to mean, not merely that all your states are probability vectors, but also that <i>some</i> states are <i>nontrivial</i> probability vectors. Likewise, they would interpret “Quantum mechanics is true” to mean that some states are nontrivial quantum superpositions.
One technical comment/correction: in case 2, the probabilistic case, you also shouldn’t rule out quantum mechanics being true—since maybe your states are “really” quantum mixed states whose density matrices just happen to be diagonal!
Reply to: Joe Shipman #143:
Question: Why do you call your blog “Shtetl-Optimized”?
I’m sure I’ve answered that question several times before on this blog—search for it!—but I also like the answer <a href="http://www.scottaaronson.com/blog/?p=1366#comment-73043" rel="nofollow">Brian Hayes gave just recently</a>.
Reply to: John Sidles #135:
Question: Scott, here is a meta-question, that partly reflects general curiosity with regard to the future, and partly is a tribute to your recent MIT promotion!
MIT background information:
• MIT was founded in 1861 (one hundred fifty-two years ago).
• Google Earth shows MIT’s Killian Court (just south of the Infinite Corridor) standing eight feet above mean sea level.
The Question Asked: How many of the following ten yes-or-no assertions can be predicted with accuracy 55% or better, by a STEM researcher of average knowledge and imagination, after (at most) two minutes of consideration for each question? The questions concern either MIT specifically, or the STEM community broadly, in the year 2085 … that is, half as far in the future as MIT’s founding is in the past.
Assertion 1: In the year 2085, MIT’s Killian Court will stand below mean high-tide sea level, and winter snow will be uncommon in Massachusetts.
Assertion 2: At least one MIT undergraduate engineering degree program will include the word “quantum” on its diploma.
Assertion 3: At any given time, the majority of MIT students will *not* attend classes on-campus and will *not* view lectures in-person.
Assertion 4: At least some MIT courses will be taught (and graded) entirely by artificial intelligences.
Assertion 5: At least one quantum computer will exist on the MIT campus that is used mainly for practical computation.
Assertion 6: Post-graduation, the majority of MIT students will *not* take jobs associated to residency within the United States.
Assertion 7: Post-graduation, the majority of MIT students will *not* take jobs associated to residency upon the planet Earth.
Assertion 8: The Earth’s population will be less than five billion.
Assertion 9: Median global life-expectancy will be more than one hundred years.
Assertion 10: Histories of the 21st century will commonly mention at least three MIT faculty (by name) as exerting strong influence in regard to the outcomes of Assertions 1-9.
Reminder: Individual responses are not solicited (except voluntarily) … rather, the question asked is whether (in your opinion) the STEM community can generate at least some predictions regarding the future of MIT (and the STEM enterprise) that are appreciably better than random guesses.
Predicting “with accuracy 55% or better” is an extremely low bar—indeed, a forecaster would only have to adjust her “true” probabilities (whatever they were) by at most 5% to clear that hurdle. And that, in turn, would decrease her “expected rightness” by well under 1 assertion out of the 10. Indeed, I read in Nate Silver’s book that weather forecasters routinely do this: that is, they adjust probabilities that the computer model says are 50% to either 40% or 60%, simply because laypeople think 50% sounds too wishy-washy, and they’re willing to take a slight hit in accuracy to keep up appearances! So, adopting the same principle, I’m going to say that all 10 of your assertions could readily be predicted with “accuracy 55% or greater.”
Reply to: Bob #128:
Question: What books/films would you like to see Lily grow up with?
Good question! I’ll expose her to my own childhood and adolescent favorites: the classic Disney movies, Narnia and Bridge to Terabithia and Wrinkle in Time and Rats of NIMH, Tom Sawyer and Huck Finn, Asimov’s short stories, Martin Gardner’s games columns, Surely You’re Joking Mr. Feynman, Real Genius …
Ultimately, though, she will form her own tastes, and they will differ from mine.
Reply to: Shehab #15: Since I’m not entertaining complicated multi-part questions in this session, let me address your first question only.
Question: Can a quantum system generate a true random sequence? If a chaotic system can also generate a true random sequence, can we build a partition between the sets of quantum true-random-sequence generators and the sets of chaotic true-random-sequence generators based on their scale? If such a partition exists, can we apply the idea in the reverse direction to test quantumness? If quantum measurements can never be unbiased in the real world, can we say that a quantum system cannot generate a true random sequence?
The answer is yes, in <b>two</b> quite strong senses.
The first sense is definitional: suppose you measure a bunch of 45-degree polarized photons in the horizontal/vertical basis, and you get outcomes that are NOT uniformly random bits. In that case, the photons wouldn’t have been described by quantum mechanics at all, but by some different theory that modifies QM. I.e., the Born rule is simply part of the <i>definition</i> of QM, and the rule requires true randomness.
The second, much stronger and more interesting sense, is that <b>if</b> you can do a physics experiment that violates the Bell inequality, then either there must have been faster-than-light coordination between the two detectors, or else the outcomes must have had some genuine entropy, which moreover was “generated on-the-fly” at Alice’s and Bob’s respective detectors. Of course, what makes this relevant is that quantum mechanics <i>does</i> let you do experiments that violate the Bell inequality, and moreover those experiments have already been done. However, the crucial point is that <b>the conclusion, of “true randomness,” doesn’t presuppose the truth of QM—only that the Bell inequality was violated and that superluminal signalling is impossible.</b>
This was the upshot of Conway and Kochen’s famous 2006 “free-will theorem,” but it’s also been noted independently by others (like me, for example, in my <a href="http://www.scottaaronson.com/papers/nks.pdf" rel="nofollow">2002 review of Stephen Wolfram’s book</a> <img src="http://www.scottaaronson.com/blog/wp-includes/images/smilies/icon_smile.gif" alt=":-)" class="wp-smiley"> ). More recently, this observation is also what underlies protocols for generating “Einstein-certified random numbers,” such as those of Pironio et al. or more recently <a href="http://arxiv.org/abs/1210.1810" rel="nofollow">Vazirani and Vidick</a>.
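To make the Bell-violation numbers concrete, here is a small sketch (my illustration, not part of the reply): local hidden-variable strategies satisfy the CHSH bound |S| ≤ 2 and win the CHSH game at most 75% of the time, while the singlet-state correlation E(a, b) = −cos(a − b), at the standard measurement angles, reaches Tsirelson's bound:

```python
import math

# CHSH sketch: classical (local hidden variable) strategies satisfy
# |S| <= 2; the quantum singlet state, with correlation
# E(a, b) = -cos(a - b), violates this up to Tsirelson's bound 2*sqrt(2).
a0, a1 = 0.0, math.pi / 2            # Alice's two measurement angles
b0, b1 = math.pi / 4, -math.pi / 4   # Bob's two measurement angles

def E(a, b):
    # Singlet-state correlation between outcomes at angles a and b.
    return -math.cos(a - b)

S = E(a0, b0) + E(a0, b1) + E(a1, b0) - E(a1, b1)

# Winning probability in the CHSH game: (4 + |S|) / 8.
win_prob = (4 + abs(S)) / 8
print(abs(S), win_prob)
```

The resulting win probability, cos²(π/8) ≈ 0.854, exceeds the classical 0.75 bound, and it is that gap, rather than the correctness of QM per se, that certifies the outcomes contain genuine randomness.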
Reply to: PeterM #149:
Question: My question is partially motivated by the Foundation series. There is the science of psychohistory, about which Asimov hardly says anything specific, but still, even in its vagueness, the existence of this discipline within the stories is fun. I am curious if, instead of making a novel around a fictitious science, you could write an exposition of a fictitious proof of P does not equal NP. Preferably it would involve some sensationalist statement. For example, many people take it for granted that from a proof like that we would gain tremendous insight. However, at least when there is interaction, learning the truth of something is possible without extra insight. So, for the sake of a fun fiction, one could start: “In her monumental proof the first step was the realization that a so-called minimal proof of P not equal NP is absolutely insight-free in the sense that any (…)”.
Sure! In some sense, few things are easier than generating a fictitious proof. <img src="http://www.scottaaronson.com/blog/wp-includes/images/smilies/icon_smile.gif" alt=":-)" class="wp-smiley"> (Indeed, I’ve already described a few fictitious P≠NP proofs, most recently for my <a href="http://www.scottaaronson.com/blog/?p=1293" rel="nofollow">April Fools prank</a>.)
Of course the catch—which also applies to Asimov’s “psychohistory,” or for that matter, to his robots that were intelligent because of “positronic brains”—is that these fictional explanations always contain a black box somewhere, into which the entire difficulty gets shoved. (Another classic example is the “flux capacitor” from <i>Back to the Future</i>.)
But crucially, consumers of science fiction turn out to be perfectly happy with “flux capacitors”! Indeed, even if Asimov <i>could</i> explain how his psychohistorical or positronic black-boxes actually worked (which, as he understood perfectly well, he couldn’t), all he would’ve achieved would be to bore, alienate, and confuse most of his audience.
Reply to: Travis #138:
Question: What is the secret to achieving happiness?
Bertrand Russell famously wrote a self-help book called “The Conquest of Happiness” while in the midst of a suicidal depression. Like Bertie, I’ve also spent a decent fraction of my life depressed, which makes me hesitant to field your question! On the other hand, right now I’m quite happy, so maybe I <i>can</i> take a swing.
I don’t think there’s a “secret” to happiness: just non-secret, widely-agreed-upon information that each individual has to figure out how to apply. Do work that’s important and meaningful to you, and that you’re good at. Be part of a community that appreciates you. Don’t try to be something you’re not; it won’t work. Travel the world. Be careful with drugs. Perhaps most important, with very few exceptions, <b>actually do the things</b> that you fantasize about doing. (Good examples: asking someone out on a date, writing a book, creating a “zoo” of complexity classes. Bad example: shooting up an elementary school.)
Reply to: Tobi #95:
Question: What are some of the things that you did in college and afterwards that helped you become the researcher you are today and acquire tenure?
Took lots of math/CS classes; spent lots of time in the library (yes, people still went to libraries back then) hunting books and papers both classic and obscure (in math, CS, physics, philosophy, biology, social science…); emailed the authors of those books and papers; found wonderful mentors to bounce ideas off of; went to every conference and workshop I could; and most of all, spent <b>unbelievable</b> amounts of time trying to solve open problems, mostly unsuccessfully.
Between the ages of 14 and 24, I had zero dating life, which was the overriding reason why I was so depressed (see #155). And if I had to do it over, I’d almost certainly do things differently. But the one thing I <i>did</i> have was math—endless hours to think about oracles and advice states and low-degree polynomials—and the tenacious hope that, if only I could succeed there, everything else in life would eventually sort itself out. So yes, I probably had a hunger that more contented, well-adjusted people don’t have.
Reply to: Bill Kaminsky #36:
Question: [Preliminary Note: Hmm... seems like my first attempt got eaten by WordPress... apologies for any redundancy... hopefully this second attempt not only goes through, but also is actually more understandable.] To a rough order of magnitude, do you have any sense of what the size would be of the 3-SAT instance whose (un)satisfiability would signify the (non)existence of a proof of some interesting-to-humans theorem that’d fit within, say, 10 journal pages if the proof were rewritten in the usual, mathematics-for-humans form? I ask since SAT heuristics like survey propagation can sometimes handle millions of variables in linear time. Might SAT heuristics be getting to the point where one might hope for sorta-brute-force computer-generated proofs of some wide swathe of interesting-to-humans theorems? P.S. I’ve done a nontrivial amount of Googling about automated theorem proving using state-of-the-art SAT solvers, but alas it seems to me that, at least among the papers Google brings up (e.g., “Encoding First Order Proofs in S[atisfiability] M[odulo] T[heories]”, http://people.clarkson.edu/projects/carl/papers/smt_encoding.pdf), those in automated theorem proving aren’t focusing on the new nifty SAT heuristics like survey propagation, but good ol’ DPLL… especially since they seem focused actually on how you’d solve *UNSAT*. Please correct me if I’m wrong, but UNSAT is co-NP and proving P = NP wouldn’t settle NP vs. co-NP, correct?
This is a wonderful question—for years I’ve tried to get students interested in a project that would help determine the answer to it! (So far without success.)
There are really <i>two</i> blowups to worry about here: first, the blowup in translating a 10-page human-understandable proof into a machine-checkable format (HOL, Mizar, etc.), and second, the further blowup in translating the HOL or Mizar proof-checking process into 3SAT. If the translations were done naively, I can easily imagine that they’d produce a 3SAT instance with many billions of clauses.
So the key research question is how to do the translations in a <i>less</i> naive way. I actually see that as a very deep question, and <i>not</i> just a grubby matter of whittling down constant factors. (Which is not the view Lubos Motl’s caricature of me would hold, but <i>is</i> the view <i>I</i> hold!) One wants to know: can (say) the rules of first-order logic and the axioms of ZF set theory be encoded in some much more “succinct” way? Which, as it happens, is directly related to <i>another</i> of my favorite open problems: namely, the problem of finding as small a k as possible for which the value of BB(k) (i.e., the k<sup>th</sup> Busy Beaver number) can be proved to be independent of ZF set theory.
Reply to: Matt Leifer #26:
Question: A while ago, I asked several academic publishers whether they would publish a book that had been made available electronically for free under a permissive license. Most of them said that they would, provided they could have exclusive rights to the print version, so I was wondering about your experience of this. Did the fact that your lecture notes are available for free on the internet cause any problems with getting your book published, and what is the status of your online lecture notes now that the book is published, i.e., does CUP own the copyright or do you, and is CUP going to sue me if I print the whole thing out and give it to my students? OK, that’s several questions, but I bunched them all together so that I only had to use one question mark.
My editor, Simon Capelin, raised no problem whatsoever with my leaving the lecture notes online, and the lecture notes remain under my own copyright, not CUP’s. Simon’s open-mindedness about this issue was one reason why I decided to publish with CUP in the first place.
Reply to: Evan #27:
Question: What are your top 5 (or so) books that haven’t yet been written that you would most like to read? (Within a five-year horizon, so let’s say “a popular account of the proof that P!=NP” is a silly answer.)
I’m sure there are hundreds, but here are the first five that came to mind:
General Relativity for Theoretical Computer Scientists<br>
Quantum Field Theory (Plus Strings and AdS/CFT) for Theoretical Computer Scientists<br>
The Workings of the World’s Economy for Theoretical Computer Scientists<br>
The Brain for Theoretical Computer Scientists<br>
Mulmuley’s GCT Program Made Easy
Reply to: ThereIsNoGoo #80:
Question: Do you believe in Goo?

** “Goo” is any instance of the continuum, e.g. a copy (topologically) of the real line, anywhere in the actual real physical world.
I suppose one can guess from your name which answer you’re hoping for! <img src="http://www.scottaaronson.com/blog/wp-includes/images/smilies/icon_smile.gif" alt=":-)" class="wp-smiley">
But it depends on what you mean by being “in the actual real physical world.” I believe that amplitudes in quantum mechanics are genuinely continuous—i.e., they’re arbitrary complex numbers of norm at most 1. The reason is that any attempt to “discretize” amplitudes seems to lead to mathematical nonsense.
Crucially, however, amplitudes <i>also</i> aren’t directly observable. They’re simply used to calculate probabilities—which I also believe can be arbitrary real numbers in [0,1], but which <i>also</i> aren’t directly observable.
And I also believe the <a href="http://en.wikipedia.org/wiki/Holographic_principle" rel="nofollow">holographic principle</a> from quantum gravity, which implies that any “physical” Hilbert space should be finite-dimensional, and that any measurement that we can physically perform should have only a finite set of possible outcomes.
(Whether the holographic principle is realized by space and time literally getting “discretized” at the Planck scale, or only in a more subtle way, I think has to be regarded as a profound open question, which might not even have an unambiguous answer.)
Anyway, if the above is correct, then in some sense the continuum would be implicit in our best <i>descriptions</i> of Nature, yet it would never manifest itself directly in experiments. So for example, we could never exploit the continuum to build “hypercomputers” able to solve the halting problem in finite time, etc., and there would be no physically-meaningful questions whose answers would depend on the truth or falsehood of the Continuum Hypothesis, the Axiom of Choice, or anything of that kind.
This balance between discrete and continuous probably won’t satisfy either the Wolframian digital-physics partisans or the Lubosian continuum-snobs! But it strikes me as not only strongly consistent with current physics but as having the harmonious ring of truth.
Reply to: Bill Kaminsky #93:
Question: Having done some further Googling and skimming, I’d like to revise my question from

To a rough order of magnitude, do you have any sense of what would be the size of the 3-SAT instance whose (un)satisfiability would signify the (non)existence of a proof of some interesting-to-humans theorem that’d fit within, say, 10 journal pages if the proof were rewritten in the usual, mathematics-for-humans form?

(see comment #36 above) to the following more qualitative (and more sweeping) one:

*******
Wait a second! What’s all this?! Is it somehow the case we could prove P = NP yet still be *unable* to search efficiently for short proofs of theorems that indeed possess short proofs?!
*******

I ask since my Googling and skimming found this paper from STACS 1994: Jean Goubault, “The Complexity of Resource-Bounded First-Order Classical Logic”, available freely at http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.41.3225 At the risk of violating your caveat #3 about not having to read anything from the literature, lemme quote a brief part of Goubault’s abstract and a brief part from his conclusion.

So do human mathematicians still have jobs in a world where P = NP but the polynomial hierarchy doesn’t fully collapse?

Thanks again, Scott! Yayyyy, tenure!!
Err, if P=NP, that <b>implies</b> that the entire polynomial hierarchy collapses to P!
But even setting aside that issue, the paper you link to must be defining the “proof search” problem in a nonstandard way. If you measure the “complexity” of a first-order proof in the most obvious way—i.e., by the number of symbols it contains over some fixed alphabet—and ask whether there exists a proof of a given statement with n symbols or fewer, then you clearly get a problem in NP.
Reply to: Aykut #100:
Question: Physics is looking for the axioms of nature by observing what nature produces (physical events/experiments). From the incompleteness theorem we know (in a vague manner) that any axiomatic system has either an incomplete or an inconsistent set of axioms. I have the following questions; since they all indicate the same phenomenon (I hope), they can all be considered as one. If not, please answer one of your choice.

1. Are physicists assuming that nature is consistent?
2. If yes, then it seems to me that at best we can have physical laws that can only explain some fraction of nature?
3. If no, then did we observe any inconsistent behavior in nature?

Thanks.
What on earth could it possibly <b>mean</b> for nature to be “inconsistent”? Or better: how would I recognize “inconsistent behavior” in nature if I saw it?
Certainly we can imagine <i>insane</i> behavior in nature. For example, maybe the gravitational force will triple in strength tomorrow, then on Thursday God will hold a press conference and explain that that was a big mistake and there will be no further changes to the laws of physics, but then on Friday all protons will turn into tiny unicorns.
But even if that happens, <b>there still won’t be any logical inconsistency!</b> 2+2 will still equal 4, Fermat’s Last Theorem will still hold, and the laws of physics will still be perfectly consistent—just vastly crazier than we’d thought.
In short, the way I use words, there’s no <i>conceivable</i> state of affairs that could correspond to “nature being inconsistent.”
<b>Addendum:</b> It’s probably relevant here to clear up a depressingly-common misconception regarding Gödel’s Theorem. Gödel didn’t prove, or find the slightest indication of, any “inconsistency in math.” In fact, if you’re unwilling to regard the truth or falsehood of arithmetical statements as having a definite meaning, then ironically you can’t even state Gödel’s Theorem itself! Gödel showed that any computable formal system for <i>reasoning</i> about arithmetic has to be either inconsistent or incomplete. For common formal systems like PA and ZFC, the truth is almost certainly the latter. But even if, implausibly, such systems turned out to be inconsistent, that would merely be a sign that we should switch to <i>better formal systems</i>! The truths of arithmetic themselves, like 2+2=4, would remain serenely unaffected by this formal axiomatic turmoil. Your common sense, and Winston Smith’s in the novel <i>1984</i>, are perfectly correct on that point. <img src="http://www.scottaaronson.com/blog/wp-includes/images/smilies/icon_smile.gif" alt=":-)" class="wp-smiley">
Reply to: Michael Sweney #132:
Question: “The mouse is eating cat food, but the cat food bowl is broken.” What conditions must be met for an answer to this to be acceptable to a grumpy zen master?
The cat food bowl must have been broken by a single hand that was clapping. A hand belonging to a good-natured zen novice. A novice who is also a hungry mouse. A mouse with hands, I suppose. [INSERT GONG SOUND]
Reply to: William Banks #134:
Question: Maybe, it’s the equality sign? Maybe, the very expression of mathematical equality is in its essence illusory?
Maybe, it’s the Psilocybin mushrooms? Maybe, a few folks here should try laying off them for a bit? <img src="http://www.scottaaronson.com/blog/wp-includes/images/smilies/icon_wink.gif" alt=";-)" class="wp-smiley">
On that note, I’m going to sleep now. By my count, I’ve answered 62 questions so far, with 25 remaining to be answered.
Reply to: Mark H. #176:
Question: ‘x’ is to a quantum computer as a quantum computer is to a conventional computer. What is ‘x’?
Yeah, yeah, I’ve been thinking about that one for more than a decade. It’s possible that there’s simply no ‘x’ that fills the analogy particularly well.
(For a comparison, try this, suggested by Greg Kuperberg: ‘x’ is to a round earth as a round earth is to a flat one. What is ‘x’?)
However, for my own best attempt to identify such an ‘x’, see my paper <a href="http://www.scottaaronson.com/papers/qchvpra.pdf" rel="nofollow">Quantum Computing and Hidden Variables</a>. There I propose a model of computation that’s able to search an unordered list of size N in time ~N<sup>1/3</sup>, compared to ~N<sup>1/2</sup> needed quantumly and ~N needed classically.
Reply to: Serge #24:
Question: Why is polynomial behavior considered the only practical definition of feasibility, given that the feasible algorithms actually require very low exponents and low constants to be executed on a computer?

Subsidiary question: What intuition tells you that the NP-complete problems couldn’t be solved by some polynomial algorithm which can’t be run because it doesn’t meet the above constraints?
The entire premise of your question is mistaken. Theoretical computer scientists have been thinking about particular exponents (and even leading constants), and how to make them as low as possible, for the past half-century!
Polynomial-time is universally recognized to be nothing more than the “first-pass approximation” to feasibility, but one that has the enormous advantage of mathematical simplicity (mostly because it’s closed under arbitrary polynomial-time reductions—in contrast to, say, quadratic time, which is <b>not</b> closed under arbitrary quadratic-time reductions).
See <a href="http://www.scottaaronson.com/blog/?p=122" rel="nofollow">this post</a> for my attempt to explain my intuitions. Most (not all) of them apply regardless of what the constants and exponents are.
Reply to: Tom #25:
Question: Who is your favorite complexity theorist?
The answer changes from week to week. <i>This</i> week, however, it’s Boaz Barak, because of <a href="http://windowsontheory.org/2013/05/06/reasons-to-care-in-honor-of-scott-aaronson/" rel="nofollow">this post</a>.
Reply to: B. Bogart #121:
Question: Given that simulations are intentionally created in the context of human knowledge and reality is independent of human knowledge, what is the relation between a perfect simulation and reality?
I’m not sure I agree with the premise that simulations need to be “intentionally created in the context of human knowledge”: the notion of “simulation” seems relevant whenever the functional properties of one system are replicated by another system with a completely different physical or mathematical substrate. And that could happen, e.g., entirely inside a computer, with no conscious human being around to recognize it as a “simulation.”
Furthermore, even if I grant the premise, I’m not sure why the <i>intention</i> in creating a simulation should be relevant in assessing the simulation’s “relation to reality.” Whatever our answer to that question—and whatever we interpret the question even to <i>mean</i> (it’s not obvious)—shouldn’t we be able to discuss the question purely in terms of the properties of the simulation itself, forgetting about who created it and why?
Reply to: Karthik #122:
Question: How does the below approach sound for factoring large integers?

* Get the nearest even number to the given integer.
* Factor the even number in many different combinations.
* Now get a collection of primes nearer to the above factors.
* Within the collection, multiply each number with every other number and check if we get the given number.

Thanks.
Err, it sounds like someone totaling their car before making it out of the driveway! <img src="http://www.scottaaronson.com/blog/wp-includes/images/smilies/icon_smile.gif" alt=":-)" class="wp-smiley">
- “The nearest even number” to a given n is n itself if n is even. If n is odd, do you want n-1 or n+1?
- How on earth do you “factor the even number in many different combinations”? You seem to have shoehorned the entire difficulty of the factoring problem into that one step! (Also, of course, every integer has exactly one prime factorization up to ordering, not “many different” ones.)
- Finally, the prime factors of n+1 (or n+2, etc.) need not be <b>anywhere close</b> to the prime factors of n. Factoring is extremely “non-robust” in that sense. Try some examples and you’ll see!
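Scott’s last point is easy to check for yourself. A minimal sketch (Python; plain trial division, so only sensible for small n; the function name is my own, not anything from the thread):

```python
def factorize(n):
    """Prime factorization of n (ascending, with multiplicity), by trial division."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

# Neighboring integers have completely unrelated factorizations:
for n in (1001, 1002, 1003):
    print(n, factorize(n))
# 1001 [7, 11, 13]
# 1002 [2, 3, 167]
# 1003 [17, 59]
```

No sharing of prime factors at all between consecutive integers here, and that is the typical situation (indeed gcd(n, n+1) = 1 always), which is what sinks the “nearby factors” idea.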
Reply to: Jimmy #133:
Question: Are you a Bayesian? If not, could you describe your non-Bayesian belief system in short?
I’d ascribe maybe a 40% posterior probability to Bayesianism being true. (Up from my prior of 20%, after reading <i>The Signal and the Noise</i> by Nate Silver.)
With 60% probability, I think quantifying our uncertainty using probabilities is great whenever possible, but is unambiguously meaningful only when an event is sampled according to a probabilistic process that we know something about theoretically (e.g., in quantum mechanics), or in the limited domain of “bettable events” (i.e., events that belong to a large-enough ensemble of similar events that one could form a decent betting market around them). In other cases—including many of the ones people care about the most—I think we’re really in a state of <a href="http://en.wikipedia.org/wiki/Knightian_uncertainty" rel="nofollow">Knightian uncertainty</a>, where at most we can meaningfully give upper and lower <i>bounds</i> on the probability of something happening. And in those situations, we might prefer “paranoid,” worst-case reasoning (as in Valiant’s PAC model, or indeed almost all of theoretical computer science) over Bayesian, average-case reasoning. Indeed, this might both explain and justify the phenomenon of <a href="http://en.wikipedia.org/wiki/Risk_aversion" rel="nofollow">risk-aversion</a> in economics, as well as well-known “paradoxes” of decision theory such as the <a href="http://en.wikipedia.org/wiki/Ellsberg_paradox" rel="nofollow">Ellsberg paradox</a>.
Again, though, that’s only with 60% probability, and is susceptible to revision as new information comes in.
Reply to: CognitiveScientist #114:
Question: (Congrats on your tenure!) Can you point to the main obstacles, and/or draft a research program, for how to build an intelligent machine/software? Where, for our purposes, an intelligent system is defined as one that can do everything that humans can do.
No, I can’t. (And even if I could, I certainly couldn’t do it in the space of a blog comment!)
The most “obvious” approach to building an intelligent machine would be to have nanobots swarm through a human brain recording all the relevant information about it, then simply to simulate the brain neuron-by-neuron and axon-by-axon in a digital computer. But of course, many people have discussed that idea for a long time, and we still seem incredibly far from being able to do it. I regard whether or not this can be done—and what even <i>counts</i> as “all the relevant information” about a brain—as some of the most profound scientific questions that there are.
Maybe it’s worth mentioning that, <i>if</i> you believe humans are basically “machines made of meat,” then a procedure to build intelligent machines already exists and is well-known. My wife and I carried it out ourselves this past year and can recommend it. <img src="http://www.scottaaronson.com/blog/wp-includes/images/smilies/icon_smile.gif" alt=":-)" class="wp-smiley">
Reply to: Jim Graber #28:
Question: Will we ever use a proof program the way we do Mathematica or Maple? Anytime soon? Why or why not?
It might be worth mentioning that <i>I’ve</i> still never really learned to use Mathematica or Maple! (As mentioned in previous answers, I more often use MS-DOS QBASIC.) So I regard it as entirely plausible that, a few decades from now, <i>other</i> math/CS researchers will regularly use automated theorem-proving programs, but <i>I</i> still won’t be using them. <img src="http://www.scottaaronson.com/blog/wp-includes/images/smilies/icon_smile.gif" alt=":-)" class="wp-smiley">
Or maybe not. The fundamental difficulty is well-known: the general problem of finding a proof (even a bounded-length proof) is NP-hard, whereas typically people only use Mathematica and Maple for problems that are in P or BPP. Of course, there are many NP-hard problems that computers, though they can’t solve them in the worst case in polynomial time, still typically solve much faster than the fastest humans! So maybe that will eventually turn out to be the case for (say) the problem of finding formal proofs in ZF set theory.
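To make “finding formal proofs at the SAT level” concrete: here is a minimal, hypothetical sketch of DPLL backtracking search in Python. It deliberately omits unit propagation, clause learning, and branching heuristics, which are precisely the refinements that let real solvers beat the worst case so often in practice:

```python
def dpll(clauses, assignment=None):
    """Minimal DPLL: clauses are lists of nonzero ints (DIMACS style,
    -3 means 'not x3'); returns a satisfying assignment dict or None."""
    if assignment is None:
        assignment = {}
    # Simplify: drop satisfied clauses, strip falsified literals.
    simplified = []
    for clause in clauses:
        if any(assignment.get(abs(l)) == (l > 0) for l in clause):
            continue  # clause already satisfied
        rest = [l for l in clause if abs(l) not in assignment]
        if not rest:
            return None  # clause falsified under this assignment: backtrack
        simplified.append(rest)
    if not simplified:
        return assignment  # every clause satisfied
    var = abs(simplified[0][0])  # naive branching: first variable we see
    for value in (True, False):
        result = dpll(simplified, {**assignment, var: value})
        if result is not None:
            return result
    return None

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
print(dpll([[1, 2], [-1, 3], [-2, -3]]))  # {1: True, 3: True, 2: False}
```

Worst-case exponential, of course; the point is that clever refinements of exactly this loop often do far better on the instances people actually care about.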
Even if it does, however, there’s still a further difficulty: math, as human mathematicians practice it, is really about understanding and explanation, about asking the right questions and finding answers that are humanly comprehensible, about finding the “right” definitions and axioms (the ones that lead to the “nicest” theorems), etc. etc. Finding a formal proof for a formally-stated proposition is only one (albeit important) aspect of math, much like cutting skill is only one aspect of being a surgeon. And the other aspects strike me as “AI-complete”—i.e., as hard to replicate as <i>any other human ability</i> that I can think of.
So I guess my prediction is this: automated theorem-provers will occupy a significant and growing niche in mathematical research (they already do have a niche—having led, for example, to the proof of the <a href="http://en.wikipedia.org/wiki/Robbins_algebra" rel="nofollow">Robbins Conjecture</a>). But even in cases where they’re used, humans will still be doing almost all of the “conceptual heavy lifting.” <i>Until</i>, that is, we pass the AI Singularity, when computers become able to write love poems and do all the <i>other</i> things humans can do.
(If anything, I’d guess that we’ll see computer-generated poetry worthy of being published in the <i>New Yorker</i>, <b>before</b> we see computers explaining math at a conceptual level! <img src="http://www.scottaaronson.com/blog/wp-includes/images/smilies/icon_smile.gif" alt=":-)" class="wp-smiley"> )
Reply to: Vinicius #158:
Question: How could you plan to become a “theoretical computer scientist” at the age of 16? I guess most people don’t even have a clue what that is at that age.
First, I started programming at 11—older than many other CS folks, but <i>immediately</i> I got interested in the “big questions” of what sorts of programs you could write and what sorts you couldn’t write, whether one programming language was more powerful than another, etc. (though I wouldn’t have expressed myself clearly about any of those questions). That led me to self-study for the AP Computer Science AB test (which, sadly, was <a href="http://www.scottaaronson.com/blog/?p=321" rel="nofollow">recently discontinued</a>), to programming competitions (which I enjoyed but never really distinguished myself in), to Robert Sedgewick’s book “Algorithms in C++”, and to books about fractals, genetic algorithms, and all sorts of fun topics like that. None of this was “complexity theory” (which I wouldn’t have been able to appreciate yet), but I guess it laid the groundwork.
Second, I was lucky enough to attend <a href="http://www.mathcamp.org/" rel="nofollow">Canada/USA MathCamp</a> at age 15, where Richard Karp gave a series of lectures about theoretical computer science. That was my first introduction to NP-completeness and to the P vs. NP problem.
Third, the summer I attended MathCamp, I also dropped out of my high school (which I mostly hated—that’s <a href="http://www.scottaaronson.com/writings/beehive.html" rel="nofollow">another story</a>), and enrolled in <a href="http://www.clarkson.edu/tcs/" rel="nofollow">The Clarkson School</a>, a wonderful program for high school students at Clarkson University in Potsdam, NY that (like MathCamp) I’d very warmly recommend to others. There, I was lucky enough to do a research project on hypertext design with <a href="http://people.clarkson.edu/~clynch/" rel="nofollow">Chris Lynch</a>, during which Chris taught me an incredible amount about CS research (he also gave me the Garey &amp; Johnson book, which I devoured).
Fourth, when I was 16 and a freshman at Cornell (which unlike Harvard, Princeton, Yale, MIT, or Stanford, had kindly accepted me with this bizarre background <img src="http://www.scottaaronson.com/blog/wp-includes/images/smilies/icon_smile.gif" alt=":-)" class="wp-smiley"> ), I foolishly decided to take Jon Kleinberg’s graduate course on algorithms, then Ronitt Rubinfeld’s graduate course on complexity theory. I didn’t understand anything too deeply (I only got A-’s), but I was enthralled with whatever I <i>did</i> understand. I distinctly recall my main ambition in life being to achieve 0.1% of Jon Kleinberg’s greatness (and this was in 1996, before Jon <i>really</i> got famous!)
Reply to: James Miller #17:
Question: My 8-year-old son likes computer programming. We have a system where he does X minutes of “learning” to earn X minutes of video game time, where I get to pick the learning. Do you have suggestions for types of learning that will help develop his interest and skill in programming?
Firstly, I’m not sure how much I like the idea of making learning a chore that your son has to do to “earn” his video game time! Unless, that is, he enjoys the earning of video game time as a “game” in itself. I feel like the ideal would be to find some topic—almost <i>any</i> topic, really—that he enjoys learning about so much that you have to stop him from spending <i>too much</i> time learning about it!
Having said that, I can say what developed <i>my</i> interest in programming: wanting to make cool pictures of fractals. Wanting to write little text-based adventure games, or Pong-like games, for my friends and me to play. Wanting to write a program that could compose its own jazz (it turns out to be easy: just string together a bunch of “jazzy” sounds randomly). Wanting to make a calculator that could handle complex numbers (I didn’t know, at the time, about any of the calculators or programs that <i>could</i> handle them). Wanting to know how long a random walk on the 2D grid, which repeatedly picks a random neighboring cell <i>among the cells that it didn’t already visit</i>, will go for before it traps itself on all sides. (It turns out to be about 70 steps on average: for some reason, I cared so much about this question that I still remember the answer after 18 years.)
Admittedly, I wasn’t yet doing any of that when I was 8 (mostly when I was 11-13). And the things that would fascinate “kids these days,” with their iPads and Twitters and whatnot, are probably different from the things that fascinated me back in days when MS-DOS QBASIC seemed like a radical new improvement over GW-BASIC (look ma, no line numbers!). But I hope it at least gives a few ideas that you could adapt for your son. I wish him the best.
Reply to: Dan Ashwander #145:
Question: Am I insane?
Searching for insight into this question (I’ve never met you, after all), I googled you, and quickly found out that you’ve published an <a href="http://www.amazon.com/Am-I-Insane-Dan-Ashwander/dp/0962276308" rel="nofollow">entire book</a> (!) with the title <i>Am I Insane</i>. Here’s Amazon’s description of the book:
Based on that description, I would say: <i>either</i> you seem to be engaging in some sort of sophisticated, ironic performance-art project, <i>or else</i> the prognosis doesn’t look too good regarding your question. <img src="http://www.scottaaronson.com/blog/wp-includes/images/smilies/icon_smile.gif" alt=":-)" class="wp-smiley">
Reply to: Peter Boothe #81:
Question: We know an algorithm (Levin universal search) that achieves the optimal runtime for all problems in NP intersect co-NP but we do not know the runtime of that algorithm. What’s up with that?
What’s up with it is that Turing machines can be enumerated and dovetailed over, yo! And each machine can be halted when and if it finds either an NP or a coNP witness, my man. But the simple fact that you can dovetail over all possible algorithms tells you <b>nuthin’</b> about the runtime of the optimal algorithm. Nothing too mysterious about it, know what I’m sayin’? <img src="http://www.scottaaronson.com/blog/wp-includes/images/smilies/icon_smile.gif" alt=":-)" class="wp-smiley">
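The dovetailing itself is only a few lines. A toy Python sketch (all names here are made up for illustration; each “machine” is modeled as a generator that yields None while still searching, yields a witness when it finds one, and simply returns if it halts empty-handed):

```python
def dovetail(machines):
    """Run an enumeration of machines in rounds, giving each surviving
    machine one more step per pass, until some machine finds a witness."""
    gens = [(i, make()) for i, make in enumerate(machines)]
    while gens:
        survivors = []
        for i, g in gens:
            try:
                result = next(g)
            except StopIteration:
                continue  # machine i halted without a witness
            if result is not None:
                return i, result  # machine i found a witness; halt everyone
            survivors.append((i, g))
        gens = survivors
    return None  # every machine halted empty-handed

def spinner():
    while True:  # a machine that runs forever without answering
        yield None

def divisor_search(n):
    for d in range(2, n):  # a machine hunting for a nontrivial divisor
        yield d if n % d == 0 else None

print(dovetail([spinner, lambda: divisor_search(221)]))  # (1, 13)
```

Note that the never-halting spinner costs only a constant-factor slowdown, which is exactly why the enumeration tells you nothing about the optimal machine’s runtime.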
Reply to: Jay #191:
Question:

> ‘x’ is to a round earth as a round earth is to a flat one. What is ‘x’?

Oblate spheroid.

> we still seem incredibly far from being able to do it [to simulate the brain neuron-by-neuron].

Actually we’re as far from that as 10 years.

> The Brain for Theoretical Computer Scientists

I saw at least three neuroscientists asking questions. Maybe you could just try an “Answer anything!” post?
So, your prediction is that by May 8, 2023, there will exist digital computers able to pass an unrestricted Turing Test, by simulating human brains neuron-by-neuron? Are you willing to bet with me about that? How much: $10,000?
Reply to: GooInterested #190: The point I was trying to make was that you need to explain exactly what it would <b>mean</b> for the real numbers to “exist in the real world,” or for reality to be “constricted to the Computable Real Numbers.”
Question: Reading up on your answer to “ThereIsNoGoo” (I am not him!), I think you misunderstood the point, or maybe my question is different… The point is not getting rid of continuity; it is: do the Real Numbers, as defined by Dedekind and Weierstrass, exist in the real world?

Isn’t reality constricted to the Computable Real Numbers? Or are all those undefinable reals (which are the uncountable part) just the consequence of a bad axiomatization?

IMO Lebesgue theory is another way this bad axiomatization shows up: you need to invent a way around these phantoms that the bad axiomatization brought with it…
There’s not some registry where we get to look up which mathematical concepts are “real” and which ones are “fictitious”! (People imagined such a registry for a long time—that’s why we’re still stuck with the misnomer “imaginary numbers”—but eventually they got over the misconception.)
So what can we do? Well, we can look at which mathematical structures <i>naturally</i> arise in our <i>best</i> descriptions of Nature. In that case, real and complex numbers of some sort obviously make the cut—although not, I don’t think, in such a way that it ever matters whether we mean “all” the real numbers, or the computable reals “only.” (This point cuts both ways: some people might say, then why not just restrict ourselves to the computable reals? while others would say: then why not use <i>all</i> the reals, without worrying about the superfluous restriction of computability?)
On the other hand, we can also ask questions like how many bits can be stored in a bounded region of space and reliably retrieved from it, or how many distinguishable outcomes a physical measurement can have. And on those questions, the universe reveals a much more “discrete” character.
I hope that clarifies my answer.
Reply to: itssummer #96:
Question: Followup to http://www.scottaaronson.com/blog/?p=1385#comment-73364

There already seems to be a question: http://cstheory.stackexchange.com/questions/4248/improved-lower-bound-on-monotone-circuit-complexity-of-perfect-matching

From what I understand from the answer and the question, it looks like the permanent, or a #P-complete problem, has only an n^{omega(log n)} lower bound, since perfect matching has such a lower bound. So an exp lower bound is known in this case. However an NP-complete problem has a sub-exponential (exp(n^a) with a < 1) lower bound from what you are quoting. Is my interpretation correct? Why is there a diff between #P and NP-complete lower bounds? I would have thought #P was harder and hence should have an exp(n^a) lower bound.
No, deciding whether a bipartite graph has a perfect matching (Razborov’s “logical permanent”) is <b>not</b> #P-complete; indeed it’s in P. It’s only counting the <i>number</i> of matchings that’s #P-complete.
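To make the contrast concrete: counting the perfect matchings of a balanced bipartite graph is exactly computing the permanent of its 0/1 biadjacency matrix, which Valiant proved #P-complete, while the decision version is in P (e.g. via Hopcroft–Karp). A brute-force Python sketch, exponential by design, just to show what’s being counted:

```python
from itertools import permutations

def count_perfect_matchings(adj):
    """Number of perfect matchings of a balanced bipartite graph,
    i.e. the permanent of its 0/1 biadjacency matrix (brute force)."""
    n = len(adj)
    return sum(
        all(adj[i][p[i]] for i in range(n))
        for p in permutations(range(n))
    )

# K_{3,3}: deciding that a perfect matching exists is easy (the decision
# problem is in P), but there are 3! = 6 matchings to count.
k33 = [[1, 1, 1] for _ in range(3)]
print(count_perfect_matchings(k33) > 0)  # True  (decision)
print(count_perfect_matchings(k33))      # 6     (counting)
```

Razborov’s monotone lower bound applies to the easy decision problem, which is why it can’t by itself give the exponential bounds one might hope for from #P-hardness.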
Reply to: Jay #194: OK, thanks for clarifying. Personally, I’d only be interested in betting about a milestone that had to do with the <i>actual intelligent behavior</i> of the neural net—not with the sheer number of neurons being simulated or some other internal criterion like that.
Question: Scott #192No, I’m just willing to bet on what I said, namely that we will have a large scale simulation of the brain neuron-by-neuron by 8 may 2023. Please don’t bet before reading about the Human Brain Project, and I won’t go over $1000. I’m not yet willing to bet on how much time this landmark will lead to digital computers able to pass a (reasonably unrestricted) Turing test. Imho this will depend on how much time it’ll take to go from “nanomachine” recording all neurons in a living zebrafish brain, published three months ago, to the same thing in a human brain. Let’s wait and see what happens this fall with the BAM/BRAIN project.
Reply to: Ray #163:
Question: How did being in communist Berkeley during your formative years affect you?
Well, not <i>everyone</i> at Berkeley is a communist (there are also libertarians, anarchists, apolitical stoners, possibly even one or two Republicans… <img src="http://www.scottaaronson.com/blog/wp-includes/images/smilies/icon_smile.gif" alt=":-)" class="wp-smiley"> ), but the strange environment there did affect me profoundly.
By any ordinary American standard, I’m a “leftist”: someone who cares deeply about global warming and endangered species and gay rights and women’s rights and voting rights and gun control and church/state separation and higher education funding and having a progressive income tax.
But when I came to Berkeley, I suddenly found myself on the “right”! I’ve written before on this blog about the loud arguments I heard there, right before the 2000 election, that Gore and Bush were equally contemptible corporate puppets, and the only thing to do was to vote for Ralph Nader. I hope the stupidity of that position is now obvious to <i>everyone</i>! <img src="http://www.scottaaronson.com/blog/wp-includes/images/smilies/icon_smile.gif" alt=":-)" class="wp-smiley"> Even at the time, I found it frightening enough that I became an Internet evangelist for “NaderTrading” (the scheme whereby people in swing states would vote for Gore, and would have people in safe states “vote for Nader in their place”). Sadly for the world, we fell short by just a few hundred swaps in Florida.
I’ve also written on this blog about how, at a “memorial service” held on the evening of Sept. 11, 2001, grief for the victims (who, of course, were just then being pulled out of the wreckage) almost immediately gave way to wildly-applauded speeches about how the US should look inward, examine its own moral failings, etc., rather than going to war against the poor people who had done this. A Communist organization was also on hand to distribute flyers about how it was basically our fault. I got disgusted and left.
For me, though, possibly the biggest turning point came when anti-Israel demonstrators took over Sproul Plaza, many abandoning pretense and carrying straightforward anti-Semitic signs (like an American flag with the stars replaced by a combination of corporate logos and Stars of David). They deliberately did this on Yom HaShoah (Holocaust Remembrance Day), in order to make the “point” that Israel’s treatment of Palestinians was at least as bad as the Holocaust. They then invaded academic buildings in order to disrupt classes and exams (one class I was taking had to be relocated to another building). Again, I didn’t read some journalist’s account of what happened; I was there and saw. Around the same time, anti-Israel activists also smashed windows of Berkeley’s Hillel building and spray-painted “Fuck the Jews” there, and beat up some Orthodox Jewish students. The administration, maybe justifiably fearful, was extremely slow about condemning these actions. In response, Jewish groups invited a few pro-Israel speakers to campus; they each required massive police presence and were still shouted down (called “murderers,” etc. etc.) at pretty much every sentence. Then I was at a city hall meeting where the activists tried to get the city of Berkeley to boycott all companies doing business in Israel (including Intel, Microsoft, and … well, you see the practical problem there <img src="http://www.scottaaronson.com/blog/wp-includes/images/smilies/icon_smile.gif" alt=":-)" class="wp-smiley"> ). The measure failed after “only” getting 3 out of 7 votes.
No, this wasn’t Nazi Germany. No, I didn’t overreact and fear for my life. But I was tempted to say: “so <i>this</i> is your liberal Utopia? <i>This</i> is your safe haven for intellectualism and humane tolerance? If so, I’ll take the America of Bush and Cheney and Fox News, if that’s my only other choice!”
So I feel extremely grateful that that <i>isn’t</i> my only other choice. There are still people in the US (and, of course, in <i>every</i> country) who basically believe in the liberal, democratic spirit of the Enlightenment, in what Karl Popper called “the open society.” Who reject every kind of fundamentalism: the all-American bible-thumping kind, but also the even more virulent Communist and Jihadist kinds. Who are more interested in advancing science and technology and using them to alleviate human misery, than in blaming all such misery on “the bourgeoisie” or “the elite intellectuals” or “the Jews.” Who want to correct our own numerous moral failings, but not if that means grovelling for forgiveness before a lunatic with a knife. Furthermore, these people are not some tiny minority: depending how you count, they probably constitute somewhat more than half of the country, and were responsible (among other things) for electing our current President—a man who I passionately support despite whatever failings he might have.
Reply to: GooInterested #204:
Question: “Israel’s treatment of Palestinians was at least as bad as the Holocaust”, you are right about condemning this analogy. But you should concede that Israel’s treatment of Palestinians is dangerously close to South Africa’s Apartheid regime…
I concede that there’s a vague analogy, particularly if you consider the nutty West Bank settlers. However, the analogy would be stronger if the United Nations had proposed giving most of South Africa to its black population and only a tiny sliver to the white; if the white population had grudgingly agreed; if the black population had not only rejected the deal, but spent the next few decades launching three wars against the white population, each with the loudly-proclaimed goal of wiping the whites off the face of the earth, and each with enthusiastic support from all the neighboring African countries.
For further insight, we can look at the opening words of Nelson Mandela’s 1955 Freedom Charter:
Now let’s compare to the 1988 Hamas Charter:
See a difference? <img src="http://www.scottaaronson.com/blog/wp-includes/images/smilies/icon_smile.gif" alt=":-)" class="wp-smiley">
And true to their words, according to Wikipedia, even <a href="http://en.wikipedia.org/wiki/Nelson_Mandela#Umkhonto_we_Sizwe_and_African_tour:_1961.E2.80.931962" rel="nofollow">Mandela’s sabotage operations</a> were designed “to exert maximum pressure on the government with minimum casualties, bombing military installations, power plants, telephone lines and transport links at night, when civilians were not present.” The Jihadists, of course, pack their bombs with nails and shrapnel in an effort to kill as many civilians as possible and maximize their heavenly reward.
So, carrying your analogy further, one might say that the real tragedy of the Palestinian people is that they haven’t yet had their Mandela.
Reply to: Peter #151:
Question: Given the potential security applications of quantum computing, what do you think will be the delay between (A) a “useful” large-scale quantum computer being built and used, and (B) such a computer– perhaps a different one– being publicly announced (and later accepted by the community)? And which will happen first?
Well, D-Wave has already done (B), but as far as we can tell today they haven’t yet done (A)—so I think we can say with some confidence that (B) precedes (A)! <img src="http://www.scottaaronson.com/blog/wp-includes/images/smilies/icon_smile.gif" alt=":-)" class="wp-smiley"> Asking by <i>how much</i> (B) precedes (A) is equivalent to simply asking how long it will be until we get useful QCs—which of course is a huge question, and one that I’ve touched on in various ways many times on this blog, but not one where I like to speculate. It would <i>astonish</i> me, though, if we were less than (say) 15 years from (A).
Reply to: Ramsey #147:
Question: How do you work? More specifically, how do you go about coming up with ideas, how do you structure your work time, and what do you do when you’re stuck?Also, how have your work habits changed over time?
For the first two questions, I’ll point you to <a href="http://www.scottaaronson.com/blog/?p=741#comment-26299" rel="nofollow">a post in my previous “Ask Me Anything”</a> that addressed them.
As for how my work habits have changed over time: between teaching, supervising students, various administrative duties, answering emails from journalists and all sorts of other people, infant care, and just generally being a more sluggish, less motivated person, I have <b>vastly</b> less time for research than I did when I was in grad school.
Reply to: Ray #210:
Question: Wow, I knew that Berkeley was bad but I had no idea that it was that bad.
To be fair, there are many, <i>many</i> wonderful things about Berkeley. The weather, for example. And the fresh fruit smoothies. And the fact that you can eat so well so cheaply. Or expensively (Chez Panisse), or anything in between. And walking down Telegraph Ave. over the weekend and seeing all the loony characters there. I get nostalgic just thinking about it, and am happy whenever I get to return for a visit.
It’s relevant, of course, that I spent the vast majority of my time at Berkeley with the computer scientists in Soda Hall, one of the finest groups of people with whom I’ve ever been associated. If anyone else were considering going to Berkeley to study CS, math, physics, etc. I <i>certainly</i> would never suggest they shouldn’t go because of politics (at most I might try to convince them that MIT is cooler <img src="http://www.scottaaronson.com/blog/wp-includes/images/smilies/icon_wink.gif" alt=";-)" class="wp-smiley"> ). Probably I spent less than 2% of my time at Berkeley worrying about political issues like those I described in comment #201. But since those are what I was asked about, those are what I discussed.
Reply to: Everyone: Yeah, I know that it’s Thursday morning, and I have roughly a dozen questions still to answer! If your question wasn’t answered yet, please be flattered: it means yours was one of the ones that I saved to really think about. <img src="http://www.scottaaronson.com/blog/wp-includes/images/smilies/icon_wink.gif" alt=";-)" class="wp-smiley"> I’ll try to get through them today.
Reply to: internationalstudent #182:
Question: Why does the US not limit temporary visas by country but limit the number of permanent visas (green cards) to 6000 per country? This brings in 50,000 H-1Bs from India and maybe 20,000 from China, and they have to be indentured to a particular company (they cannot leave the company) at a particular position for a decade or more (if the employee leaves, his queue starts all over again; if he is laid off, he becomes an illegal), while someone from Taiwan or the UK can get a green card in 6 months. The people who formulate the laws are from the best schools in the US (like MIT or Harvard). Do you think the lawmakers, and hence the US population they represent, voluntarily support a mild form of slavery, or rather indentured labor, even almost 150 years after emancipation? What mathematical law can explain such absurdities? I myself left the country to see my sick mom after a decade, and now I have problems getting back. Why are sensible modifications to the laws not achievable by people trained by the smartest professors on earth?
First off: I’m incredibly sorry to hear about your predicament. Personally, I regard US immigration policy—and particularly this weird, persistent issue of people with visas leaving the country and then not being allowed back in—as something of a national disgrace.
If I had to sum up US immigration policy in one sentence, I’d say: we somehow succeed brilliantly both at keeping <a href="http://en.wikipedia.org/wiki/Paul_Erd%C5%91s" rel="nofollow">Paul Erdős</a> and <a href="http://en.wikipedia.org/wiki/Andris_Ambainis" rel="nofollow">Andris Ambainis</a> out, <i>and</i> at letting Mohamed Atta and Tamerlan Tsarnaev in!
I suspect the heart of the problem is simply INS bureaucrats blindly applying rules made up by an even more out-of-touch Congress, rather than people who are able or allowed to apply creative intelligence to questions like: “<i>who is this person, actually?</i> is this a brilliant scientist who we should be <i>grateful</i> to have come to the US? is the possibility that he’s a criminal or terrorist something we need to seriously consider, or is that the stuff of comedy?”
Of course I’m lucky that, being a US citizen, I never had to bear the brunt of these policies. Well, until recently: now that I’m married to a visa-and-then-green-card-holder, I’ve seen firsthand some of the weird, arbitrary indignities she’s had to go through.
The one place where I disagree with you is in your intimation that <i>MIT and Harvard professors</i> (!) are somehow responsible for this situation. I mean, think about it: if we as a group had that much power, then wouldn’t we have <i>also</i> managed to get the US to enact sensible gun control policies, or set carbon emissions caps, or ratify all those perfectly-innocuous UN treaties that it’s so far refused, etc. etc.? Congress—and the paranoid, reactionary part of the country that Congress overrepresents—are not exactly refusing to do these things at the behest of Harvard professors. <img src="http://www.scottaaronson.com/blog/wp-includes/images/smilies/icon_smile.gif" alt=":-)" class="wp-smiley">
OK, I just looked it up, and it seems that at least in 2010, there were <a href="http://www.usnews.com/news/slideshows/the-top-10-colleges-for-members-of-congress/2" rel="nofollow">15 members of Congress</a> with bachelor’s degrees from Harvard (13 Democrats and 2 Republicans), and many others with business or law degrees from there, but I can’t find <i>any</i> current Congresspeople who went to MIT (does anyone else know of any?). Also, according to <a href="http://www.crisp360.com/hosted-infographic/where-did-congress-go-college-full" rel="nofollow">this graph</a>, 34.8% of Congresspeople have degrees in government and law, and 20.9% in the humanities, compared to only 11.5% in science and technology. So even if maybe, just maybe, you could put <i>some</i> of the blame for our current political problems at Harvard’s feet, I really don’t see how you could put any at MIT’s. <img src="http://www.scottaaronson.com/blog/wp-includes/images/smilies/icon_biggrin.gif" alt=":-D" class="wp-smiley">
Reply to: Raoul #215: Sorry, fixed!
Question: Re #84: I think a base guitarist is a bass player that can only thump away at the base note in the chord.
Reply to: Juan #218:
Question: I just reread the rules and realized my previous (and only) question may be in violation of #3. Doh! If Scott doesn’t mind, I would like to ask another question as a replacement: What’s your favourite Israeli dish? Is there anything in particular you would recommend, and that perhaps the folks at home could make? :p
I’ve actually been <b>meaning</b> to read Deutsch’s new paper, but I haven’t gotten around to it yet! Partly prompted by your question, I just printed it out tonight, but it might take me some time.
Regarding your replacement question, unfortunately I can’t really make <i>anything</i>—American, Israeli, you name it! (Well, I can make fruit smoothies and sorbets, and once I boiled some eggs.) But here are the things I like to eat most when I go to Israel:
- Chicken schnitzel in a pita with hummus, tahini, salad, and french fries<br>
- “Sabich” (pita with eggplant, hard-boiled egg, hummus, tahini, and hot sauce)<br>
- Frozen yogurt with whole fruits (dates, figs, bananas, melon…) and chocolate mashed into it<br>
- Fresh pomegranate juice
Another dish that I’m told is very easy to make, and that Dana loves, is shakshuka: basically tomato sauce with mixed vegetables simmered in a pan, with some eggs then cracked over it. You then mop it up with bread.
Reply to: Clayton #120:
Question: If you guess randomly, what are the approximate odds of getting the answer to this question correct?
A) 25%
B) 50%
C) 33%
D) 25%
Yeah, that’s a fun one! I’ll assume “randomly” means “uniformly at random.” Then presumably, a valid answer is any probability p that satisfies the consistency condition
p = (# answers that are p) / 4.
So for example, if there were one 25% option, or two 50% options, or three 75% options, then <i>any</i> of those would constitute a consistent answer. As it is, however, the unique consistent answer is
<b>p = 0,</b>
even though of course that isn’t one of the options listed.
The whole thing is reminiscent of computing with closed timelike curves; see <a href="http://www.scottaaronson.com/papers/ctc.pdf" rel="nofollow">this paper</a> for details.
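To make the fixed-point reasoning concrete, here’s a quick sketch (my illustration, not from the original thread) that checks the consistency condition over the listed options using exact arithmetic:

```python
from fractions import Fraction

# The four listed answers, as exact fractions (reading 33% as 1/3).
options = [Fraction(1, 4), Fraction(1, 2), Fraction(1, 3), Fraction(1, 4)]

def f(p):
    """Chance of being 'correct' if the correct answer happened to be p."""
    return Fraction(options.count(p), len(options))

# A consistent answer is a fixed point p = f(p). Any p not among the
# options maps to 0, so it suffices to check the options plus 0 itself.
candidates = set(options) | {Fraction(0)}
fixed_points = [p for p in candidates if f(p) == p]
print(fixed_points)  # [Fraction(0, 1)]
```

As expected, none of the listed options is a fixed point; only p = 0 survives.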
Reply to: Joshua Zelinsky #44:
Question: A while ago you mentioned the class of problems “Are there infinitely many n such that BB(n) is [even], [odd], [prime], etc.?” where BB(n) is the Busy Beaver function. In that context, is it even obvious that for any k, BB(n+1)−BB(n) &gt; k for all sufficiently large n? I’d certainly guess so, but I don’t see any obvious way to rule out almost all of BB(n)’s growth occurring at values of n that are far away from each other.
What a <font color="red"><b>beautiful</b></font> question! I saved it till tonight only because, unlike most AMA questions, I found it neither easy enough to answer nor hard enough to flippantly non-answer. You aimed for, and hit, the dreaded “thought-requiring intermediate zone.” <img src="http://www.scottaaronson.com/blog/wp-includes/images/smilies/icon_smile.gif" alt=":-)" class="wp-smiley">
<b>Observation 1:</b> it’s easy to design Turing-universal models of computation where what you ask for does <b>not</b> hold. For example, let BB(n) be the largest finite number of steps taken by any program with n bits or fewer, and suppose our programming language (like most real languages) requires programs to contain an integer number of bytes. Then obviously
BB(8n)=BB(8n+1)=…=BB(8n+7)
for all n.
So, just like with the questions of whether BB(n) is even, odd, prime, etc., to prove your assertion in the case of Turing machines—and I’d be <i>flabbergasted</i> if it failed for them!—we’ll need to use something specific about the Turing machine model. (I assume you also observed this, and that it’s why you asked the question in the first place.)
<b>Observation 2:</b> For the Turing machine model, we at least have BB(n+1)≥BB(n)+1 for all n. Proof: With n+1 states, take an n-state Busy Beaver, but then instead of it halting, have it transition to a “purgatory” state where it does nothing for one more time step and <i>then</i> halts.
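Observation 2 can be sanity-checked against the known values of the maximum-shift function S(n) for n-state, 2-symbol Turing machines. A minimal sketch, using the standard published values (with the 5-state value being, at the time of this exchange, only the conjectured champion):

```python
# Known values of the n-state, 2-symbol maximum-shift function S(n);
# S(5) = 47,176,870 is the long-standing champion machine's value.
S = {1: 1, 2: 6, 3: 21, 4: 107, 5: 47_176_870}

# Observation 2 (the "purgatory state") predicts S(n+1) >= S(n) + 1.
for n in range(1, 5):
    assert S[n + 1] >= S[n] + 1
print("S(n+1) >= S(n) + 1 holds on all known values")
```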
<b>Observation 3:</b> Suppose we modify the 1-tape Turing machine model in the following way. There exists a special symbol #, such that the tape is initially filled with #’s, but the TM itself is never allowed to write #’s, only 0′s and 1′s. In that case your conjecture holds: indeed for all sufficiently large n, we have
BB(n+1) ≥ BB(n) + 0.49log<sub>2</sub>BB(n).
Proof: Similarly to Observation 2, we’ll define an (n+1)-state machine that first simulates the n-state Busy Beaver, then enters a special “purgatory state” instead of halting. But now, the purgatory state will <i>repeatedly</i> move either left or right (whichever leads to a longer running time) until it encounters a # symbol, and only then halt.
Let k be the total number of squares visited by the n-state Busy Beaver. Then clearly, the number of additional steps we can “buy” this way is at least k/2. Moreover, we must have
nk2<sup>k+1</sup> ≥ BB(n),
since otherwise we’d enter the same configuration twice and loop forever. Solving,
k + log<sub>2</sub>(k) ≥ log<sub>2</sub>(BB(n)) – log<sub>2</sub>(n) – 1.
And for all sufficiently large n, the above will give us (say)
k ≥ 0.98 log<sub>2</sub>(BB(n)).
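As a numerical sanity check on that last step (my addition, not in the original comment): the smallest k satisfying n·k·2^(k+1) ≥ B does exceed 0.98·log₂(B) once B is astronomically larger than n, which BB(n) certainly is. Here B ranges over huge powers of two as a stand-in for BB(n):

```python
def min_k(n, B):
    """Smallest k with n * k * 2**(k+1) >= B (else a configuration repeats)."""
    k = 1
    while n * k * 2 ** (k + 1) < B:
        k += 1
    return k

n = 10
for exp in (1000, 2000, 4000):
    B = 2 ** exp  # stand-in for the vastly larger BB(n)
    assert min_k(n, B) >= 0.98 * exp
print("k >= 0.98 * log2(B) holds for all tested B")
```

For smaller B (say 2^50) the 0.98 factor fails, which is exactly why the observation needs “for all sufficiently large n.”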
Now, it seems likely that we could modify the above construction to make it work even without the ‘#’ condition! To do so, we need a way to take any n-state TM M, and modify it to an (n+1)-state TM M’ that first simulates M, then uses a single additional state to move much of the way across the visited part of the tape and then halt.
<b>Observation 4:</b> Suppose we could show that, for some universal constant c, it’s possible to output a self-delimited description of an arbitrary n-state TM using an (n+c)-state TM. (This seems probably true, though I’m not sure how to prove it! I only know how to output a description of an arbitrary n/log(n)-state TM. And even with ‘string-based’ programming languages like C, Python, etc., we’d still have to worry about the self-delimiting issue, which could add an unwanted log(n) factor to c.)
Now, if we could show the above, then I claim that there’s certainly <i>some</i> constant c such that
BB(n+c) &gt; 2BB(n)
for all n. Or more generally, that for any computable function f, there exists a constant c<sub>f</sub> such that
BB(n+c<sub>f</sub>) &gt; f(BB(n))
for all n.
Proof: If the hypothesis holds, then there exists an (n+O(1))-state TM that explicitly writes to tape the (self-delimited) value of BB(n), by first writing a self-delimited description of an n-state Busy Beaver, then simulating that Busy Beaver using a fixed <i>universal</i> TM. Therefore, for any computable function f, there also exists an (n+O<sub>f</sub>(1))-state TM that explicitly writes to tape the self-delimited value of f(BB(n)). So if
BB(n+c<sub>f</sub>) ≤ f(BB(n)),
then it can also compute BB(n+c<sub>f</sub>), and therefore solve the halting problem for (n+c<sub>f</sub>)-state TMs. But if we choose c<sub>f</sub> sufficiently large, then this, in turn, would enable such a TM to find, and output, a Kolmogorov-random string with more bits than the length of the TM’s own description. Since this is a contradiction, we conclude that
BB(n+c<sub>f</sub>) &gt; f(BB(n)).
I’m afraid I’m now entering my own “halt” state, at least for tonight! But I’d love to think about this further, and also encourage others to do so. Would you like to post your problem on MathOverflow or CS Theory StackExchange, or would you prefer that I do it?
Reply to: Artem #45:
Question: Computational modeling and simulations are becoming very common in evolutionary biology and ecology, as is the vague concept of “complexity” (in the SFI sense). However, I am not seeing much headway of computational complexity into evolutionary biology and ecology. Valiant’s work on evolvability (as an extension of PAC-learning) is interesting, but does not resonate with biologists, and Chaitin’s metabiology is a false start. How can computational complexity (and theory A more generally; i.e. algorithmic lens) best contribute to evolution of biological and social systems? Can this contribution ever be as substantial as the one to physics?
This is a huge question—one that I strongly encourage everyone in our community to think about—but <i>way</i> too broad to answer in the space of a blog comment! I.e., if I had an answer that satisfied me, then rather than sketching it here, I’d write a paper about it, and hope to launch a whole new branch of CS theory the way Valiant has on several occasions.
The central difficulty is obvious: to whatever extent your model is “real” CS theory, biologists and social scientists will probably deride it as too idealized; conversely, to whatever extent biologists and social scientists like your model, it will probably have too many “arbitrary” aspects for CS theorists to prove anything clean about it.
Let me put it this way: to connect computational complexity (as actually practiced by computational complexity theorists) to physics (as actually practiced by physicists), seems to me like digging a tunnel between England and France. Highly nontrivial, but by 1994 people finally managed to do it!
To form an equally-compelling link between computational complexity and biological evolution feels to me like digging a tunnel between England and the US. A worthy aspiration, but orders of magnitude more ambitious!
Of course, as you’re probably aware, there are <i>aspects</i> of biology (e.g., DNA sequence alignment, protein folding, phylogenetic tree reconstruction), and <i>aspects</i> of social science (e.g., game theory, auctions, “Six Degrees of Separation”), that already have pretty compelling links with CS theory. But I assumed above that you were after something bigger: namely, a “complexity-theoretic re-imagining” of the entire foundations of biological or social evolution—a new yet mathematically-precise way to frame the central questions, etc.—that’s as fruitful as the complexity-theoretic re-imagining of quantum mechanics that’s already well underway.
Reply to: internationalstudent #228: Dude, your first question was already well after the deadline. To ask a <i>second</i> question after the deadline is just presumptuous. But, alright … I have no idea when the Shannon capacity of cycles problem will be solved, but it would astonish me if it were anywhere near as hard as P vs. NP.
Question: Thank you, Professor. However, I am not completely satisfied with your answers. Can I take “Congress—and the paranoid, reactionary part of the country that Congress overrepresents…” as a hint to the question “Do you think the lawmakers, and hence the US population they represent, voluntarily support a mild form of slavery, or rather indentured labor, even almost 150 years after emancipation?” I am highly inclined to say it may be constitutionally illegal to have insane temporary visa rules that make people from a few countries suffer while people from a majority of elite countries reap the benefits of US immigration. I wonder if we would even be having this discussion if the European Union were considered one supra-country and the whole ‘supranation’ were subjected to the same per-country quota as China and India? One technical question: do you think the Shannon capacity of cycles will be known before the end of the decade, or is it as hard as the P=NP question? What techniques do you think will make it decidable either way?
Reply to: NotAnonymous #123:
Question: Is this AMA still going on? In reference to the “age ~10” part of Scott #51: What videogame(s) would you design based on your research interests (broadly construed), and are all ideas on this blog public domain/free to use?
Probably I <i>wouldn’t</i> design such games, because they wouldn’t sell! <img src="http://www.scottaaronson.com/blog/wp-includes/images/smilies/icon_smile.gif" alt=":-)" class="wp-smiley">
I did once write a game based on the Euclidean Traveling Salesman Problem—where you simply had to find the shortest route you could connecting a bunch of randomly-chosen points in the plane. And it was actually kind of fun, and maybe games of that sort could be used to do empirical studies of how <i>humans</i> solve NP-complete problems when confronted with them. (I believe Michael Kearns has already done some studies of that kind.)
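The core of a game like that fits in a few lines. This is a hypothetical reconstruction (not Scott’s original code): scatter random points, then score the player’s tour against a greedy nearest-neighbor baseline.

```python
import math
import random

def tour_length(points, order):
    """Total length of the closed tour visiting points in the given order."""
    return sum(math.dist(points[order[i]], points[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def nearest_neighbor_tour(points):
    """Greedy baseline: always hop to the closest unvisited point."""
    unvisited = set(range(1, len(points)))
    tour = [0]
    while unvisited:
        last = points[tour[-1]]
        nxt = min(unvisited, key=lambda j: math.dist(last, points[j]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(10)]
baseline = nearest_neighbor_tour(pts)
print(f"baseline tour length: {tour_length(pts, baseline):.3f}")
```

A player’s clicked-out ordering would be scored with the same `tour_length`, and “winning” means beating the greedy baseline.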
I’ve also tried out videogames that tried to teach the principles of special relativity and of quantum mechanics (not both in the same game…). However, those games mostly just induced a feeling of confusion! As someone who already knows the theories in question, I had a hard time figuring out what the point of the games was, or how the theories were reflected in them, or what people were supposed to learn about the theories by playing the games. So I regard it as an extremely interesting open problem whether it’s possible to create a videogame that <i>successfully</i> provides intuition for modern physics!
(The easier problem, of creating a videogame that provides good intuition for Galilean physics, has arguably been solved by Angry Birds. <img src="http://www.scottaaronson.com/blog/wp-includes/images/smilies/icon_smile.gif" alt=":-)" class="wp-smiley"> )
Reply to: Clayton #232: Well, it’s clear that not every question of this form has to be soluble! Here’s a simple example that captures the difficulty:
Question: Scott, thanks for the attempt! I’m not entirely convinced there is a good way of asking or of answering this. Your reasoning, “…a valid answer is any probability p that satisfies the consistency condition p = (# answers that are p) / 4…”, seems totally sound and indeed makes the question insoluble. One way I thought of possibly answering this is to solve for the expected value of an answer that could be valid given two appearances of 25% and one appearance of 50%. Then we’d have 0.25 * 1/2 + 0.5 * 1/4 + X * 1/4 = X, and solving for X we get 0.3333… This method has the downside of obviously not satisfying the criteria above, but has the upside of being soluble. Can you think of a way of phrasing the question that would make that solution acceptable, but wouldn’t give away the trick? It seems kind of pointless to say “solve for the final value in this list of four answers that would provide the correct expected value of answering a four-point multiple-choice question correctly…” I haven’t been able to think of a better way of posing it, though.
Reply to: Jay #155:
Question: Suppose you were a software designed to conduct some experiment and perform bayesian reasoning. For what kind of experiment do you think your computation should diverge because of possible copies of yourself?
After rereading it several times, I still don’t understand this question! For example, I don’t know what it means for a computation to “diverge” (does it just mean to run forever?).
Having said that, yes, there are well-known examples where it’s totally unclear how software should be written in order to reach the “correct Bayesian answers” in situations where there might be multiple copies of itself. For example, suppose that we toss a fair die, and if it lands 6, we make 1000 copies of your program P, and if it lands anything else, we leave just 1 copy. Then we ask P to guess whether the die landed 6. If it guesses “no,” then it will be right with probability 5/6, and wrong with probability only 1/6. However, if it guesses “yes,” then <i>most</i> of the copies of P that could ever come into existence (or “do” come into existence, in a quantum multiverse) will be right! So, what should P be trying to maximize? What should its goal be? For more about this question, I’d encourage you to read <a href="http://www.amazon.com/Quantum-Computing-since-Democritus-Aaronson/dp/0521199565/" rel="nofollow"><i>Quantum Computing Since Democritus</i></a> Chapter 18, or the book <a href="http://www.amazon.com/Anthropic-Bias-Observation-Selection-Philosophy/dp/0415883946" rel="nofollow"><i>Anthropic Bias</i></a> by Nick Bostrom.
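The tension in that die-and-copies example shows up directly in a Monte Carlo sketch (my illustration, not from the thread): per toss, guessing “no” is usually right; per copy, guessing “yes” is.

```python
import random

# A fair die is tossed; on a 6, 1000 copies of the program P exist,
# otherwise 1 copy. Compare the two scoring rules.
random.seed(1)
trials = 100_000

no_right_trials = 0   # trials where guessing "no" is correct
yes_right_copies = 0  # copies that are correct when every copy guesses "yes"
total_copies = 0

for _ in range(trials):
    six = random.randint(1, 6) == 6
    copies = 1000 if six else 1
    total_copies += copies
    if six:
        yes_right_copies += copies
    else:
        no_right_trials += 1

print(f"'no' right in {no_right_trials / trials:.3f} of trials (about 5/6)")
print(f"'yes' right for {yes_right_copies / total_copies:.3f} of copies")
```

Which denominator P “should” care about (trials or copies) is exactly the unresolved question.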
Reply to: jh #237:
Question: Scott #225: A trivial extension of your Observation 3: there exists c such that BB(n+c) &gt; BB(n) + 0.49 log BB(n). Just take the BB on n states and supplement it with keeping track of the leftmost and rightmost visited cells and the current head position. Then apply your argument.
Thanks! I had the same idea, but I couldn’t immediately see how to make it work. The issue is this: if our alphabet consists only of 0 and 1 (and we have only one tape), then how exactly do we keep track of the leftmost and rightmost visited cells, without using some sort of self-delimiting code, which might add a number of states to our TM that grows with n (like n or at least log(n))?
We could try to do something like this: encode each “0” cell as “00”, and each “1” cell as “10”. And use “11” to signal the start or end of the visited part of the tape. The problem is that the logic needed to handle the “even-numbered squares” actually seems to <i>double</i> the number of states! I’d be very interested if you or anyone else had a solution to this problem.
Reply to: Sniffnoy #166:
Question: Which would be more surprising: If PH were to equal PSPACE, or if the polynomial hierarchy were to collapse without PH being equal to PSPACE?
This is another question that hit the dreaded “thought-requiring intermediate zone”! (See comment #225.)
Let me first rephrase the question in a way I find easier to think about: what’s my estimate for
Pr[ PH=PSPACE | PH collapses ]?
Of course, we could ask the following closely-related and better-known question: what’s my estimate for
Pr[ P=PSPACE | P=NP ]?
For both of these questions, the central difficulty is that the hypothesis is something I consider <b>so</b> unlikely that I have a hard time forming “reasonable judgments” about a hypothetical world where it held. (Compare: “if aardvarks suddenly started speaking fluent English, would they also want to learn math and science?”)
On the other hand, we <i>do</i> have some past experience with a surprising thing happening to NP, and then the surprise “percolating all the way up to PSPACE.” E.g., coNP⊆IP was followed within less than a year by P<sup>#P</sup>⊆IP and then by IP=PSPACE. Likewise, the existence of a computational zero-knowledge protocol for NP then percolated up to all of PSPACE being in CZK assuming the existence of one-way functions. In short, while PSPACE seems like a pretty secure “containment facility” for complexity-theoretic surprises, the same can’t be said for NP or PH.
So, based on that experience, I’m inclined to say that if P=NP, then “all bets are off” and the probability of P=PSPACE is certainly far from negligible (maybe 1/2? I don’t know). Similarly, if PH collapses then the probability of PH=PSPACE is far from negligible.
(Incidentally, we could also ask both of these questions with P<sup>#P</sup> in place of PSPACE, though for me it wouldn’t change too much.)
// Scraper used to produce this gist: run in the browser console on the AMA
// post page. For each comment whose author is "Scott", print his reply and,
// when its first paragraph references "#N", the text of comment N (the
// comment list is 0-indexed, hence the subtraction of 1).
$('cite a.url').each(function() {
  if (this.text === "Scott") {
    $(this).closest('li').each(function(i, e) {
      $(e).children('p').each(function(ii, ie) {
        if (ii === 0) {
          console.log("Reply to: " + ie.innerHTML);
          // Extract the "#N" comment number, if any, from the first paragraph.
          var match = /#(\d+)/.exec(ie.innerHTML);
          var commentId = match ? parseInt(match[1], 10) : 0;
          if (commentId) {
            console.log("Question: " +
              $('ol.commentlist>li').eq(commentId - 1).children('p').text() +
              "\n\n");
          }
        } else {
          console.log(ie.innerHTML);
        }
      });
      console.log("\n\n");
    });
  }
});