The first Breakthrough Prize Ceremony to include mathematicians was held in November 2014 in a 198-foot high former airship hangar in Mountain View, CA. I drove down from Berkeley, where I was spending a few months that fall. A press pass — which I still have — gave me access to the Red Carpet and to a press tent off the main banquet room, where, along with the other more or less legitimate journalists and paparazzi, I enjoyed refreshments and a view of the official ceremony and celebrity banquet on a giant screen. My account of this experience was published on Slate, which had procured my press pass.
The very next day I drove back from Berkeley to Stanford to attend the Math Panel at the Breakthrough Prize Symposium. The video is still online on the Breakthrough YouTube channel. About six months later I wrote a few paragraphs about the reaction of the five laureates when Breakthrough Prize initiator Yuri Milner asked them what they thought of Vladimir Voevodsky’s Univalent Foundations program, and whether computer-verified proofs were likely to be widespread in mathematics in the future. Although the discussion was brief, it represents one of the few opportunities to see what some of the most prominent mathematicians of 2014 — all of them still prominent today — had to say about the question that motivates much of this newsletter. Those who see formalization as the future of mathematics should ponder the Simon Donaldson quotation that starts, “One doesn’t read a mathematical paper…” before making rash predictions.
I reproduce my blog post below.1 The discussion that followed my post was by a wide margin the liveliest and most wide-ranging of any that my blog had the privilege to host, and I’ve included a few of the first reactions to give a sense of the intensity of the exchange. Voevodsky, regrettably, is gone, and mathematicians now mainly imagine formalization on the model of Lean rather than HoTT, but most of the discussion is still relevant today.
“No Comment” was Jacob Lurie’s reaction when the panel of Breakthrough Prize laureates was asked, at last November’s Breakthrough Prize Symposium at Stanford, what they thought of the “prospects for the Univalent Foundations program and whether that is a contender for changing” foundations of mathematics. The other panelists had deferred to Lurie, in some amusement; after Lurie’s “No Comment,” Terry Tao gave the diplomatic answer that “the person you need to ask is not sitting here.” Of course this can also be understood to mean that the five laureates don’t feel concerned by the question.
The same questioner2 continued: “Is anyone willing to bet against” the prediction that computer-verified proofs will be of “widespread use in mathematics” within 25 years? Lurie immediately replied: “I’ll take that!” to which Richard Taylor added “Yes, me too.”3 Terry Tao thought that some people, at least, would be working with computer verification at that point.
From some recent comments on this blog it seems not all readers are happy with this attitude; please don’t blame me!
Simon Donaldson made a point (around 30:15) with which most of my colleagues would agree: “One doesn’t read a mathematical paper, what one gets is the idea to reconstruct the argument… it’s not that people (generally speaking) would be …checking the logic line by line — they would go and extract the fundamental idea; that’s really the essential thing.” This is very similar to one of my earlier quotations from Atiyah: the proof “isn’t the primary thing at all.” I hope it’s clear just how radically different this is from the mainstream philosophical position,4 that the written proof is an imperfect approximation to an ideal formalized proof. Donaldson (and elsewhere Atiyah, and even earlier G. H. Hardy) is saying that the written proof is an imperfect approximation to a pure stream of ideas.5 On that view, not only the language of a formal proof, but even the standardized language of a publishable proof, is an obstacle to understanding, a necessary evil in the absence of a pure language of ideas.
Richard Thomas, the geometer from Imperial College, reminded me of this exchange on Monday; but I shouldn’t have needed reminding, because I was sitting in the room during the whole Symposium, and I probably even took notes. The Symposium — some 40 minutes of answers to questions posed by Breakthrough Prize co-founder, the billionaire Yuri Milner, followed by 20 minutes of answers to questions from the audience — can be viewed on YouTube and is quite interesting from start to finish. (The sequence described in the previous paragraph starts at about 54:50; the “No Comment” is at 55:19.) As I wrote in my Slate piece, Milner really does know what mathematics is about, though I remember at the time thinking that he seemed a little disappointed that the laureates showed so little interest in computer-assisted proof. Maxim Kontsevich — who comes as close as any mathematician I know to speaking in pure streams of ideas — made some subtle and unexpected comments; maybe I’ll return to them later. He did say (around 41:40) that AI is not so hard and that there may be mathematically intelligent computers in our lifetime. When Milner asked what made him so optimistic, Kontsevich answered that he was actually pessimistic: he doesn’t see any fundamental difficulty, but “it would be immoral to work on that.”
Tao was the most positive; at about 31:20 he expressed the hope that “some day we may actually write our papers, not in LaTeX … but in language that some smart software could convert into a formal language, and every so often you get a compilation error: ‘THE COMPUTER DOES NOT UNDERSTAND HOW YOU DERIVE THIS STEP.'” To Milner’s question, “Do you think computer verification will get to that stage where it will verify every new paper?” Tao replied “Oh, I hope so.”
And this reminds me of a quite different concern I have — this is MH speaking now, not the panelists — about computer-verifiable proofs: the prospect of technological lock-in. The Wikipedia article has a perfectly good summary of the pitfalls. For better or for worse, mathematical typesetting is now locked in to LaTeX, but even if it weren’t as generally satisfactory as it is, it doesn’t constrain the imagination. Algebraic geometers had the flexibility to slough off (small f-) foundations when they grew too confining or inflexible; but imagine what would happen if the technology of mathematical publication were yoked to a specific proof verification system (25, or 50, or 100 years from now). Suppose (as is extremely likely) it’s a system that is well-adapted to verifying that a computer program will run as it should (the original, and perfectly understandable, motivation for automated proof verification). Would mathematics then have to adapt in turn to the system, and would mathematicians who refuse to adapt — for example, because they find the system an obstacle to expressing their ideas — be forced to resort to samizdat, or even blogs, to get their work published?
The current system is already full of obstacles to expressing our ideas. My understanding is that Voevodsky’s project developed out of frustration with some of those obstacles.
In any field it’s a struggle to turn informal arguments into formal ones. In algebraic topology the struggle has grown harder and harder over the decades — I don’t think it’s the same in every field. Every so often the formalism of six operations, or something like it, has to be developed from scratch. Those redevelopments involve some cake and a lot of spinach.
Lurie’s book “Higher Topos Theory” is an expanded form of his paper “On \infty-Topoi.” The paper is 50 pages of pure cake, beautifully and informally written. The book is 1000 pages long. There is some new cake there, but not 20 times as much. Even at a thousand pages, it doesn’t take as long to read as it took Lurie to write: the paper was on the arXiv in 2003. The book was published in 2009.
Voevodsky may have gone in the direction he did because he didn’t want to take that long to communicate what seemed to him to be simple ideas. The dream is that the computer assistant will eat the spinach for you. The nightmare is that it will force-feed it to you.
Some software developers are living in the nightmare: ask them about “Error establishing a database connection.” But a lot of software consumers are living in the dream: the modern system of buying diapers or using Mathematical Reviews really is a marvel.
I’m sure it will make its march through the publication process eventually, but meanwhile the majority of Lurie’s output is samizdat. Not because he’s refused to adapt to the current system, but because adapting is taking a long time. Any idea that is, say, 10 times as difficult to wrangle into an Annals of Math book as Lurie’s ideas are, is for now beyond the reach of our discipline. Isn’t it conceivable that those ideas could exist, and be important? I’m rooting for Voevodsky.
Jacob Lurie May 17, 2015 at 10:34 pm
To clarify: for the most part, I didn’t want to comment because I haven’t really been following the development of homotopy type theory and didn’t feel qualified to speak about it (especially on camera).
But since I’m weighing in now, I think that DT’s comment is misleading. I agree that algebraic topologists (and other practitioners of what might be called “homotopy coherent mathematics”) face special obstacles in expressing their ideas. To a large extent, these difficulties come from the fact that the notion of “commutative diagram” which appears in ordinary mathematics needs to be replaced by a more sophisticated notion of “homotopy coherent diagram”. And while the commutativity of a diagram is a property, the homotopy coherence of a diagram involves an additional datum (a “witness”) which can be of a very complicated nature (depending on the diagram). The theory of quasicategories is an attempt to address this issue by packaging and manipulating these “witnesses” in an efficient way, so that you can talk about them without losing your mind.
It does not seem to me that this issue is addressed by homotopy type theory at all. To my knowledge (and an expert should correct me if I’m wrong), it is an open question whether or not the notion of “homotopy coherent diagram” can even be formulated in the language of homotopy type theory. And I think that the expectation is that this question will have a negative answer unless the theory is modified in some fundamental way (at least this is my expectation based on thinking about a similar issue in a different context). The theory of quasicategories was developed in order to have a good language for talking about certain types of mathematical structures (for example, structured ring spectra) which can perhaps not even be defined in the setting of homotopy type theory (at least in a way that uses “types” to play the role of “spaces”).
Even apart from the above issue about expressive power (which is on some level a technical point, though probably an important one), I also think it is very strange to suggest that the introduction of a formal system would simplify the expression of ideas that are already difficult to communicate in an informal way. “Homotopy coherent” mathematics is difficult to communicate largely because the datum of a homotopy coherent diagram is elaborate. It’s elaborate both for diagrams which commute for interesting reasons (which you might want to spell out) and for those which commute for obvious reasons (which, when writing informally, you would probably want to ignore). Working in a formal system, more or less by definition, means that you can’t ignore steps which are routine and focus attention on the ones that contain the fundamental ideas.
By analogy: there are many very intricate proofs by induction in the mathematical literature. There’s also a first-order theory (Peano arithmetic) which can be regarded as a formalization of what it means to give a “proof by induction”. But I think it would be very strange to suggest that a very complicated inductive argument should become simple if it was rewritten as a formal deduction in Peano arithmetic (more likely, exactly the opposite would happen). Of course, the formalization of that argument might address other concerns (if, like Voevodsky, you are concerned that the argument might contain mistakes). And that’s not to say that Peano arithmetic isn’t interesting: just that it is primarily interesting for investigating questions in mathematical logic, not as a tool for proving new theorems about natural numbers.
Mike Shulman May 18, 2015 at 8:14 pm
As someone who has been closely involved with the development of HoTT, I think that what Jacob says is mostly correct. The only thing I would like to point out is that when speaking about a “formal system” we might mean only a way to formalize existing arguments by “reducing them to primitives”, but we might also mean a system which allows us to prove things in new ways, such as by “packaging” things that used to be complicated individual arguments into powerful but easy-to-use tools. If the former, then yes, I agree that it would be weird to expect any simplification to result. But the latter is expressly designed to achieve simplification; indeed, the theory of quasicategories (mentioned in the previous paragraph) is roughly a formal system of this sort, although not a “fully formal” one since it is expressed in set-theoretic language. The point of the latter sort of system is specifically to allow you to ignore the “routine” steps, and moreover to expand the collection of things that count as “routine”.
HoTT is both sorts of formal system: it allows us to formalize existing arguments, but it also gives us a new way to talk about homotopical objects, and so far it seems that that new way does sometimes give us simpler ways to say things. This is what gives some of us hope that eventually, a theory like HoTT could be made powerful enough to achieve even more useful simplifications. The obstacles mentioned in Jacob’s third paragraph are real, and it’s possible we won’t ever solve them; so current techniques such as those used in Higher Topos Theory are by no means out of date, and won’t be for quite some time. But I also don’t think there are a priori reasons to think that those obstacles are insurmountable, so it’s at least a direction worth pursuing.
Urs Schreiber May 18, 2015 at 11:35 pm
Homotopy type theory is the internal language of higher toposes. It is as fundamental to higher topos theory as traditional intuitionistic logic is to topos theory. That HoTT was found by computer scientists instead of by topos theorists or by homotopy theorists already doesn’t reflect well on the latter two groups, but disregarding HoTT for how computer scientists happen to think about using it is worse. That’s like disregarding the theory of elliptic curves for the fact that computer scientists use it for something as mundane as encrypting online bank transfers.
While the world is a better place since “Higher Topos Theory” exists, the evident omission of the book, given its title, is any actual topos theory, beyond the fundamentals of non-elementary higher toposes, even if these took on the order of 10^3 pages to set up. Homotopy type theory is, once brought to fruition, just that: the actual topos theory of higher elementary toposes. It is true that the computer scientists are unlikely to develop this to any satisfactory degree. But this does not imply that homotopy type theory is not a topic of major interest in the intersection of topos theory and homotopy theory. I am hoping the distraction that the presence of computers in this discussion is causing will go away and make room for what is going to be the glorious future of elementary higher topos theory.
Jacob Lurie May 19, 2015 at 1:23 am
So are you saying that the analogy with Peano arithmetic is flawed because homotopy type theory is “modular” in a way that classical first order logic is not, and therefore the costs associated with writing “fully formal” proofs are substantially lower? (And may someday be low enough that they are outweighed by accompanying benefits?)
Or are you making a stronger claim that formalization via homotopy type theory could have advantages over “working informally” even for the purposes of communicating mathematical ideas to other people (as opposed to proof checking)?
The latter seems like a claim that I’d have trouble believing, since the idea of “packaging things that used to be complicated individual arguments into powerful but easy-to-use tools” seems like something that we do all the time when working informally.
Though one could make the case that some aspects of the culture of mathematics discourage us from taking full advantage of this. For example, there’s a sense that the payoff in a mathematics paper is supposed to be a theorem rather than a construction. I think people often write papers where the “real” goal is to highlight some interesting construction, but the “stated” goal is some propositional consequence of the existence of that construction. It may be that we could communicate certain ideas more effectively if we weren’t encouraged to “bend the narrative” in this way.
Thanks for your reply. I should have explained my remark about “simple ideas” better. I didn’t mean to say that Voevodsky could have done it better than you!
About simplicity and complexity, I had this VV quotation in mind:
“the question of the convergence of applied and pure mathematics, that I have here what the picture is. Pure mathematics works with abstract models of high-level and low complexity (mathematics this small difficulty like to call elegance). Applied mathematics is working with more specific models, but a high level of complexity (number of variables, equations, etc.). Interesting application ideas of modern pure mathematics most likely lie in the field of high abstraction and high complexity. This area is now virtually inaccessible, largely due to the limited capacity of the human brain to work with such models.”
That quotation is available to me because of Google Translate, a very complicated tool (but very simple for the user) that has done sensational things for human communication. Maybe one day a tool like that could do something sensational for mathematical communication. It’s a dream, not a prediction.
If homotopy type theory has a flaw that makes it less suitable for reasoning about homotopy types than Peano arithmetic is for reasoning about numbers, I’m disappointed to hear it. But the criticism of formalism, that a fully formalized argument would be very complicated, doesn’t bother me. C code compiles to assembly. Python code compiles in a more complicated way, and runs about a thousand times slower than C. Both compilation processes have complicated output, but the processes are purely mechanical, and the uncompiled code is easy for a human to read. Python is especially easy for mathematicians.
So far, there is no useful high-level language in which you can write a proof that then “compiles” to a proof in a formal system. I have no expertise, but I think such a high-level language is desirable, and I hope that it’s possible. I’ve heard that one obstacle to creating such a language is that it is difficult to formalize the metamathematical principle that an isomorphism between X and Y provides a procedure for turning theorems and proofs about X into theorems and proofs about Y — a very homotopy-theoretic problem. But why in principle should it be impossible?
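To make the transport problem concrete, here is a minimal Lean 4 sketch (assuming Mathlib’s `Equiv` type; the statement chosen is a toy example, not anything from the discussion above). Given an equivalence `e : X ≃ Y`, a statement proved about X must be carried over to Y by hand, constructor by constructor, because nothing in the logic lets us simply substitute Y for X:

```lean
import Mathlib.Logic.Equiv.Basic

-- Transporting a (trivial) existence statement along an equivalence.
-- With a univalence principle, `e : X ≃ Y` would yield `X = Y` and the
-- proof could be moved by rewriting; here we rebuild it manually,
-- applying `e` to the witness.
example {X Y : Type} (e : X ≃ Y) (p : ∃ x : X, x = x) : ∃ y : Y, y = y :=
  p.elim fun x _ => ⟨e x, rfl⟩
```

The point of the sketch is that the transfer is a fresh proof, not a mechanical substitution; a “high-level language” of the kind imagined above would make such transfers automatic.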
Anyway “I’m rooting for Voevodsky” is an emotion, not an opinion. There’s no contradiction between rooting for Voevodsky’s program, and rooting for its alternatives. I just hope somebody someday can get me out of the flybottle.
The comments continued on the blog in this vein for another 23,000 words, with intense disagreements between some of the main players in homotopy theory and cameo appearances by Jack Morava, David Corfield, and Voevodsky himself, among others. You are invited to revisit the dialogue, which has had more than 26,000 views, in the light of the latest developments in formalization, and to add your opinions to the replies under the original blog post.
Please don’t point out that I’m still using some of the same punch lines eight years later.
Yuri Milner, of course.
Eight years later most mathematicians would still agree with Lurie and Taylor, but there is now a viable project that aims to change that.
Here called the “Central Dogma.”
Compare David Bessis:
Something that until then was unthinkable suddenly becomes self-evident: that is exactly the kind of effect mathematics produces on your brain.
(Mathematica, Paris: Éditions du Seuil (2022), p.44)