# On the implications of Scholze's liquid tensor experiment

### Did it make Lean happy?

*What follows is the text of an article, adapted for this newsletter, that I submitted to four (or maybe five) prestigious magazines, three of them directed at a general scientifically literate audience, the fourth more specialized but not specifically addressed to mathematicians. The first version was written in January, immediately after Scholze's liquid tensor challenge was published on the xena blog. The text has gone through various revisions as editors apologized for not being able to use it (hints about the publications in question have been left in the revisions for those attuned enough to notice them). The most recent drafts duly mention that the challenge was successfully met, but only in a few words, since — please note! — this success is irrelevant to the larger point of the article.*

*I mention the rejections not to complain but rather to explain the presence of a few hundred words on what number theory is about, since the hope was — and is — that the text will eventually be read by people who are not mathematicians and therefore are likely to greet the news of the successful challenge as one more milestone on the way to computers being "able to replace humans at music and science journalism and everything else," in the words of Scott Aaronson, quoted below. I could probably have published the text in a magazine for mathematicians but that would have defeated my purpose.*

*The sentence in boldface italics has a history to which I will return; in the meantime, check out the third paragraph of the blog post to which it links.*

*Because this week's newsletter was originally written as a self-contained article, it treats themes that will be discussed at greater length in future entries. Some entire sentences will be recycled. Apologies in advance for future redundancy.*

Will computers ultimately be able to generate the kinds of proofs pure mathematicians value, and if so will that constitute progress?

The modern history of the word "progress" has so many nasty associations — including colonialism, white supremacy, and genocide, as well as the obliteration of entire skilled professions and the ways of life that accompany them — that I find it difficult to invoke the notion as a positive value, even though in other contexts I'm happy to identify as a progressive. At the very least this troubled history should remind us that it makes no sense to speak of progress without immediately making clear: *for whom*? But then to ask whether an innovation, technological or not, represents progress *for mathematics* points to an intriguing question that is too frequently overlooked. What, namely, *are* the values of mathematics? "What is it good for?" is the inevitable cocktail party question.

One answer is obvious: mathematics is not only good but indispensable for engineering, technology, record keeping, and any activity that involves predicting the future. Early in the COVID-19 pandemic I heard it claimed that mathematics was "essential work": all the epidemiological models on which national and international health authorities were basing their recommendations, all the experiments evaluating the effectiveness of vaccines then under development, were thoroughly mathematical. But such an answer won't fly when the question is about pure mathematics, the kind concerned with proofs rather than applications. It's popular to invoke "The Usefulness of Useless Knowledge," the title of Abraham Flexner's 1939 *Harper's* article, in which he motivates the creation of the Institute for Advanced Study, of which he was the founding director, on the grounds that the kind of "spiritual and intellectual freedom" to be fostered there often "proves unexpectedly the source from which undreamed-of utility is derived." In blurbs for a reprint of Flexner's piece, Executive Chairman Eric Schmidt of Alphabet Inc agrees that "basic research … is a proven and essential seed for the revolutionary technologies that fuel the economy, transform society, and provide solutions for the world's problems"; Bruce Alberts, former editor-in-chief of *Science* magazine, adds that "human progress" — that word again — "has depended — quite unexpectedly — on unfettered scholarship carried out by talented, obsessively curious individuals." A major problem with such arguments is that no one sincerely believes, and Flexner himself certainly didn't claim, that the values motivating pure mathematicians have anything to do with "undreamed-of utility," "revolutionary technologies," or even solving "the world's problems."

A more immediate problem with the "essential work" arguments is that computers are better than humans, by many orders of magnitude, at calculation, record keeping, and all the practical applications of mathematics. For these purposes, the "essential work" to bring the pandemic under control was actually carried out by computers; from this perspective the elimination of the human mathematicians in the middle may well be seen as "progress." Ah, but doesn't the computer need human applied mathematicians to develop models, before it knows what to calculate? In fact, machine learning and data analysis, promoted as alternatives to modeling, promise to consign the applied mathematician to the recycling bin, alongside travel agents, long-haul truckers, and sports journalists. Whether or not such methods will keep their promise is the subject of intense discussion among AI specialists, as reported in popular books like *Rebooting AI* by Gary Marcus and Ernest Davis, or Brian Christian's *The Alignment Problem*. This hasn't stopped Oxford professor Marcus du Sautoy from speculating in *The Creativity Code* that "algorithms… may learn from the human mathematicians of the past how to distinguish the thrilling theorems from the boring ones," a development he finds "potentially so threatening to a [pure] mathematician like me." *1*

In this connection, if and when future historians (if there are any) try to determine when the project to mechanize mathematics jumped the species barrier between brute force calculation and the core areas of pure mathematics, they may settle on the day in December 2020 when Peter Scholze proposed a challenge on Kevin Buzzard's xena blog. The world beyond mathematics had already met Scholze in 2018 at age 30, when he received the coveted Fields Medal for his work that blurred the border between number theory and geometry. Though Buzzard is well known for important work on that same border, the online publication *Quanta Magazine* chose to feature him three times in 2020 in recognition of his increasing prominence in the small but growing community of pure mathematicians who are trying to reconfigure our field into a thoroughly computer-friendly activity. “We’re going to digitize mathematics," said Buzzard in one of the Quanta pieces, "and it’s going to make it better.”

"Mechanizing mathematics," in the sense of using mechanical devices to answer mathematical questions, is at least as old as the abacus. These days most number crunching is already performed by some sort of digital calculator — *digitized*, in other words. Buzzard and his colleagues have something more substantial in mind: digitizing mathematical *reasoning*. By this I mean the process of deduction that leads from axioms and hypotheses, like those in Euclid's *Elements*, to full-fledged mathematical theorems, like the Pythagorean theorem or the theorem that there is no largest prime number (both in Euclid). Contemporary human mathematicians still devote our research to proving new theorems by reasoning, and this is what Buzzard thinks will be "better" when it's digitized. This comes in two strengths. *Automatic proof verification* consists in sending proofs to computers to check their correctness. Software developers have used such systems for decades, and they have had some notable recent successes in pure mathematics, but the catch is that the proof has to be written in a formal language that the computer can process. It has to be *digitized*, in other words; the technical term is *formalization*. Mathematicians still prefer to write their proofs exclusively for each other; this is what the xena blog is hoping to change.

More aggressive mechanizers aim to entrust the actual *conception* of proofs to computers. Most mathematicians remain skeptical, but computer scientists have repeatedly predicted since the 1950s that *automated theorem proving* was just 10 years away, and this continues: at last year's 13th International Conference on Intelligent Computer Mathematics, Christian Szegedy reported on efforts, at Google Research among other places, to develop "a human level mathematical reasoning engine in the next decade."

A guest appearance of a pure mathematical icon like Scholze in a venue devoted to "automating mathematical reasoning" already qualifies as news, the more so when his challenge was met, to his great satisfaction, exactly half a year later. *Quanta Magazine* certainly thought so; on July 28 they published an article entitled "Proof Assistant Makes Jump to Big-League Math" on the challenge and its successful conclusion, incidentally confirming the aptness of the word "jump" that you read three paragraphs back. Scholze is impressed:

*I now think it’s sensible in principle to formalize whatever you want in Lean. There’s no real obstruction.*

"The revolution has begun!" enthuses Jonathan Kujawa (twice in the same article). But those curious about possible future cohabitation of mathematics with computation should look more closely at the contents of Scholze's challenge.

Scholze's reputation thus far largely rests on his theory of *perfectoid spaces*, described in a 2016 Wired article as both "a class of fractal structures" and an "infinite wraparound construction," that "make it possible to slide questions about polynomials … into a different mathematical universe in which arithmetic is much simpler…" This is an understatement: since Scholze first announced this theory these spaces, and the allied notions Scholze and his collaborators have cooked up to explore their structure, have settled long-standing unsolved problems in number theory while enormously expanding its scope. Scholze hopes the *liquid vector spaces*, the subject of his more recent work with his post-doctoral fellow Dustin Clausen, will provide the basis for a mathematical universe that similarly simplifies *analysis* — the theory of infinite processes and the real number line that underlies everything from calculus to chaos theory to the equations of mathematical physics — and number theory as well.

In his challenge Scholze uses the expression "clouds of dust" rather than "fractal structures" when writing about the new approach to geometry he has been developing over the last few years. Most of number theory ultimately comes down to "questions about polynomials," and more precisely understanding solutions to polynomial equations in whole numbers. Any solution has two kinds of properties. The most obvious is its *size* — how much space it takes up. When you are checking whether or not an equation has solutions, it would take infinitely long to check all the whole numbers; but if you know independently that there has to be a solution smaller than, say, 1,000,000, a computer can find it instantaneously (and no one finds this controversial).
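The bounded search just described can be sketched in a few lines of code; the equation and the bound below are illustrative choices of mine, not anything from Scholze's challenge.

```python
# A bounded brute-force search for whole-number solutions of a
# polynomial equation: once a bound is known, checking every
# candidate below it is purely mechanical (and uncontroversial).
def search(f, bound):
    """Return the first (x, y) with 1 <= x, y < bound and f(x, y) == 0."""
    for x in range(1, bound):
        for y in range(1, bound):
            if f(x, y) == 0:
                return (x, y)
    return None  # no solution below the bound

# Example: Pell's equation x^2 - 2y^2 = 1 has the solution (3, 2).
print(search(lambda x, y: x * x - 2 * y * y - 1, 100))  # → (3, 2)
```

Of course, knowing *that* a bound exists is the hard, human part; the search itself is the easy, mechanical part.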

The other salient property of a solution is how *divisible* it is, in the sense that 15 is divisible by 3. The secret leitmotiv of most of number theory is the interplay between these two properties.
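Divisibility in this sense is something a computer measures effortlessly. A minimal sketch (the function name is mine; number theorists call this quantity the p-adic valuation):

```python
# How "divisible" is a whole number by a prime p? One natural measure
# is the number of times p divides it (the p-adic valuation).
def valuation(n, p):
    """Return the largest k such that p**k divides n (for n > 0)."""
    count = 0
    while n % p == 0:
        n //= p
        count += 1
    return count

print(valuation(15, 3))  # 15 = 3 * 5, so 1
print(valuation(48, 2))  # 48 = 2**4 * 3, so 4
```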

Perfectoid spaces build a geometry out of clouds of dust, in order to study divisibility, which, crudely put, is an algebraic notion, and therefore lends itself well to manipulation by computers. So while Buzzard's team's formalization of perfectoid spaces is impressive it is not altogether shocking; the language and the intuition are algebraic. Size, on the other hand, is a geometric notion, a representation of our experience of space. The methods number theorists use to account for size build on our intuition of *continuity*, which philosophers since the pre-Socratics have struggled to formulate in a language amenable to manipulation. The philosophical problem is that continuity can only be grasped through the intuition of infinite processes. But our minds are finite. Thus Zeno found it perplexing that Achilles's arrow could ever reach its target, because it first had to go halfway, then halfway again… Analysis, since the 17th century, has been mathematicians' answer to Zeno's paradox. The *condensed mathematics* of Clausen and Scholze, however, which includes liquid vector spaces, reimagines the arrow's trajectory as another kind of cloud of dust.

Viewing the real number line as a cloud of dust will not tame chaos theory, nor will it settle the fundamental questions of fluid dynamics — one of the six remaining million-dollar Clay Millennium Problems, and the one to which analysis is most immediately relevant. But Scholze's *liquid tensor experiment* challenge expresses the hope that the hard aspects of analysis can be "black-boxed" using liquid vector spaces, so that arguments in analysis can be made more congenial to algebraists: "a powerful framework for real functional analysis." Reading between the lines, perhaps that will be enough to combine *size* and *divisibility* into a single uniform method for answering questions in number theory. Scholze and Clausen are not analysts, however, and the Theorem in Scholze's blog post — the basis for his black-boxing hope — may be, in his words, "the most logically involved statement I have ever proved," possibly his "most important theorem to date."

Before setting the theorem on liquid vector spaces loose on analysis, therefore, "Better be sure it's correct…" Clausen and Scholze have written what they believe to be a complete proof of the theorem, and this is supposed to guarantee correctness, but Scholze suspects that no other human has gone through the details: the "proof is not much fun…," he writes.

His challenge acknowledges that particularly delicate human operations may benefit from a confirmation beyond merely human proof — one now, as of June 5, 2021, provided with a mechanical stamp of approval — and that this doubly ultimate check is especially warranted when an operation has the potentially momentous implications at which Scholze has been hinting.

Far from heralding a step toward making digitization routine, then, Scholze's initiative clearly expressed an intention to reserve such a step for exceptional circumstances. And there are reasons to digitize with caution. As historian Stephanie Dick observed in her study of early efforts to digitize mathematics,

*In developing theorem-proving software, practitioners endowed mathematical objects with computational properties — they were translated into dynamic, algorithmic, discrete things.*

The boundaries of mathematics are so nebulous, and have been so variable historically — even the use of the word "mathematics" as a blanket description covering both algebra and geometry dates back only a few centuries — that it's perfectly possible for mathematics to be *redefined* in practice to fit the constraints of whatever digital systems are adopted. The machines would then be better at mathematics than humans *by definition*. Szegedy's prediction on behalf of Google suggests that Silicon Valley is unlikely to object. The history of industrialization, and of the tech industry in particular, reminds us that these efforts are no less likely to force human mathematicians to adapt to the computer's idiosyncrasies.

Would mathematicians who refuse to adapt — for example, because they find the rigidity of the system an obstacle to expressing their ideas — be forced to resort to *samizdat*, or even blogs, to get their work published? This kind of thinking has no natural limit. Computer scientist Scott Aaronson found a peculiar way to express skepticism about prospects for rapid digitization of mathematics recently on the *Scientific American* blog:

*It’s conceivable that someday, computers will replace humans at all aspects of mathematical research — but it’s also conceivable that, by the time they can do that, they’ll be able to replace humans at music and science journalism and everything else!*

In some quarters, questioning the desirability of replacing humans at "everything" can lead to accusations of *Luddism*. This is a reference to the movement of 19th century English artisans who smashed textile machinery, in a vain attempt to protect their traditional livelihood, and is a standard feature of the ideology the social psychologist Shoshana Zuboff calls "inevitabilism": the claim that technology imposes rules of development that society is powerless to resist. Accusations of Luddism are especially handy as a distraction from the real target of the most salient critiques of contemporary technology, namely the unprecedented concentration of power and wealth that these technologies enable.

But applying the term "Luddism" to pure mathematics (not to mention "everything") is a category mistake. The original Luddites smashed machines that formed a new link in the chain leading to the production of clothing for the consumer market. ***What pure mathematicians find valuable about what we do is precisely that it provides a kind of understanding whose value is not determined by the logic of the market.*** That there can still be such an activity in this neoliberal age may seem scandalous, but anyone who wants to avoid writing nonsense about the joint future of mathematics and computation will have to grasp this essential characteristic of the former.

The conflation of mathematical creation with industrial production can be traced in part to the different roles proof plays in pure mathematics and in software verification. You may or may not care by what route the electricity that powers the machine on which you are reading this article has traveled from the generator to your home. But it matters very much that it will get to you safely, without breaking down or setting off fires or explosions, especially if the generator is a nuclear plant. So verification of the software that structures the grid is an urgent necessity… Donald MacKenzie reminds us in his *Mechanizing Proof* that this is why proof became important for software developers in the 1960s.

Proof in mathematics plays a much less consequential role. The late geometer Sir Michael Atiyah once described mathematical proof as

*the check that I have understood, … It is the last stage in the operation — an ultimate check — but it isn't the primary thing at all.* *2*

Or, to repurpose an old Dutch proverb, mathematics is "not about the marbles, but about the game." We play the game to understand, not to win. Whatever else the game of mathematics brings to human beings, the history of mathematical innovation teaches us at least that it always has room for new kinds of marbles — imaginary numbers, Riemannian geometry, perfectoid spaces, or liquid vector spaces — provided they are adapted to the game, not vice versa. Our mechanical counterparts won't be ready to join the game until they have absorbed this lesson.

Google Research, I have no doubt, would patent the word "progress" if they thought they could get away with it. It's more likely than not that the latest efforts to automate human level mathematical reasoning will miss Szegedy's 10-year deadline, even by Google's own criteria. But even if I'm wrong about this, I don't think mathematicians will see the result as "progress." This is not so much a misunderstanding of *mathematics* as a misunderstanding of *humans*. John Harrison, of the Automated Reasoning Group at Amazon Web Services, inadvertently revealed the source of this misunderstanding in an article he wrote over a decade ago to explain why he thought mathematicians should pay attention to the mechanization of proof verification, and should therefore learn to prepare *formal proofs*, the only kind computers can process. To his rhetorical question, "Why should we wish to create formal proofs?" he dismissed a first possible answer — that it is "a harmless and satisfying intellectual activity" — before arriving at two "more substantial reasons" that occupy the bulk of his article. It's clear to me, as much from what he rejected as from what he chose to consider substantial, that computer scientists like Harrison and mathematicians like me inhabit very different existential universes. "Satisfying intellectual activity" is near the top of my list of things that make life worth living. (And to be fair, Harrison's reply to a letter I wrote in response to his article strongly suggests that he feels the same way.) The most persuasive argument I've seen in favor of automated proof verification is that it is a "satisfying intellectual activity." Kevin Buzzard understands this:

*In Lean we are showing no signs of running out of steam, and there is very little funding, there are just mathematicians doing it for fun. This is the attitude we have to encourage.*

(Tweet @Xenaproject Feb 5, 2021).

Meredith Broussard's *Artificial Unintelligence* concludes with a reminder of "the point" of technological development that those she calls "technochauvinists" seem programmed to forget:

*Humans are not now, nor have they ever been, inconvenient. Humans are the point. Humans are the beings all this technology is supposed to serve.*

Mathematics is just one of the innumerable ways that humans have found to infuse our lives with meaning. As such, mathematics deserves to be judged by the standard John Ruskin proposed in *The Seven Lamps of Architecture* (1849) for medieval stone-cutters:

*I believe the right question to ask … is simply this: Was it done with enjoyment…? It may be the hardest work possible, and the harder because so much pleasure was taken in it; but it must have been happy too, or it will not be living.*

When mechanical mathematicians come for job interviews, then, you should ask them: was your work done with enjoyment? Did it make you happy? Is it *fun*? If they don't understand the question, you don't want them as colleagues.

*Davide Castelvecchi’s June 18 article in Nature on the liquid tensor experiment is entitled “Mathematicians welcome computer-assisted proof in ‘grand unification’ theory.”*

*The use of the word “welcome” is a bit tendentious: Emily Riehl finds that proof assistants “have begun to change the very way she thinks of the practices of constructing mathematical concepts and stating and proving theorems about them,” but the last sentence is a quote from Scholze: “For now, I can’t really see how they would help me in my creative work as a mathematician.” For the record — because I have already been cast, certainly not for the last time, as the token skeptic — I have no disagreement with either of these judgments. I will devote a future text to how and under what circumstances I might find myself feeling the same way as Riehl.*


*1* All three of these books will eventually be reviewed in this newsletter.

*2* Don’t be surprised if you see this quotation again. It’s taken from *The Mathematical Intelligencer* Vol. 6, No. 1 (1984), 9-19. It’s a favorite among philosophers who are interested in what mathematicians actually do, and who can blame them?
