*Readers who are only interested in the polemical part may want to skip to the end. Readers who just want to know whether or not they should read the book can stop here: it’s definitely worth reading. You won’t catch me giving stars in my reviews, though.*

This is the first of a series of reviews of books I have been reading in order to provide at least a veneer of familiarity with the material on AI that I am subjecting to critical analysis. Marcus du Sautoy's book should be the easiest to review, because du Sautoy is a mathematician and his book recounts a mathematician's attempt to come to grips with the implications of recent developments in AI, some of them dramatic, for the future of his own profession, which is also mine. The title announces that du Sautoy wants to understand, and to help his readers understand, what AI may mean for the future of human creativity more generally, but it is clear that mathematics is very much on his mind, as it is on mine.

Du Sautoy is not only a mathematician — a Professor of Mathematics at Oxford and a Fellow of New College — but he is also a professional communicator, holder of the Simonyi Professorship for the Public Understanding of Science at Oxford, author of a number of books for the non-specialist public, and frequent contributor to print media and to BBC 4. This probably explains why his book is not only as well documented as the more specialized books I have been reading but is also a noticeably better read, something I can't help appreciating as a reviewer, independently of our common membership in the broad clan of number theorists.

To be honest, however, I was expecting to find *The Creativity Code* frustrating. Popularizers of science have a tendency to smooth sharp edges and avoid controversy (Richard Dawkins, du Sautoy's predecessor as Simonyi Professor, erred, often wildly, in the opposite direction), and some of du Sautoy's previous writings struck me as glib, detached, and excessively consensual. Happily, his latest book is not at all like that. On the contrary, du Sautoy's tone repeatedly turns elegiac as he contemplates a future of mathematics in which people like him — people, to put it more simply — are only a knock on the door away from losing their place.

*I have found myself wondering, with the onslaught of new developments in AI, if the job of mathematician will still be available to humans in decades to come. Mathematics is a subject of numbers and logic. Isn't that what a computer does best? Part of my defense against the computers knocking on the door, wanting their place at the table, is that, as much as mathematics is about numbers and logic, it is a highly creative subject involving beauty and aesthetics*. (p. 5)

Elsewhere he speaks of his "existential angst" and writes that, when DeepMind trained AlphaGo to defeat the world's Go champion by means of something resembling human intuition, it "triggered an existential crisis" (p. 141). I know exactly how du Sautoy feels. In my own first newsletter I wrote that "[t]he stakes of mechanization [for mathematics] are *existential*." Here and elsewhere the text provides signs that convince me that du Sautoy's existential anxiety is genuine and not staged for the purposes of the narrative. This gives his book an emotional resonance that one doesn't often find in books of scientific popularization.

Before I start the review proper, though, I have to add that I understand existential stakes in two ways. There are the familiar stakes for the individual: what do my life's struggles mean if the little they have achieved is destined in time to be reproduced effortlessly? This question obsesses me all the time, and I don't need machines to trigger this kind of existential crisis. Chapter 7 of *Mathematics without Apologies* is largely dedicated to this problem, and to André Weil's image of the cycle of "knowledge and indifference." Colin McLarty gives a very good treatment of an important historical incident of this kind: when Paul Gordan's life work apparently went up in smoke, superseded by Hilbert's finiteness theorems, he reportedly exclaimed "This is not Mathematics, it is Theology!" This newsletter, however, is more directly concerned with the existential challenge to mathematics *as a human profession*, or *human vocation*.[1] The existence of the newsletter is a symptom of my concern about this particular challenge, which I haven't seen addressed to my satisfaction anywhere in the extensive debate about mechanization.

I will get to the differences between du Sautoy's existential angst and my own at the end of the review. But first I had better start talking about the book. As I wrote, the author is a professional communicator, and the book reflects that. It asks a question in the first chapter, and by the concluding chapter we have learned whether or not the author has found the answer. The question is a novel take on the Turing test, adapted to recognize creativity:

*…an algorithm has to produce something that is truly creative. The process has to be repeatable … and the programmer has to be unable to explain how the algorithm produced its output. We are challenging the machines to come up with things that are new, surprising, and of value. For a machine to be deemed truly creative, its contribution has to be more than an expression of the creativity of its coder or the person who built its data set.*

Du Sautoy calls this the **Lovelace Test**, in honor of programming pioneer Ada Lovelace, who doubted a machine could pass the test:

*It is desirable to guard against the possibility of exaggerated ideas that might arise as to the powers of the Analytical Engine… The Analytical Machine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform.*

Over the course of the next 14 chapters du Sautoy introduces readers to many of the researchers and corporations that are trying to prove Lovelace wrong, mainly in emulating five distinct forms of human creativity: game-playing, painting, mathematics, music (there's a lot about mathematics and music), and writing. This is the best account I've seen of how far AI engineers have come to date in their ambition to reproduce in silicon what Donald MacKenzie has called "the essence of humanness." It's the sort of reporting du Sautoy does so well that, if this were a normal book review, I would devote more space to this material. However, this review has ulterior motives, so I will move on.

In several of these arenas he detects something like genuine creativity on the part of our artificial partners. On the second day of its tournament against Go master Lee Sedol, AlphaGo "defied… orthodoxy built up over centuries of competing" and "taught the world a new way to play an ancient game." The algorithm's "truly creative act" has since been analyzed by champions. Jazz pianist Bernard Lubat, meanwhile, expresses admiration for the Continuator, an AI jazz improvisation algorithm based on Markov chains:

*The system shows me ideas … that would have taken me years to actually develop. It is years ahead of me, yet everything it plays is unquestionably me.*

The algorithm "helped [Lubat] to be more creative." This, for du Sautoy, "seems… like the moment when the Lovelace Test got passed."
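The book doesn't describe the Continuator's internals beyond naming Markov chains, but the underlying idea can be sketched in a few lines: learn which note tends to follow which from a player's own phrases, then sample a continuation from those learned transitions. Everything here (the note names, the toy "performance", the function names) is invented for illustration, not taken from François Pachet's system:

```python
import random

def train_markov(sequence):
    """Build a first-order Markov model: map each note to the notes observed after it."""
    model = {}
    for current, nxt in zip(sequence, sequence[1:]):
        model.setdefault(current, []).append(nxt)
    return model

def continue_phrase(model, seed, length, rng=random):
    """Extend a seed note with up to `length` notes drawn from the learned transitions."""
    phrase = [seed]
    for _ in range(length):
        followers = model.get(phrase[-1])
        if not followers:
            break  # no observed continuation; stop rather than invent one
        phrase.append(rng.choice(followers))
    return phrase

# Train on a toy "performance" and ask for a continuation in the same style.
licks = ["C", "E", "G", "E", "C", "E", "G", "A", "G", "E", "C"]
model = train_markov(licks)
print(continue_phrase(model, "C", 8))
```

The sketch makes Lubat's "everything it plays is unquestionably me" literal: every transition in the output was observed in the input, so the machine can only recombine what the player has already played.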

Du Sautoy is more ambivalent about what he brought back from his encounters with algorithms to create poems or artworks. "We can never exclude that a great work can be created by a machine," said Hans Ulrich Obrist, director of London's Serpentine Gallery, while his gallery was hosting BOB, an "artificial life form" that evolves through interaction with visitors to the gallery. That was as far as Obrist was willing to go, even after "human subjects" rated works generated by algorithm more highly than "art generated by contemporary artists and shown in top art fairs" such as Art Basel 2016, and after a portrait painted by an algorithm went for $432,000 at a Christie's auction. After reviewing the output of storytelling algorithms, du Sautoy concludes that "it's fair to say that novelists are not likely to be pushed out of their profession soon." Still, he is genuinely intrigued by a computer-generated novel called *The Seeker* that felt like "getting inside the head of the algorithm as it tries to make sense of humans," the closest approximation to an artificial "emerging consciousness" that he encountered while doing research for his book.

A bit more of *The Creativity Code*'s space is devoted to mechanizing mathematics than to the other creative activities facing mechanization — three chapters rather than two or one. This is where du Sautoy unveils "the question at the heart of this book: Why can't a computer join the ranks of Fermat, Gauss, and Wiles" by matching their "ability to prove theorems?" He rephrases the question more concretely a few chapters later: If DeepMind cofounder Demis Hassabis "could get an algorithm to 9-dan status in Go, could he get an algorithm … elected a Fellow of the Royal Society" for its proof of a theorem?

Hassabis's whispered reply keeps the plot moving forward: "We're already on the case." But it should come as no surprise to mathematicians who have been following developments in AI that Hassabis's promise turns out to be hollow. Although the prospect of mechanization of mathematical creativity is what drives *The Creativity Code*, by the time du Sautoy reaches "the heart" of the book we realize the engineers have very little to show for their efforts. Much of the space in his three mathematical chapters is necessarily devoted to explaining the basics of mathematics and proof to his target audience. For the rest, he pushes all the familiar buttons artfully — the Four Color Theorem, Kepler's Conjecture, the unsurveyable complexity of the classification of finite simple groups, and so on — but there are only so many buttons to push.

This may change with time. We've already seen that mechanization today mainly consists in formalization and automated proof checking. Du Sautoy devotes a few pages to the success of the formal system Coq in verifying the proofs of the Four Color Theorem and the Odd-Order Theorem. Lean has since displaced Coq as the formalizers’ system of choice, and in the wake of the successful resolution of Scholze's challenge we can anticipate with confidence that Lean has a bright future, though I remain skeptical, for reasons that will be explained at length and repeatedly in future installments, of predictions that "it will become common for mathematicians to provide a formal verification of their papers."

Developments in artificial theorem proving, on the other hand, such as the expansion by Google and DeepMind of the Mizar library of proofs — that’s what Hassabis was whispering about — leave du Sautoy cold; he calls it "a mindless machine cranking out mathematical Muzak" or a "Mathematical Library of Babel." "Mathematics is not a list of all the true statements we can discover about numbers." And then, three-quarters of the way through the book, du Sautoy is finally ready to play his hand:

*But it wasn't just the linguistic problem of the Mizar project that was bugging me. Of those extra 3 percent of theorems that the DeepMind and Google team had managed to generate, were there any that would surprise me or make me gasp? I began to feel that this whole project missed the point of doing mathematics. But what exactly is the point?*

Exactly! Du Sautoy's tentative answer is that "Mathematicians… are storytellers. Our characters are numbers and geometries. Our narratives are the proofs we create about these characters. And we make choices based on our emotional reactions to these narratives" — du Sautoy has already told us that the mathematics we value has "huge emotional content" — "deciding which ones are worth telling."

My personal investment in mathematics as storytelling is contained in my chapter in *Circles Disturbed*, a book entirely devoted to exploring the parallels between mathematics and narrative. I plan to devote a future entry to some of the insights of that book, and to illustrate these parallels with an astonishing mathematical story from a recent talk by Dick Gross.

So by the end of the book have AI researchers convinced du Sautoy that Lovelace was mistaken? The answer is appropriately hedged. Du Sautoy follows Margaret Boden in distinguishing three kinds of creativity — exploratory (exemplified by the classification of finite simple groups or by Claude Monet), combinatorial (exemplified by the proof of the 3-dimensional Poincaré conjecture based on the Ricci flow, or by Zaha Hadid), and transformational (exemplified by the square root of minus one, or by Picasso). This allows him to raise the bar:

*Many will concede that exploratory creativity and combinatorial creativity can be achieved by an algorithm, because it relies on previous creativity by humans which it then extends or combines. What they are unwilling to concede is the possibility of transformational creativity being algorithmically produced*.

The algorithmic moments that he finds unequivocally creative — AlphaGo's "truly creative act" or the Continuator's emulation of Bernard Lubat — du Sautoy qualifies as "exploratory creativity." Combinatorial creativity is exemplified by François Pachet's Flow Machine creating a jazz synthesis of Charlie Parker with Pierre Boulez. As for transformational creativity, du Sautoy hasn't yet seen it, but he is prudent enough not to rule it out, allowing for the possibility of meta-algorithms "designed to break the rules and see what happens."

In the end, du Sautoy decides that "[u]ntil a machine has become conscious, it cannot be more than a tool for extending human creativity." But when that happens — and du Sautoy, unlike some of the authors I will be reviewing in the future, sees no reason it can't — then the machine "will want to tell us what it's like." That, he concludes, will be both the beginning of machine creativity, and incidentally the best hope to "alleviate concerns about renegade 'evil AI' taking over the earth."

Because this newsletter is ostensibly concerned with AI, I will also have to get around at some point to saying what that is. Because *The Creativity Code* is definitely a book about AI, du Sautoy has to spend some time explaining to his readers just what AI is. And because du Sautoy is not an engineer trying to make AI work, his explanation is more relaxed and leisurely than what I've found in most of the recent treatments. So this review is as good a place as any to start getting the explaining out of the way.

The original approach to AI consisted in giving the machine rules to follow, depending on the situation. So: when you see this picture, say "cat"; when you see this other picture, say "cat"; when you see this third picture, say "not cat." This approach is called *top-down*, and it is not good at handling situations not explicitly anticipated by the program, so the instruction set grows top-heavy. The current revival of AI, after several disappointing "AI winters" marked by substantially reduced investment, is based on the *bottom-up* approach. I'll introduce some of the different flavors of the bottom-up approach in future book reviews. The two main bottom-up methods considered in du Sautoy's book are *machine learning* and *deep learning*, the latter a distinct subclass of the former. Both train algorithms on the kind of massive data sets that were not available during previous AI winters. Machine learning usually involves a human trainer to tell the machine "yes cat," "not a cat," and so on.

All of us are unwitting and involuntary participants in a massive machine learning experiment called reCAPTCHA, teaching machines to recognize bridges, traffic lights, motorcycles, and so on. If you are a self-driving car, you are probably pretty good at reCAPTCHA. In deep learning, in contrast, the machine mainly trains itself. Its (deeply) layered neural network extracts features from an enormous data set of pictures of cats to establish the concept "cat" so well that it can design its own cats.
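The trainer-in-the-loop picture of machine learning can be made concrete with the oldest of bottom-up devices, a perceptron: the "trainer" supplies labeled examples ("yes cat" = 1, "not a cat" = 0), and the machine nudges its weights after each mistake rather than following hand-written rules. This is a deliberately toy sketch, not anything described in the book; the two features are invented stand-ins (say, has-whiskers and has-wheels) for what a real system would extract from pixels:

```python
def train_perceptron(examples, epochs=20, lr=0.1):
    """Learn weights and a bias from (features, label) pairs by error correction."""
    n = len(examples[0][0])
    weights, bias = [0.0] * n, 0.0
    for _ in range(epochs):
        for features, label in examples:
            prediction = 1 if sum(w * x for w, x in zip(weights, features)) + bias > 0 else 0
            error = label - prediction  # -1, 0, or +1: how wrong were we?
            weights = [w + lr * error * x for w, x in zip(weights, features)]
            bias += lr * error
    return weights, bias

def predict(weights, bias, features):
    return 1 if sum(w * x for w, x in zip(weights, features)) + bias > 0 else 0

# The trainer's labels: "yes cat" (1) and "not a cat" (0).
labeled = [([1, 0], 1), ([1, 1], 1), ([0, 1], 0), ([0, 0], 0)]
w, b = train_perceptron(labeled)
print([predict(w, b, f) for f, _ in labeled])  # → [1, 1, 0, 0]
```

Deep learning stacks many such layers and learns the features themselves from the data, which is why, unlike this toy, it does not need a human to decide in advance what counts as a whisker.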

The expression "deep learning" only appears once explicitly in the book, without a definition, and I suspect it was left in by mistake after a preliminary explanation was edited out of an earlier draft. It's clear throughout, though, that deep learning, without constant oversight by programmers, is the only route du Sautoy sees to genuine algorithmic creativity. In a future installment I will attempt to illustrate mathematical creativity by means of deep learning in response to a challenge, posed by my friend and colleague Jean-Michel Kantor, to imagine how an algorithm could have formulated the Poincaré Conjecture without human intervention. For now this quick technical overview will have to suffice.

At one point du Sautoy defines his existential crisis as concern "whether the job of being a mathematician would continue to be a human one." This is where the ways he and I understand "existential" subtly diverge. I have written in this newsletter, and I will be writing again, that mathematics is "one of the innumerable ways that humans have found to infuse our lives with meaning." That is not a job description. From beginning to end du Sautoy's book makes it clear that he doesn't really think of mathematics or any creative activity as a "job," but when the word appears in sentences about machines replacing human mathematicians, as it does consistently, it's a hint that someone is looking at the wrong existential crisis.

Many jobs and whole professions are being and will continue to be eliminated by mechanization. That's a political and social crisis that can justifiably be called existential, and I firmly believe that decisions with such potentially catastrophic consequences for so many people should be subject to democratic oversight and not left to the obscenely wealthy owners and managers of a handful of powerful corporations. Dispossessing human lives of their meaning belongs to a different existential category altogether. Du Sautoy quotes Jordan Ellenberg:

*If we were to imagine a future in which all the theorems we currently know about could be proven on a computer, we would just figure out other things that a computer can't solve, and that would become "mathematics."*

That "we would" points in the right direction. Until the subject of Ellenberg's sentence is eliminated, whatever belongs to (perhaps it's better to say "whatever pertains to") that subject can only be confiscated by means of force. And for the foreseeable future, this force will be exerted not by Terminators but by other human beings.

Twice in the book du Sautoy acknowledges that "[t]here is a huge amount of hype about AI," that AI's achievements are systematically exaggerated in order "to enlarge company bank balances." I've already written that the objective of Google and other corporations in their efforts to mechanize mathematics ultimately has the same motivation. It's a neoliberal mindset that makes it possible to worry that "the job of mathematician" will be off-limits to humans: the conviction that the value of mathematics equates to its value for a market. This mindset has deep roots in the Britain of Margaret Thatcher's *There is no alternative*. Du Sautoy's book is evidence that the Simonyi Professorship has not promoted the neoliberal approach to the Public Understanding of Science, but du Sautoy himself has not been immune to its distortions.

The late John Conway, one of the key figures in the classification of finite simple groups, among many other accomplishments, once said that

*honestly, if you or your readers saw what I actually did, they'd be disgusted. They'd say, 'Good money is being paid out to support these people'.*

I suspect this attitude is behind much of the existential dread of mechanized mathematics. What Conway "actually did" was to create, irrepressibly and inexhaustibly. The final chapter of *The Creativity Code*, entitled "Why we create," contains all the reasons needed to explain why those hypothetical readers would be wrong to be disgusted to see Conway, rather than Google or DeepMind, paid to be creative. I only wish du Sautoy had said so before ending the book.

*Comments are welcome here.*

[1] In the first entry in this newsletter I used the word *existential* to refer to "the nature or self-understanding of mathematics." The "self" is necessarily human.