Notes from the 2022 Fields Medal Symposium
First reactions from several participants in the October meeting in Toronto
Last month I published two essays back to back in preparation for the Fields Medal Symposium in Toronto. Akshay Venkatesh, whose turn it was to be honored at the Fields Institute for his 2018 Fields Medal, asked that the event be marked by a conference on the implications for mathematics of the introduction of new technological methods.
More about the planning can be found in this post. Speakers and participants were roughly equally divided between those attending in Toronto and those joining the conference by video link. Most participants addressed points raised in Venkatesh’s essay on the topic. All talks were recorded and can be viewed online at the Fields Institute website. I found all the talks and the question and answer periods extremely informative, often inspirational. My notes are extensive and detailed and I expect to be quoting them in the coming months. But I think those fortunate enough to be present in Toronto will agree with me that the informal but wide-ranging discussions that took place between formal lectures were the highlight of the conference. In addition to mathematicians from a variety of disciplines — number theory was somewhat overrepresented — participants included (one or more) computer scientists, philosophers, historians, and social scientists, with some people legitimately occupying more than one role. It’s safe to say that many of those present had never had the chance to exchange ideas so intensively on such serious questions with so many representatives of different disciplinary perspectives. Or, as Venkatesh put it in a private message:
The symposium felt to be not so much a series of lectures as a three-day long conversation amongst the participants — a discussion that was passed from lecture hall to tea break to meal while keeping a certain focus and continuity. I found this a unique, and exhilarating, experience.
The Fields Institute administration deserves credit for their brilliant decision to provide lunch daily, thus allowing the conversation to continue uninterrupted. There were also a few dinners, including one for those of us who arrived the night before the conference began. It’s probably because I attended that dinner that Kumar Murty, Director of the Fields Institute, decided to ask me to conclude the opening remarks. It was only 90 minutes before the meeting began that I learned that the few sentences I had scribbled for this purpose would not fill the allotted time; but by speaking slowly I managed to run out the clock.
Here, for the record, are two points I had prepared for the opening remarks and did think worth making (I mentioned the second point later, during one of the discussions):
A merit of Venkatesh’s attention to value is that it offers an opening to a vision of mathematics as a complex way of life1, rather than the neoliberal vision of producer of goods, in this case theorems, for a hypothetical market.
and
To my mind the most immediate danger to human mathematics is the automation of teaching rather than research, since that would eliminate the institutional need for graduate student teaching assistants as well as most faculty.
The exchanges that took place in Toronto were so stimulating, and the perspectives so varied, that I decided to devote this week’s newsletter to participants’ impressions of the event. Nine of these comments are recorded below, slightly edited. Comments by Ernie Davis and Jeremy Avigad, which include programmatic suggestions, are at the top of the list.
As for me, I found the meeting immensely thought-provoking. Of the many insights that stuck with me, I’ll just mention two here. The first is Rodrigo Ochigame’s elaboration on the tension between the Leibnizian and Cartesian standards of proof, to use the terminology of Ian Hacking. Briefly, as mathematics has grown more complex, the post-Cartesian standards have grown less strict, whereas the post-Leibnizian standards have grown more strict.2
The second insight is illustrated in the slide reproduced above, from Jeremy Avigad’s lecture. Thoreau is rarely quoted as a philosopher of mathematics. Here is the sole reference to mathematics in Walden:
How could youths better learn to live than by at once trying the experiment of living? Methinks this would exercise their minds as much as mathematics.
Avigad’s recommendation — to which I plan to devote an entire future essay — is, among other things, a recognition that it is possible, and for that matter necessary, to exercise minds in both ways at the same time.
The reference to Thoreau also leads me to wonder what kind of Civil Disobedience would be possible in a world governed by Silicon Valley-style disruption. But I’ll save that for next time.
Ernest Davis, Professor, Department of Computer Science, NYU3
It was a great privilege to participate (by Zoom) in the Fields Medal Symposium, and I'm extremely grateful to the organizing committee for inviting me.
You invited responses. Here goes. I want to discuss two issues. The first is to quickly enumerate some specific directions of research in machine learning that are, or potentially might be, relevant to research mathematics. At the symposium, some of these were mentioned only in passing, and some went unmentioned.
Let me clarify at the start that when I write "An AI system that does XYZ" I don't mean that it successfully does XYZ all the time or most of the time. Each of these AI systems achieves some success rate over whatever collection of problems it is tested on. A success rate of 20% on a collection of elementary problems can be considered an impressive accomplishment, depending on the particular task and the data set. Let me also clarify that I'm neither a mathematician nor an expert in deep learning, just a kibitzer who enjoys following the game.
1. The use of individually designed ML programs as an aid for research in specific mathematical research projects. Geordie Williamson, in his talk, gave detailed descriptions of three such uses: the use of deep learning to suggest theorems in representation theory and knot theory, presented in an article by Alex Davies et al. in Nature, and the use of deep learning with reinforcement learning to look for counterexamples in graph theory in work by Adam Wagner. Incidentally, I wrote a review of the first paper: https://arxiv.org/abs/2112.04324
As Andrew Granville said in his email, fundamentally this is work of the same flavor as the experimental mathematics that has been done for decades. Experimental math, as carried out by virtuosos like David Bailey and the late Jon Borwein, has achieved many extraordinary results in a variety of mathematical areas, in terms of inspiring conjectures, proved and unproved, constructing counterexamples, providing insight, and so on. I don't think that it can yet be claimed that these new machine learning techniques are overwhelmingly more powerful or game-changers. For the moment, ML is a new, interesting tool in the toolbox of experimental math, whose power and breadth of application is yet to be determined.
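To convey the flavor of this kind of counterexample search, here is a minimal sketch in Python. Everything in it is a stand-in: the score function is invented, and plain random local search replaces the neural-network-guided reinforcement learning (the cross-entropy method) of the actual work.

```python
import random

N = 8  # number of vertices; tiny, purely for illustration

def score(edges):
    """Hypothetical 'badness' score: in a real application this would
    measure how close a graph comes to violating a conjectured
    inequality, with a positive value certifying a counterexample.
    Here it is a meaningless stand-in that rewards average degree 2."""
    deg = [0] * N
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return -abs(sum(deg) / N - 2.0)

# Start from a random graph and greedily flip single edges.
best = {(u, v) for u in range(N) for v in range(u + 1, N)
        if random.random() < 0.5}
for _ in range(10_000):
    u, v = sorted(random.sample(range(N), 2))
    cand = set(best)
    cand ^= {(u, v)}            # flip one edge
    if score(cand) >= score(best):
        best = cand
    if score(best) > 0:         # would mean: counterexample found
        break

print(f"best score {score(best):.3f} with {len(best)} edges")
```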
2. The use of AI systems to do Matlab-style calculations in symbolic math. A seminal paper here is the one by Lample and Charton, on an AI system that did symbolic integration:
https://arxiv.org/abs/1912.01412
I reviewed that as well: https://arxiv.org/abs/1912.05752
There have been more projects of the same kind since. These are interesting as a testbed for ML techniques but I don't expect them to replace Matlab and so on any time soon.
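The data-generation idea behind the Lample-Charton paper can be sketched in a few lines (a toy version, using sympy; the sequence-to-sequence model that consumes these pairs is omitted): sample a random expression, differentiate it, and keep the (derivative, expression) pair as a (problem, solution) training example with a guaranteed answer.

```python
import random
import sympy as sp

x = sp.symbols('x')
ATOMS = [x, x**2, sp.sin(x), sp.cos(x), sp.exp(x)]

def random_expr(depth=2):
    """Sample a small random expression tree."""
    if depth == 0:
        return random.choice(ATOMS)
    a, b = random_expr(depth - 1), random_expr(depth - 1)
    return a + b if random.random() < 0.5 else a * b

# "Backward" generation: differentiate a known F to get an integrand f,
# so that (f, F) is a training pair whose solution is known by construction.
for _ in range(3):
    F = random_expr()
    f = sp.simplify(sp.diff(F, x))
    print(f"integrand: {f}   antiderivative: {F}")
```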
3. The construction of AI systems that, given a formalized statement in Lean (or another such system -- I'll use Lean to stand for all of them), find a proof in Lean. For instance, last May there was a paper from a group at Meta reporting a system that could solve some International Math Olympiad problems using a technology analogous to the game-playing program AlphaZero.
https://arxiv.org/abs/2205.11491
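To fix ideas, here is what the input and output of such a system look like in Lean 4 (a toy theorem, core library only; real benchmark problems are much harder). The statement is given; the tactic script after `by` is the kind of object the search procedure must construct, one step at a time.

```lean
-- The statement is the input; a prover must find the script after `by`.
theorem zero_add' (n : Nat) : 0 + n = n := by
  induction n with
  | zero => rfl
  | succ k ih => rw [Nat.add_succ, ih]
```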
4. The construction of AI systems that, given the English language statement of a theorem, can generate the corresponding Lean statement.
A system that does this (25% success rate on high school competition problems) was presented here.
https://arxiv.org/abs/2205.12615
A variant of this is a system that, given a theorem and a proof stated in English, translates it into Lean and fills in the details. This is a step toward a future system that could read journal articles as written by mathematicians and automatically generate formal proofs. One such recent study was reported here:
https://arxiv.org/abs/2210.12283
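To make the translation task in (4) concrete, here is a hypothetical input/output pair (my own illustration, in Lean 4 with only the core library, not an example from the cited papers). Part of what makes the task hard is that the system must make its own choices of definitions and names, like the ones below.

```lean
-- English input: "The sum of two even natural numbers is even."
-- One possible formal counterpart:
def IsEven (n : Nat) : Prop := ∃ k, n = 2 * k

theorem even_add_even {m n : Nat} (hm : IsEven m) (hn : IsEven n) :
    IsEven (m + n) := by
  cases hm with
  | intro a ha =>
    cases hn with
    | intro b hb => exact ⟨a + b, by rw [ha, hb, Nat.mul_add]⟩
```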
5. The construction of AI systems that can take word problems from college, high school, or elementary school math classes and solve them. The difficulty here is in the language understanding, not in the math. There's a discussion with a test set here:
https://arxiv.org/abs/2204.05660
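A toy illustration of that claim, with an invented problem (not drawn from the cited test set): once the quantities and operations have been extracted from the English, the remaining mathematics is a one-liner.

```python
problem = "Maria has 3 boxes of 4 apples each and gives 5 apples away. How many are left?"

# The hard part is recovering these from the English; hypothetical values
# that a language front end would have to extract:
boxes, per_box, given_away = 3, 4, 5

print(boxes * per_box - given_away)  # -> 7
```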
My feeling is that this is important for research mathematics because the understanding of sophisticated mathematics draws on, and is grounded in, an understanding of basic mathematics.
(3), (4), and (5) have analogues in the space of automatic programming; that is, generating a program from formal specifications or from English-language specifications. There is also a lot of work on the problem of determining that software doesn't have various kinds of bugs; for instance, Amazon has a major enterprise in developing systems to prove formally that its cloud service AWS is secure. This end is generally more advanced than the mathematical end, because the relevant data corpora (software libraries) are much larger, the relevant analysis is often simpler, the financial stakes are much higher, and an imperfect tool for elementary problems is often very useful.
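To show the shape of such a specification, here is a hedged sketch with an invented function. It only checks the property exhaustively over a finite domain, which is far weaker than the formal proofs such verification systems produce, and none of it is Amazon's actual tooling.

```python
def clamp(x: int, lo: int, hi: int) -> int:
    """Restrict x to the interval [lo, hi]."""
    return max(lo, min(x, hi))

# Specification: whenever lo <= hi, the output lies in [lo, hi].
# A verifier would prove this for all integers; here we only test
# a finite slice of the input space.
for x in range(-50, 51):
    for lo in range(-5, 6):
        for hi in range(lo, 6):
            assert lo <= clamp(x, lo, hi) <= hi
print("specification holds on the checked domain")
```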
My second point has to do with tacit knowledge, which Rodrigo Ochigame discussed and which came up again during the panel. Tacit knowledge is knowledge that cannot be, or generally isn't, stated explicitly. In the case of mathematicians, this comprises most of the knowledge or skills or art outside the pure mathematics of definitions, theorems, and proofs. For example:
The art of formulating theorems and finding proofs. Formulating fruitful conjectures. Finding the right arguments. Formulating the relevant lemmas. Finding the right generalization. Constructing counterexamples.
Insight. Achieving a Cartesian (in Rodrigo's useful terminology) understanding of a theorem, a proof, or a mathematical concept.
Creativity. The art of creating new, exciting, powerful mathematical theories.
Aesthetic judgment. The art of evaluating math products. Judging whether a theorem or proof is "elegant", "beautiful", etc., to the extent that this is different from just being short.
Expression. The art of writing up your new theorem and proof in a form that is both readable and intelligible.
Reading. Learning how to read mathematics and assimilate what's important into your understanding.
Visualization. Forming mental images that help in the understanding of mathematics. As was discussed during the panel, in some cases a mathematical fact is "visually obvious" in ways that are painful to formalize as proofs from axioms.
Mathematical modeling. Applications didn't come up much at the symposium. I don't know if anyone on this mailing list is an applied mathematician. But the art of using a mathematical model in a real-world situation --- finding or constructing an appropriate mathematical model for a given situation, or combining multiple models; applying it; knowing how much to trust the answers it produces; and so on --- is a problem I've thought about for many years, without getting much beyond an appreciation for the difficulty of the problem. If you're planning to hold more meetings, it might be worthwhile to get the perspective of an applied mathematician.
Some of these may be good fits for machine learning technology in something like its current form. There is not much evidence that they all will be. In particular, current machine learning technology requires data corpora, generally very large ones, and clear-cut evaluation metrics.
Another form of tacit mathematical knowledge, which was also raised during the discussion, is the basic mathematical sense, separate from schooling, found in non-mathematicians, children, and even pre-verbal infants and some animals. I don't know whether this group is seriously interested in this issue, but if so, there's a lot of good work among cognitive psychologists, and it might be worthwhile bringing in someone who works in this area, such as Stanislas Dehaene of the Collège de France.
Jeremy Avigad, Professor of Philosophy and Mathematical Sciences, Carnegie Mellon4
Among the presentations, I especially liked Johan Commelin's talk on formal methods in mathematics and Geordie Williamson's talk on machine learning in mathematics. These were both down-to-earth overviews of the technologies and what they can and can't do. I find that sort of thing useful for thinking about what we can expect in the years to come and how we should allocate time, energy, and resources.
A number of talks and discussions raised concerns about the potential negative impacts of technology on mathematics. I have publicly taken a very optimistic view, saying things like "technology offers vast new opportunities, and we should explore them" and "it's just technology; we are still in charge." But I know that you and others have expressed more pessimism, and the point is well taken that there are good reasons to be concerned. I find it helpful to think about the technology for digital communication that has been introduced over the last few decades. It's hard to remember how we ever used to meet people in places before we had cell phones or how we managed to get anywhere without Google Maps, but we have also seen the devastating psychological effects of excessive screen time and the rise of harmful communities sustained by digital media. Smartphones are here to stay, but perhaps we could have done better to anticipate the problems and mitigate them. In the same way, insofar as computational technology is useful for mathematics, we can't prevent it from being used. But now would be a good time to start thinking about the potential downsides and understanding how to use the technology well, and the meeting made a small but promising start on this.
Another theme that the meeting made salient is the way that disciplinary specialization makes it hard to carry out research that does not easily fit into existing boxes. Many mathematicians would characterize Johan's and Geordie's work as computer science, whereas to computer scientists, it is clearly mathematics. Neither mathematicians nor computer scientists are comfortable evaluating the work, so we are seeing important new research being carried out without a proper disciplinary home. Maybe these are just the inevitable growing pains for new fields of research, but at least the meeting at the Fields opened up ways of thinking about such research and how it fits in.
This also applies to other fields of research that fall through the cracks. In my talk I made an impassioned plea for the history of mathematics. At the moment, I can only think of two historians of mathematics employed in academia in North America. (I am sure there are others I don't know about, and I know there are more in Europe, e.g. in France.) This seems crazy, given the role that mathematics has had in shaping our history and thought. But in light of institutional pressures, it is perfectly understandable: mathematicians don't get hired or promoted for learning to master the nuances of historiography and historians don't get hired or promoted for understanding theorems. The meeting helped highlight problems like this by embracing topics that don't have an obvious home.
Indeed, the fact that the meeting was so successful in bringing together different disciplines and perspectives underscores the value of doing so. These days, the humanities are getting squeezed, but we are still seeing young people interested in bringing the humanities and the sciences closer together. For example, we see mathematicians interested in learning about the subject's history and we see sociologists and anthropologists interested in gaining technical skills in the sciences, toward gaining a better understanding of the cultures of thought and the institutions that shape them. That's not to say that specialization is entirely bad; there are plenty of talented young people with extraordinary determination and focus doing wonderful things in their chosen fields. But there is also a value to supporting a diversity of skills and perspectives in any discipline, and current academic practices tend to discourage that, whereas the meeting at the Fields was a step in the right direction.
In short, the meeting made a good start on sustaining a dialogue on a number of important issues. I am deeply grateful to Akshay for making the meeting possible and to you, Kevin, Maia, and Alma for putting it together. I hope the dialogue will continue.
Johan Commelin, Mathematisches Institut, Albert-Ludwigs-Universität Freiburg
This has been a very interesting exchange of ideas. I'm already missing all the fruitful conversations that we had in Toronto. One of the things I liked best about this symposium is how people from very different backgrounds came together and had genuinely useful and interesting discussions together. My impression is that we saw "the whole is more than the sum of its parts" in action, with all the different contributions.
One of the things that didn't come up so often, but maybe permeated several of the discussions, is that we can and should play an active role in how we shape technology and how technology shapes the future.
It doesn't just happen, and we don't have to wait passively to see what Silicon Valley will do.
In that sense, I very much appreciate the final message in Jeremy's talk, where he calls us to live our mathematical lives deliberately.
Andrew Granville, Professor of Mathematics, Université de Montréal
Let me comment on:
"Many mathematicians would characterize Johan's and Geordie's work as computer science, whereas to computer scientists, it is clearly mathematics." and "until Johan managed to find a permanent job at Utrecht, it was a big open question as to whether anyone could find an academic job on the basis of that sort of work."
Mathematics of Computation is an old area that has had its own journal for almost 80 years, and now there is Experimental Mathematics, and there have been one or two others. Dan Shanks, a founding editor of Math Comp, liked to stress the "Of" in the title of the journal -- it is not a journal for calculations, it is a journal about them. This is an important distinction because the journal is about the mathematics being done (inspired by calculations) ... (though articles in Math Comp can do research on, say, running times of algorithms, and so not involve any actual calculations!) I was an editor of Math Comp for many years, but doing that job was mostly boring because of the number of poorly thought out, unmotivated calculations that were submitted: people who did not try to understand the mathematics behind what they were doing, who just hoped that the calculations would help someone more serious. When you see the efforts put into constructing tables of properties of elliptic curves, the best work is full of interest and insight (though sometimes quite technical).
So I, for one, 100% do not view Johan's or Geordie's work as "not mathematics". They are, however, very different.

What Geordie did fits very well into the traditions of mathematics of computation. Deep*** was limited in what it could calculate until Williamson made it flower. He gained insight from getting a wider range of examples, which allowed him to develop the theory -- exactly what you hope for in a top-quality large-scale calculation in pure mathematics.

What Johan did is both very new and very old. In one sense it is a new pedantic proof of an already proved theorem, one that chases down some details, and this allows both significant simplification and understanding of an important theorem. On this basis alone it is an important work and he is very hireable (if his achievements are recognizable -- they are not that easy to discern in much of what is written about the Liquid Tensor project, but perhaps specialized reference letters would have made his accomplishments clear). On the other hand, developing and refining a proof this way is indeed very new, and clearly among the first steps toward usable and useful proof verification.
Two remarks: I think we should write "that old trope, the 4CT" -- there have been so many great computational projects throughout the history of mathematics. The 4CT often seems to be presented as an outlier, but it is very far from that. Many breakthroughs come from the right calculations.

Let me give some examples in number theory. Famously, the Birch-Swinnerton-Dyer conjecture started from calculations on elliptic curves of a special form, and they made a spectacular guess backed up by calculations showing their guess seemed inspired. What is historically interesting is that the product they studied cannot, even now, be proved to be convergent (to prove that it does requires more than Wiles's theorem plus the Riemann Hypothesis). However, Davenport suggested they replace their tricky product by a relevantly constructed L-function, so their second paper gives the modern formulation, and this led to an explosion of ideas.

The precursor to much of modern arithmetic geometry is Ribet's early understanding of the links between the p-divisibility of the orders of χ-components of the class group of a cyclotomic field and the zeros of p-adic L-functions. If you look at that famous paper you will find him crediting Lehmer's brilliant calculations with identifying the links from which his work started, and his subsequent desire to explain them.

A fair portion of Zagier's work, for instance, consists of massive calculations of examples, followed by guesses made and some lemmas proved in the direction of those guesses (he may not 100% present it that way, but read carefully and it is clear that that is what he has done). Much the same can be said of Elkies. I state these examples, off the top of my head, of great calculations and calculators that changed the way we think -- changes that probably would have been very slow to happen without the insights and inspiration that brilliant calculations can give.
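Recall the shape of the product in question (in its standard formulation, with N_p the number of points on the curve modulo p and r the rank of the curve); their calculations suggested the asymptotic

```latex
\prod_{p \le X} \frac{N_p}{p} \;\sim\; C\,(\log X)^{r}
\qquad \text{as } X \to \infty .
```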
Geordie Williamson, Professor of Mathematics, University of Sydney
I am a little suspicious of interdisciplinary conferences. As a speaker at any conference I often struggle to decide who my audience is, and this is magnified at an interdisciplinary event. Here, though, I was really impressed by the quality of the talks.
Here are some points which had a big impact on me; this is very much a personal reflection:
Intentionality: the tech world likes to ask "what will happen?". I can understand the sentiment. Developments in subjects touched by this conference (like machine learning and automated proof checking) have been rapid and surprising over the last decade. However, this conference really emphasised the question "what do we want to happen?". We have some power over this, and we shouldn't forget that. (I think this is one of the main points of Michael's substack, which is one of the reasons I enjoy it so much. Another reason is the unbelievably high ratio of fascinating quotes from throughout history. How does he do it5?!)
Boundary formation: Throughout my mathematical life I have felt that the attitude of some mathematicians towards computation is unnecessarily dismissive. I find the development of computer science over the last fifty years absolutely extraordinary -- why don't other mathematicians seem to care? In Ochigame's talk I learnt that this process of "walling off" has a name -- boundary formation. It was consoling to realise that the dismissiveness I have perceived has more to do with insecurity around the preservation of status than with any inherent judgement of quality or depth.
It is also interesting that the question of "status" is one of the main themes of Akshay's essay. It is hopelessly naive, but prior to Akshay's essay and this conference, I hadn't really thought about status as a driving force in the mathematical community.
Concepts: more broadly, I found the conference fascinating, in that it gave me ways of thinking about our community from a more sociological / anthropological point of view. The discussions around the development of mathematical taste, how that taste has changed, and whether there is some intrinsic notion of difficulty were all inspiring to me.
I found it amusing that these concepts, which I hear once and which are so helpful and enlightening to me, are exactly the kind of things that current AI can't do!
*****
Also, I would like to voice complete agreement with Andrew and Ernie's comments. They very much align with my impression of the computation / math interface. I'm happy to discuss this more, but don't want to burden this thread too much.6
Simon DeDeo, Associate Professor, Social and Decision Sciences, Carnegie Mellon University7
Three takeaways for me:
Once mathematics detached itself from problems in (for example) physics, the question of mathematical *taste* came to the fore. What problems are worth working on is no longer even a partial given, and mathematicians must find, or remember, or reconstruct their own epistemic values. To a certain extent this is true for science as well — what scientists think is a good explanation has evolved — but a physicist can always say “quantum theory is beautiful; look at this transistor”, while mathematics is far less constrained.
There is no question that the use of computers to aid proof making, or proof verification, will affect the development of these values — making some things less appealing, and other things more appealing. This already happened, as I understand, in the Four Color Theorem; the existence of even an inelegant proof made the search for a human one less appealing.
The second thing I learned—and something that I think I’ll follow up on—is the rather remarkable heuristic that (I think it was) Akshay described as “conservation of difficulty”. A sign (for example) that a lemma is wrong is that it lets you prove things you don’t deserve to.
Machines, of course, don’t need the heuristic. The principle of explosion means that from a false lemma one can derive outright falsehoods, which are easier to spot than an “undeserved”, “too-easy” result. But I wonder what role CoD plays in taste-making (for example), and how it guides the development of new notations and forms of what Emily said is sometimes non-pejoratively called “abstract nonsense”.
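In Lean 4, for instance, the principle of explosion is a one-liner:

```lean
-- From a proof of False, any proposition P follows. A candidate lemma
-- from which the machine derives False is thereby refuted outright,
-- with no appeal to taste.
theorem anything (P : Prop) (h : False) : P := h.elim
```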
On the broadest level: I was struck by the extent to which serious mathematicians are concerned about the stability, validity, and reliability of their field. I did not expect this. In the final analysis, I found myself seeing the role of computer-aided proof as an overall positive—despite the reservations described above.
I worry about a nightmare scenario, where major errors are, one day, found in a critical chunk of mathematics. My sense is that the result would be an even greater intensification of boundaries: newcomers to mathematics, new fields, new groups, would be under perhaps pathological levels of suspicion.
Computer aided proof could help both avoid the scenario, and lower the barriers to entrants who would not need to convince a more privileged inner circle to give them a hearing.
Maia Fraser, Associate Professor, Department of Mathematics and Statistics, University of Ottawa
Like Johan and others, one of the things I enjoyed most about the symposium was the way people from very different disciplines came together and genuinely engaged with each other in discussions. This was all the more remarkable to me, and inspiring, in that mathematicians and non-mathematicians were the ones conversing and really learning and connecting with each other. Working in both machine learning and symplectic geometry myself, I can say this type of open, curious exchange between Math and the "outside world" is not something I have encountered often, and this symposium was a moving and very meaningful experience for me. I wish to thank all the speakers and participants who shared thoughts around Math and AI. In particular, all of this exchanging was enabled by Akshay, and I am very appreciative that he took such bold steps, both to write his IAS essay and then to dedicate this symposium to the topic. Thank you, Akshay.
To be honest, I found every talk thought-provoking, and the discussions on breaks as well. I still find myself thinking back to these and looking forward to future opportunities to interact. To give a few more specifics in response to Michael's request, here are some moments that were highlights for me:
I really appreciated Jeremy's mention of mindfulness. Since we have been pondering the interaction between human mathematicians and AI, I think it's especially relevant to highlight mindfulness. Not only is mindfulness one way in which humans are truly, deeply connected with each other but in navigating complicated times, it seems one of the sagest principles to follow. In the case at hand it would mean something like neither rejecting changes in a reflexive manner, nor pursuing them with reckless abandon, but finding a balanced way that involves genuine curiosity and care. These were in abundance at the symposium.
I found Geordie's talk a central moment, providing a very effective bridge between pure math and machine learning: his tutorial on what deep learning is, and his "guidebook" to situations where deep learning is most likely to be an effective tool for mathematicians, were excellent. I also very much liked his advice that mathematicians who want to use deep learning can benefit by interacting with specialists in deep learning, or machine learning more broadly.
Both Tim and Johan, in their talks, took the listener along for what were very open and revealing tours through the process of solving math problems by computer. It was very interesting for me both to see these software tools and to think more closely about the thought process of proving things.
I felt that many facets of the question of how AI and Math will co-evolve were brought to life in the symposium, and it would be great to follow through to further develop these. Melanie's discussion of how current AI systems are unable to form humanlike concepts and abstractions is a great doorway into looking further at what the mind is doing, and also at what future AI might try to do and how. Irina's talk offered glimpses of potential developments in this direction. I thought Rodrigo brought up very helpful notions such as boundary formation, and also the listing of steps we might see: can aleph(0) verify existing results/proofs; can aleph(0) find novel proofs that humans can easily understand; ditto, but that humans cannot easily understand; etc. Ernie followed up to add points there, and using aleph(0) to propose conjectures or find possible counterexamples is already happening (cf. Geordie's talk). I would love to keep a conversation going to further structure and formulate potential roadmaps and milestones.
Colin McLarty, Professor of Philosophy, Case Western Reserve
It was a terrific conference, both in the speakers and in the many discussions among everyone there. The Fields Institute is really well arranged to promote discussion outside of the talks, and the schedule led to a lot of that each day. For me, the speakers on technical topics did a great job laying out the basics and the issues. Of course all of us came in knowing some uses of AI and something about computer-assisted proof. But I at least had very little idea how computer proof today actually works, or where the people doing it see it going in the near future. Then discussion within each talk, followed by lots of discussion in bigger or smaller groups outside the talks, brought in so many valuable perspectives. Great thanks to Akshay for suggesting this topic.
Rodrigo Ochigame, Assistant Professor, Institute of Cultural Anthropology and Development Sociology, Leiden University
Thanks again to all the organizers and to Akshay for taking the bold steps that made this fascinating space of inquiry possible. It is not easy to produce such thoughtful exchanges among people with such divergent points of view and kinds of expertise. I think the symposium was quite an extraordinary event.
I especially appreciated how, by discussing concrete details of specific mathematical results, formal systems, and computer programs, the group explored novel manifestations of some of the most basic questions about math, from the nature of proof (strictly mechanical? inevitably social?) to the very purpose of mathematical work (the production of new theorems? the experience of new insights?).
Upon reflecting on our discussions, I have been thinking about two potentially fruitful research directions which seem relatively underrepresented in current automation efforts.
The first direction is the design of technologies that facilitate the human understanding of difficult results, including computer-generated proofs. This is slightly related to issues of "interpretability" and "explainability" in machine learning more broadly, but I think the kinds of understanding and insight that matter to mathematicians are quite different from those offered by existing techniques of "interpretable/explainable ML." I wonder if we will see more future work on specialized techniques of computer-assisted mathematical insight.
The other direction is the elaboration of a richer vocabulary for varieties of proofs and results. We currently use the same word, "proof," to designate very different kinds of demonstration, and to draw a clear-cut boundary between theorems and conjectures. This minimalist terminology has its virtues, but can also generate controversy. Many disagreements are centered on the criteria of inclusion into this singular category of proof, for example whether formal verification should be required, or whether completely inscrutable computer-generated proofs should be sufficient. I wonder if we should instead adopt a terminology that distinguishes between multiple dimensions and gradations of reliability.
The symposium inspired me to pursue these issues further, and I look forward to continuing the discussions with everyone.
1. Several people asked me, naturally enough, what I meant by “complex way of life.” This is a fair question, but I used the expression not as an introduction to a taxonomy of “complex ways of life” in which mathematics would fit naturally but rather as an invitation to think of mathematics as inseparable from all the institutions, histories, narratives, and other aspects of material culture that contribute to shaping it as a human practice.

2. This is explained at much greater length in Ochigame’s lecture, the first I’ve ever heard by a professional anthropologist of mathematics.

3. Also co-author, with Gary Marcus, of Rebooting AI.

4. Also Director of the Hoskinson Center for Formal Mathematics.

5. Note from MH: Sometimes I find the quotes in whatever I happen to be reading; very occasionally I actually remember something I have read. But often I follow an algorithm: (a) Decide on the contents of the quote; (b) Choose a few authors whom I remember vaguely as having written about the topic with the desired perspective, although I can’t remember the specific quotes; (c) Type the authors’ names and the topic into a search engine and see what comes up.

Often there are surprises. I just tried this with “Kierkegaard” and “automata” and this is what I discovered.

6. Also Director of the Sydney Mathematical Research Institute.

7. Also External Professor at the Santa Fe Institute.