Using AI to cheat at the Joint Mathematics Meetings
ChatGPT is asked to reflect on how AI will ruin mathematics
I know people who have just decided, ‘My time is really important, I don’t care.’ So they ChatGPT their reply to the student. And so we’ve got robots talking to robots. And that, to me, is the end of higher education right there. That’s the apocalypse.
(Amanda Gregg, Professor of Economics, Middlebury College)
The experiment below was inspired by the article in which I read Gregg’s comment.1 By assigning a question to ChatGPT, one is prepared to identify the result of cheating, so that anything sufficiently different from what ChatGPT writes qualifies as original work.2
I had been wondering whether the technology is capable of cheating on behalf of the students in the course I will be teaching next semester with philosopher Justin Clarke-Doane. Will any of them try to get away with feeding the reading list into an AI in order to generate the 20-page term paper that is the course’s main assignment? This raises a few interesting questions: will LLMs eventually expand to occupy the entire space of critical thought? Is this space combinatorial in nature, in the sense of Whitehead’s claim that all philosophy is a series of footnotes to Plato?
The assignment I chose was precisely the one I assigned to myself for a panel at next month’s Joint Mathematics Meetings (subtitled “Mathematics in the Age of AI”3) in Seattle. My aim was to use the technology to generate a catalogue of the ideas already in circulation, in "the wild," so to speak (i.e., in essay form), to help me avoid lapsing into what already counts as conventional thinking. As it turns out, some of the points in ChatGPT's essay overlap with points I had already prepared in my notes.
Notably absent, on the other hand, is any reference to the power of the tech industry, or to how adapting to the industry's norms and priorities is likely to erode the norms and values of mathematics. That’s fortunate, because I had chosen to focus on the risks of allowing the industry to use its dominance of AI development to set the agenda of mathematics. But you’ll have to wait a few weeks to see whether the message stuck.
Q: Write a 2000 word essay about how AI will ruin mathematics.
ChatGPT said: How AI Will Ruin Mathematics4
The advent of Artificial Intelligence (AI) has brought about monumental shifts in nearly every field, from medicine and law to finance and education. Yet, among the areas where AI's influence may cause the most disruption is mathematics. While AI has the potential to accelerate certain processes within mathematics, automate tedious tasks, and even propose novel solutions, there is an underlying danger that the widespread integration of AI into mathematical practice could have significant detrimental effects on the discipline. This essay will explore how AI could, in fact, “ruin” mathematics by undermining its core principles, diminishing human intellectual engagement, and threatening the foundational aspects of mathematical rigor and creativity.
The Devaluation of Human Intuition and Creativity
At the heart of mathematics lies human intuition and creativity. The exploration of abstract concepts, the development of theories, and the formulation of proofs are inherently human endeavors. Many mathematicians argue that part of what makes mathematics such a uniquely intellectual pursuit is the ability to see patterns, conceptualize relationships, and reason in innovative ways. AI, with its algorithms and data-driven approaches, is not inherently equipped to engage in this type of creative exploration.
AI systems are designed to process vast amounts of data and find patterns within that data. While this can be valuable for discovering new insights or optimizing certain processes, AI operates through statistical methods rather than through true creativity. For instance, deep learning algorithms excel at identifying patterns in numbers and symbols, but they lack the ability to generate truly novel ideas in the same way a mathematician might, through insight or inspiration. This raises a crucial concern: as AI increasingly takes over tasks traditionally performed by human mathematicians—such as conjecturing new results or discovering proofs—there is a risk that the genuine human experience of mathematical discovery could be lost.
Historically, many breakthroughs in mathematics have come not just from computational techniques, but from flashes of insight that transcend rigid algorithmic thinking. Take, for example, Andrew Wiles' proof of Fermat's Last Theorem, which required years of thought, intuition, and interdisciplinary knowledge, far beyond what any machine could have come up with at the time. If AI systems come to dominate the landscape of mathematical research, will we see the end of such creative, groundbreaking work? It is not inconceivable that mathematicians would increasingly defer to machines for solutions rather than trust their own intuition.
Furthermore, AI's deterministic nature could lead to the stagnation of mathematics itself. Once AI becomes adept at proving theorems or suggesting new ones, the temptation will be to treat AI-generated results as definitive without the same level of critical engagement. When mathematicians rely too heavily on AI's computational power, they risk overlooking the deep, abstract nature of the concepts that they are working with. Mathematical discovery, in its richest form, is not about checking boxes or optimizing equations—it's about developing a deeper understanding of the universe through abstract reasoning.
The Erosion of Mathematical Rigor
Mathematics has always been about precision, logic, and rigor. A mathematical proof, to be considered valid, must follow a chain of deductive reasoning that ensures no logical gaps exist in the argument. It is this very rigor that differentiates mathematics from other fields and gives it its enduring reliability. However, AI’s approach to mathematics may threaten the very foundation of this rigor.
Machine learning models, particularly neural networks, are black-box systems. They make decisions and generate outputs based on learned patterns without necessarily providing a clear path of reasoning. In the context of mathematical proofs, this could be disastrous. If AI is used to generate proofs or conjectures, it may produce results that are technically accurate but lack transparency. Since AI lacks the ability to explain its thought process in human terms, it becomes exceedingly difficult for mathematicians to verify or understand the reasoning behind its conclusions. This situation raises serious concerns about the integrity of mathematical work done by AI.
One such example is the use of AI to check the validity of proofs or solve complex equations. While AI tools like automated theorem provers are already being used in mathematics, their dependence on large datasets and statistical methods can make them prone to errors that are not easily detected. Moreover, these systems often do not produce detailed, step-by-step explanations of why a proof is correct, which is essential for maintaining the rigor of mathematical arguments. In the worst-case scenario, reliance on AI could lead to a situation where proofs are accepted simply because a machine says they are valid, rather than because they have undergone careful, human-led scrutiny.
Mathematics thrives on the ability to trace each step of an argument, examining the validity of each assumption and conclusion. If AI-generated proofs become the norm, it could lead to a culture where mathematicians become passive consumers of results, rather than active participants in the process of reasoning and inquiry. The very act of proof could become a formality, devoid of the careful scrutiny that is the hallmark of true mathematical rigor.
The Risk of Overshadowing Human Expertise
Another major concern about AI’s role in mathematics is the potential for it to overshadow the expertise of human mathematicians. In the early days of AI research, many saw the technology as a supplement to human intelligence, a tool that would help augment the abilities of experts. However, as AI continues to improve, it risks shifting the balance from human-driven inquiry to machine-dominated processes. This shift could have profound consequences for the future of the mathematical community.
Consider the example of AI systems being used to solve problems in number theory, a field that has historically been the domain of some of the brightest human minds. The ability of AI to quickly sift through vast amounts of data and perform complex calculations has already shown promise in discovering new patterns and relations. However, this could lead to a situation where AI becomes the primary agent of discovery in such fields, with human mathematicians relegated to the role of passive overseers. As a result, we might witness a decline in the quality of human-driven mathematical innovation, as the community grows increasingly dependent on machine-generated solutions.
The danger here is that mathematics could become more focused on optimizing the use of AI rather than fostering human intellectual curiosity. The ultimate purpose of mathematical research is not simply to generate answers or solutions, but to foster a deeper understanding of the principles and structures that underpin the world around us. If AI begins to dominate this process, it could lead to a situation where the essence of mathematics—curiosity, exploration, and understanding—is sacrificed in favor of expediency.
This dynamic is already evident in other fields. In scientific research, for instance, the increasing reliance on AI to analyze data has led to concerns that human researchers are becoming increasingly disconnected from the scientific process. In mathematics, this trend could be even more pronounced. As AI systems begin to handle tasks ranging from hypothesis generation to proof verification, the need for human intuition, judgment, and expertise may diminish. The result could be a decline in the quality of mathematical thought, as the community becomes more focused on producing results rather than on truly understanding the foundations of those results.
The Standardization of Mathematical Thought
One of the most insidious consequences of AI’s encroachment on mathematics is the potential for the standardization of mathematical thought. AI systems, especially those trained on large datasets, often rely on predefined methods and techniques to generate their results. This reliance on established patterns could stifle the diversity of mathematical approaches that has historically characterized the field.
Mathematics, like any other discipline, has thrived because of the diversity of thought within it. Different schools of thought, different methodologies, and the individual insights of various mathematicians have led to a rich and varied field, where ideas evolve in unexpected and creative ways. However, as AI systems become more prevalent, there is a risk that mathematical inquiry will become homogenized, as algorithms begin to replicate the most common solutions and methods found in existing data. This could lead to a situation where mathematical research becomes an exercise in applying AI algorithms to familiar problems, rather than a process of genuine intellectual exploration.
The danger is that AI could create a situation in which all mathematical research looks the same, follows similar patterns, and explores the same set of ideas. By relying too heavily on AI, mathematicians may lose the ability to think outside the box, to break free from established methodologies, and to develop new, unconventional approaches to problems. This could lead to a stagnation of mathematical progress, where creativity and diversity of thought are sacrificed in favor of efficiency and predictability.
Conclusion
In conclusion, while AI holds immense potential to assist mathematicians in their work, there are significant risks involved in its integration into the mathematical process. The devaluation of human intuition and creativity, the erosion of mathematical rigor, the overshadowing of human expertise, and the standardization of thought all represent potential threats to the discipline. Mathematics is not merely a tool for solving problems; it is a deeply human pursuit that involves creativity, insight, and intellectual exploration. If AI becomes the primary driver of mathematical research, we risk losing the very essence of what makes mathematics a unique and enriching intellectual endeavor.
It is not that AI is inherently bad for mathematics, but rather that it should be viewed with caution. AI should complement, not replace, human mathematicians. The future of mathematics lies in the partnership between human creativity and AI’s computational power, with humans still firmly in control of the intellectual processes that drive mathematical discovery. Without this balance, there is a real danger that AI could, indeed, ruin mathematics—not through malevolent intent, but by eroding the values that make the discipline so vital and meaningful.
Q: Can you identify some of the sources of the ideas in this essay?
ChatGPT said: The ideas discussed in the essay about how AI could "ruin" mathematics draw on several broad concepts, concerns, and trends that are currently part of the conversation in both mathematics and AI. While I can't cite specific sources directly because this essay is an original composition based on general knowledge and concepts, I can identify relevant areas and thinkers that inform some of the points made in the essay:
1. AI and Creativity in Mathematics
The idea that AI lacks true creativity or intuition in mathematics is inspired by ongoing debates in the philosophy of AI and cognitive science. This theme appears in discussions about the difference between human creativity and machine-based pattern recognition. Notable thinkers in AI, such as Hubert Dreyfus, have argued that human expertise and intuition are grounded in embodied experience, something that AI, as it currently stands, lacks.
Philosophers like John Searle (especially with his "Chinese Room" argument) have argued that AI can simulate understanding but does not truly understand in the way humans do, a concern that aligns with the claim that AI cannot replace the intuitive and creative aspects of human mathematical work.
The critique of AI’s reliance on patterns is also inspired by Gary Marcus, who argues in books like Rebooting AI that deep learning (used in many AI systems) is insufficient for replicating the breadth of human cognition, especially when it comes to tasks requiring insight or creative leaps.
2. Mathematical Rigor and AI
The concern about the loss of mathematical rigor when AI generates proofs is inspired by the growing use of automated theorem-proving tools and their limitations. The development of tools like Coq, Lean, and Mizar, which are designed to help with formal proof verification, has sparked discussions about the nature of proofs in mathematics. Some critics argue that while these tools ensure correctness, they may not offer the kind of deep understanding or simplicity that human mathematicians seek when creating proofs.
The notion that AI-generated proofs may lack transparency and detailed reasoning echoes concerns from Joseph Weizenbaum, who in Computer Power and Human Reason raised concerns about the "black-box" nature of computer-based systems. In this context, the inability of AI systems to explain their reasoning in human terms is a significant risk to the transparency and trust that are essential in mathematical work.
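(An aside from this newsletter, not part of ChatGPT’s reply: for readers who have never seen one of the proof assistants named above, here is a minimal Lean 4 sketch, with a theorem name of my own invention. The kernel certifies every inference mechanically, yet the finished script records nothing about why the fact holds, which is the transparency worry being described.)

-- A toy machine-checked statement: Lean's kernel verifies each step,
-- but the proof term carries no explanation or motivation.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b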
3. Overshadowing Human Expertise
The potential for AI to overshadow human mathematicians draws from general critiques of automation in other intellectual fields, as seen in discussions about the impact of AI on science and research. Zachary Lipton and Gary Marcus, as well as scholars in the philosophy of science, have explored how automation might lead to the erosion of human expertise. Specifically, Lipton discusses the dangers of over-reliance on machine learning, where human researchers may defer to the model's predictions without fully understanding the underlying assumptions.
The specific example of number theory and mathematical fields being dominated by AI is informed by the increasing use of AI techniques like machine learning and pattern recognition in areas such as prime number distribution, where AI has been applied to generate conjectures or explore complex number-theoretic data.
4. Standardization of Mathematical Thought
The idea that AI could lead to the standardization of mathematical thinking is reminiscent of broader concerns about AI’s potential to reduce diversity of thought in research. This is discussed in books like “The Age of Em” by Gerd Leonhard, who highlights the concern that AI, in favoring efficiency and patterns, could limit innovation by pushing out less “optimal” ways of thinking.
AI's tendency to optimize solutions within a narrow set of parameters can be seen in critiques of machine learning's overfitting to data. Ellen Ullman, in Close to the Machine, discusses how reliance on algorithms in other domains may lead to a lack of new ideas and discourage creative approaches.
5. Mathematics as a Human Endeavor
The claim that mathematics is not just about solving problems but about exploring human understanding and creativity is rooted in the philosophy of mathematics. Imre Lakatos, in Proofs and Refutations, and Paul Feyerabend, in Against Method, emphasize the importance of human creativity and open-ended exploration in scientific and mathematical development. They argue that the true value of mathematics lies not in its results, but in the ongoing process of discovery.
Additionally, mathematicians like William P. Thurston and Ian Stewart have emphasized the human aspects of mathematical discovery in their work. Thurston, in his famous lecture "On the Need for Mathematical Beauty," argued that mathematics is driven not just by logical rigor, but by aesthetic and intellectual pleasures that AI cannot replicate.
General Inspirations from AI Literature:
The Limits of AI: A range of authors have commented on AI’s inherent limitations in replicating human cognitive abilities. For example, Marvin Minsky and John McCarthy, who were among the pioneers of AI, acknowledged that machine intelligence, even in its most advanced forms, could never fully replicate human ingenuity or creativity.
The rise of automated systems in creative fields—like art, music, and literature—has similarly sparked concerns about the loss of human involvement and intuition. Works like Shoshana Zuboff's The Age of Surveillance Capitalism highlight how automation and machine-driven processes often erode human agency and authenticity in intellectual and artistic domains.
Conclusion:
The concerns expressed in the essay about AI's impact on mathematics draw on a combination of philosophical arguments about the nature of human creativity, critiques of automation in intellectual fields, and specific discussions within the mathematical community about the role of formal proof systems and automated tools. The sources of these ideas range from classic critiques of AI by figures like Dreyfus and Searle to contemporary debates about AI’s limitations in research and human cognition. While no single source directly informs the entire essay, the points made are informed by ongoing discussions in both the philosophy of AI and the philosophy of mathematics.
Concluding thought
It's possible that the AMS is going about this backwards: instead of putting colleagues to the trouble and expense of traveling to Seattle, the organizers could ask speakers to send titles and abstracts, then use the technology to generate the talks. And why not generate the Q&A session as well?
Coming soon: ChatGPT answers the same question using French sources.
1. Beth McMurtrie, “Cheating Has Become Normal,” Chronicle of Higher Education, November 4, 2024.
2. Some readers will already have perceived the self-referentiality of the circumstances of this week’s post, and increasingly of the newsletter as a whole. If I were to accept paid subscriptions, would I be able to define my profession as essayist or as mere prompt engineer? (“Prompt engineering” earned its own Wikipedia page as early as October 20, 2021.)
3. The following figures are taken from the Google Books Ngram Viewer:
Age of Aquarius: 0.0000001990% in 1972, at its peak
Age of AI: 0.0000018505% in 2022, the last year for which figures are available, but rapidly growing
4. An earlier attempt with a more neutral question gave an unsatisfactory result, so I decided to word the question crudely. For the record, I don’t believe that AI will in fact “ruin” mathematics, but I can easily anticipate that the practice will be damaged by close association with the principal industrial developers of AI.
Comments

I think the focus of mathematical AI criticism needs to be narrower. Is the concern that AI has limitations, or that it's too good? Nuance allows both of these angles of attack to coexist, but it's bleak enough to speculate about AI making mathematicians obsolete. Suppose AI progresses to the point where all new conjectures and proofs are developed by AIs before humans get to them: the Riemann hypothesis gets proven by AI before humans prove it, as do the twin prime conjecture, Goldbach, and every conjectured and as-yet unconjectured mathematical result. Before that point I would start filling out applications to the nearest Luddite cells, and I likely wouldn't be alone.
I notice that people struggle, generally, to show why generative AI is so poisonous. "Oh, what's the big deal? It's just like spellcheck. It's just like a graphing calculator. It's just like a search engine. It's just like Photoshop", etc. It's always analogous to some harmless thing - but it's not! If you're creatively inclined, if you have a mathematics background, you just know in your bones it's not the same. There's a clip of Andrew Wiles describing what proving Fermat meant to him, and he gets so overwhelmed with emotion that he - a grown man in his forties - starts tearing up and has to turn away from the camera. To show what we're in danger of losing, maybe mathematicians - uncharacteristically - need to introduce more emotion to this discussion.
FYI the Chat comment on Paul Feyerabend is confabulation. He and Imre Lakatos were planning to write a book together when Lakatos died; they were close friends, and PF wrote on IL but not much vice versa. Their texts are correlated. But unlike Proofs and Refutations, Against Method (PF's half of the planned book) is not at all about math, and only indirectly about human creativity (as opposed to scientific progress). The comment about 'results' for P&R is also off: the point of P&R is the unity of proofs, theorems ('results'), and concepts essential to both, formed through their interaction in historical time. No 'understanding' here!