How do mathematicians really feel about AI?
The American Mathematical Society AI Advisory Committee wants to know
That headline, with a link to the newly convened advisory group, arrived in my in-box on August 31. I immediately began sharing it, first with members of the committee and now with readers of Silicon Reckoner.
The AMS invites you to SHARE YOUR THOUGHTS
When you click on the link above, you will be asked to fill in your name and email, and you will see five boxes, each with a suggested topic for comment. Here they are, numbered for typographical convenience:
1. Publications: Evolving technologies offer new models for publication and communication.
2. Education: The nature of undergraduate and graduate education in mathematics is changing rapidly.
3. Research: AI will create new research directions and funding opportunities for mathematicians, and may influence the nature of basic mathematical research.
4. Community: The role of mathematicians in society, and the types and availability of mathematical work, may all change.
5. Other: Feel free to share other ways in which you believe the AMS can support our community in handling AI-related questions.
This is a pretty good list, but it doesn’t really leave room for critical analysis, and it persists in treating mathematics in isolation from absolutely everything. These two shortcomings are intimately related. Were our mathematical leaders to look out upon the world, they would see some of our academic colleagues lining up to share their anxieties about the implications of AI for the future of their discipline with anyone who will listen. They would see others applying their professional skills to analyzing AI not merely as a series of technical challenges but as a complex social, political, and economic phenomenon, and publishing their observations in conventional outlets like the generalist press and the Chronicle of Higher Education, as well as in new venues like the journal Critical AI mentioned at the end of this post.
One is left with the impression of mathematics as a giant solipsistic bunker in the heart of academia. Outside the bunker, debates about the threats and promises of AI are raging in the media and among academics and creative professionals, some of whom have gone on strike over related issues, and won. Inside the bunker, talk is exclusively of “great potential” to “assist mathematical reasoning.” Outside the bunker, the Justice Department is suing Google and the Federal Trade Commission is suing Amazon for monopolistic practices; consultations on regulating AI are taking place at the highest levels of government in Europe and North America, against a background of intense lobbying and massive public relations campaigns by the targeted industry. Inside the bunker, our colleagues are inviting representatives of this very same industry to high-level workshops to report on recent collaborations and to plan future partnerships.
So if you’re a mathematician or if you feel the AMS is at all relevant to you, I strongly encourage you to share your thoughts with the new AI advisory group.
But you don’t have to be limited by the shape of the five boxes provided. My own 12 points, listed below, fit comfortably in the box labeled “Other” — I checked. And the committee as constituted is genuinely open to thinking outside the box — I confirmed that as well.
My recommendations to the committee
Here are my twelve suggestions. Point 12 is included (for now) as a bargaining chip.
1. Acknowledge — explicitly — that mathematics is not the only academic discipline facing challenges from AI, and that other disciplines have in many cases reflected more deeply and published extensively on the implications of the new technology; refer systematically to these publications. See, for example:
Susan D’Agostino, “Why Professors Are Polarized on AI,” Inside Higher Ed, September 13, 2023.
Corey Robin, “The End of the Take-Home Essay,” Chronicle of Higher Education, August 24, 2023.
“Response to the White House OSTP Request for Information on National Priorities for Artificial Intelligence,” Data & Society, July 7, 2023.
Tate Ryan-Mosely, “It’s time to talk about the real AI risks,” MIT Technology Review, June 12, 2023.
Kyle Chayka, “Is A.I. Art Stealing from Artists?” The New Yorker, February 10, 2023.
“Big Bot on Campus,” a collection of articles from the Chronicle of Higher Education.
(and hundreds of others). See also forthcoming issues of Critical AI, discussed below.
2. Consult with professional associations in other disciplines, including humanities and social sciences, to formulate common responses to AI.
3. Don't isolate the implications for research from the other aspects of our institutional life, including teaching; study the development of these technologies against the background of multiple crises facing the existing model of higher education.
4. Given that university resources are far too small and scattered to carry out the kind of training that produced the widely publicized generative AI platforms, will adoption of deep learning methods require partnerships with the corporations that have sole access to the necessary equipment and data? Alternatively, would the creation by governments of public deep learning projects be subject to the instability of the political process and/or subordinated to military priorities?[1]
5. Use this opportunity for a wide-ranging discussion of our values as a discipline.
a. Resist the attitude that places "proving more theorems" above all other priorities.
b. Take advantage of contemporary work by historians of mathematics on the comparative history of the role of "proof."
6. Be very suspicious of initiatives emanating from Silicon Valley. Read between the lines to understand why these corporations are so interested in automating mathematics and how likely they are to continue to support research in this area once they have met their goals.
7. Propose policies prohibiting the introduction of barriers to communication that are standard in the industry (non-disclosure agreements, proprietary clauses, etc.). At the same time, recognize that “open source” often means that outsiders provide unpaid work for the corporations.
8. Pay attention to externalities:
a. Take concrete measures, together with the European Mathematical Society and others, to protect researchers’ intellectual property against the unauthorized use of their articles as training data, paying special attention to those whose copyright specifies non-commercial use.
b. Calculate the energy and water required to train existing generative AI platforms, and estimate how much more would be needed to train generative AI for mathematics. Mathematicians should be well placed to carry out such calculations! (A back-of-envelope sketch of the kind of estimate I have in mind appears just after this list.)
9. Look critically at attempts (Gowers, Bengio) to develop "objective" measures of "interestingness" of a mathematical idea.
10. Some proponents of automation are convinced that it will contribute to the democratization of mathematics — releasing it from domination by a self-selecting elite, or making it possible for groups not affiliated with elite institutions to participate.
a. Take these claims seriously and (in consultation with social scientists) develop a model of the kind of democracy envisioned by these proponents. Will it be dependent on the industry?
b. Use this process as an opportunity to develop a genuinely democratic model of decision-making within mathematics.
11. Identify and critique the established narratives (progress, technological unemployment) and clichés ("best practices," "stakeholders") that undermine attempts to understand what is novel and specific about current challenges.
12. Transform the main internet platforms into regulated public utilities.
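To make point 8b concrete, here is a minimal back-of-envelope sketch of the kind of estimate involved. Every figure in it is an assumed placeholder: the accelerator count, per-device power draw, training time, data-center overhead, and water intensity are illustrative, not measurements of any actual platform. The point is how few parameters such an estimate requires.

```python
# Back-of-envelope estimate of training energy and cooling water.
# Every number below is an assumption chosen for illustration,
# not a measurement of any real generative AI platform.

gpu_count = 10_000        # assumed number of accelerators
gpu_power_kw = 0.7        # assumed average draw per accelerator (kW)
training_days = 90        # assumed wall-clock training time
pue = 1.2                 # assumed power usage effectiveness (overhead)
water_l_per_kwh = 1.8     # assumed cooling-water intensity (litres/kWh)

# Energy: devices x power x hours, scaled by data-center overhead.
energy_kwh = gpu_count * gpu_power_kw * training_days * 24 * pue

# Water: proportional to energy under the assumed intensity.
water_litres = energy_kwh * water_l_per_kwh

print(f"Energy: {energy_kwh / 1e6:.1f} GWh")          # ~18.1 GWh
print(f"Cooling water: {water_litres / 1e6:.1f} ML")  # ~32.7 megalitres
```

Estimating the marginal cost of training generative AI for mathematics would then amount to arguing over which of these assumed parameters change, and by how much.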
Expanding the discussion
In addition to gathering the opinions of the mathematicians who are likely to be directly affected by the introduction of the new technologies, it is essential for the AMS AI Advisory Committee to consult with experts from other fields, including but not limited to specialists in the technology itself. Many of these experts have been writing critically about AI and have grown increasingly vocal about the industry’s efforts to dominate the conversation and to set the agenda.
There is no reason to think that the industry is attempting to influence the AMS committee’s deliberations, and the committee’s outreach thus far has been free of industry jargon. But the starting point of any discussion, at any level, is the choice of language and vocabulary, and considerable effort will be needed to resist the implicit adoption of the industry’s conceptual framework. For proof that mathematicians are years behind colleagues in other areas in developing the conceptual vocabulary — the “critical AI literacy” — needed to understand the full implications of AI for the intellectual project of which mathematics is one part among many, one need only look at the website of Critical AI, the “new interdisciplinary initiative” based at Rutgers that publishes the journal of the same name. The journal’s editorial board is drawn from a broad range of academic disciplines. (Pure) mathematics is conspicuously absent. This should change.
The journal Critical AI is taking the lead in organizing a Global Humanities Institute on Design Justice AI next summer in Pretoria, South Africa. Here is the list of “guiding questions” for the institute:
What would be lost from human creativity and diversity if writers or visual artists come to rely on predictive models trained on selective datasets that exclude the majority of the world’s many cultures and languages?
What frameworks or evaluation practices might help to concretize what is meant by “intelligence,” “understanding,” or “creativity”–for machines as well as humans? How might such humanistic interventions help diverse citizens to participate in the design and implementation of generative technologies and the benchmarks that evaluate them?
What are the strengths and weaknesses of current statistical models–which generate outputs probabilistically (by privileging dominant patterns) and selectively (based on scraped data)–in modeling the lived knowledge, embodied cognition, and metareflection that informs human communication, art, and cultural production?
If evidence suggests that “generative AI” is harmful–and/or counter to the professed object of enhancing human lifeworlds–what alternatives might be forged through community participation in research that rearticulates goals, and reframes design from the bottom up? What kinds of teaching, research, community practices, and policies might sustain these humanist-inflected and justice-oriented design processes?
The website of the University of Connecticut Humanities Institute, one of the organizers of the Pretoria meeting, adds the following remark.
These research questions go to the heart of what inclusive collaborations can contribute to the study of resource-intensive technologies that aim to monetize and “disrupt” human communication and creativity.
These questions go to the heart of mathematics as well. “[L]ived knowledge, embodied cognition, and metareflection… informs” human mathematics as an integral component of “human communication, art, and cultural production,” whatever the industry may claim to the contrary. At the very least, the AMS committee should consult with the organizers before and after the Institute. I would hope that mathematicians will be invited to such meetings in the future.
…preferably after you have shared your thoughts with the AMS.
[1] Some challenges are discussed by Dylan Matthews in “The $1 billion gamble to ensure AI doesn’t destroy humanity,” Vox, September 23, 2023:
Though academic institutions lack the firepower to compete on frontier AI, federally funded national laboratories with powerful supercomputers like Lawrence Berkeley, Lawrence Livermore, Argonne, and Oak Ridge have been doing extensive AI development. But that research doesn’t appear, at first blush, to have come with the same publicly stated focus on the safety and alignment questions that occupy Anthropic. Furthermore, federal funding makes it hard to compete with salaries offered by private sector firms. A recent job listing for a software engineer at Anthropic with a bachelor’s plus two to three years’ experience lists a salary range of $300,000 to $450,000 — plus stock in a fast-growing company worth billions. The range at Lawrence Berkeley for a machine learning scientist with a PhD plus two or more years of experience has an expected salary range of $120,000 to $144,000.
In a world where talent is as scarce and coveted as it is in AI right now, it’s hard for the government and government-funded entities to compete. And it makes starting a venture capital-funded company to do advanced safety research seem reasonable, compared to trying to set up a government agency to do the same. There’s more money and there’s better pay; you’ll likely get more high-quality staff.