There’s no reason to believe the internet when it tells you that Lenin said (famously!) that
There are decades where nothing happens; and there are weeks where decades happen.
If you don’t believe me, try translating that into Russian and follow the links until you arrive at this page.
But it sure feels like decades have been packed into the last week, especially at Columbia, where I am writing these lines. So I will not be writing any new material for the moment. Instead I invite you to watch the slides from my talk on January 9 at the Joint Mathematics Meetings in Seattle, in the first of two Special Sessions on AI and the Social Context of our Work.
My talk concluded with three recommendations for the mathematical community’s future engagement with AI. For the convenience of readers who don’t have 4 minutes and 45 seconds to spare to watch a slide show, here are the recommendations:
Transparency about how decisions are made in connection with AI, including those as apparently innocent as declaring AI to be the “official theme.” More than a year ago the AMS AI advisory committee invited members to “share your thoughts.” Has this influenced the creation of a process to ensure that any decisions regarding AI accurately reflect the long-term interests of the community and its members?
Colleagues should be actively encouraged to engage closely with the vast critical literature about AI and the intentions of its creators. This means making such engagement an explicit criterion when evaluating articles submitted for publication. It should also be clearly represented in any official structure created to “decide.”
Introduce a process requiring that proposals involving AI be accompanied by realistic estimates of their environmental impact in terms of energy and water consumption and CO2 emissions. Purely academic projects are unlikely to be major polluters, but mathematicians who collaborate with the tech industry should expect to be able to produce estimates of the environmental impact of their collaboration.
Readers may well be perplexed by the third suggestion; as far as I know, at the JMM the environmental impact of AI was only addressed briefly in my presentation and, more extensively, in the (remote) presentation by Laura Marks at the same panel. But a recent report in the Financial Times points out that the rapid growth of AI infrastructure poses a danger to public health as well as to climate and water resources:
Big Tech’s growing use of data centres has created related public health costs valued at more than $5.4bn over the past five years, in findings that highlight the growing impact of building artificial intelligence infrastructure. Air pollution derived from the huge amounts of energy needed to run data centres has been linked to treating cancers, asthma and other related issues, according to research from UC Riverside and Caltech. … The issue is set to be exacerbated by the race to develop generative AI, which requires huge computing resources to train and power fast-developing large language models.1
From “Pollution from Big Tech’s data centre boom costs US public health $5.4bn,” Financial Times, February 23, 2025.