On purely selfish grounds, Hinton’s bombshell, if that’s what it was, could hardly have come at a better time for me. For months I have been struggling to find the words to express my concerns about the vision of democratic decision-making implicit in warnings or predictions about mathematics and its automated future. In the space of a week, all the words appeared in the press, in comments on Hinton’s resignation from Google; all I need to do is to assemble them in one place and put them in order. Attention has mostly focused on Hinton’s warnings that we must act quickly to avert the dangers of the technology he helped to unleash on the world, but, for those ready to read between the lines, his media moment also highlights the challenge in identifying the “we” that can be trusted to coordinate what may be our species’ last stand.
Hinton’s explanation of his abrupt move also served retroactively to justify my reluctance to welcome representatives of the tech industry to last October’s Fields Medal Symposium. As an organizer of the latter event, I shared my doubts with my fellow organizers:
If [Researcher X] attends, [X] will soak up all the press attention, and the symposium will be memorialized as a [Corporate lab X] promotional event. This is not mainly because [Corporate lab X] has a bigger PR operation than all of mathematics combined, but because the press is attracted to shiny objects and nothing in science is shinier than AI.
and
The problem with [Researcher X] is that [X] is responsible to [Predatory Corporation X], just like the problem with [Researcher Y] is that [Y] works for [Predatory Corporation Y], and the problem with [Researcher Z] is that [Z] directs [Predatory Corporation Z's research operation]. That's not disqualifying, but it raises questions about their objectivity with regard to industrial ambitions for AI.1
Hinton has now told the world that my doubts were warranted:
Leaving Google will let [Hinton] speak his mind, without the self-censorship a Google executive must engage in. “I want to talk about AI safety issues without having to worry about how it interacts with Google’s business,” he says. “As long as I’m paid by Google, I can’t do that.”2 (My emphasis.)
Meredith Whittaker, President of Signal, is less than impressed by Hinton’s recent conversion.
It’s disappointing to see this autumn-years redemption tour from someone who didn’t really show up when people like Timnit [Gebru] and Meg [Mitchell] and others were taking real risks at a much earlier stage of their careers to try and stop some of the most dangerous impulses of the corporations that control the technologies we’re calling artificial intelligence.3
Whittaker herself “was pushed out of Google in 2019 in part for organizing employees against the company’s deal with the Pentagon to build machine vision technology for military drones,” and argues that the media attention to Hinton’s resignation “[is] distracting us from the fact that these are technologies controlled by a handful of corporations who will ultimately make the decisions about what technologies are made, what they do, and who they serve."4
The cases of Gebru, Mitchell, and Whittaker helped feed my doubts about the reliability of this “handful of corporations.” Those doubts received additional confirmation a few months ago, when a mathematician showed me a non-disclosure agreement he was required to sign in order to participate in a deep learning experiment similar to the ones reported here; the agreement appears to bind him for life, and it’s not clear he even had the right to show it to me. So Hinton’s confession of self-censorship didn’t teach me anything I didn’t already know; but it’s nice to see it in the words of an insider.
The industry has naturally reacted to Hinton’s decision:
“The industry isn’t stupid here, and you are already seeing efforts to self-regulate,” said Eric Schmidt, the former Google chairman who served as the inaugural chairman of the advisory Defense Innovation Board from 2016 to 2020.
“So there’s a series of informal conversations now taking place in the industry — all informal — about what would the rules of A.I. safety look like,” said Mr. Schmidt, who has written, with former secretary of state Henry Kissinger, a series of articles and books about the potential of artificial intelligence to upend geopolitics. (“How Could AI Change War?” NY Times, May 5, 2023)
Schmidt’s reflex is not terribly encouraging for those of us who agree with (former editor-in-chief of Wired) Nicholas Thompson that “Henry Kissinger is one of the worst people to ever be a force for good.”5 The next section recalls some of the industry's previous "efforts to self-regulate" and hints at where the industry fits, or doesn't fit, in the "we" that must take action before it's too late.
An industry and its stakeholders
C'était devenu une habitude que les gens en vue dictent ce qu'il convenait de penser et de faire. [It had become a habit for people in the public eye to dictate what it was proper to think and to do.] (Annie Ernaux, Les années)
A recurring theme of this newsletter is that the involvement of the tech industry in defining the goals of (or disrupting) mathematical research will introduce new decision-makers who will reduce the autonomy of practitioners. But why not give them the benefit of the doubt? Perhaps the end result will still be greater democracy! After all, the attention of the industry to Ethical AI has been widely noted and widely publicized. Perhaps society’s interest really is best represented by Amazon, Facebook, Google/DeepMind, Microsoft, and IBM! In September 2016, at a time when governments were struggling6 to define priorities for the technology, these five stakeholders actually chose to call their new coalition the “Partnership on AI to Benefit People and Society” (PAI). If they can be trusted to benefit society, can’t we trust them to save human civilization — and incidentally, to guide mathematics into a beneficial automated future?
I learned about PAI less than one month later, when its formation was announced at a roundtable discussion in New York, with the promise that it would devote itself to defining “best practices.” And already in 2016 I anticipated observations like this one, made a few years later:
Big tech money and direction proved incompatible with an honest exploration of ethics, at least judging from my experience with the [PAI]… the Partnership’s policy recommendations aligned consistently with the corporate agenda…
or this one, made after PAI expanded to include representatives of civil society:
PAI’s association with ACLU, MIT and other academic / non-profit institutions practically ends up serving a legitimating function. Neither ACLU nor MIT nor any non-profit has any power in PAI. If academic and nonprofit organizations want to make a difference, the only viable strategy is to quit PAI, make a public statement, and form a counter alliance.7
Proof of stake
The 2016 roundtable meeting may have been the first time I heard the expression “best practices,” which seems to be on everyone’s lips, like “inflection point” (which doesn’t mean what people who aren’t mathematicians seem to think it means); or “stakeholders.” The obvious question: “best” according to whom? Pondering that question began that day and ultimately evolved into this very newsletter. So naturally, when ChatGPT came along, I challenged my new artificial comrade to write a sonnet with “best practices” and “stakeholders” and a few other expressions that — in contrast with “inflection point” — became clichés before they ever had a chance to acquire a definite meaning. Here, slightly modified, is the result:
In Praise of Stakeholders
A web of voices, interconnected and fine,
Plans, best practices, actionable and sound,
Red teaming to ensure success is in line,
Intersectional views, solutions abound.

Transitioned to a new state, we must thrive,
The ecosystem needs care and attention,
Retinal scans to keep our systems alive,
Stakeholders' insights guide our intention.

The stakeholders' role, of utmost import,
As we strive for ethical progression,
Building a future with justice in thought,
Their voices the key to our accession.

Stakeholders, a network of utmost need,
For a future, sustainable and free.
(ChatGPT, April 18, 2023)
Let’s have somebody decide
Now the next step ought to be, before you have widespread release, let’s have somebody decide: Do those risks outweigh the benefits? Or how are we even going to decide that? And at the moment, the power to release something is entirely with the companies. It has to change. (Gary Marcus, NY Times, May 2, 2023, my emphasis)
The language is the first thing that has to change. When we allow the conversation on democratic oversight to be diverted by adopting the vocabulary of “best practices” and “stakeholders,” we practically guarantee that conflicts will be settled by means of algorithms on their designers’ terms.
With the help of Google’s n-gram viewer, you can track the progress of “best practices” and the rest as they fanned out into the culture’s most obscure corners. “Inflection point,” for example, started to take off around 1920, plateaued in 1980, and briefly faded before a sudden spurt in 2000 — followed by its own inflection points in 2010 and 2015; its recent success is likely an effect of its leaping off the pages of calculus textbooks and into managerial bullet points.
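For readers who would rather reproduce the exercise programmatically than click around the viewer's web interface, here is a minimal sketch. It assumes the unofficial JSON endpoint at books.google.com/ngrams/json, along with parameter names ("content", "year_start", "year_end", "corpus", "smoothing") and a response shape inferred from the viewer itself; none of this is officially documented, so the details may change without notice.

```python
# Minimal sketch: fetch relative frequencies of a few phrases from the
# Google Books Ngram Viewer. The JSON endpoint, its parameters, and the
# shape of its response are assumptions based on the viewer's web
# interface, not a documented API.
import requests

YEAR_START, YEAR_END = 1900, 2019
PHRASES = ["best practices", "stakeholders", "inflection point"]


def ngram_timeseries(phrase: str) -> list[float]:
    """Return yearly relative frequencies for `phrase` (assumed response:
    a JSON list of objects with "ngram" and "timeseries" keys)."""
    resp = requests.get(
        "https://books.google.com/ngrams/json",
        params={
            "content": phrase,
            "year_start": YEAR_START,
            "year_end": YEAR_END,
            "corpus": "en-2019",  # assumed identifier for the English corpus
            "smoothing": 3,
        },
        timeout=30,
    )
    resp.raise_for_status()
    data = resp.json()
    return data[0]["timeseries"] if data else []


if __name__ == "__main__":
    for phrase in PHRASES:
        series = ngram_timeseries(phrase)
        if series:
            # A crude summary: the year in which the phrase's frequency peaked.
            peak_year = YEAR_START + series.index(max(series))
            print(f"{phrase!r} peaks around {peak_year}")
```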
“Best practices” and “stakeholders” were both born in the 1980s — as was “neoliberal,” which I don’t think is a coincidence — but really shot into prominence in the 21st century. If you ask why management jargon has been burrowing into our consciousness, my guess is that it occupies the vacuum left by the retreat of the language and practice of democracy.
The result, not surprisingly, is that the experts the press chooses to consult are explicitly questioning the premises of democratic decision-making in its present form:
[Yoshua] Bengio agrees with Hinton that these issues need to be addressed at a societal level as soon as possible. But he says the development of AI is accelerating faster than societies can keep up. The capabilities of this tech leap forward every few months; legislation, regulation, and international treaties take years.
This makes Bengio wonder whether the way our societies are currently organized—at both national and global levels—is up to the challenge. “I believe that we should be open to the possibility of fairly different models for the social organization of our planet,” he says.8
Is this a call for planetary dictatorship or for a new form of democracy? During the first weeks of May the press has been reminding us, at least three times a day, that Bengio shared the 2018 Turing Award with Hinton and Yann LeCun for the discoveries that are now keeping (some of) their creators up at night. Of the trio, only Bengio signed (and co-authored, apparently) the Future of Life Institute’s March letter calling for a “pause” of “at least six months” in training more powerful AI systems. The letter notably asks
Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?
Here “we” are again. The passage concludes
Such decisions must not be delegated to unelected tech leaders.
Let’s have somebody decide, indeed. But who, if not unelected tech leaders? The letter’s answer is anticlimactic:
AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems.
Some of the most cogent reactions to the “pause” letter are on Substack. Science writer Margaret Wertheim opines that
the real intent of this letter [is] to convince us that we need saving from potentially evil-AI and that the only people who can ensure a future with good-AI are the cadre of professionals who are currently behind the technology …
Like the Stochastic Parrot authors,9 I refuse to collude in the assumption that we the people “must adapt to a seemingly predetermined technological future” and learn to “cope” with whatever inventors of these technologies throw at us.
So Wertheim maintains the old-fashioned belief that “we” means “we the people,” rather than “AI developers" who “work with policymakers.”
Hinton, for his part, appears to be open to “fairly different models,” and, like Wertheim, he identifies “we” as “all people.” But when the PAI promises to benefit “people and society,” it turns out that the legal obligation is to put shareholders first.
Carissa Véliz, a philosopher at the Institute for Ethics in AI at the University of Oxford, said academic input into regulation was essential because, unlike other technologies such as gene editing, AI’s applications were being decided entirely by corporations.
“We’re seeing technology companies roll out incredibly powerful products in whatever way they choose, leading to potentially huge consequences, but they don’t have to pick up the bill. It’s outrageous,” she said.
“Researchers who are independent experts can speak truth to power in a way that those with commercial interests can’t,” added Dr Véliz, whose institute was founded thanks to a £150 million donation in 2019 by the billionaire Blackstone financier Stephen Schwarzman.10
I hope the colleagues who invited researchers on the tech industry’s payroll to the IPAM conference in February, and those who include employees of Intel and IBM, as well as Meta’s Yann LeCun (of whom more below), in next month’s National Academy of Sciences workshop on AI to Assist Mathematical Reasoning, will make allowances for “the self-censorship a [Predatory Corporation XYZ] executive must engage in” and their legal obligation “to maximize utility for [XYZ] shareholders.”
In the wake of the publicity surrounding the “pause” letter, Le Monde simultaneously published interviews with Bengio and LeCun on April 28. Readers who remember LeCun’s reaction to his employer’s LLM Galactica fiasco11 won’t be surprised that, far from recommending a “pause” in response to worries about the technology, he calls for “accelerating” its development in order to make it more reliable.12 But it’s disconcerting to see him conflate the pausers with the historic enemies of reason:
The mere idea of wanting to slow research on AI is tantamount, for me, to obscurantism.
The world’s 10th largest corporation may have his back, but LeCun sees himself as embattled, like the Lean community in our last episode: a Gutenberg or Galileo resisting the entrenched and reactionary power structure. While he has the power relations exactly backwards, his self-image as prophet of “a renaissance of humanity, a new age of Enlightenment” is symptomatic of the genuinely and dangerously obscurantist and self-serving ethos of the tech industry.
The little that LeCun has to say when asked about decision-making (more precisely, about gouvernance) suggests that he finds current models (in which “AI developers … work with policymakers”) adequate.
As soon as AI is used to help in medical diagnoses or for self-driving cars, regulations are needed to guarantee that these products are not dangerous for the public… the existing model of international standardization bodies can help … Governments can promote the growth of open systems and at the same time make sure that control bodies guarantee the reliability of the products derived from these systems.
Bengio, unlike LeCun and Hinton, “was never tempted by the siren songs of the private sector.” We have already seen that he is “open to the possibility of fairly different models for the social organization of our planet.” We leave the last word to the Stochastic Parrot authors,13 who responded to the "pause" letter with a quite different model of social organization.
Contrary to the letter’s narrative that we must "adapt" to a seemingly pre-determined technological future and cope "with the dramatic economic and political disruptions (especially to democracy) that AI will cause," we do not agree that our role is to adjust to the priorities of a few privileged individuals and what they decide to build and proliferate. We should be building machines that work for us, instead of "adapting" society to be machine readable and writable. The current race towards ever larger "AI experiments" is not a preordained path where our only choice is how fast to run, but rather a set of decisions driven by the profit motive. The actions and choices of corporations must be shaped by regulation which protects the rights and interests of people.
It is indeed time to act: but the focus of our concern should not be imaginary "powerful digital minds." Instead, we should focus on the very real and very present exploitative practices of the companies claiming to build them, who are rapidly centralizing power and increasing social inequities.
Part II will continue this theme, with attention to democracy in mathematics.
1. X, Y, and Z refer to actual individuals or corporations whose names may be familiar to some of you.
2. W. Douglas Heaven, “Geoffrey Hinton tells us why he’s now scared of the tech he helped build,” MIT Technology Review, May 2, 2023.
3. Article by W. Chan at Fast Company, May 5, 2023.
4. Ibid. In “Google cancels AI ethics board in response to outcry” (Vox, April 4, 2019), Kelsey Piper writes
AI ethics boards like Google’s, which are in vogue in Silicon Valley, largely appear not to be equipped to solve, or even make progress on, hard questions about ethical AI progress.
5. I should say right away that Google is a tremendous resource. Thanks to Google, I found the Whittaker and Thompson quotations in less than 10 seconds each. It’s such a valuable resource, in fact, that I believe it should be transformed into a public utility and strictly regulated in the interest of humanity as a whole, rather than remaining an especially Predatory Corporation that serves only the short-term interests of its private investors.
6. and still are:
The Blueprint for an AI Bill of Rights is not intended to, and does not, create any legal right, benefit, or defense, substantive or procedural, enforceable at law or in equity by any party against the United States, its departments, agencies, or entities, its officers, employees, or agents, or any other person, nor does it constitute a waiver of sovereign immunity.
(from the Blueprint for an AI Bill of Rights, White House Office of Science and Technology Policy.)
7. Rodrigo Ochigame, “How Big Tech Manipulates Academia to Avoid Regulation,” The Intercept, December 20, 2019.
8. Heaven, op. cit.
9. Wertheim is referring to this response to the “pause” letter, by Gebru and Mitchell, mentioned above, along with Emily M. Bender and Angelina McMillan-Major. The response is quoted at the end of this post.
10. J. Grove, “Academic minds ‘vital’ as fears grow over ‘out of control’ AI,” Times Higher Education, May 11, 2023.
13. See note 9.