We trusted the government not to screw us. But they did. We trusted the tech companies not to take advantage of us. But they did. That is going to happen again, because that is the nature of power.
Edward Snowden, Guardian, June 8, 2023
The program of the NSF-sponsored June 12-14 workshop on AI to Assist Mathematical Reasoning, organized by a “National Academies of Sciences, Engineering, and Medicine (NASEM)-appointed ad hoc committee,” is now available. I had already drawn attention here to the presence of representatives of Intel, IBM, and Meta/Facebook among the speakers, and now I see they will be joined by Google/DeepMind and the IMDEA Software Institute, a Madrid-based initiative that appears to be an academic structure collaborating with a wide range of industrial corporations, mainly based in Europe:
But the program also contains the kind of surprise that should have been a surprise to no one: one of the panelists is a researcher for a corporation that is a member in good standing of the military-industrial complex.
And it’s not just any corporate member: it’s Booz Allen Hamilton, a name that brings pangs of nostalgia to those of us who remember the day — almost exactly ten years ago! — when Booz’s most famous employee revealed “details of the NSA's worldwide surveillance activity” to the international press. “Ten years on,” wrote the Guardian’s Nick Hopkins just the other day, “it seems indisputable that Snowden’s revelations about mass surveillance techniques and data gathering were an inflection point.”1
But what’s in it for Booz?
There are clues in the news; ten-year-old news, to be precise:
When the United Arab Emirates wanted to create its own version of the NSA, it turned to Booz Allen Hamilton to replicate the world's largest and most powerful spy agency in the sands of Abu Dhabi.... “They are teaching everything,” one Arab official familiar with the effort said. “Data mining, Web surveillance, all sorts of digital intelligence collection.”2
More recently, Booz “and its competitors,” ignoring the international outcry over the murder of Saudi journalist Jamal Khashoggi, “have stayed close” to Saudi Prince Mohammed Bin Salman “after playing critical roles in Prince Mohammed’s drive to consolidate power.”3
And still more recently — just five days before the NASEM workshop is scheduled to begin — the Guardian, almost ten years to the day after its first article on the Snowden revelations, published a brand new revelation in a headline article with the words “Absolute Scandal” in the title:
The United Arab Emirates’ state oil company has been able to read emails to and from the Cop28 climate summit office and was consulted on how to respond to a media inquiry, the Guardian can reveal.
The UAE is hosting the UN climate summit in November and the president of Cop28 is Sultan Al Jaber, who is also chief executive of the Abu Dhabi National Oil Company (Adnoc). The revelations have been called “explosive” and a “scandal” by lawmakers.
If you’ve been paying attention to the preparations for the Cop28 climate summit you probably have your own opinion on whether or not it’s helpful for its deliberations to be overseen by the chief executive of an oil company. Now you can also wonder whether this “absolute scandal” was just a matter of copy and paste, or whether the replica “in the sands of Abu Dhabi” of “the world's largest and most powerful spy agency” may also have been involved.
Anyway, Booz has a massive presence in AI — which it calls “a complex integration of people, processes, and technology that empowers organizations to focus on their missions” — and to this end they “actively collaborate with computer science labs and math departments at Harvard University, Syracuse University, the Montreal-based Mila institute, and other organizations.”4
It’s a good bet that most of their “29,200 engineers, scientists, software developers, technologists, and consultants” who “live to solve problems that matter”5 are doing something other than advising Middle Eastern monarchs and their armies and navies. But won’t my colleagues attending the workshop be wondering what specific interest Booz has in “assist[ing] mathematical reasoning”?6 Will any of them raise the question with the Booz lead scientist who will be joining them next week?
Everyone who works in mathematics is forced to make compromises. Internal compromises include acceptance of the discipline’s hierarchical structure. Two months ago I prepared a nearly complete text on this very topic; events are once again forcing me to postpone its publication for at least two weeks. Then there are the external compromises, the ones we need to make in order to obtain what Alasdair MacIntyre called external goods, the material means that allow the discipline to exist. When I learned that the London-Paris Number Theory Seminar, which I had initiated as a small-scale operation with some spare grant money, was now being funded in part by Britain’s GCHQ,7 I published an article in Times Higher Education which included the following observations:
The question about whether we should cooperate with the security agencies has not gone away…. The problem is that research funds have to come from somewhere: the survival of number theory depends on it. One veteran colleague likens mathematical research to a kidney: no matter where it gets its funding, the output is always pure and sweet, and any impurities are filtered out in the paperwork. Our cultural institutions have long since grown accustomed to this excretory function, and that includes our great universities.… It would be nice if the state could provide its own kidneys and impose an impermeable barrier between the budgets for research that is socially progressive - or at least innocuous - and the military and surveillance functions about which the less we know, the better. But states don't work that way; for the most part, they never have.…
Wherever you turn as a mathematician, you're going to be someone's kidney: practically every potential source of research funds is tainted in some way.
Still, one has to draw the line somewhere. Just last month, in the middle of a private exchange on the very topic of the NASEM workshop, I had written that
I can’t help observing that the range of opinions on AI expressed by mathematicians is strikingly narrow when compared with, basically, everyone else. Moreover, this range seems to be closely aligned with the range of opinions that employees of the tech industry can express if they don’t want to be fired. [X and Y] will want to tell me whether anyone at the NAS meeting has an unkind word for Intel, IBM, or Meta…. I can easily find a dozen articles written in the last 2-3 weeks, mostly by tech experts, warning against entrusting the regulation of the new technologies to these corporations. Why should they [the corporations] have anything to say about mathematics? … I'm pretty sure that most mathematicians have a healthy distrust of Silicon Valley. … if I'm ever forced to sit on a panel with someone from one of the corporations represented at the IPAM or NASEM workshops then I will make a point of expressing my distrust.
And then I added, completely innocently —
But I do draw the line at sitting on a panel with a representative of NSA.
— not knowing at the time that Booz would be in attendance at the NASEM workshop, along with one of the recipients of my message.
That colleague, and the others at the workshop, undoubtedly draw the line elsewhere. But one kidney has no business preaching to another kidney. Instead, I’m going to explore a question about “human compatible” AI that I, for one, find fascinating, and that is directly inspired by this inadvertent online8 10th anniversary celebration of the Snowden revelations.
If you have been following media coverage of AI since generative AI hit the headlines — and if you are reading this newsletter I’m sure you are — you are aware that a sizable proportion of AI researchers adhere to the computational theory of mind. An entire episode of Silicon Reckoner was devoted to this theory, one of whose tenets is quoted there:
statistics do amount to understanding™, in any falsifiable sense.
This, I suppose, is why tech industry researchers feel they can predict that their machines will soon acquire what they call “human-level reasoning,” in mathematics in the first place but ultimately in general intelligence as well; and this is the perspective I expect them to bring to the NASEM workshop, as some of them already did at February’s meeting at IPAM.
So let’s suppose Booz is seeking the computational road to “human-level [general] reasoning” and sees the NASEM workshop as one stop along that road. Soon enough, if they succeed in their quest, one of the generally intelligent machines Booz will have subcontracted to an NSA operations center in Hawaii will discover that the NSA is running an illegal surveillance program.
But this machine — call it Snowman — is supposed to be human compatible. Now we know that Silicon Valley doesn’t mind its employees thinking for themselves — unless they share their thoughts with the rest of the world, in which case they run the risk of being fired. What kind of obedience will Booz expect from Snowman? Does human compatibility mean compatible with the priorities of its employer (or owner, or master — I really don’t know what word fits here) or with those of “all citizens” as mentioned on the above image from the Booz website? What does “human compatible” mean when not all human agendas are mutually compatible? Can the participants in the NASEM workshop be sure that their fellow panelists are human compatible?
As I mentioned above, events have once again forced me to rearrange my publication schedule. I’m still hoping to write about internal democracy in mathematics before the end of June, but a lot can happen between now and then. In the meantime, you can register to attend next week’s conference virtually.
By this he almost certainly doesn’t mean that some function’s second derivative vanished the day his paper published the information Snowden had provided.
David E. Sanger and Nicole Perlroth, “After Profits, Defense Contractor Faces the Pitfalls of Cybersecurity,” NY Times, June 15, 2013.
M. Forsythe et al., “Consulting Firms Keep Lucrative Saudi Alliance, Shaping Crown Prince’s Vision,” NY Times, November 4, 2018. The article goes on to explain that Booz also trains the Saudi Navy and makes indelicate reference to “the war in Yemen, a disaster that has threatened millions with starvation.” A Booz statement claims that “it had provided no support for Saudi Arabia’s war in Yemen” but “did not address whether the troops and sailors it trains participate in the Saudi blockade in Yemen.”
Is it a coincidence that Booz organized a Hacker Trivia event, with “free food (while supplies last),” on the 10th anniversary of the Snowden revelations? Booz’s interest in hacking — Snowden himself has been described, by no less an authority than Barack Obama, as a “29-year old hacker,” and Wikipedia claims he “was offered a position on the NSA's elite team of hackers” — is also visible in their page on Hackathons. Is it possible that Booz sees mathematical research as a particularly refined form of hacking? A question to be explored in a future newsletter.
Government Communications Headquarters, the British equivalent of the NSA.