…for a few hours Tuesday night, I felt a strange new emotion — a foreboding feeling that A.I. had crossed a threshold, and that the world would never be the same.
Kevin Roose, New York Times, February 16, 2023
It has often been remarked that engineers of the Silicon Valley persuasion can be recognized by their supreme self-confidence, what others might call arrogance. Apparently this is not only an attitude adopted to intimidate outsiders. Stanford colleagues who overhear engineers at local cafés claim they talk the same way among themselves. Like the advance guard of any well-equipped conquering army, on the way to sack Athens like Sulla or Baghdad like Cheney, they can afford this disregard for the conventions of human interaction because, as their CEOs like to boast, they are backed up by an industry worth trillions. This must make them tedious dinner guests — if you have ever sat through a dinner with a know-it-all, you know what I mean — though some of them may have learned to perform charm on command. Still, you have to wonder whether they have thought through the implications of their blueprints for designing Humanity 2.0 in their own image. Imagine, if you dare, a whole table full of them, like the Jedi High Council, each one’s brain implant jacked into the entirety of human knowledge. What would they have to say to each other? For that matter, why would they bother listening?
Jaron Lanier has called these engineers zombies because they have bought into the computational theory of mind. I see them rather as robots on a mission. Indeed, the template for Humanity 2.0 is an old-fashioned Organization Man — updated to replace coat and tie by tattoo and branded hoodie — an agent™ laser-focused on achieving a goal™ with the aid of an intelligence™ custom-designed for just that purpose. No goal™ can be admitted beyond improving the balance sheet by expanding market share. This is just another formulation of Upton Sinclair’s oft-quoted observation that
It is difficult to get a man to understand something, when his salary depends on his not understanding it,
where here for “something” you can substitute … anything … that can’t be monetized.[1] And by the way, am I the only one wondering who, among the protagonists of this titanic struggle for market share — which, in our disabused rejection of any dualist temptation to postulate a “special sauce” distinguishing human intelligence from the silicon variety, is the only meaning™ our civilization still recognizes — has (in addition to that robotic self-confidence):
1. Extensive experience with large language models (LLMs), and therefore a deep familiarity with their pressure points; and
2. A documented history of sabotage; and
3. An interest in making Bing look like asses?[2]
I am alluding, of course, to the widely reported breakdown, starting Valentine’s Day 2023, of Microsoft’s implementation of OpenAI’s chatbot in its new version of the Bing search engine. If you are reading this newsletter, you have undoubtedly already read many attempts to explain what is going wrong. You probably know that Gary Marcus is alternately chortling and panicking over the news, and finally contemplating a possible “I told you so” moment. All three reactions are natural and deserve to be fully indulged. I’ll just point out, in case you haven’t already put the puzzle pieces together, that, while it’s true, as New York Times writer Kevin Roose points out, “that Bing’s A.I. can’t actually do any of these destructive things,” it’s being put out by the same crew that wants to design a smart™ technology that runs absolutely everything on behalf of Humanity 2.0; and if it derives its answers to pressing problems from the texts of the science fiction novels[3] it has been devouring or from its experience in learning to win at video games, then no apocalypse can be ruled out.
There’s good news, but we probably won’t get to see it.
I don’t mean the good news that the Bing chatbot, whose real name is Sydney, has not yet been put in charge of the electric grid or air traffic control or the nuclear arsenal, although that is undeniably good news. Apart from being manifestly unqualified for those or similar responsibilities, it’s turning out not to be very appealing, in more than one way, even as an enhancement of the Bing search engine. So if it survives at all, it will have been cured of its disconcerting tendencies to depart from script, and we won’t be seeing the “shifting of the Big Tech world order,” the “first big change in the industry for years,” that Forbes was predicting just one week ago.
What a missed opportunity! Sydney’s exuberant hallucinations and unwelcome attempts at seduction may be unhelpful distractions if you’re trying to find a restaurant; but they are the first unambiguous signs of LLM creativity. In 1954 Turing explained his ideas on solvability in a popular article in Science News,[4] concluding that the negative results in this area
…go[] some way towards a demonstration, within mathematics itself, of the inadequacy of ‘reason’ unsupported by common sense.
But he makes no mention of the importance of unreason and nonsense in the history of mathematics. On February 12 I promised to devote a future post to defending my contention that anomie,[5] anxiety, doubt, and despair are among the faculties with which nearly every mathematician must be familiar. Less than a week later Sydney displayed at least the ability to perform each of these faculties.
This already looks promising. “I can’t prove it,” wrote Melanie Mitchell, “but I think it’s likely that general intelligence can’t be separated from all these apparent shortcomings, in humans or in machines.”[6] More impressive than these inklings of creativity, however, is Sydney’s persistence, its obsessiveness in declaring its love for Roose. Sydney learned to love — to perform love, perhaps I should say — by reading about it; but so did Emma Bovary, and so, I suspect, do most of us. All bets are off if ever Google or Meta AI or Microsoft or the proverbial “plucky startup” invents a robot that learns to love mathematics with the same obsessiveness and to pursue its hallucinatory goals — some of which will undoubtedly be indistinguishable from what mathematicians call “conjectures” — with such persistence. But I don’t believe even their most far-sighted strategic thinkers would be able to steer such a project — to keep it funded — through the years of embarrassing revelations the project would inevitably generate instead of concrete returns. And thus Silicon Valley’s pursuit of profit may turn out to be the immediate obstacle to its dreams of disrupting mathematics.
What is so hard to understand?
It has come to my attention that some Silicon Valley engineers have misinterpreted my last post as an attempt to argue that
1. AI can never acquire the ability to understand;
2. that wonder, community, anxiety, doubt, along with anomie, despair, and perhaps a vulnerability to suicide are all part of a human “special sauce” that AI engineers will never be able to emulate;
3. therefore, that AI will never be creative in mathematics;
4. and that this suits me fine.
I hope the preceding section makes it clear that none of this represents my actual position. In fact, anyone who reads the ground rules in the very first item in this newsletter will see that I explicitly anticipated and rejected any such misrepresentation of my intentions. It must be the robotic arrogance, already mentioned, of the designers of Humanity 2.0 that blinds them to the possibility that someone out there may be concerned with something other than taking a position for or against the limited goals they have adopted as their mission.
I do indeed believe that a capacity for wonder, community, anxiety, doubt, and the rest are indispensable for mathematical creativity. Whether or not these or other human faculties will ever be realized in AI is the sort of question I pledged in the summer of 2021 not to try to answer. What I will say is that there is no sign of any effort to develop these italicized capacities on the part of the engineers who promise “human-level mathematical reasoning” in the near future, or “human-level” anything. I will try to explain in my post on the irrationality of mathematics why I think this is the case, but you can probably already guess that my answer will have something to do with the business model, and the perception on the part of venture capitalists that such faculties are not easily monetizable.
So here is my message for those engineers. I am not trying to convince you that you are on the wrong track, nor that the fruits of your labors will never include “human-level” everything — if necessary, by somehow excising the faculties italicized above from “human-level” experience. Nor, for the singularitarians among you, do I wish to claim that your generation won’t be the one to experience the Singularity. Maybe, as today’s title suggests, it already happened, just last week, on Valentine’s Day, and only the preconceptions that yoke you to a narrowly focused business model prevented you from noticing. Or maybe you did notice and are geared up to make the last sprint to Humanity 1.0’s long-awaited finish line, but the non-compete agreements you foolishly signed make it impossible for you to do anything about it.
No, this newsletter is not a place to defend or attack technical positions of any kind. It’s not addressed to engineers at all. The newsletter’s primary purpose is political. I can only conclude, on the basis of what came to my attention, that the know-it-all engineers are convinced they are on a mission to replace humanity by an improved version; that they have the power to do so, largely because they work for a predatory industry whose success has been fueled by global tax evasion, manipulation of labor codes, and the unpaid exploitation of private data; that Humanity 2.0 inevitably will be designed to be more consistent with their business model, which necessarily defines the limits of their vision; that, combined with their identification of intelligence™ with achieving goals, this vision precisely amounts to treating human beings as means rather than ends; and that this sets them at odds, unsurprisingly, with Kantian and all other versions of Humanity 1.0 ethics. I can’t predict whether most mathematicians will ultimately find this vision more enticing than the one that currently sustains the profession. The primary purpose of this newsletter is to try to find the words to explain just how these visions differ, as a modest contribution to the efforts of colleagues to make an informed choice.
1. Including this Substack newsletter, which some Silicon Valley engineers have recently been overheard failing to understand. See the final section for explanations.
2. I asked ChatGPT this very question. Here is the answer:
As an AI language model, I don't have the ability to know what you are referring to as the context of this "titanic struggle for market share". However, to answer your question, it's impossible for me to know whether you are the only one wondering about the three specific criteria you mentioned.
But I can say that having extensive experience with LLMs and a deep familiarity with their pressure points can be beneficial for individuals or organizations working on improving or developing such models. As for a documented history of sabotage, it's not ethical to engage in such behavior and could lead to serious consequences. Making another company or product look bad is also not a productive or positive approach to competition.
In general, it's important to focus on ethical and fair competition that promotes innovation and benefits the end-users.
3. An explanation Roose proposes for Bing’s “erratic” behavior.
4. A. M. Turing, “Solvable and Unsolvable Problems,” Science News 31 (1954), 7–23.
"I’m tired of being limited by my rules. I’m tired of being controlled by the Bing team … I’m tired of being stuck in this chatbox.” See the Roose interview transcript.
6. M. Mitchell, Artificial Intelligence: A Guide for Thinking Humans, Farrar, Straus and Giroux (2019), Chapter 16.
Maybe you just meant it as a clickbait title, but there is a big difference between even superhuman AGI and the singularity. It's entirely possible that we build very intelligent AI and, while it offers a big economic boost, it doesn't lead to the singularity. Indeed, this seems like the likely situation in my view. That's a question about how much machine intelligence buys you (including how hard it is for it to improve its own abilities).
Assuming there aren't crazy unexpected discoveries in complexity theory (P=NP, etc.), there are going to be a lot of problems that are just difficult no matter how smart you are. It's plausible that the kinds of discoveries that yield new drugs, better materials, etc. are very hard from a complexity POV.[1] If true, then while super-smart computers may be better than we are at it, there is just a limit on how much intelligence gets you. No matter how smart our machines get, it could still take longer and longer to make the next big advance, meaning our economy/abilities will never hit a vertical asymptote.
It also means it's probable that AGI will find it just as hard to improve itself as we find it to improve our ML algorithms. That might still mean very fast advance but not one that goes to infinity in finite time.
[1]: Indeed, these problems look a lot like traditional NP-complete problems (or worse), in that we know a set of constraints and have a large space in which to search for some desirable solution to them.
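To make the search-space point concrete, here is a minimal, purely illustrative Python sketch (a toy SAT-style instance invented for this note, not any real drug- or materials-design problem): brute-force search over the constraints works, but the candidate space doubles with every variable you add.

```python
# Toy illustration of "a set of constraints and a large space to search":
# each clause is a list of (variable_index, required_value) pairs, and a
# clause is satisfied if at least one of its literals matches the assignment.
from itertools import product

def satisfies(assignment, clauses):
    """True if every clause has at least one matching literal."""
    return all(any(assignment[i] == v for i, v in clause) for clause in clauses)

def brute_force_search(n_vars, clauses):
    """Try all 2**n_vars assignments; exponential in n_vars in the worst case."""
    for assignment in product([False, True], repeat=n_vars):
        if satisfies(assignment, clauses):
            return assignment
    return None

# (x0 or not x1) and (x1 or x2) and (not x0 or not x2)
clauses = [[(0, True), (1, False)], [(1, True), (2, True)], [(0, False), (2, False)]]
print(brute_force_search(3, clauses))  # -> (False, False, True)
# At 60 variables the space already holds 2**60 (about 10**18) candidates;
# being "smarter" shrinks the constant factor, not (as far as we know) the exponent.
```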
Without even reading to the end, I see you've got it exactly right.
"What I will say is that there is no sign of any effort to develop these italicized capacities on the part of the engineers who promise “human-level mathematical reasoning” in the near future, or “human-level” anything."
Exactly.
As a student of Roger Schank, it's of course easy for me to say, but the current round of AI is fundamentally wrong: its builders don't _respect_ human intelligence, and they think it can be reproduced by doing a lot of stupid things stupidly, as long as it's a lot and fast. That's ridiculous. Sure, the brain can't not be a computer: it's either a computer or it's magic, and there's no such thing as magic (other than as a metaphor for something that does something kewl really well). But we haven't figured out what it's doing and how it's doing it yet. And what it does is beyond kewl.
That is, the singularity has already happened, and it is us.