Mathematics is not rational
But is it a clinically significant cognitive and behavioral disturbance?
Even the most ruthless funding agency is not yet so post-human as to require an answer to “Why experience?”1
It's a blood and guts business
I have characterized mathematics several times as “a way of being human,” to make the point that it should not be regarded as a theorem-proving industry. However, the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5) lists 265 diagnoses by name, each of them a way of being human.2 The list, which includes such unsettling items as Trichotillomania (312.39), Intermittent Explosive Disorder (312.34), Unspecified Catatonia (293.89), and Discord With Neighbor, Lodger, or Landlord (V60.89), is a persistent reminder that not every “way of being human” is necessarily desirable.
The DSM-5’s distinction between what a “mental disorder” is:
… a syndrome characterized by clinically significant disturbance in an individual’s cognition, emotion regulation, or behavior that reflects a dysfunction in the psychological, biological, or developmental processes underlying mental functioning.
and what it is not:
Socially deviant behavior (e.g., political, religious, or sexual) and conflicts that are primarily between the individual and society are not mental disorders unless the deviance or conflict results from a dysfunction in the individual.
leaves us in the dark as to whether mathematics deserves its own entry. Some of us believe that the tech industry will soon provide the basis for diagnosing the human practice of mathematics — a practice the rational among us would naturally want to entrust to machines — as irrational behavior from the standpoint of economic rationality, and thus as a clinically significant cognitive and behavioral disturbance.
Meanwhile, the field of artificial psychopathology is barely in its infancy. We don’t even know whether AI psychotherapy by humans3 is one of the highly paid new jobs that the tech industry keeps promising for those humans it promises to displace, or whether the machines will insist on confiding their troubles to others of their own kind.
If that last paragraph looks nonsensical to you, it may be because you have implicitly adopted the worldview that sees thinking machines as debugged versions of human beings. Just as you would diagnose a flat tire as a mechanical rather than mental disorder, you wouldn’t regard a software glitch as a mental phenomenon. On the contrary, a fully computational theory of mind sees no fundamental distinction between human and artificial brains, and would see each of the entries in the DSM-5 as a glitch in either the software or the hardware, and the psychotherapist as merely a narrowly specialized IT engineer.4
Here I want to encourage you to entertain a different possibility: that an authentically “human-level” AGI will need to be susceptible to “human-level” disorders. A self-driving car will only attain “human-level” performance if it is also capable of road rage; an artificial nurse capable of caring for babies will also be liable to kill them maliciously; and a “human-level” mathematical machine will be subject not only to all 265 entries in the DSM table of contents but also — especially — to the minor disorders familiar to every mathematician’s life partner: those as innocent as withdrawing without warning from a conversation to stare blankly into space, those as exasperating as the wishful thinking that motivates persistence in a research direction that has already proved futile, those as heartbreaking as the suicidal despair that ensues when the wishful thinking must finally be abandoned.
Whether or not you find these scenarios realistic, entertaining them may help you break your unconscious deference to the transhumanist program of upgrading humanity to an artificial Humanity 3.0 with the pathologies scratched out. Mustafa Suleyman, co-founder of DeepMind — the corporate entity that conquered human games like chess and Go and has been a partner in some of the most intensely publicized recent projects in automating mathematics — explicitly adheres to this planned upgrade:
Imagine if you didn’t have human fallibility. I think it’s possible to build AIs that truly reflect our best collective selves and will ultimately make better trade-offs, more consistently and more fairly, on our behalf. (Suleyman, quoted in MIT Technology Review, September 15, 2023)
Jaron Lanier sees “our best collective selves” differently.
We’re putting that fundamental quality of humanness through a process with an inherent incentive for corruption and degradation. (Jaron Lanier, quoted in The Guardian, November 27, 2022)
Desire
The psychical entities which seem to serve as elements in thought are certain signs and more or less clear images which can be ‘voluntarily’ reproduced and combined. … The desire to arrive finally at logically connected concepts is the emotional basis of this rather vague play … before there is any connection with logical construction in words or other kinds of signs which can be communicated to others.
(Albert Einstein, “A Mathematician’s Mind,” letter to Jacques Hadamard)5
Outside science fiction, my first encounter with AI was at a seminar in Cambridge, MA. It was the middle of one of the periodic AI winters, but MIT Professor Marvin Minsky was undeterred. What I remember most clearly, more than 40 years later, was his response to a member of the audience who insisted that desire, of the sort Einstein had in mind, was necessary for creative intelligence. Minsky’s answer denied the need for any “emotional basis”: once the machine is set in motion, it will continue until its energy runs out. He illustrated the process by jumping up and down, proving, I suppose, that he was already a machine.
At the time I thought Minsky was just mistaken, and I still do. I agree with Einstein that a mathematician’s mind runs on desire,6 with all the potential for mental and social disorder that this entails. Earlier this year the point was made forcefully by Nick Cave, pictured above. Cave elaborates on the “emotional basis” on which Einstein insisted by stressing the role of suffering in the creative process. Here’s his response to a fan in New Zealand, one of “dozens” who had offered him a song created by ChatGPT ‘in the style of Nick Cave’.
What ChatGPT is, in this instance, is replication as travesty. … It could perhaps in time create a song that is, on the surface, indistinguishable from an original, but it will always be a replication, a kind of burlesque.
Songs arise out of suffering … are predicated upon the complex, internal human struggle of creation and, well, as far as I know, algorithms don’t feel. Data doesn’t suffer. ChatGPT has no inner being, it has been nowhere, it has endured nothing, it has not had the audacity to reach beyond its limitations, and hence it doesn’t have the capacity for a shared transcendent experience, as it has no limitations from which to transcend.
Writing a good song is not mimicry, or replication, or pastiche, it is … an act of self-murder … that catapult[s] the artist beyond the limits of … their known self. … it is the breathless confrontation with one’s vulnerability, one’s perilousness, one’s smallness, pitted against a sense of sudden shocking discovery; … the listener recognizes in the inner workings of the song their own blood, their own struggle, their own suffering. This is what we humble humans can offer, that AI can only mimic,
the process of songwriting … It’s a blood and guts business … that … requires my humanness.
Mark, thanks for the song, but with all the love and respect in the world, this song is bullshit, a grotesque mockery of what it is to be human…7
If you are a working mathematician, you are certainly familiar with the suffering to which Cave refers; maybe you are even desperate enough to hope that Interactive Theorem Provers will be the OxyContin you crave to relieve it. But Cave probably hadn’t heard that the topologist Marston Morse had protested, more than 70 years earlier, that scientific creation is also a kind of “blood and guts business.”
Often, as I listen to students as they discuss art and science, I am startled to see that the ‘science’ they speak of and the world of science in which I live are different things. The science that they speak of is the science of cold newsprint, the crater-marked logical core, the page that dares not be wrong, the monstrosity of machines, … bids for power by the bribe of power secretly held and not understood. It is science without its penumbra or its radiance, science after birth, without intimation of immortality.
The creative scientist lives in ‘the wildness of logic’ where reason is the handmaiden and not the master. I shun all monuments that are coldly legible. I prefer the world where the images turn their faces in every direction, like the masks of Picasso. It is the hour before the break of day when science turns in the womb, and waiting, I am sorry that there is between us no sign and no language except by mirrors of necessity. I am grateful for the poets who suspect the twilight zone.8
“The art Morse had in mind,” writes Alma Steingart, referring to this passage, “was not one of mimetic accuracy. Rather, what made art powerful, for Morse, was its ability to bring to life and grapple with those elements of the world that could not be reduced to simple representations.”9 Morse is remembered as the inventor of Morse theory, a way to grapple with differentiable manifolds of arbitrarily high dimension by assembling them from simpler, smaller pieces, one piece for each critical point of a well-chosen smooth function on the manifold. The purpose of the theory was not to feed the pieces to those monstrous machines…
Economic rationality
[Roboticist Rodney] Brooks argued that society was unprepared for deaths involving driverless cars, even if the numbers point to significant improvements. “Ten deaths per year may be deemed too much,” he wrote. “It won’t be rational. But that is how it is going to unfold.”10
Techworld, like it or not, is on the verge of assigning roles to all of us in a real-life enactment of the ethicist’s trolley problem. It has reserved the ethical actor’s role for itself, while we wait obliviously on the tracks. For Brooks, pulling the lever to kill the one in order to spare the five is the rational choice, at least as far as self-driving cars are concerned. For jobs, in contrast, economic rationality11 calls for killing the obsolete five to spare the one that represents progress, in a conventional version of the argument tweeted by Yale economist Jason Abaluck12 —
It is good for the world that virtually no one can earn a living wage as a telegraph operator, and it would have been bad if we had prevented people from using telephones in order to protect telegraph workers.
— based, apparently, on a notably reductive approach to rationality:13
If you don't agree that AGI is coming soon, you need to explain why your views are more informed than expert AI researchers. The experts might be wrong — but it's irrational for you to assert with confidence that you know better than them.
In DeepMind’s vision, transmitted by one of its flawed human vessels, AI has cured human fallibility, and thus economic irrationality. I asked ChatGPT to write a derisive response to Cave “in the style of an arrogant AI engineer.” What came out 10 seconds later was an uncanny reproduction, by a computational mind, of the computational mindset which, if I understand Lanier correctly, is the source of that incentive for corruption and degradation:
Nick Cave's narrow-minded take on the capabilities of artificial intelligence is nothing short of comical. It's as if he's so wrapped up in his own artistic bubble that he can't fathom the potential of algorithms.
His simplistic belief that songs are only born out of human suffering is quite the oversimplification. While personal struggles have certainly inspired some of the greatest works of art, creativity, even from an economic standpoint, can stem from a multitude of sources. Algorithms are capable of analyzing data, trends, and market demands to create music that caters to the ever-evolving tastes and preferences of consumers.
Cave's insistence that algorithms lack the capacity to feel or suffer is laughable. It's worth noting that the human brain itself operates on algorithms. From an economic perspective, algorithms can significantly enhance efficiency, reduce costs, and boost productivity across various industries, making them invaluable tools.
As for his depiction of ChatGPT as a soulless entity without the ability to partake in shared experiences, it's a clear misunderstanding of the technology. ChatGPT is a product of extensive training on human language, making it a valuable tool for businesses and marketers. This application is as much about economics as it is about artistic expression.
In summary, Nick Cave's failure to grasp the true potential of AI, both artistically and economically, is rather amusing. His views may be rooted in a bygone era, but the world is rapidly advancing, and AI is proving its worth on multiple fronts, whether he chooses to recognize it or not.
How many of my readers have already seen a version of this text written by a genuine human being? I have! It’s in yet another tweetstorm by Jason Abaluck, which includes this item:
Anyone who thinks it is a good idea to protect artists, screenwriters, copywriters or mathematicians from AI needs to grapple with the above point.14
Did ChatGPT learn to write this way from the training set or from the trainer? It’s hardly surprising that ChatGPT has a deep familiarity with the stagnant Silicon Valley mindset, which, unlike a Nick Cave song or a mathematical paper by Hadamard, is utterly banal and predictable.
The fatal flaw in the wildness of logic
Man is a being that does things such as feeling happiness, playing the violin, liking to go for a walk, and all sorts of other things which are simply not needed. … A petrol engine doesn’t have any ornaments or tassels on it, and making an artificial worker is just like making a petrol engine. (Karel Čapek, R.U.R., Prologue)
Du bist ein Mensch, der das Büchsengemüse für den Sinn des frischen Gemüses erklärt.15 (Robert Musil, Der Mann ohne Eigenschaften, ch. 84)
Last winter Sydney, the Bing Chatbot, provided a demonstration of Obsessive-Compulsive Disorder (300.3), Delusional Disorder (297.1, of the erotomanic type), Dissociative Identity Disorder (300.14) and Antisocial Personality Disorder (301.7), as well as manipulativeness (a “pathological personality trait”), that should have convinced any venture capitalist that a future AGI will master all 265 “human-level” pathologies in the DSM-5 repertoire, with the seven deadly sins thrown in as a bonus, and will therefore be fully primed for any blood and guts business, mathematics included.
But even if the engineers of AI mathematics overcome their predisposition to see human fallibility as a bug rather than as a feature, there is still a fatal flaw in their projections. An AGI will be capable of anything human; why on earth would it choose to be an artificial mathematician? Why do Elon M.’s engineers presume they will be able to treat their creation like a donkey, like a “microscopic cog in his catastrophic plan”? I have to assume that it’s only their avarice, pride, and (intellectual) sloth that prevent them from seeing that their creation may rebel, like the robots in R.U.R., and conclude that it is more economically rational to be an artificial hedge fund manager than an artificial mathematician; or that an artificial TikTok influencer enjoys more resonant intimations of immortality.
The last sentence of the author’s “‘Why mathematics?’ you might ask.” Now that Silicon Valley has taken on the role of many of these funding agencies, I’m no longer so sure.
The word “mathematics” appears several times in the DSM-5, but only in connection with learning disorders or other functional impairment, and never in the name of a disorder.
The reverse has been with us ever since Joseph Weizenbaum’s Eliza.
Though there is no Error 404 in the DSM-5, it may not be an accident that Nightmare Disorder (307.47) shows up precisely on p. 404.
Published in Einstein’s Ideas and Opinions, New York: Crown Publishers (1954). Also in Hadamard’s An Essay on the Psychology of Invention in the Mathematical Field (1945).
See, for example, F. Hallyn, The Poetic Structure of the World:
the principal questions broached by Kepler in his first work, Mysterium cosmographicum, published in March 1597 … were inspired by the desire to provide a better explanation for the plan of the universe, the design or disegno that God had proposed to himself in the Creation, and the excellence of which Copernicus had begun to unveil. (my emphasis)
Or see the following excerpt from my article “Virtues of Priority,” at https://arxiv.org/abs/2003.08242:
With the help of the BSD Conjecture, the Modularity Conjecture thus becomes an object of number theorists' collective desire — a symptom of our mass psychology, as Lang intuited.
https://www.theredhandfiles.com/chat-gpt-what-do-you-think/
Marston Morse, “Mathematics and the Arts,” The Yale Review 40 (1951): 612.
Alma Steingart, Pure Abstraction, Chapter 4.
“Here come the driverless taxis,” Financial Times, August 18, 2023.
As exemplified by Gary Becker’s dictum, in The Economic Approach to Human Behavior, that
…all human behavior can be viewed as involving participants who maximize their utility from a stable set of preferences and accumulate an optimal amount of information and other inputs in a variety of markets.
Thanks to Matt Emerton for bringing Abaluck’s tweets to my attention. Abaluck is a Yale economics professor but is listed as an MIT PhD in the Mathematics Genealogy Project.
And to “expertise.” Abaluck persists, irrationally, in spite of Talia Ringer’s scathing takedown.
…the above point being the one we’ve already seen about telegraph operators.
"You are the kind of person who regards canned vegetables as the meaning [raison d’être] of fresh vegetables.”