(My) shallow thoughts about deep learning in math, prelude
Grading the June workshop on "AI to Assist Mathematical Reasoning"
This is exactly the same post published a few minutes ago, with the title amended to avoid misunderstandings.
A scorecard for the mid-June workshop
The organizers of the NSF/NASEM workshop (henceforward: The Workshop) will naturally, and appropriately, judge its success by reference to the significance of the technological developments discussed there and the speakers’ effectiveness in explaining their work and their perspectives. Toward the end it was claimed that 2000 people were attending the workshop virtually, and it’s safe to assume that most of them share the organizers’ priorities.
I do not share these priorities; the success or failure of mechanization of mathematics ranks low among my concerns. What does matter to me are, on the one hand, the misunderstandings and misrepresentations of the practice of mathematics that are exacerbated by most of what is written about what was discussed at The Workshop; on the other hand, the almost complete lack of attention to the larger implications of this work, which I find particularly irritating when contrasted with the very lively conversation going on … pretty much everywhere except mathematics … even on the precise days when The Workshop was taking place.
For example, just as The Workshop was coming to a close, my regular mailing from Data & Society (which I unfortunately never have time to read) included these items:
With attention-grabbing statements alleging AI superpowers, Big Tech leaders urge us to look to their version of the future, D&S Director of Research Jenna Burrell argues in a piece for Tech Policy Press. “They are distracting us from the enormous amount of wealth and power they stand to gain from the rise of AI — and from the fact that it doesn’t have to be this way,” she writes. “They would like to skip past today’s real work, that of tackling the challenges and wrestling with the possibilities that will shape the world to come. Instead, they ask us to look to the future — but they forget that the future belongs to us all.”
and
“To focus on existential risk only, to drive the policymaking conversation in that direction, is a huge risk in and of itself,” D&S Executive Director Janet Haven tells Gabrielle Sierra, discussing AI on the Council on Foreign Relations’s Why It Matters podcast. “I’m concerned about the future of creativity. I’m concerned about the future of things like care work and the lack of space that we have as a society to talk about what we're willing to automate and what we are not willing to automate….So much of the decision-making in these systems and how they are designed, how they’re rolled out, who has access to them, is happening in a very tight circle of people and companies who have access to data, to compute, to money and to talent. And it is not a democratic process. It’s not a space for societal deliberation. That is very worrying to me.”
If these questions are not too radical for the Council on Foreign Relations, surely the mathematical community can handle them. “Wealth and power,” “democratic process,” “decision-making”… was any of that on The Workshop’s agenda? Did any of the speakers talk about these issues? I’m looking forward to finding out.
As if that weren’t enough, the Association for Computing Machinery (ACM) held its annual Conference on Fairness, Accountability, and Transparency (FAccT) at exactly the same time as The Workshop; and the European Parliament adopted the first regulations of its AI Act on June 14, as The Workshop drew to a close.
Over the next few weeks Silicon Reckoner will have a different format. Because I was at a different conference — a mathematics conference — during the week of The Workshop, I was unable to attend even virtually. I had hoped to record the talks during my conference in Bonn, but the available technology required my constant attention. Fortunately the videos can now be viewed online. I’ll be sharing my scores as I progress through the hours of the presentations. So far I’ve been able to watch just a few of them, and their scores will be recorded in the next entry. But first I should explain the criteria on which sessions will be graded, on a scale from 0 to 10.
Understanding the scorecard
Context: Attention to the deep motivations of governments and corporations. (Not least the tendency to exaggerate the importance of developments in order to attract investment.)
Access: Access to necessary resources, exclusion, monopoly.
Externalities: Regulation, including climate and water considerations, personal data protection, and copyright infringement. Extra points for mentioning Europe’s AI Act, whose first regulations were adopted by the European Parliament as The Workshop drew to a close.
Democracy: Democratic decision-making, power relations.
Cluefulness: The opposite of cluelessness. Here it consists in acknowledging that conversations about AI in other disciplines (too numerous to list, but one can start with this or this), or the critical work within the AI community itself (not only radical critiques like this one but also mainstream sources like ACM — see above — or MIT Technology Review), may be relevant to mathematics. In Tech Review we can read
“The chorus of voices raising concerns about AI has simply gotten too loud to be ignored,” says Jenna Burrell, director of research at Data and Society, an organization that studies the social implications of technology.
It turns out, though, that when mathematicians have their hearts set on ignoring something, nothing is too loud to break through their wall of silence.1
No grades will be awarded for the quality of the technical reports, as mentioned above; nor for attention or inattention to the following topics, among others:
Philosophical and historical considerations;
How different ideals (of proof and rigor, among other epistemic virtues) have dominated at different times in the history of mathematics; for example, at least one talk expressed a prejudice against diagrammatic proofs that even now is not universally shared;
Implications for employment (beyond those that arise in connection with the scores for access and democracy);
The danger that the adoption of rigid norms of rigor will foreclose future developments.
Appearances to the contrary…
I wouldn’t dream of disrespecting adherents of Manichaeism. Wikipedia describes it as a “former major universal religion” but I’m not so sure about the “former,” and not only because it has an active website. It is possible to be non-binary in many respects, including gender non-binary, but still to adhere to a binary outlook which, like that of historical (and perhaps surviving) Manichaeans, consists in “an elaborate dualistic cosmology describing the struggle between a good, spiritual world of light, and an evil, material world of darkness.”
I am convinced that the binary logic that underlies computer science — the logic of 0s and 1s — is a modern manifestation of Manichaeism, apparently divested, but only apparently, of its eschatological associations with the eternal struggle between good (the light spiritual 0) and evil (the dark material 1 – or is it the other way around?). This (admittedly speculative) psychosocial history of contemporary computer science may explain why those who express even the slightest doubts about the wisdom of the plans for our collective future being hatched by executives of major tech firms are immediately and often disdainfully dismissed as Luddites.2
However this may be, I have no trouble acknowledging that I make at least implicit use of AI practically every day, and this doesn’t bother me in the least.3
It’s thanks to AI, in fact, that I am able to watch The Workshop’s videos at 1.5 times their original speed.4
Without this technology, Brendan Hassett and the two participants who joined him in the opening panel — computer scientist Moshe Vardi and mathematician Geordie Williamson — would sound like Alvin and the Chipmunks when their panel is played at 1.5 or 2.0 times the speed of the recording. And without this technology I would not have been able to generate a brand new fairy tale to end this post.
When Elon Musk was interviewed on French TV
I’ll begin reporting on The Workshop, with grades, in the next few weeks. In the meantime, here are a true story and a fictional story of encounters, in a remote corner of France, with two famous figures in AI, one human and the other artificial.
My friend Adrien stopped by with a bag of cherries yesterday and mentioned that he had just seen the interview with Elon Musk5 on the (public) network France 2. The interview left him distrustful of Musk and no less concerned than before about the implications of AI, for jobs in particular. When Adrien told me he had never used ChatGPT, I gave him a few illustrations. First ChatGPT wrote a poem about the cherries of the region, and a second poem (this one in a tragic vein) about the worms that eat the cherries. The poems were rather awful, so I turned to current affairs, specifically as they relate to Elon Musk’s visit to France. The exchange took place in French (a gap that AI, in the form of online translation, could presumably have bridged for readers who don’t know the language); it is given below in English translation.
Prompt: Write a story about a cashier at the Leclerc store in Clamecy who meets Elon Musk and asks him not to eliminate her job.
ChatGPT: Once upon a time, in the small town of Clamecy, there was a cashier named Sophie. She had worked for many years at the Leclerc store, where she greeted customers with a warm and efficient smile. Every day she scanned the products, took payments, and exchanged a few friendly words with the regulars.
One sunny morning, as Sophie was getting ready to start her workday, she noticed an unusual commotion in the store. The employees were whispering among themselves and pointing at an elegantly dressed man strolling among the aisles. This man was none other than Elon Musk, the famous entrepreneur and technological visionary.
Curious and intrigued, Sophie cautiously approached him. She gathered her courage and gave him a shy smile.
"Excuse me, Mr. Musk, I'm Sophie, the cashier in this store. I'm a great admirer of your achievements. But I've heard there are plans to automate certain jobs, like mine. I wanted to ask you not to eliminate my job."
Elon Musk, accustomed to requests and admiring looks, stopped for a moment to listen to Sophie. His sparkling, benevolent gaze showed that he was paying real attention to her words.
"Hello, Sophie," he replied with a smile. "I completely understand your concerns.6
Automation is a complex and delicate subject. My goal is not to deprive people of their jobs, but rather to create new opportunities and to make repetitive tasks more efficient thanks to technology."
Sophie listened attentively, hope and apprehension mingling in her eyes.
"Let me tell you something, Sophie," Elon Musk went on. "I believe in the importance of human interactions, in the friendliness of neighborhood shops. I don't want to eliminate jobs, but rather to free people from certain tedious tasks so that they can concentrate on activities with higher added value."
Sophie felt relief spreading through her, but she remained cautious.
"And how will that work in practice, Mr. Musk?" she asked.
Elon Musk then explained his vision of the future, a future in which technologies like artificial intelligence and robots could take over repetitive tasks while allowing employees to concentrate more on customer service, personalized advice, and building relationships with customers.
"I want automation to bring positive change, creating new opportunities for you and your colleagues. I want cashiers like you to become ambassadors of customer satisfaction, guides in the choice of products, offering an unforgettable shopping experience."
Sophie listened with a mixture of wonder and hope. She felt reassured to know that Elon Musk did not simply want to eliminate jobs, but rather to transform them for the better.
"Thank you, Mr. Musk," she said gratefully. "I sincerely hope your vision comes true, and I am ready to take on this challenge with enthusiasm."
Elon Musk smiled at her one last time before continuing his tour of the store. Sophie, filled with hope and determination, went back to her register with an even brighter smile.
And so the story of Sophie and Elon Musk became the starting point of a positive transformation at the Leclerc store in Clamecy, where the cashiers and the other employees learned to embrace the opportunities offered by technology, while preserving the importance of human interactions in the world of commerce.
Adrien was not convinced, so I pursued the exchange with ChatGPT.
Prompt: Adrien doesn't agree with this scenario. He doesn't believe that Musk intends to protect jobs. What should we tell him?
ChatGPT: Dear Adrien, I understand that you have doubts and concerns about Elon Musk's intentions regarding the protection of jobs. It is natural to have differing opinions on this subject. Allow me to present you7 with an alternative point of view.
Elon Musk, as an entrepreneur and visionary, has often expressed his interest in improving technologies and in automation, but that does not necessarily mean he wants to eliminate all human jobs. He emphasizes creating new opportunities and redefining traditional roles to make them more rewarding.
He has spoken of the need to relieve workers of repetitive and tedious tasks through artificial intelligence and robots, in order to free up time for more enriching and creative activities. The idea is to let people concentrate on aspects that require specifically human skills, such as customer service, creativity, empathy, and the solving of complex problems.
Musk believes in the importance of human interactions and in the value of neighborhood shops. It is a matter of finding a balance between the efficiency of new technologies and preserving the human side of commercial relationships. He wants employees, such as cashiers, to become ambassadors of customer satisfaction, personalized advisers, and creators of unique experiences.
It is understandable to have doubts, but it is also important to keep an open mind toward new perspectives and innovative approaches. The future of work is constantly evolving, and solutions may well emerge that combine technology with the preservation of human jobs.
It is essential to keep up the dialogue and to encourage constructive debate on these questions, in order to understand the different perspectives and to explore ways of building a balanced future of work, one that takes into account both technological advances and the needs of workers.
Don't hesitate to express your concerns and to take an active part in these discussions.
You can judge for yourselves whether or not Adrien found ChatGPT’s response persuasive.
1. Thus two colleagues, who in other respects are anything but clueless, offered variants of the following argument when I pointed out that the membership of the committee organizing the workshop seemed to represent only one perspective:
…when there’s a Sage workshop, we don’t make a point of inviting people who think computer algebra systems are a bad idea…
Please! Anyone who is paying any attention to the news — for example, my friend Adrien, mentioned at the end of this post — knows that defining the future role of AI in each and every aspect of social organization is one of the most controversial topics to have arisen in recent memory. Arguments like the one just quoted would never have been taken seriously at the ACM FAccT Conference, or in the MIT Technology Review, not to mention at the European Parliament sessions. Why do (some) mathematicians find such arguments acceptable?
2. Not that there’s anything wrong with being a Luddite, as I’ve written here, as Thomas Pynchon argued long ago, and as you can even read in Scientific American. If you are ever accused of Luddism, your best bet may be to quote the retort recently published in The New Yorker by science fiction author Ted Chiang:
Whenever anyone accuses anyone else of being a Luddite, it’s worth asking, is the person being accused actually against technology? Or are they in favor of economic justice? And is the person making the accusation actually in favor of improving people’s lives? Or are they just trying to increase the private accumulation of capital?
Capitalism is the machine that will do whatever it takes to prevent us from turning it off, and the most successful weapon in its arsenal has been its campaign to prevent us from considering any alternatives.
3. AI for facial recognition, especially as used by the police, is of course another matter.
4. In fact I even used AI to confirm the claim in that last sentence. It was obvious enough to me that AI was involved in making this possible, but since I didn’t know the name of the process, I was unable to find the right question to ask Google in order to learn how it’s done. So I just gave ChatGPT the prompt “How is AI used to change audio playback speed?” and here is the answer:
AI is used to change audio playback speed through a technique called time stretching or time scaling. Time stretching refers to the process of altering the duration of an audio signal while preserving its pitch or frequency content. This technology is commonly employed in various applications such as music production, audio editing, speech recognition, and more.
The traditional approach to time stretching involves splitting the audio into small segments, known as frames, and manipulating the time duration of these frames to achieve the desired speed change. However, this method often leads to artifacts and quality degradation, particularly when significant speed adjustments are made.
AI techniques, particularly deep learning models, have been leveraged to overcome these challenges and improve the quality of time-stretched audio. Here's a general overview of how AI is used in this context:
1. Training: A deep learning model, such as a convolutional neural network (CNN) or a recurrent neural network (RNN), is trained on a large dataset of audio examples. The dataset typically includes pairs of original audio and their corresponding time-stretched versions, created using traditional methods.
2. Feature extraction: The trained model learns to extract meaningful features from the audio signal, capturing both the temporal and spectral characteristics.
3. Mapping function: The model learns a mapping function that takes an input audio waveform and generates a time-stretched version as the output. The mapping function essentially encodes the relationship between the original audio and its time-scaled counterpart.
4. Time scaling: During the inference or application phase, the trained model applies the learned mapping function to the input audio, effectively altering its playback speed. The model generates the time-stretched audio by stretching or compressing the temporal representation while preserving the spectral characteristics.
By utilizing deep learning models, AI-based time stretching techniques aim to minimize the artifacts and maintain the quality of the audio, even at extreme speed changes. These models can adaptively adjust the time scaling based on the input signal, resulting in more natural and artifact-free time-stretched audio playback.
Experts will be able to judge the accuracy of the details, but this is convincing enough for me. I noticed, by the way, that I find it hard to follow time-scaled audio with some foreign accents in English. Several times, for example, I had to replay Vardi’s presentation at normal speed in order to understand what he was saying. I wonder whether that’s simply because the training set doesn’t include enough examples of certain accents.
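For readers who want to see what the “traditional approach” described above looks like in practice, here is a minimal sketch in Python. The libraries (librosa and soundfile), the file names, and the choice of a 1.5x rate are illustrative assumptions of mine, not anything the quoted answer specifies; the snippet performs a classical phase-vocoder stretch, with no deep learning model involved.

```python
# Minimal sketch of classical (non-AI) time stretching with a phase vocoder,
# using the librosa library. Playback is sped up 1.5x while pitch is preserved,
# so the speakers do not turn into Alvin and the Chipmunks.
import librosa
import soundfile as sf

# Load an audio file at its native sample rate (the path is a placeholder).
y, sr = librosa.load("workshop_talk.wav", sr=None)

# Phase-vocoder time stretch: rate > 1.0 shortens the audio, i.e. faster playback.
y_fast = librosa.effects.time_stretch(y, rate=1.5)

# Save the result; played back at the original sample rate it runs 1.5x faster.
sf.write("workshop_talk_1.5x.wav", y_fast, sr)
```

Nothing in this sketch involves machine learning, which is consistent with the warning, reported below, that time stretching long predates the deep learning models ChatGPT describes.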
5. Adrien judged Musk’s spoken French to be weak.
6. Interestingly, in the French original Sophie addresses Musk as vous while Musk uses the familiar tu form. A recurrent complaint in discussions of systemic racism in France — the existence of which has been indignantly denied by successive Interior Ministers — is that the French police systematically use tu when speaking to people of color, who comprise the majority, or at least a substantial minority, of cashiers in big French cities and their suburbs, though not in a small town like Clamecy. It’s probably just a coincidence.
7. In the French original, ChatGPT doesn’t know whether to use tu or vous in speaking to Adrien.
Footnote 4 should probably be withdrawn. Ernie Davis warned me yesterday that time stretching technology has been around for decades, and that I may have allowed myself to be suckered by ChatGPT, which (as has been widely observed) has a habit of making things up, including in this case perhaps an entire technology, to please its human partners. The point I was trying to make, however, is that I wouldn't object to the use of the technology if ChatGPT's story were true.