In reaction to Jean-Michel Kantor’s challenge to those who predict that the mathematics of the future will belong to the machines:
Can a computer come up with the idea of the Poincaré conjecture?
Silicon Reckoner has devoted several posts, in 2021 and again in 2023, to the problem of reverse engineering Poincaré.
In a comment on the most recent of these posts, AG suggested that the text by Akshay Venkatesh points to a potential capability of AI that “is quite distinct from the context and narrative underlaying Poincare's transformational (once in a decade if not a century) insight referenced in Kantor's quest.”
Since Kantor’s response to AG is too long to include as a comment, I am happy to provide him the opportunity to explain his position in this guest post. The text after the horizontal line is entirely due to Kantor.
There is general agreement among mathematicians that their research activity has two different sides: one is the creation of new ideas and new concepts and the raising of new problems; the other consists in answering questions, proving statements, and establishing solid foundations for this new mathematics.
This distinction is summarized in the well-known dictum of Poincaré:
“C’est avec la logique que nous prouvons et avec l’intuition que nous trouvons.” (It is with logic that we prove and with intuition that we discover.)
We are concerned here with this Janus face of mathematics as it confronts the recent technological revolution of artificial intelligence (AI).
Some specialists maintain that AI could in the future become artificial general intelligence (AGI), a super-human intelligence possessing all the competences of human minds. This would logically include a complete “artificial mathematics” with the two different aspects mentioned above.
What then about intuition?
I suggested the famous Poincaré conjecture as a test (see [1, 2]), but my question applies to any mathematical problem: given the state of mathematics before the question was ever posed (say, the existence of infinitely many prime numbers), could an artificial device raise the issue on its own?
During the Fields Medal Symposium held in 2022 in Toronto in honor of Akshay Venkatesh, many sessions discussed the connections between AI and mathematics; Venkatesh himself, in a remarkable contribution [3], touched upon our challenge when he claimed that “If Alpha_0 [symbol of a future artificial machine] does all the proving and we do all the questioning, the result is not so different to a scenario where Alpha_0 is capable of generating its own mathematical conjectures.” But who knows: maybe Alpha_0 could not generate conjectures at all, or, on the contrary, it could generate “super-human” mathematical ones!
Moreover, there is another issue: it may be reasonable to imagine a machine that generates many conjectures, but could it then possess criteria for choosing among them? What are the human criteria for choosing a problem, and could there be artificial analogues?
We will not discuss this point, which is linked to conjectural work on future machines possessing emotions or even consciousness (see [4]).
The role of intuition was discussed during the sessions of the Workshop; in particular, Stanislas Dehaene (see [2]) distinguished the domains where artificial networks are efficient from those where “the brains keep the upper hand,” and suggested radically new developments for AI in order to treat the question of intuition. As examples, he seemed to refer to a new Bayesian type of reasoning (inspired by [5]), as well as a mix of approximate intuitions and precise language-like symbolic expressions (similar to [6]).
We also suggest the possibility of an artificial abduction, inspired by the logical abduction developed by Peirce (see [7]).
Concerning the suggestion by AG to start from the Graduate Texts: this procedure neglects the creative dimension of mathematical research. If the Springer Graduate Texts could generate a “competitive graduate dissertation,” all thesis advisors would be out of work! This point of view reminds us of one of the main criticisms made of AI, synthesized in the expression coined by Bender, Gebru et al. [8]: how could new ideas be produced by billions of “stochastic parrots”?
Of course, this could change with a different kind of AI; that has been the hope of several mathematicians and computer scientists who are trying to develop an “artificial intuition.”
A detailed study of these works shows the efficiency of AI as an assistant to human intuition and ideas, typically in the domain of combinatorial problems.
This is confirmed in the conclusions of [9]:
“…we focus on helping guide the highly tuned intuition of expert mathematicians, yielding results that are both interesting and deep.
… As mathematics is a very different, more cooperative endeavor than Go, the role of AI in assisting intuition is far more natural. Here we show that there is indeed fruitful space to assist mathematicians in this aspect of their work. …
It is our hope that this framework is an effective mechanism to allow for the introduction of machine learning into mathematicians’ work, and encourage further collaboration between the two fields.”
1 Michael HARRIS, “What is ‘human-level mathematical reasoning’?” Substack newsletter, Silicon Reckoner, November 17, 2021, https://siliconreckoner.substack.com/p/what-is-human-level-mathematical-reasoning.
2 Michael HARRIS, “Has DeepMind mechanized mathematical intuition?,” Substack newsletter, Silicon Reckoner, December 14, 2021, https://siliconreckoner.substack.com/p/has-deepmind-mechanized-mathematical.
3 Akshay VENKATESH, Some thoughts on automation and mathematical research, November 2021, https://www.math.ias.edu/~akshay/research/IASEssay.pdf.
4 David MUMFORD, Numbers and the World: Essays on Math and Beyond, AMS (2023), ISBN 978-1-4704-7051-7.
5 Joshua S. RULE et al., “The Child as Hacker,” Trends in Cognitive Sciences 24, no. 11 (November 2020): 900–915, https://doi.org/10.1016/j.tics.2020.07.005.
6 Stanislas DEHAENE, “Origins of Mathematical Intuitions: The Case of Arithmetic,” Annals of the New York Academy of Sciences 1156 (March 2009): 232–59, https://doi.org/10.1111/j.1749-6632.2009.04469.x.
7 Paul ERNEST, “Abduction and Creativity in Mathematics,” in Handbook of Abductive Cognition, ed. Lorenzo Magnani (Cham: Springer International Publishing, 2023), 585–611.
8 Emily M. BENDER et al., “On the Dangers of Stochastic Parrots,” Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, accessed January 3, 2024, https://dl.acm.org/doi/10.1145/3442188.3445922.
9 Alex DAVIES et al., “Advancing Mathematics by Guiding Human Intuition with AI,” Nature 600, no. 7887 (December 2, 2021): 70–74, https://doi.org/10.1038/s41586-021-04086-x.
The challenge of artificial mathematical intuition has been recognized since the beginning of the field. Just yesterday, in the book by Matteo Pasquinelli that I'm reading (soon to be reviewed here), I found a quotation from Wolfgang Köhler's review of Wiener's "Cybernetics":
"It is not astonishing that, so far as speed is concerned, so far as speed is concerned, the substituted operations of machines are far superior to anything that humans can achieve. At the same time, these operations appear to generically different from those of a human being who is occupied with a mathematical problem... The machines do not know, because among their functions there is none that can be compared with insight into the meaning of a problem."
The full reference: Wolfgang Köhler, review of Cybernetics or Control and Communication in the Animal and the Machine, by Norbert Wiener, Social Research 18 (1951): 127-28.
Meanwhile, this morning I watched an online broadcast of a lecture by Amaury Hayat at Erlangen. One of his slides reads:
- AI is already useful in the practice of mathematics and has solved several difficult problems.
- AI is trained to have better intuition than humans on a specific problem.
- This augmented intuition allows us to bypass the difficulty of the problem.
Is he using a different definition of intuition?