Discussion about this post

Michael Harris

Thank you for your comment and for the link to your essay. I'm not sure where our agreement is "violent"; I am much less keen than you on Frege, but that remains to be developed. Thank you for quoting me. I have some serious complaints about the way I was quoted in that Quanta article, outlined at https://mathematicswithoutapologies.wordpress.com/2020/09/01/the-inevitable-questions-about-automated-theorem-proving/; my continuing frustration with Quanta was one of the factors that led me to start Silicon Reckoner.

It's generous of you to say that I've triggered a discussion, but I haven't seen much evidence that this is the case. I hope more people who disagree with the hype, or propaganda, will speak up. I particularly appreciate section 6 of your essay, but when you write about "meaning" and that "There is no audience besides the audience of mathematicians," you have to bear in mind that much of the ethos of the mechanization project is already post-human and its most energetic protagonists are probably past being convinced.

Ke Zhang

I stumbled on your blog and then followed it to this Substack. Thank you for your very thoughtful article. I must admit that (like many fellow mathematicians) I don't have much patience for the philosophy and history of mathematics, but I do share your reservations about the formalization of mathematics, on narrower grounds.

In my view, there is a long history of computer scientists and tech evangelists overselling the value of automation. Inevitably, some comparison to foundational tools like the printing press or the steam engine is made. However, computer automation has only changed the way we work, often not for the better. Even in the programmers' own domain, a new programming language is invented every month promising to raise programmers' productivity dramatically; needless to say, none achieves that goal, since new ones keep appearing. I think it's possible that the formalizers will end up in a conundrum similar to the programmers', with a large amount of time spent on satisfying the compiler instead of thinking about mathematics. If referring to someone else's theorem works like importing a library in programming, I don't really envy a mathematician who is required to do that for their proof to be accepted, as in the sketch below.
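To make that analogy concrete, here is a minimal sketch of what citing someone else's theorem looks like in a proof assistant. It assumes, purely for illustration, Lean 4 with the Mathlib library (neither is named in the comment above); Nat.exists_infinite_primes is an existing Mathlib theorem cited rather than reproved.

    -- A sketch assuming Lean 4 with Mathlib (an assumption for illustration only).
    -- Using someone else's theorem means importing the library that proves it,
    -- and the proof is accepted only once the elaborator (the "compiler") checks it.
    import Mathlib

    -- `Nat.exists_infinite_primes` is a Mathlib theorem; we cite it, we do not reprove it.
    example (n : ℕ) : ∃ p, n ≤ p ∧ p.Prime :=
      Nat.exists_infinite_primes n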

In my view, the recent advances in machine learning have not changed the situation significantly. "AI" succeeds in very narrow domains (like Go and chess), but only produces something that appears correct in the domains of language and the arts. The AI optimists are counting on the mysterious singularity to validate their hype. Now, it's entirely possible that the singularity will come in the near future, but I do doubt that the current technology is a net benefit.
