Discussion about this post

Peter Gerdes

As a logician, I've spent a long time trying to convince all the people who think superintelligent AIs will be magic that inherent computational complexity exists. No, AIs won't have godlike capabilities relative to us.

But I don't really understand how you get from that fact to the idea that they won't massively change how proofs are found, by being substantially faster at proving things than us (or at least able to bring more computational resources to bear). Sure, we are more efficient than brute-force proof search via unification/resolution, but at some level, when we search for an idea, we are engaged in a type of brute-force search, and there is no particular reason to think that AI won't eventually match that.
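For concreteness, here is roughly what "brute-force proof search via resolution" means, stripped down to the propositional case (a toy sketch, not any particular theorem prover; the clause encoding and function names are my own):

```python
from itertools import combinations

def resolve(c1, c2):
    """All resolvents of two clauses. Literals are nonzero ints; -x negates x."""
    out = []
    for lit in c1:
        if -lit in c2:
            out.append(frozenset((c1 - {lit}) | (c2 - {-lit})))
    return out

def unsatisfiable(clauses):
    """Saturate the clause set under resolution; True iff the empty clause is derived."""
    clauses = set(map(frozenset, clauses))
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for r in resolve(c1, c2):
                if not r:
                    return True   # empty clause: contradiction found
                new.add(r)
        if new <= clauses:
            return False          # saturated with no contradiction
        clauses |= new

# To prove P from {P or Q, not Q}, refute the set together with not P.
# Encoding: 1 = P, 2 = Q.
print(unsatisfiable([{1, 2}, {-2}, {-1}]))  # True: P follows
print(unsatisfiable([{1, 2}, {-2}]))        # False: consistent without not-P
```

The search is complete but blind: the clause set can blow up exponentially before the empty clause appears, which is exactly the gap between this and how a mathematician hunts for a proof.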

Sadly, imo, that's kind of the most depressing outcome: it will still take lots of effort to solve hard problems, but we will all be in the situation where we can't even pretend we are adding much value by trying to prove things ourselves.

I mean, if I felt that, in computability theory, I'd get the theorems I'm interested in proved faster by doing someone else's laundry so they could take the time to solve them for me, I wouldn't be a mathematician. Whether or not you want to say that a machine proof doesn't count as mathematical understanding to you, it counts as far as my reasons for doing math are concerned. I know some people would do math even if it was all already known, just to rediscover it themselves. I wouldn't; indeed, I'm not interested unless I think I'm at least sort of advancing the frontier of knowledge in a meaningful way.

Obviously, other mathematicians may be brighter than me, but I need to believe that I can at least learn a sliver of the mathematical universe well enough that my contributions advance knowledge there more than interacting with me slows it down, and I worry we will hit that point with AI -- not in the next few years, but maybe in 20 years.

To people who see doing math like playing music, as a kind of purely expressive/artistic activity, maybe that will be fine. But to those of us who do math precisely because we don't like that kind of thing, it will be sad. Though maybe VR will come along and give us something else fun to do.

Geordie Williamson

I would like to be convinced by this argument but am not. We mathematicians prove things in a world where proving is hard, even -- as the authors point out -- in decidable theories like Presburger arithmetic. Yet, on a daily basis, we (try to) find some passage through formally undecidable terrain. A central question seems to be: is there some formal reason why we humans can navigate this terrain and computers cannot?
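To make "decidable yet hard" concrete: Presburger arithmetic is the first-order theory of the natural numbers with addition but no multiplication. It admits quantifier elimination, so every sentence, such as

$$\forall x\,\exists y\;\bigl(x = y + y \;\vee\; x = y + y + 1\bigr)$$

("every number is even or odd"), can in principle be decided mechanically. But Fischer and Rabin showed that any decision procedure for Presburger arithmetic requires doubly exponential time, on the order of $2^{2^{cn}}$ for inputs of length $n$, in the worst case -- so decidability alone does not make proving easy.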

The authors seem to suggest that progress has so far either been on a) situations where brute force is possible or b) situations which have better formal properties (e.g. in their discussion of AlphaGeometry). AlphaProof solved questions in number theory at IMO level. I agree this isn’t exactly what we mathematicians do, but can one convincingly say that IMO number theory problems are qualitatively different to what Michael and I do for our daily bread?

I agree that much of the press coverage is full of hype. I agree that so far no convincing example of AI alone doing research level maths has occurred. I would like there to be a formal reason why this isn’t possible, but I just don’t see it. To quote someone mentioned often on this blog: “I see no obstruction.”


