6 Comments

Scary to hear that many CS people don't understand the difference between CS and math! No wonder they are winning.


Hi Michael, this may be a naive comment, but would the math world be impressed if an AI system could produce an accurate algorithm that generated the sequence of all prime numbers? This wouldn't be a brute-force method that simply tested whether each number was divisible (and hence composite rather than prime), but an actual algorithm for predicting the sequence of primes, or what Marcus du Sautoy called "The Music of the Primes". Alternatively, could an AI theorem prover answer the question of whether there are infinitely many twin primes? Simple questions, tough for humans to answer...
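
To make the distinction concrete: below is a minimal Python sketch of the "brute force" approach the question sets aside, generating primes by testing each candidate for divisibility rather than by any deeper formula for the sequence. The function name is illustrative, not from any particular library.

```python
def primes_by_trial_division(limit):
    """Yield every prime up to `limit` by checking divisibility directly."""
    for n in range(2, limit + 1):
        # n is prime iff no d in [2, sqrt(n)] divides it evenly.
        if all(n % d != 0 for d in range(2, int(n ** 0.5) + 1)):
            yield n

print(list(primes_by_trial_division(30)))
# [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```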


(Norm and I were high school classmates in Philadelphia and took mathematics together for four years, if I'm not mistaken; we haven't seen each other since then.)

I try to avoid speculating about such questions because so many issues are involved. If an AI system, without prompting, spontaneously exhibited a comprehensible proof of the twin prime conjecture, or the Goldbach conjecture, or the Riemann hypothesis, then one couldn't help being impressed — and puzzled at the same time. At a different extreme, if the system were given the assignment of proving one of those conjectures and generated 1 trillion characters of code that it claimed to be a proof — a scenario that is often discussed — mathematicians wouldn't know what to do with it. Any number of intermediate situations can be imagined.
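
To illustrate the asymmetry in the trillion-character scenario: the statement of, say, the twin prime conjecture is short enough to formalize in a couple of lines. Here is a minimal Lean 4 sketch, assuming Mathlib; the theorem name is mine, and `sorry` stands in for the proof that nobody, human or machine, has produced.

```lean
import Mathlib

-- Sketch only: the twin prime conjecture says that arbitrarily far out
-- there is a prime p with p + 2 also prime. `sorry` marks the missing proof.
theorem twin_primes_infinitely_many :
    ∀ n : ℕ, ∃ p, n ≤ p ∧ Nat.Prime p ∧ Nat.Prime (p + 2) := by
  sorry
```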

One defect of meetings like The Workshop on which I have been reporting is that no systematic attention has been given to questions like yours. Such attention would require acknowledging that proof is a social phenomenon and drawing the appropriate conclusions. When someone like Tony Wu sets "solving mathematics" as a goal, as reported in yesterday's article by Siobhan Roberts in the NY Times, the aim, as far as I can tell, is to make what you call "the math world" irrelevant or obsolete.


Well said, Michael. As you surmised, I'm one of the new readers who saw you mentioned in the NYT article. AI without all the hype (in every field it enters) would be a breath of fresh air...


Thanks, Michael. Two comments:

> Moreover, “each additional GPU cuts the cost of training significantly,” SB said.

I think this is a misunderstanding of what she said. She meant that having access to the next generation of GPUs cuts the cost of training significantly. This is why people want H100s rather than GPUs that were state of the art three years ago.
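
A back-of-the-envelope sketch of the corrected claim, with purely hypothetical numbers: for a fixed compute budget, training cost scales with dollars per FLOP, so a new GPU generation with better throughput per dollar cuts the bill, whereas adding more GPUs of the same generation mainly changes wall-clock time, not total cost.

```python
# Hypothetical illustration only; the FLOP-per-dollar figures are invented, not SB's.
total_flops = 1e24                # fixed compute budget for training a model

old_gen_flops_per_dollar = 1e17   # hypothetical previous-generation GPU
new_gen_flops_per_dollar = 3e17   # hypothetical next-generation GPU

# Cost = total compute / (compute delivered per dollar).
print(f"old generation: ${total_flops / old_gen_flops_per_dollar:,.0f}")  # $10,000,000
print(f"new generation: ${total_flops / new_gen_flops_per_dollar:,.0f}")  # $3,333,333

# Buying *more* old-generation GPUs reduces wall-clock time but not this total,
# since the cost here depends only on FLOPs per dollar, not on GPU count.
```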

Comment on footnote 3: I think Biderman was referring not to the availability of the paper, but rather to the availability of the model on which the paper is based. This is a very common issue in more applied subjects, where the paper often offers a bare-bones summary of the software without actually making it available. This is the modern equivalent of "forgetting" a crucial ingredient when sharing a recipe with friends!


Thank you, Geordie. I'll revise the footnote.
