
Maybe you just meant it as a clickbait title, but there is a big difference between even superhuman AGI and the singularity. It's entirely possible that we build very intelligent AI and, while it offers a big economic boost, it doesn't lead to the singularity. Indeed, this seems like the likely outcome in my view. It's a question of how much machine intelligence buys you (including how hard it is for the machine to improve its own abilities).

Assuming there aren't crazy unexpected discoveries in complexity theory (P = NP, etc.), there are going to be a lot of problems that are just difficult no matter how smart you are. It's plausible that the kinds of discoveries that yield new drugs, better materials, etc. are very hard from a complexity point of view [1]. If true, then while super-smart computers may be better at them than we are, there is a limit on how much intelligence gets you. No matter how smart our machines get, it could still take longer and longer to make the next big advance, meaning our economy/abilities will never hit a vertical asymptote.

It also means it's probable that an AGI will find it just as hard to improve itself as we find it to improve our ML algorithms. That might still mean very fast advances, but not ones that go to infinity in finite time.
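The distinction between "very fast" and "infinite in finite time" can be made concrete with a toy simulation (my illustration, not the commenter's): if capability grows in proportion to itself, it grows exponentially and stays finite at every finite time; only if each gain makes further gains superlinearly easier (here, a hypothetical dx/dt = x²) does the trajectory hit a vertical asymptote.

```python
# Toy sketch of the difference between fast growth and a finite-time
# singularity. dx/dt = x has solution e**t: huge eventually, finite always.
# dx/dt = x**2 (with x(0) = 1) has solution 1/(1 - t), which diverges at t = 1.

def euler(rate, x0=1.0, t_end=0.999, steps=100_000):
    """Integrate dx/dt = rate(x) with simple Euler steps."""
    dt = t_end / steps
    x = x0
    for _ in range(steps):
        x += rate(x) * dt
    return x

exp_growth = euler(lambda x: x)        # dx/dt = x   -> about e**0.999 ~ 2.7
hyper_growth = euler(lambda x: x * x)  # dx/dt = x^2 -> hundreds, and exploding as t -> 1

print(exp_growth, hyper_growth)
```

Self-improvement that is merely "as hard for the AGI as ML research is for us" looks like the first curve: rapid, compounding, but never vertical.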

[1]: Indeed, these problems look a lot like traditional NP-complete problems, or worse: we know a set of constraints, and we have a large space in which to search for some desirable solution to them.
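A minimal illustration of the footnote's point (my example, not the commenter's): subset-sum is a classic NP-complete "known constraints, large search space" problem, and the naive candidate space doubles with every element added, no matter how clever the searcher.

```python
# Toy NP-complete search: find a subset of nums summing to target.
# Brute force enumerates subsets; the candidate space is 2**n.
from itertools import combinations

def subset_sum(nums, target):
    """Return (first subset of nums summing to target, candidates checked)."""
    checked = 0
    for r in range(len(nums) + 1):
        for combo in combinations(nums, r):
            checked += 1
            if sum(combo) == target:
                return combo, checked
    return None, checked

solution, work = subset_sum([3, 34, 4, 12, 5, 2], 9)
print(solution, work)
# Smarter search prunes, but the space itself grows exponentially:
# adding 20 more elements multiplies the candidate count by 2**20 (~a million).
print(2 ** 6, 2 ** 26)
```

Drug and materials discovery plausibly have this shape: the constraints (binding affinity, stability, toxicity) are known, but the space of candidate structures is combinatorially vast.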


Without even reading to the end, I see you've got it exactly right.

"What I will say is that there is no sign of any effort to develop these italicized capacities on the part of the engineers who promise “human-level mathematical reasoning” in the near future, or “human-level” anything."

Exactly.

As a student of Roger Schank, it's of course easy for me to say, but the current round of AI is fundamentally wrong: its builders don't _respect_ human intelligence, and they think it can be reproduced by doing a lot of stupid things stupidly, as long as it's done a lot and fast. That's ridiculous. Sure, the brain can't not be a computer: it's either a computer or it's magic, and there's no such thing as magic (other than as a metaphor for something that does something kewl really well). But we haven't figured out what it's doing or how it's doing it yet. And what it does is beyond kewl.

That is, the singularity has already happened, and it is us.
