More AI debate between me and Steven Pinker!
Several people have complained that Shtetl-Optimized has become too focused on the niche topic of “people being mean to Scott Aaronson on the Internet.” In one sense, this criticism is deeply unfair—did I decide that a shockingly motivated and sophisticated troll should attack me all week, in many cases impersonating fellow academics to do so? Has such a thing happened to you? Did I choose a personality that forces me to respond when it happens?
In another sense, the criticism is of course completely, 100% justified. That’s why I’m happy and grateful to have formed the SOCG (Shtetl-Optimized Committee of Guardians), whose purpose is to prevent a recurrence, thereby letting me get back to your regularly scheduled programming.
On that note, I hope the complainers will be satisfied with more exclusive-to-Shtetl-Optimized content from one of the world’s greatest living public intellectuals: the Johnstone Family Professor of Psychology at Harvard University, Steven Pinker.
Last month, you’ll recall, Steve and I debated the implications of scaling AI models such as GPT-3 and DALL-E. A main crux of disagreement turned out to be whether there’s any coherent concept of “superintelligence.” I gave a qualified “yes” (I can’t provide necessary and sufficient conditions for it, nor do I know when AI will achieve it if ever, but there are certainly things an AI could do that would cause me to say it was achieved). Steve, by contrast, gave a strong “no.”
My friend (and previous Shtetl-Optimized guest blogger) Sarah Constantin then wrote a thoughtful response to Steve, taking a different tack than I had. Sarah emphasized that Steve himself is on record defending the statistical validity of Spearman’s g: the “general factor of human intelligence,” which accounts for a large fraction of the variation in humans’ performance across nearly every intelligence test ever devised, and which is also found to correlate with cortical thickness and other physiological traits. Is it so unreasonable, then, to suppose that g is measuring something of abstract significance, such that it would continue to make sense when extrapolated, not to godlike infinity, but at any rate, well beyond the maximum that happens to have been seen in humans?
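As a purely illustrative aside (mine, not Sarah's or Steve's): the statistical claim here is just that, when scores on many different subtests are positively intercorrelated, their first principal component soaks up a large share of the total variance. Here's a toy simulation of that pattern, with made-up loadings and sample sizes rather than any real psychometric data:

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_tests = 1000, 6

# Each simulated person has one latent "general ability"; each subtest
# (vocabulary, digit span, etc.) mixes that latent factor with independent noise.
general = rng.normal(size=n_people)
loadings = np.full(n_tests, 0.7)              # how strongly each subtest taps the factor
noise = rng.normal(size=(n_people, n_tests))
scores = general[:, None] * loadings + noise * np.sqrt(1 - loadings**2)

# The "g"-like quantity is just the first principal component of the
# subtest correlation matrix; its eigenvalue share is the fraction of
# total variance it accounts for.
corr = np.corrcoef(scores, rowvar=False)
eigvals = np.linalg.eigvalsh(corr)[::-1]      # eigenvalues, largest first
print("variance explained by first component:", eigvals[0] / eigvals.sum())
```

With six subtests each loading 0.7 on the latent factor, the first component accounts for well over half the total variance in this toy setup, which is the purely statistical sense in which g is "one-dimensional."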
I relayed Sarah’s question to Steve. (As it happens, the same question was also discussed at length in, e.g., Shane Legg’s 2008 PhD thesis; Legg then went on to cofound DeepMind.) Steve was then gracious enough to write the following answer, and to give me permission to post it here. I’ll also share my reply to him. There’s some further back-and-forth between me and Steve that I’ll save for the comments section to kick things off there. Everyone is warmly welcome to join: just remember to stay on topic, be respectful, and click the link in the verification email!
Without further ado:
Comments on General, Artificial, and Super-Intelligence
by Steven Pinker
While I defend the existence and utility of IQ and its principal component, general intelligence or g, in the study of individual differences, I think it’s completely irrelevant to AI, AI scaling, and AI safety. It’s a measure of differences among humans within the restricted range they occupy, developed more than a century ago. It’s a statistical construct with no theoretical foundation, and it has tenuous connections to any mechanistic understanding of cognition other than as an omnibus measure of processing efficiency (speed of neural transmission, amount of neural tissue, and so on). It exists as a coherent variable only because performance scores on subtests like vocabulary, digit string memorization, and factual knowledge intercorrelate, yielding a statistical principal component, probably a global measure of neural fitness.
In that regard, it’s like a Consumer Reports global rating of cars, or overall score in the pentathlon. It would not be surprising that a car with a more powerful engine also had a better suspension and sound system, or that better swimmers are also, on average, better fencers and shooters. But this tells us precisely nothing about how engines or human bodies work. And imagining an extrapolation to a supervehicle or a superathlete is an exercise in fantasy but not a means to develop new technologies.
Indeed, if “superintelligence” consists of sky-high IQ scores, it’s been here since the 1970s! A few lines of code could recall digit strings or match digits to symbols orders of magnitude better than any human, and old-fashioned AI programs could also trounce us in multiple-choice vocabulary tests, geometric shape extrapolation (“progressive matrices”), analogies, and other IQ test components. None of this will help drive autonomous vehicles, discover cures for cancer, and so on.
As for recent breakthroughs in AI which may or may not surpass humans (the original prompt for this exchange): what is the IQ of GPT-3, or DALL-E, or AlphaGo? The question makes no sense!
So, to answer your question: yes, general intelligence in the psychometrician’s sense is not something that can be usefully extrapolated. And it’s “one-dimensional” only in the sense that a single statistical principal component can always be extracted from a set of intercorrelated variables.
One more point relevant to the general drift of the comments. My statement that “superintelligence” is incoherent is not a semantic quibble that the word is meaningless, and it’s not a pre-emptive strategy of Moving the True Scottish Goalposts. Sure, you could define “superintelligence,” just as you can define “miracle” or “perpetual motion machine” or “square circle.” And you could even recognize it if you ever saw it. But that does not make it coherent in the sense of being physically realizable.
If you’ll forgive me one more analogy, I think “superintelligence” is like “superpower.” Anyone can define “superpower” as “flight, superhuman strength, X-ray vision, heat vision, cold breath, super-speed, enhanced hearing, and nigh-invulnerability.” Anyone could imagine it, and recognize it when he or she sees it. But that does not mean that there exists a highly advanced physiology called “superpower” that is possessed by refugees from Krypton! It does not mean that anabolic steroids, because they increase speed and strength, can be “scaled” to yield superpowers. And a skeptic who makes these points is not quibbling over the meaning of the word superpower, nor would he or she balk at applying the word upon meeting a real-life Superman. The point is that we almost certainly will never, in fact, meet a real-life Superman. That’s because he’s defined by human imagination, not by an understanding of how things work. We will, of course, encounter machines that are faster than humans, and that see X-rays, that fly, and so on, each exploiting the relevant technology, but “superpower” would be an utterly useless way of understanding them.
To bring it back to productive discussions of AI: there’s plenty of room to analyze the capabilities and limitations of particular intelligent algorithms and data structures—search, pattern-matching, error back-propagation, scripts, multilayer perceptrons, structure-mapping, hidden Markov models, and so on. But melting all these mechanisms into a global variable called “intelligence,” understanding it via turn-of-the-20th-century school tests, and mentally extrapolating it with a comic-book prefix, is, in my view, not a productive way of dealing with the challenges of AI.
Scott’s Response
I wanted to drill down on the following passage:
Sure, you could define “superintelligence,” just as you can define “miracle” or “perpetual motion machine” or “square circle.” And you could even recognize it if you ever saw it. But that does not make it coherent in the sense of being physically realizable.
The way I use the word “coherent,” it basically means “we could recognize it if we saw it.” Clearly, then, there’s a sharp difference between this and “physically realizable,” although any physically-realizable empirical behavior must be coherent. Thus, “miracle” and “perpetual motion machine” are both coherent but presumably not physically realizable. “Square circle,” by contrast, is not even coherent.
You now seem to be saying that “superintelligence,” like “miracle” or “perpetuum mobile,” is coherent (in the “we could recognize it if we saw it” sense) but not physically realizable. If so, then that’s a big departure from what I understood you to be saying before! I thought you were saying that we couldn’t even recognize it.
If you do agree that there’s a quality that we could recognize as “superintelligence” if we saw it—and I don’t mean mere memory or calculation speed, but, let’s say, “the quality of being to John von Neumann in understanding and insight as von Neumann was to an average person”—and if the debate is merely over the physical realizability of that, then the arena shifts back to human evolution. As you know far better than me, the human brain was limited in scale by the width of the birth canal, the need to be mobile, and severe limitations on energy. And it wasn’t optimized for understanding algebraic number theory or anything else with no survival value in the ancestral environment. So why should we think it’s gotten anywhere near the limits of what’s physically realizable in our world?
Not only does the concept of “superpowers” seem coherent to me, but from the perspective of someone a few centuries ago, we arguably have superpowers—the ability to summon any of several billion people onto a handheld video screen at a moment’s notice, etc. etc. You’d probably reply that AI should be thought of the same way: just more tools that will enhance our capabilities, like airplanes or smartphones, not some terrifying science-fiction fantasy.
What I keep saying is this: we have the luxury of regarding airplanes and smartphones as “mere tools” only because there remain so many clear examples of tasks we can do that our devices can’t. What happens when the devices can do everything important that we can do, much better than we can? Provided we’re physicalists, I don’t see how we reject such a scenario as “not physically realizable.” So then, are you making an empirical prediction that this scenario, although both coherent and physically realizable, won’t come to pass for thousands of years? Are you saying that it might come to pass much sooner, like maybe this century, but even if so we shouldn’t worry, since a tool that can do everything important better than we can do it is still just a tool?