5 questions for Scott Aaronson

BREAKING: Minutes before this newsletter was set to publish, OpenAI announced that Sam Altman is out as the company’s CEO and will be replaced by chief technology officer Mira Murati. In a blog post, the company said “Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI.”

Hello, and welcome to this week’s installment of The Future in Five Questions. This week I interviewed Scott Aaronson, a theoretical computer scientist, blogger, and director of the University of Texas at Austin’s Quantum Information Center. Aaronson is also the author of “Quantum Computing Since Democritus,” a self-described “candidate for the weirdest book ever to be published by Cambridge University Press,” and is currently a guest researcher at OpenAI focusing on AI safety. We discussed what he sees as the limitations of quantum computing, the inevitable influence of Isaac Asimov, and the continuing need for federal research funding.

This conversation has been edited and condensed for clarity:

What’s one underrated big idea?

One thing I think is underrated in AI is trying to interface language models with proof-checking software, so that you might be able to build agents that are much more able to reason, or even to do original research and math.

The main problem is that language models are full of hallucinations. What you want is for the model to be able to check its ideas by converting them into code, and to check that code using existing proof verification packages. Then if something fails verification, you backtrack and change your idea. I think that combination will be enormously powerful, and will actually make generative AI maybe start threatening my job. We’re not there yet. No one has gotten it to work; a few people have tried, but given the importance of it, I’m a little surprised that everyone’s not trying.
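The loop Aaronson describes (generate a candidate, mechanically verify it, backtrack on failure) can be sketched in a few lines of Python. This is a toy illustration, not anyone’s real system: the “model” here is a hard-coded sequence of claims, and the “verifier” is a trivial arithmetic check standing in for a real proof checker such as Lean or Coq.

```python
def propose_candidates():
    # Stand-in for a language model: yields candidate claims,
    # some of them wrong (hallucinations), one of them correct.
    yield "2 + 2 == 5"   # hallucination
    yield "3 * 3 == 8"   # hallucination
    yield "2 + 2 == 4"   # correct

def verify(candidate: str) -> bool:
    # Stand-in for a proof checker: mechanically evaluates the claim.
    # (A real system would hand a formalized proof to Lean, Coq, etc.)
    return bool(eval(candidate, {"__builtins__": {}}))

def solve():
    for candidate in propose_candidates():
        if verify(candidate):
            return candidate  # only a verified claim survives
        # verification failed: backtrack and try the next idea
    return None

print(solve())  # → 2 + 2 == 4
```

The key design point is that the model never gets the final word: every idea must pass an external, mechanical check before it counts, which is what filters out hallucinations.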

What’s a technology that you think is overhyped?

The idea that quantum computing is going to revolutionize traditional areas of computer science, like machine learning and optimization, and that it’s going to do that soon. There are billions of dollars floating around based on that idea, and anyone who actually does technical work in quantum computing and who doesn’t have a conflict of interest could probably explain to you in five minutes that this is not really based on any quantum algorithms that we actually know.

I remain tremendously excited about the prospect of building quantum computers. I would be excited even if it had no applications, just to find out whether nature has this computational capacity at all. It’s one of the most basic questions that you could possibly ask about physics, and the most stringent test of quantum mechanics that we will possibly ever see.

Based on the quantum algorithms that we’ve actually discovered over the past 30 years, we know how to get at most modest speed-ups for these problems [to which current quantum computers are applied]. When the speed-ups are modest, because of the huge overhead of running a quantum computer at all, it doesn’t become a net win until much, much further into the future. Of course, people are free to hope for that, and they should do research, and they should try to learn the truth of the matter, but in the meantime I think presenting to the public or to investors that we know how to get big quantum speed-ups for machine learning and optimization in the near future is fundamentally dishonest.

What book most shaped your conception of the future?

You would probably have to go back early, because people’s conceptions become more fixed as they get older. I was certainly reading a lot of Isaac Asimov when I was 11 years old, “I, Robot” and his robot novels, and those certainly made a big impression on me. To leap forward in time, just this summer I read a recent book by the philosopher William MacAskill called “What We Owe the Future,” and I found it to be beautifully written, with some of the most careful thinking about the future that I’ve seen in any book.

What could government be doing regarding technology that it isn’t?

Often things are lopsided in this very strange way, where there might be huge amounts of funding for collaborations between universities to develop a particular technology, or sometimes the government is basically giving payouts to companies, or DARPA is trying to build particular technologies and asking for milestones and deliverables.

Meanwhile, the basic research that led to all of that stuff in the first place, you know, often graduate students can’t get fellowships. The much smaller expenses for people doing basic research, somehow you can’t even get small grants to cover those things. Funding has always felt lopsided to me in that way. When I see, for example, the National Quantum Initiative Act giving a billion dollars in new funding for quantum computing, part of me thinks yes, that does seem like a good thing to invest in, but then part of me is fearful that it’s going to go to the sort of entities that are really good at capturing funding and not to the kind of people who thought of quantum computing in the first place in the 1980s and who now want funding to think of the next thing.

The other answer that I would give is politically really difficult, but we could be giving the best scientists in the world visas. Within the research community that’s just considered a complete no-brainer, like thousands of dollars are lying on the floor not picked up at all. Whenever people from government ask me for advice and they’re framing the question like, “how do we beat China in the race to quantum computing,” I say if you’re serious about that, then just let them come here. A very large fraction of China’s top students in quantum computing, AI and other fields would stay in the U.S. if we allowed them to. Right now we usually don’t, and that’s a huge problem.

What has surprised you the most this year?

That ChatGPT was the thing that suddenly caused the whole world to wake up to the potential and the dangers of AI.

Most people who were paying attention knew that these capabilities were coming online; we were playing around with GPT-3 and so forth. But there was a disconnect, like the cartoon character that runs off a cliff and gets suspended in mid-air. On the one hand, this seemed like the biggest thing in technology, possibly in our lifetimes: we could have some version of the science fiction machine from “Star Trek” that understands natural language and can answer you in natural language. On the other hand, it was just sort of treated as a toy.

It turned out that all it took was for OpenAI to release a version of this that was free and had a nice user interface. That was somehow the event that made it so in the space of about a month or two it went from this thing that my nerdy friends were obsessed with but the rest of the world was laughing at, to suddenly it’s being discussed in White House press briefings.

the law of the ai jungle

A leading AI developer is bashing a proposed carveout for “foundation” models in the European Union’s AI Act.

Speaking to POLITICO’s Vincent Manancourt (paywall), the Canadian computer scientist Yoshua Bengio called the exemption, proposed by Germany and France, “crazy” and said it would lead to the “law of the jungle” in AI.

“We might end up in a world where benign AI systems are heavily regulated in the EU … and the biggest systems that are the most dangerous, the most potentially harmful, are not regulated,” Bengio told Vincent. “We might be in this world where North America will be regulating those frontier AI systems much more strongly than Europe, which I think goes completely against the European values of protecting the public and even the spirit of doing the AI Act in the first place.”

The two countries proposed the exemption this month, over worries that the rules would hamper Europe in its attempts to build a rival to popular systems like ChatGPT.

dak in time

From the retrofuturism department: A blogger recently published a vast repository of scans from the DAK Catalog, a seminal gadget magazine from the 1980s.

Cabel Sasser, a Portland-based video game publisher, spent 10 years collecting issues of the magazine, which at its peak was a $120 million business hawking everything from portable cassette players to something called a “TV communications center,” “the perfect gift for Dad in his office or the whole family to enjoy at home.”

Sasser recounts the history of the magazine, from its competitors and imitators to its slide into obsolescence in the early 1990s, and includes 55 high-resolution scans (!) of complete issues, free to view.

“A friend of mine once remarked that the DAK catalogs, in some way, reminded them of TikTok videos, where people talk at length about products they love and why they love them,” Sasser writes. “The gadget catalog may be no more, but our desire to share incredibly cool things with each other will never die.”


Stay in touch with the whole team: Ben Schreckinger ([email protected]); Derek Robertson ([email protected]); Mohar Chatterjee ([email protected]); Steve Heuser ([email protected]); Nate Robson ([email protected]) and Daniella Cheslow ([email protected]).

If you’ve had this newsletter forwarded to you, you can sign up and read our mission statement at the links provided.