Should we brace for robots to wipe out humanity? Bryan Caplan doesn’t think so. Photograph: LiliGraphie/Alamy

This economist won every bet he made on the future. Then he tested ChatGPT


Bryan Caplan was skeptical after AI struggled on his midterm exam. But within months, it had aced the test

The economist Bryan Caplan was sure the artificial intelligence baked into ChatGPT wasn’t as smart as it was cracked up to be. The question: could the AI ace his undergraduate class’s 2022 midterm exam?

Caplan, of George Mason University in Virginia, seemed in a good position to judge. He has made a name for himself by placing bets on a range of newsworthy topics, from Donald Trump’s electoral chances in 2016 to future US college attendance rates. And he nearly always wins, often by betting against predictions he views as hyperbolic.

That was the case with wild claims about ChatGPT, the AI chatbot that’s become a worldwide phenomenon. But in this case, it’s looking like Caplan – a libertarian professor whose arguments range from calls for open borders to criticism of feminist thinking – will lose his bet.

After the original ChatGPT got a D on his test, he wagered that “no AI would be able to get A’s on 5 out of 6 of my exams by January of 2029”. But, “to my surprise and no small dismay”, he wrote on his blog, the new version of the system, GPT-4, got an A just a few months later, scoring 73/100, which, had it been a student, would have been the fourth-highest score in the class. Given the stunning speed of improvement, Caplan says his odds of winning are looking slim.

So is the hype justified this time? The Guardian spoke to Caplan about what the future of AI might look like and how he became an avid bettor.

The conversation has been edited and condensed for clarity.

You bet that no AI could get A’s on five out of six of your exams by January 2029 – and now one has. How much did you bet?

I tried for 500 bucks. I think it’s a reasonable forecast that I will lose the bet at this point. I’m just hoping to get lucky.

So what do you think this means for the future of AI? Should we be excited or worried or both?

I would say excited, overall. All progress is bad for somebody. Vaccines are bad for funeral homes. The general rule is that anything that increases human production is good for human living standards. Some people lose, but if you were to go and say we only want progress that benefits everyone, then there could be no progress.

I do have another AI bet with Eliezer Yudkowsky – he is the foremost and probably most extreme AI pessimist, in the sense that he thinks it’s going to work and then it’s going to wipe us out. So I have a bet with him on whether, due to AI, we will be wiped off the surface of the Earth by 1 January 2030. And if you’re wondering how you could possibly have a bet like that, when you’re one of the people who’s going to be wiped out – the answer is I just prepaid him. I gave him the money up front, and then if the world doesn’t end, he owes me.

Bryan Caplan was sure the AI in ChatGPT was not as smart as it was made out to be. Photograph: Evelyn Hockstein/Polaris

How could we theoretically be wiped out?

What I consider a bizarre argument [more broadly] is that once the AI becomes intelligent enough to increase its own intelligence, then it will go into infinite intelligence in an instant and that will be it for us. [That view is endorsed by] very smart, very articulate people. They don’t come off as crazy, but I just think that they are.

They have sort of talked themselves into a corner. You start with this definition of: imagine there’s an infinitely intelligent AI. How can we stop it from doing whatever it wanted? Well, once you just put it that way, we couldn’t. But why should you think that this thing will exist? Nothing else has ever been infinite. Why would there be any infinite thing ever?

What goes into your thinking when you decide: is this worth a wager?

The kind of bets that pique my interest are ones where someone just seems to be making hyperbolic exaggerated claims, pretending to have way more confidence about the future than I think they could possibly have. So far, it’s served me perfectly. I’ve had 23 bets that have come to fruition; I’ve won all 23.

I had multiple other cases of people telling me how great AI was, and then I checked for myself and they were clearly greatly exaggerating. So I just figured the exaggeration was ongoing – and sometimes you’re wrong. Sometimes someone says something that seems ridiculously overstated, and it turns out to be exactly the way they say.

In other words, you tend to reject the most dramatic possible outcomes.

I’m almost always betting against drama. It appeals to the human psyche to say exciting things, and my view is that the world usually isn’t that exciting, actually. The world usually continues being basically the way it was. “The best predictor of the future is the past” is an adage I find so wise as to be undeniable. If someone doesn’t take it seriously, I have trouble taking them seriously.

So if you do lose the AI bet, is that an indicator that the hyperbole is justified?

I think it shows for this particular case that GPT-4 advanced way more quickly than I expected. I think that means that the economic effects will be a lot bigger sooner than I expected. Since I was expecting very little effect, it could be 10 times as big as I thought it would be and still not be huge. But definitely on this issue, I’ve rethought my view.

The only story that I could think of that would redeem my original skepticism would be if they just added my blogpost to the training data, and then were pretty much just spitting back my own answers at me. But here’s the thing: I actually have a new post where I gave GPT-4 a totally new test I never discussed on the internet, and it got the high score, so I think it’s genuine.

And what happens next?

There is a general rule that even when a technology seems awesome, it usually takes a lot longer to have big economic effects than you would expect.

The first phones appeared in the 1870s; it took about 80 years before the technology was even giving us reliable phone calls to Europe. Electricity took several decades to reach widespread adoption, and the internet also seemed to take longer than it should have.

I remember several years when backspace didn’t work on email. I don’t know how old you are, but I remember when you couldn’t backspace in an email, and it went on like that for years. You might think that would get solved in three minutes. But whenever human beings are involved in adopting a technology, there’s just a bunch of different problems, different snags. So if GPT really does transform the economy in a few years, I would still consider that pretty amazing. It’s almost unprecedented.
