Understanding vs. impact: the paradox of how to spend my time
Not long ago William MacAskill, a co-founder of the Effective Altruism movement, visited Austin, where I got to talk with him in person for the first time. I was a fan of his book What We Owe the Future, and found him as thoughtful and eloquent face-to-face as I did on the page. Talking to Will inspired me to write the following short reflection on how I should spend my time, which I'm now sharing in case it's of interest to anyone else.
By inclination and temperament, I simply seek the clearest possible understanding of reality. This has led me to spend time on (for example) the Busy Beaver function and the P versus NP problem and quantum computation and the foundations of quantum mechanics and the black hole information puzzle, and on explaining whatever I’ve understood to others. It’s why I became a professor.
But the understanding I've gained also tells me that I should try to do things that will have a huge positive impact, in what looks like a pivotal and even terrifying time for civilization. It tells me that seeking understanding of the universe, like I've been doing, is probably nowhere close to optimizing any values that I could defend. It's self-indulgent, a few steps above spending my life learning to solve a Rubik's Cube as quickly as possible, but only a few. Basically, it's the most fun way I could make a good living and have a prestigious career, so it's what I ended up doing. I should be skeptical that such a course would coincidentally also maximize the good I can do for humanity.
Instead I should plausibly be figuring out how to make billions of dollars, in cryptocurrency or startups or whatever, and then spending it in a way that saves human civilization, for example by making AGI go well. Or I should be convincing whatever billionaires I know to do the same. Or executing some other galaxy-brained plan. Even if I were purely selfish, which I hope I'm not, there are still things other than theoretical computer science research that would bring more hedonistic pleasure. I've basically just followed the path of least resistance.
On the other hand, I don't know how to make billions of dollars. I don't know how to make AGI go well. I don't know how to influence Elon Musk or Sam Altman or Peter Thiel or Sergey Brin or Mark Zuckerberg or Marc Andreessen to do good things rather than bad things, even though I've gotten to talk to some of them. Past attempts in this direction by extremely smart and motivated people (those of Eliezer Yudkowsky and Sam Bankman-Fried, for example) have had, err, uneven results, to put it mildly. I don't know why I would succeed where they failed.
Of course, if I had a better understanding of reality, I might know better how to achieve prosocial goals for humanity. Or I might learn why they were actually the wrong goals, and replace them with better ones. But then I'm back to the original goal of understanding reality as clearly as possible, with the corresponding danger that I spend my time learning to solve a Rubik's Cube faster.
