A grand anticlimax: the New York Times on Scott Alexander
Updates (Feb. 14, 2021): Scott Alexander Siskind responds here.
Last night, it occurred to me that despite how disjointed it feels, the New York Times piece does have a central thesis: namely, that rationalism is a “gateway drug” to dangerous beliefs. And that thesis is 100% correct—insofar as once you teach people that they can think for themselves about issues of consequence, some of them might think bad things. It’s just that many of us judge the benefit worth the risk!
Happy Valentine’s Day everyone!
Back in June, New York Times technology reporter Cade Metz, whom I’d previously known from his reporting on quantum computing, told me that he was writing a story about Scott Alexander, Slate Star Codex, and the rationalist community. Given my position as someone who knew the rationalist community without ever really being part of it, Cade wondered whether I’d talk with him. I said I’d be delighted to.
I spent many hours with Cade, taking his calls and emails morning or night, at the playground with my kids or wherever else I was, answering his questions, giving context for his other interviews, suggesting people in the rationalist community for him to talk to, in exactly the same way I might suggest colleagues for a quantum computing story. And then I spent just as much time urging those people to talk to Cade. (“How could you possibly not want to talk? It’s the New York Times!”) Some of the people I suggested agreed to talk; others refused; a few were livid at me for giving a New York Times reporter their email addresses without asking them. (I apologized; lesson learned.)
What happened next is already the stuff of Internet history: the NYT’s threat to publish Scott’s real surname; Scott deleting his blog as a way to preempt that ‘doxing’; 8,000 people, including me, signing a petition urging the NYT to respect Scott’s wish to keep his professional and blog identities separate; Scott resigning from his psychiatry clinic and starting his own low-cost practice, Lorien Psychiatry; his moving his blog, like so many other writers this year, to Substack; then, a few weeks ago, his triumphant return to blogging under his real name of Scott Siskind. All this against the backdrop of an 8-month period that was world-changingly historic in so many other ways: the failed violent insurrection against the United States and the ouster, by democratic means, of the president who incited it; the tragedy of covid and the long-delayed start of the vaccination campaign; the BLM protests; the well-publicized upheavals at the NYT itself, including firings for ideological lapses that would’ve made little sense to our remote ancestors of ~2010.
And now, as an awkward coda, the New York Times article itself is finally out (non-paywalled version here).
It could’ve been worse. I doubt it will do lasting harm. Of the many choices I disagreed with, I don’t know which were Cade’s and which his editors’. But no, I was not happy with it. If you want a feature-length, pop condensation of the rationalist community and its ideas, I preferred this summer’s New Yorker article (but much better still is the book by Tom Chivers).
The trouble with the NYT piece is not that it makes any false statements, but just that it constantly insinuates nefarious beliefs and motives, via strategic word choices and omission of relevant facts that change the emotional coloration of the facts that it does present. I repeatedly muttered to myself, as I read: “dude, you could make anything sound shady with this exact same rhetorical toolkit!”
Without further ado, here’s a partial list of my issues:
- The piece includes the following ominous sentence: “But in late June of last year, when I approached Siskind to discuss the blog, it vanished.” This framing, it seems to me, would be appropriate for some conman trying to evade accountability without ever explaining himself. It doesn’t make much sense for a practicing psychiatrist who took the dramatic step of deleting his blog in order to preserve his relationship with his patients—thereby complying with an ethical code that’s universal among psychiatrists, even if slightly strange to the rest of us—and who immediately explained his reasoning to the entire world. In the latter framing, of course, Scott comes across less like a fugitive on the run and more like an innocent victim of a newspaper’s editorial obstinacy.
- As expected, the piece devotes enormous space to the idea of rationalism as an on-ramp to alt-right extremism. The trouble is, it never presents the idea that rationalism can also be an off-ramp from extremism—i.e., that it can provide a model for how even after you realize that mainstream sources are confidently wrong on some issue, you don’t respond by embracing conspiracy theories and hatreds, you respond by simply thinking carefully about each individual question rather than buying a worldview wholesale from anyone. Nor does the NYT piece mention how Scott, precisely because he gives right-wing views more charity than some of us might feel they deserve, actually succeeded in dissuading some of his readers from voting for Trump—which is more success than I can probably claim in that department! I had many conversations with Cade about these angles that are nowhere reflected in the piece.
- The piece gets off on a weird foot, by describing the rationalists as “a group that aimed to re-examine the world through cold and careful thought.” Why “cold”? Like, let’s back up a few steps: what is even the connection in the popular imagination between rationality and “coldness”? To me, as to many others, the humor, humanity, and warmth of Scott’s writing were always among its most notable features.
- The piece makes liberal use of scare quotes. Most amusingly, it puts scare quotes around the phrase “Bayesian reasoning”!
- The piece never mentions that many rationalists (Zvi Mowshowitz, Jacob Falkovich, Kelsey Piper…) were right about the risk of covid-19 in early 2020, and then again right about masks, aerosol transmission, faster-spreading variants, the need to get vaccines into arms faster, and many other subsidiary issues, even while public health authorities and the mainstream press struggled for months to reach the same obvious (at least in retrospect) conclusions. This omission is significant because Cade told me, in June, that the rationalist community’s early rightness about covid was part of what led him to want to write the piece in the first place (!). If readers knew about that clear success, would it put a different spin on the rationalists’ weird, cultlike obsession with “Bayesian reasoning” and “consequentialist ethics” (whatever those are), or their nerdy, idiosyncratic worries about the more remote future?
- The piece contains the following striking sentence: “On the internet, many in Silicon Valley believe, everyone has the right not only to say what they want but to say it anonymously.” Well, yes, except this framing makes it sound like this is a fringe belief of some radical Silicon Valley tribe, rather than just the standard expectation of most of the billions of people who’ve used the Internet for most of its half-century of existence.
- Despite thousands of words about the content of SSC, the piece never gives Scott a few uninterrupted sentences in his own voice, to convey his style. This is something the New Yorker piece did do, and it would have helped readers better understand the wit, humor, charity, and self-doubt that made SSC so popular. To see what I mean, read the NYT’s radically-abridged quotations from Scott’s now-classic riff on the Red, Blue, and Gray Tribes and decide for yourself whether they capture the spirit of the original (alright, I’ll quote the relevant passage myself at the bottom of this post). Scott has the property, shared by many of my favorite writers, that if you just properly quote him, the words leap off the page, wriggling free from the grasp of any bracketing explanations and making a direct run for the reader’s brain. All the more reason to quote him!
- The piece describes SSC as “astoundingly verbose.” A more neutral way to put it would be that Scott has produced a vast quantity of intellectual output. When I finish a Scott Alexander piece, only in a minority of cases do I feel like he spent more words examining a problem than its complexities really warranted. Just as often, I’m left wanting more.
- The piece says that Scott once “aligned himself” with Charles Murray, then goes on to note Murray’s explosive views about race and IQ. That might be fair enough, were it also mentioned that the positions ascribed to Murray that Scott endorses in the relevant post—namely, “hereditarian leftism” and universal basic income—are not only unrelated to race but are actually progressive positions.
- The piece says that Scott once had neoreactionary thinker Nick Land on his blogroll. Again, important context is missing: this was back when Land was mainly known for his strange writings on AI and philosophy, before his neoreactionary turn.
- The piece says that Scott compared “some feminists” to Voldemort. It didn’t explain what it took for certain specific feminists (like Amanda Marcotte) to prompt that comparison, which might have changed the coloration. (Another thing that would’ve complicated the picture: the rationalist community’s legendary openness to alternative gender identities and sexualities, before such openness became mainstream.)
- Speaking of feminists—yeah, I’m a minor part of the article. One of the few things mentioned about me is that I’ve stayed in a rationalist group house. (If you must know: for like two nights, when I was in the Bay Area, with my wife and kids. We appreciated the hospitality!) The piece also says that I was “turned off by the more rigid and contrarian beliefs of the Rationalists.” It’s true that I’ve disagreed with many beliefs espoused by rationalists, but not because they were contrarian, or because I found them noticeably more “rigid” than most beliefs—only because I thought they were mistaken!
- The piece describes Eliezer Yudkowsky as a “polemicist and self-described AI researcher.” It’s true that Eliezer opines about AI despite a lack of conventional credentials in that field, and it’s also true that the typical NYT reader might find him to be comically self-aggrandizing. But had the piece mentioned the universally recognized AI experts, like Stuart Russell, who credit Yudkowsky for a central role in the AI safety movement, wouldn’t that have changed what readers perceived as the take-home message?
- The piece says the following about Shane Legg and Demis Hassabis, the founders of DeepMind: “Like the Rationalists, they believed that AI could end up turning against humanity, and because they held this belief, they felt they were among the only ones who were prepared to build it in a safe way.” This strikes me as a brilliant way to reframe a concern around AI safety as something vaguely sinister. Imagine if the following framing had been chosen instead: “Amid Silicon Valley’s mad rush to invest in AI, here are the voices urging that it be done safely and in accord with human welfare…”
Reading this article, some will say that they told me so, or even that I was played for a fool. And yet I confess that, even with hindsight, I have no idea what I should have done differently, how it would’ve improved the outcome, or what I will do differently the next time. Was there some better, savvier way for me to help out? For each of the 14 points listed above, were I ever tempted to bang my head and say, “dammit, I wish I’d told Cade X, so his story could’ve reflected that perspective”—well, the truth of the matter is that I did tell him X! It’s just that I don’t get to decide which X’s make the final cut, or which ideological filter they’re passed through first.
On reflection, then, I’ll continue to talk to journalists, whenever I have time, whenever I think I might know something that might improve their story. I’ll continue to rank bend-over-backwards openness and honesty among my most fundamental values. Hell, I’d even talk to Cade for a future story, assuming he’ll talk to me after all the disagreements I’ve aired here! [Update: commenters’ counterarguments caused me to change my stance on this; see here.]
For one thing that became apparent from this saga is that I do have a deep difference with the rationalists, one that will likely prevent me from ever truly joining them. Yes, there might be true and important things that one can’t say without risking one’s livelihood. At least, there were in every other time and culture, so it would be shocking if Western culture circa 2021 were the lone exception. But unlike the rationalists, I don’t feel the urge to form walled gardens in which to say those things anyway. I simply accept that, in the age of instantaneous communication, there are no walled gardens: anything you say to a dozen or more people, you might as well broadcast to the planet. Sure, we all have things we say only in the privacy of our homes or to a few friends—a privilege that I expect even the most orthodox would like to preserve, at any rate for themselves. Beyond that, though, my impulse has always been to look for non-obvious truths that can be shared openly, and that might light little candles of understanding in one or two minds—and then to shout those truths from the rooftops under my own name, and learn what I can from whatever sounds come in reply.
So I’m thrilled that Scott Alexander Siskind has now rearranged his life to have the same privilege. Whatever its intentions, I hope today’s New York Times article draws tens of thousands of curious new readers to Scott’s new-yet-old blog, Astral Codex Ten, so they can see for themselves what I and so many others saw in it. I hope Scott continues blogging for decades. And whatever obscene amount of money Substack is now paying Scott, I hope they’ll soon be paying him even more.
Alright, now for the promised quote, from I Can Tolerate Anything Except the Outgroup.
The Red Tribe is most classically typified by conservative political beliefs, strong evangelical religious beliefs, creationism, opposing gay marriage, owning guns, eating steak, drinking Coca-Cola, driving SUVs, watching lots of TV, enjoying American football, getting conspicuously upset about terrorists and commies, marrying early, divorcing early, shouting “USA IS NUMBER ONE!!!”, and listening to country music.
The Blue Tribe is most classically typified by liberal political beliefs, vague agnosticism, supporting gay rights, thinking guns are barbaric, eating arugula, drinking fancy bottled water, driving Priuses, reading lots of books, being highly educated, mocking American football, feeling vaguely like they should like soccer but never really being able to get into it, getting conspicuously upset about sexists and bigots, marrying later, constantly pointing out how much more civilized European countries are than America, and listening to “everything except country”.
(There is a partly-formed attempt to spin off a Grey Tribe typified by libertarian political beliefs, Dawkins-style atheism, vague annoyance that the question of gay rights even comes up, eating paleo, drinking Soylent, calling in rides on Uber, reading lots of blogs, calling American football “sportsball”, getting conspicuously upset about the War on Drugs and the NSA, and listening to filk – but for our current purposes this is a distraction and they can safely be considered part of the Blue Tribe most of the time)
… Even in something as seemingly politically uncharged as going to California Pizza Kitchen or Sushi House for dinner, I’m restricting myself to the set of people who like cute artisanal pizzas or sophisticated foreign foods, which are classically Blue Tribe characteristics.