Epistemic status: Confused jumble of thoughts not really ordered.
Recently, I attended EA Global. It was lots of fun—I chatted with tons of interesting people and met some of my favorite writers. I met Matthew Barnett, writer of an excellent Substack; Scott Alexander, the smartest person on the internet; Natália Coelho Mendonça, who wrote a very decisive reply to slimemoldtimemold’s obesity thesis; Eliezer Yudkowsky, prophet of AI risk and author of enormous numbers of LessWrong pieces; Toby Ord, of Precipice fame, who was one of the founders of EA; and many others. In most settings, if you present a moral argument for some weird-sounding conclusion, people will respond by pointing out that it sounds weird, and then not change their minds. But at EA Global, people were willing to follow the logic where it leads—to endorse really weird-sounding things. It was a lot of fun.
I spent a lot of time talking with Rob Bensinger, who thinks there is about a 99.5% chance that AI will kill us all. Rob was actually quite smart and interesting. When one hangs around in EA circles, one finds lots of people who hold very confident views on controversial philosophical topics—generally anti-realism, physicalism, and so on—based on flimsy justification. But this doesn’t apply to Rob—he’s the real deal, and was willing to accept the crazy implications of his type A physicalism.
I had a far less fruitful and far more frustrating conversation with Eliezer Yudkowsky. Having had it, I think Richard was spot on in his article arguing with Eliezer, when he said “But it was also frustrating in some respects, since he seemed to assume that any disagreement was simply due to a failure to appreciate his basic arguments, rather than a considered judgment that they aren't wholly compelling.” Yudkowsky, while pretty smart, is just demonstrably confused about both philosophy of mind and the intersection between philosophy of mind and philosophy of language.
In my brief chat with Eliezer, he just seemed to assume I was a total idiot who had no idea what he was saying when it comes to philosophy of mind. Now, there are some people who get to be snarky and dismissive. Dustin Crummett gets to be snarky and dismissive when people raise the evolution objection to psychophysical harmony, since that objection is utterly confused. But sorry, you don’t get to act like everyone else is an idiot when you are butchering two distinct areas of philosophy in elementary ways. If getting out of the zombie argument were as simple as raising an argument against epiphenomenalism, it wouldn’t have moved anyone. Eliezer, in frustration, terminated the conversation with me after a few minutes.
All in all, I don’t think this EA Global changed my thinking as much as the last one did. The last EA Global I attended made me more seriously consider working in government or doing research outside of academia. But this one didn’t shift my career plans very much.
Still, it was a lot of fun. I spent a lot of the weekend chatting with the brilliant Dustin Crummett—who is, of all the Christians on earth, perhaps the one who is most nearly right about things. He’s working on insect welfare.
The case for insect welfare is quite compelling. Many trillions of insects are being farmed. We don’t really know if they’re conscious—but a pretty thorough report concluded that, on average, their suffering is roughly 1.3% as intense as that of a human in a comparable circumstance. If we’re not sure, we shouldn’t mistreat something like 30 trillion of them.
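To make the scale concrete, here is a rough back-of-the-envelope sketch using the figures above, plus an illustrative guess at the probability that insects are conscious at all; the 30% below is purely an assumption for the sake of the sketch, not a number from the report.

```python
# Rough expected-scale sketch; the assumed numbers are labeled as such.
insects_farmed = 30e12    # ~30 trillion farmed insects (figure used above)
welfare_weight = 0.013    # suffering ~1.3% as intense as a human's (per the report)
p_conscious = 0.30        # assumption for illustration: 30% chance they're conscious at all

# Expected suffering from mistreating all of them, in "human-equivalents"
expected_human_equivalents = insects_farmed * welfare_weight * p_conscious
print(f"{expected_human_equivalents:,.0f}")  # 117,000,000,000
```

Even under much more pessimistic assumptions about insect consciousness, the expected figure stays enormous, which is the whole point of the “if we’re not sure” argument.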
The insects that are farmed are mostly black soldier flies, and they are treated horribly. They’re often microwaved, for example, or literally just not fed until they starve to death.
Now, one might think that eating the bugs is better than eating factory farmed animals. But it turns out that they don’t feed the bugs to people. People do not want to eat trillions of bugs. Instead, they’re mostly feeding the bugs to farmed fish and then feeding the fish to the people. So the insect farming industry is, more broadly, feeding into the devastating factory farming industry.
So it seems that EAs—like right wing nutters—should staunchly say: we will not eat the bugs!
Being at EA events is always a delight. One is surrounded by delightful, smart, benevolent, and open-minded people. There’s something incredibly inspiring about being in a room full of people dedicating their lives to doing good as effectively as possible.
One pretty jarring thing about EA Global, though, was the sheer number of people who thought we were all going to die.
Lots of people had very short timelines and thought that, if we get AI, we will all likely die. So there were lots of people who thought they would not live to 80 because of AI. My p(doom), the odds I give to AI ending the world, is in the low single digits, maybe 5%, so I’m not existentially terrified all the time. But among a lot of the people there, there was an atmosphere of doom and foom and gloom: a sense that we may be the final generation, that despite all of our best efforts humanity would not survive, that we would be turned into paperclips. It was a bit depressing.
At EA events, I spend lots of time arguing about zombies and moral realism with people who would not know what value or qualia were if they hit them in the face. Some people, like Matthew Barnett and Natália Coelho Mendonça, were actually quite smart and informed about the topics. I’ve joked that type A physicalists fall into two categories—zombie and confused person—and I think they were both in the zombie category. But there is a large contingent of rationalists who just repeat strange rationalist slogans rather than respond to the compelling arguments for moral realism and dualism provided by people like Huemer and Chalmers. Sorry, but it turns out that repeating that consciousness is what an algorithm feels like from the inside does nothing to solve the hard problem of consciousness; it merely hides the lack of a solution behind a catchy slogan.
One question that I got a few times struck me as a bit confused. It was roughly the following: “Say I believe in those moral facts. Why should I care about them?” But this just seems totally confused. The moral facts are, by definition, the things that you should care about. This is like asking why water is H2O: it just is, by definition. How we know they are the same thing is a different question, but if you know that something is the thing you ought to do, no extra information is needed to figure out why you ought to do it. Anti-realists may object to the idea that values can be irrational, but that is, in fact, what realists believe. Given this, the reason you should do what’s morally best is that it’s what you genuinely, really ought to do, independently of your desires.
I also had a fun chat with Scott Alexander. Scott is probably my favorite writer, and many of my favorite articles are written by Scott. We chatted a bit about hedonic utilitarianism vs preference-based utilitarianism. I pointed out the various objections to preference utilitarianism that I’ve given before. I didn’t get through all of them.
His reply was roughly: “Yeah, all of that sounds like ways we need to tweak the theory, but there are plausible ways around that, and also you want to tile the universe with rats on heroin, so my theory is still better. If your theory terminates in everyone killing you because you are trying to convert them to hedonium and mine just gets a few weird implications, then mine is still better.” (Hedonium is matter arranged in whatever way is most conducive to pleasure.)
In reply to this, I suggested that it’s not that clear that heroin rats are that happy—Peter Singer talked about this somewhere, I think in The Point of View of the Universe. Also, I think it’s hard to have clear intuitions about converting the universe to hedonium because we literally can’t imagine what it’s like to be hedonium. And desire theory has so many problems that we really should abandon it, and maybe take up objective list theory if we’re so down on hedonism.
He didn’t like objective list theory, because he thought that it was impossible to spell out the things on the objective list. This doesn’t seem right—of course it will be practically difficult, but there’s nothing impossible in theory about designing an objective list. The problems with objective list theory are substantive, not procedural.
We didn’t chat for that long, but all in all, it was thoroughly enjoyable. There’s something quite cool about meeting and chatting in person with someone you’ve spent hundreds or thousands of hours reading. Just to give a few more responses to the “you want to convert the universe to hedonium” objection:
I think that any even remotely plausible view will hold that turning the universe into hedonium is good. Even if we’re preference utilitarians, we should still think that it is good when people are happy; after all, they generally prefer being happy. Thus, even if the best possible world is not one in which people are maximally happy, that world is still a pretty good one. So this worry applies to all remotely plausible views. The criticism is obviously unpersuasive if it amounts to “we all want to convert the universe to hedonium, but you want to do it slightly more.”
Additionally, I think that desire theory has a similar problem. It ends up getting the result that we should replace the universe with beings that just have strong desires for the world as it is, and make as many of them as efficiently as possible. That’s not any better. And it’s especially implausible that a world where everyone is horrifically miserable but gets what they want is a good one.
Second, I just don’t think we have very clear intuitions about converting the universe to hedonium. When you read, for example, Bostrom’s Letter from Utopia, it sounds pretty appealing. And we’re also probably afflicted by various biases: speciesism (hedonium isn’t even alive, so it’s hard to sympathize with it), status quo bias, and normalcy heuristics.
To check whether it is status quo bias, it’s helpful to imagine a universe filled with hedonium and ask whether we should convert it back to biological life. Imagine a world full of unimaginable bliss—pure experiences, each tens of thousands of times better than the best experience you’ve ever had. Really try to imagine it. It does not seem like we should diminish that state—unblemished in its perfection—to fill it with Homo sapiens who are much less happy.
I also think that, if you’re convinced by my comments on the utility monster, you should also be convinced here. But I won’t repeat them, for I’ve already discussed them elsewhere.
All in all, EA Global was lots of fun. I met lots of very smart, interesting people doing important work. If you’re on the fence about whether to apply to a future EA Global, I’d highly recommend going for it.
There's another consequentialist path besides hedonism or preference utilitarianism: assigning substantial value to macroexperiences.
The reason mainlining hedonium isn't the most positive life is the same reason why the best music doesn't consist of the best 10-note melody played over and over. The large-scale temporal organization of pleasures and pains creates positive or negative macroexperiences that aren't simple sums of their parts.
This is great, thanks so much for sharing. I was hoping to attend but I was dealing with health issues all of January and put off the application until the last minute. I didn't realize the application deadline was in the middle of the day, and so I missed it, thinking it was at midnight.
I was considering making a trip to the Bay Area anyway but ultimately decided not to. I hope to attend EA Global London and EA Global Boston in October.
Keep up with the criticisms of Yudkowsky. The dude has a bit of a God complex and a lot of people seem afraid to criticize him too harshly given his lofty status, which is not a healthy aspect of the community. Vigorous criticism is at the foundation of civilizational progress and helps prevent cult-like dynamics in tight-knit social groups.