
Surely no coherent ethical systems rely on thinking about impossible buttons that create impossible people, either for or against. We have something a bit like a button that creates people, and in fact living long enough to push it a bunch is our prime evolutionary goal. There's no telling if the people created will be happy, though, and this analog of a button very much affects the persons involved. A thought experiment that starts by skipping over the actual hard parts is on the same level as 'assume a can opener'.

author

"Has happened in the real world" is a very different target than "could possibly happen in the world in a way that is similar-enough to the thought experiment to have any utility whatsoever". Most thought experiments cannot meet this second bar, usually for reasons that escape those using them. The discrepancy between the utility that the user of one apparently assigns to it and its actual utility* is part of what makes them so delightful to lampoon, as in the recent spate of memes about the trolley problem.

And no, just because something is taught in an entry-level undergrad class doesn't mean it's good; perhaps relatedly, yes, there is much pitiful confusion about philosophy in the world.

*The actual utility for developing ethical thinking is negative - training oneself to think about how one should act (in the real world) on things that cannot possibly exist should rather obviously be a problem, but the -how- of how it's a problem seems elusive to most. It's a bit like how NN Taleb hates the mental crutch of the concept of dice when discussing random events: the whole point of dice is that they represent fixed, known probabilities, and training oneself to think about randomness through use of dice will deepen common mental mistakes later when attempting risk management. Similarly, the choices we face in life are not at all like the trolley problem, but the choices protagonists face in various forms of vapid entertainment often are. No, nobody is going to kidnap you and force you to 'play a game'; nobody is going to perform a cartoonishly evil action like tying innocents to train tracks in an attempt to psychologically torture you into accepting responsibility for his own turpitude. Yes, people who don't care about you or about animals will raise millions of them in desperate squalor and torment in order to make a profit, but I digress.


This seems to assume that units of well-being can be created out of nothing indefinitely, but is that the case? Since humans are social animals I don't think solitary confinement in isolated caves will lead to happiness. But if we imagine them all in one large cave then eventually we will run into overcrowding, and adding more people will reduce each person's share of the cave's finite resources. Perhaps we should imagine that there is a finite limit to the units of well-being? Then it would be good to keep creating people until all the units of well-being are accounted for, but not so good once that threshold has been passed and creating more people requires removing units of well-being from the existing people. Is there a minimum level of well-being needed for survival? If so then I suppose there is another threshold where creating just one more person would reduce the entire population to below the minimum survival level. That sounds bad to me.
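One way to make those two thresholds concrete is a toy model in Python. Everything numerical below is invented for illustration; nothing in the post or in this comment fixes these values.

PER_PERSON_CAP = 10.0    # hypothetical most well-being one person can use
CAVE_WELLBEING = 100.0   # hypothetical finite stock of well-being units
SURVIVAL_MINIMUM = 2.0   # hypothetical per-person survival threshold

def per_person_share(population: int) -> float:
    # Each person enjoys either their cap or an equal split of the finite stock.
    return min(PER_PERSON_CAP, CAVE_WELLBEING / population)

# Up to 10 people, a new person costs no one anything; past 10, each new person
# dilutes existing shares; past 50, everyone drops below the survival minimum.
for n in (5, 10, 20, 50, 51):
    share = per_person_share(n)
    print(n, round(share, 2), share >= SURVIVAL_MINIMUM)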

author

The argument just assumes for the sake of argument that people can be happy in caves alone. Maybe they're really zen.


Sorry, I meant units of well-being. I tried to edit to correct that but it doesn't seem to have worked.


The person-affecting restriction is kind of like the doing-allowing distinction or any other deontological principle: if you try to make it really precise, it's gonna have absurd implications. Instead, I think the idea is that we're supposed to sort of "see it from afar" (to use Robin Hanson's phrase), and from that distance, it looks quite appealing.


I think the relevance of this argument depends on the nature of the universe. It might make sense in a finite universe where time is better described as a line rather than a tree. But the premise doesn't make sense under many other plausible descriptions of our universe.

Under the many worlds interpretation of quantum mechanics for example, the people this experiment would create are not mere possibilities; they are inevitable regardless of the actions of the decision maker. Even if the decision maker chooses not to push the buttons, there will inevitably be improbable branches of the universe where the buttons are pushed anyway by accident, or by freak gusts of wind generated by random quantum interactions of air molecules in the room, or by some other mechanism.

Whether new people are created in a probable branch of the universe or an improbable one, they don't really care - they experience being just as real either way. There is no reason to increase the probability of the branches of the universe containing new people, at least not unless that would benefit the already existing people in whatever branch the decision maker finds themself. I think something like the person affecting view is a good fit for a universe where things that can happen do happen.

author

Well, moral theories will be necessarily true, rather than dependent on the laws of physics. I don't understand why the many worlds interpretation has anything to do with whether it's good to create new people. Can you try to lay out your premises explicitly?


Under MWI you can't really ask "should I create a new person?" because if you are ever in a situation where that feels like an option, then the creation of that person is already inevitable in a future branch of the universe. The closest you can get is to ask something like "should I increase the probability that I will experience the branch of the universe that will contain this new person who is going to exist"?

author

This just seems to be a claim that the MWI leads to moral nihilism--our actions not mattering. I don't think that this is true--taking some action will lead to more splitting in accordance with that action being taken. In addition, this would apply to all actions, rather than just creating people, so it means the MWI doesn't give any reason to take any actions, meaning it should be irrelevant to our deliberation. Finally, I think the MWI is probably false.


I would say that most day-to-day morality is going to be the same under MWI or not. For instance, torture is still bad under MWI. If your actions can determine how likely it is that someone experiences torture then it's good to try to minimize those probabilities, even if you can never get them down to zero, MWI or not.

It's only in special cases that morality under MWI seems to conflict with classical morality or suggest moral nihilism. Questions involving creating and destroying people seem to be one of those special cases.

Even if MWI turns out to be false, I think these considerations arguably apply to all four of Max Tegmark's multiverse categories, not just MWI (level III), so maybe they're still worth thinking about.

Unfortunately, all the pop physics I've seen seems to imply that multiverse morality is exactly the same as classical morality, which I think is clearly wrong, and the pop philosophy I've seen either doesn't bother exploring morality in the multiverse at all or jumps to the conclusion that a multiverse implies blanket nihilism, which I also don't think is correct. If anyone has done a deep dive on this somewhere, I hope it surfaces to where I live someday.

author

If the people who understand physics say it doesn't affect morality, you should probably believe them. I think I remember Yudkowsky talking about why MWI doesn't affect what we should do.

If torturing someone increases the probability of them being tortured then wouldn't creating someone also increase the number of people created in the multiverse?


Yudkowsky equivocates a bit in the last few paragraphs here and explicitly mentions implications for creating new people but doesn't really elaborate further: https://www.lesswrong.com/posts/qcYCAxYZT4Xp9iMZY/living-in-many-worlds Like I said, I think morality in the multiverse is mostly the same as classic morality day-to-day. I just think the areas where it diverges are especially interesting.

Doubling the amplitude of the branch where someone gets tortured is experienced by that person as a greater probability of experiencing torture, which is bad. Doubling the amplitude of a branch where a person is going to be created is experienced by that person ...exactly the same as they would have experienced the non-doubled branch. When you flip a coin, thus splitting the universe into a heads branch and a tails branch, do you suddenly feel half as alive in whatever branch you experience? So yes, you can change the amplitude of a branch of the universe where a new person is created, but it doesn't have any effect on that person's wellbeing and doesn't increase the number of distinct people in the multiverse.
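For what it's worth, here's a rough way to state that asymmetry in Python. This is my gloss on the Born rule (real, positive amplitudes for simplicity), not something claimed in the post.

def branch_probability(amp: float, other_amp: float) -> float:
    # Born rule: a branch's probability is its squared amplitude, normalized
    # against the alternative branch.
    return amp ** 2 / (amp ** 2 + other_amp ** 2)

# Doubling the amplitude of the bad branch raises how likely you are to find
# yourself in it...
print(branch_probability(0.1, 0.9))   # ~0.012
print(branch_probability(0.2, 0.9))   # ~0.047
# ...but changes nothing about what either branch is like from the inside,
# which is the claimed asymmetry between harming a person and creating one.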


“Button 2 would benefit Steve to some degree and create Brenda. This would be good! Button 3 would create Romero, benefit Brenda to some great degree, and get rid of the benefit to Quan. That would be worth pressing—the benefit to Brenda would dramatically outweigh the cost to Quan. Button 5 would have the effect of pressing all of the other buttons.”

What happened to Button 4? You mention it afterward, but it's not clear whether it's the same as Button 5 or different.

author

Oops, I said button 5 when I meant button 4.


I think this is an interesting thought experiment and I'd have to think a lot more about it, but my intuition is that it has some issues with temporality - you say the last button has the effect of "pushing all the other buttons," but those buttons *need* to be pressed in a particular order or their effects won't be worthwhile. If you press button 3 without having pressed button 2 first, then you'd be harming Quan but Brenda wouldn't exist to be benefited. So button 3's moral worth depends on having *first* pressed the other two. And that means pushing button 4 is only justified if you accept some sort of principle like: "If it's right to do A, and right to do B given A, and right to do C given B, then it's right to do A, B, and C at exactly the same time." And it seems like there could be some solid counterexamples to that.

Another way to understand button 4 would be that, instead of actually pushing all the other buttons, it just brings about whatever the end result of pushing those buttons in a row would be. Then your principle would have to be something like, "If it's right to do A, and right to do B given A, and right to do C given B, then it's right to do something that would bring about the state of affairs that obtains after having performed A, B, and C." But it's not obvious to me that's always true either. I'd have to think about it some more.
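If it helps to see the structure, here's the setup as a small Python sketch. The welfare numbers are invented (the post assigns none), and button 4 is read as pressing the first three in order.

def button_1(world):
    world = dict(world)
    world["Quan"] = world.get("Quan", 0) + 5       # benefit Quan
    world["Steve"] = 0                             # create Steve
    return world

def button_2(world):
    world = dict(world)
    world["Steve"] = world.get("Steve", 0) + 5     # benefit Steve
    world["Brenda"] = 0                            # create Brenda
    return world

def button_3(world):
    world = dict(world)
    world["Romero"] = 0                            # create Romero
    world["Brenda"] = world.get("Brenda", 0) + 20  # benefit Brenda greatly
    world["Quan"] = world.get("Quan", 0) - 5       # take back Quan's benefit
    return world

# Reading 1: button 4 presses the others in order (sequential composition).
def button_4_as_sequence(world):
    return button_3(button_2(button_1(world)))

# Reading 2: button 4 simply produces whatever end state that sequence yields.
# The end states coincide here; the dispute is over which principle licenses
# the jump from "each step is right" to "the one-shot button is right".
print(button_4_as_sequence({"Quan": 0}))
# {'Quan': 0, 'Steve': 5, 'Brenda': 20, 'Romero': 0}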

author

We can stipulate that the last button has the effect of pressing the other buttons in order. You mention two principles that you say aren't obvious. They seem obvious to me. Can you give a counterexample to either?


But if the last button has the effect of pressing the other buttons in order, then doesn't the argument really lose steam? In that case, pressing the button doesn't *just* create happy people - it benefits Quan and creates Steve, and then benefits Steve and creates Brenda, and so on. So the only principle it really illustrates is that, if some chain of actions is justified every step of the way, then the final outcome is justified. But that's pretty uncontroversial - you'd need to show that the final outcome was morally preferable *apart from the chain.* Someone who holds a person-affecting view could just say the final outcome was justified because of what happened "inside the chain," and I don't think there's anything incoherent about that.

There are some practical counterexamples to "If it's right to do A, and right to do B given A, and right to do C given B, then it's right to do A, B, and C at exactly the same time." An obvious one would be giving someone three different medicines that need to be taken in a particular order. It might be right to give them the first, and then the second, and then the third, but not right to give them all at once. But I'm not even sure this principle matters, because it's not even clear what the buttons would *do* if pressed all at once.

The second principle - "If it's right to do A, and right to do B given A, and right to do C given B, then it's right to do something that would bring about the state of affairs that obtains after having performed A, B, and C" - is more central to your argument because I think that's the interpretation you'd want to go for (rather than saying the button does all the other buttons in a row). And I will admit I can't think of a counterexample off the top of my head, but it doesn't seem particularly intuitive to me. I'll think it over more and comment if I do think of a counterexample.

author

I was assuming that the effects of the buttons would resolve simultaneously but the buttons would be pressed in order.

In terms of the first principle, there would obviously be a ceteris paribus clause.

The second principle seems obvious to me!


What would you think about a scenario like this? You have Quan, and a button that will give him five dollars and make him promise to do something neutral, like getting dinner with his mother. It seems like you should push that button, right? Five dollars is a benefit, and there's nothing harmful about promising to do something neutral. But then you could have another button that, when pressed, would make Quan get dinner with his mother and lose eleven dollars. At that point, it seems like you should also press that button, because keeping your promise is plausibly more important than losing eleven dollars. And in that case, the "all at once" button would simply result in Quan having lost six dollars, which isn't good. So it seems to me that there are decision chains that could be justified every step of the way, but result in an end state that isn't preferable by itself.
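Spelling out the arithmetic of that chain in the same sketch style (numbers straight from the scenario above):

gain_from_button_1 = 5     # dollars Quan gains along with the promise
loss_from_button_2 = -11   # dollars Quan loses keeping the promise
print(gain_from_button_1 + loss_from_button_2)   # -6: each step looked right, yet the end state is a net loss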

author

Does Quan have the option to get dinner with his mother without losing 11 dollars? If so, then button 2 is not worth pressing--he should just go to dinner without paying 11 dollars. If not, then button 1 is not worth pressing.


Oh, but I think this isn't the way you'd want to resolve the conflict! Someone who holds a person-affecting view could also say that Button 1 is not worth pressing because it ultimately results in no benefit to the person who already exists at the first moment of deliberation. That's the point I'm trying to make here - it's possible for good decisions to create obligations (or just extraneous concerns) that later undercut the original motivation for starting the chain. If you can say "Button 1 isn't worth pressing because the obligation it creates later justifies erasing the benefit it was originally intended to obtain," then I could say Button 1 in your case isn't worth pressing because the new subjects it creates later justify erasing the benefits it was originally intended to obtain. Right?
