19 Comments

Without irony I can say that I am grateful for this substack existing. There is something straightforward and unapologetic about the writings of the young (anyone under 25), but mostly young people don't write very well. This is a delightful exception, and it clarifies the subject matter for me in a way I haven't seen elsewhere. Older, more cynical writers hedge and obfuscate and disguise with jokes and irony. That doesn't happen on this substack -- here are ideas that I've never been able to engage with fully due to an inability to believe people really take them seriously. Utilitarianism and deontology have always seemed rather obviously absurd to me, and people who talk about them always seem to be pulling my leg. Even when writers I like and respect a great deal (e.g. Scott Alexander) talk about utilitarianism, it always feels like I'm reading science fiction. Instead of saying "Ok, hypothetically imagine that time travel is possible -- what quirky scenarios (looking at you, Heinlein) follow?", we say "Hypothetically, if we were going to try to maximize the good for society, what would we do?". It's as much a thought experiment as when Kant tries to puzzle out what might make a thing good in itself -- it has no connection to actual humans and their so-called decision making, except as providing a verbal layer to whatever they were going to do anyway. Equally obviously, my intuition about the non-seriousness of this prevents me from engaging with the arguments as the authors would prefer.

So, thank you for this. I understand philosophy better than I did before reading it.

Thanks!

I'm not clear on why you say the doctor should not kill one to save five, or even whether that is actually your view. I mean, you could argue that if doctors started pulling the lever, everyone would steer clear of them, but maybe the real problem there is that people are not educated into utilitarianism: if society were utilitarian, that would be how medicine would work, and everyone would accept it. Is this your view, and if not, why not?

I don’t understand the question

What is your position on the doctor/organ transplant version of the trolley problem? Why shouldn't the doctor kill one to save five?

In the real world, they shouldn't because it would typically have disastrous consequences. However, if you know with certainty that nothing would go wrong, and it would just save five people, you should do that.

Yeah, but it would only have disastrous consequences in our non-utilitarian world. In a world where the utilitarians win and everyone is a utilitarian, doctors would all pull the lever in that problem, as you say. This is why I really want to interview you, so you can walk me through what the utilitarian world is like.

Bulldog,

For most of my life I've been a (partially conflicted) believer in natural rights, but lately I've been pushed more and more toward utilitarianism and am now teetering on the brink -- and this post of yours contributed greatly to bringing me to this point.

However, there are a couple of topics on which I am having trouble embracing the utilitarian viewpoint, and which I would love to see you write a post about:

1. Just Deserts: (This example is from Huemer.) You have a tasty cookie that will produce harmless pleasure with no other effects. You can give it either to the serial killer Ted Bundy or to the saintly Mother Teresa. Bundy enjoys cookies slightly more than Teresa does. Should you therefore give it to Bundy?

I suppose utilitarians might say that you could give the cookie to Teresa to avoid incentivizing serial killing, or because other people might see you give the cookie to Bundy and derive dissatisfaction from their sense of justice being violated (even if their conception of justice is incorrect), but these responses would dodge the point -- most people have the intuition that giving the cookie to Ted Bundy is fundamentally wrong beyond any downstream consequences simply because Ted Bundy doesn't *deserve* the cookie.

I've heard of "desert-adjusted" utilitarianism (DAU) (https://utilitarianism.net/near-utilitarian-alternatives/#desert-adjusted-views), which seems to address the issue head-on. Do you think DAU is the correct framework?

2. Restitution: Consider Abe, Bob, and Cindy. Abe owns a bike. However, Bob would get more utility from the bike than Abe. Bob steals the bike from Abe (with no intention of using the bike to aid in committing more crimes). Cindy is wealthy and could buy Abe a new bike with minimal utility loss to herself.

Putting aside the important deterrent effects of having laws against stealing and the fact that stealing is usually wrong, utilitarianism would seem to call for letting Bob keep the stolen bike and having Cindy buy Abe a new bike. Yet this strikes most people as unfair -- Bob stole the bike, so he should be required to return it to Abe (or buy him a new one that is just as good).

While I can appreciate that in other alleged counter-examples, such as the Organ Harvester, we cannot so easily set aside our intuitions about the broader implications, or our status quo and other biases, I'm not confident that response would be satisfactory in this example. Or is it?

Would you say that property rights are just a social construct and so Abe in fact had no greater moral claim to the bike than Bob did? Would our intuitions or the morally justified resolution change if we stipulated that Bob first asked Abe politely for the bike and Abe refused and only then did Bob take it?

^ I would be eager to read a post of yours addressing these two topics!

Lastly, I would also be interested in reading your thoughts on a utilitarian legal framework -- is private ownership of the means of production justified, and if so, to what extent? What should the law require with respect to redistributive justice? I am aware that many utilitarians embrace common-sense moral norms in many cases (https://utilitarianism.net/utilitarianism-and-practical-ethics/#respecting-commonsense-moral-norms), but I need more detail!

Thank you. By the way, you have helped convince me to go vegan.

I'm glad to hear that I've helped convince you to go vegan. Just make sure to take B12! I've addressed the desert objection at some length, which should also address the second one -- here are a few articles I wrote about it, ranked in order of convincingness. Kershnar also has a very convincing book on the subject, as does Bruce Waller.

https://benthams.substack.com/p/against-desert-part-4-better-to-be

https://benthams.substack.com/p/against-desert-part-2-moral-luck

https://benthams.substack.com/p/contra-huemer-on-utilitarianism-part

https://benthams.substack.com/p/against-desert-part-3-no-systematization

Thanks for the tip on B12 and the links to your posts on just deserts. However, regarding the latter I must say I find them unpersuasive.

I was attempting to raise a more limited notion of "just deserts": Not that evil people deserve to be punished for punishment's sake; only that we should give their utility less weight than non-evil people's -- but I do not argue that we should give their utility negative weight.

"Thus, if a person is guaranteed to suffer, it seems on its face that desert views imply that it would be better if they were morally worse; this makes it so that the badness of their suffering decreases. But this is implausible—making someone who will suffer morally worse cannot be good."

This phrasing is misleading, IMO -- no one *hopes* anyone has acted immorally in the sense that it would increase the historical volume of immoral actions taken in the world relative to what one previously thought. I think about it like this: If you tell me someone got hurt in a tragic accident, my immediate reaction is "Oh no, that's terrible!". But if you then tell me that this person tortured animals for fun, I will feel less bad about their misfortune (even if we assume the accident did not incapacitate them in a way that prevents them from doing bad things for a period of time). I don't feel good about their misfortune -- I don't say they deserved to suffer a tragic accident -- but I do feel less bad about it. And this doesn't mean that I hope history contained more animal torture than I otherwise would've thought. Rather, if someone had to suffer, better it be someone who is awful than someone who is nice.

The moral luck argument is interesting, and while I certainly acknowledge that random circumstances shape people in profound ways, I still believe people should be at least partially held accountable for choices they make.

The Contra Huemer argument where you address the cookie dilemma doesn't really address it -- that post addresses the argument that we should be happy when evil people are punished, not the intuition that we should give less-but-still-positive weight to the utility of evil people.

I believe the distinction matters, because the intuition is different -- or at least it can be. In the cookie dilemma, one can intuit that it is better to give the cookie to Teresa than Bundy because she is more deserving. But if we suppose that Teresa didn't like the cookie at all, one could favor giving it to Bundy in that case. (Others might prefer to destroy the cookie than give it to Bundy -- *that* would be vindictiveness that I don't endorse.)

Perhaps you think I am wrong to draw a distinction between not valuing the utility of evil people *as much* (less-but-still-positive weight) and claiming that it is good to punish evil people for its own sake (negative weight). If so, I urge you to present a case for why this distinction is irrelevant, rather than taking it for granted.

Or if you accept that there is a distinction, then I urge you to make the case for why the more limited notion of just deserts is untenable -- preferably without appeal to proof-by-contradiction. (But if you must appeal to counter-examples, try to keep them within the realm of plausibility -- I can't find it now, but you have an example along the lines of "imagine if someone's life was temporally scrambled..." -- no, I will not indulge this type of convoluted hypothetical.) Please present your case clearly so that layfolk like myself can follow. I'm sure many people would be interested to read such an argument from you!

//I was attempting to raise a more limited notion of "just deserts": Not that evil people deserve to be punished for punishment's sake; only that we should give their utility less weight than non-evil people's -- but I do not argue that we should give their utility negative weight.

"Thus, if a person is guaranteed to suffer, it seems on its face that desert views imply that it would be better if they were morally worse; this makes it so that the badness of their suffering decreases. But this is implausible—making someone who will suffer morally worse cannot be good."

This phrasing is misleading, IMO -- no one *hopes* anyone has acted immorally in the sense that it would increase the historical volume of immoral actions taken in the world relative to what one previously thought. I think about it like this: If you tell me someone got hurt in a tragic accident, my immediate reaction is "Oh no, that's terrible!". But if you then tell me that this person tortured animals for fun, I will feel less bad about their misfortune (even if we assume the accident did not incapacitate them in a way that prevents them from doing bad things for a period of time). I don't feel good about their misfortune -- I don't say they deserved to suffer a tragic accident -- but I do feel less bad about it. And this doesn't mean that I hope history contained more animal torture than I otherwise would've thought. Rather, if someone had to suffer, better it be someone who is awful than someone who is nice.//

The argument I raise there is perfectly compatible with that conception of desert. If you think better people's suffering is worse than worse people's suffering, the argument goes through. I have a longer paper on this that I can send you if you give me your email.

The moral luck thing -- any account of desert that meets our intuitions will have to affirm moral luck. Problem: moral luck is very implausible. This applies, once again, to any conception of desert, not just retributivist ones.

Please send the paper to cutler.eli@gmail.com

Consider two poor brothers raised in the same household with equal marginal utility of cash. One used to work until getting paralyzed in an accident. The other is able-bodied but refuses to work because he is lazy. Are both brothers equally morally deserving of social welfare?

There are practical reasons to give to the first rather than the second, based on the fact that the second will work if he doesn't get paid, while the first won't.

That is a red herring (and could easily be stipulated away) -- I specifically asked if they are equally *morally deserving*.

Unless you are willing to disclaim free will entirely, it is hard to reject the intuition that the unfortunate brother is more morally deserving than the lazy brother.

On 4: Why see emotional influence as undermining? Why don't emotions stand to value as our senses stand to perceptible properties: i.e., typically we have negative emotions about bad things and positive emotions about good things. To some degree, our emotions are stronger in response to worse/better things. Of course, our emotions don't *perfectly* track the underlying strength of reasons. But nor does perception perfectly track perceptible properties: illusions occur, and indeed occur pretty often at a minor level. I guess I'd probably say that the degree to which emotions actually track the relative importance of good/bad things is way noisier than perception.

I agree that our emotions can sometimes be truth-tracking. But I think reason is generally a better guide. If you'd expect us to have stronger emotions about something regardless of the moral truth, then that undermines our emotions' reliability.

"Reason" seems imprecise to me in this context. What does it mean exactly?

Our beliefs that we reach through thinking hard about things.
