Utilitarianism Wins Outright Part 23: Another Tricky Issue for Other Accounts
Utilitarianism does best in weird cases
Suppose one is deciding between two actions. Action 1 would have a 50% chance of increasing someone's suffering by 10 units, and action 2 would have a 100% chance of increasing their suffering by 4 units. It seems clear that one should take action 2. After all, the person is better off in expectation.
However, non-utilitarian theories have trouble accounting for this. If there is a wrongness to violating rights that exists over and above the harm caused, then, supposing the badness of violating rights is equivalent to 8 units of suffering, action 1 would be better (a ½ chance of 18 units of badness, an expectation of 9, is less bad than a certain 12).
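Spelling out the arithmetic behind that parenthetical, using the stipulated numbers: counted in expected suffering alone, action 1 costs 0.5 × 10 = 5 while action 2 costs 4, so action 2 looks better; but once a flat 8-unit violation penalty is added to each harm, the ranking flips:

$$0.5 \times (10 + 8) = 9 \quad < \quad 1.0 \times (4 + 8) = 12.$$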
The non-utilitarian may object that the badness of the act depends on how much harm is done; they might say that the first action is a more serious rights violation. Suppose the formula they give is that the badness of a rights violation equals twice the amount of suffering it causes.
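On that proportional formula, again with the stipulated numbers, the expected badness works out to

$$0.5 \times (10 + 2 \times 10) = 15 \quad > \quad 1.0 \times (4 + 2 \times 4) = 12,$$

so the intuitive verdict in favor of action 2 is restored.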
This formula leaves them open to a few issues. First, it assigns no badness to harms that cause no suffering, so the account cannot hold that harmless rights violations are bad. Second, it does not sit well with the idea of rights: rights violations are supposed to add badness to an act independently of the suffering caused, not merely scale with it.
Perhaps the deontologist can work out more complex arithmetic to avoid this issue. But the problem is easy for utilitarians to solve, while it demands added complexity from deontologists and others who champion rights.
A. I dare you to tell me what exactly "4 units of suffering" means. And no copping out by saying it's "metaphysically possible to know" or other nonsense. If you can't tell me what 4 units of suffering actually means, then this is nonsense. If no one can say what 4 units of suffering means with any certainty, I find it hard to believe that rendering it numeric is possible at all.
B. Increasing someone's suffering might not be a rights violation at all. If an insane utilitarian is trying to force me into the rape-machine to increase my "pleasure", I am well within my rights to defend myself against them, even if doing so causes more suffering than the right is worth.
You might stipulate that I am not in that sort of situation in this example, but the point stands: without context, these scenarios cannot be evaluated.
> Action 1 would have a 50% chance of increasing someone's suffering by 10 units, and action 2 would have a 100% chance of increasing their suffering by 4 units. It seems clear that one should take action 2. After all, the person is better off in expectation.
There is simply not enough information to decide. Suffering is merely instrumental to the things that actually matter, like dismantling utilitarianism. Without knowing how these actions bear on those actual values, it is impossible to make a decision. We can see that pleasure is merely instrumental for several reasons. For one, the intuitively good things that create pleasure track suspiciously well with values like friendship and wisdom, while the things that create bad pleasure, like forcing people into experience machines, both fail to track real values like human interaction and are intuitively bad.
We can also use the 12-premise argument:

1. A rational egoist is defined as someone who makes themselves as good as possible.
2. A rational egoist would do only actions that make them good people.
3. Therefore, only following higher-order values such as human interaction, friendship, love, and pursuing one's passions is good (for selves who are rational egoists).
4. Only the types of things that are good for selves who are rational egoists are good for selves who are not rational egoists, unless they have unique benefits that apply only to rational egoists or unique benefits that apply only to those who are not rational egoists.
5. Higher-order values do not have unique benefits that apply only to rational egoists.
6. Therefore, higher-order values are good for selves whether or not they are rational egoists.
7. All selves either are or are not rational egoists.
8. Therefore, higher-order values are good for selves.
9. Something is good if and only if it is good for selves.
10. Therefore, higher-order values are good.
11. We should maximize good.
12. Therefore, we should maximize higher-order values.
> If there is a wrongness to violating rights that exists over and above the harm caused, then, supposing the badness of violating rights is equivalent to 8 units of suffering, action 1 would be better (a ½ chance of 18 units of badness, an expectation of 9, is less bad than a certain 12).
A) A portion of a rights violation occurs when you take an action that carries an unjustifiably high risk of violating rights, that is, when the risk exceeds its threshold of permissibility; this is why attempted murder is bad. Under that framework, the calculus changes; a quick calculation after point B illustrates this.
B) This is not as counterintuitive as you make it out to be. It is entirely reasonable to try to avoid harming important interests, even if those choices are not fully utilitarian. That is, of course, exactly what makes this moral philosophy different from utilitarianism.
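To illustrate point A with the post's own numbers, and keeping the hypothetical 8-unit penalty: on the assumption that the violation-badness attaches to the risky act itself once the risk crosses the permissibility threshold, rather than being discounted by the probability that the harm materializes, the comparison becomes

$$0.5 \times 10 + 8 = 13 \quad > \quad 1.0 \times 4 + 8 = 12,$$

so action 1 now comes out worse, matching the intuitive verdict.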
> The non-utilitarian may object that the badness of the act depends on how much harm is done; they might say that the first action is a more serious rights violation. Suppose the formula they give is that the badness of a rights violation equals twice the amount of suffering it causes.
We could have a more realistic equation, perhaps one with a baseline badness for any violation that then increases with the actual damage done. That is not at all more complicated than your strawman interpretation of it as a purely linear equation. We don't do moral calculus here!
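For what it's worth, a baseline-plus-damage rule of this kind need not upset the intuitive verdict either. Taking, purely for illustration, a baseline badness of 2 plus twice the suffering caused:

$$0.5 \times (10 + 2 + 2 \times 10) = 16 \quad > \quad 1.0 \times (4 + 2 + 2 \times 4) = 14,$$

so action 2 still comes out as the less bad option.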