As I’ve written elsewhere, when it comes to normative ethics, utilitarianism wins outright. Numerous considerations decisively favor it over every rival view.
This article will present several plausible ways of proving utilitarianism from plausible axioms, along with the theory’s virtues. In my view, however, the main reason to support utilitarianism is abductive: it makes great sense of morality. All of the supposed counterexamples to utilitarianism end up supporting it, showing that it gets the right result even when it seems wrong at first. I don’t have the space to prove that here, given the sheer diversity of cases, but I shall show in my rebuttals that my interlocutor’s counterexamples to utilitarianism (assuming there are any) support it.
1 Theoretical Virtues
When deciding upon a theory we want something with great explanatory power, scope, simplicity, and clarity. Utilitarianism does excellently by these criteria. It’s incredibly simple, requiring just a single moral law saying one should maximize the positive mental states of conscious creatures; it explains all of ethics, applies to all of ethics, and has perfect clarity. Thus, utilitarianism starts out ahead based on its theoretical virtues. It also does well in terms of prior plausibility, being immensely intuitive. It just seems obvious that ethics should be about making everyone’s life as good as possible.
2 History As A Guide
History favors utilitarianism. If we look at historical atrocities, they were generally opposed by utilitarians, and utilitarian philosophers were often on the right side of history. Bentham favored decriminalizing homosexuality, abolishing slavery, and protecting non-human animals. Mill was the second member of Parliament to advocate for women's suffrage and argued for gender equality. In contrast, philosophers like Kant harbored far less progressive views, supporting the killing of people born to unmarried parents, favoring racial supremacy, and believing masturbation to be a horrifically wrong “unmentionable vice.”
Additionally, the atrocities of slavery, the Holocaust, Jim Crow, and all the others have come from excluding a class of sentient beings from moral consideration, something utilitarianism prevents.
If utilitarianism were not the correct moral view, it would be a bizarre coincidence both that it has the mechanism to rule out every historical atrocity and that utilitarians are consistently hundreds of years ahead of their time on important moral questions.
3 A Syllogism
These premises, if true, prove utilitarianism.
1 A rational egoist is defined as someone who does only what produces the most good for themselves.
2 A rational egoist would do only what produces the most happiness for themselves.
3 Therefore, only happiness is good (for selves who are rational egoists).
4 The types of things that are good for selves who are rational egoists are also good for selves who are not rational egoists, unless they have unique benefits that only apply to rational egoists.
5 Happiness does not have unique benefits that only apply to rational egoists.
6 Therefore, only happiness is good for selves who are or are not rational egoists.
7 All selves either are or are not rational egoists.
8 Therefore, only happiness is good for selves.
9 Something is good if and only if it is good for selves.
10 Therefore, only happiness is good.
11 We should maximize good.
12 Therefore, we should maximize only happiness.
I shall present a defense of each of the premises.
Premise 1 is true by definition.
Premise 2 states that a rational egoist would do only what produces the most happiness for themselves. This has several supporting arguments.
1 When combined with the other premises, this entails that whatever a rational egoist would pursue for themselves is what should be maximized generally. But it would be extremely strange to maximize other things like virtue or rights, and no one holds that view.
2 Any agent that can suffer matters. Imagine a sentient plant that feels immense agony as a result of its genetic formation but can’t move or speak. It is harmed by its pain, despite having no rights violated and no virtues at stake. Thus, being able to suffer is a sufficient condition for moral worth.
We can consider a parallel case of a robot that does not experience happiness or suffering. Even though this robot acts exactly like us, it would not matter absent the ability to feel happiness or suffering. These two intuitions combine to form the view that hedonic experience is a necessary and sufficient condition for mattering. This serves as strong evidence for utilitarianism—other theories can’t explain this necessary connection between hedonic value and mattering in the moral sense.
One could object that rights, virtue, or other non-hedonic goods are an emergent property of happiness, such that one only gains them when one can experience happiness. However, this is deeply implausible, for it requires strong emergence. As Chalmers explains, weakly emergent properties are reducible to interactions of lower-level properties. For example, chairs are reducible to atoms, given that we need nothing more to explain the properties of a chair than knowing the ways that atoms function. Strongly emergent properties, by contrast, are not reducible to lower-level properties. Philosophers tend to think there is at most one strongly emergent thing in the universe, so if deontology requires strong emergence, that’s an enormous cost.
3 As we’ll see, theories other than hedonism are disastrously bad at accounting for what makes someone well off. However, I’ll only attack them if my opponent presents one, because there are too many to criticize here.
4 Hedonism seems to unify the things that we care about for ourselves. If someone is taking an action to benefit themselves, we generally take them to be acting rationally if that action brings them joy. This is how we decide what to eat, how to spend our time, and who to be in a romantic relationship with, and it is the reason people spend their time doing things they enjoy rather than picking grass.
The rights that we care about are generally conducive to utility: we care about the right not to be punched by strangers, but not the right to not be talked to by strangers, because only the first is conducive to utility. We care about beauty only if it's experienced; a beautiful unobserved galaxy would not be desirable. Even respect for our wishes after our death is something we only care about if it increases utility. We don’t think we should light a candle on the grave of a person who’s been dead for 2,000 years, even if during life they desired that the candle be lit. Thus, it seems that for any X, we only care about X if it tends to produce happiness.
5 Consciousness seems to be all that matters. As Sidgwick pointed out, a universe devoid of sentience could not possess value. The notion that for something to be good it must be experienced is a deeply intuitive one. Consciousness seems to be the only mechanism by which we become acquainted with value.
6 Hedonism seems to be the simplest way of ruling out posthumous harm. Absent hedonism, a person can be harmed after they die, yet this violates our intuitions.
7 As Pummer argues, non-hedonism cannot account for lopsided lives.
If we accept that non-hedonic things can make one’s life go well, then a life could have very high welfare despite any amount of misery. In fact, it could be arbitrarily good despite arbitrarily great misery. Thus, with enough non-hedonic goodness (e.g. knowledge, freedom, or virtue), a person’s life could be great for them despite their experiencing the total suffering of the Holocaust every second. This is deeply implausible.
8 Even so much as defining happiness seems to require saying that it’s good. The thing that makes boredom suffering but tranquility happiness is that tranquility has a positive hedonic tone and is good, unlike boredom. Thus, positing that joy is good is needed to explain what joy even is. Additionally, we have direct introspective access to the badness of pain when we experience it.
9 Only happiness seems to possess desire-independent relevance. A person who doesn’t care about their suffering on future Tuesdays is being irrational. However, this does not apply to rights: one isn’t irrational for not exercising one’s rights. If we’re irrational not to care about our happiness, then happiness has to objectively matter.
10 Sinhababu argues that reflecting on our mental states is a reliable way of forming knowledge as recognized by psychology and evolutionary biology—we evolved to be good at figuring out what we’re experiencing. However, when we reflect on happiness we conclude that it’s good, much like reflecting on a yellow wall makes us conclude that it’s bright.
11 Hedonism is very simple, holding that there’s only one type of good thing, making it prima facie preferable.
Premise 3 says therefore only happiness is good (for selves who are rational egoists). This follows from the previous premises.
Premise 4 is trivial. For things to be better for person A than for person B, they must have something about them that produces extra benefits for person A, by definition.
Premise 5 says happiness does not have unique benefits that only apply to rational egoists. This is obvious.
Premise 6 follows from the previous premises.
Premise 7 is trivial.
Premise 8 says that, therefore, only happiness is good for selves. This follows from the previous premises.
Premise 9 says something is good if and only if it is good for selves.
It seems hard to imagine something being good, but being good for literally no one. If things can be good while being good for no one, there would be several difficult entailments that one would have to accept, such as that there could be a better world than this one despite everyone being worse off.
People only deny this premise if they have other commitments, which I’ll argue against later.
Premise 10 says therefore only happiness is good. It follows from the previous premises.
Premise 11 says we should maximize good.
First, it’s just trivial that if something is good we have reason to pursue it, so the best thing is what we have the most reason to pursue.
Second, this is deeply intuitive. When considering two options, it is better to make two people happy than one, because doing so produces more good. “Better” is a synonym for “more good,” so if an action produces more good, it is better that it be done.
If there were other considerations that counted against doing things that were good, those considerations would be bad, and thus would still relate to considerations of goodness.
Third, as Parfit has argued, what makes things go best is the same as what everyone could rationally consent to and what no person could reasonably reject.
Fourth, an impartial observer should hope for the best state of the world to come into being. However, it seems clear that an impartial observer should not hope for people to act wrongly. Therefore, the right action should bring about the best world.
Fifth, as Yetter-Chappell has argued, agency should be a force for good: giving a perfectly moral agent control over whether some action happens shouldn’t make the world worse. In the trolley problem, for example, the world would be better if the switch flipped as a result of random chance, divorced from human action. But if it is wrong to flip the switch, then giving a perfectly moral person control over whether the switch flips would make the world actively worse. Likewise, it would be better for a perfectly moral person to have a muscle spasm that results in the switch flipping than to have total control of their actions. It shouldn’t be better, from the point of view of the universe, for perfectly benevolent agents to have muscle spasms resulting in them taking actions that would have been wrong had they taken them voluntarily.
Sixth, as Yetter-Chappell has argued, a maximally evil agent would be an evil consequentialist trying to do as much harm as possible, even if that involves not violating rights; so a maximally good agent would be the opposite.
Premise 12 says therefore, we should maximize only happiness. This follows from the previous premises.
4 Harsanyi’s Proof
Harsanyi’s argument is as follows.
Ethics should be impartial: it should be the realm of rational choice one would occupy if one were making decisions for a group but was equally likely to be any member of the group. This seems to capture what we mean by ethics. If a person does what benefits themselves merely because they don’t care about others, that wouldn’t be an ethical view, for it wouldn’t be impartial.
So, when making ethical decisions one should act as they would if they had an equal chance of being any of the affected parties. Additionally, every member of the group, and the group as a whole, should be VNM rational. This means that their preferences should satisfy the four von Neumann–Morgenstern axioms of rational decision theory (completeness, transitivity, continuity, and independence), which are accepted across the board; they’re slightly technical, but basically universally agreed upon.
These axioms combine to yield a utility function, which represents the choiceworthiness of states of affairs. For this utility function, it has to be the case that a one-half chance of 2 utility is equally good as certainty of 1 utility: 2 utility is just defined as the amount of utility such that a 50% chance of it is just as good as certainty of 1 utility.
So now as a rational decision maker you’re trying to make decisions for the group, knowing that you’re equally likely to be each member of the group. What decision making procedure should you use to satisfy the axioms? Harsanyi showed that only utilitarianism can satisfy the axioms.
Let’s illustrate this with an example. Suppose you’re deciding whether to take an action that gives 1 person 2 utility or gives 2 people 1 utility each. The above axioms show that you should be indifferent between them. You’re just as likely to be each of the two people, so from your perspective it’s equivalent to a choice between a 1/2 chance of 2 utility and certainty of 1 utility, and we saw above that those are equally valuable by definition. So we can’t just go the Rawlsian route and try to privilege those who are worst off. That is bad math!! The probability theory is crystal clear.
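To spell out the arithmetic (a minimal worked version of the indifference claim, writing EU for expected utility):

$$EU(\text{1 person gets 2}) = \tfrac{1}{2}\cdot 2 + \tfrac{1}{2}\cdot 0 = 1 = 1\cdot 1 = EU(\text{2 people get 1 each}),$$

so behind the veil the two options are exactly tied.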
Now let’s say that you’re deciding whether to kill one to save five, and assume that each of the 6 people will have 5 utility if they live. From the perspective of anyone reasoning impartially, the choice is obvious: a 5/6 chance of 5 utility is better than a 1/6 chance of 5 utility. It is better by a factor of five. These axioms combined with impartiality leave no room for rights, virtue, or anything else that’s not utility-function based.
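Making the expected utilities explicit from behind the veil:

$$EU(\text{kill one to save five}) = \tfrac{5}{6}\cdot 5 = \tfrac{25}{6}, \qquad EU(\text{do nothing}) = \tfrac{1}{6}\cdot 5 = \tfrac{5}{6},$$

and $\frac{25/6}{5/6} = 5$: the factor-of-five claim above.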
This argument shows that morality must be the same as universal egoism: it must represent what one would do if one lived everyone’s life and maximized the good experienced across all of those lives. You cannot discount certain people, nor can you care about agent-centered side constraints.
5 But Mr. Bulldog, What About Rights?
Rights are both the main reason people would deny premise 9 of the earlier syllogism and the main objection to utilitarianism. Sadly, the doctrine of rights is total nonsense.
1 If things could be good or bad while being good or bad for no one, then a universe with no life could have moral value. One could claim that good things must relate to people in some way despite not being directly good for anyone, but this would be ad hoc, and the lifeless universe’s value would remain a surprising result.
2 If something could be bad while being bad for no one, then galaxies full of people experiencing horrific suffering for no one’s benefit could be a better state of affairs than one where everyone is happy and prosperous but where things that are bad for no one, yet bad nonetheless, exist in vast quantities. For example, suppose we take the violation of rights to be bad even when it’s bad for no one. A world where everyone violates everyone else’s rights unfathomably many times, in ways that harm literally no one, but where everyone prospers, could then, given enough people affected, be morally worse than a world in which everyone endures the most horrific forms of agony imaginable.
3 Those who deny this principle usually do so not because the principle itself sounds implausible, but because it conflicts with other things they think matter, primarily rights. However, the concept of rights fails disastrously.
1 Everything that we think of as a right is reducible to happiness. For example, we think people have the right to life, and the right to life increases happiness. We think people have the right not to let others enter their house, but not the right to stop others looking at their house. The only difference between shooting bullets at people and shooting soundwaves at them (i.e. making noise) is that one causes a lot of harm and the other does not. Additionally, we generally think it would be a violation of rights to create so much pollution that a million people die, but not a violation of rights to light a candle that kills no one. The difference is just in the harm caused. And if enshrining as rights things we currently don’t think of as rights began to maximize happiness, we would think they should be recognized as rights. For example, we don’t think it’s a violation of rights to look at people, but if every time we were looked at we experienced horrific suffering, we would think it was.
2 If we accept that rights are ethically significant, then there’s some number of rights violations that could outweigh any amount of suffering. For example, suppose that there are aliens who will experience horrific torture that gets slightly less unpleasant for every human leg they grab without the human’s knowledge or consent, such that if they grab the legs of 100 million humans the aliens will experience no torture. If rights are ethically significant, then the aliens grabbing the humans’ legs, in ways that harm no one, would be morally bad: the sheer number of rights violations would outweigh the torture. However, this doesn’t seem plausible. It seems implausible that aliens should have to endure horrific torture so that we can preserve our sanctity in some indescribable way. If rights matter, a world with enough rights violations where everyone is happy all the time could be worse than a world where everyone is horrifically miserable all the time but where there are no rights violations.
If my opponent argues for rights, then I’d challenge him to give a way of deciding whether something is a right that is not based on hedonic considerations.
3 A reductionist account is not especially counterintuitive and does not rob us of our understanding or appreciation of rights. It can be analogized to the principle of innocence until proven guilty. That principle is not literally true: a person’s innocence until demonstration of guilt is a useful legal heuristic, yet a serial killer is guilty even if their guilt has not been demonstrated.
4 An additional objection can be given to rights. We generally think that it matters more not to violate rights oneself than to prevent other rights violations; we intuitively think that we shouldn’t kill one innocent person to prevent two murders. I shall give a counterexample to this. Suppose we have people in a circle, each with two guns that will each shoot a person next to them. Each person can either prevent two other people’s guns from firing, or prevent one of their own guns from shooting one person. If we take the view that one’s foremost duty is to avoid one’s own rights violations, then each person would be obligated to prevent their own gun from shooting the one person. However, if everyone prevents one of their own guns from firing rather than two of other people’s guns, then everyone in the circle ends up shot. If, instead, it’s more important to save as many lives as possible, then each person prevents two guns from firing and no one is shot. The world where everyone is shot seems clearly worse.
Similarly, if it is bad to violate rights, then one should try to prevent their own violations of rights at all costs. If that’s the case, then if a malicious doctor poisons someone’s food and then realizes the error of their ways, the doctor should try to prevent that person from eating the food and having their rights violated, even at the expense of other people being poisoned in ways the doctor didn’t cause. If the doctor can either prevent one person from eating the food they poisoned or prevent five other people from consuming food poisoned by others, they should prevent the one. This seems deeply implausible.
5 We have decisive scientific reasons to distrust the existence of rights, which is an argument for utilitarianism generally. Greater reflection and less emotional hindrance make people much more utilitarian, as shown by research by Koenigs, Greene et al, and Fornasier. This evidentially supports the reliability of utilitarian judgments.
6 Rights run into a problem based on the aims of a benevolent third-party observer. Presumably a third-party observer should hope that you do what is right. However, given the choice between a world where one person kills one other to prevent 5 indiscriminate murders and a world with 5 indiscriminate murders, the observer should obviously choose the world in which the one person murders to prevent the 5. An indiscriminate murder is at least as bad as a murder done to prevent 5 murders, and 5 indiscriminate murders are worse than one indiscriminate murder; therefore, by transitivity, a world with one murder to prevent 5 should be judged better than a world with 5 indiscriminate murders. And if world A should be preferred by a benevolent impartial observer to world B, then it is right to bring about world A: all of the moral objections to A would count against A being better, so if A is preferred despite those objections, it is better to bring about A. Therefore, one should murder one person to prevent 5 murders. This contradicts the notion of rights.
7 We can imagine a case with a very large series of rings of moral people. The innermost circle has one person, the second innermost has five, the third innermost has twenty-five, and so on; each person corresponds to 5 people in the next circle out. There are a total of 100 circles. Each person is given two options.
1 Kill one person
2 Give the five people corresponding to you in the next circle out the same options you were just given.
The people in the hundredth circle will only be given the first option if the buck doesn’t stop before reaching them.
The deontologist would have two options. First, they could stipulate that a moral person would choose option 2. However, if this is the case, then a cluster of perfectly moral people would bring about 1.5777218 x 10^69 murders, when the alternative action could have resulted in only one murder. This seems like an extreme implication.
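To see where that number comes from: circle $n$ contains $5^{\,n-1}$ people, so if every circle chooses option 2, the buck passes all the way out to the $5^{99}$ people in circle 100, each of whom must kill one person:

$$5^{99} \approx 1.5777218 \times 10^{69} \text{ murders, versus exactly 1 if the innermost person kills.}$$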
Second, they could stipulate that you should kill one person. However, the deontologist holds that you should not kill one person to prevent five people from being murdered. If this is true, then you certainly shouldn’t kill one person merely to give five perfectly moral people two options, one of which is killing one person. Giving perfectly moral beings more options that they don’t have to choose cannot make the situation morally worse. If you shouldn’t kill one person to prevent five murders, you certainly shouldn’t kill one person to prevent five things that a perfectly moral being judges to be at most as bad as murders.
8 Huemer considers a case in which two people are being tortured, prisoners A and B. Mary can reduce A’s torture at the cost of increasing B’s torture by half as much, and she can do the same thing for B. If she does both, this clearly would be good: everyone would be better off. However, on the deontologist’s account, both acts are wrong, since torturing one to prevent greater torture for another is morally wrong.
If it’s wrong to cause one unit of harm to prevent 2 units of harm to another, then an action which does this for two people, making everyone better off, would be morally wrong.
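With hypothetical numbers (my illustration; Huemer’s case fixes only the 2-to-1 ratio, not the magnitudes): suppose each intervention cuts one prisoner’s torture by 2 units while adding 1 unit to the other. Doing both gives

$$\Delta A = -2 + 1 = -1, \qquad \Delta B = +1 - 2 = -1,$$

so each prisoner ends up 1 unit better off, even though each intervention, taken alone, is the supposedly forbidden act of torturing one person for another’s greater relief.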
(More paradoxes for rights can be found here and here, but I can’t go into all of them here).
6 Some Cases Other Theories Can’t Account For
Many other theories are totally unable to grapple with the moral complexity of the world. Let’s consider five cases.
Case 1
Imagine you were deciding whether or not to take an action. This action would cause a person to endure immense suffering, far more suffering than would result from a random assault. The person literally cannot consent. The action would probably bring about more happiness than suffering, but it forces upon them immense suffering to which they don’t consent. In fact, you know that there’s a high chance that this action will result in a rights violation, if not many rights violations.
If you do not take the action, there is no chance that you will violate the person’s rights; indeed, absent this action, their rights can’t be violated at all. And you know that the action has a 100% chance of causing them to die.
Should you take the action? On most moral systems, the answer would seem to be obviously no. After all, you condemn someone to certain death, cause them immense suffering, and they don’t even consent. How is that justified?
Well, the action I was talking about was giving birth. After all, those who are born are certain to die at some point. They’re likely to have immense suffering (though probably more happiness). The suffering that you inflict upon someone by giving birth to them is far greater than the suffering that you inflict upon someone if you brutally beat them.
So utilitarianism seems to naturally—unlike other theories—provide an account of why giving birth is not morally abhorrent. This is another fact that supports it.
Case 2
Suppose one is deciding between two actions. Action 1 would have a 50% chance of increasing someone’s suffering by 10 units, and Action 2 would have a 100% chance of increasing their suffering by 4 units. It seems clear that one should take Action 2; after all, the person is better off in expectation (4 certain units versus 5 expected units).
However, non-utilitarian theories have trouble accounting for this. If there is a wrongness to violating rights that exists over and above the harm caused, then, supposing the badness of a rights violation is equivalent to 8 units of suffering, Action 1 would be better (a 1/2 chance of 18 is less bad than certainty of 12).
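Spelling out that parenthetical (using the stipulated 8-unit badness of a rights violation):

$$E[\text{badness of Action 1}] = \tfrac{1}{2}(10 + 8) = 9 < 12 = 4 + 8 = E[\text{badness of Action 2}],$$

so the rights theorist is forced to prefer the gamble that is worse for the victim in expectation.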
Case 3
Suppose you stumble across a person who has just been wounded. They need to be rushed to the hospital if they are to survive, and if they are rushed there, they will very likely survive. The person is currently unconscious and has not consented to being rushed to the hospital. On non-utilitarian accounts, it’s thus difficult to provide an adequate account of why it’s morally permissible to rush them to the hospital: they did not consent, and the rationale is purely about making them better off.
Case 4
An action will make everyone better off. Should you necessarily do it? The answer seems to be yes, yet other theories have trouble accounting for that if the action violates side constraints.
Case 5
When the government taxes, is that objectionable theft? If not, why not? Consequentialism gives the only satisfactory account of political authority.
7 Conclusion
In this opening statement I’ve presented a series of considerations favoring utilitarianism. Utilitarianism not only follows from plausible axioms but gets the correct answer to hard moral conundrums every single time. Other theories utterly fail to account for the cases given above, and have nowhere near as plausible axiomatic derivations as utilitarianism does.
"When deciding upon a theory we want something with great explanatory power, scope, simplicity, and clarity"
These are all convenient to have, and perhaps useful insofar as one has to apply a theory, but they don't seem like they should increase our credence in util by any amount. In particular, simplicity is wholly irrelevant. Unlike physics, positing additional laws doesn't result in multiplying probabilities here, because ethics isn't being proved as objective fact in the physical world, but is merely a system to more formally explain our moral intuitions.
"It’s incredibly simple, requiring just a single moral law saying one should maximize the positive mental states of conscious creatures"
You can reduce any theory to a sentence if you construct the right sentence. You need far more to get util off the ground, even if we take all facts about the external world as given. What is a "positive mental state"? You might say "happiness" and then wave your hands by saying that we all intuitively know what happiness is, but I strongly doubt that that's actually the case. As shown by people who willingly do things even when it makes them unhappy, there are things that we intuitively consider "better" than happiness.
"explains all of ethics, applies to all of ethics"
No need to make your sentences longer than they already are.
"It just seems obvious that ethics should be about making everyone’s life as good as possible."
It's not though.
"History"
Ok
"Syllogism"
Blatantly False
Premise Two is wrong. Your first argument is incoherent; things like rights and virtues inherently resist "maximization" by how they are defined. There is no justification for "maximization". The sentient plant argument is nonsense because a sentient plant clearly does have rights (you simply assume it does not), and suffering caused merely by circumstances clearly doesn't matter for the ethicality of a being's actions. The robot example depicts a being that probably can't exist, but if it did, I can very easily imagine a robot that had no happiness but did have desires or emotions, in which case it would have rights.
Argument Four is wrong. For one, according to you, we are all irrational if we're not devoting our time to building a wireheading AGI, which is deeply implausible. Also, many people do lots of things knowing it will make them less happy.
As for rights being conducive to utility, I'll flip the argument: isn't it strange how what gives us utility is correlated with known human rights, and how instances of utility that are repulsive or unintuitive usually seem to violate rights...? Seems that rights are true after all!
Argument Six: no reason why posthumous harm is ruled out; desecrating corpses is bad.
Lopsided Lives: shut up and multiply. You can't comprehend just how vast infinity is, which is why you conclude that the Holocaust outweighs it.
Argument Eight is question begging. I refuse to elaborate.
Argument Nine: Future Tuesdays again begs the question as to "irrational"; the second sentence is wrong because we can say that someone is irrational for not using their rights.
Premise 5 is wrong: you've defined a unique benefit of happiness for rational egoists, in that they always want happiness and work to obtain it. This is untrue for non-rational egoists.
Premise 6 is false, there's no reason why happiness is exclusively good, all you have so far is that it is *a* good thing.
Premise 11 is false; the Devil personally informed me of this. More seriously, there is no reason why there's an obligation of any sort to maximize good, even if good really did make the world better. Your arguments here implicitly assume that making the world good is the only relevant consideration; however, acts can be independently wrong even if the snapshot of the world they produce is net better. This follows from the nature of a right as something unconnected with the external world.
"4 Harsanyi’s Proof"
The explanation of what ethics is seems suspicious, but it's fine.
Nothing in this section justifies hedonism or only considering the "state of the world" as opposed to the individual actions that this supposedly ethical person should take. The fact that you are making **decisions** for the group makes it clear that you can't simply dismiss the morality of each decision this observer makes.
"What about rights"
Argument one is just you making a spurious claim and asserting, with no justification, that denying it is "surprising and ad hoc".
Argument 2 assumes that rights can be added together and be subject to multiplication like some utilitarian nonsense. That's wrong. A universe full of extremely severe torture could probably qualitatively outweigh a lot of minor rights violations.
Argument 3(a) can be reversed against utilitarianism, as explained above. All of the similar-sounding sentence structures you construct while handwaving about the "only difference" just show that similar-sounding sentences can mean very different things. Shooting someone up, even if it didn't decrease their "positive mental states", would still be a rights violation. Causing suffering via eyes would probably cause that suffering through a rights violation, and if it didn't (say you look at an evil utilitarian constructing their wireheading AGI and they realize they've been caught), then it was probably justified and not a rights violation.
Leg Grabbing: Their interest in avoiding torture might categorically outweigh leg grabbing at certain levels. And even if the increment is small, we could probably aggregate rights violations and suffering reduction and come to the same qualitative conclusion. But even if not, I think the inherent wrongness of an arbitrary amount of leg grabbing could plausibly outweigh.
The people-in-a-circle example is an interesting take on the earlier circle-of-doom scenario, but it too fails. This is because choosing to stop two other people's guns from firing does *not* ensure that you commit a rights violation as you say; if everyone in the circle does that, then you will have violated no one's rights.
"Similarly, if it is bad to violate rights, then one should try to prevent their own violations of rights at all costs."
Not all costs....
"malicious doctor"
The relevant action here is probably not the whole sequence of events, but rather the second decision: Save 1 or Save 5. The choice is obvious.
Argument 6: Ignores the Acts/World-states distinction.
Argument 7: Contradiction. Choice two says that you give the same options to the next circle, but then you stipulate that the people in the 100th circle will in fact *not* get both choices. All you have shown here is that you can get anywhere from assuming two contradictory premises.
If you try to be annoying and redefine the sentences to say "Give option unless 100th circle" in the text of the options themselves, I would argue that you aren't actually giving people the same options, even if the literal words are the same. For example, if I pointed to a wall with 5 people on it and said "save them", it would be a different request than pointing to a wall with 100 people on it.
Argument 8: Just do both at once lmao.
"Other Theories Can’t Account For"
Baby-Making Argument: This is why we define rights specifically and don't use limitless consequentialism. There's no right not to be born. None of the other flowery descriptions is a rights violation caused by you or very specifically foreseeable. And if one is, then yeah, don't have a child.
Case #2:
Answered Recently
Case #3:
I don't think that normal medical procedures performed without consent, when the person cannot consent, violate any rights.
Case #4:
The answer is just a flat no.
Case #5:
Only if the government is utilitarian or evil