
"When deciding upon a theory we want something with great explanatory power, scope, simplicity, and clarity"

These are all convenient to have, and perhaps have value insofar as one has to apply a theory, but they don't seem like they should increase our credence in util by any amount. In particular, simplicity is wholly irrelevant. Unlike in physics, positing additional laws doesn't result in multiplying probabilities here, because ethics isn't being proved as objective fact about the physical world; it is merely a system to more formally explain our moral intuitions.

"It’s incredibly simple, requiring just a single moral law saying one should maximize the positive mental states of conscious creatures"

You can reduce any theory to a sentence if you construct the right sentence. You need far more to get util off the ground even if we take all facts about the external world as given. What is a "positive mental state"? You might say "happiness" and then wave your hands by saying that we all intuitively know what happiness is, but I strongly doubt that that's actually the case. As shown by people who willingly do things even when it makes them unhappy, there are things that we intuitively consider "better" than happiness.

"explains all of ethics, applies to all of ethics"

No need to make your sentences longer than they already are.

"It just seems obvious that ethics should be about making everyone’s life as good as possible."

It's not though.

"History"

Ok

"Syllogism"

Blatantly false.

Premise Two is wrong. Your first argument is incoherent: things like rights and virtues inherently resist "maximization" by how they are defined, and there is no justification for "maximization" itself. The sentient plant argument is nonsense because a sentient plant clearly does have rights (you simply assume it does not), and suffering caused merely by circumstances clearly doesn't matter for the ethicality of a being's actions. The robot example depicts a being that probably can't exist, but if it did, I can very easily imagine a robot that had no happiness but did have desires or emotions, in which case it would have rights.

Argument Four is wrong. For one, according to you, we are all irrational if we are not devoting our time to building a wireheading AGI, which is deeply implausible. Also, many people do lots of things knowing it will make them less happy.

As for rights being conducive to utility, I'll flip the argument: isn't it strange how what gives us utility is correlated with known human rights, and how instances of utility that are repulsive or unintuitive usually seem to violate rights...? Seems that rights are true after all!

Argument Six: no reason why posthumous harm is ruled out; desecrating corpses is bad.

Lopsided Lives: shut up and multiply. You can't comprehend just how vast infinity is, which is why you conclude that the Holocaust outweighs it; any finite amount of bad is strictly smaller than an infinite sum of even tiny goods.

Argument Eight is question-begging. I refuse to elaborate.

Argument Nine: Future Tuesdays again begs the question as to "irrational", and the second sentence is wrong because we can say that someone is irrational for not exercising their rights.

Premise 5 is wrong: you've defined a unique benefit into happiness for rational egoists, in that they always want happiness and work to obtain it. This is untrue for those who are not rational egoists.

Premise 6 is false: there's no reason why happiness is exclusively good; all you have so far is that it is *a* good thing.

Premise 11 is false; the Devil personally informed me of this. More seriously, there is no reason why there's an obligation of any sort to maximize good, even if good really did make the world better. Your arguments here implicitly assume that making the world good is the only relevant consideration; however, acts can be independently wrong even if the snapshot of the world they produce is net better. This follows from the nature of a right as a constraint on acts rather than on states of the world.

"4 Harsanyi’s Proof"

The explanation of what ethics is seems suspicious, but it's fine.

Nothing in this section justifies hedonism, or only considering the "state of the world" as opposed to the individual actions that this supposedly ethical person should take. The fact that you are making **decisions** for the group makes it clear that you can't simply dismiss the morality of each decision this observer makes.

"What about rights"

Argument one is just you making a spurious claim and asserting with no justification that denying it is "surprising and ad hoc".

Argument 2 assumes that rights can be added together and be subject to multiplication like some utilitarian nonsense. That's wrong. A universe full of extremely severe torture could probably qualitatively outweigh a lot of minor rights violations.

Argument 3(a) can be reversed against utilitarianism, as explained above. All of the ways that you construct similar-sounding sentences while handwaving about the "only difference" just show that similar-sounding sentences can mean very different things. Shooting someone up, even if it didn't decrease their "positive mental states", would still be a rights violation. Causing suffering via the eyes would probably cause that suffering through a rights violation, and if it didn't (say you look at an evil utilitarian constructing their wireheading AGI and they realize they've been caught), then it was probably justified and not a rights violation.

Leg Grabbing: their interest in avoiding torture might categorically outweigh leg grabbing at certain levels. And even if the increment is small, we could probably aggregate rights violations and suffering reduction and come to the same qualitative conclusion. But even if not, I think that the inherent wrongness of an arbitrary amount of leg grabbing could plausibly outweigh it.

The people-in-a-circle example is an interesting take on the earlier circle of doom scenario, but it too fails. This is because choosing to stop two other people's guns from firing does *not* ensure that you commit a rights violation, as you claim: if everyone in the circle does that, then you will have violated no one's rights.

"Similarly, if it is bad to violate rights, then one should try to prevent their own violations of rights at all costs."

Not at all costs....

"malicious doctor"

The relevant action here is probably not the whole sequence of events, but rather the second decision: Save 1 or Save 5. The choice is obvious.

Argument 6: Ignores the Acts/World-states distinction.

Argument 7: contradiction. Choice two says that you give the same options to the next circle, but then you stipulate that the people in the 100th circle will in fact *not* get both choices. All you have shown here is that you can derive anything from two contradictory premises.

If you try to be annoying and redefine the sentences to say "give the option unless 100th circle" in the text of the options themselves, I would argue that you aren't actually giving people the same options, even if the literal words are the same. For example, if I pointed to a wall with 5 people on it and said "save them", it would be a different request than for a wall with 100 people on it.

Argument 8: Just do both at once lmao.

"Other Theories Can’t Account For"

Baby-Making Argument: this is why we define rights specifically and don't use limitless consequentialism. There's no right not to be born. Every other flowery description either isn't a rights violation caused by you or isn't very specifically foreseeable. And if it is, then yeah, don't have a child.

Case #2:

Answered Recently

Case #3:

I don't think that normal medical procedures performed without consent, when the person cannot consent, violate any rights.

Case #4:

The answer is just a flat no.

Case #5:

Only if the government is utilitarian or evil
