Discussion about this post

Silas Abrahamsen:

I am always quite skeptical of these sorts of moral risk arguments, as they seem to have completely untenable consequences. Surely you are not 100% certain that a strong form of deontology is correct--one where duties have lexical priority over utility. Well, in that case you should just adopt that view in your practical reasoning, since any non-zero chance of such a view automatically trumps any considerations of utility. You say that this isn't about taking seriously vanishingly low probabilities, but I just don't see why we shouldn't. What reason is there for taking seriously pretty low probabilities but not vanishingly low ones? What is the cutoff point at which we should no longer take things seriously? If we take the view you propose, it seems that not acting according to the lexical-priority deontological view is just choosing to be irrational and ignoring that view without good grounds for doing so.

I am not completely sure what model to use instead, but I tend towards something like this: we reason in two steps. First, we reason on the normative-ethical level and figure out which view is most plausible. Once we have decided on a view, we move to the practical-ethical level and figure out what the chosen theory tells us to do.

To use a less extreme example: suppose you are in the footbridge variation of the trolley problem. You have an 80% credence in utilitarianism and a 20% credence in some sort of deontology on which pushing the man is seriously wrong (worse than letting 150 people die, or something like that). It looks like you should still push the man, given your commitments, since you are quite certain of utilitarianism, even if pushing would be very wrong conditional on deontology. But on your view, you should almost certainly not push the man. This seems to hold for almost all ethical dilemmas: you should act as a deontologist, even if you are quite certain that utilitarianism is true.

If we instead take my view, we first figure out our normative-ethical credences, which are 80%/20%. From there, we figure out what each view tells us to do. Let's just say that given utilitarianism you are 90% sure you should push and 10% sure you shouldn't, and given deontology you are 80% sure you shouldn't and 20% sure you should (these are just made-up figures). We then calculate your actual credence that you should push like this:

P(Push) = P(Utilitarianism) * P(Push | Utilitarianism) + P(Deontology) * P(Push | Deontology)

Given the numbers I made up, this works out to 0.8 * 0.9 + 0.2 * 0.2 = 0.76, so you should be 76% sure that you should push. This seems much more reasonable, I think.
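
For concreteness, here is a minimal sketch of that two-step calculation in Python, using the made-up credences above (all the numbers and variable names are just illustrative):

# Credences over normative theories
p_util = 0.8              # P(Utilitarianism)
p_deon = 0.2              # P(Deontology)

# Credence that you should push, conditional on each theory
p_push_given_util = 0.9   # P(Push | Utilitarianism)
p_push_given_deon = 0.2   # P(Push | Deontology)

# Total probability: marginal credence that you should push
p_push = p_util * p_push_given_util + p_deon * p_push_given_deon
print(p_push)             # 0.76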

Benjamin:

Sorry, but you still wouldn't be worse than a serial killer, because those serial killers also ate meat. Say you eat x amount of meat per year and kill 0 humans. Jeffrey Dahmer also ate roughly x amount of meat per year and killed maybe 2 humans per year. Since x + 2 > x for all values of x, he still comes out worse, and I don't think most serial killers have abnormally low meat consumption.
