Sam Atis gets off the train to Crazy Town before it reaches some seemingly absurd utilitarian conclusions; I do not. Here, I'll explain why.
I often struggle with articles like this. Thanks for making philosophy so readable.
Something I always respected about your approach to utilitarianism is its consistency, whereas it felt like some others simply gave up at certain points. I think giving up at all undermines the whole theory.
Two challenges now:
1. There is a non-zero probability that some religious doctrine preaching the existence of an eternal heaven is true. In expected-value terms, this infinite payoff should swamp every finite moral concern, making conversion a higher EA priority than saving innocent lives.
2. Since humans, animals, computer simulations, etc. could exist for billions more years, and there could be an extremely large number of them, all altruistic actions should be future-oriented. Reducing X-risk and S-risk matters orders of magnitude more than any currently occurring injustice, and present-oriented altruistic activities are _relatively_ wasteful.
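Both challenges share the same expected-value structure. As a rough sketch (the symbols and the illustrative population figure are mine, not from the original argument):

```latex
% Challenge 1 (Pascal-style): any nonzero credence p in an infinite
% payoff dominates every finite alternative, however small p is.
\mathbb{E}[\text{conversion}] = p \cdot \infty + (1-p)\,c = \infty
\;>\; \mathbb{E}[\text{saving lives}] = v, \quad v \text{ finite}

% Challenge 2 (longtermist): with N future beings of per-being value u,
% even a tiny reduction \delta in extinction risk swamps present aid.
% (N \sim 10^{30} is purely illustrative.)
\delta \cdot N \cdot u \;\gg\; v_{\text{present}}
\quad \text{for large } N
```

The point of both inequalities is that once one term is infinite (or merely astronomically large), the comparison is decided before any other moral consideration enters.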