Epistemic Status: Pretty confident and almost entirely based on ideas expressed here
1. In the past, there was widespread, correlated, biased moral error, even on matters where people were very confident.
2. By induction, we make similar errors, even on matters where we are very confident.
Humans throughout history have made a truly mind-bending number of moral errors. For just a sample of things we know now that we didn't previously:
You shouldn’t keep slaves.
You shouldn’t torture people.
You should also care about people who live in a different city than you, who were born in a different city than you, or who speak a different language than you.
You should let women vote, leave the house, choose who they marry, and generally exercise some amount of self-determination over their own lives.
Your national sport shouldn’t be watching people get murdered.
You shouldn’t rape pubescent children, especially if they are also your slaves.
If a country consists exclusively of former child soldiers with CPTSD and the slaves they can randomly murder on a whim, that is in fact bad, and not good.
It was within the lifetime of my grandparents that Jim Crow laws were on the books. I heard a story from my grandfather about refusing to eat at an Italian restaurant because it wouldn't serve black people. These egregious errors aren't so far in the past, and nothing prevents us from making similar ones. So ideally, we want a moral system that mitigates these horrendous moral errors. Karnofsky argues:
The most credible candidate for a future-proof ethical system, to my knowledge, rests on three basic pillars:
Systemization: seeking an ethical system based on consistently applying fundamental principles, rather than handling each decision with case-specific intuitions.
Thin utilitarianism: prioritizing the "greatest good for the greatest number," while not necessarily buying into all the views traditionally associated with utilitarianism.
Sentientism: counting anyone or anything with the capacity for pleasure and suffering - whether an animal, a reinforcement learner (a type of AI), etc. - as a "person" for ethical purposes.
He additionally explains in an appendix why these are good foundations for future-proof ethics.
Of course, these aren't the only options. There are a number of other approaches to ethics that have been extensively explored and discussed within academic philosophy. These include deontology, virtue ethics and contractualism.
These approaches and others have significant merits and uses. They can help one see ethical dilemmas in a new light, they can help illustrate some of the unappealing aspects of utilitarianism, they can be combined with utilitarianism so that one avoids particular bad behaviors, and they can provide potential explanations for some particular ethical intuitions.
But I don’t think any of them are as close to being comprehensive systems - able to give guidance on practically any ethics-related decision - as the approach I've outlined above. As such, I think they don’t offer the same hopes as the approach I've laid out in this post.
One key point is that other ethical frameworks are often concerned with duties, obligations and/or “rules,” and they have little to say about questions such as “If I’m choosing between a huge number of different worthy places to donate, or a huge number of different ways to spend my time to help others, how do I determine which option will do as much good as possible?”
The approach I've outlined above seems like the main reasonably-well-developed candidate system for answering questions like the latter, which I think helps explain why it seems to be the most-attended-to ethical framework in the effective altruism community.
As Opotow says:
Exclusion from the scope of justice, or moral exclusion, occurs when individuals or groups are seen as outside the boundary in which justice applies. As a result, moral values and rules and considerations of fairness do not apply to those outside the scope of justice.
Opotow adds:
When I began research on moral exclusion, among my first empirical findings was a Scope of Justice Scale consisting of three attitudes toward others: (1) believing that considerations of fairness apply to them; (2) willingness to allocate a share of community resources to them; and (3) willingness to make sacrifices to foster their well-being (Opotow, 1987, 1993). This scale defines moral inclusion and operationalizes it for research.
Opotow is specifically discussing the Holocaust; however, the same broad principle applies to most historical atrocities. Karnofsky, in a footnote, quotes (or perhaps extrapolates from) Singer:
At first [the] insider/ outsider distinction applied even between the citizens of neighboring Greek city-states; thus there is a tombstone of the mid-fifth century B.C. which reads:
This memorial is set over the body of a very good man. Pythion, from Megara, slew seven men and broke off seven spear points in their bodies … This man, who saved three Athenian regiments … having brought sorrow to no one among all men who dwell on earth, went down to the underworld felicitated in the eyes of all.
This is quite consistent with the comic way in which Aristophanes treats the starvation of the Greek enemies of the Athenians, starvation which resulted from the devastation the Athenians had themselves inflicted. Plato, however, suggested an advance on this morality: he argued that Greeks should not, in war, enslave other Greeks, lay waste their lands or raze their houses; they should do these things only to non-Greeks. These examples could be multiplied almost indefinitely. The ancient Assyrian kings boastfully recorded in stone how they had tortured their non-Assyrian enemies and covered the valleys and mountains with their corpses. Romans looked on barbarians as beings who could be captured like animals for use as slaves or made to entertain the crowds by killing each other in the Colosseum. In modern times Europeans have stopped treating each other in this way, but less than two hundred years ago some still regarded Africans as outside the bounds of ethics, and therefore a resource which should be harvested and put to useful work. Similarly Australian aborigines were, to many early settlers from England, a kind of pest, to be hunted and killed whenever they proved troublesome.
So an important lesson of history seems to be: don't exclude large numbers of beings from your moral circle, where exclusion means thinking considerations of fairness don't apply to them, being unwilling to allocate community resources to them, and being unwilling to make sacrifices for them. This is exactly what we do to the far future. There is a possibility of vast numbers of future humans, and yet despite the compelling philosophical arguments for taking their interests into account, common sense often tells us to ignore them. If history has taught us anything, it's that ignoring the interests of large numbers of beings, when we can't provide an adequate philosophical defense of doing so, is a very bad idea.
Much like a person’s spatial location shouldn’t determine their moral significance, neither should their temporal location, or when they exist. The two-hundred-thousandth century is intrinsically just as important as the twenty-first.
One final point worth noting, made by Karnofsky, is the following:
An interesting additional point is that this sort of ethics arguably has a track record of being "ahead of the curve." For example, here's Wikipedia on Jeremy Bentham, the “father of utilitarianism” (and a major sentientism proponent as well):
He advocated individual and economic freedom, the separation of church and state, freedom of expression, equal rights for women, the right to divorce, and the decriminalizing of homosexual acts. [My note: he lived from 1748-1832, well before most of these views were common.] He called for the abolition of slavery, the abolition of the death penalty, and the abolition of physical punishment, including that of children. He has also become known in recent years as an early advocate of animal rights.
Thus, history shows that the kind of moral circle exclusion required to reject longtermism has a poor track record, and is worth rejecting in general.