(Taking a break from studying for a midterm to do more fun things like thinking about epistemic pluralism).
I’m something of a scalar utilitarian. It doesn’t make sense to think of acts in binary terms as right or wrong, where every act that is less than perfect is wrong. Instead, it makes most sense to rank acts from best to worst, with the ranking reflecting the strength of people’s reasons to take each act. Though I think Yetter Chappell is right that the three views (maximizing, satisficing, and scalar utilitarianism) are ultimately all compatible.
But I wonder if something similar could be true of the epistemic domain. It seems odd to hold that there’s some categorical threshold a view must cross to count as rational; rather, views differ in the weight of the reasons supporting them. On this picture, maximizing accounts would explain what you have most epistemic reason to believe; scalar accounts would correctly represent the weight of reasons as having no firm cutoffs, so that beliefs are simply ranked from most to least justified; and satisficing accounts would explain which beliefs one must hold in order not to be crazy or irrational.
I’m not sure, but I think this is a pretty original idea. There tends to be the notion of a precise cutoff between justified and unjustified beliefs. But this seems as strange when applied to epistemic normativity as it is when applied to moral normativity. Why would there be some precise point at which a belief becomes irrational? If we take epistemic reasons to count in favor of various beliefs, then there wouldn’t be a precise cutoff; there would just be beliefs with more or less counting in favor of them.
I think the most accurate way to think about epistemic normativity will generally be a scalar one. The most rational thing to believe is what a fully rational being would believe when confronted with the relevant facts: they would pick up on, and precisely weigh, all of the relevant reasons. I also think, though I’m less sure about this, that fully rational beings wouldn’t disagree. My argument for this is the following.
1. A perfectly rational being would correctly weigh the epistemic reasons.
2. Thus, a perfectly rational being would believe only the proposition that they have most reason to believe.
3. If two beings both believe what they have most reason to believe (given the same evidence), they have the same belief.
So two perfectly rational beings confronted with the same facts would hold the same beliefs, and hence would not disagree.
However, we humans are not maximally rational; we do not pick up on the precise strength of the various reasons. Thus, we have lots of false beliefs and we weigh reasons inadequately. I think very rational humans can certainly disagree without any of them making a factual error, but this is because, as humans, we can’t precisely calculate the weight of reasons. If we were maximally epistemically rational, I don’t think there would be disagreement, for the reasons given above.
So, do people have any objections? And has anything like this been proposed previously in the literature?
Premise 3 can be false if either of the following holds (I try to make this a bit more precise below):
(a) The epistemic reasons sometimes weigh multiple beliefs as precisely equal.
(b) Epistemic reasons aren’t precisely commensurable, in which case there may be multiple beliefs for which no other belief has more reason-support, even though those beliefs don’t have precisely the same amount of support.
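For what it’s worth, here is a toy way of making the argument and these two failure conditions precise. Representing reason-support with a single real-valued function is just a simplifying assumption of the sketch, not something the scalar view itself is committed to (giving it up is exactly what case (b) amounts to).

Let $B$ be the set of candidate beliefs and $s : B \to \mathbb{R}$ the total weight of epistemic reasons favoring each belief. A perfectly rational agent believes some $b^* \in \arg\max_{b \in B} s(b)$. If that argmax is a singleton, then any two perfectly rational agents facing the same evidence (and hence the same $s$) end up with the same belief, which is just premise 3. Premise 3 fails when either (a) $\arg\max_{b \in B} s(b)$ contains more than one belief, so there are exact ties, or (b) reason-support is only a partial order $\succeq$ on $B$ rather than a real-valued $s$, so there can be several maximal beliefs that are incomparable rather than tied.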