
This seems to presuppose that knowing non-natural facts requires NR over and above the functions of the ordinary physical mind: unless there is NR, we can’t use deductive reasoning to conclude transitivity or that 1+1=2. But that presupposes that only a non-physical mind can perceive non-physical facts, and it doesn’t sufficiently deal with the possibility that the functions of the physical mind can use deduction to perceive non-physical facts.

Computers wouldn’t have acquaintance knowledge without conscious experience, but they clearly have deductive powers that would allow them to infer non-natural facts, especially the necessary facts you list, which would be true in all possible worlds and which physicalism gives us no reason to doubt. Physicalism is perfectly compatible with the ontology and epistemology of non-physical facts, given the deductive functions it explains. But suppose it isn’t, and non-natural facts do require NR to be justified: what would NR need to have in order to give it the power to perceive non-natural facts?
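
To make the point about mechanical deduction concrete, here is a minimal sketch in Lean 4 (my own illustration, not anything from the post or this comment): a proof checker, a purely mechanistic system with no conscious experience, verifying exactly the necessary truths mentioned above.

```lean
-- Illustrative sketch: a purely mechanical checker verifying
-- necessary truths. (Not from the original discussion.)

-- 1 + 1 = 2 is verified by definitional computation.
example : 1 + 1 = 2 := rfl

-- Transitivity of equality, derived deductively from two hypotheses.
example (a b c : Nat) (h1 : a = b) (h2 : b = c) : a = c :=
  h1.trans h2
```

The checker’s verdicts are fixed entirely by its rules and inputs, which is the commenter’s point: deductive power doesn’t obviously require anything beyond mechanism.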


This is really odd. The brain evolved a lot of very useful machinery, like intuitions about social behavior, quantities, the reliability of induction, and non-contradiction, because that machinery is useful. Is it the best possible model of the world? Is it metaphysically transcendent? Or is it just a bag of tricks that works pretty well for us? Who knows? Maybe I don't have the right kind of brain to ever know!


It seems like there are two questions lurking here. One is whether our beliefs about non-natural facts are explained by those facts. It seems to me that, if non-natural facts are causally inert, the answer is just no on either NR or ~NR. It’s not clear to me how non-natural facts are supposed to explain our beliefs even on NR.

The other question is whether our beliefs about non-natural facts are reliable in some sort of probabilistic sense, like the sense captured in your premise 1. NR seems to guarantee this by basically stipulating that our non-mechanistic reasoning faculty is reliable. But stipulating reliability isn’t the same as explaining it. This stipulation doesn’t seem like a very deep advantage to me—at least without a positive reason to think our beliefs couldn’t be reliable on ~NR.
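
For concreteness, here is one standard way to gloss the probabilistic sense of reliability at issue (my own formulation; the post’s premise 1 isn’t quoted here, so this is only an approximation):

```latex
% Illustrative gloss, not the post's own premise 1:
% a faculty F is reliable about a domain D iff, for propositions p in D,
\[
  \Pr\bigl(\, p \text{ is true} \;\big|\; F \text{ produces the belief that } p \,\bigr) \gg \tfrac{1}{2}.
\]
```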


> "I think all the premises are hard to argue against."

Premise 4 is false. Intuitions aren't reasons; rather, they aim (sometimes successfully, sometimes not) to put us in touch with intrinsically credible propositions. It's those propositions (e.g. "pain is bad", and "badness isn't a physical property") that are good reasons for believing other things (e.g. that there are non-physical properties).

> "You do not get to stipulate that the process that leads to our beliefs is wildly unreliable, but then also that you get non-inferential justification!"

Who ever claimed otherwise? My response to Street's moral lottery is precisely to show how we can have *reliable* moral beliefs despite moral properties being causally inefficacious. The key is to individuate "processes" by their substantive starting points: two processes may be structurally similar while only one of them is actually reliable, because they start from different substantive assumptions (e.g. that pain is good versus that pain is bad).
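
A toy model may help here (my own sketch with illustrative names, not Chappell's formalism): the same inferential structure, run from two different substantive starting points, yields one reliable process and one unreliable one.

```python
def extend(seed_valence: str) -> list[str]:
    """Apply one shared, purely structural inference rule to a
    substantive starting point about pain's value."""
    beliefs = [f"pain is {seed_valence}"]
    # Shared rule: if X is bad, gratuitously causing X is wrong;
    # if X is good, gratuitously causing X is permissible.
    verdict = "wrong" if seed_valence == "bad" else "permissible"
    beliefs.append(f"gratuitously causing pain is {verdict}")
    return beliefs

# Structurally identical runs, differing only in their starting point.
reliable_process = extend("bad")     # true substantive starting point
unreliable_process = extend("good")  # false substantive starting point

print(reliable_process)    # ['pain is bad', 'gratuitously causing pain is wrong']
print(unreliable_process)  # ['pain is good', 'gratuitously causing pain is permissible']
```

Both runs share one mechanism; only the substantive starting point distinguishes the reliable process from the unreliable one, which is how the comment proposes to individuate them.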

> "A computer can’t have direct acquaintance with any facts!"

Are you assuming that a computer can't have a mind? If the psycho-physical bridging laws connect computational states to phenomenal ones, then I don't see any in-principle barrier here. The key thing is just that direct acquaintance requires phenomenal consciousness. Once you've got that, I don't see why you'd further need your consciousness to magically push atoms around.

> "this seems to still call out for explanation."

Look up a few paragraphs to where you gave an argument for the conclusion that there's "an easy explanation" here. (The magic 8-ball analogy is terrible because the 8-ball involves a chancy mechanism, and so could not reliably track any necessary truths. Brain mechanisms need not be chancy in this way. So it's perfectly possible for brains to reliably track necessary truths, *so long as they're set up in the right way.* To check whether they are so set up, we just need to ask: (1) What are the truths? (2) Can we make sense of why belief in *those propositions* would result from natural selection, etc.?)
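
A minimal sketch of the 8-ball point (my own illustration, under the simplifying assumption that "tracking a necessary truth" just means answering a fixed question correctly): a chancy mechanism is right only by luck, while a deterministic mechanism that is set up the right way is right every time.

```python
import random

# The fixed necessary truth the mechanisms are asked about.
NECESSARY_TRUTH = (1 + 1 == 2)

def magic_8_ball() -> bool:
    """A chancy mechanism: its verdict is unrelated to the question."""
    return random.choice([True, False])

def well_set_up_mechanism() -> bool:
    """A deterministic mechanism set up the right way: it computes
    the answer, so it tracks the truth on every run."""
    return 1 + 1 == 2

TRIALS = 10_000
ball_hits = sum(magic_8_ball() == NECESSARY_TRUTH for _ in range(TRIALS))
mech_hits = sum(well_set_up_mechanism() == NECESSARY_TRUTH for _ in range(TRIALS))

print(f"8-ball accuracy:    {ball_hits / TRIALS:.1%}")  # ~50%
print(f"mechanism accuracy: {mech_hits / TRIALS:.1%}")  # 100.0%
```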

> "The falsity of NR is often assumed but rarely argued for."

Here's an argument:

1. Zombies are possible.

2. Zombies would say the same things we do, as a result of purely mechanistic processes.

3. Given (2), and since zombies are physical duplicates of us, the things we say are explicable as a result of purely mechanistic processes.

C. So NR is false.


Good work on physicalism.
