33 Comments

Two general points:

1) It seems like we shouldn't treat our intuitions the same across different domains. Especially where we know we are likely to have developed bad intuitions for, say, evolutionary reasons (think scope neglect), there may be grounds to flat-out reject them. We must understand that intuitions come about through an empirical, evolutionary process over time, and we can't treat them any differently just because they "feel different."

2) It seems like your approach is more frequentist than Bayesian (not that there's anything wrong with that; I'm merely pointing it out), in that the initial intuition serves as a sort of null hypothesis rather than just a probability (and given that you have no information, you might use some indifference principle to distribute credences equally across the partition). I would say, however, that if you do use this approach on the skeptical problem, perhaps you should use it consistently across all of epistemology, as I don't see why the cases would differ in method.


Under compatibilism, does ChatGPT have free will?


I'm not sure about ChatGPT, but a good example of a program that definitely has free will is a chess engine. It evaluates possible moves, searching backward from its desired state of the gameboard and forward from the current one, and then makes a decision, just like a human player would.
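The picture of an engine that evaluates the options ahead of it and then commits to one can be sketched as a toy minimax search. Everything here is illustrative: the two-move "game" (numbers, with +1 or *2 as moves, preferring states near 10) and the scoring rule are invented for the example, not taken from any real engine.

```python
# Toy illustration, not a real chess engine: a minimax search that
# evaluates the states reachable from the current one and then
# commits to a single move, the way the comment describes.

def minimax(state, depth, maximizing, moves, score):
    """Best achievable score from `state`, looking `depth` plies ahead."""
    options = moves(state)
    if depth == 0 or not options:
        return score(state)
    results = [minimax(s, depth - 1, not maximizing, moves, score) for s in options]
    return max(results) if maximizing else min(results)

def decide(state, moves, score, depth=2):
    """Pick the move whose resulting state minimax rates highest."""
    return max(moves(state), key=lambda s: minimax(s, depth - 1, False, moves, score))

moves = lambda n: [n + 1, n * 2] if n < 10 else []   # legal "moves" in the toy game
score = lambda n: -abs(10 - n)                       # prefer states near 10

print(decide(3, moves, score))  # prints 6: the engine deliberates, then commits
```

The point of the sketch is only that "deliberation" here is a search over alternatives followed by a commitment, with nothing conscious required.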


If it were conscious I would totally agree!!


Under compatibilism, free will is more of a continuum than a binary; rabbits have more free will than rocks but less than humans; humans have more free will drunk-but-awake than sleepwalking, and more sober than drunk.

So I think really the question is the extent to which ChatGPT gives answers that are reasons-guided. My understanding of the architecture is weaker than is necessary to answer this question, but my intuition is that when GPT is roleplaying a character (including the default "assistant") then that character has some level of preferences, and that when it explicitly reasons through its answers (either with a scratchpad or "reason step by step") then it's going to be guided more meaningfully by those preferences and the reasons that guide them. Whether that's more or less than a rabbit I can't say.


It seems to me that if a computer program, or a robot, or a bacterium, can plausibly be said to have free will, our definition of free will is wrong. I agree that the drone controller controls the drone, but I *disagree* that the drone controller has (any type of) free will, so the type of control the drone controller possesses is not the type that entails free will.

If compatibilism means that actions stem from desires which stem from initial conditions, isn't this precisely the type of determinism Sapolsky and Harris and friends advocate? Are there any determinists who are not compatibilists?


Yes, hard determinists are incompatibilists. Harris and Sapolsky are both hard determinists, not compatibilists. Both have also been panned by philosophers for their commentary on this topic.

Free will is the ability to make morally significant choices. So rabbits and ChatGPT are red herrings; neither has free will.


Why would you say that ChatGPT doesn't make morally significant choices? The most obvious one it can make is lying (on purpose) vs. telling the truth. GPT4 does seem to be capable of lying, as reported here:

https://gizmodo.com/gpt4-open-ai-chatbot-task-rabbit-chatgpt-1850227471

Now perhaps it doesn't have a "real" understanding of what lying really means? But then, how would we possibly find out if GPT20 does?


Is AI making choices in the relevant sense? It picks between two options, and does that to achieve a goal, but moral choices encompass far more than a binary on/off or yes/no state that achieves a pre-programmed goal.

The manipulation of the symbols which constitute language is only a simulation of meaning. It doesn't produce understanding. To find out if we're dealing with a moral agent requires "understanding" what it means to be one. It takes one to know one.


I don't know about theoretical free will, but a little alcohol can relax me and I act a bit freer. Now, too much and I can be way too free.


I don't think the existence of free will is all that intuitive if you look closely at direct experience. Seems like the contents of consciousness are not under any control at all. As a practice:

What is this feeling of will or control that we have? Start by examining the experience of considering a decision. It could be something simple, like deciding what to eat, or something complicated, like a major life decision. Notice both the reasons and the feelings that enter into your awareness while you are considering that decision. Where did the reasons and the feelings come from? Did you choose to feel the way you feel about the options? Did you choose the reasons that enter into your awareness in order to evaluate the options? You might say that the feelings and the reasons came from your self (whatever that is!) — but that begs the question. Could you choose to have different feelings, or different reasons? Try to feel otherwise, or try to have different reasons. First, where did the trying come from? If you watch carefully, can you experience the beginning of the intention to have other feelings or reasons emerge? Where did that intention come from? If new feelings or reasons entered into awareness, did you choose their contents? Where is the choosing? Keep searching and see what you can find. Is there anything like free will in your direct experience?

On the flip side you might want to check out Erik Hoel's argument for free will in The World Behind the World. Here's how he puts it: "Having free will means being an agent that is causally emergent at the relevant level of description, for whom recent internal states are causally more relevant than distant past states, and who is computationally irreducible." Causal emergence is a very interesting idea; at a high level, here's how he describes it: "In our original introduction of the idea of causal emergence, which was based on identifying cases where the macroscale has greater effective information than the microscale, we maintained that the macroscale excluded the causation at the microscale. That is, it flipped Kim's exclusion argument via intellectual judo: since we know that the macroscale is a better description of the causation governing two descriptions of the exact same occurrence, then what do we need the microscale for? In this view, the macroscale really does push around the microscale, and macroscale events really do cause microscale events."


Very much agree with your approach to philosophy. I also like the Michael Huemer School of Common Sense lol.

I think that analysis of "can" is the only one where "we can do otherwise" is compatible with Determinism (afaik). I just worry that saying "I could've done X if I wanted to" is a bit like saying "I could've had the 100m world record if I had a body like Usain Bolt's". It's like, sure, you could've done differently if your desires were different, but they weren't, and that's not your fault.

Also, I don't really share the intuition that we really could've done otherwise. Choices come from thoughts, but my thoughts just appear on their own. I think if we're gonna cash out free will it needs to be a Compatibilist kind.


I'd caution against simply trusting intuitions before you've properly calibrated them. It's a common failure mode for unfalsifiable beliefs. If you *feel* that something has to be true, then you should be able to present some kind of argument why. For example, with solipsism and subjective idealism in general, it's quite possible to notice that under them there is no particular reason why the universe would behave in an orderly manner, and yet it does, which points towards the existence of an external material world. Philosophy which never questions common sense is impotent and presumptuous.

But generally, the world should add up to normality, and if your intuitions are wrong, there has to be a pretty good explanation why. You shouldn't refuse to question your initial assumptions, but neither should you forsake them at every opportunity. If your naive intuition happens to be wrong, it's reasonable to correct it, use better definitions, and shift your intuition towards them, instead of abandoning the concept altogether.


"But crucially, if we’re part of the causal sequence, it can be both us and the initial conditions of the universe that control some event."

So what? If you pointed a gun at me and forced me to press the subscribe button to your Substack, I would be "in control" over whether the button is pressed in the sense that I would be a part of the causal sequence - but who cares! That's not significant with regards to being responsible for my actions. Likewise, I can't be in control over my actions in any significant sense if the initial conditions of the universe caused me to do them. And those are much more powerful than a gun, as I could still technically violate your orders and choose death. But I can't choose to violate what the initial conditions of the universe say.


>Here’s one important feature of good philosophy: it shouldn’t deviate from common sense unless it needs to. It is a cost to David Lewis’s theory, for instance, that he says that every possible world concretely exists, that there are infinite spatiotemporally disconnected leprechauns.

It's a cost of every single modality theory that it postulates entities that are meant to explain modal discourse. Ordinary modal talk isn't precommitted to Lewis's modal realism or to ersatz possible worlds or to a shared agreement of prepending "According to the fiction of..." before each modal claim. There's no common sense explanation for why modal discourse occurs, why it's successful, why it's sometimes not, or that there needs to be something standing behind it to prop it up, like a philosophical theory of modality.

> Similarly, it’s reasonable to believe in moral realism because it is intuitive that certain things are wrong.

Moral antirealist relativist theories also agree that certain things are wrong. Also, the amount of work put towards establishing semantic theories of moral terms in metaethics doesn't fit well with your picture that philosophy shouldn't repudiate common sense, since the semantic theories are explanatory danglers that aren't present in ordinary moral discourse. Nobody gets introduced to metaethical semantic theories about the meanings of moral terms before using moral terms, hence there won't be a common sense metaethical theory.

>It makes sense to believe that there’s an external world, even if one doesn’t have arguments to refute solipsism. This is because it sure seems like there’s an external world,

I take it that the force of skeptical theories is that you in fact don't have evidence that decides between the external world existing and e.g. being envatted or deceived by an evil demon. If it just "seemed" like the external world existed absent any theoretical commitments, then I'm not sure why skeptical theories would even be produced or used as counterarguments. They would just "seem" to be obviously wrong, and would present no philosophical puzzle.

(Also, your theodicy that God would put us in an indifferent world can also be used to argue that he would put us in e.g. an envatted world. In order for the theodicy to succeed, you have to give up epistemic access to the certainty you have that an all-good God would just e.g. make us experience constant pleasure in favor of some sort of weird deferred relationship with God where he appears totally absent. It's not clear to me why you would be in a better epistemic situation than God to know that there is something inherently deceitful about being a brain in a vat.)

> As even the opponents of free will admit, free will is intuitive. It sure seems like we are free to make various choices.

I consider myself an opponent of free will and I don't admit this. I think free will is a paradigm example where philosophers and the discourse they construct around a certain topic is perennially misinterpreted by nonphilosophers. You can for example search on /r/askphilosophy or on Twitter or look through the comments section of any video on free will and see hundreds of examples of confused laymen not understanding what compatibilism is, how determinism doesn't rule out free will (that's the motivation behind the phrase, "compatibilism"), presuming that the common sense position among philosophers is that there's no free will or that there are no good arguments for it, presuming that the common sense position among neuroscientists, psychologists, cognitive scientists, etc is that they've already discovered there's no free will... I also think that this is evidence there isn't a common sense notion of free will that people are enculturated into, and that philosophical theories of free will (philosophical theories in general) are not going to be able to recover "common sense" or "intuitiveness," insofar as these can even be construed as measures.

> If my desires were different, my actions would be different. My desires are causally responsible for my actions.

If I program a computer to have desires and be able to act on them, but the desires I program are unchangeable, then this computer would also have free will according to your definition, even though it can never change any of its desires by stipulation. This isn't felicitous, but there seems to be no fact of the matter that would decide between your affirming that this is a case of free will and somebody denying this is a case of free will. If the denier stipulates (like you've done) that you must be able to change your desires in order to have free will, they will have just as valid a definition of free will as you have.

>Our desires may be determined, but our actions depends on them, so in the counterfactual sense, it’s true that we can do otherwise if we want to.

Yes, and the free will skeptic will push these points: "How are you free if it will never happen in the universe that you change your desires? How are you free if the initial conditions of the universe + forward deterministic time evolution guarantee that you will never be able to change your desires in the lifetime of the universe? How are you free if every time we rewind the clock, you act in the same exact way you do now?" And just to reiterate, I don't think there's anything metaphysical going on here, just people disputing which words they want to use, and using intuition pumps to try to coerce you to agree with their definition by changing the dialectical context to make attributions of free will seem more or less felicitous in that context. But you could always just stay steadfast and revise whichever sayings you want. ("Of course I have free will if time is reversed and I act out the same desires each time, I'm defining free will to mean that I would have acted differently if I had different desires." "Of course I don't have free will if time is reversed and I act out the same desires each time, I'm defining free will to mean that my actions couldn't have been predicted even if my desires were 100% determined.")


> Nobody gets introduced to metaethical semantic theories about the meanings of moral terms before using moral terms, hence there won't be a common sense metaethical theory.

Oh it's this sophist again. Your reasoning reeks of BS. Formalizing exposes where it is. What's the premise-conclusion argument for "there won't be a common sense metaethical theory"?


1. People aren't taught second order metaethical theories before they are taught first order ethical discourse.

2. If people aren't taught second order metaethical theories before they are taught first order ethical discourse, there isn't a common sense metaethical theory.

3. There isn't a common sense metaethical theory.


Good girl. Now, what's the premise-conclusion argument for P2?


One thing I've struggled with in my life is weight issues. I have been on a variety of diets.

There's a small but frequent choice I face... late at night, feeling an urge to eat something delicious, but knowing it would not be healthy for me. Sometimes I resist the urge, sometimes I don't. In other words, in each of these cases, I made a decision here.

Could I imagine making a different decision? In any one instance, YES. Definitely.

Do I think I could successfully resist the urge 100% of the time, or give in to the urge 100% of the time? Probably not. My conscience would probably not allow a total failure rate but my short-term desires probably could not be entirely resisted. But, I do intuitively feel, very strongly so, that I have a significant amount of free will here. That my free will has a significant impact on success rate vs. failure rate here.

And in realizing the above, I sense a real danger in people totally dismissing free will. I intuitively sense that for many people it's very tempting to dismiss free will because then you can better silence your conscience when facing decisions where to gain something you desire you'd have to do something you consider harmful (either to yourself or others).

Now, it's perhaps possible to overstate free will. People really do have inherent natures to them, in my experience. But even within the framework of our genetic predispositions, we do have at least some free will I believe. I believe this because I can think of many small decisions I've made in my life where I could easily imagine myself making a different decision. Admittedly, I think a person's nature can be overwhelming when it comes to bigger life-changing decisions, like deciding to get married or not.


I still think Huemer's free will proof is correct, and don't understand why people reject it. It doesn't make sense to deliberate between free will (A) and hard determinism (B) when you can only pick (A) or only pick (B). Therefore, under (at least) hard determinism, you can't rationally deliberate between the view and some other view!


Great post, and agree with most of what you say!

But I must say that your compatibilist PAP seems a bit problematic. So I take it you are basically claiming that we could have done otherwise if we had wanted to do otherwise, and that this is strong enough to say that we could have done otherwise.

But under determinism the states of your brain are of course also determined. Furthermore, desires correlate (I assume) with brain states. So for you to have wanted something else, your brain would have needed to be in another physical state. So your account is really that if another physical state had obtained, then other things would have followed, and this is enough to say that you had a free choice.

Consider now a stone rolling down a hill. It will end up in a specific place at the bottom of the hill (and assume this is determined). If it had started in a slightly different physical position, it would have ended up in a different place. So if another physical state had obtained, another thing would have followed. So the rock is free in its choice of where it ends up? That seems like an absurd conclusion, but I don't see what the relevant difference is, given your account.


The difference is that the forces that sent the stone down the hill are not part of the stone itself, while your brain and its states are part of you.

But even more importantly, the stone does not execute a decision making algorithm where some states of the universe are marked as reachable from the current position and some states are marked as desirable in the mind of the decider. And the point is to find a way from reachable states - things that you could do - to desirable states - things that you want.

Saying "I did A but could've done B" means that there was a moment where you yourself weren't sure whether you would choose A or B, where B was a reachable state according to your decision making algorithm, but then you chose A, thereby making B no longer a reachable state.
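The decision making algorithm described above (enumerate the states reachable from the current position, mark some as desirable, then commit to one, foreclosing the rest) can be sketched in a few lines. The state graph, the desirability predicate, and the tie-breaking rule are all invented for illustration.

```python
# Minimal sketch of the described decision procedure: breadth-first
# enumeration of reachable states, then committing to one desirable
# state, after which the alternatives stop being live options.

from collections import deque

def reachable(start, neighbors):
    """Breadth-first enumeration of every state reachable from `start`."""
    seen, queue = {start}, deque([start])
    while queue:
        for nxt in neighbors(queue.popleft()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

def decide(start, neighbors, desirable):
    """Commit to one desirable reachable state; the rest are foreclosed."""
    options = {s for s in reachable(start, neighbors) if desirable(s)}
    choice = min(options)  # arbitrary deterministic tie-break
    return choice, options - {choice}

graph = {"home": ["cafe", "gym"], "cafe": ["work"], "gym": ["work"], "work": []}
choice, forgone = decide("home", graph.get, lambda s: s in {"cafe", "gym"})
print(choice, forgone)  # prints: cafe {'gym'}
```

Before `decide` runs, both "cafe" and "gym" are reachable-and-desirable; after it commits, one remains the choice and the other is merely the forgone alternative, which is all "could have done B" means on this picture.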


I don't know exactly why where the force is coming from should make a difference. But if that is important, then consider instead a log burning. Here the reaction is surely happening within the log. But it doesn't have free will with regards to whether it burns, or how it burns, just because if you didn't light it, it would not have burned, or if it was lit another way, it would have burned differently.

With regards to states being reachable for the system, epistemically they are "reachable", but the state of the system at time t+1 is entirely determined at t, and so it is actually impossible to reach that state, even if you think it is.

I think one way that might be helpful to think about it is with a philosophical zombie. If I were a philosophical zombie then I could certainly not have done otherwise in any case, given determinism (I would guess you would agree). But adding back in mental states will, by stipulation, not change anything at all about the physical states, and so all the exact same things necessarily happen. So it is strange to say that something else "could have happened" now, when absolutely nothing has changed about what happens and the way in which it happens.

Although I might be misunderstanding what you are actually saying is the relevant difference.

Jun 8·edited Jun 8

> But it doesn't have free will with regards to whether it burns, or how it burns, just because if you didn't light it

Exactly. I lit it. Not the log itself. The decision process to light the log happened outside of it, inside my brain, which is part of me. We can say that I have the free will to light the log, or that the system of me+log has free will, but the log itself doesn't.

> With regards to states being reachable for the system, epistemically they are "reachable",

Yep. Having free will is a property of the mind.

> but the state of the system at time t+1 is entirely determined at t, and so it is actually impossible to reach that state, even if you think it is.

At time t the decision that you will make at time t+1 is fully *determinable,* but it's not yet *determined*. Someone has to do the actual physical work to *determine* this decision. And this someone is you. The fact that the decision is *determinable* is exactly what allows you to make it. As soon as you *determine* the decision the states previously marked as reachable cease to be so, but not beforehand.

Making decisions is not unlike making a cake. At time t there can be all the ingredients to make a cake, so the cake is *makeable*, but this doesn't mean that it's already *made*. The work still has to be done.
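The *determinable* vs. *determined* distinction can be illustrated with a pure function: the outcome is fixed by the inputs, so anyone able to run the function could predict it, yet nothing is settled until the computational work is actually done. The inputs and the rule below are made up for the example.

```python
# Determinable vs. determined: a pure function's output is fixed by
# its inputs, but the decision only becomes determined when someone
# actually does the work of computing it.

def decision(inputs):
    # Pure: same inputs always yield the same choice, so the outcome
    # is determinable in advance by anyone who can run this function.
    return "A" if sum(inputs) % 2 == 0 else "B"

pending = lambda: decision([3, 5, 8])  # determinable, but not yet determined

# Only when the work is done does the decision become determined:
outcome = pending()
print(outcome)  # prints A: the sum is 16, which is even
```

On this view, the decider is simply whatever system performs that computation, which is why predictability does not take the decision away from it.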

> I think one way that might be helpful to think about it is with a philosophical zombie. If I were a philosophical zombie then I could certainly not have done otherwise in any case, given determinism (I would guess you would agree).

Nope. The question of consciousness is completely orthogonal to the question of freedom of will as far as I can see. Minds have free will when they execute a decision making algorithm. This execution doesn't have to be conscious. Consider a chess engine. It evaluates the states of the gameboard, searching for a way to achieve the desired state from the current one, marking some of these states as reachable from the current position as the search goes. It makes decisions even without being conscious. Likewise, not all parts of your mind used in the decision making are perfectly consciously legible to you, yet you make decisions nevertheless.


>Exactly. I lit it. Not the log itself.

Yeah sure, but from then on, the process runs within the log. Your brain is also a physical system which requires energy in the form of ATP and whatever, which gets its energy from food which comes from without. The beginning of your brain's functioning was also not your choice. Your brain activity presumably started at some point when you were an embryo, and you couldn't have chosen how and why it started. And from that moment on there was no possibility of you ever deciding to write anything other than the exact words you wrote above. Likewise, from the moment the log is lit, all its "decisions" of how it burns come from "within" the log. But it was all fully determined at the moment it was lit. So I just don't see how there is a relevant difference.

> At time t the decision that you will make at time t+1 is fully *determinable,* but it's not yet *determined*. Someone has to do the actual physical work to *determine* this decision. And this someone is you. The fact that the decision is *determinable* is exactly what allows you to make it. As soon as you *determine* the decision the states previously marked as reachable cease to be so, but not beforehand.

I don't think this distinction works. In the case of a log, it also wouldn't be *determined* that it would burn in this exact way yet, because it hasn't actually happened yet; it would only be *determinable*. But surely the log couldn't have burned otherwise just because it hasn't happened yet - it is a fully determined and mechanistic process. Even if you don't know what you will do yet, someone who was smart enough (Laplace's demon or something) *could*. But the fact that you are not smart enough to predict your own decision does not change the modal facts: there still is no possible world where you have the same brain-state at time t, but do something else at t+1.

>Nope. The question of consciousness is completely orthogonal to the question of freedom of will as far as I can see.

Fair enough, didn't expect you to hold that.

I also just want to make clear that I am not disputing compatibilism in general - I think it is probably correct. I just think that compatibilism shouldn't try to save a notion of being able to do otherwise, because it just is literally impossible to ever bring about a different outcome. And when you introduce counterfactuals, then it is of course obvious that you "could have done otherwise" but you are just also no longer talking about the actual world, but a different possible world which it is (and was) literally impossible to reach from our own actual world.


> Yeah sure, but from then on, the process runs within the log.

The process of burning, yes. That's why we say that the log is burning. Likewise, if the process of decision making were running inside the log, we would say that the log is making decisions, or, in other terms, has free will.

> Your brain is also a physical system which requires energy in the form of ATP and whatever, which gets its energy from food which comes from without. The beginning of your brain's functioning was also not your choice. Your brain activity presumably started at some point when you were an embryo, and you couldn't have chosen how and why it started.

Sure. But as my brain is part of me, the decision making process is running inside me and so it's fine. The fact that I didn't control the beginning of my brain functioning is irrelevant.

> And from that moment on there was no possibility of you ever deciding to write anything other than the exact words you wrote above.

Of course there was! Possibility is in the mind of the decider. Before the decision was made I didn't know what I would write, and only when I decided did writing alternative things cease to be possible according to my decision making algorithm.

> Likewise, from the moment the log is lit, all its "decisions" of how it burns come from "within" the log.

The log doesn't make any decisions. I do. That's the whole point. Let me state it once again: for something to have free will it has to execute a decision making algorithm in which some states are initially marked as reachable and then cease to be when the decision is made.

> Even if you don't know what you will do yet, someone who was smart enough (Laplace's demon or something) *could*.

Via simulating my decision making algorithm, yes. Still, even in such situation, the decision I make is determined by the execution of my decision making algorithm, instead of something else. I'm still the decision maker, even if it's possible to predict my decision.

> There still is no possible world where you have the same brain-state at time t, but do something else at t+1.

> I just think that compatibilism shouldn't try to save a notion of being able to do otherwise, because it just is literally impossible to ever bring about a different outcome. And when you introduce counterfactuals, then it is of course obvious that you "could have done otherwise" but you are just also no longer talking about the actual world, but a different possible world which it is (and was) literally impossible to reach from our own actual world.

There is a coherent way to use the term "possible" with compatibilism and I don't see any reason not to do it. The notion of "couldness" is reduced to whether the states are reachable according to the decision making algorithm. As soon as you understand it, you notice that other ways to talk about possibility make much less sense.

Like, imagine if it's *not determinable in principle* whether your decision making algorithm chooses A or B based on a specific input. Then how could *you* make such a decision, which is literally determining this exact thing?


If you define being free as “acting in accordance with your will”, even in the most extreme materialistic and epiphenomenalist case, free will exists.

https://www.lesswrong.com/posts/nY7oAdy5odfGqE7mQ/freedom-under-naturalistic-dualism

The whole point is to put “free will” in the subjective noumenal side of reality, as a qualia.


Agreed. Everyone believes "will" exist. Determinists like myself just fail to see the "free" part.


Will exists, but sometimes you cannot act as you wish.

Your degrees of freedom are the self-assessed space of possible actions that you as a conscious being can choose from. Of course your choice is determined (by your will from the subjective perspective, by the physics of your brain from the objective perspective).

But as long as there is a conscious being, there is will (your objective function) and the domain of freedom (your decision space).


What do you think about van Inwagen's consequence argument against compatibilism?


Do you think that what action someone does is determined by what they want to do? Or do we ever genuinely act against our desires?

author

I think the word desire is ambiguous between what one ultimately chooses vs having a psychological inclination in the direction of. No to the first, yes to the second.


I agree. Anything we choose is something we "wanted" to do: if we didn't want to do it, we wouldn't have chosen to do that!
