I think the notion of limitation here obscures more than it clarifies. There are plenty of more intuitively limited things that are simpler than less limited things. The Game of Life, for example, is a simpler kind of dynamical system than idealized Newtonian gravitation, even though there's a kind of "speed limit" in the former that doesn't exist in the latter.
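(As a concrete aside: the entire Life update rule fits in a few lines. This is a minimal sketch of my own, using a toroidal grid as a simplifying assumption; the point is just how short the description is, built-in speed limit and all.)

```python
import numpy as np

def life_step(grid):
    """One tick of Conway's Game of Life on a 2D 0/1 array (toroidal edges)."""
    # Count the 8 neighbors of every cell by summing shifted copies of the grid.
    neighbors = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # Birth on exactly 3 neighbors; survival on 2 or 3.
    return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(int)

# The "speed limit": each cell's next state depends only on its immediate
# neighborhood, so no influence can propagate faster than one cell per tick.
```

That's the whole dynamical law; the "limited" system has the shorter description.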
I'd speculate that what you have in mind in most paradigmatic examples of "pathological limitation" which decreases simplicity is taking some fixed description of a system, and then tacking on an extraneous detail declaring some specific case/configuration (hitherto unremarkable) to be forbidden. For instance, Newtonian gravitation plus the weird addendum that everything must be moving below 100 mph at all times. There's an argument to be made that most of the time this will increase Kolmogorov complexity, hence makes things more complex.
But if we have two extraordinarily different theories, neither of which is a close restriction of the other, this observation seems inapplicable. Instead, we need to zoom out and do our best to compare (our approximations of) Kolmogorov complexity overall directly. And I'd argue that when we do this, none of the attributes you propose come out looking very simple.
For example, "knowing every true thing" is extremely complicated. If we discovered a highly mysterious alien computer somewhere that correctly answered some hard math questions (e.g., proof/disproof of the Riemann Hypothesis or P = NP), the hypothesis that it could correctly answer *any* math question (including all the undecidable ones!) would nevertheless remain very complex, because it would involve postulating that the fundamental laws of physics which underlie the computer's physical operations are uncomputable: for every true undecidable mathematical statement, there'd have to be something like an extra law stating atoms have to move a certain way in such-and-such a situation to reflect that statement's truth. Otherwise, how would the alien computer always answer in the right way? By contrast, a computable set of laws on which the computer is maybe superintelligent but still mathematically non-omniscient, and constrained to only being able to compute things these laws allow (unless it answers randomly and gets lucky), should be regarded as simpler.
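To make the "including all the undecidable ones" point concrete, here's a minimal sketch (in Python, my own choice of illustration, not anything from the thread) of Turing's diagonal argument: no computable procedure can correctly answer even every *halting* question, since given any claimed decider we can build a program it must misclassify.

```python
def diagonal(halts):
    # Given ANY claimed halting decider `halts(f)` for zero-argument
    # callables, construct a program that does the opposite of whatever
    # the decider predicts about it.
    def g():
        if halts(g):
            while True:   # loop forever if the decider claims we halt
                pass
        # otherwise fall through and halt, contradicting the decider
    return g

# Toy "decider" (a stand-in assumption, not a real oracle): claims
# nothing halts. The diagonal program then halts, proving it wrong.
always_loops = lambda f: False
g = diagonal(always_loops)
g()   # returns immediately, even though the decider said it wouldn't
```

The same construction defeats any other computable candidate for `halts`, which is why "answers every math question correctly" can't be implemented by computable laws.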
It gets worse when you broaden "can answer any mathematical question correctly" to "can answer any question correctly." Because now there are a bunch of semantic paradoxes your theory about the alien computer has to circumvent, such as "Hey Alien Computer, will you answer negatively to this question?" So instead of "can answer any question," you have to sharpen it to "can answer any question in [restricted class of questions that avoids paradox]." But spelling out that restricted class *precisely* is hard. It may also be uncomputable, i.e., there might not be a reasonable decision procedure to identify whether a question is in that class or not. So this hypothesis looks really bad! Better to go with a "limited" universe.
Obviously absence of limits isn't the only feature of prior probability but it's a major one. I don't think Kolmogorov complexity is what's relevant to simplicity. Minds are a simple sort of thing even if, as dualists suppose, they can't be reduced or computed, and an unlimited mind is the simplest kind. I agree that designing a mechanism to answer every question would be super complex, but a mind with direct awareness of everything isn't like that.
>Obviously absence of limits isn't the only feature of prior probability but it's a major one.
I'm trying to suggest this factor is only relevant to simplicity insofar as it shaves off minimum description length of our overall theory of the world (or something roughly along those lines). And while "absence of limitations" may do that for many things, it doesn't do that in the case of theism; it arguably drastically increases that length. It's why we shouldn't infer the alien computer in my thought experiment knows the answer to literally any question merely because it knows the answer to everything we've asked so far: that's not truly part of the simplest hypothesis of the overall world consistent with the data!
>Minds are a simple sort of thing even if, as dualists suppose, they can't be reduced or computed, and an unlimited mind is the simplest kind.
The meaning of "minds are simple" is unfortunately fairly ambiguous. But even if they are simple or irreducible in some sense, what matters to what I'm arguing is whether certain descriptions that completely characterize their abilities/behavior - given their irreducible essence or whatever - are compressible or not. Minds are on the same playing field as everything else, right? Copernicus' heliocentric theory was to be preferred over prior geocentric theories which needed to add dozens of epicycles because (again, roughly speaking) the former mathematical description of planetary behavior was much shorter and required fewer free parameters. This would remain true even if it turned out planets in themselves were extremely simple! Planets might have even turned out to be completely irreducible point particles. Heliocentric theory would still have been simpler.
>I agree that designing a mechanism to answer every question would be super complex, but a mind with direct awareness of everything isn't like that.
I don't really see a difference between "mind M has direct access to every truth" and a separate law of physics - or maybe law of metaphysics? - stating that M will behave such-and-such a way if asked such-and-such a question for each individual truth, or something equivalently messy.
//I'm trying to suggest this factor is only relevant to simplicity insofar as it shaves off minimum description length of our overall theory of the world//
I don't think so. If there are two theories, one of which says that something is infinite in size while the other says it's finite, then even if they don't affect the length of descriptions, the first seems more probable.
//The meaning of "minds are simple" is unfortunately fairly ambiguous. But even if they are simple or irreducible in some sense, what matters to what I'm arguing is whether certain descriptions that completely characterize their abilities/behavior - given their irreducible essence or whatever - are compressible or not.//
You couldn't compress the description of *how to give rise* to a mind. But minds are still simple in that they're a fundamental bit of the universe that doesn't break down.
//I don't really see a difference between "mind M has direct access to every truth" and a separate law of physics - or maybe law of metaphysics? - stating that M will behave such-and-such a way if asked such-and-such a question for each individual truth, or something equivalently messy.//
Well I don't think that knowledge is just a disposition to answer questions in a certain way. Zombies, in my view, don't have knowledge, and you could know stuff while not having any will.
>I don't think so. If there are two theories, one of which says that something is infinite in size while the other says it's finite, then even if they don't affect the length of descriptions, the first seems more probable.
Depending on the subtleties of what you mean by "size," "finite in size" does not precisely specify the subject's size, whereas "infinite in size" presumably does. (Although even this only works if you're assuming things like sizes are modeled by real numbers. If you think that universes governed by hyperreal number-based physics are metaphysically possible, then there are just as many infinite hyperreals as there are reals. And in such universes I don't think even you would consider the infinite hypothesis simpler.) So I don't think this example succeeds: the finite hypothesis is, under the hood, existentially quantifying over a bunch of more complex specific hypotheses - "the object has size 0.1 units or it has size 0.2 units or..." - in a way that the infinite hypothesis isn't.
If you try to pull this same move with limitless divine knowledge rather than limitless size, this virtue of precision fails. You'll end up instead with an existential quantification over the many highly complex ways of being omniscient (or rather, having a maximal collection of known beliefs) *precisely*.
>You couldn't compress the description of *how to give rise* to a mind. But minds are still simple in that they're a fundamental bit of the universe that doesn't break down.
I'm not sure I understand how this is responsive. I don't think I've mentioned anything about how to give rise to anything else, and am also unsure what that's exactly supposed to mean. I've only talked about descriptions which successfully capture patterns precisely, however those patterns arose metaphysically.
I believe my last analogy was illustrative: if planets had turned out to be point particles which are fundamental bits of reality that don't break down any further, some descriptions of their behavior and "abilities" would still be simpler than others, like heliocentrism vs. hundred-epicycle geocentrism. They just wouldn't be simpler by virtue of breaking down planets into smaller and more familiar constituents. Similarly, if minds are fundamental, some descriptions of what existing minds do and believe will be simpler than others. They just won't be simpler by virtue of breaking minds down into smaller building blocks.
>Well I don't think that knowledge is just a disposition to answer questions in a certain way. Zombies, in my view, don't have knowledge, and you could know stuff while not having any will.
OK, but I didn't mean to suggest that knowledge reduces in such a way, even if my language may've suggested that. The broader point is that the hypothesis "knowing every truth" is tantamount to a massively complex disjunction in our fundamental theories, however you choose to cash out the word "knowing." In other words, with my alien computer example, it doesn't matter if the alien computer turns out to be conscious or not.
One other thing I forgot to mention. The idea in your third paragraph - that we can essentially average all the arguments for simplicity out (weighted by plausibility) to get expected simplicity, and then run with that number in our subsequent arguments for theism - is probably going to run you into trouble. First, arguments for simplicity are ultimately arguments for having certain priors, and it's not clear it makes a lot of sense to average these out or attach meta-probabilities to them. You should either have the prior to that degree or you shouldn't! I'm reminded of the time someone I knew once argued that since he was equally compelled by the Halvers' and Thirders' arguments in the Sleeping Beauty debate, the correct probability to assign heads in that thought experiment is 50%*1/2 + 50%*1/3 = 5/12.
Second, and perhaps more worryingly, this idea will likely backfire on you. Because there's a non-crazy argument, which I've already given, that theism is in fact infinitely complex: the idea is that simplicity boils down to computational compressibility, and theism can't be *properly* compressed to a finite length at all. Now, I myself am not really persuaded by this argument in such a maximally strong form (as opposed to the "God is merely very, but not necessarily infinitely, complex" form) because I have lingering uncertainty about the formalism involved and its applicability to metaphysics. But it's not totally insane to think it goes through. So if you take this into account, there's a non-negligible positive probability that theism is infinitely more complicated than naturalism. And the expected-value-style reasoning in your third paragraph is then going to totally destroy the viability of theism.
//First, arguments for simplicity are ultimately arguments for having certain priors, and it's not clear it makes a lot of sense to average these out or attach meta-probabilities to them.//
There will be certain theories of what the correct way of assigning priors is. You should average out those theories.
//So if you take this into account, there's a non-negligible positive probability that theism is infinitely more complicated than naturalism. And the expected-value-style reasoning in your third paragraph is then going to totally destroy the viability of theism.//
You don't take the average simplicity--that would imply that any view with any probability of being infinitely complex has a prior of zero. Instead you take a weighted average of the prior probabilities. So assume theism has a 50% chance of having a prior of 10% because it's simple and a 50% chance of being very complicated and having a prior of 0%. Its overall prior would be 5%. But because theism might be way simpler than naturalism--even if there's only a 1% chance of that--and beats naturalism by so much on that theory (more than 100 times), theism will have a higher prior than naturalism.
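The arithmetic here can be written out explicitly (a quick sketch using the illustrative numbers above; the theory-A/theory-B figures in the second half are made-up numbers of my own, just to show how a low-credence theory can dominate the average):

```python
# Weighted average of candidate priors for theism:
# 50% chance the correct prior is 10%, 50% chance it's 0%.
p_theism = 0.5 * 0.10 + 0.5 * 0.0
print(p_theism)   # 0.05, i.e. an overall prior of 5%

# The 100x point: a theory held with only 1% credence can still dominate
# the weighted average if, on that theory, theism's prior is high enough.
# Illustrative numbers: on theory A (1% credence) theism's prior is 100x
# naturalism's; on theory B (99% credence) naturalism wins by a wide margin.
p_T = 0.01 * 0.9   + 0.99 * 0.0001   # theism's weighted prior
p_N = 0.01 * 0.009 + 0.99 * 0.005    # naturalism's weighted prior
print(p_T > p_N)                     # True with these numbers
```

Whether this favors theism clearly depends on the specific numbers plugged in, which is where the dispute below picks up.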
Remind me to respond to this later if I forget!
He saith unto them, But whom say ye that I am?
And Simon Peter answered and said, "Suppose you have a mind and just don’t place any limits on it. I claim that what you get is God! Because he is unlimited there is no limit on what he knows or can do. But if he knows everything then he knows the moral facts, and thus understands at the deepest conceivable level why he should follow them. You might worry—isn’t he limited in various ways? For instance, presumably his love of evil is limited. But this misses the point. It’s true that there are various properties in which he’s limited—but he’s unlimited qua mind."
And Jesus answered and said unto him, "What?"
Most of these are in fact not simple: https://www.lesswrong.com/posts/f4txACqDWithRi7hs/occam-s-razor
Unlike a witch, theism isn't complicated by features that are hard to simply describe: it posits a being without parts, with just one fundamental property--being an unlimited mind.
I'm surprised you think goodness is simple. You of all people know how complicated it really is. Sure, utilitarianism is simpler than other moral theories, but there's still a lot of complexity in defining welfare and choosing the right decision theory, not to mention infinite ethics.
Also, your argument for why theism has a higher prior than naturalism if you're uncertain about the arguments is incorrect. It commits the exact same fallacy as the envelope paradox. https://en.wikipedia.org/wiki/Two_envelopes_problem
The problem is, you're assuming that the prior of naturalism is some constant X independently of whether the arguments for a high prior on theism are correct. But obviously that's false - if the arguments for theism's high prior succeed, then naturalism has a much lower prior than if they fail. The correct probabilistic reasoning, based on your assumptions, is the following: Call theism T and naturalism N, and let S be the claim that one of these simplicity arguments succeeds while F = ¬S is the claim that they all fail. The probabilities you assigned are P(T|F) = 0.01*P(N|F), P(T|S) = 100*P(N|S), and P(S) = 0.1. Since naturalism is mutually exclusive with theism, this means that P(T|F) ≤ 1/101 and P(T|S) ≤ 100/101. The overall prior probability of theism then is P(T) = P(T|F)*P(F) + P(T|S)*P(S) ≤ 1/101*0.9 + 100/101*0.1 ≈ 0.108. And that ≤ sign has equality if and only if your prior on atheistic non-naturalism is 0, which it shouldn't be because of Cromwell's rule.
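For what it's worth, that bound is easy to check numerically (a quick sketch; T, N, S, F as defined above):

```python
# Upper bound on P(T) from the constraints in the text:
# P(T|F) = 0.01*P(N|F), P(T|S) = 100*P(N|S), P(S) = 0.1,
# with T and N mutually exclusive, so P(T|.) + P(N|.) <= 1.
p_S = 0.1
p_F = 1 - p_S
p_T_given_F_max = 0.01 / 1.01    # = 1/101, attained when P(T|F) + P(N|F) = 1
p_T_given_S_max = 100 / 101      # attained when P(T|S) + P(N|S) = 1

# Law of total probability over S and F:
p_T_max = p_T_given_F_max * p_F + p_T_given_S_max * p_S
print(round(p_T_max, 3))         # 0.108
```

So even granting the 10% chance of a 100x advantage, theism's prior tops out around 0.108 rather than exceeding naturalism's outright.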
Of course, that's assuming that you even buy that the arguments have a 10% chance of making theism 100x more likely than naturalism, which I don't. Some of the arguments ("pure act" and "unlimited mind") I don't even think are coherent, and the ones that are coherent repeat themselves (I don't see any difference between "infinite power" and "unlimited power") and mostly don't even come anywhere near a full description of God (the only ones that maybe do are "unlimited agent" and "perfection", neither of which seems like a particularly simple or joint-carving description to me). On top of that, all of the arguments rely on your idea that a mind is somehow an inherently simple thing, since God has to be a mind under any of them. This seems obviously false to me - minds are so complex that we have no way of even saying what one is other than pointing to our own consciousness and saying, "that kind of thing". Even if you're a dualist, this remains true, so dualism doesn't automatically get you out of that problem.