The Biggest Unsolved Problem in Philosophy of Science
Should we believe our scientific theories?
There is a tension at the heart of scientific discovery. On the one hand, our scientific theories are really good. On the other hand, they’re probably wrong.
This tension is at the heart of the central debate in philosophy of science: whether scientific realism is true. Whether, in other words, science is accurately telling us true stuff about the world—whether the unobservable entities posited by science really exist. Fortunately, I think there’s a pretty good solution.
First: our scientific theories are good. The standard model of particle physics is a spectacularly successful physical theory. It tells a story of a world populated by fields, particles, and forces. It’s demonstrated stunning empirical success. Without a physics background it’s hard to get an intuitive sense of just how extraordinary this success is, but the theory has yielded dozens of surprising discoveries, including parameter values verified to twelve decimal places and predictions of brand-new particles.
By positing a world of unobservable particles, physicists have been able to do for physics what Darwin did for biology. They’ve developed a deep and comprehensive understanding of the world. At first blush, it would seem reasonable, in light of this, to think the physicists have correctly identified which things there are. If you say “thing X exists, and here are nine predictions down to twelve decimal places,” and you’re right each time, then we should be pretty damn sure that thing X exists.
This basic argument is called the no-miracles argument. It’s an argument for scientific realism—the idea that scientists are broadly discovering the right picture of reality, and that we should believe in the existence of the unobservables they posit. Scientific realism says that we should think our scientific theories aren’t just good at making predictions. They’re telling us something deep about what reality is like. The no-miracles argument goes: if they weren’t telling us something about reality, then it would be a stupendous miracle that they have the predictive success they have.
It’s easiest to see this in the context of less confusing higher-level scientific theories. The theory of dark matter posits that invisible matter populates galaxies and exerts a gravitational pull. That theory explains a lot about the world. It explains otherwise surprising facts about the bending of light, the speed of galactic rotations, and galactic collisions. If there was no dark matter, it would be a miracle that the theory does so well. Positing non-existent stuff doesn’t generally make extremely accurate predictions.
But the no-miracles argument has an evil twin—an argument that points in the opposite direction. It’s been enough to convince lots of philosophers and scientists that the picture painted by our best scientific theories, despite their predictive success, isn’t literally true. It’s called pessimistic meta-induction.
The argument runs as follows: just because a theory is predictively successful doesn’t mean it’s true. Historically, lots of theories have been believed based on their predictive successes, yet have turned out false. Newton’s theory isn’t just incomplete; it’s wrong. It posits non-relative motion, when Einstein later proved motion is relative. It posits instantaneous action at a distance, which Einstein later disproved. It posits gravity as a force, rather than the bending of spacetime. The picture it tells of reality isn’t approximating the truth, but is flatly incorrect, and much of what it says exists, does not.
And Newtonian physics is far from alone. The following list from Vickers illustrates a great many examples of scientific theories that were wonderfully predictively successful but false:
- Caloric Theory
- Phlogiston Theory
- Fresnel’s theory of light and the luminiferous ether
- Rankine’s vortex theory of thermodynamics
- Kekulé’s Theory of Benzene Molecule
- Dirac and the positron
- Teleomechanism and gill slits
- Reduction division in the formation of sex cells
- The Titius-Bode law
- Kepler’s predictions concerning the rotation of the sun
- Kirchhoff’s theory of diffraction
- Bohr’s prediction of the spectral lines of ionized helium
- Sommerfeld’s prediction of the hydrogen fine structure
- Velikovsky and Venus
- Steady state cosmology
- The achromatic telescope
- The momentum of light
- S-matrix theory
- Variation of electron mass with velocity
- Taking the thermodynamic limit
This is the core tension at the heart of scientific discovery. The no-miracles argument holds that we should believe in the unobservable posits of our best scientific theories because if they didn’t exist, the success of science would be a miracle. And yet pessimistic meta-induction seems to show that the track record of those who believed in their theory based on predictive success is quite poor.
Some people end up going the scientific anti-realist route. Van Fraassen, for instance, adopted a view called constructive empiricism. He thought that affirming a scientific theory wasn’t about thinking it’s true, but instead just about thinking it made good predictions. On this view, we could accept the results of our best sciences without thinking they’re literally accurate.
I somewhat feel the pull of an anti-realist view. Pessimistic meta-induction is a good argument and illustrates that getting predictions right doesn’t mean the entities that a theory invokes exist. But I don’t find it adequate, because it doesn’t really have a good explanation of the spectacular success of science. Why, if the standard model is false, does it predict things out to twelve decimal places?
Now, to his credit, Van Fraassen has something to say about this objection. He has a famous passage where he notes that the reason that scientific theories are predictively accurate is that they’re selected for being accurate. The ones that aren’t accurate are discarded, and so it’s no surprise that the ones that hang around get things right.
And sure, this blunts some of the force. It would be vastly more surprising if a theory picked out randomly had excellent predictive success than if the theories selected for making correct predictions did. But still, our theories make such gobsmackingly specific predictions that it’s hard to believe they’d be as good as they are if they weren’t uncovering something about the world. It’s especially surprising, if a theory is false, that it makes accurate predictions about new domains. Retrodiction by curve-fitting isn’t too difficult, but false theories shouldn’t be expected to accurately predict the future.
It would be one thing if the theories in physics were complicated and gerrymandered to artificially fit the data. But they’re not. They’re extremely simple. It’s surprising that a simple but inaccurate theory does so well, because it doesn’t have manipulable free parameters that let it artificially match the data.
This is a serious puzzle. But fortunately—and unlike a lot of puzzles in philosophy—I think there’s a pretty good solution. It’s called structural realism.
Structural realism holds that our scientific theories aren’t figuring out which entities exist in the world but are still figuring out the mathematical structure of the world. Even when theories are wrong about which entities exist, they can be right about the structure of the world. Mathematical structure was preserved between Newton and Einstein. The math of Newton was found to be an extension of the deeper math of Einstein. Worrall, a leading modern advocate of structural realism, puts it well:
The rule in the history of physics seems to be that, whenever a theory replaces a predecessor, which has however itself enjoyed genuine predictive success, the ‘correspondence principle’ applies. This requires the mathematical equations of the old theory to reemerge as limiting cases of the mathematical equations of the new.
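Worrall’s “limiting cases” claim can be made concrete with the very Newton-to-Einstein transition discussed above. Expanding the relativistic kinetic energy in powers of v/c recovers the Newtonian formula as the leading term:

```latex
E_k = (\gamma - 1)mc^2
    = mc^2\left(\frac{1}{\sqrt{1 - v^2/c^2}} - 1\right)
    = \frac{1}{2}mv^2 + \frac{3}{8}\frac{mv^4}{c^2} + \cdots
```

For v much smaller than c the correction terms vanish, which is why Newton’s equations kept working within their domain even after his ontology was discarded.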
The structure of a physical theory is something like the behavior the theory posits at the mathematical level. And so while entities of theories are routinely modified and discarded, underlying structure tends to stay the same. This can give us a nice account of what scientific theories are getting right—and why they’re so successful—without needing us to posit that the entities they believe in actually exist.
Why adopt this view? I see two main reasons. The first is that it’s the best way to explain the success of science without being vulnerable to pessimistic meta-induction. It can explain the spectacular success of science by positing that our best scientific theories are discovering the structure of the world. However, because it doesn’t need to think the entities they believe in really exist, it’s not vulnerable to bad track record arguments. I think pessimistic meta-induction and the no-miracles argument are both decisive, and so for a theory to be right, it must not be vulnerable to either. Structural realism isn’t.
A second big reason: meta-induction is actually a point in its favor. Imagine you’re a Van Fraassen-style anti-realist. How can you explain why new theories maintain the structure of old theories? If theories are just tools for making correct predictions, then there’s no reason at all to expect them to be structurally continuous with old theories. And yet they are.
So while the historical record spells bad news for traditional kinds of scientific realism, it provides a strong argument for structural realism. Structural realism is able to explain the continuity of structure along with the changes of entities. If an inductive track record of ditching entities is a point against realism about the entities of science, then an inductive track record of keeping structure is a point in favor of realism about the structures in science.
Structural realism is a pretty popular view. The Stanford Encyclopedia of Philosophy says “Structural realism is considered by many realists and antirealists alike as the most defensible form of scientific realism.” But some objections remain.
Now, many of the objections to structural realism focus on a version called ontic structural realism (OSR). OSR says that structure is all that exists and there’s nothing at the bottom level to manifest that structure. I don’t find OSR very plausible. It seems impossible that there would be structure without anything that it’s the structure of—that there’d be mathematical relations without anything doing the relating. Relations, by definition, hold between things. It can’t just be relations all the way down!
But some objections remain to the more modest kind of structural realism that says science identifies the structure of the world without saying structure is all there is. One worry is that it’s a bit hard to draw a distinction between structure and entities. After all, entities are given exhaustive mathematical descriptions in physics. When physicists tell you what particles are, they’ll discuss them in terms of their physically-describable properties. Thus, the argument goes, structural realism collapses into realism about the entities of science.
But I don’t think this is right. Even though physical entities are typically characterized through their structure, that doesn’t mean they’re nothing but structure. They can’t be structure all the way down, because you need something to be doing the structuring. You might describe particles, in physics, through math, but that doesn’t mean they’re just math and nothing more!
I also think that it can’t be structure all the way down, because our universe has stuff other than mathematical structure: minds. Minds, as I’ve argued elsewhere, aren’t physical things. They’re not characterizable in terms of structure. No mathematical structure, however elaborate, is identical to a mind by itself. So if this is right, then we’ll have to think there are more things in heaven and Earth than just mathematical structure, and so realism about structure doesn’t involve realism about the things instantiating those structures.
Another objection to structural realism: isn’t it too skeptical? If it doesn’t think that the stuff our leading scientific theories say exists really does exist, then doesn’t it reject the truth of our best scientific theories? How, then, can it explain the miraculous success of science?
But I think this argument is seriously damaged by the history of science. As we’ve seen, lots of theories have made predictions but been wrong about which entities exist. The history of science shows that theories can discover the structure of the world even if they’re wrong about the world’s entities. So we shouldn’t be surprised if that’s true of our current theories. Mathematical structure is what matters to making predictions. Structure can be correct, in a theory, even if the picture it tells of reality is wrong.
Structural realism is a nice middle ground between realism and anti-realism. It allows us to identify what our best theories are getting right without thinking that they’re vastly better than previous extremely successful theories. Science is accurate because scientists are discovering the structure of the world.


This is another one of those cases where whatever we can learn, we can learn from physics; the philosophers have nothing to contribute, and they often end up saying stupid and anti-scientific things because of their ignorance of actual science.
Let’s look at your takes on physics.
> Newton’s theory ... posits non-relative motion, when Einstein later proved motion is relative.
False. Newton himself believed in non-relative motion, as we can see if we actually go back and read the Principia. But it wasn't part of his theories in any meaningful sense. Nothing else he said hung on it. It was the physics equivalent of what lawyers call dicta. And when modern physicists speak of Newtonian mechanics, they are not referring to a way of thinking that involves absolute (non-relative) motion; they are referring to a way of thinking that involves relative motion, specifically Galilean relativity. You can complete a course in classical mechanics, a whole physics major, and never learn that Newton believed in absolute motion. It is just as true and just as irrelevant as the fact that Newton believed in alchemy. It was not part of his science. I only learned this fact because I was assigned the relevant passage from the Principia in a modern philosophy class, a class where the professor had no idea that physicists had long since discarded the idea. That, to my mind, is a huge mark against philosophy.
> It posits instantaneous action at a distance, which Einstein later disproved. It posits gravity as a force, rather than the bending of spacetime.
These are true, but you later suggest that statements like these are somehow claims about what exists, and they just aren't. Newton doesn't make particular claims about what exists; in his time that was what chemistry was for. These statements, and all of Newtonian physics, are statements about what sorts of properties the things that exist have (like mass), and how the things that exist interact.
> The picture it tells of reality isn’t approximating the truth, but is flatly incorrect, and much of what it says exists, does not.
Again, this is just false. Newtonian physics really does approximate the truth extremely well for things that move very slowly compared to the speed of light, and have very little mass compared to a star or black hole.
When I look at your list of other scientific theories that supposedly turned out to be completely false, some of them I believe are false, some I have no idea what they are, but some I'm pretty sure are true. Let's take for example the theory of "Variation of electron mass with velocity". This is not my preferred way of describing what is happening, but it is a way some physicists describe what is happening, and it isn't wrong. In high school physics, you were probably told that momentum is mass times velocity. This is true in Newtonian physics. In special relativity, momentum is m*v*gamma, where gamma = 1 / (1 - v^2 / c^2)^0.5, where c is the speed of light in a vacuum. Firstly, note that when v is small, gamma is very close to 1, and so the special relativity notion of momentum is very close to the Newtonian notion of momentum. So yes, Newtonian physics is approximately true. But to come back to the variation of mass, what some physicists do when talking about relativistic motion is to redefine the word mass to mean m*gamma. If you use the word mass that way, then momentum still equals mass times velocity. And if you use the word mass that way, then mass also becomes a function of velocity, it really does increase when velocity increases. Again, this is not my preferred way to think about these equations, but it is not an incorrect way to think about these equations.
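The commenter's point about gamma can be checked numerically. Here's a minimal sketch (the constant and function names are my own, not from the comment):

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def gamma(v):
    """Lorentz factor: 1 / sqrt(1 - v^2/c^2)."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

def p_newton(m, v):
    """Newtonian momentum: p = m * v."""
    return m * v

def p_relativistic(m, v):
    """Relativistic momentum: p = m * v * gamma -- equivalently,
    'relativistic mass' (m * gamma) times velocity."""
    return m * v * gamma(v)

# At everyday speeds, gamma is so close to 1 that the two momenta
# agree to many decimal places...
m, v_slow = 1.0, 30.0  # 1 kg at highway speed
print(p_relativistic(m, v_slow) / p_newton(m, v_slow))  # ~1.0

# ...but at 90% of light speed they diverge sharply.
print(gamma(0.9 * C))  # ~2.294
```

This is exactly the sense in which Newtonian physics "approximates the truth" for slow-moving objects: the Newtonian formula is the gamma-equals-one limit of the relativistic one.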
The ultimate point here is that if these are the sorts of questions you are interested in, you should be studying physics, not philosophy.
Aside: the inclusion of phlogiston on a list of "wonderfully predictively successful" theories made me initially skeptical of Vickers' list. I have not read extensively on this subject, but it seemed to me that phlogiston was a classic example of a non-explanation which just gives a name to a phenomenon as a cause. "Why did it burn? Because it was full of stuff that makes it burn." Avoiding this kind of fake causality was Yudkowsky's necessary criterion for genuine science (https://www.lesswrong.com/posts/RgkqLqkg8vLhsYpfh/fake-causality).
I asked GPT-5 to evaluate the rest of Vickers' list to check for instances of fake causality, including reassessing whether phlogiston is a genuine example. But the list is pretty legit—only one is fake causality (Velikovsky and Venus), and even phlogiston made some predictions, even if it was riddled with epicycles.
Just in case anyone had the "those aren't real science" reflex that I did.