That’s called Hidden Variable Theory, but there’s also no indication that this is how the universe works and everything we find just reinforces indeterminism and uncertainty.
The most notable development is the math working out such that hidden variables are irrelevant, i.e., they do not actually help us better describe reality or predict the outcomes of measurements.
The math doesn’t seem to care whether God is rolling dice or not.
Speaking of predicting outcomes implies a forwards arrow of time. As far as we know, the arrow of time is a macroscopic feature of the universe and just doesn’t exist at a fundamental level. You cannot explain it with entropy without appealing to the past hypothesis, which then requires appealing to the Big Bang, which is in and of itself an appeal to general relativity, something which is not part of quantum mechanics.
Let’s say we happen to live in a universe where causality is genuinely indifferent to the arrow of time. This doesn’t mean such a universe would have retrocausality, because retrocausality is just causality with an arrow facing backwards. If its causal structure were genuinely independent of the arrow of time, then it would follow what the physicist Emily Adlam calls global determinism: an “all-at-once” structure of causality.
Such a causal model would require the universe’s future and past to jointly satisfy certain global consistency rules, but neither taken separately would allow you to derive the outcomes of systems deterministically. You would only ever be able to describe the deterministic evolution of a system retrospectively, once you know both its initial and final state and can subject it to those consistency rules. Since science is usually driven by predictive theories, such a description would be useless for making predictions: in practice we’re usually only interested in predicting the future, not in giving retrospective explanations.
If the initial conditions aren’t sufficient to predict the future, then any prediction based on an initial state alone cannot constrain the future state to a specific value. The resulting ambiguity forces us to predict it probabilistically. And since physicists are very practically minded, everyone would focus on the probabilistic forward evolution in time, and very few people would be interested in reconstructing the state of the system retrospectively, since doing so has no predictive benefit.
I bring this all up because, as the physicists Ken Wharton, Roderick Sutherland, Titus Amza, Raylor Liu, and James Saslow have pointed out, you can quite easily reconstruct values for all the observables in the evolution of a system retrospectively by analyzing its weak values, and those values appear to evolve entirely locally, deterministically, and continuously. Doing so, however, requires conditioning on both the initial and final state of the system simultaneously and evolving both ends toward an intermediate point in time to arrive at the value of the observable at that point. You can therefore only do this retrospectively.
This is already built into the mathematics; you don’t have to add any additional assumptions. It is essentially already a feature of quantum mechanics that if you take a known eigenstate at t=-1 and a known eigenstate at t=+1 and evolve them towards each other simultaneously until they intersect at t=0, at that intersection you can compute the values of the observables at t=0. Even though the laws of quantum mechanics do not impose sufficient constraints to recover the observables when evolving in a single direction in time, either forwards or backwards, doing both simultaneously gives you enough constraints to determine a concrete value.
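To make the procedure concrete, here is a minimal numpy sketch of that two-time conditioning for a single qubit: forward-evolve a preselected eigenstate from t=-1 to t=0, backward-evolve a postselected eigenstate from t=+1 to t=0, and take the standard weak-value expression at the meeting point. The Hamiltonian, states, and observable here are my own illustrative choices, not taken from any of the papers mentioned.

```python
import numpy as np

# Pauli matrices
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

H = sy  # illustrative Hamiltonian (an assumption for this sketch)

def U(t):
    # exp(-i*H*t); since sy @ sy = I, this is cos(t)*I - i*sin(t)*sy
    return np.cos(t) * np.eye(2) - 1j * np.sin(t) * sy

pre  = np.array([1, 0], dtype=complex)   # known eigenstate prepared at t = -1
post = np.array([1, 0], dtype=complex)   # known eigenstate found at t = +1

psi0 = U(1.0) @ pre     # forward-evolve the preselected state to t = 0
phi0 = U(-1.0) @ post   # backward-evolve the postselected state to t = 0

# weak value of sigma_z at t = 0: <phi0|sz|psi0> / <phi0|psi0>
w = (phi0.conj() @ sz @ psi0) / (phi0.conj() @ psi0)
print(w)  # about (-2.403+0j): real, and outside the eigenvalue range [-1, 1]
```

Neither boundary state alone pins down a value for sigma_z at t=0, but conditioning on both at once yields a definite (here even “anomalous”) value, which illustrates the retrospective character of the reconstruction.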
Of course, there is no practical utility to this, but we should not confuse practicality with reality. Yes, retrospectively reconstructing the system’s local and deterministic evolution is not practically useful, since science is more about future prediction. But we shouldn’t conclude from this practical choice that the system therefore has no deterministic dynamics, that it has no intermediate values, or that when it’s in a superposition of states it has no physical state at all, or is literally equivalent to its probability distribution (a spread-out wave in phase space). You are right that reconstructing the history of the system doesn’t help us predict outcomes better, but I don’t agree that it doesn’t help us understand reality better.
Take all the “paradoxes” for example, like the Einstein-Podolsky-Rosen paradox or, my favorite, the Frauchiger–Renner paradox. These are conceptual problems about our understanding of reality, and ultimately your answer to them doesn’t change what predictions you make with quantum mechanics in any way. Yet I still think there is some benefit, maybe on a more philosophical level, to giving an answer to those paradoxes. If you reconstruct the history of the systems with weak values, for example, then very simple solutions to these conceptual problems fall out, because you can just look directly at how the observables change as the system evolves.
Not taking retrospection seriously as a tool of analysis leads people to believe in all sorts of bizarre things, like multiverses or physically collapsing wave functions, which all disappear if you just allow retrospection to be a legitimate tool of analysis. It might not be as important as understanding the probabilistic structure of the theory needed for predictions, but it can still resolve confusions around the theory and what it implies about physical reality.
That’s one theory about how it might work, but our inability to come up with another way to explain the possibility of quantum determinism is not evidence against it.
It’s not that there aren’t other ways to explain the universe, but rather, none of those alternatives are more predictive or descriptive. Not only can’t we find hidden variables, we don’t need them.
You can believe there are angels dancing on the heads of pins (or whatever) and that’s the hidden variable causing uncertainty, but there’s literally no reason to. You’re introducing additional, unnecessary complexity when we can explain everything without it.
Our inability to predict an outcome does not prove anything about the certainty of the outcome. Our understanding of physics is incomplete, and any conclusions you draw from incomplete information are necessarily assumptions. You felt compelled to describe that with reference to angels as a means of delegitimizing this fact because you’re emotionally invested in your preferred theory.