On the fundamental limits of our predictive power

Over the years I have spent reading and thinking about the nature of reality, and our ability to grasp it, I have progressively accumulated some understanding of several core limits on our ability to make predictions about the future. By predictions, I mean mathematical or computational models that allow us to jump ahead of time and predict the future state of a system from its current state, in less time than it would take to simply wait and see it happen. I initially identified three independent core limits, and recently discovered a potential fourth one, which is always exciting because each of these limitations usually comes with a dramatic change of perspective on reality. Here are these four limitations:

The first one is a basic consequence of the Theory of Relativity, which states that the speed of light is the maximal speed at which information can be transmitted. As a consequence, at any given time, a large part of the universe is inaccessible to our measurements because it lies outside our light cone. If we try to predict the evolution of a system far enough into the future, anything happening outside this light cone could have a causal influence on that evolution, yet we cannot access it. Moreover, we cannot take an instantaneous snapshot of everything inside the light cone: we have to probe it, at best with electromagnetic waves or anything else constrained by the speed of light, so by the time we have collected the information, external influences may already have rendered it obsolete. In practical engineering this is not a problem, because of the small spatial and temporal scales of what we measure and the relatively limited causal side effects of the environment on well-defined systems, but in principle this is a core limitation on our capacity to predict the future of the universe as a whole.

The second one is, as you probably guessed already, related to Quantum Mechanics. When we try to predict the future in the framework of QM, all we can calculate is a probability distribution over future outcomes. This is not due to our intellectual limitations, nor is QM an incomplete theory that we could conceivably improve to restore deterministic predictions. The restriction is deeply engraved in the workings of reality as we understand it, as countless experiments, such as the verified violations of Bell inequalities, have shown. In many cases, the best we can say about the future is that it will be result A or result B with 50%/50% odds. Nothing more. The laws of nature, from the observer's standpoint, are probabilistic. There is a fundamental limit to what nature allows us to say about what is actually going to happen.
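
To make this concrete, here is a minimal sketch in Python (a generic textbook example, not tied to any particular experiment) of the Born rule applied to a qubit in equal superposition: the theory yields exactly the 50%/50% distribution, and simulating many measurements reproduces only those statistics, never a definite single outcome.

```python
import numpy as np

# A qubit in the equal superposition state (|0> + |1>) / sqrt(2).
state = np.array([1.0, 1.0]) / np.sqrt(2)

# Born rule: each outcome's probability is the squared magnitude of its
# amplitude. This distribution is all that the theory lets us predict.
probabilities = np.abs(state) ** 2
print(probabilities)  # [0.5 0.5]

# Simulated repeated measurements reproduce the statistics, and nothing
# sharper: no computation here can tell us the outcome of a single run.
rng = np.random.default_rng(0)
outcomes = rng.choice([0, 1], size=10_000, p=probabilities)
print(outcomes.mean())  # close to 0.5
```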

The third one is somewhat related to the second one, but adds an independent layer of uncertainty that would persist even in its absence. It is the fact that, in the most general case of a complex system, uncertainty about initial conditions compounds into an exponential drift, making any long-term prediction effectively random. This is the subject of Chaos Theory. Of course, the situation is made worse by the uncertainty principle of QM, since we know we cannot hope to achieve a perfect measurement of initial conditions anyway, but it goes further. We could have hoped that the initial uncertainty from a QM measurement could be kept in check over long times, so that we could at least make predictions within some bound. Chaos Theory ruins this hope in practically all cases where we are dealing with nontrivial systems made of many interacting subsystems. So even in a fully deterministic, fully Newtonian universe, Chaos Theory would limit our ability to predict, assuming the slightest imperfection in our measurements, which is a practical reality with or without Quantum Mechanics.
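
Here is a minimal sketch of this exponential drift, using the logistic map at r = 4 as a standard toy model of chaos: two trajectories whose initial conditions differ by one part in 10^12 become completely decorrelated after a few dozen iterations.

```python
# Logistic map x -> r*x*(1-x) in its chaotic regime (r = 4).
# Nearby trajectories diverge exponentially, at a rate set by the
# Lyapunov exponent (ln 2 for r = 4, i.e. roughly doubling per step).
r = 4.0
x, y = 0.4, 0.4 + 1e-12  # initial conditions differing by 1e-12

for step in range(1, 61):
    x, y = r * x * (1 - x), r * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: |x - y| = {abs(x - y):.3e}")

# After roughly 40 steps the separation is of order 1: the tiny initial
# measurement error has swallowed all predictive power.
```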

The fourth one is a distant cousin of the third one, but with a much more profound root. I thought about it relatively recently, after reading about interesting research trying to formulate the laws of physics in a purely computational framework, with discrete operations running over discrete structures (see Wolfram, Loop Quantum Gravity, Causal Sets, etc.). If you think that the root machinery of the universe that "runs" reality is computational in nature, then you can add a fourth limitation, which Wolfram calls "computational irreducibility". It says that some of the computations run by the universe may be irreducible in complexity, in the sense that there is no shortcut to the result of the computation over n steps: you have no choice but to actually run the computation. This is clearly not the case in general, as there are computations whose result you can easily establish without running their steps. For example, imagine a computation that adds 1 to an initial number at each step. The result after n steps can be computed with a single addition, far more efficiently than performing n additions of 1. Not all computations are that easy to "compress" in time, however. Among simple cellular automata, you can easily find nontrivial sets of rules (like Rule 30 in Wolfram's framework for 1D binary automata) which are presumably irreducible. Assuming the universe is running some form of computation on a discrete structure, we might encounter pockets of irreducibility where we have no better option to "predict" the future than to simply run the computation. And since any computing device in our universe would be implemented within the universe, using the universe's own computational rules, it would at best be as fast as the computing universe itself, and more likely slower. So we cannot fast-track our way to a prediction in this context. Notably, this is different from the limitation coming from Chaos Theory, because it still holds even if you have a means to know the initial conditions perfectly, as in Rule 30 (and it also applies in a non-quantum, non-relativistic world). The existence of effective theories in physics that actually make predictions at larger scales (within the limits stated here) proves, however, that there are large spaces of computational reducibility in the universe. Irreducible computation might happen precisely in circumstances where we stand outside the validity bounds of our current theories (like inside black holes or at the Big Bang).
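
To illustrate, here is a minimal sketch of such an automaton (the classic Rule 30 example, under Wolfram's standard rule numbering): as far as anyone knows, the only way to obtain the state after n steps is to run all n update steps.

```python
# Minimal 1D binary cellular automaton using Wolfram's rule numbering.
# Bit k of the rule number gives the next state of a cell whose
# (left, center, right) neighborhood encodes the integer k.

def step(cells, rule=30):
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

cells = [0] * 31
cells[15] = 1  # start from a single "on" cell

# As far as anyone knows, there is no shortcut to row n of Rule 30:
# we have to iterate the update n times, just like the universe would.
for _ in range(16):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)
```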

It is important to note that the inability to precisely predict the future from given initial conditions, which is the topic of discussion above, is not the same as total ignorance about the future. In all the cases above, we often can, or might eventually, come up with probabilistic statements about the kinds of futures we could observe. This is already the norm in Quantum Mechanics and, in fact, in most practical applications of physics in engineering. We also have statistical results on chaotic systems via ergodic theory. Total ignorance about the future would actually be quite hard to achieve: you would need a signal so random that even its probability distribution is randomized, through a randomization process itself subject to a randomized probability distribution, and so on ad infinitum. I suspect this does not really exist in nature, but computational irreducibility might prove me wrong.
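
As a sketch of what such statistical knowledge looks like, consider the same logistic map as above: individual trajectories are unpredictable, yet their long-run histogram converges to the invariant density 1/(π·sqrt(x(1−x))), a classical result of ergodic theory.

```python
import numpy as np

# Ergodic flip side of chaos: individual trajectories of the logistic
# map at r = 4 are unpredictable, but their long-run statistics are not.
# Time averages converge to the known invariant density
# rho(x) = 1 / (pi * sqrt(x * (1 - x))).
x = 0.4
samples = []
for _ in range(100_000):
    x = 4.0 * x * (1 - x)
    samples.append(x)

hist, edges = np.histogram(samples, bins=10, range=(0.0, 1.0), density=True)
centers = (edges[:-1] + edges[1:]) / 2
predicted = 1 / (np.pi * np.sqrt(centers * (1 - centers)))
for c, h, p in zip(centers, hist, predicted):
    print(f"x ~ {c:.2f}: observed {h:.3f} vs predicted {p:.3f}")
```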

On a philosophical level, I find it fascinating to identify limits on our potential understanding of nature, or more precisely on our ability to visualize the future without having to wait for it to happen, which I think is a core element of any definition of intelligence. I wonder if there are other such core limits that I have not yet identified. It is likely that a better understanding of physics will reveal new limitations in the future, taking us ever further from the original Newtonian dream of predicting the entire future of the universe from its initial conditions.
