The Universe Is a Big Layer Cake
By thinking of nature as a hierarchy, scientists dissolve the dichotomies they have wrestled with.
By George Musser

Is the universe deterministic or indeterministic? A clockwork or a craps table? In this month’s issue of Scientific American, I have an essay arguing that the answer is: both. The world can be deterministic on some levels and indeterministic on others; these two categories are not mutually exclusive. To me, this is the essence of Einstein’s critique of the orthodox Copenhagen Interpretation of quantum mechanics. Einstein recognized that quantum indeterminism could perch atop a deeper deterministic layer and, given the theory’s loose ends, probably had to. I’ve already been getting some interesting responses by email, so I thought I’d open up a comments thread to get a discussion going.

Traditionally the supposed dichotomy between determinism and indeterminism has been as much about people as about particles. Determinism seems to deprive us of free will—a concern that weighed on the creators of quantum mechanics and made many of them receptive to indeterminism. Max Born wrote Einstein in 1944: “I cannot understand how you can combine an entirely mechanistic universe with the freedom of the ethical individual… To me a deterministic world is quite abhorrent—this is a primary feeling.” To which you might reply that indeterminism is no less abhorrent. If God plays dice with the universe, he plays them with human fate as well.

By dissolving the dichotomy, philosophers such as Daniel Dennett of Tufts University and Christian List of the London School of Economics have argued that we can be the authors of our own acts even if the particles in our bodies move in completely preordained ways. The unpredictability and openness of human decisions is a type of higher-level indeterminism. It is decoupled from whatever happens at the foundations of reality.

Another way to put it is that free will—and indeterministic behavior in general—is an emergent property, one that doesn’t exist at the microscopic level but arises in the aggregate. Sometimes the word “emergence” is thrown around without elaboration. In this case, it’s really quite simple. There’s redundancy in how macroscopic objects are put together. A change at the macro level implies a change at the micro level: if you roll a die twice and it lands on different sides, the atoms in the die must have taken different paths. (This condition is known as “supervenience.”) But the converse is not true. If the atoms take different paths, the visible outcome might still be the same. Consequently, the workings of microscopic and macroscopic laws need not mesh.
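
To make this concrete, here is a minimal sketch in Python. (The microstates and the coarse-graining map below are invented stand-ins, purely for illustration.)

```python
# Supervenience in miniature: the macrostate is a function of the microstate,
# so a macro difference entails a micro difference. But the map is many-to-one,
# so a micro difference need not show up at the macro level.
# (Toy microstates, invented purely for illustration.)
coarse_grain = {
    "atomic_path_1": "die shows 4",
    "atomic_path_2": "die shows 4",  # a different micro path, same macro outcome
    "atomic_path_3": "die shows 6",
}

# Redundancy: fewer macrostates than microstates.
assert len(set(coarse_grain.values())) < len(coarse_grain)

# A micro difference without a macro difference:
assert coarse_grain["atomic_path_1"] == coarse_grain["atomic_path_2"]
```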

To illustrate, here’s a pair of figures from List. They show the development of a system—a gas, say—at five moments in time, with time running upward. The first figure shows how the situation looks on the microlevel. The system begins in one of six possible microstates, each of which evolves in a purely unambiguous, deterministic way. The second figure shows the macrolevel (or “agential” level, meaning the human scale). The resolution of this level is lower, and the gridlines show how the microstates blur together into macrostates. In physics jargon, the macrolevel is “coarse-grained.” In several places, the lines branch: the gas could evolve in two possible ways as seen from the macrolevel, and it looks as though the gas chooses one of them at random.

[Figures 1 and 2, from Christian List]

Roughly speaking, you might think of the dots as molecules and the boxes as the maximum resolution of a microscope. At the start, the molecules are bunched, so you see two lumps. At t=2, a third lump suddenly materializes on the left. It arises because the left group of molecules has spread apart, but you don’t see the molecules. To you, it looks as though a new lump has popped out of thin air. The dynamics of the macrolevel is as much a product of the coarse-graining as of the microscopic dynamics.
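
Here is a toy version of that scenario in Python, in the spirit of List’s figures. (The positions, velocities, and box size are my own invented stand-ins, not List’s actual numbers.)

```python
# Molecules move deterministically on a line; a coarse-grained observer
# sees only which unit-sized boxes are occupied ("lumps"), not the molecules.
positions  = [0.4, 0.6, 5.2, 5.4]    # two bunched groups: two visible lumps
velocities = [0.0, 0.25, 0.0, 0.0]   # one molecule slowly drifts right

def lumps(pos, box=1.0):
    """Coarse-graining: report the occupied boxes, not individual molecules."""
    return sorted({int(p // box) for p in pos})

for t in range(5):
    print(t, lumps(positions))       # at t=2 a "new" lump pops into view
    positions = [p + v for p, v in zip(positions, velocities)]
```

Run it and the macro description jumps from two lumps to three at t=2, even though nothing at the micro level is the least bit random.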
 
In this scenario, the system evolves from a high-redundancy state to a low-redundancy one. In physics terms, that means it goes from high entropy to low entropy—which is opposite to what you’d expect for a closed system. But List assures me that entropy can also increase, so that the emergence of indeterminism from determinism doesn’t presume unusual conditions; it’s a general phenomenon. Another feature of the simple scenario is that the macro level is just a blurrier version of the micro, making the division into levels somewhat arbitrary. But the relationship among levels can be, and usually is, more opaque. Macrostates and microstates typically involve completely different variables, so that you can’t resolve the microstate merely by turning up the magnification; you need a change of conceptual framework. The bulk properties of gases, such as temperature, do not exist at the molecular level; they are collective quantities. Quarks and gluons are not smaller, simpler versions of the protons and other particles they make up.
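
This is just Boltzmann’s definition of entropy, which counts the redundancy directly:

$$ S = k_B \ln W, $$

where $W$ is the number of microstates that realize the macrostate. A macrostate with less redundancy (smaller $W$) has lower entropy.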

If you went to an even higher level of description, you might well find that the system behaves deterministically again, as the randomness gets averaged out. The microscopic motions are deterministic, and so are macroscopic phenomena such as diffusion, wave motion, and fluid flow, but in between is a “mesoscopic” realm governed by probabilistic laws. The mesoscopic equations are formulated in terms of macroscopic variables, but include random noise to capture the suppressed microscopic details. You see this in climate models, which use random variables to capture physics that occurs below the resolution of the computer simulation. Some theorists, following Einstein, think of quantum mechanics as a probabilistic mesoscopic theory. For instance, Stephen Adler at the Institute for Advanced Study has derived quantum mechanics with random corrections from an underlying theory that is not only deterministic, but also nonspatial.
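
A textbook example of such a mesoscopic law is the Langevin equation for a Brownian particle, written entirely in macroscopic variables plus a noise term standing in for the suppressed molecular collisions:

$$ m\,\frac{dv}{dt} = -\gamma v + \xi(t), $$

where $\gamma$ is the friction coefficient and $\xi(t)$ is a random force with zero mean whose strength is tied to the temperature by the fluctuation-dissipation relation $\langle \xi(t)\,\xi(t') \rangle = 2\gamma k_B T\,\delta(t - t')$.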

Many physicists and philosophers find this kind of analysis deeply unsatisfying. Sure, they say, higher-level laws may be indeterministic if formulated in terms of higher-level variables—but that’s a big if. Those laws can still be deterministic in terms of lower-level variables. If you encounter any uncertainty over how the world will unfold, you can zoom into the microscopic level and everything will come into focus. Our decisions may seem open to us, but are preordained on the lower level. This issue is tied up with a broader debate over the concept of emergence. Strict reductionists think of emergent properties not as objective features of reality but as convenient approximations to the fundamental physics. In other words, the world really has only a single level, and if it is deterministic, any indeterminism we perceive merely reflects our imperfect knowledge about the real goings-on.

Thing is, we do observe multiple levels in nature. Each is self-contained: it follows laws that are most succinctly described by what happens at that level, without reference to below or above. (This is an important feature of quantum field theory.) List developed this point in an earlier paper with the late Australian philosopher Peter Menzies. Suppose the macrolevel can be in one of three states, A, B, and C, each of which—because of redundancy—corresponds to a large set of microstates, {a_i}, {b_i}, and {c_i}. Now, suppose that some of the a_i’s deterministically evolve to b_i’s and the rest to c_i’s. From the macroscopic perspective, A indeterministically splits to B or C. Nothing would be gained by spelling out all the microscopic outcomes; to the contrary, you’d fail to grasp the way the microstates are grouped. There’s real structure here, not just a mirage caused by imperfect knowledge. “If we were to try to capture all scientific phenomena just at the lower level, we would actually miss out on some important higher-level regularities,” List says.
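
In code, List and Menzies’s setup might look something like this (the particular microstates and the deterministic map are invented for illustration):

```python
# Deterministic micro-dynamics on nine microstates (invented for illustration).
micro_step = {
    "a1": "b1", "a2": "b2",              # some a_i's evolve to b_i's...
    "a3": "c1",                          # ...and the rest to c_i's
    "b1": "b2", "b2": "b3", "b3": "b1",
    "c1": "c2", "c2": "c3", "c3": "c1",
}

def macro_of(m):
    return m[0].upper()                  # coarse-graining: a_i -> A, etc.

# The induced macro-level law: where can each macrostate go?
branches = {}
for micro, nxt in micro_step.items():
    branches.setdefault(macro_of(micro), set()).add(macro_of(nxt))
print(branches)  # A -> {B, C}: the macrolevel genuinely branches,
                 # even though every microstate's successor is fixed.
```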

Much the same reasoning about levels could dissolve other dichotomies of physics, such as locality vs. nonlocality, the subject of my forthcoming book. In fact, questions of determinism vs. indeterminism and locality vs. nonlocality often go hand-in-hand. Any large, complex system will undergo random statistical fluctuations (an effective indeterminism) and, when the system has not achieved internal equilibrium, the fluctuations at different locations will be correlated (an effective nonlocality). In quantum mechanics, particles can act in lockstep despite the distance between them, so they must either be deterministic (so that their coordinated behavior can be preprogrammed into them) or nonlocal (so that they can coordinate on the fly)—this is the dilemma that Einstein articulated in the famous EPR paper of 1935. For many physicists and philosophers, quantum nonlocality suggests that spacetime is derived, and in most approaches, the reality that underlies spacetime obeys some primitive notion of locality.

Yet again, physics demonstrates its power to demolish our neat pigeonholes, to show that the world need not conform to our human categories.


Comments


  1. Thanks for writing the article in Scientific American, and for this blog post. They’re both very well written and explained!

    I suppose I’d just say that at least within the physics community, there has always been an understanding that Einstein’s hope for a determinism underlying quantum theory was meant in analogy with the way that thermodynamics could be derived as the statistical description of an underlying (conceivably) deterministic physics. I realize that this subtlety may not be widely appreciated by the wider public — and that’s why your work, analysis, and great writing are so greatly needed!

    However, I believe there are still some additional wrinkles in the arguments expressed in your article and in the blog post. In particular, I’m a little nervous about the claimed symmetry between (1) micro-indeterminism/macro-determinism and (2) micro-determinism/macro-indeterminism.

    As for (1), it is entirely true that an underlying indeterministic microphysics can average out to produce a deterministic-looking macrophysics. After all, if I have a marble hopping stochastically/indeterministically but with equal probability between two locations +1 and -1, then its average (“coarse-grained”) location is a constant equal to 0, and you can’t get more deterministic than a constant! But notice that it is impossible to use knowledge of the deterministic macrophysics to make any predictions whatsoever about the underlying stochastic microphysics.
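
    A minimal sketch of that averaging, assuming equal-probability hops (the numbers are invented for illustration):

    ```python
    import random

    # Marble hops with equal probability between +1 and -1: stochastic microphysics.
    random.seed(0)
    hops = [random.choice([+1, -1]) for _ in range(100_000)]

    # The coarse-grained observable -- the average location -- is (essentially)
    # the constant 0: deterministic-looking macrophysics.
    print(sum(hops) / len(hops))

    # But the macro value 0 is compatible with any individual hop sequence,
    # so the macrophysics predicts nothing about the micro trajectory.
    ```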

    On the other hand, regarding (2), suppose that we have an underlying deterministic microphysics, and, through some sort of coarse-graining, we obtain what looks like a stochastic macrophysics. Then it is entirely possible to predict the macrophysics if we somehow know the deterministic microphysics. That is, a knowledge of the deterministic microphysics, plus a knowledge of the coarse-graining recipe (both of which might require a lot of computing power for, say, a mole of particles!), allows for complete prediction of the supposedly “stochastic” macrophysics. And the possibility (at least in principle) of predicting the macrophysics would make most people skeptical about calling the macrophysics truly indeterministic to begin with.

    Now, it’s true that this logic breaks down in quantum field theory — the physics of protons and neutrons really can’t be predicted deterministically from the QCD physics of the underlying quarks and gluons. But that’s a quantum example, so it sort of begs the question. In particular, naive structural reductionism fails spectacularly in quantum theory, e.g., due to entanglement.

    If we stick to classical physics without allowing for any classical micro-indeterminism, then it’s very hard to come up with similar counterexamples.

    The weather is actually a good case study. If we pretend that the weather is fundamentally classical all the way down (again, we do not allow ourselves to appeal to quantum theory here!), and we pretend that there’s no chaos, then a big enough computer could track every air particle and deterministically predict the future evolution of the microphysics. So we’re in case (2) in my dichotomy above. Then if you feed the computer any coarse-graining prescription — say, a suitable definition of what you mean by temperature, pressure, etc. — the computer will be able to tell you the values of these coarse-grained, macroscopic observables at any future time.

    The example you cite from List in your blog post actually illustrates this point very clearly. You give us the micro-level diagram (Figure 1) and also the rule for coarse-graining, and so the macro-level diagram is completely predictable! That is, if we were only given Figure 1, we could easily draw Figure 2!

    To illustrate the asymmetry I mentioned earlier, now suppose that you are given Figure 2 and that it consists of just a single column running vertically, with a big black dot in each cell. That’s a very deterministic-looking system — its position is a constant in time! But suppose someone then told you that the underlying microphysics was indeterministic, so that we’re in case (1). Would it be possible to draw Figure 1 now?

    I suppose ultimately all this comes down to the question of what is possible in practice versus what is possible in principle.

    It is true that — in practice — one cannot track the (conceivably) deterministic microphysics of a sufficiently complicated system to make perfect predictions about the (seemingly indeterministic) macrophysics.

    But — at the level of principle — it’s tough to argue regarding the macrophysics, as you write in your blog post, that “It is decoupled from whatever happens at the foundations of reality,” let alone that free will is truly restored. What happens at the level of the macrophysics is completely determined by what happens at the level of the microphysics. One might even go so far as to say that the indeterminism of the macrophysics in this case is superficial, not fundamental, in contrast to the case of an indeterministic microphysics. I’d certainly be hard-pressed to call a universe with deterministic microphysics anything but deterministic, and if Einstein had succeeded in finding a deterministic microphysics underlying quantum theory, I think most physicists would have regarded the seeming indeterminism of quantum theory as being fake — just a convenient approximation.

    Near the end of your blog post, you cite an example by List:

    “Nothing would be gained by spelling out all the microscopic outcomes; to the contrary, you’d fail to grasp the way the microstates are grouped. There’s real structure here, not just a mirage caused by imperfect knowledge. ‘If we were to try to capture all scientific phenomena just at the lower level, we would actually miss out on some important higher-level regularities,’ List says.”

    Notice that List says “just at the lower level.” But as I’ve argued repeatedly, there is no reason in practice or principle why an agent (say, a Laplace’s demon) that is assumed to be able to keep complete track of the microphysics must somehow be limited in knowledge to being “just at the lower level.” Such an agent could certainly also know the coarse-graining recipe relating the microstates to the macrostates (as is made explicitly clear in Figures 1 and 2 in your blog post), and with both these kinds of knowledge, the agent could make perfect predictions about the behavior of the macrostates of the system.

    If such an agent were a physical computer sitting on your desk and could tell you exactly what you were going to do today and what the weather was going to be (Apple — please don’t get any ideas!), would you still say you are indeterministic in anything but a superficial sense?

    There’s a similar statement in your article in Scientific American, where you write

    “For this reason, the die roll is not merely apparently random, as people sometimes say. It is truly random. A god-like demon might brag that it knows exactly what will happen, but it knows only what will happen to the atoms. It does not even know what a die is because that is higher-level information. The demon never sees a forest, only trees.”

    Again, there is nothing that prevents the demon from knowing both what will happen to the atoms and also how they form the faces of the die. If the demon is really capable of predicting with absolute certainty how the atoms will behave (that is, chaos and the indeterminism of the die being an open system aren’t issues), then the demon is completely capable of knowing the coarse-graining scheme relating the microstates of the atoms to the macrostates describing the numbers on the faces of the die (that is, there is no reason such a demon can’t know the locations of the trees and also how the trees together make up a forest), and thus there is no obstruction to the demon predicting with absolute certainty what number will be showing at the top of the die after the roll.

    It’s therefore misleading to argue that the die is indeterministic in this case. There isn’t any way to get perfectly predictable determinism at the atomic level without perfect determinism (in principle) at the macroscopic level.

  2. Random determinism is still determinism (it just needs statistical analysis). Why doesn’t anyone talk about real indeterminism? What would that be? Something that can’t be defined deterministically; so if free will is not deterministic, then it can’t be defined deterministically. If this is true, it would be impossible to create a rigorous explanation of free will.

  3. Dear George,

    Very well written piece! The picture is very clear, and I think you mostly are right.

    However, there is one thing I want to mention. While reading, I had the feeling that the article is based on an assumption which I am not sure is correct. It became clear to me when I read this part:

    “In quantum mechanics, particles can act in lockstep despite the distance between them, so they must either be deterministic (so that their coordinated behavior can be preprogrammed into them) or nonlocal (so that they can coordinate on the fly)—this is the dilemma that Einstein articulated in the famous EPR paper of 1935.”

    Do you know of a deterministic solution of the EPR paradox which is preprogrammed and local? Also, can one preprogram the spin to have all the answers to possible spin measurements, as in the Kochen-Specker theorem? Or preprogram the photons in Wheeler’s delayed-choice experiment so that they give the correct predictions?

    As far as I know, the only way to do this is if the observed particles know from the beginning what measurements will be made. This kind of prescience or conspiracy between the initial conditions of the observed particle and that of the observer who chooses the observable is needed in any hidden-variable theory, deterministic or not, and is called “superdeterminism”.

    Another issue is that it is not clear how this kind of emergent randomness allows free will. When we throw a die, the outcome is random only because we live in the emergent layer of the cake. But the fact that we don’t know the bottom layer of the cake, which is deterministic in your cake recipe, doesn’t mean we are free. We are just ignorant.

    Sorry if I misunderstood something; this is how your cake tasted to me 🙂

    Best regards,
    Cristi

    1. Ah, good point: I should clarify that the EPR paper (and Einstein’s related writings) identified a dilemma, but the experimental violation of the Bell inequalities showed that determinism doesn’t restore locality.
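
      For reference, the standard CHSH form of Bell’s inequality, which any local theory with predetermined outcomes must satisfy, is

      $$ |E(a,b) + E(a,b') + E(a',b) - E(a',b')| \le 2, $$

      whereas quantum mechanics predicts, and experiment confirms, values as large as $2\sqrt{2}$.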

      As for the question of whether this truly restores free will, List’s point is that this isn’t just a question of ignorance. Free will is real, but not fundamental. To talk about you making a decision, we need to work at a level where *you* exist. I realize that violates many of our intuitions, but that’s what makes the discussion interesting.

      1. A possible answer to the question of the compatibility between the determinism in the bottom layer of the cake and the free will allowed by the emergent indeterminism (the icing on the cake) was proposed in these essays:

        [1] http://www.lse.ac.uk/CPNSS/pdf/DP_withCover_Measurement/Meas-DP%2016%2001.pdf
        [2] http://fqxi.org/community/essay/winners/2008.1#Stoica
        [3] http://www.noema.crifst.ro/doc/2012_5_01.pdf
        [4] http://www.scottaaronson.com/papers/giqtm3.pdf

  4. Hi George, many thanks for this interesting and useful post. The key point is, as you point out, that there’s redundancy in how macroscopic objects are put together. This multiple realisation at the lower levels allows higher-level purposes to be fulfilled by selecting whichever lower-level realisation best fulfills those purposes — for purpose is the centre of all life, from the biomolecular level up. This does not in any way undermine the lower-level physics: rather, it conscripts the physics to higher-level purpose. This is central to the way the mind works; see, e.g., Deco and Rolls on the stochastic brain.

  5. To repeat my comments from a Facebook post on this:

    There being a deterministic layer “under” the apparent quantum indeterminism runs into three basic problems:

    1. It doesn’t tell us how “structureless” entities like muons can have different life spans – we don’t have reason to believe in some kind of clockwork mechanism inside them.

    2. The wider issue is the transition from “something” being widespread (known because of interference experiments) to suddenly, as far as we can tell, all being at a particular little spot – with something preventing other spots all over the universe from displaying that presence, too. Yes, there’s conservation of mass-energy and so on, but “how” does the universe coordinate it? (And as Hume showed, rules are just descriptions of observed order, not an explanation OF the order.) If delicate underlying processes were out there tickling, say, one detector more than another, that still doesn’t tell us how the widespread presence of the detected entity suddenly is not all out there anymore. (And there is no intelligible way to represent that anyway, in terms of the relativity of simultaneity.)

    3. The apparent core inability of any realistic properties that could inhere separately in far-flung particles to account for the strong correlation of observations about them. So there are no separate linear polarizations for two entangled photons to have that would account for how one of them being absorbed by, say, a 20° LP filter means that a 20° LP filter in the other arm dependably absorbs its partner as well – and for all the strong correlations at other angles.
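
    For concreteness, the quantum prediction at issue: for photons in the entangled polarization state $(|HH\rangle + |VV\rangle)/\sqrt{2}$, the probability that both pass linear polarizers set at angles $\alpha$ and $\beta$ is

    $$ P(\text{pass, pass}) = \tfrac{1}{2}\cos^2(\alpha - \beta), $$

    a correlation that, by Bell’s theorem, no assignment of separate pre-existing polarizations can reproduce at all angles.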

    Yes, you’ve heard all that before, but I just don’t see how your musings, or anyone else’s, on “explaining” the unpredictability really can work. That goes even for Bohmian mechanics. The claim that it works depends on presuming a contrived, helpful “random distribution” in the first place, to get the Born rule and so on, as well as on brushing off effects such as muon decay, which are not part of “patterns” of how bunches of particles hit a screen. Also, Nick Herbert came up with a clever challenge … a while ago: since dBB mechanics says electrons don’t really orbit in atoms (!) but just sit there, muonic atoms shouldn’t show relativistic time dilation of the muon decay times.