The Puzzle of How Large-Scale Order Emerges in Complex Systems
THE ORIGINAL VERSION of this story appeared in Quanta Magazine.
A few centuries ago, the swirling polychromatic chaos of Jupiter’s atmosphere spawned the immense vortex that we call the Great Red Spot.
From the frantic firing of billions of neurons in your brain comes your unique and coherent experience of reading these words.
As pedestrians each try to weave their path on a crowded sidewalk, they begin to follow one another, forming streams that no one ordained or consciously chose.
The world is full of such emergent phenomena: large-scale patterns and organization arising from innumerable interactions between component parts. And yet there is no agreed scientific theory to explain emergence. Loosely, the behavior of a complex system might be considered emergent if it can’t be predicted from the properties of the parts alone. But when will such large-scale structures and patterns arise, and what’s the criterion for when a phenomenon is emergent and when it isn’t? Confusion has reigned. “It’s just a muddle,” said Jim Crutchfield, a physicist at the University of California, Davis.
Though the problem remains unsolved, over the past few years, a community of physicists, computer scientists, and neuroscientists has been working toward a better understanding. These researchers have developed theoretical tools for identifying when emergence has occurred. And in February, Fernando Rosas, a complex-systems scientist at the University of Sussex, together with the neuroscientist Anil Seth and five coauthors, went further, proposing a framework for understanding how emergence arises.
A complex system exhibits emergence, according to the new framework, by organizing itself into a hierarchy of levels that each operate independently of the details of the lower levels. The researchers suggest we think about emergence as a kind of “software in the natural world.” Just as the software of your laptop runs without having to keep track of all the microscale information about the electrons in the computer circuitry, so emergent phenomena are governed by macroscale rules that seem self-contained, without heed to what the component parts are doing.
Using a mathematical formalism called computational mechanics, the researchers identified criteria for determining which systems have this kind of hierarchical structure. They tested these criteria on several model systems known to display emergent-type phenomena, including neural networks and Game of Life–style cellular automata. Indeed, the degrees of freedom, or independent variables, that capture the behavior of these systems at microscopic and macroscopic scales have precisely the relationship that the theory predicts.
No new matter or energy appears at the macroscopic level in emergent systems that isn’t there microscopically, of course. Rather, emergent phenomena, from Great Red Spots to conscious thoughts, demand a new language for describing the system. “What these authors have done is to try to formalize that,” said Chris Adami, a complex-systems researcher at Michigan State University. “I fully applaud this idea of making things mathematical.”
Rosas came at the topic of emergence from multiple directions. His father was a famous conductor in Chile, where Rosas first studied and played music. “I grew up in concert halls,” he said. Then he switched to philosophy, followed by a degree in pure mathematics, giving him “an overdose of abstractions” that he “cured” with a PhD in electrical engineering.
A few years ago, Rosas started thinking about the vexed question of whether the brain is a computer. Consider what goes on in your laptop. The software generates predictable and repeatable outputs for a given set of inputs. But if you look at the actual physics of the system, the electrons won’t all follow identical trajectories each time. “It’s a mess,” said Rosas. “It’ll never be exactly the same.”
The software seems to be “closed,” in the sense that it doesn’t depend on the detailed physics of the microelectronic hardware. The brain behaves somewhat like this too: There’s a consistency to our behaviors even though the underlying neural activity is never exactly the same from one occasion to the next.
Rosas and colleagues figured that in fact there are three different types of closure involved in emergent systems. Would the output of your laptop be any more predictable if you invested lots of time and energy in collecting information about all the microstates—electron energies and so forth—in the system? Generally, no. This corresponds to the case of informational closure: As Rosas put it, “All the details below the macro are not helpful for predicting the macro.”
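To see what informational closure amounts to in practice, here is a minimal sketch in Python. The four-state micro dynamics, the block labels, and the sample size are illustrative assumptions of mine, not taken from the paper: a micro-level system hops among four states while a macro variable records only which of two blocks it occupies, and estimating the uncertainty about the next macro value shows that knowing the exact micro state predicts no better than knowing the macro value alone.

```python
# Toy illustration of informational closure (illustrative dynamics, not the authors' model):
# the macro variable's future is predicted just as well from the macro present
# as from the full micro state.
import math
import random
from collections import Counter

random.seed(1)

# Four micro states; the macro variable records only the block A = {0, 1}, B = {2, 3}.
TRANSITIONS = {
    0: [0.1, 0.2, 0.3, 0.4],
    1: [0.2, 0.1, 0.4, 0.3],
    2: [0.5, 0.1, 0.2, 0.2],
    3: [0.3, 0.3, 0.3, 0.1],
}
MACRO = {0: "A", 1: "A", 2: "B", 3: "B"}

def conditional_entropy(pairs):
    """H(Y | X), in bits, estimated from a list of (x, y) samples."""
    joint = Counter(pairs)
    marginal = Counter(x for x, _ in pairs)
    n = len(pairs)
    return -sum((c / n) * math.log2(c / marginal[x]) for (x, _y), c in joint.items())

# Simulate a long run, recording what we know now (micro or macro) and the next macro value.
s = 0
micro_pairs, macro_pairs = [], []
for _ in range(500_000):
    nxt = random.choices(range(4), weights=TRANSITIONS[s])[0]
    micro_pairs.append((s, MACRO[nxt]))          # condition on the full micro state
    macro_pairs.append((MACRO[s], MACRO[nxt]))   # condition on the macro block only
    s = nxt

# The two numbers come out essentially equal: the micro details below the macro
# are not helpful for predicting the macro.
print(round(conditional_entropy(micro_pairs), 3))
print(round(conditional_entropy(macro_pairs), 3))
```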
What if you want not just to predict but to control the system—does the lower-level information help there? Again, typically no: Interventions we make at the macro level, such as changing the software code by typing on the keyboard, are not made more reliable by trying to alter individual electron trajectories. If the lower-level information adds no further control of macro outcomes, the macro level is causally closed: It alone is causing its own future.
This situation is rather common. Consider, for instance, that we can use macroscopic variables like pressure and viscosity to talk about (and control) fluid flow, and knowing the positions and trajectories of individual molecules doesn’t add useful information for those purposes. And we can describe the market economy by considering companies as single entities, ignoring any details about the individuals that constitute them.
The existence of a useful coarse-grained description doesn’t, however, by itself define an emergent phenomenon, said Seth. “You want to say something else in terms of the relationship between levels.” Enter the third type of closure that Rosas and colleagues think is needed to complete the conceptual apparatus: computational closure. For this they have turned to computational mechanics, a discipline pioneered by Crutchfield.
Crutchfield introduced a conceptual device called the epsilon (ε) machine. This device can occupy any one of a finite set of states and can predict its own future state on the basis of its current one. It’s a bit like an elevator, said Rosas; an input to the machine, like pressing a button, will cause the machine to transition to a different state (floor) in a deterministic way that depends on its past history—namely, its current floor, whether it’s going up or down, and which other buttons were pressed already. Of course an elevator has myriad component parts, but you don’t need to think about them. Likewise, an ε-machine is an optimal way to represent how unspecified interactions between component parts “compute”—or, one might say, cause—the machine’s future state.
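The elevator analogy can be made concrete with a small sketch. The state names and the transition rule below are invented for illustration and are not the formalism from the paper: the machine occupies one of a handful of states, each bundling just the features of its history that matter for what happens next, and the same state and input always produce the same next state.

```python
# A toy, deterministic finite-state machine in the spirit of the elevator analogy.
# (All names and rules here are illustrative, not the paper's construction.)

# The finite set of states the machine can occupy. Each state packages the
# history that matters for the future: the current floor and direction of travel.
ELEVATOR_STATES = [
    ("floor_1", "idle"),
    ("floor_2", "idle"),
    ("floor_2", "up"),
    ("floor_2", "down"),
    ("floor_3", "idle"),
]

def next_state(state, button_pressed):
    """Deterministic transition: the same state and input always yield the same
    next state, with no reference to the machine's internal component parts."""
    floor, _direction = state
    current = int(floor.split("_")[1])
    target = button_pressed
    if target > current:
        return (f"floor_{current + 1}", "up" if target > current + 1 else "idle")
    if target < current:
        return (f"floor_{current - 1}", "down" if target < current - 1 else "idle")
    return (floor, "idle")

# Example: from floor 1, pressing 3 moves the machine up one floor at a time.
state = ("floor_1", "idle")
for _ in range(3):
    state = next_state(state, 3)
    assert state in ELEVATOR_STATES
    print(state)
```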
Computational mechanics allows the web of interactions between a complex system’s components to be reduced to the simplest description, called its causal state. The state of the complex system at any moment, which includes information about its past states, produces a distribution of possible future states. Whenever two or more such present states have the same distribution of possible futures, they are said to be in the same causal state. Our brains will never twice have exactly the same firing pattern of neurons, but there are plenty of circumstances where nevertheless we’ll end up doing the same thing.
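A rough sketch shows how causal states can be read off from data. The toy process, the history length, and the rounding threshold below are illustrative assumptions of mine: histories are grouped together whenever they lead to the same estimated distribution over what comes next, and for this simple source every observable history collapses into just two causal states.

```python
# Sketch of causal-state grouping: histories with the same distribution of
# possible futures belong to the same causal state. The source and thresholds
# are illustrative, not the systems studied in the paper.
import random
from collections import defaultdict

random.seed(0)

def toy_source(n):
    """Toy binary source (sometimes called the golden mean process):
    a 1 is always followed by a 0; a 0 is followed by 0 or 1 with equal probability."""
    out, prev = [], 0
    for _ in range(n):
        nxt = 0 if prev == 1 else random.randint(0, 1)
        out.append(nxt)
        prev = nxt
    return out

data = toy_source(200_000)
k = 3  # length of history considered

# Estimate P(next symbol | history of length k) from the data.
counts = defaultdict(lambda: [0, 0])
for i in range(k, len(data)):
    hist = tuple(data[i - k:i])
    counts[hist][data[i]] += 1

# Group histories whose estimated future distributions (rounded) agree.
groups = defaultdict(list)
for hist, (zeros, ones) in counts.items():
    p_one = round(ones / (zeros + ones), 1)
    groups[p_one].append(hist)

# Two groups emerge: histories ending in 1 (next symbol is surely 0) and
# histories ending in 0 (next symbol is 0 or 1 with equal probability).
for p_one, hists in sorted(groups.items()):
    print(f"P(next=1) ~ {p_one}: {sorted(hists)}")
```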
Rosas and colleagues considered a generic complex system as a set of ε-machines working at different scales. One of these might, say, represent all the molecular-scale ions, ion channels, and so forth that produce currents in our neurons; another represents the firing patterns of the neurons themselves; another, the activity seen in compartments of the brain such as the hippocampus and frontal cortex. The system (here the brain) evolves at all those levels, and in general the relationship between these ε-machines is complicated. But for an emergent system that is computationally closed, the machines at each level can be constructed by coarse-graining the components on just the level below: They are, in the researchers’ terminology, “strongly lumpable.” We might, for example, imagine lumping all the dynamics of the ions and neurotransmitters moving in and out of a neuron into a representation of whether the neuron fires or not. In principle, one could imagine all kinds of different “lumpings” of this sort, but the system is only computationally closed if the ε-machines that represent them are coarse-grained versions of each other in this way. “There is a nestedness” to the structure, Rosas said.
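The lumpability idea has a simple counterpart for an ordinary Markov chain, which is easier to sketch than a full ε-machine; treating it this way is a simplification of mine rather than the paper's construction. A partition of micro states is strongly lumpable when every state in a block has the same total probability of jumping into each block, so that the blocks themselves form a well-defined coarse-grained chain.

```python
# Minimal sketch of strong lumpability for a Markov chain (a stand-in for the
# epsilon-machine setting in the paper). The transition matrix and partitions
# below are made up for illustration.
import numpy as np

def is_strongly_lumpable(P, blocks, tol=1e-9):
    """P: row-stochastic transition matrix over micro states.
    blocks: list of lists of micro-state indices (the proposed coarse-graining)."""
    for block in blocks:
        for target in blocks:
            # Probability of landing anywhere in `target`, from each state in `block`.
            mass = [P[i, target].sum() for i in block]
            if max(mass) - min(mass) > tol:
                return False
    return True

# Four micro states; under the first partition, both states in each block have
# identical block-to-block transition probabilities, so the lumping succeeds.
P = np.array([
    [0.2, 0.3, 0.1, 0.4],
    [0.4, 0.1, 0.2, 0.3],
    [0.25, 0.25, 0.25, 0.25],
    [0.1, 0.4, 0.3, 0.2],
])
print(is_strongly_lumpable(P, [[0, 1], [2, 3]]))  # True: a valid coarse-graining
print(is_strongly_lumpable(P, [[0, 2], [1, 3]]))  # False: this lumping destroys the dynamics
```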
A highly compressed description of the system then emerges at the macro level that captures those dynamics of the micro level that matter to the macroscale behavior—filtered, as it were, through the nested web of intermediate ε-machines. In that case, the behavior of the macro level can be predicted as fully as possible using only macroscale information—there is no need to refer to finer-scale information. It is, in other words, fully emergent. The key characteristic of this emergence, the researchers say, is this hierarchical structure of “strongly lumpable causal states.”