A New Twist on Occam's Razor: Exploring Complexity through AI

The emergence of complexity from simple algorithms is a phenomenon we see in both natural and artificial systems: even straightforward algorithms, iterated over time, can produce immense complexity.

An abstract image of a beach

Large neural networks like GPT-4 display increasingly complex behavior as they scale. Their emergent intelligence often confounds expectations. By detecting patterns across massive datasets, they creatively link concepts and respond intelligently. While still poorly understood, such systems hint at machine learning’s potential for advancing complexity science.

As model parameters expand into the billions, their inferential abilities change qualitatively. For instance, large language models can display unexpected skills in spatial reasoning, induced from text data alone.

A recent paper by David Krakauer, who leads the Santa Fe Institute, theorizes that surpassing critical complexity “thresholds” allows micro-patterns to self-organize into macro-level insights that were previously obscured. Or more simply: sufficient complexity grants algorithms access to previously invisible patterns in natural phenomena.

As things get more complex, more possibilities emerge. Small changes at a basic level gradually accumulate into significant, large-scale developments. Natural selection’s algorithm has delivered vast complexity in life on earth; reinforcement learning has delivered vast complexity in AI.

Now we have a counterintuitive idea: instead of making things more confusing, Krakauer suggests that extreme complexity actually helps us understand and create new things. When machine learning models get sufficiently complex, a phase shift occurs. Simple parts, like neurons or words, organize into complex assemblies capable of detecting more intricate patterns. At this point, neural systems cross a "complexity threshold" where micro-level network components self-organize into macro-level functional structures that implicitly represent complex statistics. Complexity science then steps in by simplifying these detailed models to find what matters at a higher (macro) scale.

The macro scale is meaningful to us. The micro scale is meaningful to AI. Imagine data as grains of sand on a beach: an AI could look for patterns across billions of grains of sand. It might find patterns in the grains that relate to local geology or the ecological impact of storm waves on the microbiota of the sand. But for a human, what matters is the beach itself: a crowded, self-organizing system of surf-seekers who have their own impact on the beach system. With access to a “sand AI,” humans who understand the complex system of ecology, geology, weather, and human activity gain an entirely new level of potential knowledge. Krakauer speculates that this multi-scale interplay between fine-grained statistics (by AI) and coarse-grained mechanics (by humans) may offer new traction on scientific discovery and creativity.
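
To make this micro/macro split a little more concrete, here is a minimal Python sketch of coarse-graining; the data, bin sizes, and variable names are all invented for illustration. Millions of micro-level "grain" measurements are collapsed into a handful of macro-level beach segments, the kind of variables a human geologist or ecologist actually reasons about.

```python
import numpy as np

# Hypothetical micro-level data: one entry per "grain of sand", with a position
# along the shoreline and two measured properties. All values are invented.
rng = np.random.default_rng(seed=0)
n_grains = 1_000_000
position = rng.uniform(0, 1000, n_grains)                        # metres along the beach
grain_size = rng.lognormal(mean=-1.0, sigma=0.3, size=n_grains)  # grain diameter, mm
organic_matter = rng.beta(2, 8, size=n_grains)                   # organic fraction, 0..1

# Coarse-grain: collapse a million micro measurements into ten macro "beach
# segments" by averaging within 100 m bins. The segment-level means are the
# macro variables a human can connect to geology, ecology, or weather.
bins = np.linspace(0, 1000, 11)
segment = np.digitize(position, bins) - 1
macro_grain_size = np.array([grain_size[segment == i].mean() for i in range(10)])
macro_organic = np.array([organic_matter[segment == i].mean() for i in range(10)])

print("mean grain size per segment (mm):", macro_grain_size.round(3))
print("mean organic fraction per segment:", macro_organic.round(3))
```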

Krakauer proposes a way to do this: a meta version of the famous simplification principle, Occam's razor. The usual Occam's razor focuses on making explanations simpler. Meta-Ockham's razor takes a different view: don't worry as much about describing what already exists. Instead, look for simple and efficient generative rules that can gradually build up complex things over time.

Remember how biological evolution has simple steps—variation, selection, reproduction—but repeating those steps produces all the diversity of life on earth. The rules are simple but the results are complex.
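
As a toy illustration of how little machinery those three steps need, here is a minimal evolutionary loop in Python. It is only a sketch, using a made-up "count the ones" fitness function rather than anything from biology or from Krakauer's paper: the generator takes a few lines, yet repeating it steadily produces fitter individuals.

```python
import random

# Toy evolution: evolve bit strings toward all ones. Fitness = number of ones.
# The generator is just three simple steps, repeated: variation, selection, reproduction.
GENOME_LEN = 40
POP_SIZE = 50
GENERATIONS = 100
MUTATION_RATE = 0.02

def fitness(genome):
    return sum(genome)

# Start from a random population.
population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]

for generation in range(GENERATIONS):
    # Selection: keep the fitter half of the population.
    population.sort(key=fitness, reverse=True)
    parents = population[: POP_SIZE // 2]

    # Reproduction with variation: each parent yields one copy with occasional bit flips.
    children = [
        [1 - bit if random.random() < MUTATION_RATE else bit for bit in parent]
        for parent in parents
    ]
    population = parents + children

best = max(population, key=fitness)
print(f"best fitness after {GENERATIONS} generations: {fitness(best)} / {GENOME_LEN}")
```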

Meta-Ockham cares more about the simplicity of the generator than the complexity of the generated. Looking at specific snapshots of big datasets and AI models can be overwhelming. Krakauer’s intuition is that compact generative rules are far more explanatory and powerful, and that they help us make better use of hyperscale AI.

Let’s look at an example from one of the most established and productive fields of complexity science: network science. Imagine a social network with its multiple connections between individuals, like followers or friend links. Now think about how clusters of tightly interlinked people emerge—cliques and communities. These groups then link up based on shared interests or affiliations into extended social circles and subcultures. Some especially influential figures act as connectors between nearly separate clusters.

Gradually, a complex social network topology self-organizes from the ground up, simply because individuals link to others they relate to, without any centralized coordination. The macro-level network exhibits characteristics, like inequality in influence, that are not present at the individual level. Trends and ideas go viral and propagate through key bridges. Macro states such as these are more meaningful and explanatory than the raw patterns in the data, even though those patterns hold the keys to the AI’s predictive power.

Meta-Ockham can help us make sense of the sometimes baffling ways that social networks operate. Rather than trying to track every individual as a component, a few simple mechanisms create the vast, dynamic complexity of a social network. One is preferential attachment: individuals link to others in proportion to how many connections those others already have, a "rich get richer" network rule. Another is the rule of small-world bridges: even a few links formed randomly between otherwise distant clusters greatly shorten the distance between any two points in the network. Occasional weak ties create surprises and can trigger viral information flows. Network science has effectively identified and quantified such "rules" and algorithms, and they are now practically applicable. Investors leverage these insights to seek out network effects for greater returns. Similarly, as frequent users of social networks, we've developed an intuitive understanding of the peculiarities inherent in some online interactions.
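
As a rough sketch of those two rules in action, assuming the networkx library is available (the graph sizes and parameters below are arbitrary choices for illustration): a preferential-attachment model reproduces the "rich get richer" skew in connectivity, and rewiring a small fraction of links in a clustered ring shows how a few random bridges collapse the average distance between nodes.

```python
import networkx as nx

# Rule 1: preferential attachment ("rich get richer").
# The Barabási–Albert model adds nodes one at a time; each new node links to
# existing nodes with probability proportional to their current degree.
g_pa = nx.barabasi_albert_graph(n=10_000, m=2, seed=42)
degrees = sorted((d for _, d in g_pa.degree()), reverse=True)
print("top 5 node degrees:", degrees[:5])            # a handful of heavily connected hubs...
print("median degree:", degrees[len(degrees) // 2])  # ...while a typical node has only a few links

# Rule 2: small-world bridges.
# The Watts–Strogatz model starts from a clustered ring lattice and rewires a
# small fraction of edges at random; those few long-range links sharply shorten
# the typical distance between any two nodes.
g_local = nx.connected_watts_strogatz_graph(n=2_000, k=10, p=0.0, seed=42)     # no shortcuts
g_bridged = nx.connected_watts_strogatz_graph(n=2_000, k=10, p=0.05, seed=42)  # 5% of edges rewired
print("average path length, no shortcuts:", nx.average_shortest_path_length(g_local))
print("average path length, with bridges:", nx.average_shortest_path_length(g_bridged))
```

The quantities printed here, the skew in degrees and the average path length, are exactly the kind of coarse-grained, macro-level observables that are meaningful at the human scale.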

The Meta-Ockham approach would be to identify the minimal generative principles, interaction biases, and recurring local node behaviors that reliably give rise to complex global network topologies and, in turn, to community emergence across multiple scales in the social network. This simplifies the co-evolution story by concentrating on the processes that create complexity. It pinpoints the most effective level within the system for a mechanistic thinker to apply simplification, streamlining our understanding of complex systems.

Humans gain better, deeper understanding by taking qualitative leaps. These moments of insight could well be paralleled in very large networks, which self-organize and learn in ways that eventually cross critical boundaries. Perhaps we can think of this as machine insight: micro-discoveries accumulate until, suddenly, macro-level comprehension clicks. Aha!

The key for the future is a division of labor: AI uses its predictive power to point us toward an ever-increasing space of possibilities, while we humans focus on gleaning the meaning and building new knowledge of the world.

A clarifying note on terminology: Wikipedia and others use the spelling Occam's razor, while Krakauer follows the spelling of the principle's namesake, William of Ockham.
