A New Twist on Occam's Razor: Exploring Complexity through AI

The emergence of complexity from simple algorithms is a phenomenon we see in both natural and artificial systems: even straightforward rules, iterated over time, can generate immense complexity.


Large neural networks like GPT-4 display increasingly complex behavior as they scale. Their emergent intelligence often confounds expectations. By detecting patterns across massive datasets, they creatively link concepts and respond intelligently. While still poorly understood, such systems hint at machine learning’s potential for advancing complexity science.

As model parameters expand into the billions, their inferential abilities change qualitatively. For instance, large language models can display unexpected skills in spatial reasoning, induced from text data alone.

A recent paper by David Krakauer, who leads the Santa Fe Institute, theorizes that surpassing critical complexity “thresholds” allows micro-patterns to self-organize into macro-level insights that were previously obscured. Or more simply: sufficient complexity grants algorithms access to previously invisible patterns in natural phenomena.

As systems grow more complex, more possibilities emerge. Small changes at a basic level gradually accumulate into significant, large-scale developments. Natural selection's algorithm has delivered vast complexity in life on Earth; reinforcement learning has delivered vast complexity in AI.
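The idea that a trivial rule can generate intricate behavior has a standard illustration (not from the article itself): an elementary cellular automaton such as Rule 30, whose one-line update rule produces a chaotic-looking pattern from a single live cell. A minimal sketch:

```python
# Rule 30: an elementary cellular automaton. Each cell's next state is a
# function of its 3-cell neighborhood, looked up in the bits of the number 30.
RULE = 30

def step(cells):
    """Apply one Rule 30 update to a row of 0/1 cells (edges fixed at 0)."""
    padded = [0] + cells + [0]
    return [
        (RULE >> (padded[i - 1] * 4 + padded[i] * 2 + padded[i + 1])) & 1
        for i in range(1, len(padded) - 1)
    ]

width = 31
row = [0] * width
row[width // 2] = 1  # start from a single live cell in the middle

for _ in range(15):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```

Running this prints a triangle of nested, aperiodic structure: vast apparent complexity from an eight-entry lookup table.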

Here is the counterintuitive idea: rather than adding confusion, Krakauer suggests that extreme complexity can actually help us understand and create new things. When machine learning models become sufficiently complex, a phase shift occurs. Simple parts, like neurons or words, organize into complex groups capable of detecting more intricate patterns. At this point, neural systems cross a "complexity threshold" where micro-level network components self-organize into macro-level functional structures that implicitly represent complex statistics. Complexity science then steps in, simplifying these detailed models to find what matters at a higher (macro) scale.
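The move complexity science makes here is coarse-graining: average away micro-level detail and keep only the macro-scale signal. A toy sketch of the idea (the signal, block size, and function name are illustrative assumptions, not from Krakauer's paper):

```python
import random

random.seed(0)

# Micro level: a long, noisy sequence of fine-grained measurements,
# with a slow upward drift buried under unit-variance noise.
micro = [0.01 * i + random.gauss(0, 1.0) for i in range(1000)]

def coarse_grain(signal, block=100):
    """Replace each block of micro-states with its mean (one macro-state)."""
    return [
        sum(signal[i : i + block]) / block
        for i in range(0, len(signal), block)
    ]

macro = coarse_grain(micro)
print(len(micro), "micro-states ->", len(macro), "macro-states")
```

The macro description is a hundred times shorter, yet the drift that noise obscures at the micro level becomes plainly visible in the block means: a pattern that was "previously invisible" at fine grain.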
