Our obsession with the parallels between human and machine intelligence.
Key Points
Physicist Richard Feynman famously said, “What I cannot create, I do not understand.” This idea has played an important role at the intersection of cognitive science and AI, two fields with a long history of synergy. As humans make machines that think more like us, we inadvertently find clues that mirror the complexities of our own minds.
Sometimes this process is metaphorical: researchers on one side take inspiration from the other. The artificial neural network was inspired by the biological neural networks in our brains. Neurons receive, process, and transmit information. Artificial neural networks adopted this fundamental concept, allowing AI to learn from vast amounts of data. While the simplistic models of artificial neurons are a far cry from the complexity of our brains, with their hormones, neurotransmitters, and glial cells, the idea stemmed from our understanding of neurobiology.
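The analogy above can be made concrete. A minimal sketch of a single artificial neuron follows: it computes a weighted sum of its inputs plus a bias and passes the result through a sigmoid nonlinearity, loosely mirroring how a biological neuron integrates incoming signals and fires past a threshold. The function name and the weight values are illustrative, not from any particular library.

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial neuron: weighted sum of inputs, then a
    sigmoid activation squashing the result into a 'firing rate'
    between 0 and 1."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid activation

# Two inputs with hand-picked (purely illustrative) weights
print(neuron([0.5, 0.8], [0.4, -0.2], 0.1))
```

Real networks stack thousands of such units and learn the weights from data, but every layer reduces to this same weighted-sum-and-squash step.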
Sometimes creativity comes from direct application of an insight. The discovery of dopamine's role in the brain's reward system has had a significant influence on the development of reinforcement learning. Dopamine neurons fire when an unexpected reward is received, which aligns with the "reward prediction error" concept in reinforcement learning algorithms. In essence, these algorithms adjust their predictions based on the difference between the expected and received reward. This is analogous to how dopamine levels fluctuate in our brain. Reinforcement learning has become an important paradigm in machine learning.
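The reward-prediction-error idea can be shown in a few lines. Below is a minimal sketch of a temporal-difference (TD) update, the textbook form of the algorithm the passage describes; the error term `delta` plays the role attributed to dopamine signals. The function name and the learning-rate values are illustrative assumptions, not a specific library's API.

```python
def td_update(value, reward, next_value, alpha=0.1, gamma=0.9):
    """One temporal-difference update. delta is the reward
    prediction error: the gap between what was expected (value)
    and what was received (reward plus discounted future value)."""
    delta = reward + gamma * next_value - value  # prediction error
    return value + alpha * delta, delta

v = 0.0  # initial value estimate: no reward expected
# An unexpected reward produces a large positive error, nudging the
# estimate upward; as the reward becomes expected, the error shrinks.
for _ in range(20):
    v, delta = td_update(v, reward=1.0, next_value=0.0)
print(round(v, 3), round(delta, 4))
```

After repeated updates the value estimate approaches the true reward and the prediction error decays toward zero, paralleling how dopamine neurons stop firing for fully predicted rewards.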
Interdisciplinary collaboration is highly fruitful. We like to follow researchers who are fluent in the dual domains of cognitive science and computer science. Their work enlightens us in a two-for-one kind of way: we see how AI reflects us, and what we want (or don't want) in AI. The gap between human and machine intelligence is conceptually straightforward to describe, but the technical challenge of closing it is a different story. It's at this nexus that we find intriguing hints about where research might head next.
Metaphors can help us understand complex topics because key concepts “come along for the ride” when we use a good metaphor. But they can also oversimplify and mislead us. Describing brains as "prediction machines" highlights their Bayesian capacities but it also risks overlooking other evolutionary intricacies—brains are far more than a network for statistical learning. And while patterns in neuroscience do enlighten AI design, equating the two overlooks the vastness of possible minds.
The stakes have risen significantly. Researchers now seriously talk of conscious machines, even the possibility of building subjective experience into AI: machines that can feel joy but also suffer. For Transhumanists, the key to unlocking a superior consciousness and an undying existence lies in merging carbon- and silicon-based intelligences. This is next-level stuff: we're not sure if it is the most genius or the most ghastly idea in AI.
Humans make decisions based on feelings. No matter how rational we might want to think we are, feelings come first. It may well be that consciousness arose as an evolutionary adaptation to make decisions under uncertainty and in ambiguous situations—a shortcut to action. A crafty researcher could reasonably extend the analogy and see consciousness in machines as a shortcut to agency. Why not? Seems plausible.
We're stepping into unknown territory. The collaboration between AI and neuroscience might prove to be part of how we come to understand consciousness. While we have some theories about how biology gives rise to subjective experience, there is no leading one yet. It's possible that the same people currently working on AGI end up creating artificial consciousness. They don't know enough to know how not to create it.
As Feynman rightly emphasized, the act of creation is, in itself, a profound means of understanding. AI mirrors our intelligence, and we mirror it in turn. We are obsessed with what might next be visible in the mirror's reflection.
The Artificiality Weekend Briefing: About AI, Not Written by AI