Generative AI promises personalized learning at scale but risks creating dependency. Complementary cognitive artifacts enhance skills; competitive ones replace them. Effective AI tutors balance engagement and autonomy, expanding human cognition without diminishing critical abilities.
Can students learn better with generative AI? The promise of edtech has always been personalized learning at scale, and generative AI now offers the capability to understand a learner's knowledge state and respond like a teacher. But will this vision work, and what might be the downsides? A recent study suggests that generative AI tutors can help students learn, but only when the AI is specifically designed for learning. Without that design, students who rely on AI tutors are likely to perform worse once the digital crutch is taken away.
Researchers from the University of Pennsylvania, led by Hamsa Bastani, found that while AI tutors based on GPT-4 improved students' performance by up to 127% during practice sessions, those who used a basic version saw their performance drop by 17% when working independently later. "Access to GPT-4 can harm educational outcomes," the researchers warn. "To maintain long-term productivity, we must be cautious when deploying generative AI to ensure humans continue to learn critical skills."
How can we design AI tutors that enhance learning rather than replace it? One simple framework we like is based on the dual nature of cognitive tools. David Krakauer, president of the Santa Fe Institute, distinguishes between two types of cognitive artifacts: complementary and competitive.
"Complementary cognitive artifacts" act as teachers, enhancing our abilities even when we're not using them. Think of an abacus user who, after practice, can perform calculations in their head. "Competitive cognitive artifacts," on the other hand, amplify our abilities only while we're using them—like a calculator that leaves us no better at math when it's taken away.
"We are in the middle of a battle of artificial intelligences," Krakauer writes. "It is not HAL, an autonomous intelligence and a perfected mind, that I fear but an aggressive App, imperfect and partial, that diminishes autonomy."
The challenge in edtech is to create AI tutors that enhance and support learning rather than replace human skills. AI should serve as a tool that teaches, not one that takes over entirely. If AI tools take on too much, we risk losing essential human skills to technology. That may be fine for something like long division, which few people do by hand anymore, but we need to think carefully about which skills are worth preserving. Without a solid understanding of human pedagogy, it's hard to decide which skills are critical for students to learn and retain. Educators have to weigh the promise of AI-enabled learning at scale against the risk of making important skills obsolete, a complex task given that the value of any particular skill is deeply culturally dependent.
Let's return to the question of design, since it determines whether an AI tutor helps or hinders. A team at Google DeepMind has been developing an AI tutor called LearnLM-Tutor, designed with specific pedagogical principles in mind. Their approach, detailed in a recent technical report titled Towards Responsible Development of Generative AI for Education: An Evaluation-Driven Approach, emphasizes active learning, metacognition, and adaptive tutoring.
"LearnLM-Tutor is seen as significantly better than base Gemini 1.0 at promoting engagement in the learners," the researchers report. This engagement is crucial—it's the difference between passively receiving information and actively constructing knowledge.
The Google team's work aligns with Bastani's findings. While Bastani's basic GPT-4 tutor led to decreased performance, a more carefully designed version called GPT Tutor mitigated many of the negative effects. The key seems to be in how the AI interacts with students.
"Do not give away solutions prematurely," the Google researchers advise. "Encourage learners to come up with solutions." This approach mirrors traditional teaching methods that prioritize critical thinking over rote memorization.
Designing effective AI tutors extends beyond replicating human teaching methods—it involves leveraging AI's unique capabilities to enhance learning in innovative ways. For instance, AI tutors can provide personalized feedback at a scale that would be impossible for human teachers. They can adapt in real-time to a student's learning pace and style. And they can draw connections across vast amounts of information, potentially sparking insights that even expert human tutors might miss.
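To illustrate what real-time adaptation can look like at its simplest, here is a sketch of a mastery-tracking loop that adjusts problem difficulty as a student answers. The update rule and difficulty mapping are invented for illustration and are not drawn from any system discussed above.

```python
def update_mastery(mastery: float, correct: bool, rate: float = 0.2) -> float:
    """Nudge a running mastery estimate toward 1.0 on a correct answer
    and toward 0.0 on an incorrect one (an exponential moving average)."""
    target = 1.0 if correct else 0.0
    return (1 - rate) * mastery + rate * target

def next_difficulty(mastery: float) -> int:
    """Map mastery in [0, 1] to a difficulty level from 1 to 5,
    targeting problems slightly above the student's estimated level."""
    return min(5, int(mastery * 5) + 1)

# Example: difficulty tracks the student's estimated mastery in real time.
mastery = 0.3
for correct in [True, True, False, True]:
    mastery = update_mastery(mastery, correct)
    print(f"mastery={mastery:.2f}, next difficulty={next_difficulty(mastery)}")
```

Production tutoring systems use far richer learner models (for example, knowledge tracing across many skills), but the feedback loop is the same: observe the student, update the model of what they know, and choose the next step accordingly.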
The trick is to design these systems not as replacements for human cognition, but as tools that expand what we can think about in the first place. As Krakauer puts it, "The deep intellectual and ethical question facing our species is not how we'll prevent an artificial superintelligence from harming us, but how we will reckon with our hybrid nature."
This hybrid nature isn't new. Humans have been augmenting their cognition with tools for millennia, from memory techniques to modern smartphones. AI tutors are just the latest in this long lineage of cognitive artifacts. The Bastani study provides additional impetus for making sure that AI-enabled learning creates a generation of intelligent hybrid humans, not a generation of learners overly dependent on their AI assistants.