This week we are leaning into multiple metaphors: AI as a mirror, UX design as a window or room, and life as information. Plus: read about Michael Levin's upcoming presentation at the Imagining Summit, Helen's Book of the Week, and our upcoming events.
From Artificiality This Week
Our Ideas: The Design Illusion of LLMs. It’s easy to fall prey to the illusion that because LLMs look sleek, they must be well designed. But aesthetics alone do not equal design. As Steve Jobs once said, “Design is not just what it looks and feels like. Design is how it works.” Despite impressive capabilities, there is a disconnect between what large language models are expected to do and what they actually deliver. This gap stems from a fundamental design illusion—one that makes these technologies look polished and ready for prime time while hiding the real complexity and shortcomings underneath—and from two category errors that have led these products to be designed the wrong way: windows vs rooms and consumer vs professional.
The Science: What AI and Biology Reveal About the Future of Autonomous Systems. It’s fascinating when two people from completely different fields start asking the same questions. Michael Levin’s Self-Sorting Algorithm, a biologically inspired model built on simple sorting routines, shows how simple cellular systems organize themselves into complex structures, suggesting minimal forms of agency. Meanwhile, in AI, Blaise Agüera y Arcas’s Computational Life explores how self-replicating programs emerge from random code in computational environments, mimicking life-like behaviors. It’s curious that these two papers, tackling such similar ideas, came out at the same time. Is this coincidence, or does it tell us something about where the study of life and intelligence is heading?
Conversations: Shannon Vallor: The AI Mirror. A conversation with Shannon Vallor, professor of ethics and technology at the University of Edinburgh and author of The AI Mirror. In her book, Shannon invites us to rethink AI—not as a futuristic force propelling us forward, but as a reflection of our past, capturing both our human triumphs and flaws in ways that shape our present reality. She argues that AI doesn’t represent a new intelligence; rather, it reflects human cognition in all its complexity, limitations, and distortions. Like a mirror, AI is backward-looking, constrained by the data we’ve already provided it. It amplifies our biases and misunderstandings, giving us back a shallow, albeit impressive, reflection of our intelligence. Read more about the book—and why we think it's one of the best AI books for a general audience published this year—in Helen's Book of the Week below.
The Imagining Summit Preview: Michael Levin
Michael Levin, distinguished professor of biology at Tufts University and associate at Harvard's Wyss Institute, will talk about the emerging field of diverse intelligence and his frameworks for recognizing and communicating with the unconventional intelligence of cells, tissues, and biological robots. This work has led to new approaches to regenerative medicine, cancer, and bioengineering, but also to new ways to understand evolution and embodied minds. He will sketch out a space of possibilities - freedom of embodiment - which facilitates imagining a hopeful future of "synthbiosis", in which AI is just one of a wide range of new bodies and minds.
Check out the agenda for The Imagining Summit and send us a message if you would like to join us. We're excited to meet our Artificiality community in person!
💡
The Imagining Summit will be held on October 12-14, 2024 in Bend, Oregon. Dedicated to imagining a hopeful future with AI, The Imagining Summit gathers a creative, diverse group of imaginative thinkers and innovators who share our hope for the future with AI and are crazy enough to think we can collectively change things. Due to limited space, The Imagining Summit will be an invite-only event. Follow the link and request an invite to be a part of this exciting event!
In The AI Mirror, Shannon Vallor suggests that, rather than propelling us into the future, AI more often reflects the limitations of our past. Vallor, a philosopher specializing in technology and ethics, frames AI as a shallow mimic of human cognition, reflecting our triumphs and failures alike, and constraining our reality to something less than we are.
Vallor's metaphor is clever because it works on different levels:
Reflection, not reality: Just as a mirror produces a reflection of your body rather than a real body, AI systems trained on human thought and behavior are not actual minds, but reflections of human intelligence.
Backward-looking: AI mirrors point backward, showing only where data says we have already been, not where we might go in the future.
Flawed and distorted: Like mirrors that can distort images, AI systems are "immensely powerful but flawed mirrors" that reflect human errors, biases, and failures of wisdom.
Lack of understanding: Similar to how a mirror image lacks real understanding or consciousness, AI systems can mimic the outputs of an understanding mind without possessing actual comprehension.
Risk of narcissism: Vallor warns against becoming entranced by our own reflection in AI, similar to the myth of Narcissus who became captivated by his reflection. Don't we all know someone like that...
AI's backward-looking quality, she argues, makes it ill-equipped to solve today’s complex problems. By simply reproducing old patterns, these systems risk trapping us in outdated paradigms.
But the book is not fatalistic. Ultimately, Vallor challenges the deterministic narrative that AI will inevitably surpass or undermine human agency. The mirror metaphor emphasizes the gap between AI's reflective capabilities and genuine human intelligence—and there's a lot to see in that gap.
I think this is one of the best books on AI for a general audience that has been published this year. Vallor’s mirror metaphor does more than just critique AI—it reassures. By casting AI as a reflection rather than an independent force, she validates a crucial distinction: AI may be an impressive tool, but it’s still just that—a mirror of our past. Humanity, Vallor suggests, remains something separate, capable of innovation and growth beyond the confines of what these systems can reflect. This insight offers a refreshing confidence amidst the usual AI anxieties: the real power, and responsibility, remains with us.
Where in the World are Helen and Dave?
Select upcoming events where we'll be—join us!
The Polyopportunity: September 19-20 in New York, NY. Join the Acosta Institute and the House of Beautiful Business in New York City during UN Week to write a powerful counter-narrative to the polycrisis: The Polyopportunity. With a select group of leaders, experts, creatives, and changemakers, we’ll explore how to amplify positive trends across disciplines and communities, how to nurture and develop the links between these trends, and how to build the foundation for a future that we want.
The Global Impact Forum: September 22-25 in Philadelphia, PA. The Global Impact Forum (TGIF) 2024 is the inaugural event in a collaborative effort between Global Silicon Valley (GSV) and Penn State, aimed at fostering economic development, social mobility, and positive impact through education and innovation.
The Imagining Summit: October 12-14 in Bend, OR. The Imagining Summit will gather a creative, diverse group of imaginative thinkers and innovators who share our hope for the future with AI and are crazy enough to think we can collectively change things.
💡
Interested in us presenting a keynote or workshop for your organization to help you navigate the new worlds of AI and complex change? Set up time for a chat with us here.
Dave Edwards is a Co-Founder of Artificiality. He previously co-founded Intelligentsia.ai (acquired by Atlantic Media) and worked at Apple, CRV, Macromedia, Morgan Stanley, and Quartz.