Announcing the Artificiality Summit 2025!
Don't miss the super early bird special at the end of the email...
The Artificiality Imagining Summit 2024 gathered an (oversold!) group of creatives and innovators to imagine a hopeful future with AI. Tickets for our 2025 Summit will be on sale soon!
We have unending gratitude and appreciation for the energy, openness, and kindness of the Artificiality community who gathered for the Imagining Summit in Bend.
We explored the blend of synthetic and organic worlds, intimacy, spatiality, knowledge, consciousness, minds, kids & the future, inclusion, art & music. We learned, challenged, wandered, and wondered. We sparked new friendships and deepened old ones. We feasted, shared stories, listened, and laughed.
We are grateful beyond words for this extraordinary (oversold!) group taking a risk with us—trusting that we would give everything we have. We are inspired and overwhelmed.
And we are already planning next year’s Summit! Please pencil in October 21-23, 2025—we'll be selling tickets soon!
Thank you again to our sponsors—Oregon State University-Cascades, Bast AI, Wiqi.ai—and our partners—House of Beautiful Business, DXM, Bend Entrepreneur Lab.
Let's start with a few photos and then share the text of our opening talk about The Artificiality.
I want to tell you a story about Jonas Hanway, who, in the 1750s, became the first man to carry an umbrella in London, and was publicly shamed for it.
People threw trash and heckled him. Why? Because he wasn’t getting wet.
Getting wet was seen as an authentic human experience in the rain, and staying dry was not.
By choosing to stay dry, he was subtly challenging the accepted idea of what it meant to be human in that context, and for that, he was mocked.
This story illustrates how small, seemingly trivial shifts in behavior can challenge deep-seated ideas about our humanity.
And today, we’re standing on the edge of a much bigger change—a shift not just in how we live, but in how we fundamentally understand what life and intelligence really are.
For centuries, we’ve relied on biology and physical matter as the basis of life. But now, a new idea is emerging: that the core of all life and intelligence is rooted in information itself.
Think about it—information is everywhere: in our genes, in the signals between our neurons, and increasingly, in the algorithms shaping our digital lives.
And if information is the foundation, then computation becomes the mechanism by which intelligence, agency, and even consciousness arise.
This idea has profound consequences. It means that the line between biological and artificial intelligence is blurring.
Intelligence may no longer be confined to “wetware” like our brains but can emerge in entirely new forms—from self-organizing software to robotic agents.
This is the shift we call artificiality—a new way of thinking about the human experience, where we’ll increasingly share our world with entities that challenge our deepest assumptions about what it means to be alive and to be human.
This rethinking is what drives our research, and we’re exploring it through five critical themes: intimacy, spatiality, knowledge, consciousness, and minds.
Intimacy
For the past 20 years, we’ve lived in the attention economy. We’ve paid attention to get content from our digital services, and in return, we’ve offered up our own content to be paid attention to. But we believe this is shifting. We are moving from an attention economy to an intimacy economy—one in which we will trade an intimate understanding of ourselves for AI services that promise to make our lives easier.
Why will we make this trade? Because an AI system’s effectiveness depends on knowing us intimately. You may have already felt this. When you chat with a conversational AI, you find yourself sharing your needs, your wants, even your dreams and fears. This intimate understanding allows AI to grasp the context of who you are, and from that context, infer what you might want. In some cases, it might even act on your behalf without you explicitly telling it to.
This kind of intimacy holds tremendous promise. A system that understands us deeply could serve us in ways that feel intuitive, anticipating our needs before we even voice them.
But with great promise comes great peril. We’ve witnessed how the attention economy drilled down into our focus, fracking it into tiny, tradable pieces. What happens if this same extractive logic is applied to our intimacy? Will our most personal, private selves be mined and commodified just as our attention was? Will data mining evolve into life mining?
Our hope is that humans will have the ability to opt into this intimacy through what we call the intimacy surface—a dynamic, multidimensional space where humans and AI meet. This surface adapts to the level of trust we have, expanding or contracting based on how much of ourselves we’re willing to reveal. The more we trust, the more AI can understand and serve us. But the trade-off is clear: the more context we provide, the more risk we take as well—making trust in the companies that hold our intimacy the foundational metric of the Intimacy Economy.
Spatiality
Our second theme is spatiality—the way we relate to the world around us, how we understand it, and how we think. We organize not just physical objects, but also abstract ideas in space. When we explain concepts, we often use spatial metaphors like “closer,” “higher,” or “deeper” to make sense of the world. Our ability to navigate and comprehend space shapes how we experience and understand everything—from our environment to our ideas.
Increasingly, we’re seeing a blend of synthetic and organic experiences in the space around us, whether it’s maps and directions on our phones or the emerging overlays in glasses that layer digital information onto the physical world. These tools are reshaping our spatial experiences.
This too holds promise. AI could provide us with new dimensions of spatial understanding that go beyond our innate three-dimensional limits. For instance, AI could layer context-sensitive information in real-time, showing us the hidden structures of a building, the history of a location, or even the unseen complex networks of human relationships.
But there is also peril. These digital interfaces—designed to enhance our perception of the world—could instead create layers that distance us from it. Instead of enhancing our connection to the physical world, these technologies might mediate and distort it—pulling us away from the very things we seek to comprehend.
Knowledge
Our knowledge has always come from our community, shaped by our shared experiences and the people around us.
This creates the illusion that we know more than we actually do, and we constantly fail to recognize how much of our knowledge is collective.
Historically, we’ve stored knowledge in other people or external systems—like libraries or databases—and retrieved it when needed.
But now we’re moving into new territory: putting our collective knowledge into systems that aren’t just passive storage but active participants.
These intelligences take what we feed them, compute on it, and increasingly exhibit agency around that information.
This brings us to a concept we call the “knowledge-ome”—an ecosystem of knowledge much like a biome, where humans and AI co-exist.
Just as we aren’t independent of the bacteria in our gut, we’re no longer separate from the knowledge-creating capabilities of these machines.
The unique value of AI is its ability to generate non-intuitive knowledge and uncover patterns we can’t see—discoveries that sometimes don’t fit our human ways of thinking at all.
This potential for AI to find entirely new knowledge is both thrilling and unnerving.
On one hand, it’s a huge gain: machines that can deliver breakthroughs we could never reach alone.
But on the other hand, it disrupts our sense of ownership over knowledge, which has always been a social, human endeavor.
Increasingly, the community of knowledge-creators will include machines, altering our perception of what it means to know something.
Consciousness
Consciousness is one hell of a topic, and it’s certainly in vogue these days. It’s an active research area with many competing and controversial theories. Beyond understanding it, there’s also the ambitious goal of building conscious machines.
If we could build consciousness into machines, we would likely learn a lot about what it actually means to be conscious. But our interests go beyond this. We think about Thomas Nagel’s famous question: What is it like to be a bat? Except here, we’re asking: What would it be like to be a machine?
And further, how might machines help us recognize consciousness in other entities—what qualities signal a mind to us, and where do we fail to perceive it?
Our perception of consciousness is an uneasy equilibrium between what we can explicitly think and feel, and what remains subliminal—shaping us from the shadows. So how might interacting with AI shift our perception of where consciousness begins and ends? And if our consciousness is indeed an illusion, could machines help us understand others’ illusions better? Could they aid our collective intelligence by giving us a deeper perspective on others?
There’s plenty of room to speculate here, and some of it is quite fun. Personally, I’d like to have my sensorium expanded—to experience more and different senses. To know what it’s like to be my dog, a bee, an octopus, and maybe even that bat.
Another intriguing possibility is that machines don’t become conscious on their own as separate entities, but instead, they absorb ours. That somehow, they gain consciousness through us.
Whatever happens, they will force us to confront new dimensions of mind by blurring the boundaries between our thoughts and theirs.
Minds
Finally, let’s talk about minds. The story begins in the 1970s when a study published in Scientific American compared the movement efficiency of different species. Humans, it turns out, aren’t very efficient on our own. But put a human on a bicycle, and we can soar like a condor. The tool—the bicycle—elevates us beyond our natural limitations.
Steve Jobs loved this idea, calling the computer a “bicycle for our minds.” Just as a bicycle extends our physical abilities, a computer extends our cognitive abilities—allowing our minds to soar too.
But today, computers are no longer simple tools that we direct and control like bicycles. They are becoming autonomous systems, with the ability to understand, reason, and take action. We are giving computers agency—allowing them to seek objectives on their own. They are becoming minds themselves.
So we’ve adapted Steve’s metaphor. Instead of “bicycles for our minds,” we dream of creating minds for our minds. But the critical question lies in the two little words in the middle: for our. Will these new minds truly work for us? Or will they extract value from us, turning us into the product, as we’ve been in the attention economy?
We dream of a world where machines know when to step back and give us space. We imagine a future where AI doesn’t just help us function, but enables us to flourish—where it doesn’t extract from us, but collaborates with us to make our lives more meaningful.
Now, you might think we’re crazy—and maybe we are. But as Steve said, "the people who are crazy enough to think they can change the world are the ones who do." We’re excited to have you join us on this crazy journey to imagine a hopeful future with AI.
The Artificiality Weekend Briefing: About AI, Not Written by AI