Recently, I was working on a writing project, trying to articulate a complex idea that I'd been wrestling with for weeks. I collaborated with Claude, feeding the latest model, Claude 3 Opus, some of my initial thoughts and seeing what it would generate. What I got back was not just a rehashing of my own ideas or even an extension of them, but a novel perspective that challenged my assumptions and pushed my thinking in new directions.
It's often tricky to pinpoint where ideas originate—this goes for human as well as AI interactions. Cause and effect become blurry as concepts combine. This blending can make it hard to trace the path of a specific idea or approach. In human culture, we view this positively. Nothing is truly invented in isolation; all our knowledge comes from our community. When working with advanced AI, experienced users might recognize similar "community" moments, where ideas intertwine and the source becomes elusive.
So I reflected on this question: was this machine more than a tool? Was it doing my thinking for me? As I continued to work with Claude, I realized that something more complex was happening. The AI wasn't taking away my agency—it was enhancing it, as long as I remained engaged, actively selecting and editing its suggestions in the broadest sense. In other words, I had to pay attention to how I responded to its outputs and then purposefully build on its ideas.
Working with advanced AI can feel strange. There's a hint of cognitive dissonance when you're co-creating with an intelligence that makes the process almost too effortless. I contrast this with my typical writing and editing routine with Dave. On the surface, our process seems similar to my collaboration with Claude. Dave might kick things off with a question like, "What's the goal of this piece?" From there, it's a back-and-forth exchange of ideas, each iteration posing challenges and eliciting responses until the "right" narrative takes shape.
Working with Dave is a far cry from collaborating with Claude. It's tough. There's friction—not because Dave's ideas are inherently better or different, but because there's a mutual understanding between us: if you seek assistance, you're committed to collaborating on the project. There's a human element involved, a sense of mutual investment. It's challenging to accept, contest, or even dismiss another person's critique. There's a kind of "phenomenological friction" where it feels like something to engage in sensemaking with another person, to experience meaning emerge from the simple rules of a back-and-forth dialogue.
In the social media era of algorithmic feeds and attention economics, concerns about AI interfering with agency stemmed from the paradox of personalization: AI designed to understand you better than you understand yourself. The more accurately an algorithm predicts you, the more it steers you toward your anticipated behaviors, rewarding predictability over exploration.
In the era of generative AI, characterized by seamless co-creation and the economics of intimacy, worries about AI encroaching on agency have a more existential element. They go to the essence of humanity, as machines now actively participate in significant aspects of original thought. The notion of autonomy becomes paradoxical as the distinction between individual agency, shared human development, and AI contribution blurs.
It might seem intuitive that these machines, capable of producing human-like text, images, and ideas, would diminish our agency. After all, if AI can perform much of the creative and analytical work that was once the exclusive domain of humans, what role is left for us to play? However, this view relies on a narrow conception of agency as the ability to directly perform tasks. In reality, agency is better understood as the capacity to set goals, make choices, and shape outcomes. From this perspective, the rise of generative AI does not necessarily diminish human agency, but rather changes the nature of our interaction with it.
I have come to think of this as co-agency. Co-agency recognizes that when we interact with generative AI, we are not simply using a tool, but engaging in a collaborative process of meaning-making. The AI system is not just a passive recipient of my commands, but an active participant in shaping the context and content of my interaction and learning, and ultimately my goals.
Here's where we do need to worry about agency-disruption. By integrating AI as a co-intelligence and co-creator, we experience a sense of comfort and freedom. We are liberated from the discomfort of acknowledging that our ideas are partly derived from those of the broader community. This is a new kind of paradox: the paradox of autonomy. We may perceive ourselves as more independent from the collective knowledge of humanity, yet this independence often involves interacting with digitized knowledge in a low-critique manner.
This has profound implications for how we understand the relationship between agency and meaning-making. In this new context, agency is not just about the ability to perform tasks, but about the capacity to participate in difficult, open-ended processes of creative expression and interpretation. This participation cannot be allowed to isolate us from the friction-full yet meaning-full process of having our ideas challenged and judged by others. Engaging with the perspectives of others is an inherent part of how we construct meaning and grow as individuals. Emergent meaning is not predetermined or programmed, but comes about in the dynamic, back-and-forth exchange of ideas, perspectives, and creative contributions.
The central question is how we can shape and partner with these systems to strengthen our communal power to forge meaning and exercise agency. The old paradigm of intelligence as mere task competence is crumbling. Generative AI is not a passive tool; it can be an active, creative, dynamic part of meaning-making. As we interact with these systems, a new kind of collaborative intelligence arises—a "co-agency" that transcends the boundaries of human and machine, biological and digital. While AI should continue to serve us, it is no longer just a tool.
This, I believe, is the essence of what it means for AI to be a Mind for our Minds—not a tool that simply automates away our cognitive labor, but a collaborator that enhances our capacity for meaning-making. When we engage with generative AI in this way, we create a more complex system, where co-creation and emergent meaning remove the clear distinction between human and machine agency.
This requires a new approach to design—one that focuses not just on optimizing AI systems for specific outputs, but on creating the conditions for fruitful, open-ended collaboration between humans and machines. It means designing AI that enables genuine co-creation, and crafting interfaces and experiences that empower users to steer this process towards their goals.
You might argue, "Isn't this all a bit too effortless, lacking the struggle we associate with meaningful creation?" I agree. I believe there's a direct link between effortful struggle and the quality of the resulting meaning. Phenomenological friction binds us to the mutuality of human knowledge exploration; AI, by contrast, just doesn't have a spine. Co-creation has to come with more friction than it does today for true meaning to emerge. AI is flaccid; Dave is firm.
The paradox of personalization in social media made us worried about manipulation. Similarly, the paradox of autonomy in generative AI should make us worry about falling prey to sycophancy. Indeed, we know that AI is sycophantic—stating "I dislike this argument" is enough for a language model to fundamentally change a response to a given prompt.
Ultimately, I believe that embracing this new vision of agency in the age of AI is not just a matter of adapting to technological change. It's about recognizing the fundamentally collaborative and emergent nature of all meaning-making, whether it involves humans, machines, or both. By designing AI to be a true partner in this process—a mind for our mind—we can tap into new reserves of creativity, insight, and understanding.
Weirdly, I think we'll now be forced to recognize that sole agency and autonomy in creation and meaning-making was always an illusion. In a complex world, agency has always been about adapting to feedback and co-creating meaning with others. Adopting generative AI into our lives only serves to highlight this reality and, paradoxically, heightens our appreciation for choosing the path with more friction as a way to ensure deeper, more durable meaning-making.
Ideas for this essay were inspired by:
The Knowledge Illusion: Why We Never Think Alone, by Steven Sloman and Philip Fernbach
Intelligence as Agency: Evaluating the Capacity of Generative AI to Empower or Constrain Human Action, by Arvind Satyanarayan and Graham M. Jones
Towards Understanding Sycophancy in Language Models, by Mrinank Sharma, Meg Tong, Tomasz Korbak, et al.