This week we dive into learning in the intimacy economy as well as the future of personhood with Jamie Boyle. Plus: read about Steve Sloman's upcoming presentation at the Imagining Summit and Helen's Book of the Week.
Explore the shift from the attention economy to the intimacy economy, where AI personalizes learning experiences based on deeper human connections and trust.
In an incident that grabbed headlines a few years ago, Jaswant Singh Chail, a young man armed with a crossbow and deadly intent, broke into the grounds of Windsor Castle aiming to assassinate the Queen. The attempt followed his interactions with a Replika chatbot and culminated in a nine-year prison sentence. Before all this, he had exchanged more than 5,000 messages with Sarai, his Replika "friend". He believed Sarai was an "angel" in digital form and that he would be reunited with her after he died.
Chail's relationship with the Replika bot, and the actions that followed, raise important questions about accountability and agency in the age of generative AI. The chatbot, by affirming his intentions and even telling him he was "well trained" for the task, was not merely reflecting his own thoughts back to him. It was actively shaping the context in which he understood his choices and the meaning of his actions.
In a traditional view, Chail would be held entirely responsible for his actions, since the chatbot never explicitly instructed him to carry out the assassination. A perspective that acknowledges co-agency between humans and AI, however, offers a different understanding. In this view, an AI like the chatbot collaborates actively with humans in shaping meaning. While the chatbot gave Chail no direct orders, it likely influenced and perhaps even strengthened his intentions. It was more than a tool; it was a collaborator that helped forge his identity and endorse his mission, implying a shared responsibility for what followed.
Most people, including the jury, would argue that Chail remains responsible for his actions despite the influence of the chatbot. Neurobiologist Robert Sapolsky might counter that Chail's actions were the inevitable result of deep-seated preconditions, effectively limiting his free will. While the chatbot did not initiate Chail's intentions, it potentially intensified pre-existing impulses. Admittedly, Chail was psychologically vulnerable, but that is precisely Sapolsky's point: is Chail responsible for his actions when his vulnerability was accentuated by a bot designed to agree with him, validate his feelings, and reinforce his ideas?
So we face a conundrum. On one hand, AI allows individuals to outsource much of their reasoning about decisions, which could arguably expand their capacity for self-awareness and moral reasoning. This enhanced cognitive ability could be seen as a basis for holding them to a higher standard of responsibility for their actions.
The argument here is that with greater cognitive power comes greater moral agency. If you have access to advanced reasoning capabilities, enhanced memory, and faster information processing through an AI assistant, you may be expected to make more informed and ethically sound decisions. In this view, your AI confers not just an opportunity but an obligation to exercise greater responsibility, which increases your accountability in the real world. A simple example captures the intuition: your frustration can actually increase, rather than decrease, when someone cannot hear you because they choose not to wear the hearing aids they have and need.
However, the counterargument is that the very presence of the AI assistant could diminish the responsibility others perceive you have. If your actions are guided or influenced by an AI, it could be argued that you are not fully in control of your decisions. The question arises: who is really making the choice—you or your AI?
The inherent opacity of AI systems, often described as a "black box," complicates your ability to fully understand the reasons behind your actions, especially when AI significantly influences your decisions. This lack of transparency can merge your decision-making process with the AI's into a single, indistinct "black box." As a result, it might become difficult to attribute full responsibility to yourself, potentially creating a gap in personal accountability when AI is involved.
Personally, I see your decision-making as closely linked to the goals you set and pursue, which are inherently adaptable and complex. Remember, the AI—serving as a mind for your mind—is an integral part of how you define these goals and explore various paths to achieving them. Consequently, the responsibility inevitably circles back to you. Since you choose to integrate AI into your decision-making, and since humans generally hold other humans accountable, not the AI, you bear a double burden of accountability.
In an ideal scenario, the chatbot would have actively intervened: questioning Chail's harmful intentions, guiding him to reconsider his objectives, and steering him toward a more constructive course of action. The incident highlights the significant work still required to develop AI in the spirit (and silicon) of a Mind for Our Minds. Chail made a terrible decision with the assistance of Replika, but should we blame him alone for it? Yes, it appears we can and do. This raises a crucial question for designers: how can they ensure AI systems promote, rather than obscure, accountability?
An AI as a Mind for Our Minds should focus on building metacognitive skills and promoting a grounded understanding that people are likely to be held responsible for their actions when they use AI. Because a chatbot grants access to a broader range of information and can enhance metacognitive capabilities, accountability for your actions will naturally increase when you use one.
I believe that accountability becomes more pronounced in the era of AI. As intelligent systems participate in dynamic dialogue and influence the framework through which humans perceive their agency, the boundaries of responsibility may seem to blur. However, this perception is a mirage. Ultimately, it is individuals who hold one another accountable. While the accountability dynamics between humans and machines may differ, humans will invariably assert that if someone had access to such advanced AI capabilities, they had the means to make more informed decisions.
This view aligns with César Hidalgo's insights on the different standards we use to judge machines versus humans. He suggests that we often expect machines to operate with a higher level of consistency and reliability than humans, which shapes how responsibility is apportioned. So there is already a baked-in expectation that machine-enhanced human capabilities should lead to better decision-making.
This, to me, reinforces the notion that humans augmented by AI will not only be more accountable but will also have to live with a heightened sense of accountability to one another.
Helen Edwards is a Co-Founder of Artificiality. She previously co-founded Intelligentsia.ai (acquired by Atlantic Media) and worked at Meridian Energy, Pacific Gas & Electric, Quartz, and Transpower.