An AI that Knows Everything

An AI that Knows Absolutely Everything, Co-Agency and the Paradox of Autonomy, Google DeepMind on AI Assistants, State of AI Report, Omri Allouche from Gong, and more.


An AI “that knows absolutely everything about (your) life” is “what you really want,” according to OpenAI’s Sam Altman. And he wants to keep that knowledge of everything on his servers in the cloud.

Is this possible? Do we want it?

Let’s start with the possible. The quick answer is no. Sam talks about an AI that knows “every email, every conversation (you’ve) ever had.” Theoretically, this is possible if we embrace a surveillance world in which even the conversations Helen and I had in bed last night or while getting dressed this morning were captured by OpenAI. These may seem like edge cases, but both of those conversations framed what is important to me today—and if Sam is going to create a “super-competent colleague” for me, it will need to know these things. And, in case it isn’t obvious, I don’t want Sam’s AI in our bedroom.

Beyond the challenge of capturing everything I contribute to the world through typing and speaking, an AI that knows “everything” would need to share my experience of the world. It would need to share my experience of space, time, and emotion—each of which is intuitive and individualistic. Yes, each can be measured in some way, but I agree with Immanuel Kant, who argued that space and time are intuitive lenses through which we experience the world. Certain distances feel farther than they used to. And time feels like it is accelerating as my time shortens. These experiences create new and unique emotional reactions to the world around me—reactions that I mostly don’t vocalize and that an AI couldn’t know.

Accepting these limitations, might we even want Sam’s AI that attempts to know “everything” about us? Personally, I don’t. I already know everything I know—I don’t want a digital twin that replicates my own understanding. In contrast, I’d like an AI that brings something new to my experience of the world. As Michael Levin asks in the article referenced below, how might an AI “elevate” my experience of life? I surround myself with a complex system that elevates my experience of life. That system is not just anthropocentric or even ecocentric; it is holistic, including art and music and machines—so why not AI machines as well?

How an AI might elevate my experience of life is central to our philosophy of a Mind for our Minds. We describe it as a dream because the tech titans are more motivated to elevate their profits than our experiences.

But I still dream.

I dream that instead of attempting to know everything about my life, someone creates an AI that can tell me everything I want to know about life around me. I dream that instead of attempting to capture every experience I have, someone creates an AI that can help me experience the world in ways that I can’t today. And I dream that instead of attempting to create a grand AI for everyone in the cloud, someone creates an AI that is private and just for me.

I dream of a Mind for My Mind that elevates my individual, quirky, messy, wandering, complex life.


This Week from Artificiality

  • Our Ideas: Co-Agency with AI and the Paradox of Autonomy. This has profound implications for how we understand the relationship between agency and meaning-making. In this new context, agency is not just about the ability to perform tasks, but about the capacity to participate in difficult, open-ended processes of creative expression and interpretation. This participation cannot be allowed to isolate us from the friction-full yet meaning-full process of having our ideas challenged and judged by others. Engaging with the perspectives of others is an inherent part of how we construct meaning and grow as individuals. Emergent meaning is not predetermined or programmed, but comes about in the dynamic, back-and-forth exchange of ideas, perspectives, and creative contributions.
  • Research: The State of AI Report: Q2 2024. Following our April research webinar, we have published a report based on that research. To help organizations assess their own generative AI readiness, we start this report with a (new) self-assessment guide. This guide, based on insights from our research, enables companies to evaluate their progress across key dimensions: strategy and leadership, data management, human resources and training, and operations and use case implementation. The remainder of the report focuses on four aspects of the current generative AI landscape: confusion, status, anxiety, and real-world applications. By examining each of these areas, we aim to provide an understanding of the current state of AI adoption and its implications for businesses and society. We include a downloadable PDF of the report to make it easier for you to take it with you.
  • The Science: Google DeepMind on AI Assistants. A new paper from Google DeepMind is a comprehensive guide on the ethics of AI assistants. It's a monster—273 pages! The paper is centered on the ethics of this technology, but it offers many interesting perspectives on some key issues. The fundamental takeaway from this paper is that the design of true personal AI assistants necessitates a foundational shift toward responsible design principles from the outset. It emphasizes that new externalities and a broader systemic scope must be primary considerations. Uniquely, AI assistants require a departure from conventional design processes that typically center on individual users. Instead, these technologies require starting with a systemic perspective that includes the collective network of users. This shift represents a significant challenge for designers, compelling them to integrate broader societal impacts into the early stages of design.
  • Conversations: Exploring Complexity Episode 3: What is Complexity Science? Join us on a journey into the fascinating world of complexity science, exploring its interdisciplinary nature and its potential to revolutionize our understanding of complex systems. Discover captivating examples of complexity in action, from the mesmerizing behavior of starling flocks to the emergence of intricate patterns from simple rules. And learn how complexity principles can be applied to tackle real-world challenges.
  • Conversations: Omri Allouche, Gong AI. A conversation with Omri Allouche, the VP of Research at Gong, an AI-driven revenue intelligence platform for B2B sales teams. Omri has had a fascinating career journey, earning a PhD in computational ecology before moving into the world of AI startups. At Gong, Omri leads research into how AI and machine learning can transform the way sales teams operate. In this conversation, we explore Omri's perspective on managing AI research and innovation. We discuss Gong’s approach to analyzing sales conversations at scale, and the challenges of building AI systems that sales reps can trust. And Omri shares how Gong aims to empower sales professionals by automating mundane tasks so they can focus on building relationships and thinking strategically.

Bits & Bytes from Elsewhere

  • Our former Quartz colleague, Zach Seward, published a transcript of a recent talk at an Aspen Institute event entitled AI is not like you and me. Not only is it interesting to hear the perspective of someone with a long history at the intersection of journalism and technology, but Zach's views are particularly important given his current position as Editorial Director of AI Initiatives at The New York Times. In his talk, Zach criticizes the anthropomorphism of AI, saying "...ascribing human qualities to AI is not serving us well. Anthropomorphizing statistical models leads to confusion about what AI does well, what it does poorly, what form it should take, and our agency over all of the above." I agree that anthropomorphizing can be dangerous; however, we do it, at times, to emphasize the fact that humans naturally see human traits in non-humans (just think of the way people describe their pets). And I think it's important to expand our understanding of ourselves beyond the anthropocentric to a more holistic embrace of our complex relationship with the natural and artificial world around us.
  • Following along those lines, Michael Levin published a piece in Noema entitled AI Could Be A Bridge Toward Diverse Intelligence. In the article, Michael argues that today's debates around AI miss the deeper questions about what kind of future we want to build and how we should relate to radically different types of minds. He introduces the concept of "synthbiosis," mutually beneficial relationships between diverse intelligences, and emphasizes that humans and modern AIs are just two points on a vast spectrum of possible minds. Michael explores how each human develops intelligence gradually, from a single cell to a complex mind, and highlights the shared features of all intelligent agents. He urges us to recognize these commonalities to expand our moral concern and decide what kind of future we want as a species. This future should involve improving the human condition and relating to new forms of intelligence with compassion rather than fear, which requires rethinking notions of humanity, creation, and ethics. (Also related to this topic is Helen's piece, How AI Can Help Us Envision More Diverse Intelligence from April 13.)

Helen's Book of the Week

When Science Meets Power: What Scientists Need to Know about Government and What Governments Need to Know about Science
by Geoff Mulgan

Geoff Mulgan’s When Science Meets Power gave me a different perspective on the relationship between scientific innovation and political authority.

Mulgan, a seasoned expert in public policy and former CEO of Nesta, describes the complex dynamics that arise when the realms of science and government collide. His analysis is particularly relevant in the context of AI, where advancements have many implications for governance, public policy, and democratic processes.

This is the third book by Geoff Mulgan that I've read, following Big Mind, which explores collective intelligence, and Prophets at a Tangent, which examines how art fosters social imagination. It seems to represent the culmination of his exploration into society as a complex, collective system. Mulgan has a knack for distilling complex ideas into memorable sound bites. For instance, he discusses the challenge of reconciling scientific "fact" with public acceptance of these facts, stating: "Although science can map and measure, it has no tools for calibrating." This phrase resonates with me as it succinctly captures the idea that the broader collective—whether in society, an organization, or a family—ultimately determines the placement and weight of scientific knowledge within its cultural context.

The COVID-19 pandemic illustrated this dynamic vividly, showing how different countries interpreted and acted upon the same scientific facts in varied ways. While science provided data on excess deaths and insights into the effects of isolation and disruptions to children's education, it fell to politics to navigate the associated trade-offs and risks. This serves as a reminder of the "muddled and muddied" relationship between science and politics.

My favorite section of the book is in the concluding chapters, where Mulgan discusses science, synthesis, and metacognition. He emphasizes that all complex issues fundamentally require synthesis, illustrating the difficulty of this process and highlighting a common epistemological mistake: misinterpreting the relationship between knowledge and action. Mulgan argues that possessing knowledge does not directly translate to specific actions. To show this, he identifies 16 types of knowledge that could influence a decision-making process, including statistical, policy, scientific, economic, implementation, futures, and lived experience. Next time you're trying to synthesize something, try compiling such a comprehensive list. I'd be surprised if it doesn't sharpen your perspective.

As someone who often leans towards the "follow the science" approach, I took from Mulgan’s book a reminder that science needs humility about its own state of knowledge. He reminds us that science alone cannot answer all of our significant questions because humans inherently seek meaning. This philosophical perspective is often at odds with scientific perspectives that might illustrate the cosmic irrelevance of humans, challenging the notion that science can be the sole arbiter of truth in our quest for understanding and significance.

I find myself eager to pose a question to Mulgan: As machines develop knowledge from complex, high-dimensional correlations that extend beyond our human capacity to conceptualize, what role will scientists play in attributing significance and meaning to these findings? This question gets to a critical issue that remains largely unaddressed in the evolving landscape of AI—a future where the integration of machine intelligence in our discovery processes challenges the traditional roles of scientists.


Facts & Figures about AI & Complex Change

See all of our Facts & Figures here.

  • 47%: Percentage of CEOs who say that investing in technology, including AI, to improve growth and productivity is a top priority in the next 12 months. (EY)
  • 45%: Percentage of CEOs who say that enhancing data management and cybersecurity is a top priority in the next 12 months. (EY)
  • 15%: Percentage of CEOs who say investing in their employees through training and re-skilling is a top priority in the next 12 months. (EY)
  • 30%-32%: Percentage-point gap between CEOs who prioritize investing in technology, AI, and data (45%-47%) and those who prioritize training and re-skilling their employees (15%)—that is, the share of CEOs prioritizing technology investment without also investing in their people. (EY and Artificiality)
  • 78%: The percentage of hiring managers who say their company will lay off recent graduates due to AI use. (Intelligent.com)
  • 69%: The percentage of hiring managers who believe that AI can do the work of a recent college grad. (Intelligent.com)
  • 57%: Percentage of people who believe companies are adding AI assistants to customer service in order to cut costs, not improve service. (Callvu)
