This week we dive into learning in the intimacy economy as well as the future of personhood with Jamie Boyle. Plus: read about Steve Sloman's upcoming presentation at the Imagining Summit and Helen's Book of the Week.
Explore the shift from the attention economy to the intimacy economy, where AI personalizes learning experiences based on deeper human connections and trust.
How to go beyond prompting and develop your conversational AI skills. Now that more AI collaboration is designed inside applications, do you still need to learn how to prompt? If you want to get the most out of AI, we would say yes.
Digital design has progressed through three phases: interfaces, experiences, and now relationships. There are parallels in how AI is developing. If you want to control your relationship with AI, you will still need to master prompting. But it's not engineering—it's a conversation.
Think of today's AI as early software: full of potential, but still needing skilled users to draw out its capabilities. Getting what you want out of generative AI is an art that requires experience. Beyond one-off tricks, it demands developing an intuitive feel for how these alien machine minds operate. Which is why advice such as "just talk to it like it's a research assistant/intern/friendly helper" is mostly useless. First, AI isn't human—its great successes and failures are distinctly non-human. Second, if conversing with humans were that easy, there might not be an entire industry devoted to improving our communications with others!
Making the transition from "learning to prompt" to "learning to converse" starts with dropping the engineering mindset that tends to come along with prompting and embracing a more abstract yet intentional stance.
How AI is like a human
Generative AI exhibits human-like qualities in several ways. Just as a person performs better when they grasp the goal of a task, AI excels when its objectives are clearly defined, so tell the AI your intention and its role. As with humans, the foundation of a good relationship with AI is laid at the start: establishing clear guidelines and expectations up front is far more effective than attempting to correct course later.
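To make this concrete, here is a minimal sketch of front-loading role, intention, and guidelines in a chat-style message list. The `build_setup` helper is our illustrative name, and the role/content message format follows the common chat-API convention rather than any specific product's interface:

```python
def build_setup(role: str, intention: str, guidelines: list[str]) -> list[dict]:
    """Assemble a chat-style message list that states the AI's role,
    your intention, and ground rules up front, before any task turns."""
    system = (
        f"You are {role}. "
        f"My goal: {intention}. "
        "Follow these guidelines for the whole conversation:\n"
        + "\n".join(f"- {g}" for g in guidelines)
    )
    return [{"role": "system", "content": system}]

# Example: the relationship is framed before the first real request.
messages = build_setup(
    role="an experienced technical editor",
    intention="tighten a draft newsletter without changing its voice",
    guidelines=[
        "Ask before cutting whole paragraphs",
        "Flag factual claims you cannot verify",
    ],
)
```

The point is not the helper itself but the habit: expectations stated once at setup shape every later turn, which is cheaper than repeated mid-conversation corrections.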
Like a human, AI benefits from feedback: corrections when it errs or misunderstands help refine its understanding and actions. Its ability to process information and solve problems also improves when it is encouraged to reason step by step, mirroring the human approach to complex tasks.
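The step-by-step nudge can be as simple as wrapping the task in an explicit reasoning instruction. A hypothetical helper (`with_reasoning` is our name, not a library function):

```python
def with_reasoning(task: str) -> str:
    """Wrap a task in an explicit step-by-step instruction, mirroring
    how you might walk a colleague through a complex problem."""
    return (
        f"{task}\n\n"
        "Work through this step by step: first restate the problem, "
        "then list the intermediate steps, then give your final answer "
        "on its own line starting with 'Answer:'."
    )

prompt = with_reasoning("What is 17% of a $2,400 monthly budget?")
```

Asking for a labeled final line also makes the answer easy to extract when you review or reuse the output.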
Moreover, AI's learning process is human-like in that it performs better when provided with specific examples to model its responses and actions after.
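A few-shot prompt puts that principle into practice: show worked input/output pairs before the new input. A sketch, with `few_shot` as a hypothetical helper:

```python
def few_shot(instruction: str, examples: list[tuple[str, str]], query: str) -> str:
    """Build a few-shot prompt: instruction, then worked input/output
    pairs for the model to pattern-match, then the new input."""
    shots = "\n\n".join(f"Input: {x}\nOutput: {y}" for x, y in examples)
    return f"{instruction}\n\n{shots}\n\nInput: {query}\nOutput:"

prompt = few_shot(
    instruction="Classify the sentiment as positive or negative.",
    examples=[("I loved it", "positive"), ("Terrible service", "negative")],
    query="Pretty decent overall",
)
```

Ending with a bare `Output:` invites the model to complete the established pattern rather than improvise a new format.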
How AI is not like a human
Generative AI demonstrates distinct differences from human cognition and capabilities, particularly in its approach to information processing and problem-solving. AI's ability to "read" the entire internet equips it with the capacity to search, mix, match, and combine information at a level far beyond human capability, leveraging vast databases to generate outputs with superhuman speed and scale.
However, AI's learning and reasoning mechanisms are grounded in statistical analysis. They prioritize likelihood over precision and coherence over factual correctness, which often leads to outputs that favor probable scenarios over accurate ones, and stereotypical conclusions over nuanced understanding.
Unlike humans, AI struggles with spatial reasoning: it cannot accurately describe or comprehend complex physical movements from textual prompts alone. Its understanding of language, grammar, and notation is also markedly different from human comprehension. Because it is sensitive to wording, it often pays to rephrase the same question in several ways; divergent answers expose errors and highlight its lack of innate language understanding.
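That rephrasing habit is easy to operationalize: ask surface variants of the same question and compare answers. Both helpers below are illustrative, and the consistency check is deliberately crude (plain string comparison), just enough to flag when wording changed the answer:

```python
def rephrasings(question: str) -> list[str]:
    """Generate simple surface variants of a question. Asking each
    variant separately can surface wording-driven errors."""
    return [
        question,
        f"Put differently: {question}",
        f"Answer carefully and explain your reasoning: {question}",
    ]

def consistent(answers: list[str]) -> bool:
    """True when all answers agree after trimming and lowercasing."""
    return len({a.strip().lower() for a in answers}) == 1

variants = rephrasings("In what year did the Berlin Wall fall?")
```

If the variants disagree, treat none of the answers as settled; the disagreement itself is the signal that the model is tracking wording, not meaning.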
AI's capacity for causal reasoning is also limited: it does not grasp the underlying reasons why events occur or the motivations behind human behavior, struggling to reason beyond surface patterns, especially in complex human systems. This limitation matters a lot. When we work with others, understanding causality and human motivation plays a crucial role in reasoning and decision-making.
AI lacks metacognition, the ability to reflect on and correct its own thought processes. This means AI cannot identify or revise its mistakes independently, often leading to the propagation of errors and a form of "self-delusion," where erroneous information compounds over time. In human cognition, metacognitive ability allows for self-assessment, correction, and learning.
From Prompting to Conversing
Prompting skill is evolving along three distinct paths: science and engineering, UX and software design, and conversation. Engineers rigorously interrogate AI to advance the science, designers streamline prompts into intuitive workflows, and general users converse with AI to enhance productivity:
Engineers and researchers working with models will advance our understanding of generative AI by developing deep expertise in interrogation of models in order to understand capabilities and safety. Researchers investigating AI will need to design valid experimental approaches, develop methodologies that help us map AI’s capabilities, and build explanatory mechanisms for both data veracity and machine cognition.
UX designers and software developers will translate families of prompts into user journeys and workflows. They will apply the 80:20 rule, encoding common use and best practice into AI interactions, and apply design standards (including safety, ethical, and branding standards) to simplify those interactions. As generative AI diffuses across tools, designers and developers will shape how users access AI within applications: everything from using a chatbot to order your Uber to turning an Excel spreadsheet into a PowerPoint deck, a memo in Word, or a Tableau dashboard.
For most, it's key to master a third path of nurturing a flexible conversation process that tightly couples human and machine strengths. Rather than following fixed formulas, prompting is an ongoing dialogue where both partners adapt and learn. It requires deploying the right strategies for different tasks and building a mental model of how your AI thinks. This machine intuition unlocks AI's possibilities while avoiding wasted time and mistakes.
The best way to learn is to understand the different strategies to deploy depending on the task, your skills and knowledge, and on the domain itself. This goes beyond learning prompts. It means developing a mental model for the AI and having a flexible collaborative process where you work synergistically with AI.
Our Framework
Over the past year and a half we have tested and fine-tuned a conceptual framework for conversing with AI. It's designed to be flexible and robust to major changes in models and interfaces. Most importantly, the components of this framework can help you navigate your dynamic relationship with AI.
It's a 7-component framework designed to couple both human and machine strengths:
Explore: access all digitized human knowledge
Segment: modularize tasks to make them more achievable
Weave: leverage the technology’s combinatorial power
Create: mix modes and vary inputs, outputs, and perspectives
Iterate: critique, interact, and iterate to improve outputs
Fuse: improve synthesis of complex and conflicting information
Reflect: sharpen cognitive skills and metacognitive self-awareness
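As one concrete illustration, the Segment component can be as simple as turning a large goal into a sequence of focused prompts, each carrying the overall goal for context. A sketch under our own naming (`segment` is not part of any framework API):

```python
def segment(goal: str, subtasks: list[str]) -> list[str]:
    """Modularize one large goal ('Segment') into a sequence of focused
    prompts, each restating the overall goal so no step loses context."""
    return [
        f"Overall goal: {goal}\nStep {i} of {len(subtasks)}: {step}"
        for i, step in enumerate(subtasks, start=1)
    ]

steps = segment(
    "Write a one-page market analysis",
    ["Gather recent data", "Summarize the key trends", "Draft conclusions"],
)
```

Running the steps as separate turns keeps each exchange small enough to critique and iterate on, which is where the Iterate and Reflect components pick up.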
Helen Edwards is a Co-Founder of Artificiality. She previously co-founded Intelligentsia.ai (acquired by Atlantic Media) and worked at Meridian Energy, Pacific Gas & Electric, Quartz, and Transpower.
Dave Edwards is a Co-Founder of Artificiality. He previously co-founded Intelligentsia.ai (acquired by Atlantic Media) and worked at Apple, CRV, Macromedia, Morgan Stanley, and Quartz.