Weekend Briefing: April 28, 2024

A Skeptic's Guide to How Cognitive Biases and Reasoning Errors Fuel AI Hype, Social Learning, The Hidden Cost of ChatGPT is the Erosion of the Digital Commons, and more.


The AI world is awash with excitement! Every little thing is a GAME CHANGER or MIND BLOWING! There is nothing but good things ahead!

We tire of the unbridled and unthinking enthusiasm. And we worry about the impact of the new-to-AI cheerleaders and their lack of skepticism and context. We understand why they lean toward cheerleading: enthusiasm can be good for business, as people like to hear good news and easy solutions. But the reality is far more complex, which is why we aim to provide reflection and context.

This week, our reflection includes a skeptical guide to AI hype, a science review on how generative AI might destroy the internet, and a contemplation on the potential effect of AI on learning—all meant to help put AI and complex change in context and perspective.

This week, we are also launching a video series: Exploring Complexity. As you know, we are obsessed with complexity science and think a complexity mindset is essential to understanding the impact of AI. We've posted the first few episodes of what we anticipate will be an ongoing series—find them all in Conversations.

Finally, please keep an eye out for an invitation to the May research webinar on agentic AI. We'll be presenting a general-purpose framework for conceptualizing what this technology is and why it's so important to understand now.


This Week from Artificiality

  • Our Research: A Skeptic's Guide to How Cognitive Biases and Reasoning Errors Fuel AI Hype. Separating hype from reality in AI has become more daunting than ever: more so than cleantech in the aughts (we should know, we were there) or previous AI waves (we were there too). While we've experienced tech hype cycles before, the current wave feels like a different beast altogether. To shield yourself from the relentless enthusiasm of the #UnthinkingAIEnthusiast, it helps to understand the cognitive biases and reasoning errors that fuel the hype: Benchmarks ≠ Intelligence, Stories on Steroids, All Aboard the Hype Train, Exponential Thinking Hits a Brick Wall, and AI Is a Map, Not the Territory.
  • Our Ideas: Social Learning. Learning with and from others can be incredibly enriching. When we collaborate, discuss, and share ideas, we gain new perspectives, deepen understanding, and achieve things we never could have on our own. That's the power of social learning: it's a cornerstone of human learning and the foundation of our collective intelligence. When we learn in social contexts, we're not just grappling with the material; we're also navigating complex social dynamics. We're constantly aware of how others perceive us, which is both a motivator and a stressor. This is where the promise and peril of AI in education come in. On a surface level, AI might seem like the perfect solution to the challenges of social learning. An AI tutor won't react to your silly question or smirk at your wrong answer. It's tempting to think that AI could create a judgment-free learning environment where every question is safe to ask. But is this really true? And even if it is, taken to the extreme of large-scale adoption of AI tutors, is it actually what we want?
  • The Science: The Hidden Cost of ChatGPT is the Erosion of the Digital Commons. Is AI destroying the internet? Are we running out of good data? Will AI increasingly eat its own excrement? All these questions are being asked right now and the answer to all of them feels like "yes." But how do we know and what evidence do we have? Perhaps the real question is this: what is happening to the digital commons that underpins so much of the modern web? A new study suggests that the rise of ChatGPT may be eroding these foundations. Focusing on the popular programming Q&A platform Stack Overflow, the researchers found a significant drop in user activity following the release of ChatGPT. If users turn more and more to ChatGPT and other AI models for answers and assistance, rather than posting their questions and solutions publicly, the digital commons that these models rely on will begin to decline.
  • Conversations: Exploring Complexity. Join us as we launch our new video series through which we will explore the complexity of minds meeting machines, combining complexity science, artificial intelligence, and the human sciences. An introductory trailer and the first two episodes are live now—you can find all of them in Conversations.

Bits & Bytes from Elsewhere

  • OpenAI rolled out "memory" across chats in ChatGPT. As we've discussed in the past, LLMs have limits to how long a conversation they can "remember," aka the context window. This limits their usefulness while also limiting cost: the longer the context window, the more tokens have to be sent to the cloud, and the more money each exchange costs (see the sketch after this list). OpenAI says that ChatGPT will remember parts of your chats either because you tell it to or because it determines there is something to remember. The explanation leaves a lot of open space for interpretation and exploration (cue researchers to start working on new papers). The goal of memory is laudable, but how well it will work today is unknown. Will it remember what you want it to remember? Do you want it to remember every chat? What kind of bias will be present in what it remembers?
  • Our friend Julio Mario Ottino recently published a thought-provoking essay called "Amazing Possibilities Ahead: But beware the hidden dangers of prediction without understanding." We interviewed Julio on the podcast about his book, The Nexus, in June 2022 and that has led to a continuing friendship. Julio has started publishing on his own site and we're enjoying reading more of his work. A brief quote from this essay: "It used to be that predictions depended on understanding. For the last two centuries or so, that has been the unstated foundation of most advances in technology. But we should recall, as well, that for hundreds of years, we were designing systems that worked well without having the scientific bases to understand their functioning. But the advent of A.I. represents a seismic shift in enabling prediction without understanding. A.I. can make accurate predictions and achieve remarkable results without necessarily grasping the underlying reasons or mechanisms. On one hand, this could be a boon. An age of prediction free from understanding. On the other, our lack of understanding could mean we lose control of the outputs altogether."
  • The #UnthinkingAIEnthusiast world went bonkers over news of a Moderna / OpenAI partnership. While it's interesting to see an enterprise-wide license of generative AI during a period when many large organizations are not finding enough value to move beyond experiments and trials—let's not get too excited. The reality is that the "news" was entirely based on a case study released by OpenAI, showing the AI hype in full effect. Case studies are great, and they're how we learn, but a good skeptic always knows to immediately down-weight one from a vendor. That said, we 100% agree with this quote from the CIO of Moderna: "90% of companies want to do GenAI, but only 10% of them are successful, and the reason they fail is because they haven’t built the mechanisms of actually transforming the workforce to adopt new technology and new capabilities." Success with AI is all about the people.
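
A quick aside on the context-window economics mentioned in the memory item above. Because chat APIs are stateless, each turn typically resends the entire conversation history, so prompt tokens (and therefore cost) grow roughly quadratically with the number of turns. Here's a minimal Python sketch of that arithmetic; the per-token price is a made-up placeholder and the word-count heuristic is a crude stand-in for a real tokenizer.

```python
# Why long context windows cost money: chat APIs are stateless, so each
# turn typically resends the entire history, and the prompt-token bill
# grows roughly quadratically with the number of turns.
# ASSUMPTIONS: the price constant is a hypothetical placeholder, and the
# word-count heuristic is a crude stand-in for a real tokenizer.

PRICE_PER_1K_PROMPT_TOKENS = 0.01  # hypothetical $/1K prompt tokens

def estimate_tokens(text: str) -> int:
    """Very rough token estimate: ~4 tokens per 3 English words."""
    return int(len(text.split()) * 4 / 3)

def conversation_cost(turns: list[str]) -> float:
    """Cost of a chat where every request resends all prior turns."""
    total_prompt_tokens = 0
    history: list[str] = []
    for turn in turns:
        history.append(turn)
        # The whole history goes back to the cloud on every request.
        total_prompt_tokens += sum(estimate_tokens(t) for t in history)
    return total_prompt_tokens / 1000 * PRICE_PER_1K_PROMPT_TOKENS

# Ten 200-word turns: the final request alone carries ~2,000 words of context.
chat = ["word " * 200] * 10
print(f"Estimated prompt-token cost: ${conversation_cost(chat):.4f}")
```

Under these toy numbers, a ten-turn chat of 200-word messages bills for roughly 14,600 prompt tokens even though only 2,000 words were ever written. That replay cost is part of why a memory feature is attractive in principle: a compact set of remembered facts can stand in for resending the full transcript on every request.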

Helen's Book of the Week

Why We Remember: Unlocking Memory's Power to Hold on to What Matters
by Charan Ranganath, PhD.

"Why We Remember" by neuroscientist and psychologist Charan Ranganath, details how our brains record the past and utilize that information to shape our present and future. It's a very accessible book and has been widely reviewed to popular acclaim.

But here's why it particularly interested me: the book illustrates, almost like a sleight of hand, how memory's crucial role in shaping our sense of self, our navigation of the world, and our capacity for creativity and innovation makes human intelligence fundamentally different in kind from AI.

Ranganath isn't explicit about this; these are more my gleanings. Many such contrasts are implicit in the book, but the ones that stand out to me are context, meaning, and expertise.

Human memory relies on context and meaning, recursively shaping both in the process. The 'where' and 'when' of our memories are processed separately from the 'what.' This contextual encoding allows us to form rich, multidimensional memories that deeply connect to our sense of self and our understanding of the world. Memories are retrieved flexibly and associatively, reflecting our individual identities. In contrast, AI 'memory' is often designed to strip away context, reducing data to decontextualized information. This raises the question: can AI truly understand 'meaning'? It seems that expecting AI to find meaning might be a fundamental category error.

Expertise in a particular domain is not just about the ability to see patterns, but also about the way we find them. This suggests that expertise involves a deep understanding of the context and meaning of the patterns we observe, rather than just accumulating an ever more detailed store of information. This implies that AI's kind of memory—for facts and associations—won't result in the same kind of expertise that a human develops. AI's memory might be more "factual," but a human will retain the valuable skill of insight.

As biological, evolved beings, we remember so that we can make better predictions and decisions about the future. This book successfully liberates the concept of memory from the outdated metaphor of a storehouse. Instead we see memory as an active, dynamic process that is closely tied to our goals, values, and sense of purpose. This connection is something AI cannot replicate in the same manner as humans.


Facts & Figures about AI & Complex Change

  • 20%: The percentage of revenue BCG says will come from AI consulting in 2024. FT
  • 40%: The percentage of revenue BCG says will come from AI consulting in 2026. FT
  • $220,000,000: The Netherlands' pledge towards developing responsible and ethical AI systems. TechReport
  • $1,800,000,000: Canada's recent investment in the country's AI sector (in USD). Reuters
  • $1,500,000,000: Microsoft's investment in UAE-based G42. Microsoft
  • $100,000,000,000: The planned cost of Stargate, the data center partnership of Microsoft and OpenAI. Reuters
  • 150: The number of AI pilots at the US General Services Administration. FedScoop
  • 132: The number of different generative AI tools and technologies being used by the US General Services Administration. FedScoop
  • 72%: The percentage of students who say they would like some help from adults in learning how to use different tools. AutomatED
  • 51%: The percentage of students who use AI on a weekly basis. AutomatED
  • 8%: The percentage of students who feel confident in their knowledge of AI. AutomatED
