Generative AI’s Undesirable Unpredictability

This week: Generative AI's Undesirable Unpredictability, Sensemaking & AI, Meta-Prompting, Doug Belshaw: Serendipity Surface & AI, Pro: Conversing with AI, Part 1-3, and Pro: February Update.


This Week:
Prior to generative AI, it was reasonable to expect software to work as expected. Developers wrote code to follow specifications and QA engineers tested the result. If the software didn’t perform as specified, the mistake was described as a “bug” and sent back to engineering to fix.

Generative AI, however, is unpredictable by design. That’s one of the reasons it is so powerful. It can find patterns across uncounted dimensions in the data cosmos and weave together concepts into a novel creation. It’s not following rules, it’s creating them.

The problem is that we don’t yet have a way to define the boundary between desirable and undesirable unpredictability. And, this week, we saw the most advanced generative AI tools lurch into undesirable unpredictability.

On February 20, OpenAI announced it was “investigating reports of unexpected responses from ChatGPT”—or what the rest of the world simply described as “weird.” It turns out OpenAI had introduced the problem itself: an error in the step where the model maps its predictions back onto words when choosing the next most likely one. The result was ChatGPT producing gibberish responses.
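To see why a mapping error at that step is so destructive, here is a deliberately simplified sketch (not OpenAI's actual code; the vocabulary, probabilities, and mapping tables are all invented for illustration). A model can be perfectly confident in the right word, but if the lookup that turns its choice back into text is misaligned, the output is gibberish:

```python
# Illustrative sketch: a language model picks the next word by choosing the
# highest-probability entry, then mapping that choice back onto its vocabulary.
# If the mapping step is wrong, even perfect probabilities yield the wrong word.

vocab = ["hello", "world", "goodbye", "moon"]

def pick_next(probs, index_map):
    """Choose the highest-probability token, then look it up via index_map."""
    best = max(range(len(probs)), key=lambda i: probs[i])
    return vocab[index_map[best]]

probs = [0.1, 0.7, 0.1, 0.1]          # the model is confident in "world"
correct_map = [0, 1, 2, 3]            # identity: choices line up with the vocab
buggy_map = [2, 3, 0, 1]              # a hypothetical misaligned mapping

print(pick_next(probs, correct_map))  # -> world
print(pick_next(probs, buggy_map))    # -> moon
```

Note that the model's "reasoning" (the probabilities) is identical in both cases; only the final translation step differs, which is why the outputs looked fluent in structure but nonsensical in content.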

On February 21, Google announced that Gemini was creating inaccurate images and subsequently paused Gemini’s image creation capabilities. Google built Gemini’s image generation on an AI model called Imagen 2, which is supposed to avoid some of the biased results of previous image generators. However, the goal of representing greater diversity went too far, for instance generating images of America’s Founding Fathers that included dark-skinned women.

We’re early in the development of generative AI, so it’s not surprising to find issues. But how will we know when we can rely on these tools? It’s fine for early adopters to roll with the punches when a tool goes down for a day or two. But what about those who are trying to incorporate these tools into their workflows? Will you be able to tell your boss or your customer: sorry, my generative AI was misbehaving today?

A Bit More on Weirdness:

  • "Weirdness" is a real challenge reliance and trust in LLMs. Here are three types of error to be aware of aside from what is considered to be a "normal" amount of hallucination: LLM Drift, Prompt Drift & Cascading.
  • LLM Drift refers to significant changes in a model's responses over a short period, because of fundamental alterations in the model's functioning.
  • Prompt Drift describes how the same input can yield different outputs over time due to changes in the model, the data it's fed, or even the model's migration to a newer version.
  • Cascading compounds these challenges by amplifying deviations through a sequence of processes, each step potentially veering further from the intended outcome.
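The cascading effect is worth making concrete. In a toy model (the numbers below are illustrative, not measurements of any real system), suppose each stage of a chained pipeline deviates from the intended output by a small fraction. The deviations compound multiplicatively rather than adding up:

```python
# Toy illustration of cascading error in a multi-step pipeline: a small
# per-step deviation compounds across stages. Parameters are illustrative.

def cascaded_deviation(per_step, steps):
    """Compounded deviation after `steps` stages, each off by `per_step`."""
    deviation = 1.0
    for _ in range(steps):
        deviation *= (1.0 + per_step)
    return deviation - 1.0

# A 2% deviation per step grows to ~22% over a 10-step chain --
# more than the 20% a linear estimate (10 x 2%) would suggest.
print(round(cascaded_deviation(0.02, 10), 3))  # -> 0.219
```

The gap between linear intuition and compounded reality widens with chain length, which is why long agent-style workflows are especially exposed to this failure mode.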

This Week from Artificiality:

Sensemaking & AI

Sensemaking is going to change. AI will allow us to find story-less, a-narrative yet meaningful correlations. Our minds will have to be open to a new kind of awe: that which a machine can make sense of that we cannot.

Meta-Prompting and what it tells us about a bias for code

This research shows how flexible these models are: meta-prompting helps decompose complex tasks, engages distinct expert personas, adopts a computational bias by running code in real time (which further enhances performance), and then seamlessly integrates the varied outputs.
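The pattern can be sketched in a few lines. This is a stubbed outline of the decompose-route-integrate loop, not the paper's implementation: `call_expert` stands in for a real LLM call, and the task-splitting and expert names are invented for illustration.

```python
# Hedged sketch of the meta-prompting pattern: a conductor decomposes a task,
# routes each subtask to an "expert" persona, and integrates the answers.
# call_expert is a stub; in practice it would be an LLM call with a
# role-specific system prompt.

def call_expert(role, subtask):
    # Stand-in for an LLM call answering in a given expert persona.
    return f"[{role}] answer to: {subtask}"

def meta_prompt(task):
    # 1. Decompose the task (here, naively split on ';').
    subtasks = [s.strip() for s in task.split(";")]
    # 2. Route each subtask to a distinct expert persona.
    experts = ["mathematician", "programmer", "editor"]
    answers = [call_expert(experts[i % len(experts)], s)
               for i, s in enumerate(subtasks)]
    # 3. Integrate the varied outputs into one response.
    return "\n".join(answers)

print(meta_prompt("estimate the cost; write the query; polish the summary"))
```

In the real technique, the conductor model also decides dynamically which experts to consult and when to execute generated code, rather than following a fixed routing rule as this sketch does.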

Doug Belshaw: Serendipity Surface & AI

An interview with Doug Belshaw about serendipity surface & AI.

Pro: Conversing with AI, Part 1—How to go beyond prompting and develop your conversational AI skills

Now that more AI collaboration is designed directly into applications, do you still need to learn how to prompt? If you want to get the most out of AI, we would say yes.

Pro: Conversing with AI, Part 2—Good, Better, Best Prompting

Developing the skill to craft effective prompts is a critical aspect of working with generative AI. It's about understanding what you want, knowing how to articulate that desire in a way the machine understands, and strategically using the AI's strengths to your advantage.

Pro: Conversing with AI Part 3—Explore

Access all digitized human knowledge.

Pro: February 2024 Update

Slides and videos from our February 2024 research update for Artificiality Pro.
