AI Systems are Discovering a Shared Model of the World. It's called "reality."

A recent paper entitled The Platonic Representation Hypothesis supports the idea that the widespread use of singular algorithms may actually be simplifying the world: large-scale AI models, such as GPT-4, are all starting to learn the same representations of human digitized reality.

Research paper: The Platonic Representation Hypothesis

Key Points:

  • Global Complexity and Simplification: David Krakauer of the Santa Fe Institute argues that globalization and the use of singular algorithms are simplifying rather than increasing global complexity, resulting in less robust and interesting systems.
  • Platonic Representation Hypothesis: AI models like GPT-4 are converging towards a shared model of reality, akin to the shadows on the wall in Plato’s cave, driven by factors such as multitask scaling, computational capacity, and simplicity bias.
  • Multitask Scaling Hypothesis: As AI models are exposed to more tasks and data, the space of possible representations narrows, pushing them towards a common solution.
  • Capacity Hypothesis: Larger AI models, with greater computational capacity, are more likely to find and converge to optimal representations of tasks compared to smaller models.
  • Simplicity Bias Hypothesis: AI models inherently favor simpler solutions, which strengthens as they scale up, leading to convergence on simple, common representations.
  • AI’s Expanding Applications: The convergence of AI models’ representations enhances their ability to reason about and interact with the world, improving performance on language tasks and on understanding visual structure.
  • Limitations and Pitfalls: AI models are constrained by their training data, potentially limiting their ability to represent unrecorded aspects of reality, raising concerns about over-reliance on AI’s models of reality.

At a recent conference on increasing global complexity that we attended, David Krakauer, head of the Santa Fe Institute, kicked off the opening keynote panel. The moderator asked why the world is becoming more complex. In a typically contrarian manner, Krakauer responded, "Well, I'm not sure that it is." He argued that globalization and the widespread use of singular algorithms might actually be simplifying the world. This loss of complexity, he suggested, makes our systems less robust, less heterogeneous, and less interesting.

A recent paper entitled The Platonic Representation Hypothesis supports this idea. Large-scale AI models, such as GPT-4, are all starting to learn the same representations of human digitized reality.

In the allegory of Plato's cave, prisoners chained in a cave see only shadows of objects cast on the wall, mistaking these shadows for reality itself. I've noticed this allegory coming up a lot in AI discourse recently and wondered why. I think it's because it offers a useful way to suggest that AI systems, like the prisoners, are constructing representations of reality from the limited "shadows" of data they are trained on.

The paper argues that as AI models are trained on increasingly large and diverse datasets, their representations are converging towards a shared, underlying model of reality. The researchers propose that this convergence is driven by several factors. First, as models are trained on more tasks and data, the space of possible representations that can solve all these tasks becomes increasingly constrained, leading to convergence. Second, larger models have a greater capacity to find these common solutions. Third, the inductive biases of neural networks, particularly simplicity bias, further guide them towards similar representations.
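To make "representations are converging" concrete, here is a minimal, hypothetical sketch of one way to quantify alignment between two models' embedding spaces: a mutual k-nearest-neighbor overlap score, which asks whether the same inputs end up with the same neighbors in both spaces. The random embeddings and the parameter k are illustrative assumptions, not the paper's exact method; in practice you would embed the same inputs with two different trained models.

```python
# Hypothetical sketch: score how aligned two models' representations are by
# comparing the k-nearest-neighbor structure they induce over the same items.
# The "embeddings" below are random stand-ins for real model outputs.
import numpy as np

def knn_indices(X: np.ndarray, k: int) -> np.ndarray:
    """For each row of X, return the indices of its k most similar rows
    (excluding itself) under cosine similarity."""
    X = X / np.linalg.norm(X, axis=1, keepdims=True)
    sims = X @ X.T
    np.fill_diagonal(sims, -np.inf)          # exclude self-matches
    return np.argsort(-sims, axis=1)[:, :k]  # top-k neighbors per item

def mutual_knn_alignment(A: np.ndarray, B: np.ndarray, k: int = 10) -> float:
    """Average overlap between neighbor sets computed in space A and space B
    for the same items. 1.0 means identical neighborhood structure;
    roughly k/(n-1) is chance level."""
    nn_a, nn_b = knn_indices(A, k), knn_indices(B, k)
    overlaps = [len(set(a) & set(b)) / k for a, b in zip(nn_a, nn_b)]
    return float(np.mean(overlaps))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    items = rng.normal(size=(200, 32))            # shared "inputs"
    model_a = items @ rng.normal(size=(32, 64))   # stand-in for model A embeddings
    model_b = items @ rng.normal(size=(32, 48))   # stand-in for model B embeddings
    print(f"alignment: {mutual_knn_alignment(model_a, model_b, k=10):.3f}")
```

Under the hypothesis, a score like this would rise as the two models are trained on more tasks, with more capacity, and under the same simplicity bias, because the set of representations compatible with all that data keeps shrinking.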
