The $1 Trillion Question
The $1 Trillion Question, Mortality in the Age of Generative Ghosts, Your Mind and AI, How to Design AI Tutors for Learning, The Imagining Summit Preview: Adam Cutler, and Helen's Book of the Week.
At a recent conference on increasing global complexity that we attended, David Krakauer, head of the Santa Fe Institute, kicked off the opening keynote panel. The moderator asked why the world is becoming more complex. In a typically contrarian manner, Krakauer responded, "Well, I'm not sure that it is." He argued that globalization and the widespread use of singular algorithms might actually be simplifying the world. This loss of complexity, he suggested, makes our systems less robust, less heterogeneous, and less interesting.
A recent paper entitled The Platonic Representation Hypothesis supports this idea. Large-scale AI models, such as GPT-4, are all starting to learn the same representations of human digitized reality.
In the allegory of Plato's cave, prisoners chained in a cave see only shadows of objects cast on the wall, mistaking these shadows for reality itself. I've noticed this allegory coming up a lot recently in the AI discourse and have wondered why. I think it's because the allegory offers a useful way to suggest that AI systems, like the prisoners, are constructing representations of reality based on the limited "shadows" of data they are trained on.
The paper argues that as AI models are trained on increasingly large and diverse datasets, their representations are converging towards a shared, underlying model of reality. The researchers propose that this convergence is driven by several factors. First, as models are trained on more tasks and data, the space of possible representations that can solve all these tasks becomes increasingly constrained, leading to convergence. Second, larger models have a greater capacity to find these common solutions. Third, the inductive biases of neural networks, particularly simplicity bias, further guide them towards similar representations.
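To make "representations converging" concrete: one common way to quantify how similar two models' internal representations are is a representational similarity score such as linear centered kernel alignment (CKA). This is an illustrative sketch, not the paper's exact alignment metric, and the "model" embeddings below are hypothetical stand-ins generated from a shared latent space.

```python
import numpy as np

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """Linear CKA between two representation matrices.

    X, Y: (n_samples, d1) and (n_samples, d2) embeddings of the SAME inputs
    from two different models. Returns a similarity score in [0, 1].
    """
    # Center each feature dimension
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)

    # CKA = ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    cross = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    self_x = np.linalg.norm(X.T @ X, ord="fro")
    self_y = np.linalg.norm(Y.T @ Y, ord="fro")
    return float(cross / (self_x * self_y))

# Toy check: two hypothetical "models" whose features are linear mixes of a
# shared latent space score high, while unrelated random features score low.
rng = np.random.default_rng(0)
latent = rng.normal(size=(500, 32))
model_a = latent @ rng.normal(size=(32, 64))    # hypothetical model A embeddings
model_b = latent @ rng.normal(size=(32, 128))   # hypothetical model B embeddings
unrelated = rng.normal(size=(500, 64))

print(round(linear_cka(model_a, model_b), 3))    # high alignment (shared latent)
print(round(linear_cka(model_a, unrelated), 3))  # low alignment
```

The intuition matches the paper's claim: if two models trained on different data and architectures end up with high representational similarity on the same inputs, they are, in effect, recovering the same underlying structure of the world.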
The Artificiality Weekend Briefing: About AI, Not Written by AI