The $1 Trillion Question
In this issue: The $1 Trillion Question, Mortality in the Age of Generative Ghosts, Your Mind and AI, How to Design AI Tutors for Learning, The Imagining Summit Preview: Adam Cutler, and Helen's Book of the Week.
New research probes the inner workings of these models, revealing the emergence of structured knowledge about fundamental concepts like space and time.
Do language models actually "think" and reason about the world, or do they merely mimic such abilities by picking up on patterns in data? This question lies at the heart of much of the research aimed at unraveling the inner workings of large language models (LLMs). A new paper titled "Language Models Represent Space and Time" probes how concepts fundamental to human cognition—space and time—emerge in the learned representations of LLMs.
Wes Gurnee and Max Tegmark of the Massachusetts Institute of Technology find that, despite never being explicitly trained to reason about space and time, LLMs develop specialized internal representations responsive to spatial and temporal relationships. Even more striking, when they probed the interiors of these networks they uncovered the spontaneous emergence of individual neurons specifically attuned to spatial relationships like "left" and "right" or temporal orderings like "before" and "after." This suggests that certain core elements of a reasoning mind can arise spontaneously with increasing scale, bringing us closer to understanding how higher-level intelligence might emerge in AI.
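To give a flavor of the probing technique the researchers use, here is a minimal sketch of a linear probe. The idea: if a quantity like longitude can be read off a model's hidden activations with a simple linear map, the model "represents" it. This demo uses synthetic activations that encode the target along one direction (an assumption for illustration), not real LLM activations, and the variable names are ours, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

n_samples, hidden_dim = 500, 64
longitude = rng.uniform(-180, 180, size=n_samples)  # ground-truth target

# Synthetic stand-in for hidden states: the target is encoded linearly
# along a random direction, plus unrelated noisy activity.
direction = rng.normal(size=hidden_dim)
activations = (
    np.outer(longitude, direction)
    + rng.normal(scale=5.0, size=(n_samples, hidden_dim))
)

# Fit the linear probe on a train split, evaluate on held-out samples.
train, test = slice(0, 400), slice(400, None)
X = np.hstack([activations, np.ones((n_samples, 1))])  # add bias column
w, *_ = np.linalg.lstsq(X[train], longitude[train], rcond=None)

pred = X[test] @ w
r2 = 1 - np.sum((longitude[test] - pred) ** 2) / np.sum(
    (longitude[test] - longitude[test].mean()) ** 2
)
print(f"held-out R^2 = {r2:.3f}")  # high R^2 => linearly decodable
```

A high held-out R² is evidence the quantity is linearly decodable from the representation; on real models, the interesting result is that probes like this succeed for place coordinates and dates even though the model was only trained to predict text.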
The Artificiality Weekend Briefing: About AI, Not Written by AI