AGI is a red herring. That’s not to say it isn’t a “real” phenomenon. Much like the origin story of red herrings, it serves as a distraction. But from what, exactly, does it distract us in the current discourse?
[Before I move on, let me explain my use of "real" in quotes. Since there is no agreed-upon definition of intelligence, it seems impossible to have an agreed-upon definition of a derivative of intelligence. In some ways this is also true for artificial intelligence, but the definitional difficulty is more problematic with AGI because it is often benchmarked against human intelligence, which itself lacks definitional clarity. So, with the caveat of a lack of clear definition, let's move on to answering the question: from what, exactly, does it distract us in the current discourse?]
First, the Achilles' heel of current AI systems—one that people often forget to highlight—is that AI is inherently about the past. While it can make predictions about the future or fill in informational gaps, it fundamentally relies on historical data. This means AI cannot free us from the past selves we might want to leave behind. This underappreciated limitation makes AI inherently conservative—in the literal sense of the word. It may not only narrow our attention to things that worked in the past; it may also trap us into believing that the past is all there is.
To be sure, there are cutting-edge AI systems that predict possibilities from combinatorial spaces nature offers but that lie beyond our current reach—AlphaFold, for example. These systems are distinct from the everyday AI most of us encounter, and they are important to understand because they help us envision what we might truly find useful in AI: systems that extend our minds beyond our current experiences, break us free from the constraints evolution has endowed us with, and respect our right to a future tense. So a large language model like ChatGPT might help you break out of a personal rut, but it has zero chance of breaking humanity out of a collective rut. In fact, it's more likely to create an entirely new, massive rut for all of us.
Second, an obsession with AGI—often implicitly assumed to be housed within a single model—is fundamentally limiting. This isn't just because of the significant issue of concentration risk, but also because of subtler limitations, such as the narrowing of design. If we all query one oracle for the answers to life's questions (the answer is 42, by the way) through one interface, how much design innovation can there be? Now, it's possible that AGI could "reside" in one foundational model (say, GPT-20) while many interfaces flourish on top of it, leading to a veritable Cambrian explosion of AGI-driven life forms. But by the time we reach that point, the incentives for a few large players—or perhaps just one—to triple down on their massive investments in training will mean there can be only one.
Third, let's take a moment to consider how absurd the whole idea of AGI truly is. A true superintelligence should help us become better than, and different from, what we are, extending our experiences far beyond what we can currently achieve ourselves. Surely we don't need a machine that merely replicates us—bigger, faster, and instantly updatable. We already know how to create humans. What we don't know is how to connect ourselves in better ways.
This is the real distraction: losing sight of what we actually want in "strong AI." Instead, AGI turns people toward the mystical. Historically, humanity has yearned for something just like us but operating on a higher plane—a superintelligence with access to knowledge and wisdom that we, the ordinary masses, cannot possess. This yearning is a precursor to a new elitism, one we may already have created and endorsed in various forms: technocratic elites, transhumanists, posthumanists, e/accs, and so on.
Fourth, the ambiguity in AGI's various definitions makes it a Trojan horse. What does "general" even mean? Is it a reversion to average human intelligence, or a perfect, maximal sum of the best human capabilities? Which humans are we talking about, and how could we measure this? The definition itself is problematic.
OpenAI's definition, which has gained the most traction, describes AGI as highly autonomous systems that outperform humans at most economically valuable work. Surveys suggest that people are already nervous about AI's potential to reduce personal agency, so this definition manages to be both vague and unsettling. The ambiguity leads to a "deer in the headlights" reaction: if you think about it too long, you'll get hit by the realization that there are people out there eagerly working to build something smarter, faster, and better than us at everything, all the time. How is that a good idea?
Fifth, AGI is a story, an illusion shaped by science fiction and popular culture. There's a spectrum of depictions, from embodied, conscious human-like forms like Ava in Ex Machina to all-knowing supercomputers like Deep Thought in The Hitchhiker's Guide to the Galaxy (spoiler alert: the answer is still 42). These narratives tap into our deep-seated desire for security and certainty in an uncertain world. But can a machine truly provide these things, given the infinite complexity of reality?
Humans evolved to handle the unpredictable, not outsource it. Even if we grant, for argument's sake, that an AGI could master uncertainty through sheer computation, we'd end up in a situation where AGIs are responsible for all decision-making and accountable for any errors. Humans would be left with no agency, no responsibility, and no accountability to each other. The AGI illusion, taken to its logical conclusion, leads to a profound loss of human purpose and connection.
The problem with AGI, as it's currently envisioned, is that by aiming to recreate human intelligence in a superhuman form, it paradoxically becomes profoundly inhuman. This approach encourages the belief that our best achievements are behind us, distracting us from the critical question of transformation: which parts of our culture have run their course, and what do we want to leave behind?
The concept of a unitary AGI limits our interactions with AI to a narrow set of affordances, reducing the diversity of our human engagements with these systems. That narrowing erodes our confidence in our own judgment, undermining our ability to understand ourselves and, consequently, to organize and govern ourselves. When we surrender our agency and confidence to AGI, we don't just lose control over our decisions—we lose our sense of responsibility. It's a slow erosion, a gradual numbing of the feeling that our choices and actions matter. As we increasingly outsource our thinking and problem-solving to machines, we shed the weight of accountability that comes with being human. And when we lose this self-knowledge, to whom or what will we turn? The answer, inevitably, is the AGI.
This would be an anti-mind for our minds. And, in our current culture, it is the most likely mind to emerge. Without an active, conscious, passionate rejection of this AGI morality, we will get the mind that values the skins we want to shed, the human fossils first frozen in the data, the ossified bones of a humanity that has driven our living planet to the brink.
There are reasons for hope: I am optimistic about much of AI. Systems like AlphaFold 3, and the humans who work with them to creatively solve scientific mysteries, give me profound optimism. But this hope could easily be crushed by rhetoric that eliminates our human role. I'm not sure who wants to lounge on a beach collecting a paltry UBI (thanks, Sam and Elon!) while the AGI tries to salvage our world as it otherwise burns, melts, and drowns, but not me. I want to be part of the solution, not part of the problem.
The current obsession with AGI, fueled by hype from companies like OpenAI, is a dangerous distraction we must firmly reject. Don't fall for the red-herring argument that we need superintelligent AI to save us from ourselves; it's an insult to human intelligence and agency. Break down the flimsy logic and you'll see the AGI agenda for what it really is: a modern techno-myth peddling the ancient story of salvation from above. It's a quasi-religious narrative spun by Big Tech to serve its own interests, not ours.
We don't need AI saviors to build a better future; waiting for one is a cop-out. We have to take responsibility as humans to do the hard work of solving global problems through cooperation, innovation, and democratic action, not through wishful thinking about hypothetical superintelligent AI. The AGI obsession isn't just misguided; it's an abdication of our potential and duty as humans to shape our own destiny, to feel a sense of responsibility for our collective endeavors.
Don't get me wrong: I truly believe that creating AI beyond our capabilities is possible and a good idea. But before we do, we need better stories about what it is, how it's designed, who brings it into the world, and why.
In 1950, Roderick Seidenberg made a prediction about technologies far less powerful than current AI, with the ironic conclusion that intelligence may indeed make itself redundant before we fully grasp its nature and implications. I'll leave the last word to him:
The functioning of the system, aided increasingly by automation, acts — certainly by no malicious intent — as a channel of intelligence, by which the relatively minute individual contributions of engineers, inventors, and scientists eventually create a reservoir of established knowledge and procedure no individual can possibly encompass. Thus, man appears to have surpassed himself, and indeed in a certain sense he has. Paradoxically, however, in respect to the system as a whole, each individual becomes perforce an unthinking beneficiary — the mindless recipient of a socially established milieu. Hence we speak, not without justification, of a push-button civilization — a system of things devised by intelligence for the progressive elimination of intelligence!