AGI is a Red Herring

The current obsession with AGI, fueled by the hype from companies like OpenAI, is a dangerous distraction we must firmly reject. Don't fall for the red herring argument that we need superintelligent AI to save us from ourselves. It's an insult to human intelligence and agency.


AGI is a red herring. That’s not to say it isn’t a “real” phenomenon. Much like the origin story of red herrings, it serves as a distraction. But from what, exactly, does it distract us in the current discourse?

[Before I move on, a word about my use of "real" in quotes. Since there is no agreed-upon definition of intelligence, it seems impossible to agree on a definition of anything derived from intelligence. In some ways this is also true of artificial intelligence, but the definitional difficulty is more problematic with AGI because it is so often benchmarked against human intelligence, which itself lacks definitional clarity. So, with that caveat in place, let's return to the question: from what, exactly, does it distract us in the current discourse?]

First, the Achilles' heel of current AI systems—one that people often forget to highlight—is that AI is inherently about the past. While it can make predictions about the future or fill in informational gaps, it fundamentally relies on historical data. This means AI cannot free us from the past selves we might want to leave behind. This critical limitation remains underappreciated. It makes AI inherently conservative, in the literal sense of the word. It may not only narrow our attention to things that worked in the past; it may also trap us into believing that the past is all there is.

To be sure, there are cutting-edge AI systems that predict possibilities across combinatorial spaces that nature offers but that lie beyond our current reach—AlphaFold, for example. These systems are distinct from the everyday AI most of us encounter, and they are important to understand because they help us envision what we might truly find useful in AI: systems that extend our minds beyond our current experiences, break us free from the constraints evolution has endowed us with, and respect our rights to a future tense. So a large language model like ChatGPT might help you break out of a personal rut, but it has zero chance of breaking humanity out of a collective rut. In fact, it's more likely to create an entirely new, massive rut for all of us.
