Dystope and Hope: The Dual Servings of an AI Insider

A research review of The Coming Wave: Technology, Power and the 21st Century’s Greatest Dilemma by Mustafa Suleyman with Michael Bhaskar.


If there is one AI entrepreneur who could reasonably be thought of as gamekeeper turned poacher, it would be Mustafa Suleyman. After an early career in non-profit work and public policy, he co-founded DeepMind, where he held a firm grip on AI’s direction as he helped steer the company towards developing AGI. After Google’s acquisition he was vice president of AI product management and AI policy, and he is now co-founder and CEO of Inflection AI.

Part memoir, part explainer, part predictor, and part policy, The Coming Wave attempts to explain the global econo-socio-techno-politico system in which AI diffuses. Suleyman is adept at weaving high-level technological and social threads together as he explains how exponential progress in artificial intelligence and synthetic biology (and energy, quantum computing, and nanotechnology) will converge, creating “mass diffusion in great roiling waves”.

Hyperbole aside, it is genuinely hard to convey the impact of these types of technology convergences without sounding like yet another Silicon Valley High Priest. 

Some parts of his message are refreshing in their straightforward, matter-of-fact empiricism. The Turing Test, a measure of a machine's ability to exhibit intelligent behavior indistinguishable from that of a human, feels outdated today because it evaluates mimicry over understanding. Suleyman proposes a clever update that sidesteps the entire issue of intelligence, focusing solely on functional usefulness.

His "Modern Turing Test" tasks an AI with successfully achieving a complex, open-ended goal in an internet environment (that is, the global economy). Specifically, the AI should be able to take an instruction like "Go make $1 million on Amazon in a few months with just a $100,000 investment." To achieve this, the AI would need to research the web to identify trends, source products or designs, communicate with manufacturers, set up and refine the necessary contracts, create a seller's listing, continually update the merchandise based on feedback, and even deal with the legal aspects of registering a business. 

This goes beyond simple question-answer tasks and requires the AI to interpret, make decisions, and act across multiple domains over an extended period. With the release of GPT-4V (the vision version of GPT-4), this test makes a lot of sense: GPT-4V can “look” at a webpage and navigate through the site (by describing how it would do so) to achieve a goal. Sometimes the oldest (and, in this case, one of the most sacred) ideas in a field need a shake-up. Rethinking the Turing Test is a prime example.

Suleyman also wants us to think differently about AGI, the very technology DeepMind was attempting to create. The standard narrative is that AGI will appear as a distinct and discrete event. This assumption, which is rarely questioned, drives an entire downstream industry of existential angst. While there’s a certain logic to one-hit superintelligence, many of the risks are theoretical. A “colossal red herring,” says Suleyman.

In contrast, by questioning “flick of the switch” AGI and focusing us instead on how AI actually progresses, Suleyman opens a more vital dialogue about the risks of AI developing human-level capability in a more diffuse and gradual manner. His authority comes at an opportune time. Rather than being “sidetracked into arcane debates,” all that “matters is what the system can do.” He’d like us to talk about Artificial Capable Intelligence (ACI) instead. ACI is measured by its usefulness to humans but is just as capable of bad as it is of good.

And bad it will be. A world of catastrophe and instability, with unprecedented shifts of power and unpredictable, chaotic, long-tail events washing over us in great waves of eternal turbulence. Technology that is fast (petaflops beget petaflops), asymmetric (things are easier to destroy than create), general (everywhere all at once), and autonomous (does what it wants on its own) is immensely valuable, yet extremely dangerous. It needs to be contained. Without effective containment strategies, AI will enable a lone psychopath to engineer a pandemic or destabilize nation-states by destroying the social fabric.

Suleyman frames the dilemma this way: either we accept a surveillance state or we face an unavoidable catastrophe. His solution is a ten-point plan for containment, laid out in concentric layers, which includes everything from the obvious (technical standards and audits) to the aspirational (AI entrepreneurs who genuinely care about functional institutions in democratic societies, and governments willing to employ top tech talent to develop better policy).

The recommendations are logical, if not especially creative, but they somehow ring hollow. Suleyman’s portrayal of the complex interplay of technologies lacks the depth and insight you’d hope for given his perspective and knowledge. As is often the case in this style of techno-forecasting, much of the real dynamic nature of the system gets washed away by an overweighted focus on the tech. Rather than a rich set of mental models for understanding the subtle interplay, we’re left with a strange cognitive dissonance: if this is so transformative, give me one full transformation. I want to ask him to lay out, end-to-end, the factors for adoption in one process that has so far resisted tech-enabled productivity gains, such as healthcare or home building.

So instead of good explanations of how the system as a whole might deliver great wealth and surplus, we get vague, oversimplified, exponential extrapolations. Take the example of AI’s ability to find promising new materials. By glossing over the fact that there’s an entire system of exploration, validation, production, scale, distribution, and use, we lose a vital perspective: the complexity of physical-world systems made up of already-optimized supply chains and people who behave in unpredictable ways.

Suleyman misses the opportunity to take us to the next level: to give a deeper sense of the many possible outcomes, the hidden points of acceleration, the importance of differing time scales and path dependency, and the inertia of existing systems. The dynamic, and sometimes subtle, nature of technology adoption is diluted into a simple story of tech takeover.

Paradoxically, this weakens his overall message because the real story is about the concentration of resources behind this technology. It’s about the clean machines of capital versus messy, diverse human systems. If the few can govern the many with GPUs running LLMs, we have a lot to worry about, maybe even more than he’s alerted us to.

What is important is that Suleyman remains both the poacher and the gamekeeper. Perhaps Pi (Inflection’s AI assistant) is going to be one of the good AIs. Perhaps he can convince the other High Priests of the Valley not to destroy us in their race to beat each other to AGI, or perhaps he will figure out how to get the tech billionaires to share the great wealth and surplus that will come from AI. If not, maybe he’ll come full circle, take a government job, and save the world from there.

Links: 

- Book
- Video of Suleyman and Harari interview with The Economist
- Inflection AI’s Pi
- GPT-4V research by Microsoft
