An online debate this week offers a rare glimpse into the way Facebook's chief AI scientist thinks about humans, and it should worry us all. Yann LeCun, also a professor at NYU, presides over some of the most powerful AI in existence. Facebook's algorithms connect millions of people, at unimaginable scale and at speeds beyond human response time. The company may have abandoned its previous mantra of "move fast and break things," but this debate shows that key people at Facebook still think breaking things is legit. When it comes to existentially dangerous AI, fixing things after they break seems like a very bad idea.
I’ve written before about Yann LeCun’s lack of understanding of human behavior and the danger this poses for how Facebook develops its powerful AI. This week, two of the world’s leading AI authorities tried to provide some much-needed education and perspective before LeCun helps create AI so dangerous that it becomes a threat to humanity.
Stuart Russell (professor, UC Berkeley) took LeCun to task, while Yoshua Bengio (professor, Université de Montréal) attempted some degree of adjudication. The comment stream is long and fragmented (a handy collation here), so I’ll try to summarize the central argument. (I have paraphrased, so these are not their actual words, but they capture the argument and, hopefully, the sentiment):
LeCun: AI can never be dangerous because: 1) we can build human values into the AI’s objective function, such as our laws and proxies for social contracts, all the way down to the rules we use to discipline our children (and, by the way, not doing this would be stupid); and 2) even if we get it wrong, we can fix the damage later, which may include building a more specialized AI to destroy any evil AI. This will work because specialization trumps generalization.
Russell: It should be glaringly obvious that: 1) we value AI precisely for its ability to find loopholes and strategies we can’t anticipate, which it then uses to fulfill its goal, so how could we ever provide enough certainty in our human-level guidance and designs to be confident we are setting this up safely? And 2) existential risks to humanity don’t really fit into the class of things that most people would think are OK to fix after they break.
Russell’s points are especially relevant if you consider that:
human values are not universally the same;
in practice, laws rely on the application of a lot of human judgment;
humans might, in fact, end up as the ultimate generalists and be out-specialized by the AI’s optimization.
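To make the loophole problem concrete, here is a minimal toy sketch of what happens when a powerful optimizer is handed a proxy objective. This is my illustration, not Russell’s or LeCun’s; the content strategies, the scores and the “true value” function are all invented. The only point is that the argmax of the proxy need not be the argmax of what we actually care about.

```python
# Toy illustration of objective misspecification (hypothetical numbers).
# A designer wants "informative, civil content people enjoy" but can only
# measure a proxy: engagement. A strong optimizer maximizes the proxy.

# Hypothetical content strategies with (engagement, true_human_value) scores.
strategies = {
    "balanced news":          {"engagement": 0.55, "true_value": 0.80},
    "friends' updates":       {"engagement": 0.60, "true_value": 0.75},
    "outrage bait":           {"engagement": 0.95, "true_value": 0.10},
    "conspiracy rabbit hole": {"engagement": 0.90, "true_value": 0.05},
}

def proxy_objective(name: str) -> float:
    """What the system can measure and is told to maximize."""
    return strategies[name]["engagement"]

def true_value(name: str) -> float:
    """What the designers actually wanted (hard to measure in practice)."""
    return strategies[name]["true_value"]

# The optimizer does exactly what it was asked to do...
chosen = max(strategies, key=proxy_objective)

print(f"Optimizer picks: {chosen}")
print(f"Proxy score:     {proxy_objective(chosen):.2f}")
print(f"True value:      {true_value(chosen):.2f}")
# Optimizer picks: outrage bait -> high proxy score, low true value.
# The "loophole" isn't a bug in the optimizer; it's the gap between the
# objective we could write down and the values we actually hold.
```

Making the optimizer more powerful only makes it better at finding that gap; it does nothing to close it.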
What I particularly like about this online debate is that Russell is like a dog with a bone.
To quote Russell directly:
Yann's comment "Facebook stopped maximizing clickthroughs several years ago and stopped using the time spent in the app as a criterion about 2 years ago" makes my point for me. Why did they stop doing it? Because it was the wrong objective function. Yann says we'd have to be "extremely stupid" to put the wrong objective into a super-powerful machine. Facebook's platform is not super-smart but it is super-powerful, because it connects with billions of people for hours every day. And yet they put the wrong objective function into it. QED. Fortunately they were able to reset it, but unfortunately one has to assume it's still optimizing a fixed objective. And the fact that it's operating within a large corporation that's designed to maximize another fixed objective - profit - means we cannot switch it off.
Russell is worried. Even if dangerous AI is years away and remains a low-likelihood event, we are birthing AI into a world where this problem - technically called the value alignment problem - is not only far from solved in an AI research sense, but also far from solved in a human sense. We have information platforms that are regularly manipulated, increasing polarization in society, and a reversal in the value of “truth.” There’s evidence our values are diverging further. Leaving AI design to people who think they can simply engineer in some “human values” is hardly confidence-building.
It’s ironic that value alignment is a perennial, everyday problem for Facebook. This week Facebook refused to take down a political campaign ad that makes false claims about Biden. It’s the usual “doesn’t violate our policies” argument, and the Biden campaign is justifiably pissed off because the ad surely violates his. That seems like a value alignment problem right there, and one that I just don’t trust Facebook to navigate on behalf of the human race, thanks so much.
I’m with Russell. When mistakes happen at human scale and at human speed, they are at least human-manageable. After all, no one expects technology to come with no cost or downside. But AI is different from everything we’ve had before - different from cars, different even from nukes. This isn’t theoretical: OpenAI found as much recently when its agents invented emergent strategies that were supposed to be impossible, or at least not allowed, within the system.
AI’s very nature is to find things we can’t - we already give up some of our autonomy every time we use AI. Russell says we need to “change the way we do AI,” and Facebook is the prime example of why. Our values are more inconsistent, ambiguous, polarized and fluid than any AI designer can hope to input on our behalf. Our values are already gamed by humans - just look at how the social platforms have been made antisocial. The social information ecosystem is prime territory for an AI to find the perfect loophole, potentially one that we don’t want and can’t turn off. The mathematics inherent in AI doesn’t need anything as human as emotion or as biological as testosterone for an AI to develop a self-preserving strategy in which loss of human autonomy is an unintended side effect.
Perhaps the greatest irony in the whole debate is the role of uncertainty. One of the most important differences between AI and traditional technology is its probabilistic nature. Humans have a tough time with probability; it’s simply not an intuitive way to think for most people. Russell’s point isn’t that he’s certain AI is or is not an existential threat to humanity. His point is that if it is, the outcome is catastrophic. This is the part that LeCun simply misses. It doesn’t matter that the chances are remote or that the threat is likely far in the future. For us humans, any chance should be too high.
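Put in the simplest decision-theoretic terms, you weigh an outcome by probability times cost, and when the cost is unrecoverable, a tiny probability does not make the expected loss small. The numbers below are purely illustrative, not anyone’s actual risk estimates:

```python
# Back-of-the-envelope expected-loss comparison (illustrative numbers only).
recoverable_mistake = {"probability": 0.20, "cost": 1.0}       # fixable after the fact
catastrophe        = {"probability": 0.001, "cost": 10_000.0}  # no second chance

def expected_loss(event: dict) -> float:
    """Expected loss = probability of the outcome times its cost."""
    return event["probability"] * event["cost"]

print(expected_loss(recoverable_mistake))  # 0.2
print(expected_loss(catastrophe))          # 10.0 -- dominates despite the tiny probability
# If the cost of an unrecoverable outcome is effectively unbounded,
# no "the chances are remote" argument makes the expected loss acceptable.
```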
LeCun’s perspective is likely influenced by his own objective: he’s paid by Facebook to make Facebook more profitable. He can do that by advocating for minimal control over AI development and maximal scope and scale for AI to act. It’s exactly the scenario in which it suits him to downplay the danger of AI, and exactly the reason we need to change how we develop AI, starting with AI researchers being challenged to understand more about humans.
Other things this week:
I had a group of high school kids try out this interactive video that shows how social media (namely Snapchat) uses emotional AI. The results were pretty interesting and will challenge any preconceptions you may have about whether privacy is still a thing. Check it out yourself here.
An AI Cold War between China and the USA seems possible. It has started with a ban on companies that develop surveillance technologies, justified by China’s treatment of the Uighurs, and it’s easy to see the ban spreading to other AI companies.
ICYMI, excellent Exponential View podcast episode: Azeem Azhar interviews former British PM Tony Blair on AI regulation and how technology disruption can work for the many, rather than the few. Blair’s advice to Facebook: “work out what you think is a sensible form of regulation and get into a dialogue.” He goes on to say, “these huge companies will start to be regarded as public utilities. It’s not conceivable that that amount of power can exist without politics taking an interest and wanting to regulate. It is literally inevitable.”
Helen Edwards is a Co-Founder of Artificiality. She previously co-founded Intelligentsia.ai (acquired by Atlantic Media) and worked at Meridian Energy, Pacific Gas & Electric, Quartz, and Transpower.