Facebook's AI intends to break first, fix later

Facebook's AI chief scientist oversimplifies humans. Again.

Abstract image of broken plates

An online debate this week offers a rare glimpse into how Facebook’s chief AI scientist thinks about humans, and it should worry us all. Yann LeCun, also a professor at NYU, presides over some of the most powerful AI in existence. Facebook’s algorithms connect millions of people, at unimaginable scale, and at speeds beyond human response time. Although the company abandoned its earlier mantra of “move fast and break things,” this debate shows that key people at Facebook still consider breaking things legitimate. But when it comes to existentially dangerous AI, fixing things after they break seems like a very bad idea.

I’ve written before about Yann LeCun’s lack of understanding of human behavior and the danger it poses for how Facebook develops its powerful AI. This week, two of the world’s AI authorities tried to provide some much-needed education and perspective before LeCun, quite possibly, creates AI so dangerous that it threatens humanity.

Stuart Russell (professor, UC Berkeley) took LeCun to task while Yoshua Bengio (professor, Université de Montréal) attempted some degree of adjudication. The comment stream is long and fragmented (a handy collation here), so I’ll try to summarize the central argument. (I have paraphrased, so these are not their actual words, but I hope they capture the argument and the sentiment.)
