The $1 Trillion Question
The $1 Trillion Question, Mortality in the Age of Generative Ghosts, Your Mind and AI, How to Design AI Tutors for Learning, The Imagining Summit Preview: Adam Cutler, and Helen's Book of the Week.
Facebook's chief AI scientist oversimplifies humans. Again.
An online debate this week offers a rare glimpse into how Facebook’s chief AI scientist thinks about humans, and it should worry us all. Yann LeCun, also a professor at NYU, presides over some of the most powerful AI in existence. Facebook’s algorithms connect millions of people, at unimaginable scale and at speeds beyond any human’s ability to respond. The company may have abandoned its old mantra of “move fast and break things,” but this debate shows that key people at Facebook still think breaking things is legitimate. When it comes to existentially dangerous AI, fixing things after they break seems like a very bad idea.
I’ve written before about Yann LeCun’s lack of understanding of human behavior and the danger it poses for how Facebook develops its powerful AI. This week, two of the world’s AI authorities tried to provide some much-needed education and perspective before LeCun, quite possibly, creates AI so dangerous that it threatens humanity.
Stuart Russell (professor, UC Berkeley) took LeCun to task, while Yoshua Bengio (professor, Université de Montréal) attempted some degree of adjudication. The comment stream is long and fragmented (a handy collation here), so I’ll try to summarize the central argument. (These are paraphrases, not their actual words, but they capture the argument and, I hope, the sentiment):
The Artificiality Weekend Briefing: About AI, Not Written by AI