Announcing the Artificiality Summit 2025! Don't miss the super early bird special at the end of the email...
This week Facebook announced a new policy banning deepfake videos. Facebook’s vice president of global policy management, Monika Bickert, said videos that have been edited “in ways that aren’t apparent to an average person and would likely mislead someone” and were created by artificial intelligence or machine learning algorithms would be removed under the new policy.
This is a good step, but it's nowhere near good enough, a sentiment many have echoed. It's another example of Facebook's business strategy being driven by its AI strategy, rather than any genuine change in the company's policy for reducing the pollution level of information on the platform.
Deepfakes are definitely a concern but, as this paper from Data and Society points out, they are only part of the story. “Cheap fakes” use conventional technologies such as speeding, slowing, cutting, re-staging or re-contextualizing footage and are far more accessible to the average person.
Deepfakes and cheap fakes exist on a spectrum, from highly complex techniques that demand substantial technical expertise to methods that require almost none. From most technical to least, according to Data and Society:
Deepfakes:
Cheap fakes:
It should be obvious how narrow and techno-centric this new policy will be in practice: it captures only the top line of this list. And while malicious AI-generated content is dangerous, it is not inherently more dangerous than less sophisticated doctored media. Cheap fakes can cause just as much havoc as more technically sophisticated deceptive media. One could argue that cheap fakes can be even more engaging: they grab attention because they sit so clearly in the uncanny valley or are distinct, unusual, curious or amusing; they manipulate confirmation bias; they incite a sense of urgency to act. And it's the engagement that matters for Facebook - amplification of engaging content is what the AI does.
Bickert also testified this week before the Subcommittee on Consumer Protection and Commerce on manipulation and deception in the digital age. When questioned, she acknowledged that on many occasions Facebook is slow to act: slow to find malicious content, slow to get information to fact checkers, slow to remove. The real problem is that people cannot react as open content is amplified at immense speed and scale. As Dr. Joan Donovan pointed out in her testimony: "the platform's openness is now a vulnerability."
Banning deepfakes is worthwhile - and certainly an interesting technical challenge - but we shouldn't kid ourselves that it indicates any real change regarding information safety. What's really needed is for Facebook to decouple the amplification of content from the content itself. Ultimately this is the only way to reduce the risks that come with the market in deceptive information and the attention economy.
Elsewhere this week:
The Artificiality Weekend Briefing: About AI, Not Written by AI