Facebook's GIPHY acquisition is genius

Helping Facebook's AI understand hidden meaning.

[Image: a person hiding]

If necessity is the mother of invention, then Facebook’s AI has been on extra duty since Covid as it tries to identify and understand hidden meaning.

AI now proactively detects 88.8 percent of the hate speech content we remove, up from 80.2 percent the previous quarter. - Facebook, May 2020

Moderators, as contractors, can’t work from home due to security concerns, so hate-speech removal has relied more on AI while human moderators prioritize Covid misinformation. Covid is new, so there’s relatively little data for AI to learn from, which makes humans even more important. But humans are not a sustainable strategy for Facebook. Facebook is simply too big, too fast, and too diverse.

Content moderation is a frontier for AI research. Developments in content moderation will drive a wave of new AI capability, which will be of huge value in Facebook’s core business of micro-targeting ads.

Hate speech is complex for AI: it’s a tiny fraction of the billions of posts that are not problematic; it’s often multimodal; it can be ironic or sarcastic, which is tough for AI; and the people posting it are deliberately trying to avoid detection, for instance by manipulating words or writing text that is ambiguous outside its broader context. Haters use dog whistles to hide their meaning from anyone who doesn’t understand the codes, from those who don’t speak the hidden language.
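To see why deliberate word manipulation defeats simple approaches, here is a toy sketch (not Facebook’s system; the blocklist term and substitution map are invented for illustration) of a naive keyword filter that misses an obfuscated term, and a basic normalization step that recovers it:

```python
# Hypothetical blocked term, for illustration only.
BLOCKLIST = {"badword"}

# Undo common character substitutions (leetspeak-style evasion).
SUBSTITUTIONS = str.maketrans({"0": "o", "1": "i", "3": "e", "4": "a", "@": "a", "$": "s"})

def naive_filter(text: str) -> bool:
    """Flag text only if a blocked term appears verbatim."""
    return any(term in text.lower() for term in BLOCKLIST)

def normalized_filter(text: str) -> bool:
    """Map substituted characters back to letters before matching."""
    cleaned = text.lower().translate(SUBSTITUTIONS)
    return any(term in cleaned for term in BLOCKLIST)

print(naive_filter("b4dw0rd"))       # False: obfuscation slips past
print(normalized_filter("b4dw0rd"))  # True: normalization recovers it
```

Normalization only patches one evasion tactic; posters can switch to misspellings, spacing tricks, images of text, or coded language, which is why detection at Facebook’s scale pushes toward learned models rather than hand-written rules.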

Of course, if Facebook can use AI to solve hate speech, then it can use the same AI for a lot more. Which is a big incentive.
