Facebook is an AI ethics fail


If you’re at all enmeshed in the tech press, the big issue for the last couple of weeks has been Facebook’s inability to get out of the huge hole it’s dug for itself around misuse of its platform, political ad targeting and lies. It’s fascinating to watch this through the lens of AI governance and ethics because AI ethics has something to say about it all.

Transparency is necessary but not sufficient

Facebook says that its tools for searching political ads are the solution. By making every political ad, anywhere in the world, searchable, the trifecta of problems (preserving free speech, keeping platforms neutral and ensuring accuracy) is solved. But all this has done is shift the responsibility for analysis, interpretation and fact-checking away from Facebook. In this case transparency is a red herring, not even a partial solution. The ethical AI response would be for Facebook to interrogate the information itself, discovering and reporting bias, inaccuracy and misuse directly. Transparency is one important part of AI governance, but it’s worthless if it isn’t part of an ecosystem of explainability, responsibility, accountability and trust.

The people affected should be central

How algorithms are trained, and how alert people are to potential bias and unintended harm, are critical parts of AI development. Data scientists currently shoulder this burden, and shifting some of this front-line work to product managers, business leaders and others who can involve those most affected by AI will be a huge transition. This is central to many AI ethics programs because it recognizes that knowledge is created on the front lines and that, done right, this is where AI and humans can work in a dynamic, mutually reinforcing cycle of improvement.

Facebook’s content moderation is an important front-line activity. At a recent congressional hearing, Representative Katie Porter of California asked Zuckerberg whether he would "be willing to commit to spending one hour per day for the next year watching these videos and acting as a content monitor." The Facebook CEO replied that he was "not sure that it would best serve our community for me to spend that much time" reviewing questionable content.
