Another paradox: this time, explainability

Autonomy relies more on the relative power of the designer than it does on the quality of the explanation.

There seem to be a lot of paradoxes in AI. Perhaps we shouldn’t be surprised. I expect there will be many more. Why?

Paradoxical observations are a natural feature of AI. People build AI with the aim of making it just like us, or nothing like us at all. At these two extremes we see more of ourselves. In many ways it doesn't matter whether it's a perfect reflection of ourselves or a hall-of-mirrors distortion. What AI does is progressively reveal human traits in ways that are natural contradictions.

I've now started to look for them because I think they are both intriguing and revealing. Resolving a paradox can provide insights into AI design. In this week's Artificiality I take a look at a new paradox identified by top researchers: Solon Barocas from Microsoft Research and Cornell University, Andrew D. Selbst from UCLA, and Manish Raghavan from Cornell University. They recently presented their work on the hidden assumptions behind explanations in algorithmic systems and their impact on explainability in AI.

Explainable AI is a foundational concept - one that everyone agrees underpins trust in AI. In US law, for example, citizens have a right to an explanation when an algorithm is used to make credit decisions. But beyond certain legal rights, a user-centric explanation and justification is simply good design.
