In 2016, researchers from Princeton University and the University of Bath made waves in the AI research community with a landmark study. They examined word embeddings, a common tool AI researchers use to represent language, derived from a large database of text from the internet, and they found associations that strongly correlated with human biases—including mundane ones, like the fact that people find flowers more pleasant than bees and weapons less pleasant than musical instruments.
They also found associations that we would recognize today as stereotypes: female names were more strongly associated with family than with careers, and with the arts rather than the sciences. And the biased associations they uncovered mapped onto real-world discrimination. Previous research had found that US job candidates with traditionally European-American names were 50% more likely to get job interviews. The researchers were able to replicate that finding using just the fact that, in their data, European-American names were more closely related to pleasant words.
These were disturbing revelations for AI. But the researchers’ point was to start a conversation about bias not just in algorithms but in humans. Because prejudice is a human trait, which is dependent on cultural norms and an individual’s actions, addressing bias in AI is not solely a technical challenge.
AI is transforming industries and has an uncanny knack for finding patterns, edge cases, and counter-intuitive outcomes. It is immensely powerful and, used wisely, it can provide a platform for progress. But there is overwhelming evidence of its ability to harm, too. AI discriminates and stereotypes. It reinforces historical biases and it works better for some people than others.
A growing cadre of academics, activists, technologists, lawyers, and designers are confronting these biases and attempting to understand and mitigate them. It’s pollyannaish to think we can just “de-bias” AI with the right software. Instead, the attempt to grapple with AI bias will force us to confront the biases in ourselves.
How bias happens
Bias has many causes, but two categories are particularly important. If historical human bias is reflected in a dataset used to train AI, the AI will likely exhibit the same bias. And if the dataset isn’t representative—or is simply unbalanced—the AI will learn to predict things about some groups better than others.
Historical Bias
Many instances of bias are a result of long-standing bias in human societies. Human culture, language, associations, traditional roles and stereotypes all contribute to it.
A prime example of historical bias is Amazon’s abandoned recruitment algorithm, which “learned” to downgrade resumes associated with women. That happened because the data reflected a historical bias: men were historically seen as a better “fit” for employment. Amazon scrapped the initiative in 2017 because executives lost confidence that the AI wouldn’t just keep finding more ways to discriminate.
The problem in this case wasn’t, as far as we know, that the dataset didn’t have women in it. It was that women had historically been hired at lower rates, a pattern the algorithm effectively learned to mimic. Anywhere this sort of historical bias is embedded in the data, AI will likely be biased. (Amazon told Reuters that the tool “was never used by Amazon recruiters to evaluate candidates” but otherwise declined to comment on its reporting.)
Another example of historical bias comes from health care. Recent research focused on the US population found that a commonly used algorithm assigned black people lower risk scores than white people of similar health. The algorithm used health costs to predict who would need more care. But black people spend, on average, $1,800 less per year than white people on the treatment of a chronic condition. This is for a variety of reasons, including lower incomes, less flexibility in transport and work schedules, as well as unconscious bias and overt discrimination by health care practitioners, all of which result in disparities in treatment.
In this case, the algorithm interpreted the lower cost of treating black patients as an indication that they were healthier and therefore did not need as much additional care as white people. While this bias is obvious in hindsight, the designers of the algorithm did not recognize it ahead of time.
Bias in representation
Bias can occur because the data are not representative of specific groups. If a dataset contains lots of examples of male CEOs and few examples of female CEOs, for example, a statistical model trained on that data to recognize CEOs is likely to be deeply biased. (Even datasets that are representative can be biased if they’re not balanced between groups, because the algorithm has an incentive to learn patterns associated with the majority group to maximize accuracy.)
Another major source of bias is the set of labels that AI uses to help it learn about the world initially. Kate Crawford is an AI researcher at Microsoft and one of the founders of AI Now, an institute at New York University focused on the social ramifications of AI. In 2019, Crawford, together with artist Trevor Paglen, undertook an art project called “the archeology of datasets.” They took one of the largest, most used image datasets, ImageNet, and studied the values by which it was labeled and constructed. Their tool, called ImageNet Roulette, “often returns misogynistic, racist and cruel labels.” These labels are now an embedded part of many image recognition systems, and the project was instrumental in demonstrating how systemic bias from AI systems is now a ubiquitous feature of our world.
Bias can be introduced because of missing context. In her book Algorithms of Oppression, Safiya Umoja Noble interviews Kandis, a black woman who owns the only local African American hair salon within a predominantly white neighborhood near a prestigious college town in the US. When she was asked about her experience with the business review site Yelp, Kandis highlighted a data gap that she saw as important to how Yelp’s algorithms prioritize her business:
“Black people don’t ‘check in’ and let people know where they’re at when they sit in my chair. They already feel like they are being hunted; they aren’t going to tell The Man where they are.”
Representation in data is critical, and these cases highlight that there are many social and economic factors that affect people’s representation in datasets. There are deep disparities across groups based on their access to the internet and their online behavior, which means that dealing with bias is not solely a technical process.
How bias harms
Even when AI is statistically neutral and technically de-biased, it can perpetuate discrimination and produce unfair outcomes if it is poorly designed or used inappropriately. And AI raises the stakes because bias can now harm more people, faster.
Human bias in recruitment is a huge problem which denies millions a fair and equitable chance to prove themselves. AI has a role to play in solving this problem by efficiently sorting desirable candidates, predicting those who are most likely to succeed, reducing the impact of human prejudice and unconscious bias, and screening people through AI-interview processes. This is big business: the top two companies based on funds raised are HireVue ($93 million) and pymetrics ($56.6 million).
But the biases of a recruiting algorithm created by a popular hiring platform will have far greater impact than even the hardest-working recruiter. Its biases become a single point of failure.
Hiring algorithms assess candidates’ suitability and personality based on videos and games, and by analyzing body language, speech patterns, mouse movements, eye tracking, tonality, emotional engagement and expressiveness. Hundreds of thousands of data points are gathered in a half hour interview or online game-playing exercise.
One of the biggest problems with this process of AI screening is that the science isn’t keeping up. “Academic research has been unable to keep pace with rapidly evolving technology, allowing vendors to push the boundaries of assessments without rigorous independent research,” say researchers from Cornell and Microsoft in a recent paper.
While the intent is good, there is a lack of data on whether AI is improving recruitment diversity and fairness or introducing new sources of bias, which are then applied at scale.
And even when AI isn’t biased in a technical sense, it can be used to discriminate.
“Affinity profiling” is the practice of grouping people based on their assumed interests rather than on their personal traits. This is common in online advertising and it has the potential to be discriminatory if people do not see certain ads or receive different prices based on their affinity. This “discrimination by association” can be difficult to detect, which means that, from a legal perspective, it can be hard to remedy.
Affinity profiling is common in online advertising. For instance, as recently as 2016, Facebook allowed advertisers to use “ethnic affinity” as a proxy by which to target people by race. In 2018, the company removed 5,000 of these affinity categories to address concern from activists and lawmakers. The company also added a requirement that advertisers behind ads for housing, jobs, and credit cards comply with a non-discrimination policy.
Even so, AI’s ability to detect subtle correlations between interests and personal traits like race and gender makes it exceedingly difficult to identify when someone has been discriminated against. This form of discrimination matters because of its opacity, which effectively allows discrimination to hide in another guise. Scholar and teacher Chris Gilliard calls it “friction-free racism.” Affinity profiling is indirect and is seen as neutral, so it “allows people to feel comfortable with racism,” Gilliard says.
Dealing with bias
Individuals, companies, and societies are beginning to grapple with AI bias through new programming tools, design techniques, laws, and more.
In practice, there are a variety of strategies that people use when dealing with AI bias and harm that comes from AI systems. New technical tools are being developed. Human-centered AI design practices and ethical AI design are being refined as more people work on AI-enabled products. Lawyers are finding ways to challenge unfair outcomes from automated decision-making systems. And people are reflecting on AI’s role in the power structures of society, and how fighting bias means fighting power.
Debiasing and fairness testing
“Debiasing is now table-stakes,” says John C. Havens, the director of emerging technology and strategic development at IEEE, a prominent association of technical professionals. Havens plays a major role in advocating for standard practices in AI ethics. He strongly recommends that all AI models be rigorously examined for bias and tested for fairness, and he argues that, with new technical tools now available, there is no excuse for not doing so.
A lot can be achieved with purely technical tools. Documenting sources of bias, testing for fairness, de-biasing models, and archiving previous model versions are considered best practice.
A critical step in that process is understanding the data representations and the choices the AI is making. A host of tools has been developed to help, including IBM’s AI Explainability 360, Google’s What-If tool, and LIME (Local Interpretable Model-Agnostic Explanations) from the University of Washington. These tools are designed to help data scientists understand the model’s most important features and how it makes predictions. They all combine visualizations with sophisticated data tools that allow data scientists and engineers to examine and manipulate data.
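Each of those tools has its own interface, but the underlying idea is the same: surface which features a model actually relies on. The sketch below is a minimal illustration using scikit-learn’s permutation importance on entirely made-up hiring data; it stands in for, rather than reproduces, the tools named above.

```python
# A minimal sketch of surfacing a model's most important features, using
# scikit-learn's permutation importance. The hiring dataset and its column
# names are hypothetical; the tools named above have their own interfaces.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "years_experience": rng.integers(0, 20, 500),
    "test_score": rng.uniform(0, 100, 500),
    "employment_gap": rng.integers(0, 2, 500),
    "hired": rng.integers(0, 2, 500),
})
X, y = df.drop(columns="hired"), df["hired"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier().fit(X_train, y_train)

# Shuffle each feature in turn and measure how much test accuracy drops:
# the features whose shuffling hurts most are the ones the model leans on.
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
for name, score in sorted(zip(X.columns, result.importances_mean), key=lambda p: -p[1]):
    print(f"{name:>18s}: {score:.3f}")
```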
Even so, most AI experts acknowledge that an unbiased dataset is not possible. Bias, in some form, will always exist, which means that it’s vital to understand how bias affects different groups. This is the role of fairness testing.
There are multiple technical definitions of fairness, all based on what happens to different populations when AI makes an incorrect prediction. The simplest idea of fairness is to ensure some form of parity across a predetermined list of groups, often based on legally protected categories like race or gender, or on other domains where discrimination is known to be common.
AI can make four types of predictions. As an example, imagine you are a recruiter and you use a pre-recruitment algorithm that tests candidates, gives them a score, and then recommends candidates for you to interview based on their score. It can do one of four things (tallied in the sketch after this list):
- Recommend a candidate that it correctly predicts would be good at the job; a true positive (TP)
- Recommend a candidate that won’t be good at the job; a false positive (FP)
- Not recommend a candidate that wouldn’t be good at the job; a true negative (TN)
- Not recommend a candidate that would be good at the job; a false negative (FN)
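To make that concrete, here is a minimal sketch, with made-up candidate outcomes, of how those four counts are tallied by comparing what the algorithm recommended with how candidates actually performed.

```python
# A minimal sketch, with made-up outcomes, of tallying the four prediction
# types by comparing the algorithm's recommendation with how the candidate
# actually performed on the job.
from collections import Counter

recommended   = [1, 1, 0, 0, 1, 0, 1, 0]   # 1 = algorithm recommended the candidate
actually_good = [1, 0, 0, 1, 1, 0, 0, 1]   # 1 = candidate turned out to be good

counts = Counter()
for rec, good in zip(recommended, actually_good):
    if rec and good:
        counts["TP"] += 1   # recommended and good: true positive
    elif rec and not good:
        counts["FP"] += 1   # recommended but not good: false positive
    elif not rec and not good:
        counts["TN"] += 1   # not recommended and not good: true negative
    else:
        counts["FN"] += 1   # not recommended but good: false negative

print(counts)   # Counter({'TP': 2, 'FP': 2, 'TN': 2, 'FN': 2}) for this toy data
```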
Statistical fairness tests use error rates (false positives and false negatives) to test various ratios of failure between different groups. There are many different types of fairness tests but they fall into three broad categories: individual fairness, where similar predictions are given to similar individuals; group fairness, where different groups are treated equally; and subgroup fairness, which tries to balance both approaches by picking the best properties of the individual and the group and testing across various subgroups.
These are some examples of commonly used metrics (a short sketch after the list shows how they can be computed and compared between groups):
- Group fairness: Equal positive prediction rates ((TP + FP) / total)
- Equalized odds: Equal false positive rates (FP / (TN + FP)) and equal false negative rates (FN / (TP + FN))
- Conditional use accuracy equality: Equal positive predictive values, also known as Precision (TP / (TP + FP)) and Equal negative predictive values (TN / (TN + FN))
- Overall accuracy equality: Equal accuracies ((TP + TN) / total)
- Treatment equality: Equal ratios of wrong predictions (FP / FN)
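As an illustration, here is a minimal sketch, with hypothetical confusion-matrix counts for two groups, that computes each of the metrics above so the gaps between groups can be inspected.

```python
# A minimal sketch, with hypothetical confusion-matrix counts for two groups,
# of the fairness metrics listed above. Large gaps in any metric between the
# groups would be a signal worth investigating.

def metrics(tp, fp, tn, fn):
    total = tp + fp + tn + fn
    return {
        "positive prediction rate": (tp + fp) / total,   # group fairness
        "false positive rate": fp / (fp + tn),           # equalized odds
        "false negative rate": fn / (fn + tp),           # equalized odds
        "precision": tp / (tp + fp),                     # conditional use accuracy
        "negative predictive value": tn / (tn + fn),     # conditional use accuracy
        "accuracy": (tp + tn) / total,                   # overall accuracy equality
        "FP/FN ratio": fp / fn,                          # treatment equality
    }

group_a = metrics(tp=40, fp=10, tn=35, fn=15)
group_b = metrics(tp=20, fp=25, tn=30, fn=25)

for name in group_a:
    print(f"{name:>26}: group A {group_a[name]:.2f} vs group B {group_b[name]:.2f}")
```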
The problem is that there is usually a conflict between accuracy and fairness. The core tension is making optimal decisions for the system as a whole versus maintaining parity between groups. Therefore, even with a plethora of technical definitions, fairness testing remains context and value dependent. It involves making decisions about the kinds of mistakes that are made and how these mistakes are distributed between different groups. In The Ethical Algorithm, Michael Kearns and Aaron Roth point out that the tension between fairness and accuracy will never go away, but that it can now be measured and managed better than in the past.
“In the era of data and machine learning, society will have to accept, and make decisions about, trade-offs between how fair models are and how accurate they are. In fact, such trade-offs have always been implicitly present in human decision-making; the data-centric, algorithmic era has just brought them to the fore and encouraged us to reason about them more precisely.”
Perhaps one of the biggest challenges in the technical domain is actually a human one. Maria Axente, Responsible AI Lead with PwC UK in London, works with both executives and technical teams to help them understand the importance of the data supply chain. She says she starts any conversation about tech ethics by asking, “How biased is the context in which a technology is created?”
It’s vital that data scientists are encouraged to think beyond the strict boundaries of their role and to consider the consequences of their work. This helps reduce the bias that creeps in when someone’s background or unconscious preferences shape design choices, and is then amplified at scale without any human making a conscious decision. As Kearns says, “If you don’t say anything about fairness, AI won’t either.”
Human-centered design
As technology and AI diffuse through everything—from devices in our homes, to the apps we use to track our health, to sophisticated equipment in industry—design plays an increasingly important role. We expect things “just to work,” and that expectation extends to AI. “Almost all design stems from making sure that a user can figure out what to do, and can tell what’s going on,” Cliff Kuang and Robert Fabricant write in their history of user-centered design, User Friendly.
Design has long sought to knit human psychology and product functionality together. A fundamental idea in design is feedback: how we adjust our predictions based on our experience. AI presents a unique design challenge because it can be hard to figure out what’s going on and bias can amplify inaccurate and unreliable feedback. AI also speeds up the cycle of feedback—think of how rapidly “likes” on Facebook can result in ads related to those likes.
Human-centered AI design is an emerging practice with the goal of making better AI. People who work in the area generally value putting humans first, with the belief that AI should serve humans, rather than the other way around. Bias that is not understood or revealed can leave users confused and their needs unmet, which undermines that core tenet of design.
Many designers consider the idea of unbiased AI to be unrealistic and counter to technology goals. Josh Lovejoy, head of Design, Ethics and Society at Microsoft, says that bias reflects a latent prioritization. The goal of the designer is to reveal this in a productive way. Instead of aiming to “de-bias” an AI, designers need to be explicit about priorities and heuristics.
Instead of setting a goal for an AI to be as accurate as possible across as big and diverse a population as possible, Lovejoy suggests a different trade-off: more models, each designed for “narrow utility.”
He illustrates this idea with an example: designing an AI to detect fake names in online forms.
In commonly used datasets there are patterns between the length of a surname and the likelihood of that name being marked as a likely fraud case, according to Lovejoy. Training data that is US-centric is biased: very long or very short names, and names that contain hyphens, are more likely to be flagged as fake, giving rise to clusters of error. But the pattern varies in different parts of the world. One answer is to have a greater number of narrow models instead of generalized deeper models. These models are localized, which has the effect of reducing bias. This is less convenient for the technologists but better for users—training data may be more difficult to gather, models are more fragmented, but each individual user has more influence over the AI, which increases how useful the AI ultimately is.
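Lovejoy doesn’t publish code for this, but a hypothetical sketch shows the shape of the idea: route each request to a model trained on local data rather than relying on one global detector. The locales, stand-in models, and thresholds below are invented purely for illustration.

```python
# A hypothetical sketch of "narrow utility" in practice: rather than one
# global fake-name detector, each locale gets its own model and requests are
# routed to the model trained on local data. Locales, models, and thresholds
# here are invented for illustration only.
from typing import Callable, Dict

def us_model(surname: str) -> float:
    # Stand-in for a model trained on US data, where very short or very long
    # surnames are rare and tend to get flagged.
    return 0.9 if len(surname) <= 2 or len(surname) > 30 else 0.1

def vn_model(surname: str) -> float:
    # Stand-in for a model trained on Vietnamese data, where two-letter
    # surnames are common and should not be treated as suspicious.
    return 0.9 if len(surname) > 30 else 0.1

MODELS: Dict[str, Callable[[str], float]] = {"US": us_model, "VN": vn_model}

def fake_name_score(surname: str, locale: str) -> float:
    """Use the locale's own model; default to not flagging names from
    locales that have no dedicated model yet."""
    model = MODELS.get(locale)
    return model(surname) if model else 0.0

print(fake_name_score("Le", "US"))   # 0.9: the US-trained stand-in flags it
print(fake_name_score("Le", "VN"))   # 0.1: the locally trained stand-in does not
```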
Human-centered design has a bias towards understanding the natural ways that humans interact with a product and provide feedback that helps humans develop an accurate mental model of the system. A mental model is our intuition about how something works, such that we can make an accurate prediction about what it will do. AI introduces additional complexity because it fails in unpredictable ways.
Lovejoy challenges technologists and other AI designers to maximize the ways that users can play a bigger role in the AI. The more a user can interact, and the larger the role they play, the more the AI becomes a collaborator with a human who participates by making active choices. This gives the user a better mental model of the AI, which builds trust.
How humans build mental models of AI systems is an active area of research, according to Tom Griffiths, professor of information technology in Princeton University’s psychology department, and co-author of Algorithms to Live By. “Humans are good at thinking about management of our own cognition,” Griffiths says. We can reason about how we, ourselves, would solve a problem and use this reasoning to build an internal model of how an AI would solve the same problem. The key to this process is interaction. “We underestimate how good we are at interacting with black boxes,” Griffiths says. “We already engage with human beings as other black boxes.”
Milena Pribic, a design advisor at IBM focused on AI, sees trust as “an act that a user takes”—so something that can be measured and monitored in user actions. In this way, bias is intentional. It is set by product design teams as part of a design. The AI has a personality, a style, a tone, a goal of its own. Without conscious design choices made up front that help set a user’s mental model, people can come to an AI with an anthropomorphic bias or have expectations that are too high. The result is frustration and “trolling” of an interaction which ultimately amplifies any existing bias in the AI.
AI is different because of the “elongated engagement” that users have with an AI. The first encounter can be very different from the thousandth and the user needs to have autonomy to set and guide the relationship.
Legal solutions
AI challenges the legal landscape. Tech giants, antitrust, privacy, surveillance, ad micro-targeting, discrimination, and bias are all hot topics. While new laws are likely required, it’s not clear when, how, and to what extent new regulations will be introduced to deal with the new challenges of AI.
Algorithmic decision-making that results in discrimination or disparate treatment, that happens without notice, or that works in ways humans cannot understand and explain is increasingly being investigated by journalists, legal scholars, and human rights-focused non-profits, and challenged in the courts.
In the US, some of the most prominent and successful cases against AI-enabled discrimination have been taken against state governments. These cases have had one common factor: automated systems, which stand in for human decision makers, and which have denied people their constitutional rights.
In Arkansas, disabled beneficiaries lost half their benefits with no notice, and no one was able to explain the algorithm’s decision. In DC, a criminal risk assessment tool for juveniles constrained sentencing choices, sometimes only displaying options for treatment in a psychiatric hospital or a secure detention facility, which drastically altered the course of people’s lives. In Michigan, an AI system for unemployment benefit “robo-determination” of fraud adjudicated 22,000 fraud cases with a 93% error rate. Twenty thousand people were subject to the highest-in-the-nation quadruple penalties, amounting to tens of thousands of dollars per person.
The fact that these cases were won by the plaintiffs now provides precedent. People are beginning to push back on unjust treatment and AI-enabled discrimination. “It takes time for people to ‘learn how to get justice’ when new technologies are involved,” Schultz says. Lawyers take cases on a pro-bono basis, and technical experts often work for free, as people learn and “build on each other’s wins.” The recent disability austerity cases in Arkansas, Idaho, and Oregon were won using this exact formula.
The hope is that state and federal governments focus on doing a better job of understanding the implications of taking an engineering mentality to social systems. There is a lack of awareness about how AI systems fail, and a lack of training and planning for dealing with technology failure in vulnerable social systems. When human decision makers are removed from the loop, “the probability of harm is very high,” which means that providers of social services should be thinking carefully about whether AI should be used at all.
Schultz questions whether there is any role for AI in the justice system. One of its fundamental principles is the right to a fair judgment as an individual. AI groups, clusters, classifies and uses proxies which may be incompatible with fairness and justice.
Schultz explains that the legal system in the US is “in flux.” Many foundational legal principles break down in a world powered by intelligent, autonomous systems. But he does see the system responding. “Using tools such as AI Now’s Algorithmic Impact Report makes me hopeful we can come back from the Black Mirror,” he says, referring to the dark future depicted by the UK TV series, where AI drives humans into a world of algorithmic autocracy.
Regulation
One approach is to regulate AI. But regulating fairness requires defining it, which means grappling with the tradeoffs between fairness and accuracy, for example.
“The next frontier is a large technology company articulating and justifying the bias that exists in their models,” believes Michael Kearns, professor of computer and information science at the University of Pennsylvania. Naturally big tech will resist any calls to regulate AI in ways that would make their models and data more open, because doing so would compromise their intellectual property. But, with the scale and speed of AI, the current reactive regulatory approach falls short. The damage is discovered only long after the harm has occurred.
A move to a proactive regulatory regime would need to look more like what happens in the financial system, with FINRA, the industry’s self-regulatory agency, which has direct access to highly granular trading data. Kearns points out that similar issues of speed and scale are at work in the finance industry, where regulation has gone “real-time,” with sensors placed in data feeds. With direct monitoring of data in specific ways, it would be possible to check for bias inside tech companies without needing to fully understand the models, or to digest data at the speed and scale at which a tech company operates.
According to Kearns, there is an incentive for tech companies to allow this form of limited external oversight. It prevents them from being blamed for something that is beyond their control, say where the effect of algorithmic bias crosses an organizational boundary. An example is recent research on whether Google’s advertising shows gender bias in ads for science, technology, engineering, and math (STEM) roles, which were shown more often to men than to women. While on the face of it Google’s model may be biased, at least part of the cause was advertisers being more willing to pay higher prices for clicks from women on products specifically targeted at women. Ads for STEM roles were outbid by ads for other products.
Technology companies may resist regulation. The other issue, says Kearns, is that while the science of AI has advanced, the regulators themselves are decades behind.
Bias and society
When we choose to delegate physical, emotional, cognitive or ethical work to a machine, we outsource a part of ourselves. Outsourcing to an AI can make us more efficient but it can have unintended consequences and there are always trade-offs. It can make us more passive, decrease our sense of responsibility, decrease our agency, and make us detached or helpless.
Machines have interacted with humans for a long time but AI raises the stakes. Traditional technology and expert systems were typically developed based on known rules and heuristics. AI is different. Much of how an AI behaves relies on what it encounters “in the wild” and in its interactions with people. AI is “world-creating in a way that other technologies aren’t,” Jacob Metcalf from Data and Society says.
AI increasingly substitutes for, or enhances, human decision-making. Because AI learns and acts on its own and changes as new data about the world is made available, it can help guide humans in ways that traditional technology does not. This interactivity and shared agency has existential consequences for humans.
Annette Zimmermann, postdoctoral researcher at the Center for Human Values at Princeton, worries that people may reject AI because many AI applications have been shown to be biased in a way that exacerbates social injustice. This, however, does not necessarily mean that all AI applications will be biased and harmful in the same way. Zimmermann emphasizes that while we should subject AI to critical scrutiny, we shouldn’t assume that anything involving AI should be rejected.
What really matters, Zimmermann argues, is that we as a society find a way of implementing more and better democratic processes in this area, so that people can make more informed and direct choices about what AI should and shouldn’t be used for. This means people getting involved where AI is used locally—in their cities, schools and communities. It means municipalities creating ways for people to learn about the technology and its social implications—and creating reliable and transparent ways for people to hold governments and corporations accountable when AI deployment leads to unjust outcomes.
AI’s role means that these choices can’t be a one-time thing. There is a need to constantly evaluate how AI interacts with the world.
Human decision-making works on a geologic time scale compared to how quickly biased and scientifically unsound AI can cause destruction. How can we make space for human-scale reflection and collaborative decision-making?
Entrepreneurs are trying to create that space. Leanne Carroll, a graduate student at the School of Visual Arts in NYC, recognized early on that consumers need a way to surface bias. She’s working on a platform, a “Kickstarter for bias” where people can highlight how they have been affected by an AI system and seek assistance from the AI community and product owners in dealing with it.
Another approach is to use AI and other digital technologies to reflect our unconscious bias back to us. Shift, an Oregon-based startup, uses VR and AI to teach people about unconscious bias and help them change their behavior. Leveraging the scale of AI to train more people and change how we propagate bias is powerful. It borrows an idea from self-driving cars, where new knowledge can be updated and uploaded faster and at scale.
AI can play a role in addressing social problems. It can be used to diagnose and measure them with more precision and clarity than in the past. It can also be used to expose and reframe social problems in new ways, which enables new solutions. AI has an ability to expose bias in ways that make us all more aware and motivated to change.
At its core, AI gives us insight into how human minds work. Humans evolved ways of thinking under constrained resources of energy and time. “Human bias is a necessary consequence of the constraints we are under,” according to Tom Griffiths.
But, as a society, we need to be ready, and have the capacity, to discuss the biases that AI will inevitably reveal.
Originally published in Quartz.