There’s no universal human reaction to AI

Agency and decisions in the age of machines.

AI-based decision tools and data-driven decision-making are designed to reduce the variability of human decision-making. People assume that data offers an objective view of reality and that an AI decision is rational. With an objective, rational view of reality, decisions get easier because the answer seems apparent and incontestable. In reality, more data isn't necessarily more meaningful: data reflects what someone has chosen to pay attention to, and what's deemed rational depends entirely on the parameters people care about.

How would you expect individuals to react to a decision recommended by an AI? Would it depend on the context in which the AI made the recommendation? Or the level of confidence the AI expressed in the decision? Or the expertise of the person receiving the recommendation? The biggest factor in how people respond to AI-based decision-making is their own decision-making style.

Even when given identical AI inputs, people make entirely different choices. How people use input from AI depends on how they process information, how they regulate their emotions and behavior, and how urgent the decision is. Counter-intuitively, executives who are most rational and data-driven in their decision-making style can be the most likely to reject the algorithm, probably because they also place a high value on their own agency and autonomy. Conversely, executives who dislike making decisions and tend to procrastinate are the most likely to delegate to AI, perhaps because it allows them to shift responsibility to the machine.
