The traditional view of automation and labor is that automation increases the value of labor by increasing the productivity of a chain of tasks. Now that machine learning-based AI is being deployed in more places, new automation technologies are replacing human skills in ever finer slices. What has been observed in traditional blue-collar work is that not all AI is good enough to increase the value of the output.
According to Daron Acemoglu, an influential economist at the Massachusetts Institute of Technology, if the automation is “so-so” (just good enough to be adopted but not much more productive than the labor it replaces), productivity will not increase, the cost of the product or service at the end of the chain will not decline (so demand will not rise), and the incentive is then to remove still more human labor. An example of this phenomenon in action is the supermarket self-checkout, arguably the ultimate example of “shitty automation.” Automation since the 1980s has not been especially “human-friendly”; Acemoglu estimates it explains somewhere between 50 and 70% of the variation in wage changes between 1980 and 2016.
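To make that logic concrete, here is a toy sketch of the so-so automation argument in Python. Every number in it is an assumption chosen for illustration, not an empirical estimate from Acemoglu's work.

```python
# Toy model of "so-so" vs. transformative automation in a task chain.
# Every number below is an illustrative assumption, not an empirical estimate.

def unit_cost(human_tasks: int, machine_tasks: int,
              human_cost: float, machine_cost: float) -> float:
    """Cost of producing one unit via a chain of tasks."""
    return human_tasks * human_cost + machine_tasks * machine_cost

baseline = unit_cost(10, 0, 10.0, 0.0)        # all-human chain: $100/unit

# So-so automation: machines take 4 of 10 tasks, but barely beat human cost.
so_so = unit_cost(6, 4, 10.0, 9.0)            # $96/unit

# Transformative automation: the same 4 tasks at a fraction of the cost.
transformative = unit_cost(6, 4, 10.0, 2.0)   # $68/unit

for label, cost in [("so-so", so_so), ("transformative", transformative)]:
    drop = 100 * (baseline - cost) / baseline
    print(f"{label:15s} ${cost:.2f}/unit, {drop:.0f}% cheaper than all-human")

# So-so automation cuts unit cost by only 4%: too little to lower prices and
# raise demand, yet enough to displace 4 of the 10 human tasks.
```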
And this is all before AI, whose adoption has only accelerated since 2016. AI changes things again because it impacts human decision-making and judgment more profoundly. Many AI opportunities are in activities where humans are quite skilled (vision, conversation, pattern recognition), which means the impact of so-so automation, of human-unfriendly AI, is potentially even greater.
The dominant paradigm in AI is that it can free people from mundane work so they can take on more creative work, such as complex problem solving. If AI takes over the tasks that do not require humans to be creative, exercise judgment, and make decisions, then demand for human creativity and judgment will rise. Acemoglu’s insight is that this will likely only deliver on its promise if AI is designed in such a way that humans still get to learn by doing many things at once.
For human judgment and decision-making to improve, people need to deal with a range of problems and be able to recognize different aspects of a problem. Human fluency, the ability to see the entire task in context, to understand its interrelationships, and to “step back” from a task to reflect or reconsider, relies on experience. If AI isn’t good enough to help a human understand the holistic picture, even experts and specialists will find their decision-making ability, and with it their productivity, reduced.
AI is different from traditional automation because it isn’t always accurate. While we can’t predict the impact of AI’s false negatives, false positives, or bias, it’s quite possible that so-so AI amplifies what Acemoglu has seen with traditional automation, only this time across a much broader and more diffuse set of applications. How so-so AI affects the decisions of doctors, nurses, drivers, business analysts, and executives may be more difficult to understand than how so-so automation affects warehouse work, supermarket checkout, or customer service.
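One way to see why the same error rates carry very different consequences across settings is a minimal expected-cost sketch; the error rates, prevalence, and dollar figures below are assumptions for illustration only.

```python
# Expected per-decision cost of an imperfect classifier.
# All error rates and dollar costs are assumed for illustration.

def expected_error_cost(fp_rate: float, fn_rate: float,
                        positive_share: float,
                        fp_cost: float, fn_cost: float) -> float:
    """Average cost per decision from false positives and false negatives."""
    false_positives = (1 - positive_share) * fp_rate * fp_cost
    false_negatives = positive_share * fn_rate * fn_cost
    return false_positives + false_negatives

# Identical model performance, two very different deployments.
checkout = expected_error_cost(0.05, 0.05, 0.10, fp_cost=5, fn_cost=20)
triage = expected_error_cost(0.05, 0.05, 0.10, fp_cost=500, fn_cost=50_000)

print(f"checkout flagging: ~${checkout:.2f} per decision")   # ~$0.33
print(f"medical triage:    ~${triage:.2f} per decision")     # ~$272.50
```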
One potential example of so-so AI is personalization. Recommenders are ubiquitous, and consumers have come to expect some level of curation, with content tailored to their preferences. We’re so used to it, in fact, that we fail to notice both what we are missing and what we are being offered when it’s useless to us. Personalization is so-so because it so often fits us to the curve we are already on, recommending items we’ve already purchased or taking us deeper down the rabbit hole of our last search. It exists to make us more predictable, which is the opposite of the goal of creating new tasks.
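As a caricature of that dynamic, here is a deliberately crude nearest-neighbor recommender. The catalog, feature values, and user history are all invented for illustration; no production system is this simple, but the curve-fitting tendency is the same.

```python
# Minimal caricature of a similarity-based recommender.
# Catalog, features, and history are made up for illustration.
import math

# Items described by two hand-picked features: (cooking, true_crime)
catalog = {
    "cast-iron skillet":  (0.9, 0.0),
    "chef's knife":       (0.8, 0.1),
    "sourdough cookbook": (0.7, 0.0),
    "true-crime podcast": (0.0, 0.9),
    "hiking boots":       (0.1, 0.1),   # nothing in history points here
}

history = ["cast-iron skillet", "sourdough cookbook"]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# User profile = average of the items already consumed.
profile = tuple(
    sum(catalog[item][i] for item in history) / len(history)
    for i in range(2)
)

# Rank everything the user hasn't seen by similarity to their past.
recs = sorted(
    (item for item in catalog if item not in history),
    key=lambda item: cosine(profile, catalog[item]),
    reverse=True,
)
print(recs)  # ["chef's knife", 'hiking boots', 'true-crime podcast']

# The top recommendation is more of what the user already owns; anything
# off the existing curve is ranked at the bottom by construction.
```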
Personalization works because it’s just good enough to be adopted. When it comes to higher-stakes decision-making processes, we should make a bigger effort to avoid so-so AI. AI that is too narrow and slices decisions too finely will hamper our natural human talent for understanding context and identifying exceptions. A human-friendly goal is to create valuable new tasks for humans, not to fine-slice a role so that AI can replace humans on narrow existing tasks most of the time, needing a human only for errors and edge cases.
Ironically, aiming for narrow metrics of human parity can reduce the economic incentive to adopt AI, something investors have known for a while. Many AI applications rely on humans in the loop to perform well, especially when the AI encounters edge cases or when the model drifts from reality. Maintaining training data is estimated to cost AI companies 10 to 15% of their revenue, and many customers have yet to bear this full cost. For human-in-the-loop activities that address edge cases or “pick up the cognitive slack,” more accurate AI translates into more training data, processing, and storage costs. Optimizing the split between human and machine is a moving target.
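A back-of-the-envelope sketch shows why that target moves. All volumes, prices, and shares below are assumptions; only the 10 to 15% data-maintenance range is taken from the paragraph above, with the assumption that pushing accuracy higher pushes that share up.

```python
# Back-of-the-envelope cost of an AI service with humans in the loop.
# Every figure here is an assumption for illustration, not industry data.

def annual_cost(cases: int, automation_rate: float,
                human_review_cost: float, revenue: float,
                data_maintenance_share: float) -> float:
    """Human review of non-automated cases plus training-data upkeep."""
    review = cases * (1 - automation_rate) * human_review_cost
    upkeep = revenue * data_maintenance_share
    return review + upkeep

CASES = 1_000_000        # decisions per year (assumed)
REVIEW = 2.00            # dollars per human-reviewed case (assumed)
REVENUE = 5_000_000      # dollars per year (assumed)

# Assumed: higher automation rates demand costlier data maintenance.
for rate, upkeep_share in [(0.80, 0.10), (0.90, 0.12), (0.95, 0.15)]:
    cost = annual_cost(CASES, rate, REVIEW, REVENUE, upkeep_share)
    print(f"automation {rate:.0%}: total ${cost:,.0f}")

# automation 80%: total $900,000
# automation 90%: total $800,000
# automation 95%: total $850,000
```

Under these made-up numbers, total cost falls and then rises again as automation increases: saved review labor is eventually outweighed by the cost of keeping the model accurate, which is one reading of why the human-versus-machine optimum keeps shifting.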
If we want humans to be good decision-makers, data and AI have to make humans more productive and not be just so-so.
Helen Edwards is a Co-Founder of Artificiality. She previously co-founded Intelligentsia.ai (acquired by Atlantic Media) and worked at Meridian Energy, Pacific Gas & Electric, Quartz, and Transpower.