Your primer on how to talk about the “fourth industrial revolution”

We have attempted to summarize the outputs and approach of three studies—from Oxford University, McKinsey Global Institute, and Intelligentsia.ai (our own research firm acquired by Quartz in 2017).


As world leaders gather for Davos, one of the common and continuing themes is the emerging threat of automation and the consequent effect on economic inequality and global stability. Responding to the so-called fourth industrial revolution has become one of the biggest topics of discussion in the world of technology and politics, and it’s not surprising that anxiety runs high.

A lot of the current conversation has been shaped by research with scary conclusions such as “47% of total US employment is at risk from automation.” In a survey last year of 1,600 Quartz readers, 90% of respondents thought that up to half of all jobs would be lost to automation within five years. Yet we found that everyone thought it was going to happen to someone else: 91% of respondents who work saw no risk to their own job.

If it’s true that half the jobs will disappear, then it’s going to be an entirely different world.

As leaders and policy makers consider the broader implications of automation, we believe it’s important that they remember that the predictions and conclusions in the analytically derived studies—such as the 47% number—come from just a few sources. All the studies on the impact of AI have strengths and weaknesses in their approach. To draw deeper insight requires taking a closer look at the methodology and data sources they use.

The studies

We have attempted to summarize the outputs and approach of three studies—from Oxford University (pdf), McKinsey Global Institute, and Intelligentsia.ai (our own research firm acquired by Quartz in 2017). We chose the Oxford study because it was the first of its kind and highly influential as a result. We chose MGI because of its scale. And we chose our own because we understand it in great detail.

Our conclusions

We conducted our own research because we wanted to understand the key drivers of human skills and capability replacement. We were both surprised and pleased to find that, even though machines indeed meet or exceed human capabilities in many areas, there is one common factor in the research where artificial intelligence is no match for humans: unpredictability. Where a job requires people to deal with lots of unpredictable things and messiness—unpredictable people, unknown environments, highly complex and evolving situations, ambiguous data—people will stay ahead of robots. Whether it’s creative problem solving or the ability to read people, if the environment is fundamentally unpredictable, humans have the edge. And likely will for some time.

In fact, we found four themes where jobs for humans will thrive:

  • People: This includes jobs that rely on strong interpersonal skills like chief executives, school psychologists, social work teachers, and supervisors of a variety of trades.
  • Numbers: These are jobs that apply math to business problems, like economists, management analysts, and treasurers.
  • Bugs and bad things: This includes human health-related jobs, like allergists, immunologists, and microbiologists, as well as environmentally oriented professions such as toxicology.
  • Spaces and structures: These are jobs that manage the physical world, like engineers and environmental scientists.

When work is unpredictable, humans are superior.

Our conclusions about their conclusions

In all of the studies, researchers had to grapple with the sheer level of uncertainty in the timing and degree of technological change. This is a conclusion in itself and a serious challenge for policy makers whose goal it is to plan for social support and education across generations.

Common across the studies was a recognition of a new kind of automation: one where machines learn at a scale and speed that has fundamentally changed the opportunity for AI systems to demonstrate creative, emotional, and social skills, ones previously thought to be solely human. Machine-learning systems operate not as task-specification systems but as goal-specification systems. This is important because it means that, increasingly, many automated systems adapt and reconfigure themselves on their own.

The biggest weakness of all the studies is that jobs aren’t islands; boundaries change. The story of automation is far more complex and beyond the reach of the models and the data we have at hand. Jobs rarely disappear. Instead, they delaminate into tasks as new technology and business models emerge.

None of these studies is able to forecast the impact of reimagined business processes that fundamentally alter how an occupation functions. None of them can take into account the “last mile” of a job, where automation can be relied upon for 99% of the work but it still takes an on-the-job human to do the remaining 1%. None of them conveniently spits out what knowledge will be most valuable.

There are counterintuitive effects to automation, such as how the value of a job changes after one component is automated. If a specific task in a job is automated, creating value through an increase in productivity, it tends to raise the value of the whole chain of tasks that make up that job. So capabilities that can’t be automated will be a good investment.
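
To make the arithmetic concrete, here is a toy sketch of that effect. All of the numbers are invented for illustration; the point is only the mechanism: the same output from fewer human hours makes each remaining human hour worth more.

```python
# Toy illustration with made-up numbers: when one task in a job is
# automated, the same output needs fewer human hours, so every
# remaining human hour is worth more.

tasks_hours = {"client meetings": 3, "analysis": 4, "data entry": 3}
daily_output_value = 500.0  # dollars of value the whole job produces (assumed)

hours_before = sum(tasks_hours.values())                   # 10 hours
value_per_hour_before = daily_output_value / hours_before  # $50/hour

del tasks_hours["data entry"]  # automate the routine component
hours_after = sum(tasks_hours.values())                    # 7 hours
value_per_hour_after = daily_output_value / hours_after    # ~$71/hour

print(f"${value_per_hour_before:.0f}/hour -> ${value_per_hour_after:.0f}/hour")
```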

Finally, there are new jobs. We are far from solving all the world’s problems and we have an insatiable appetite for more. Just because people today can’t think of the new jobs of tomorrow doesn’t mean someone else won’t.

A note on the data

The common data set used by many of the big studies is O*Net (Occupational Information Network). This is the best data, anywhere. It was built for the US Department of Labor, primarily to help people match the things they care about (such as skills, knowledge, work style, and work preferences) to occupations. For every occupation, there is a different mix of knowledge, skills, and abilities for multiple activities and tasks. When all of these are described and assigned standardized measures such as importance, frequency, and hierarchical level, the final O*Net model expands to more than 270 descriptors across more than 1,000 jobs.
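
As a rough sketch of the shape of this data (field names here are illustrative, not the official O*Net schema), each occupation carries a long list of descriptors, each rated on standardized scales:

```python
from dataclasses import dataclass

@dataclass
class Descriptor:
    name: str          # e.g. "Critical Thinking"
    kind: str          # "skill", "knowledge", "ability", or "activity"
    importance: float  # standardized importance rating
    level: float       # standardized hierarchical level rating

@dataclass
class Occupation:
    code: str                      # occupation identifier
    title: str                     # e.g. "Treasurers and Controllers"
    descriptors: list[Descriptor]  # 270+ descriptors in the full model
```

It is this per-occupation mix of importance and level ratings that the studies below slice in different ways.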

Why does all this matter? Because this level of complexity is what it takes to describe real jobs. The data isn’t gathered for the purpose of analyzing automation potential, so any and all automation modeling has to transform this complex and handcrafted dataset. Subjective judgments by researchers, or statistical manipulation of the standard measures, are the most important new inputs to any novel use of this data store. There’s a lot of room for fudging, personal bias, and lies, damned lies. Absurd results can happen. Previously, when the data was used to predict offshorability, lawyers and judges came out as offshorable while data entry keyers, telephone operators, and billing clerks could never be.

Still, it’s the best data available and if it’s good enough for designing jobs, it’s probably good enough for deconstructing them. It’s all a question of how and where data is manipulated to fit the modeling goal.

Evaluating those studies in detail

Oxford University

  • Why it’s important: Novelty. It was the first of its kind, published five years ago. This study set up the conversation about jobs and robots.
  • Purpose: Analyze how susceptible jobs are to computerization.
  • Headline result: 47% of total US employment at risk from automation.
  • Timing: 2025-2035.
  • Primary job attributes considered: Nine proprietary capabilities. Three levels of human skill from labor data.
  • Limitations: Subjective input from a narrowly focused group; no economic analysis; no modeling of technology adoption or timing. Using labor data for all skill levels may overstate the impact on low-wage jobs.
  • What’s interesting: Two waves of automation, with advances in physical robots doing physical things first, then a second wave driven by breakthroughs in social perceptiveness and creativity. Specification of goals rather than specification of tasks will be the main indicator of computerization.
  • What’s useful: If you only go deep on one technical field, make it reinforcement learning.

The detail: This research, first published in 2013, kicked off the automation story with the finding that 47% of total US employment is at risk from automation. This was an academic study to figure out the number of jobs at risk. It turned out that it wasn’t realistic to pinpoint the number of jobs that would actually be automated, so instead the researchers developed a model that calculated the probability of computerization of any given job.

The most important human touch was a binary yes/no assessment of the ability to automate a job. In a workshop at the University of Oxford, a handful of experts, probably clustered around a whiteboard, went through a sample list of 70 jobs, answering “yes” or “no” to the question: “Can the tasks of this job be sufficiently specified, conditional on the availability of big data, to be performed by state of the art computer-controlled equipment?” We don’t know which jobs they chose; it’s safe to assume the people in the room were not experts in all 70 of them; nor do we know whether there was enough tea and biscuits on hand for them to think as deeply about job number 70 as about job number 1.

The researchers were well aware of the subjective nature of this step. The next step was designed to be more objective and involved ranking levels of human capabilities to find the nine that mapped to the three engineering bottlenecks they were interested in: perception and manipulation, creativity, and social awareness. From this ranking, they were able to apply statistical methods to come up with probabilities of these capabilities being computerized, and therefore the probability of any whole job being automated.
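
The paper used a Gaussian process classifier for this statistical step. A minimal sketch of the pattern, with random stand-in data in place of the real O*Net features and workshop labels, might look like this:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier

rng = np.random.default_rng(0)

# Stand-in data: nine capability "level" scores per job, plus the
# workshop's hand labels (1 = automatable) for the 70 sample jobs.
X_labeled = rng.uniform(0, 7, size=(70, 9))
y_labeled = rng.integers(0, 2, size=70)

clf = GaussianProcessClassifier().fit(X_labeled, y_labeled)

# Apply the fitted classifier to all 702 occupations to get a
# probability of computerization for each whole job.
X_all = rng.uniform(0, 7, size=(702, 9))
p_computerization = clf.predict_proba(X_all)[:, 1]
print(f"{(p_computerization > 0.7).mean():.0%} of jobs at high risk")
```

The design choice to notice: 70 subjective labels drive probabilities for 702 occupations, so everything downstream inherits the biases of that one workshop.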

The limitations of their approach are twofold. First, they looked at whole jobs. In the real world, whole jobs are not automated; parts of jobs are. It’s not possible to fully critique the effect of this on the final results—it’s all hidden in the stats—but it’s intuitive that looking at whole jobs overstates the risk. Second, using “level” as the objective ranking mechanism introduces an important bias. Machines and humans are good at different things. More importantly, what’s easy and “low level” for a human is often an insanely difficult challenge for a machine. Choosing “level” as the primary objective measure risks overstating the risk to low-wage, low-skill, perception-and-manipulation-heavy jobs that operate in the uncontrolled complexity of the real world. Given that the researchers would have been very aware of this effect—which is known as Moravec’s Paradox (paywall)—it’s surprising that they didn’t specifically account for it in the methodology. It is potentially a significant distortion.

One more thing: the researchers did not take into account any dimensions of importance, frequency, cost, or benefit. So all capabilities, whether important or not, used every hour or once a year, highly paid or low wage, were treated the same, and no estimates of technology adoption timelines were made.

So while this is a rigorous treatment of 702 jobs representing almost all of the US labor market, it relied on a group of computer-science researchers assessing jobs they’d never done, at a moment when machine learning, robotics, and autonomous vehicles were top of mind and likely firmly inside their areas of expertise (as opposed to, say, voice, virtual assistants, and other emotional/social AI), and it had no way of modeling adoption over time. A figure of 47% “potentially automatable over some unspecified number of years, perhaps a decade or two” leaves us hanging for more insight on the bottlenecks they saw and when they saw them being overcome.

Perhaps their most important contribution is their crisp articulation of the importance of goal specification. Prior waves of automation relied on human programmers meticulously coding tasks. Now, with machine learning, particularly with significant progress being made in reinforcement learning, the important insight is that it’s far more important to be able to specify the goal for an AI than to input defined tasks for the AI to perform. In many circumstances, there are now the tools for machines to figure out how to get there on their own. Creative indeed.
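
A tabular Q-learning toy (our own illustration, not from the study) shows the distinction. The code specifies only the goal, as a reward, and never the sequence of steps; the agent works the steps out for itself:

```python
import numpy as np

# Five-state corridor; the agent starts at state 0, the goal is state 4.
# We specify the goal (a reward at state 4), never the route.
n_states, n_actions, GOAL = 5, 2, 4   # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.5, 0.9, 0.3

for _ in range(500):
    s = 0
    while s != GOAL:
        a = np.random.randint(n_actions) if np.random.rand() < epsilon \
            else int(Q[s].argmax())
        s_next = min(max(s + (1 if a == 1 else -1), 0), n_states - 1)
        reward = 1.0 if s_next == GOAL else 0.0  # the goal specification
        Q[s, a] += alpha * (reward + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

print(Q.argmax(axis=1))  # learned policy: move right toward the goal
```

Swap the reward and the same code learns a different “job”; nobody rewrites the task list. That is the shift the Oxford authors flagged.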

McKinsey Global Institute

  • Why it’s important: Comprehensiveness. The deep and broad analysis tapped into extensive academic and industry expertise. It was also the first study to look for correlations between human capabilities and job activities.
  • Purpose: Analyze the automation potential of the global economy.
  • Headline result: 51% of activities in the US susceptible to automation, representing $2.7 trillion in wages.
  • Timing: 2035-2075.
  • Primary job attributes considered: 18 proprietary capabilities, at four levels of human skill. 2,000 activities from labor data.
  • Limitations: Doesn’t take into account importance of skills to a job, thereby limiting the economic evaluation at a job level.
  • What’s interesting: Automation is most attractive in highly structured and predictable environments, and there are more of those than you might guess: accommodation, food service, and retail.
  • What’s useful: Largest near-term opportunities will be extending business-as-usual (not always sexy) automation (such as data collection and processing) into more places. Policy responses will be challenging given the high degree of uncertainty in timing.

The detail: The heavyweights of business analysis, MGI published their report in early 2017, analyzing the automation potential of the global economy, including productivity gains. It’s comprehensive, and the analytical process is extensive. They recognized the weakness of analyzing whole jobs and instead used O*Net activities as a proxy for partial jobs. They also introduced adoption curves for technology, so they could report not only on what’s possible but also on what’s practical. As such, their conclusions were more nuanced, with around 50% of all activities (not jobs), representing $2.7 trillion in wages in the US, being automatable. They found that less than 5% of whole jobs were capable of being fully automated. Adoption timing shows a huge variance, with the 50% level reached around 2055, plus or minus 20 years.

MGI took around 800 jobs (from O*Net) and their related 2,000 activities, which they then broke down into 18 capabilities. These 18 capabilities, with four levels each, were uniquely designed by the MGI team, and the capability/level framework is at the core of the technical-potential analysis; it is perhaps the most important human touchpoint of the MGI work. “Academic research, internal expertise and industry experts” informed the framework, and it offers a far more appropriate description of human skill level in relation to automation than does the O*Net data. The framework was then used by experts to train a machine-learning algorithm that applied the capabilities across the 2,000 O*Net activities to create a score for each activity. There’s some “secret sauce” at work here: it’s impossible for any outsider to tell how capability levels or automation potential are assigned to activities. It’s a mix of human, machine, and consulting nuance.
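
We can’t see inside the secret sauce, but the logic of a capability/level framework is easy to sketch. The capability names, levels, and activities below are our invention, not MGI’s:

```python
# Assumed current machine performance on each capability, rated 0-4.
machine_level = {
    "sensory perception": 3,
    "social and emotional reasoning": 1,
    "fine motor skills": 2,
    "retrieving information": 4,
}

def automatable(required: dict[str, int]) -> bool:
    """An activity is technically automatable only when machines meet
    the required level on every capability the activity needs."""
    return all(machine_level[cap] >= lvl for cap, lvl in required.items())

greet_customers = {"social and emotional reasoning": 3, "sensory perception": 2}
file_invoices = {"retrieving information": 3, "fine motor skills": 1}

print(automatable(greet_customers))  # False: emotional reasoning too low
print(automatable(file_invoices))    # True
```

The binding constraint is the weakest required capability, which is why progress on one frontier (say, emotional AI) can suddenly unlock whole classes of activities.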

Finally, to analyze technical potential, they developed “progression scenarios” for each capability. This step must have taken quite some effort: surveys, extrapolation of metrics, interviews with experts, recent commercial applications, press reports, patents, technical publications, and Moore’s Law all went into the mix. Perhaps there was some key factor in the model that got tweaked at the last minute by an individual analyst. We’ll never know. Nevertheless, they are experts with access to vast resources of academic expertise, and they have a ton of practical operating chops.

In the second major stage of their analysis, they created adoption timelines. Here, they used data from 100 automation solutions that have already been developed to create solution times for the 18 capabilities. To assess the impact of automation across industries and jobs, they used proxies from example jobs (there’s a lot of expert consulting input to this) to convert the frequency of an activity into time spent in a job, leading finally to economic impact by industry, activity, and job. This is the sort of modeling that only a handful of groups can pull off. With so many inputs, and the creation of databases as well as models, it would be daunting to recreate.
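
The roll-up from activities to wage impact can be sketched in a few lines. The time shares, automation potentials, and wage below are invented for illustration:

```python
# One example job: per-activity (time share, automation potential).
retail_salesperson = {
    "processing transactions": (0.30, 0.9),
    "stocking shelves":        (0.25, 0.5),
    "advising customers":      (0.45, 0.2),
}
annual_wage = 28_000  # assumed

# Weight each activity's automation potential by the time it consumes,
# then convert the exposed share of time into wage dollars.
exposed = sum(share * potential
              for share, potential in retail_salesperson.values())
print(f"{exposed:.0%} of time automatable, "
      f"${annual_wage * exposed:,.0f} in wages")  # 48%, $13,580
```

Summed across hundreds of jobs and weighted by employment, this is how activity-level scores become trillion-dollar headline figures.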

Weaknesses? The MGI analysis has two important limitations. First, by using only the activities from O*Net and defining their own capabilities, they miss the rich detail of how important a given capability is to a job; instead, all capabilities are treated as equally important. This may have the effect of underestimating the incentive to automate particular activities in higher-wage jobs where the importance of a capability is high, say, an interface between professionals and clients. Second, how they determined adoption timelines is completely opaque to an outsider. But because of the huge uncertainty of the 40-year span, it doesn’t really matter. What’s important is that one of the world’s premier analytical agencies has demonstrated just how uncertain this all is. The takeaway is that there’s no way to get an overall sense of when the breakthroughs may happen and how they may affect jobs. The most difficult job now? Being a policymaker developing long-range plans in such an uncertain techno-socio-political environment.

A key piece of information that is easily overlooked in the MGI report is how much more there is to harvest from current systems and how big the opportunity is to make business technology interfaces more human and more seamless. Maybe it just doesn’t sound sexy when “collecting and processing data” is put up against other, more exciting ideas, but these activities consume vast amounts of time and it’s a safe bet that they’re among the most boring parts of many people’s jobs. Even with the Internet of Things and big data technologies, there’s still an enormous amount of data work that’s done by human hand, consuming hours of time per day, before people get on with the real work of making decisions and taking action. With advances in conversational AI and vision interfaces, we would expect to see an explosion in developments specifically to help people better wrangle data.

Intelligentsia.ai

  • Why it’s important: It’s ours. We know its flaws, shortcuts, and fudges.
  • Purpose: Analyze the opportunity to invest in automation. That is, to create machine employees.
  • Headline result: Market opportunity of $1.3 trillion over the next ten years. In that time, up to 46% of current capabilities offer attractive investment opportunities.
  • Timing: 2026-2036.
  • Primary job attributes considered: 128 capabilities (skills, abilities, activities) from labor data. Importance of capabilities from labor data.
  • Limitations: No accounting for time spent in a job. Subjective projections of technical progress. No accounting of dynamic changes in value as automation changes a job.
  • What’s interesting: The largest market opportunities and the greatest value-add are in different places: communication drives value-add, while product opportunities are in management and planning.
  • What’s useful: Be hyper-aware of the frontier of emotionally aware AI. It will be a breakthrough when an AI can respond with the appropriate emotion to a person in a difficult situation or a high emotional state, with the intent of influencing or persuading them to take action.

The detail: At Intelligentsia.ai, we were fascinated by the debate over automation of jobs and decided to do our own analysis of the opportunity to invest in automation, that is, invest in some new kind of machine employee. We, too, turned to O*Net. “Level” was required but not enough to really understand the incentive to invest; we needed both level and importance.

Our methodology did not employ any statistical methods or machine learning; we rated everything by hand. Our subjective assessments were primarily predictions of what a machine can do today versus what we think a machine will be able to do in 20 years. This relied on research expertise and considered opinion, plus the concentration it took to assess and rank 128 capabilities.

Our view is that there is more intra-technology uncertainty than inter-technology uncertainty. That is, there’s more chance of being completely wrong when forecasting a single technology than across a set of technologies, so we felt comfortable that technology-forecasting uncertainty would broadly average out across the analysis. It is still the biggest weakness in our analysis, primarily because it would be highly unlikely that we, or anybody else, could reproduce our technology capability curves.

We used these forecasts to determine when a machine could match each capability within a job. This allowed us to create an attractiveness ranking, using both importance and skill level, for each job, to which we could apply a dollar figure. From there it was an Excel number crunch to create a list of the most attractive AI capabilities to invest in and the jobs most likely to be affected. We found a market opportunity for machine employees of $1.3 trillion in the US. Because we weren’t trying to determine the jobs at risk, we didn’t get to a set percentage. However, we did find that the percentage of capabilities where a machine could perform as well as a human maxes out at around 46% in 10 years and 62% in 20 years. Most jobs came in significantly below this.
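
A simplified sketch of that attractiveness calculation looks like this. The capability names, importance and level values, forecast levels, and wage are invented for illustration:

```python
def attractiveness(capabilities, machine_forecast, annual_wage):
    """capabilities: {name: (importance, required_level)}.
    Count a capability as machine-matchable when the forecast machine
    level meets the job's required level, weight by importance, and
    scale by wages to size the opportunity in dollars."""
    total_importance = sum(imp for imp, _ in capabilities.values())
    matched = sum(imp for name, (imp, lvl) in capabilities.items()
                  if machine_forecast.get(name, 0) >= lvl)
    return (matched / total_importance) * annual_wage

machine_forecast_10yr = {"mathematics": 6, "active listening": 3}
treasurer = {"mathematics": (5, 5), "active listening": (4, 4)}

print(f"${attractiveness(treasurer, machine_forecast_10yr, 120_000):,.0f}")
# -> $66,667 of the wage is addressable by forecast machine capability
```

Run per job across all 128 capabilities and summed over employment, this kind of score is what produced our $1.3 trillion figure.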

From our admittedly biased perspective, the most useful part of our analysis was that it helped us home in on the best opportunities for investing in AI in the next 10 years. If you’re an entrepreneur and want to create products with the greatest market opportunity, invest in AI for combining information and choosing methods to solve problems, as well as emotionally intelligent AI that can assist people in caring for others or motivating people. If you’re an “intrapreneur” looking for the highest value-add inside a company, invest in AI that listens and speaks in a socially perceptive way, as well as the next generation of insight-discovery AI for analysis and decision support.

Originally published in Quartz
