State of AI & Complex Change: Q2 2024

Generative AI is growing quickly as individuals, corporations, and investors all try to stay ahead. The rapid change is creating optimistic visions of the future while also creating significant anxiety as people justifiably worry about their role in a future of abundant intelligent machines.


This week:

  • Essay: A summary of our State of AI & Complex Change research presentation.
  • This Week from Artificiality: How AI Can Help Us Envision More Diverse Intelligence, LLMs can set prices and collude without people knowing, and the full State of AI & Complex Change: Q2 2024.
  • Bits & Bytes from Elsewhere: Microsoft's data center investment plans, the #unthinkingaienthusiast, and Big Tech breaking rules for data capture.
  • Book of the Week: A new section! This week: Ethan Mollick's Co-Intelligence
  • Facts & Figures on AI & Complex Change: A new section representing data we've captured recently
  • Where in the World are Helen & Dave: San Diego, San Francisco, and Seattle

State of AI & Complex Change: Q2 2024

This week, we presented a research briefing on the current state of AI and complex change. Here are some key takeaways:

  • Determining how many companies are using AI, and to what extent, isn’t a simple task. Estimates for how many companies are using AI range from 5% to 55%. The low-end estimate from the U.S. Census Bureau represents companies of all sizes that are using AI to deliver products and services. The high-end estimate from McKinsey focused on large enterprises and a wider definition of usage. Both can be correct while creating just as much confusion as clarity.
  • Half of corporate leaders expect AI to generate cost savings in excess of 10% in 2024…but 90% of them also state that they are waiting out the hype or are just experimenting in small ways. How can 50% of leaders think they will generate savings if almost none of them are doing much? Again, these conflicting statements can create wide-ranging expectations for the overall market.
  • How organizations should value the benefit of generative AI remains an open question. As we’ve written about before, while individuals see productivity gains, those gains are not translating to enterprise-wide value. That may be why more than half of companies who are investing in generative AI say that they aren’t precisely measuring ROI but are investing because they believe ROI is positive.
  • Generative AI usage is up across all age groups and is expected to increase across all functions and use cases. Yet, more than 70% of companies appear to be doing no more than experimenting.
  • Generative AI anxiety is up across the board, with the highest anxiety among younger workers, people of color, and workers with less education. Workers closer to the frontline are nearly twice as concerned as leaders. No wonder people are concerned, as the vast majority of leaders expect significant change to roles.

From the early internet days, I’ve tracked and forecasted fast-growth markets. It’s always hard to keep track given that data lags, and always hard to forecast given that change compounds. Generative AI is perhaps the most challenging market I’ve worked on, however, because what people mean by “AI” varies so widely. In purely technical terms, nearly every person on the planet uses AI nearly every day when they search, scroll social media, or access any number of embedded AI technologies on their mobile devices. Most major organizations are using predictive AI for everything from financial forecasting and marketing automation to sales optimization. Today, however, many people use the term “AI” to mean only the new generative AI. The different meanings of a simple term can make it quite hard to see reality clearly.

Despite the confusion, it is clear that use of generative AI is growing quickly, driving significant new investment as individuals, corporations, and investors all try to stay ahead. The rapid change is creating optimistic visions of the future while also creating significant anxiety as people justifiably worry about their role in a future of abundant intelligent machines. AI company leaders encourage this anxiety when they (honestly) describe AI as a labor replacement technology. The question that leaves for the rest of us is whether we want to be replaced and, if not, whether there is anything we can do about it.

You can watch a replay of our webinar, access our slides, and read a transcript here.


This Week from Artificiality

  • How AI Can Help Us Envision More Diverse Intelligence. If we want truly novel intelligence, AI design has to transcend the handful of universal design principles operating in nature, notably the constraints of computational scale and speed. Leveraging AI to navigate the vast landscape of morphological possibilities allows us to step away from the notion of mimicking biological intelligence. The key to transformative innovation in AI lies in shedding biological limitations and embracing new constraints. These could be societal decisions on the extent of human activities replaced by machines, ethical choices about not inducing consciousness in machines, or other values-driven limitations. AI will force us to broaden our view of intelligence. Moving past the idea of simply copying human intelligence, we're stepping into a world where AI could show us entirely new ways of thinking and solving problems. The real success in AI development will be in discovering forms of intelligence that go beyond anything we've known, transforming how we understand and interact with the world around us.
  • LLMs can set prices and collude without people knowing. Foundational ideas about the promise and perils of algorithm-driven competition were laid out by Ariel Ezrachi and Maurice E. Stucke back in 2016 in their book, Virtual Competition. Since the book was published, the use of algorithms for pricing decisions has proliferated, bringing to light the ease with which they can lead to supra-competitive prices, to the detriment of consumer welfare. A new study reveals that when deployed in oligopolistic settings, LLM-based pricing agents can independently learn to set prices at supra-competitive levels. This form of autonomous collusion is particularly concerning because it occurs in the absence of any explicit collusive intent by the firms involved. More intriguingly, the research highlights how slight modifications in the instructions given to these LLMs can significantly influence the degree of collusion, pointing to a nuanced and potentially unpredictable landscape of algorithmic pricing strategies.
  • State of AI & Complex Change: Q2 2024. On April 9th, we presented our research update on the State of AI & Complex Change which focused on confusion about AI, status of AI adoption and deployments, anxiety about AI, and AI in the Wild. In this post, you can watch a video recording of the webinar, access slides and a transcript (by Otter and unedited), and search through sources and resources.

Bits & Bytes from Elsewhere

  • Microsoft recently announced a plan to invest $100 billion in data center capacity to support generative AI. That got us wondering: if the return on this investment will come from replacing labor, how many jobs need to be replaced to generate the target return? My back-of-the-envelope calculation is 2 million jobs with an average salary of $50k (roughly using Microsoft’s current 35% net margin and 35% return on equity); a rough sketch of the math follows this list. Note that Sam Altman has suggested that the total investment needed to support AGI will be $7 trillion. Using the same ROE math, Altman’s $7 trillion would require replacing 140 million workers, roughly the entire U.S. workforce, meaning that OpenAI’s plan to create AGI that can replace all economically viable labor matches Sam’s capital plan.
  • Our friend Bruno Giussani wrote a great LinkedIn post about “an emerging character in our contemporary collective performance, in which reality parodies itself” which he has dubbed “the unthinking AI enthusiast.” Bruno describes this character as someone “who fervently believes in the power of #AI and its every latest advancements and applications (and that’s OK) without considering (and that’s not OK) its implications. Constantly eager to embrace the tech’s coolness or its capacity to make stuff easy and cheap, the #unthinkingAIenthusiast doesn’t seem to have any inkling that technology doesn’t happen in a vacuum.” Bravo Bruno! At the beginning of every talk and workshop we give, we always ask audiences about their excitement-to-fear ratio—because thinking people will have both. The future of AI may certainly be exciting, but we’re also quite fearful of the negative potential for individuals, organizations, and society as a whole.
  • The New York Times published an important article about the disappointing choices Big Tech has made to capture data for training AI models. For instance, OpenAI transcribed more than 1 million hours of YouTube videos, breaking YouTube’s rules. Google, for its part, may have violated the copyrights of its creators by transcribing YouTube videos itself. Google also modified its terms of service to allow access to publicly available Google Docs. And all the major companies have used copyrighted material without paying creators for their works.
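
For readers who want to check the arithmetic, here is a minimal Python sketch of the back-of-the-envelope math from the Microsoft item above. The inputs are the rough assumptions stated there (a 35% net margin, 35% return on equity, and a $50k average salary), not reported figures, and the function name is ours, purely for illustration.

```python
# Back-of-the-envelope: how many jobs would need to be replaced for an AI capital
# investment to earn its target return, assuming the return comes from labor savings.
# All inputs are rough assumptions (Microsoft's ~35% net margin and ~35% ROE,
# an average salary of $50k), not reported data.

def jobs_to_justify(investment: float,
                    target_roe: float = 0.35,
                    net_margin: float = 0.35,
                    avg_salary: float = 50_000) -> float:
    """Jobs whose salaries must be captured as revenue to hit the target return."""
    required_annual_return = investment * target_roe       # profit needed per year
    required_revenue = required_annual_return / net_margin  # revenue implied by the margin
    return required_revenue / avg_salary                     # salaries that revenue replaces

print(f"Microsoft's $100B: {jobs_to_justify(100e9) / 1e6:.1f} million jobs")  # ~2 million
print(f"Altman's $7T: {jobs_to_justify(7e12) / 1e6:.0f} million jobs")        # ~140 million
```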

Book of the Week

Co-Intelligence: Living and Working with AI
by Ethan Mollick

Ethan Mollick has carved out a phenomenal role for himself as the guru of generative AI, so it's not surprising that his book is both broad and comprehensive. It's a great summary and introduction for anyone who hasn't yet come to grips with modern AI, as well as a handy reference for those who are already across the topic.

Mollick's central idea is that AI is an alien intelligence that we need to learn to live and work with in ways that make humans better and faster. The book offers his practical tips for doing this as well as a solid description of how AI works, what we understand, and what we don't understand. Mollick's perspective skews optimistic, as he doesn't delve deeply into the potential downsides, counterintuitive results, or unintended consequences that may arise from the early adoption cases he cites. However, he does balance this optimism by acknowledging the complexity of the social and cultural adoption of generative AI.

Mollick's book issues a clear call to action that resonates with us, particularly regarding how managers assess their employees. He highlights a crucial shift in the way we evaluate effort, care, and expertise in the age of generative AI. Traditionally, managers relied on proxies such as the number and quality of words produced by an individual to gauge their performance. However, with the advent of generative AI, these measures have become obsolete. Mollick argues that managers must now adapt their assessment strategies to account for the transformative impact of this technology on the nature of work.

It's clear that many of Mollick's ideas will evolve over time as the technology advances and barriers to adoption are overcome through better design. Mollick's work serves as a bridge to this new world, offering valuable insights at our current transition point. One idea that stuck with us is Mollick's perspective on the unique opportunity presented by the current state of AI. He argues that the immaturity of AI interfaces and their implications for human-AI interaction encourage us to engage in reflection. In other words, the absence of refined design prompts us to deliberate more deeply on the technology's impact and potential.

Mollick's book is fundamentally about empowering individuals to approach the future of AI with practical knowledge and personal agency. What sets his work apart is his focus on the current state of evidence and his balanced perspective on how humans actually interact with AI and the underlying reasons for their behavior. By providing readers with a grounded understanding of the present landscape, Mollick equips them to navigate the co-intelligent future he envisions, where humans and machines work together seamlessly. His book delivers on its promise to guide readers towards a future of co-intelligence, offering actionable insights and strategies for individuals to harness the potential of AI while maintaining their autonomy and decision-making power.


Facts & Figures about AI and Complex Change

  • 68%: The percentage of research study participants who chose to submit ChatGPT writing without editing. (MIT)
  • $1: The high end of the range paid by big tech platforms per image to train generative AI models. (Reuters)
  • 90%: Percentage of generative AI cloud unicorns that are Google Cloud customers. (Google)
  • 1,000,000: Number of tokens supported in the Gemini 1.5 Pro context window. (Google)
  • 1: Number of hours of video supported in the Gemini 1.5 Pro context window. (Google)
  • 11: Number of hours of audio supported in the Gemini 1.5 Pro context window. (Google)
  • 30,000: Number of lines of code supported in the Gemini 1.5 Pro context window. (Google)
  • 700,000: Number of words supported in the Gemini 1.5 Pro context window. (Google)
  • 3,000,000,000: Number of Google Workspace users. (Google)
  • 10,000,000: Number of paying Google Workspace customers. (Google)
  • 70%: Percentage of enterprise users who use generative AI suggestions after selecting “Help me write” in Google Docs or Gmail. (Google)
  • 75%: Percentage of enterprise users who insert generative AI created slides into presentations after asking AI to create them in Google Slides. (Google)
  • 69: Number of languages supported for translation in Google AI Meetings and Messaging Add-ons. (Google)
  • $11,000,000: Stability AI’s projected revenue in 2023. (Deeplearning.ai)
  • $153,000,000: Stability AI’s projected costs in 2023. (Deeplearning.ai)
  • $99,000,000: Stability AI’s cloud costs with Amazon Web Services, Google, and CoreWeave. (Deeplearning.ai)

Where in the World are Helen & Dave?

Several upcoming events to highlight—you can see everything on our events page. 

  • ASU+GSV Summit. Join us on April 15th at the ASU+GSV Summit in San Diego. No presentations are scheduled yet, but we may do a pop-up or impromptu gathering for the Artificiality community.
  • EY Innovation Realized. Join us on April 16-17 at EY Innovation Realized at the Palace of Fine Arts in San Francisco for our presentation on Rethinking Value in an Age of AI.
  • Starbucks Innovation Expo. Join us on May 14-15 as we return to the Starbucks Innovation Expo in Seattle for the fourth time to talk about Generative AI & Data Culture.
💡 Interested in us visiting your organization to help you navigate the new worlds of AI and complex change? Set up time for a chat with us here.
