As The (OpenAI) World Turns

We don’t yet know what OpenAI will look like after the dust settles, but here are our main takeaways at the moment. Plus: AI Regulatory Capture, LLMs Thinking about Space and Time, and Generative AI Agents.


For most of the ten years we have worked in AI, few people paid much attention. This week, everything changed as the drama at OpenAI was the top story in major newspapers day after day. Perhaps the narrative about money vs. safety tapped into public concern over AI risk. Perhaps the sudden firing of a prominent and likable founder tapped into the public’s obsession with founders as celebrities. Or perhaps the drama simply unfolded like Game of Thrones, with everyone falling for their favorite character.

While the he said / she said drama captures the public’s attention and the news cycle, our interest is in the profound impact these events may have on AI and society. OpenAI was founded as a non-profit so that profit motives wouldn’t overwhelm concern for society. That noble objective has slowly dissipated as the company created a for-profit subsidiary to raise billions of dollars and prominent employees, concerned about safety, left to found Anthropic. Throughout it all, CEO Sam Altman lauded his board’s ability to fire him if he didn’t follow the company’s mission. They did just that a week ago. And within a few days, he was back and the board was gone. So much for mission-oriented governance.

We don’t yet know what OpenAI will look like after the dust settles. But here are our main takeaways at the moment:

  • The mission-oriented, non-profit governance structure is nothing more than a sham. The board members who wanted Altman out are now out themselves—replaced by stock-standard Silicon Valley white men of privilege. What started as a unique idea to help build AI that benefits humanity is now like any other Silicon Valley start-up, with a myopic founder-at-all-costs mentality. OpenAI is Altman’s company for as long as he chooses.
  • Altman & Co. now have a clear runway to build an AI goliath. In addition to OpenAI’s current business in large language models, Altman has been pursuing building an AI device with Softbank and Jony Ive, and developing a new AI chip to compete with Nvidia. Whether he rolls these ideas into the new OpenAI or he keeps them as independent companies, Altman & Co. have nothing to restrain their ambition to build a new tech goliath.
  • Sales has won over safety. Ironically, the “safety” company, OpenAI, started the AI race by releasing ChatGPT a year ago. Naively positioned as just an experiment, ChatGPT upended our relationship with technology and pushed every tech company into a catch-up race. Under Altman’s leadership, OpenAI continued to aggressively release new products to maintain its early lead, increase revenue, and attract new investment. Now that the veneer of the company’s non-profit governance has been removed, how much more aggressively will OpenAI forgo safety in favor of sales?
  • The potential for AI company self-governance is now dead. We never held much hope for tech companies to prioritize society over earnings and believed this narrative to be misplaced. Perhaps it’s best for all that we have seen self-governance fail now so that we can all move on to the alternatives. The most likely is regulation but, as we note in our regulatory capture piece below, regulatory capture seems likely if not inevitable—especially given that OpenAI is now firmly in the big tech, big capital, big profits race.
  • The new board is a huge disappointment. The new chair, Bret Taylor, seems like a worthy appointment given his background as a founder (Quip), big tech exec (Salesforce), and board member (Twitter). Larry Summers, however, is a major disappointment—especially due to his messed up views on biological differences between men and women (not something we want perpetuated by ChatGPT). If Altman were truly trying to maintain OpenAI’s mission to benefit all humanity, so many options were available. For instance, if he wanted a retired president of a research institution, why not pick Shirley Ann Jackson, a physicist and former president of RPI? Jackson is just one of many qualified board candidates who would more adequately advance OpenAI’s mission than Summers. Altman’s choice of Summers is beyond unimaginative. If his appointment tells us anything at all, it is this: don’t expect OpenAI to be open to designing a diverse human experience.

Hopefully, OpenAI’s new governance and mission will be clearer soon. For now, we encourage you to take a skeptical view and be wary of the Altman disciples who profess that he and his 700 employees are humanity’s only hope.


From Artificiality This Week: 

Is Regulatory Capture by Big AI Inevitable?
The Biden Administration’s Executive Order has stirred up discussion about regulatory capture in AI. For good reason.
How LLMs Think About Space and Time
New research probes the inner workings of these models, revealing the emergence of structured knowledge about fundamental concepts like space and time.


Things That Caught Our Eye This Week:

Generative AI Agents

We are getting closer to an important stage in AI—generative AI agents. This is something on our watchlist because agentic AI is a marker of more general "real world" AI capability. This development matters because of the technical progress around memory, a necessary breakthrough for AGI. (For a discussion of natural general intelligence, listen to our interview with Chris Summerfield.)

Research from Stanford unveiled generative agents that exhibit humanlike behaviors, grounded in a sophisticated memory system. Unlike traditional non-player characters in games, these agents use a large language model to remember interactions, build relationships, and plan events. This technical progress, particularly around memory, marks a critical step towards more lifelike and autonomous AI.

Here’s what’s exciting: significant technical progress on memory (think: ChatGPT remembers everything about you and can do stuff for you without detailed instructions). The advance goes beyond simply generating human-like responses to queries “by thinking through what it means for individual agents to generate believable humanlike behavior independently of human interaction, and converting that into a simple yet workable computational architecture.”
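To make that architecture concrete, here is a minimal Python sketch of the kind of retrieval-scored “memory stream” the paper describes: each memory is scored on recency, importance, and relevance, and the top-scoring memories are pulled back into the agent’s context. The class names, weights, decay factor, and embedding helper below are our illustrative assumptions, not the researchers’ actual code.

```python
# Minimal sketch of a retrieval-scored "memory stream" in the spirit of the
# Stanford generative agents work. Names, weights, and helpers are
# illustrative assumptions, not the paper's implementation.
import math
import time
from dataclasses import dataclass, field

@dataclass
class Memory:
    text: str                     # natural-language observation, e.g. "chatted with a neighbor about a party"
    importance: float             # 1-10, rated once when the memory is stored (e.g. by asking the LLM)
    embedding: list[float]        # vector used for relevance lookups
    last_access: float = field(default_factory=time.time)

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

class MemoryStream:
    def __init__(self, decay_per_hour: float = 0.995):
        self.memories: list[Memory] = []
        self.decay = decay_per_hour  # recency decays exponentially with hours since last access

    def add(self, text: str, importance: float, embedding: list[float]) -> None:
        self.memories.append(Memory(text, importance, embedding))

    def retrieve(self, query_embedding: list[float], k: int = 5) -> list[str]:
        """Return the k memories with the highest combined recency + importance + relevance score."""
        now = time.time()

        def score(m: Memory) -> float:
            hours = (now - m.last_access) / 3600
            recency = self.decay ** hours                      # 0-1, freshest memories score highest
            importance = m.importance / 10                     # normalize to 0-1
            relevance = cosine(query_embedding, m.embedding)   # semantic match to the current situation
            return recency + importance + relevance            # equal weights, for simplicity

        top = sorted(self.memories, key=score, reverse=True)[:k]
        for m in top:
            m.last_access = now  # retrieved memories become "fresh" again
        return [m.text for m in top]
```

The key design choice is that retrieval, not raw context length, decides what the agent “remembers”—which is what lets these agents sustain relationships and plans over long horizons rather than forgetting everything between prompts.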

Here’s the worry: These agents are smart but still make dumb AI-style errors. “They spoke very formally even to close family members, used the same dorm lavatory simultaneously, and went to lunch at the local bar rather than the cafe, as though they’d developed a day-drinking problem”.

Opportunities:
- Can populate virtual spaces, communities, and interactive experiences with believable human behavior and social dynamics. 
- Can simulate complex interpersonal situations to allow people to safely practice and prepare for difficult conversations. 
- Can open up new directions to explore foundational HCI questions around cognitive models, prototyping tools, etc.

Threats: 
- Risk of users forming inappropriate parasocial relationships with agents if they overly anthropomorphize them.
- Errors in agent reasoning could annoy users or cause outright harm if propagated into real-world systems.
- Could exacerbate risks like deepfakes, misinformation, and persuasion.

🔍 Read the full article on Stanford HAI and read the paper. This breakthrough not only enhances gaming experiences but also opens new avenues in social science research.
