OpenAI, Vision, ChatGPT, and AGI

OpenAI Fires CEO Sam Altman, AI Vision Controlling Your Phone, and Are we as close to AGI as Sam Altman says? 🤔


Welcome to the first Artificiality Weekly Briefing. In this newsletter, we will highlight key stories, research, ideas, and productions from the week.

And what a week to start with!


OpenAI Fires CEO Sam Altman

On Friday afternoon, OpenAI announced that its board had fired CEO Sam Altman because he "was not consistently candid in his communications with the board." The board also removed President Greg Brockman from his role as board Chair; he subsequently quit, along with several other senior technical leaders.

Silicon Valley is consumed with rumors about what led to Altman's dismissal. Only the six members of the board know for sure. That said, let's walk through some of the most-noted possibilities.

A Friday afternoon firing often indicates an explosive personal issue. Companies prefer announcing these kinds of issues at the end of the week so the slow weekend news cycle gives them time to manage the fallout. In this case, however, a personal issue doesn't make sense: Brockman was pushed off the board but not fired, and it's hard to see why Brockman and others would quit if Altman had been fired "for cause" over a personal indiscretion.

Some have proposed that Altman created conflicts of interest through outside pursuits like his blockchain projects and new venture fund. These kinds of conflicts can certainly be a problem. But would they be explosive enough to prompt a Friday afternoon firing? Usually, a board would work on a transition plan, not a firing squad.

Many are sharing rumors that Altman must have hidden important product safety issues from the board. But that board includes Ilya Sutskever, OpenAI's Chief Scientist. What could Altman have known and hidden from his chief scientist? Again, this rumor doesn't seem to add up.

Despite that, a conflict between Altman and Sutskever seems to be the most logical explanation, given that Sutskever appears to have organized Altman's firing. And, in our minds, the most plausible conflict is rooted in OpenAI's core tension: is it a non-profit creating AI for the benefit of humanity, or is it a for-profit company creating a next-generation tech giant? Is it a non-profit funded by donations to help save humanity, or is it a for-profit that will use immense quantities of capital to capture the value from replacing labor with AI?

OpenAI was initially created as a non-profit to separate the company's mission from profit motives. The thinking was that AI is so important, and potentially so dangerous, that its development shouldn't be entangled with the pursuit of profit. This argument makes a lot of sense. Perhaps it isn't a great idea to develop a labor-replacing technology at a company with an incentive to maximize profit.

OpenAI's mission and structure were muddied when the company created a for-profit subsidiary that raised billions, primarily from Microsoft. Suddenly, a wealth-creation opportunity and expectation appeared, creating a core tension for those who wanted to maintain the initial mission. On one side of the argument have been Altman and Brockman, who saw capital as essential to success given how much it costs to develop and deploy AI. Altman and Brockman aggressively pushed out new product features, in part to raise investor interest and capital. On the other side has been Sutskever, who wanted more caution, care, and consideration of potential effects. It appears that the outside board members sided with Sutskever, and Altman, along with his aggressive capital strategy, has been shown the door.

OpenAI's identity crisis fits within the broader question of how to limit the potential harm of mixing capitalist incentives with AI power. OpenAI's initial solution was to house AI within a non-profit. Governments around the world are pursuing regulatory limits. Leading figures have endorsed pausing new development. To date, none of these solutions has slowed OpenAI's pursuit of Altman's goal: being the first to create artificial general intelligence (AGI).

Altman can be both credited with and faulted for causing the current AI race. In his pursuit to be first to AGI, he sparked a competitive scramble among Google, Facebook, Amazon, Microsoft, and others, all seeking to catch up with OpenAI. That race has, within just a year, created a new world of generative AI tools with immense power and potential. But the speed of that race and the accompanying cheerleading has also blinded many to the perils.

Perhaps a new, more collaborative AI development path may emerge from OpenAI's leadership change. Hopefully, the company can also develop more effective governance practices. Whether or not dismissing Altman was the right course for the company, the board's handling of the matter was sloppy. Given the society-changing potential of its technology, we would feel a lot better if OpenAI's board didn't seem to be learning on the job.

What to watch for: after the dust settles, keep an eye on two things.

  1. What will Altman and Brockman do next? Rumors are that they are already forming a new company, and 
  2. What changes, if any, will the new management team make to OpenAI's product strategy? Will they stick with Altman's path of releasing AI control to the world through the API and GPTs?

AI Vision Controlling Your Phone

There's some buzz about research showing that GPT-4V can navigate a phone interface and take actions like buying a product (albeit accurately only 75% of the time). What could go wrong with a 25% error rate, OpenAI's security and privacy holes, and availability to anyone?

It's important not to infer commercial applications from academic research. This paper shows what GPT-4V can do, not what it is good at or what we should use it for.

In contrast, we're more interested in the overlooked research from Apple showing similar functionality, from a company we trust not to allow hackers to take over our phones. Apple's research appears more focused on testing apps, but it could certainly be applied to Siri to take action within apps if the company found it useful and safe. In particular, we wonder about Apple using this functionality to advance its accessibility goals.
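
To make concrete what "navigating a phone interface" looks like in these systems, here is a minimal sketch of the screenshot-to-action loop the research describes. It is an illustration under stated assumptions, not either team's implementation: it assumes the OpenAI Python client with a vision-capable model (the model name is an assumption), and capture_screenshot and tap are hypothetical placeholders for whatever automation layer would actually drive the device.

```python
# A minimal sketch of a screenshot -> proposed-action loop (not the papers' code).
# Assumptions: the OpenAI Python client (openai>=1.0) with a vision-capable model;
# capture_screenshot() and tap() are hypothetical placeholders for a real
# device-automation layer.
import base64
import json

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def capture_screenshot() -> bytes:
    """Hypothetical helper: return the current phone screen as PNG bytes."""
    raise NotImplementedError


def tap(x: int, y: int) -> None:
    """Hypothetical helper: send a tap at screen coordinates (x, y)."""
    raise NotImplementedError


def propose_next_action(goal: str, screenshot_png: bytes) -> dict:
    """Ask a vision model to suggest a single UI action toward the goal."""
    image_b64 = base64.b64encode(screenshot_png).decode("ascii")
    response = client.chat.completions.create(
        model="gpt-4-vision-preview",  # assumption: a vision-capable model name
        max_tokens=100,
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": f"Goal: {goal}. Reply with only JSON: "
                         '{"action": "tap", "x": <int>, "y": <int>} or {"action": "stop"}.'},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ],
        }],
    )
    # Assumes the model complies with the JSON-only instruction; it often won't.
    return json.loads(response.choices[0].message.content)


def run(goal: str, max_steps: int = 10) -> None:
    """Loop: look at the screen, ask for one action, perform it, repeat."""
    for _ in range(max_steps):
        action = propose_next_action(goal, capture_screenshot())
        if action.get("action") != "tap":
            break
        tap(action["x"], action["y"])
```

Even this toy loop makes the concern obvious: a roughly 25% per-step error rate compounds across steps, and every screenshot of your phone leaves the device.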


ChatGPT Turns One

On November 6, OpenAI made a splash at its first developer-focused event, announcing a wide range of new capabilities, features, and services. For a first birthday party, it was quite impressive but also awkward. The presentation by OpenAI's then-CEO Sam Altman had the excitement and clumsiness of the parent of a one-year-old, celebrating the milestones with the uncertainty of a new parent.

In some ways, everyone in the OpenAI universe has something to celebrate. But everyone also has something to be wary about, and Altman missed the mark in explaining some logical inconsistencies in OpenAI's product announcements and plans.

In this article, we discuss four parts of the announcement with a skeptic's mindset:

1. Smaller is better. And easier to evade regulators.

2. Cannibal ChatGPT. Eating anything useful. 

3. GPTease. What are they really for? 

4. Copyright Shield. Copywrong.

Of course, everything about these announcements is now up in the air since it appears Altman was ousted, in part, due to his aggressive product release strategy. Will the new leadership stick with these products or will they change tack?


Are we as close to AGI as Sam Altman says? 🤔

A new paper from DeepMind suggests maybe not. It shows that Transformer models, the core tech behind chatbots like ChatGPT, struggle to generalize beyond their training data. The researchers found that while Transformers can learn new tasks from just a few examples if those tasks are similar to the training data, they fail on anything even slightly different.

🔄 This reliance on pre-training coverage, not innate generalization ability, suggests today's AI still lacks the flexible learning of human intelligence. Yet AI pioneer Sam Altman recently claimed that the basic algorithms for AGI may already exist in models like GPT-3.

But this paper indicates that major gaps remain around out-of-distribution generalization. Today's models cannot easily learn truly new concepts without lots of retraining.

So while the raw computational power of models like GPT-4 is impressive, they may still be missing core ingredients for human-like adaptability and transfer learning.

The path to Artificial General Intelligence likely requires breakthroughs beyond sheer scale and data. We need AI that learns more flexibly across tasks, like humans.

This paper suggests we aren't quite as close to AGI as some may hope. Current models still specialize to their training distribution rather than learning broadly.
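
For readers who want the distinction made concrete, here is a toy sketch, in plain numpy, of what "specializing to the training distribution" means. It is an analogy, not the paper's setup: there is no Transformer and no in-context learning here, just a learner whose hypothesis space is shaped by its training family (straight lines) doing well on a new task from that family and badly on a task outside it.

```python
# Toy illustration of in-distribution vs. out-of-distribution generalization.
# Not the DeepMind setup -- just a line-fitter standing in for a model whose
# abilities are bounded by the family of tasks it was trained on.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 200)


def fit_line_and_score(y_true: np.ndarray) -> float:
    """Fit a straight line (the 'training family') to noisy samples; return test MSE."""
    y_noisy = y_true + 0.05 * rng.standard_normal(x.shape)
    slope, intercept = np.polyfit(x, y_noisy, deg=1)
    return float(np.mean((slope * x + intercept - y_true) ** 2))


in_dist = fit_line_and_score(2.3 * x - 0.7)           # new linear task: same family
out_dist = fit_line_and_score(np.sin(4 * np.pi * x))  # sinusoid: outside the family

print(f"in-distribution error:     {in_dist:.4f}")   # tiny
print(f"out-of-distribution error: {out_dist:.4f}")  # large: a line can't represent it
```

The paper's claim is that today's Transformers behave analogously at much larger scale: strong in-context performance near the pretraining distribution, sharp degradation away from it.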

šŸ˜µā€šŸ’«Extraordinary progress but maybe thereā€™s more to ā€œintelligenceā€ than we often conceptualize.
