Google DeepMind on AI Assistants

The fundamental takeaway from this paper is that the design of true personal AI assistants necessitates a foundational shift toward responsible design principles from the outset.

The Ethics of Advanced AI Assistants

A personal assistant for everyone has been a goal in AI since, well, forever. When we first went deep into this topic in 2016, we came to a few conclusions:

  • Everything rests on a new model for privacy. It's not enough for users to click through a set of terms and conditions governing what they offer up to the AI; they need some insight into how and why the AI nudges them in one direction or another. This is a hard problem to solve, for many technical and commercial reasons.
  • An ecosystem of people and their assistants will create a new kind of complexity—for example, more gaming of pricing online, or the potential for a more chaotic information sphere as assistants "talk" with assistants. Inequality can arise in new places: for example, cohort-based dynamic pricing that is difficult or impossible to detect.
  • It will be a challenge to calibrate an assistant to be helpful over the long run. An overly "loyal" assistant that never challenges a user, or never redirects them toward more productive or safer choices at the expense of their stated short-term goals, would spell trouble for users and, potentially, for their real-life friends and connections.

Today, these issues are more visible than ever. The critical intersection in the complex Venn diagram—balancing a minimum viable business model with privacy, profit, and trust—remains unresolved. Researchers have documented intricate online collusion among various instances of Generative AI. And it's still uncertain how much people will trust and follow advice from AI, even when they recognize it might be in their best interest.

Now more than ever, I think AI assistants will be essential tools, and here's why: the contemporary information sphere is simply overwhelming for humans. We are inundated with excessive information that is often of poor quality and ill-suited to effective decision-making. Additionally, a substantial portion of this information is generated by machines, primarily intended to trigger responses from other machines.
