Why Generative AI Transparency Matters

New research from the Stanford Center for Research on Foundation Models shows that foundation models are largely opaque: their developers disclose little about how the models are built, how they work, and how they are used.

The Foundation Model Transparency Index

Generative AI is wrapped in layers of mystery.

  • What are the capabilities of each model? Which model is better at what? What is the risk of using each model?
  • How were the models trained? On what data? Is the data freely available or copyrighted? What data was excluded? Why?
  • Which humans were involved in the training? How does human feedback affect the model? How were these people incentivized for their training input?
  • How does the model connect data across its data cosmos? How often are its answers true? How often are they biased? When are those biases problematic?
  • What are the effects on users? Positive? Negative? Unknown?
  • What are the effects on the planet? How much energy is consumed? How much water is used for cooling?
  • How much does training a generative AI model cost? How much does inference cost? What are the short- and long-term estimates for how those costs will change?

Some of these mysteries may come across as nice-to-know, while others are essential-to-know. To trust generative AI in your daily workflow, you need to know how it works, when to trust it, and when not to. The same is true of any intelligence you work with. Consider the people around you: whom do you trust to do what, and whom do you not? Now think of your favorite generative AI model: do you have the same level of insight?
