Craft Better Prompts: Using AI to Improve Your Predictions

A review of research by Phil Tetlock and other experts investigating whether human forecasting can be improved through the use of a large language model, and what that implies for crafting better prompts.

AI-Augmented Predictions: LLM Assistants Improve Human Forecasting Accuracy

Key Points:

  • Collaboration with Experts: Phil Tetlock, in collaboration with experts from the Federal Reserve, MIT, and the London School of Economics, explores improving human forecasting through the use of large language models (LLMs).
  • Study Findings: The study, titled “AI-Augmented Predictions: LLM Assistants Improve Human Forecasting Accuracy,” shows that collaborating with GPT-4 on forecasting tasks enhances decision-making by addressing cognitive biases such as overconfidence.
  • Overconfidence in Decision-Making: Overconfidence, a common cognitive bias, leads to misjudgment of risks and overestimation of predictive accuracy. The study presents an innovative approach to mitigate overconfidence using LLMs.
  • Experimental Design: Participants could consult with two types of AI advisors: a “superforecasting” model providing optimized guidance and a biased model exhibiting overconfidence. The control group used a less sophisticated model without direct forecasting assistance.
  • Performance Improvement: Participants using advanced models—whether superforecaster or biased—showed a 23% improvement in forecasting accuracy compared to the control group, highlighting the benefits of structured AI assistance.
  • AI-Assisted Decision-Making: AI can enhance human skills by encouraging users to scrutinize forecasts critically, consider a broader range of information, and challenge assumptions, leading to better decision-making outcomes.

Our focus is deeply rooted in leveraging AI to enhance decision-making, so you can imagine my enthusiasm when I discovered that Phil Tetlock, the author of Superforecasting and a pivotal figure behind many of the strategies in our book, Make Better Decisions, has collaborated with experts from the Federal Reserve, MIT, and the London School of Economics. Their goal? To investigate whether human forecasting can be improved through the use of a large language model.

The study, titled AI-Augmented Predictions: LLM Assistants Improve Human Forecasting Accuracy, explores how AI augmentation interacts with human cognitive biases. The findings reveal that collaborating with GPT-4 on forecasting tasks can markedly improve decision-making processes, particularly by addressing issues of overconfidence and enhancing forecasting precision.

Overconfidence in decision-making is characterized by an inflated belief in one's own judgment, capabilities, or control over outcomes. It's been called the mother of all biases, and most of us are overconfident most of the time. This cognitive bias often leads to a misjudgment of risks and an overestimation of one's predictive accuracy, resulting in decisions that fall short of optimal, especially in complex scenarios where uncertainty is high and information is incomplete.

The paper presents an innovative approach to mitigating the adverse effects of overconfidence through the strategic use of LLMs, including an experiment involving a "biased" LLM designed to neglect base rates and exhibit overconfidence.

The design of the study is especially intriguing for its structured approach. Participants had the option to consult with two distinct types of AI advisors: one, a high-quality "superforecasting" prompt designed to provide optimized forecasting guidance, and the other, a biased advisor characterized by overconfidence and a disregard for base rates. Meanwhile, the control group was only equipped with a less sophisticated model that lacked any direct forecasting assistance.

Participants leveraging the advanced models—regardless of whether they opted for the superforecaster or the biased forecaster—demonstrated a 23% improvement in performance over those in the control group.
