The $1 Trillion Question
The $1 Trillion Question, Mortality in the Age of Generative Ghosts, Your Mind and AI, How to Design AI Tutors for Learning, The Imagining Summit Preview: Adam Cutler, and Helen's Book of the Week.
Chain of thought, tree of thoughts, and now graph of thoughts—a progression that may lead to agentic AI.
A recent paper on Graph of Thoughts reasoning highlights the progression towards Agentic AI, one of our Artificiality Pro Obsessions. As AI tools become increasingly autonomous and capable of handling complex tasks with minimal supervision, their reasoning abilities, task design, and capacity to recover from failures are crucial.
To encourage Large Language Models (LLMs) to reason in more human-like ways, researchers have been exploring various methods, with a recent focus on enabling models to be more flexible in how they combine ideas. We see three big steps in prompt engineering techniques, each of which demonstrates a significant step up in the ability of prompt engineers to tap into the knowledge in a large language model: chain of thought, tree of thoughts, and most recently graph of thoughts reasoning.
Initially, Chain of Thought reasoning was found to enhance the effectiveness of LLMs. By breaking down complex problems into simpler components, models could reason more effectively. Humans do this too: one of the best predictors of someone's ability to solve a problem is how early in the process they break the problem up. Multiple chains of thought advanced this technique by enabling several independent reasoning paths to be generated but, even with this advance, reasoning is still limited because there is no way to perform any "local exploration" such as backtracking.
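The two patterns above can be sketched in a few lines of Python. This is a minimal illustration, not any paper's implementation: the `stub_llm` function stands in for a real model call and returns canned reasoning chains so the example runs offline, and the prompt and helper names are ours.

```python
from collections import Counter

# Chain of thought: one prompt that elicits a single linear reasoning path.
COT_PROMPT = (
    "Q: A shop sells pens at $3 each. How much do 4 pens cost?\n"
    "A: Let's think step by step."
)

def stub_llm(prompt: str, seed: int) -> str:
    """Stand-in for an LLM call; returns a canned reasoning chain."""
    chains = [
        "Each pen is $3, so 4 pens cost 4 * 3 = 12. Answer: 12",
        "3 + 3 + 3 + 3 = 12. Answer: 12",
        "4 * 3 = 7. Answer: 7",  # a faulty chain, to show why voting helps
    ]
    return chains[seed % len(chains)]

def extract_answer(chain: str) -> str:
    """Pull the final answer off the end of a reasoning chain."""
    return chain.rsplit("Answer:", 1)[-1].strip()

def multiple_chains(prompt: str, n: int = 3) -> str:
    """Multiple chains of thought: sample n independent chains and take a
    majority vote. Note the limitation described above: each chain is
    generated in one pass and never revised -- there is no backtracking."""
    answers = [extract_answer(stub_llm(prompt, seed)) for seed in range(n)]
    return Counter(answers).most_common(1)[0][0]

print(multiple_chains(COT_PROMPT))
```

Because the faulty chain is outvoted by the two correct ones, the majority answer survives a single bad reasoning path, which is the whole appeal of sampling multiple independent chains.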
The Artificiality Weekend Briefing: About AI, Not Written by AI