Craft Better Prompts: Sharpening Your Intuitions

By understanding the principles behind the evolving field of prompt engineering, we can craft better queries and engage more effectively with AI. These are insights we can all use to sharpen our own interactions with AI, even if we're not writing the code ourselves.


Key Points:

  1. Prompt engineering is a specialized field within AI development, focused on designing queries that improve AI interactions, notably with tools like ChatGPT or Bard.
  2. While the technical aspects of prompt engineering might be complex, users can adopt its strategies to craft better queries for more effective AI communication.
  3. Effective prompting often mirrors human interaction: ask how a person might approach the task, then break it down into parts the AI can work through.
  4. By guiding the AI through a step-by-step analysis of a problem, users can mimic human cognitive processes to arrive at more comprehensive conclusions.
  5. Techniques that encourage self-reflection involve the AI evaluating its responses and refining them, similar to how humans improve through self-reflection.
  6. Encouraging the AI to adopt various personas enhances its response quality, offering insights from multiple viewpoints.
  7. Meta-prompting handles complex tasks by decomposing them into subtasks, assigning each to a specialized prompt within the AI, and then integrating the results into a holistic solution, akin to team-based problem-solving. Users can mimic this approach with carefully crafted prompts.

Prompt engineering is a niche within AI development that is complex and technical, and often beyond the direct reach of most people. It involves specialized skills in programming and working with AI's underlying mechanisms, which means the advancements in this area aren't immediately accessible to everyone.

However, everyday users can still adopt the essence of these advanced strategies to improve how we interact with AI tools like ChatGPT or Bard. Understanding the principles behind this evolving field helps us craft better queries and engage more effectively with AI, even if we're not writing the code ourselves. It's about taking the high-level concepts developed by experts and applying them to our everyday digital conversations.

There's plenty we can do to enhance our AI interactions by developing better intuitions for the way a large language model responds to different kinds of prompt styles.

Here’s how we think about adapting prompt engineering research to everyday prompt crafting:

  1. Intuition: How would you ask a human? What’s the core intuition around the task that informs your strategy?
  2. Mental Model: What is your mental model for how to complete this task? Who, what, when, and how? What is the causal model and how do the various parts fit together?
  3. Prompt Structure: What levers can you adopt from the prompt design and apply strategically?

Here’s how we apply this with some important prompt engineering techniques: chain-of-thought reasoning, iterative self-feedback, role-playing, and meta-prompting.

Chain-of-Thought Reasoning


Intuition
When we interact with others, we often break down complex problems into simpler, more manageable parts, engaging in a step-by-step dialogue to explore each facet before reaching a conclusion. This approach not only aids in mutual understanding but also in generating comprehensive solutions. Translating this human-centric problem-solving method to language models can significantly enhance their reasoning and decision-making abilities.

Mental Model
Just as you might dissect a complex issue by discussing its various components, you can guide AI to emulate this process through chain-of-thought (CoT) prompting. By structuring queries in a way that encourages the model to break down a question into its constituent parts, you facilitate a more detailed and thorough exploration of the topic.

Prompt Structure
Example Task: Understanding the safety of self-driving cars

  • Regular Prompt: Are self-driving cars safe?
  • Chain-of-Thought-style Prompt:
    • Let's think step-by-step about whether self-driving cars are safe enough to be allowed on the road. What are the biggest safety concerns about self-driving cars?

    Then use the LLM's answers to prompt your follow-up questions:
    • How do these concerns compare to the safety risks of human drivers?
    • What technologies are being developed to address these safety concerns in self-driving cars?
    • Are these technologies mature enough to be considered reliable?
    • Beyond technology, what regulations and infrastructure changes would be needed to ensure the safe operation of self-driving cars?
    • Weighing all these factors, do you think the potential benefits of self-driving cars outweigh the remaining safety risks?

This CoT prompt guides the LLM through a logical, step-by-step analysis, mirroring human cognitive processes and human-human dialogue to arrive at a more comprehensive conclusion.

There are no hard and fast rules here: chain-of-thought works best as a complement to your own ability to parse a complicated query, using the step-by-step approach to explore a reasoning space.
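
If you work through an API rather than a chat window, the same structure can be scripted. The sketch below is illustrative rather than a prescribed implementation: it assumes the OpenAI Python SDK (v1+) and a particular model name as one concrete option, and the `ask_llm` helper is a name invented for this example; any chat-completion API works the same way. The key move is passing the running conversation back in so each step builds on the previous answers.

```python
# Minimal sketch of chain-of-thought-style prompting over an API.
# Assumes the OpenAI Python SDK (v1+); any chat-completion API works similarly.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask_llm(messages: list[dict]) -> str:
    """Send the running conversation and return the model's reply."""
    resp = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    return resp.choices[0].message.content

cot_steps = [
    "Let's think step-by-step about whether self-driving cars are safe enough "
    "to be allowed on the road. What are the biggest safety concerns?",
    "How do these concerns compare to the safety risks of human drivers?",
    "What technologies are being developed to address these concerns, "
    "and are they mature enough to be considered reliable?",
    "Weighing all these factors, do the potential benefits outweigh the remaining risks?",
]

messages = []  # running conversation so each step builds on the previous answers
for step in cot_steps:
    messages.append({"role": "user", "content": step})
    answer = ask_llm(messages)
    messages.append({"role": "assistant", "content": answer})
    print(f"\n> {step}\n{answer}")
```

In practice you would read each answer before deciding on the next step; the scripted version simply shows how the step-by-step structure carries over.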

Iterative Self-Feedback and Refinement Mechanisms


Intuition
The intuition behind iterative self-feedback in language models is analogous to how humans learn and improve through reflection and practice. Just as a person might assess their own work, identify areas for improvement, and then refine their approach, these mechanisms let the AI evaluate its responses and adjust them to improve accuracy and quality over time. This isn't perfect: if an AI starts out wrong, it's very hard to get it to self-correct, but the approach can reduce errors and help you guide a conversation.

Mental Model
In human collaboration, feedback plays a crucial role in refining ideas and improving outcomes. We often propose solutions, solicit feedback, and then refine our approach based on new insights. This iterative process can be mimicked in LLMs to enhance their problem-solving capabilities. Think of this process as being iterative and contrastive.

Prompt Structure
Example Task: Understanding the Pros and Cons of Renewable Energy
Prompts for Self-Feedback:
- Write a brief argument in favor of renewable energy.
- Now, critique your own argument for potential weaknesses or counterarguments.
- Refine your original argument considering the critique.

This process encourages the LLM to self-evaluate and refine its output, as a human might engage in self-reflection to improve their reasoning. It's cruder, but it performs a similar function: requiring the model to sample and generate explanations in multiple ways improves self-reflection and meta-cognition.
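
The same draft-critique-refine loop can be scripted. This is a minimal sketch, again assuming the OpenAI Python SDK as one concrete client; the `ask_llm` helper and model name are illustrative stand-ins for whatever API you use.

```python
# Minimal sketch of an iterative self-feedback loop: draft -> critique -> refine.
from openai import OpenAI

client = OpenAI()  # illustrative; any chat API works the same way

def ask_llm(prompt: str, history: list[dict]) -> str:
    """Append the prompt to the shared history and return the model's reply."""
    history.append({"role": "user", "content": prompt})
    resp = client.chat.completions.create(model="gpt-4o-mini", messages=history)
    answer = resp.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

history: list[dict] = []
draft = ask_llm("Write a brief argument in favor of renewable energy.", history)
critique = ask_llm("Critique your own argument for potential weaknesses or counterarguments.", history)
refined = ask_llm("Refine your original argument in light of that critique.", history)
print(refined)
```

Because every turn shares one history, the critique sees the draft and the refinement sees both, which is what makes the loop more than three unrelated questions.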

Role-Playing


Intuition
The intuition for role-playing when using language models is based on the idea that adopting different perspectives or "personas" can enhance a person's ability to imagine the world as someone else sees it. This translates directly to a language model's ability to generate diverse and nuanced responses. Just as actors immerse themselves in various roles to portray a wide range of characters convincingly, language models can simulate expertise or viewpoints they wouldn't naturally possess.

Role-playing leverages the intuition that adopting diverse perspectives can enrich problem-solving and creativity, akin to how humans often brainstorm in groups, taking on different roles to explore various facets of an issue. This approach enables models to tackle questions and problems from unique angles, enriching the conversation and potentially leading to more creative, informed, and comprehensive outcomes.

Mental Model
Humans often adopt different perspectives or "wear different hats" to explore various aspects of a problem. This role-playing can lead to deeper insights and more balanced solutions. You can prompt an LLM to simulate this by adopting different personas or expertise.

Imagine a team of experts from varied fields tackling a complex problem together, each contributing insights from their specialization. This collaborative effort leads to a more comprehensive understanding and innovative solutions. In LLMs, role-playing simulates this process, with the model adopting different personas or expertise to generate multifaceted responses.

Prompt Structure
Example Task: Understanding different knowledge bases and perspectives in analyzing deforestation

  • Prompt for Role-Playing:
    • As an environmental scientist, what is your view on the impact of deforestation?
    • As a local resident, what are the key issues associated with land use and the economy?
    • Now, as an economist, how do you assess the economic benefits versus the environmental costs of deforestation?
    • Finally, as a policy-maker, propose a balanced solution considering all of these perspectives.

By prompting the LLM to adopt diverse roles, it mimics the human approach of exploring different viewpoints, leading to a more rounded analysis of complex issues that you can use to expand your own thinking.
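
Here is a rough sketch of the same idea in code: each persona gets its own system prompt, and a final pass synthesizes the perspectives. As before, the OpenAI client, model name, and the `ask_as` helper are assumptions made for illustration, not the only way to do this.

```python
# Minimal sketch of role-playing prompts: one persona per call, then a synthesis pass.
from openai import OpenAI

client = OpenAI()  # illustrative; any chat API works the same way

def ask_as(persona: str, question: str) -> str:
    """Ask the question with a system prompt that sets the persona."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": f"Answer as {persona}."},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

personas = {
    "an environmental scientist": "What is your view on the impact of deforestation?",
    "a local resident": "What are the key issues around land use and the local economy?",
    "an economist": "How do the economic benefits of deforestation compare to its environmental costs?",
}

views = {p: ask_as(p, q) for p, q in personas.items()}

# Final pass: a 'policy-maker' persona weighs all the perspectives.
synthesis = ask_as(
    "a policy-maker",
    "Considering these perspectives, propose a balanced policy on deforestation:\n\n"
    + "\n\n".join(f"{p}: {v}" for p, v in views.items()),
)
print(synthesis)
```

Keeping each persona in its own call, then synthesizing, tends to preserve distinct viewpoints better than asking one prompt to play every role at once.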

Meta-Prompting


Intuition
Meta-prompting enhances language models by breaking complex tasks into smaller parts, assigning them to specialized "expert" models within the same LM, and integrating their outputs for a comprehensive answer. This mimics a team approach, where a coordinator (meta-model) directs experts to tackle specific aspects of a problem.

Drawing from human collaborative problem-solving, where diverse expertise leads to richer solutions, meta-prompting leverages this diversity within a single LM, enhancing its ability to tackle varied tasks more effectively.

Mental Model
The mental model for meta-prompting is similar to orchestrating a team where each member contributes a specific skill to solve a complex problem. It involves breaking down a task into sub-tasks, assigning these to different "expert" prompts within a language model, and then integrating their outputs for a holistic solution. This approach maximizes the model's capabilities by leveraging specialized knowledge from various parts of the model, similar to how a project manager delegates and then synthesizes work from a team of experts to achieve a goal.

Prompt Structure
Example Prompts
Task: Summarize a scientific paper.
- Meta-prompt: "Break down the paper into sections (introduction, methodology, results, discussion). Summarize each section with the expertise of a specialized researcher in the paper's field. Integrate these summaries into a coherent overview."

Task: Solve a complex math problem.
- Meta-prompt: "Identify the mathematical concepts involved (algebra, calculus). Assign each concept to an expert mathematician specializing in that area to solve the relevant part of the problem. Compile their solutions for the final answer."

Task: Develop a marketing strategy.
- Meta-prompt: "Divide the strategy into components (target audience analysis, channel selection, messaging). Consult an expert in each area to develop that component. Synthesize their inputs into a comprehensive marketing strategy."
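
The first example task above could be scripted roughly as follows. The section decomposition, the `ask` helper, and the OpenAI client are all illustrative assumptions; the point is the shape of the workflow: decompose, consult "experts", integrate.

```python
# Minimal sketch of meta-prompting: decompose, consult "experts", integrate.
from openai import OpenAI

client = OpenAI()  # illustrative; any chat API works the same way

def ask(system: str, user: str) -> str:
    """One call with a specialist system prompt."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": user},
        ],
    )
    return resp.choices[0].message.content

paper_text = "..."  # the paper you want summarized

# 1. Decompose the task into sub-tasks (here, the paper's sections).
sections = ["introduction", "methodology", "results", "discussion"]

# 2. Hand each sub-task to a specialized "expert" prompt.
section_summaries = {
    s: ask(
        f"You are a researcher specializing in this paper's field. Summarize the {s}.",
        paper_text,
    )
    for s in sections
}

# 3. Integrate the expert outputs into a single coherent answer.
overview = ask(
    "You are the coordinating editor. Integrate these section summaries into a coherent overview.",
    "\n\n".join(f"{s.title()}:\n{t}" for s, t in section_summaries.items()),
)
print(overview)
```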

By borrowing from human interaction models—breaking down complex issues, iterating through feedback, and exploring different perspectives—you can craft prompts that significantly enhance the reasoning capabilities of LLMs. The science of prompt engineering gives us a rational basis for adopting these strategies in our everyday use of AI, with justifiable expectations of better human-AI outcomes.


Further Reading

Gati V Aher, Rosa I. Arriaga, and Adam Tauman Kalai. 2023. Using Large Language Models to Simulate Multiple Humans and Replicate Human Subject Studies. In Proceedings of the 40th International Conference on Machine Learning (Proceedings of Machine Learning Research, Vol. 202), Andreas Krause, Emma Brunskill, Kyunghyun Cho, Barbara Engelhardt, Sivan Sabato, and Jonathan Scarlett (Eds.). PMLR, 337–371.

Simran Arora, Avanika Narayan, Mayee F Chen, Laurel Orr, Neel Guha, Kush Bhatia, Ines Chami, and Christopher Re. 2023. Ask Me Anything: A simple strategy for prompting language models. In The Eleventh International Conference on Learning Representations.

Maciej Besta, Nils Blach, Ales Kubicek, Robert Gerstenberger, Lukas Gianinazzi, Joanna Gajda, Tomasz Lehmann, Michal Podstawski, Hubert Niewiadomski, Piotr Nyczyk, et al. 2023. Graph of thoughts: Solving elaborate problems with large language models. arXiv preprint arXiv:2308.09687 (2023)

BIG-Bench authors. 2023. Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models. Transactions on Machine Learning Research (2023). https://openreview.net/forum?id=uyTL5Bvosj

Tianle Cai, Xuezhi Wang, Tengyu Ma, Xinyun Chen, and Denny Zhou. 2023. Large language models as tool makers. arXiv preprint arXiv:2305.17126 (2023).

Xinyun Chen, Maxwell Lin, Nathanael Schärli, and Denny Zhou. 2023. Teaching large language models to self-debug. arXiv preprint arXiv:2304.05128 (2023).

Ameet Deshpande, Vishvak Murahari, Tanmay Rajpurohit, Ashwin Kalyan, and Karthik Narasimhan. 2023. Toxicity in chatgpt: Analyzing persona-assigned language models. arXiv preprint arXiv:2304.05335 (2023).

Tushar Khot, Harsh Trivedi, Matthew Finlayson, Yao Fu, Kyle Richardson, Peter Clark, and Ashish Sabharwal. 2023. Decomposed Prompting: A Modular Approach for Solving Complex Tasks. In The Eleventh International Conference on Learning Representations.

Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, et al. 2023. Self-refine: Iterative refinement with self-feedback. arXiv preprint arXiv:2303.17651 (2023).

Aman Madaan and Amir Yazdanbakhsh. 2022. Text and patterns: For effective chain of thought, it takes two to tango. arXiv preprint arXiv:2209.07686 (2022).

Freda Shi, Mirac Suzgun, Markus Freitag, Xuezhi Wang, Suraj Srivats, Soroush Vosoughi, Hyung Won Chung, Yi Tay, Sebastian Ruder, Denny Zhou, Dipanjan Das, and Jason Wei. 2023. Language models are multilingual chain-of-thought reasoners. In The Eleventh International Conference on Learning Representations.

Noah Shinn, Beck Labash, and Ashwin Gopinath. 2023. Reflexion: an autonomous agent with dynamic memory and self-reflection. arXiv preprint arXiv:2303.11366 (2023).

Benfeng Xu, An Yang, Junyang Lin, Quan Wang, Chang Zhou, Yongdong Zhang, and Zhendong Mao. 2023. ExpertPrompting: Instructing Large Language Models to be Distinguished Experts. arXiv preprint arXiv:2305.14688 (2023).

Denny Zhou, Nathanael Schärli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans, Claire Cui, Olivier Bousquet, Quoc V Le, and Ed H. Chi. 2023. Least-to-Most Prompting Enables Complex Reasoning in Large Language Models. In The Eleventh International Conference on Learning Representations.
