Notes on Prompt Engineering (promptingguide.ai)

Elements of a prompt

  • Instruction - the specific task or instruction you want the model to perform
  • Context - external information or additional context that can steer the model to better responses
  • Input Data - the input or question we want a response for
  • Output Indicator - the type or format of the output

General tips

  • Simple is better
  • Separate the instruction from the context
  • Be specific: the more descriptive and detailed the prompt is, the better the results

Example:

Extract the names of places in the following text.

Desired format:
Place: <comma_separated_list_of_places>

Input: \"Although these developments are encouraging to researchers, much is still a mystery. “We often have a black box between the brain and the effect we see in the periphery,” says Henrique Veiga-Fernandes, a neuroimmunologist at the Champalimaud Centre for the Unknown in Lisbon. “If we want to use it in the therapeutic context, we actually need to understand the mechanism.\"\"
  • Avoid imprecision

Example:

# Imprecise
Explain the concept of prompt engineering. Keep the explanation short, only a few sentences, and don't be too descriptive.
# Better
Use 2-3 sentences to explain the concept of prompt engineering to a high school student.
  • Tell the model what to do, not what not to do

Techniques

Zero-shot prompting

<Instruction>
Q: <Question>?
A:

Good for simple tasks. Adding an instruction before the question can help the model better understand the task.
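A minimal sketch of this template in Python. The llm() helper here, and in the sketches below, is a placeholder rather than a real library call; wire it to whatever completion API you use. Function names are illustrative.

def llm(prompt: str) -> str:
    # Placeholder: replace with a call to your completion API of choice.
    raise NotImplementedError

def zero_shot(instruction: str, question: str) -> str:
    # Instruction first, then the bare question -- no demonstrations.
    return llm(f"{instruction}\nQ: {question}\nA:")

# e.g. zero_shot("Answer with a single word.", "What is the capital of France?")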

Few-shot prompting

Q: <Question>?
A: <Answer>
Q: <Question>?
A: <Answer>
Q: <Question>?
A: <Answer>
Q: <Question>?
A:

Good for tasks where zero-shot prompting is not enough; the demonstrations show the model the expected pattern.

Few-shot prompts enable in-context learning, which is the ability of language models to learn tasks given a few demonstrations.
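A sketch of a few-shot prompt builder, under the same llm() placeholder assumption:

def llm(prompt: str) -> str:
    # Placeholder: replace with a call to your completion API of choice.
    raise NotImplementedError

def few_shot(demos: list[tuple[str, str]], question: str) -> str:
    # Prepend (question, answer) demonstrations, then leave the last answer open.
    blocks = [f"Q: {q}\nA: {a}" for q, a in demos]
    blocks.append(f"Q: {question}\nA:")
    return llm("\n".join(blocks))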

Chain of thought prompting (CoT)

Adding intermediate reasoning steps helps the model work through complex tasks before answering.

This can be applied to zero-shot and few-shot prompting. In zero-shot CoT, the steps are triggered by the instruction (e.g. by adding something like "Let's think step by step"). In few-shot CoT, worked-out reasoning steps are included in the demonstrations.
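A sketch of zero-shot CoT with the usual llm() placeholder. The two-pass answer extraction ("Therefore, the answer is") follows the zero-shot CoT paper; treat the exact phrasing as an assumption.

def llm(prompt: str) -> str:
    # Placeholder: replace with a call to your completion API of choice.
    raise NotImplementedError

def zero_shot_cot(question: str) -> str:
    # Pass 1: the trigger phrase elicits intermediate reasoning steps.
    reasoning = llm(f"Q: {question}\nA: Let's think step by step.")
    # Pass 2: extract a short final answer from the generated reasoning.
    return llm(f"Q: {question}\nA: Let's think step by step. {reasoning}\n"
               f"Therefore, the answer is")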

Auto CoT

An approach that eliminates manual effort by leveraging an LLM with the "Let's think step by step" prompt to generate reasoning chains for the demonstrations one by one (i.e. automatically creating or choosing few-shot examples that include step-by-step reasoning).
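A simplified sketch: real Auto-CoT clusters the question pool (e.g. with sentence embeddings and k-means) and picks one representative per cluster for diversity; this version just samples questions, an assumption made to keep the sketch short.

import random

def llm(prompt: str) -> str:
    # Placeholder: replace with a call to your completion API of choice.
    raise NotImplementedError

def auto_cot_demos(questions: list[str], k: int = 4) -> str:
    # Generate a reasoning chain for each sampled question via zero-shot CoT.
    # (Assumes the pool has at least k questions.)
    demos = []
    for q in random.sample(questions, k):
        chain = llm(f"Q: {q}\nA: Let's think step by step.")
        demos.append(f"Q: {q}\nA: Let's think step by step. {chain}")
    return "\n\n".join(demos)

def answer_with_auto_cot(questions: list[str], question: str) -> str:
    # Use the auto-generated demonstrations as a few-shot CoT prompt.
    return llm(auto_cot_demos(questions) + f"\n\nQ: {question}\nA:")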

Self Consistency

Sample several outputs for the same prompt (with non-zero temperature) and take a majority vote over the final answers.
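A sketch of the voting loop; taking the last line as the answer is a crude assumption, and real setups parse an explicitly marked answer span.

from collections import Counter

def llm(prompt: str, temperature: float = 0.7) -> str:
    # Placeholder: replace with a sampling call to your completion API.
    raise NotImplementedError

def self_consistency(prompt: str, n: int = 10) -> str:
    # Sample n diverse reasoning paths, then vote over the final answers.
    answers = []
    for _ in range(n):
        output = llm(prompt, temperature=0.7)  # temperature > 0 for diversity
        answers.append(output.strip().splitlines()[-1])  # crude answer extraction
    return Counter(answers).most_common(1)[0][0]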

Generated knowledge prompting

Design a prompt that generates knowledge about the input instead of answering the question directly, make several generations, and then answer the question using the generated knowledge (sketched in code after the example below).

Question: Part of golf is trying to get a higher point total than others. Yes or No?

Knowledge: Golf is a precision club-and-ball sport in which competing players (or golfers) use many types of clubs to hit balls into a series of holes on a course using the fewest number of strokes. The goal is to complete the course with the lowest score, which is calculated by adding up the total number of strokes taken on each hole. The player with the lowest score wins the game.

Explain and Answer:
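A two-stage sketch of the template above, with the usual llm() placeholder; the prompt strings mirror the example but are otherwise assumptions.

def llm(prompt: str, temperature: float = 0.0) -> str:
    # Placeholder: replace with a call to your completion API of choice.
    raise NotImplementedError

def generated_knowledge_answer(question: str, m: int = 3) -> list[str]:
    # Stage 1: generate m knowledge statements about the input.
    knowledge = [llm(f"Input: {question}\nKnowledge:", temperature=0.7)
                 for _ in range(m)]
    # Stage 2: answer once per knowledge statement; aggregate downstream
    # (e.g. by majority vote, as in self-consistency above).
    return [llm(f"Question: {question}\n\nKnowledge: {k}\n\nExplain and Answer:")
            for k in knowledge]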

Tree of Thoughts (ToT)

ToT maintains a tree of thoughts, where thoughts represent coherent language sequences that serve as intermediate steps toward solving a problem. This approach enables an LM to self-evaluate the progress intermediate thoughts make towards solving a problem through a deliberate reasoning process. The LM's ability to generate and evaluate thoughts is then combined with search algorithms (e.g., breadth-first search and depth-first search) to enable systematic exploration of thoughts with lookahead and backtracking.
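A breadth-first sketch of the propose/evaluate loop. Everything here (prompt wording, the 0-10 scoring scheme, beam size) is an illustrative assumption on top of the llm() placeholder.

def llm(prompt: str, temperature: float = 0.0) -> str:
    # Placeholder: replace with a call to your completion API of choice.
    raise NotImplementedError

def tree_of_thoughts(problem: str, breadth: int = 3, depth: int = 3,
                     candidates: int = 5) -> str:
    def score(state: str) -> float:
        # Self-evaluation: ask the model to rate partial progress.
        reply = llm(f"Problem: {problem}\nSteps so far:\n{state}\n"
                    f"Rate the progress toward a solution from 0 to 10. "
                    f"Reply with a number only:")
        try:
            return float(reply.strip())
        except ValueError:
            return 0.0

    states = [""]  # each state is the chain of thoughts generated so far
    for _ in range(depth):
        proposals = []
        for state in states:
            for _ in range(candidates):
                thought = llm(f"Problem: {problem}\nSteps so far:\n{state}\n"
                              f"Propose the next step:", temperature=0.7)
                proposals.append(state + thought + "\n")
        # Keep the best `breadth` partial solutions (beam-style BFS).
        states = sorted(proposals, key=score, reverse=True)[:breadth]
    return states[0]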

Retrieval Augmented Generation (RAG)

RAG is a framework for combining retrieval and generation. It uses a retriever to find relevant passages from a large corpus, and then feeds the retrieved passages as context into a generator. The generator then generates a response based on the context.
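A toy end-to-end sketch: the retriever below ranks passages by word overlap, a stand-in assumption for the dense-embedding search real RAG systems use; llm() is the usual placeholder.

def llm(prompt: str) -> str:
    # Placeholder: replace with a call to your completion API of choice.
    raise NotImplementedError

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    # Toy retriever: rank passages by word overlap with the query.
    words = set(query.lower().split())
    return sorted(corpus, key=lambda p: len(words & set(p.lower().split())),
                  reverse=True)[:k]

def rag_answer(query: str, corpus: list[str]) -> str:
    # Stuff the retrieved passages into the prompt as context.
    context = "\n\n".join(retrieve(query, corpus))
    return llm(f"Answer the question based on the context below.\n\n"
               f"Context: {context}\n\nQuestion: {query}\n\nAnswer:")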

Automatic Reasoning and Tool-use (ART)

Earlier approaches that combine reasoning with tools typically require hand-crafting task-specific demonstrations and carefully scripted interleaving of model generations with tool use. ART removes that manual work: given a new task, it selects demonstrations of multi-step reasoning and tool use from a task library, pauses generation whenever an external tool is called, and integrates the tool's output before resuming generation.
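A sketch of the task-library idea. The TASK_LIBRARY contents and the dictionary lookup standing in for ART's similarity-based demonstration selection are both assumptions; the pause-at-tool-call mechanics look like the ReAct loop sketched further below.

def llm(prompt: str) -> str:
    # Placeholder: replace with a call to your completion API of choice.
    raise NotImplementedError

# Hypothetical task library: demonstrations of reasoning + tool use per skill.
TASK_LIBRARY = {
    "arithmetic": ("Q: What is 16% of 250?\n"
                   "Thought: compute 0.16 * 250 with a calculator.\n"
                   "Tool: calculator[0.16 * 250]\n"
                   "Result: 40.0\n"
                   "A: 40\n"),
}

def art_prompt(task: str, question: str) -> str:
    # ART selects related demonstrations automatically; this lookup is a
    # simplification of that selection step.
    return TASK_LIBRARY.get(task, "") + f"\nQ: {question}\nThought:"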

ReAct Prompting

Generating reasoning traces allows the model to induce, track, and update action plans, and even handle exceptions. The action step lets the model interface with and gather information from external sources such as knowledge bases or environments. The ReAct framework can thus allow LLMs to interact with external tools to retrieve additional information, leading to more reliable and factual responses. Results show that ReAct can outperform several state-of-the-art baselines on language and decision-making tasks, and it also improves the human interpretability and trustworthiness of LLMs. Overall, the authors found that the best approach combines ReAct with chain-of-thought (CoT), allowing use of both internal knowledge and external information obtained during reasoning. A sketch of the driver loop follows the example below.

Example:

Question: What is the elevation range for the area that the eastern sector of the Colorado orogeny extends into?
Thought 1: I need to search Colorado orogeny, find the area that the eastern sector of the Colorado orogeny extends into, then find the elevation range of the area.
Action 1: Search[Colorado orogeny]
Observation 1: The Colorado orogeny was an episode of mountain building (an orogeny) in Colorado and surrounding areas.
Thought 2: It does not mention the eastern sector. So I need to look up eastern sector.
Action 2: Lookup[eastern sector]
Observation 2: (Result 1 / 1) The eastern sector extends into the High Plains and is called the Central Plains orogeny.
Thought 3: The eastern sector of Colorado orogeny extends into the High Plains. So I need to search High Plains and find its elevation range.
Action 3: Search[High Plains]
Observation 3: High Plains refers to one of two distinct land regions
Thought 4: I need to instead search High Plains (United States).
Action 4: Search[High Plains (United States)]
Observation 4: The High Plains are a subregion of the Great Plains. From east to west, the High Plains rise in elevation from around 1,800 to 7,000 ft (550 to 2,130 m).[3]
Thought 5: High Plains rise in elevation from around 1,800 to 7,000 ft, so the answer is 1,800 to 7,000 ft.
Action 5: Finish[1,800 to 7,000 ft]
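A sketch of the driver loop behind a trace like the one above. It assumes an llm() placeholder that honors stop sequences and a tools dict such as {"Search": ..., "Lookup": ...}; the Action-parsing regex and step limit are illustrative assumptions.

import re
from typing import Callable

def llm(prompt: str, stop: list[str] | None = None) -> str:
    # Placeholder: replace with a completion call that honors stop sequences.
    raise NotImplementedError

def react(question: str, tools: dict[str, Callable[[str], str]],
          max_steps: int = 8) -> str:
    prompt = f"Question: {question}\n"
    for step in range(1, max_steps + 1):
        # Let the model emit one Thought + Action, then pause before it
        # hallucinates its own Observation.
        prompt += llm(prompt, stop=[f"Observation {step}:"])
        match = re.search(r"Action \d+:?\s*(\w+)\[(.*?)\]\s*$", prompt)
        if match is None:
            break  # no parseable action; give up
        tool, arg = match.groups()
        if tool == "Finish":
            return arg  # the model declared a final answer
        # Run the tool and splice the result back in as an Observation.
        prompt += f"\nObservation {step}: {tools[tool](arg)}\n"
    return prompt  # hit the step limit without Finish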

Prompt Library

QA

Answer the question based on the context below. Keep the answer short and concise. Respond "Unsure about answer" if not sure about the answer.

Context: Teplizumab traces its roots to a New Jersey drug company called Ortho Pharmaceutical. There, scientists generated an early version of the antibody, dubbed OKT3. Originally sourced from mice, the molecule was able to bind to the surface of T cells and limit their cell-killing potential. In 1986, it was approved to help prevent organ rejection after kidney transplants, making it the first therapeutic antibody allowed for human use.

Question: What was OKT3 originally sourced from?

Answer:

Reasoning

The odd numbers in this group add up to an even number: 15, 32, 5, 13, 82, 7, 1.

Solve by breaking the problem into steps. First, identify the odd numbers, add them, and indicate whether the result is odd or even.
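For reference, the claim in this prompt is false, which a quick check confirms:

nums = [15, 32, 5, 13, 82, 7, 1]
odds = [n for n in nums if n % 2 == 1]
total = sum(odds)  # 15 + 5 + 13 + 7 + 1 = 41
print(odds, total, "odd" if total % 2 else "even")
# -> [15, 5, 13, 7, 1] 41 odd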