Flashcards based on lecture notes covering various prompt engineering techniques for language models.
Direct Prompting
Simple, clear instructions without role-setting.
Direct Prompting Reference
Radford et al., 2019, 'Language Models are Unsupervised Multitask Learners' (the original GPT-2 paper)
Few-Shot Prompting
Providing a few examples to guide the model.
Few-Shot Prompting Reference
Brown et al., 2020, Language Models are Few-Shot Learners
Details of Few-Shot Prompting
expected response format, context, and style, adaptability to complex and diverse tasks (without task-specific training data)
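The idea above can be sketched as a simple prompt builder. The sentiment-classification task and example reviews below are illustrative, not from the lecture:

```python
def build_few_shot_prompt(examples, query):
    """Assemble a few-shot prompt: labeled examples followed by the new query."""
    blocks = [f"Review: {text}\nSentiment: {label}" for text, label in examples]
    blocks.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(blocks)

# A couple of examples is often enough to convey format, context, and style.
examples = [
    ("The plot was gripping from start to finish.", "positive"),
    ("I walked out halfway through.", "negative"),
]
prompt = build_few_shot_prompt(examples, "A beautifully shot but hollow film.")
print(prompt)
```

The trailing "Sentiment:" cues the model to continue in the demonstrated format.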
Zero-shot Prompting
Asking the model to perform a task with no examples; typically applied to more subjective or open-ended tasks than direct prompting.
Zero-shot Prompting Reference
Brown et al., 2020
Role-Based Prompting
Instructing the model to 'act as' a specific persona.
Role-Based Prompting Reference
Usage began in GPT-3 applications, Brown et al., 2020
Step-by-step Prompting
Breaking down a complex task into simpler, ordered instructions, guiding the model to complete each part sequentially.
Approach of Step-by-step Prompting
Explicitly structure each part of the task to make it easier for the model to handle in smaller pieces, often directing the model to complete one step before moving on to the next.
Typical Use Case of Step-by-step Prompting
Procedural tasks where each step is clearly defined
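A minimal sketch of a step-by-step prompt builder; the summarization task and its sub-steps are illustrative placeholders:

```python
def build_step_prompt(task, steps):
    """Number each sub-task and instruct the model to finish them in order."""
    numbered = "\n".join(f"{i}. {s}" for i, s in enumerate(steps, 1))
    return (f"{task}\nComplete each step in order, "
            f"finishing one before starting the next:\n{numbered}")

prompt = build_step_prompt(
    "Summarize the attached report.",
    ["List the report's main sections.",
     "Summarize each section in one sentence.",
     "Combine the sentences into a single paragraph."],
)
print(prompt)
```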
Chain of Thought (CoT) Prompting
Guiding the model to explain its reasoning step-by-step.
Chain of Thought (CoT) Prompting Reference
Wei et al., 2022, 'Chain-of-Thought Prompting Elicits Reasoning in Large Language Models'
Benefits of CoT
Improves accuracy on multi-step reasoning and math problems.
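A minimal sketch of turning a plain question into a CoT prompt; the trigger phrase is one commonly used formulation, and the question is illustrative:

```python
def to_cot_prompt(question: str) -> str:
    """Append a reasoning trigger so the model explains its steps before answering."""
    return f"Q: {question}\nA: Let's think step by step."

prompt = to_cot_prompt("If a train travels 60 km in 45 minutes, what is its speed in km/h?")
print(prompt)
```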
Program-of-Thoughts Prompting
Having the model generate Python code and delegating the actual computation to an interpreter.
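A sketch of the delegation step: suppose the model returned the snippet below when asked for 17% of 2,350; the host program executes the code instead of trusting the model's arithmetic. The snippet is a hypothetical stand-in for real model output:

```python
# Hypothetical model output from a program-of-thoughts prompt
# asking for 17% of 2350.
model_output = "result = 2350 * 0.17"

namespace = {}
exec(model_output, namespace)   # delegate the computation to Python
answer = namespace["result"]
print(answer)
```

In practice the generated code should be sandboxed rather than passed to a bare `exec`.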
Chain-of-Symbol Prompting
Using symbols and formulas instead of natural language for intermediate reasoning steps.
Structured Chain-of-Thought Prompting
Organizing the reasoning as bullet points, numbered lists, or labeled sections.
Tree-of-Thoughts Prompting
Allowing the reasoning to branch into multiple candidate paths, then evaluating the branches and selecting the best one.
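The branch-evaluate-select loop can be sketched with stubbed functions; in a real system both `propose_branches` and `score` would call the model, and the scoring rule here is a hypothetical placeholder:

```python
def propose_branches(state):
    # Stub: a real implementation would ask the model for candidate next thoughts.
    return [state + " -> A", state + " -> B", state + " -> C"]

def score(branch):
    # Stub evaluator: arbitrarily prefers branches ending in "B" for illustration.
    return 1.0 if branch.endswith("B") else 0.2

def tree_of_thoughts(root, depth=2):
    """Expand, evaluate, and keep the best-scoring branch at each level."""
    state = root
    for _ in range(depth):
        state = max(propose_branches(state), key=score)
    return state

best = tree_of_thoughts("start")
print(best)  # start -> B -> B
```

This greedy variant keeps one branch per level; fuller versions keep a beam of branches or backtrack.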
Instruction Tuning
Fine-tuning on diverse prompts phrased as instructions; reduces need for prompt engineering.
Instruction Tuning Reference
Chung et al., 2022, 'Scaling Instruction-Finetuned Language Models'
Self-Consistency Prompting
Sampling several reasoning paths for the same question and taking the most consistent (majority) answer for reliability.
Self-Consistency Prompting Reference
Wang et al., 2022, 'Self-Consistency Improves Chain of Thought Reasoning in Language Models'
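The aggregation step reduces to a majority vote over sampled answers; the `samples` list below is a deterministic stand-in for answers sampled from the model at nonzero temperature:

```python
from collections import Counter

def self_consistent_answer(samples):
    """Return the most common answer among the sampled responses."""
    return Counter(samples).most_common(1)[0][0]

# Stand-in for five answers sampled from the model for the same question.
samples = ["42", "42", "41", "42", "40"]
print(self_consistent_answer(samples))  # 42
```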
ReACT (Reasoning + Acting) Prompting
Combining reasoning and action steps within prompts.
ReACT (Reasoning + Acting) Prompting Reference
Yao et al., 2022, 'ReAct: Synergizing Reasoning and Acting in Language Models'
Reasoning Step (ReACT)
Logically thinking through the task, breaking it down and analyzing each part to generate a structured thought process.
Action Step (ReACT)
Based on the reasoning, the model performs a specific action, such as providing an answer, making a choice, or executing a command.
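The reasoning/action alternation can be sketched as a loop over (thought, action, argument) triples. The calculator tool and the scripted trace below are illustrative stand-ins; in a real ReAct system the model emits each thought/action pair and receives the observation back:

```python
# Toy tool that the action step can invoke.
TOOLS = {"calculator": lambda expr: str(eval(expr))}

def run_react(trace):
    """Execute a scripted (thought, action, argument) trace."""
    observations = []
    for thought, action, arg in trace:
        if action in TOOLS:
            observations.append(TOOLS[action](arg))   # act, record observation
        elif action == "finish":
            return arg                                 # final answer
    return observations[-1] if observations else None

trace = [
    ("I need the total cost of 3 items at $4.50.", "calculator", "3 * 4.50"),
    ("The observation gives the answer.", "finish", "13.5"),
]
print(run_react(trace))  # 13.5
```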
Reinforcement Learning from Human Feedback (RLHF)
Using human feedback to refine model responses; important for alignment.
Reinforcement Learning from Human Feedback (RLHF) Reference
Ouyang et al., 2022, 'Training Language Models to Follow Instructions with Human Feedback'
Alignment
Intent/Goal/Purpose Matching, Ethical and Safety Standards, Contextual Appropriateness
Example Prompt w/ Structured Input
Please provide details of the event in the following format: { "Event": "Event Name", "Date": "Event Date", "Location": "Event Location" }
Formats for Structured Input
Key-value pairs, CSV, JSON, XML, RDF
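When the prompt requests JSON, as in the example above, the host code can parse and validate the response before using it; `model_response` below is a hypothetical stand-in for what the model returns:

```python
import json

# Hypothetical model response to the structured-input prompt above.
model_response = '{"Event": "Tech Conference", "Date": "2024-05-10", "Location": "Berlin"}'

event = json.loads(model_response)
assert {"Event", "Date", "Location"} <= event.keys()  # validate expected fields
print(event["Location"])  # Berlin
```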
Data Augmentation with Synthetic Prompting
Generating synthetic data to improve generalization
Data Augmentation with Synthetic Prompting Reference
Gao et al., 2021, 'Making Pre-trained Language Models Better Few-shot Learners'
Synthetic Prompts Example
'List some advantages of regular physical activity'; 'What positive effects does exercise have on health?'; 'Why is exercising good for you?'
Active Prompt Adjustment
User dynamically refining prompts based on responses
Error Identification and Correction Prompting
Asking the model to identify and correct errors.
Error Identification and Correction Prompting Reference
PaLM models; Chowdhery et al., 2022, 'PaLM: Scaling Language Modeling with Pathways'
Confidence Scoring
PaLM uses confidence scoring mechanisms to gauge its certainty about a given response
Step-by-Step Verification
For multi-step reasoning tasks, PaLM can evaluate each step’s validity before proceeding to the next.
Redundant Answer Generation
PaLM sometimes generates multiple responses for the same query and evaluates these responses for consistency.