Prompt Engineering Techniques

Description

Flashcards based on lecture notes covering various prompt engineering techniques for language models.

41 Terms

1. Direct Prompting

Simple, clear instructions without role-setting.

2. Direct Prompting Reference

Radford et al., 2019, 'Language Models are Unsupervised Multitask Learners' (the original GPT-2 paper)

3. Few-Shot Prompting

Providing a few examples to guide the model.

4. Few-Shot Prompting Reference

Brown et al., 2020, 'Language Models are Few-Shot Learners'

5. Details of Few-Shot Prompting

Examples demonstrate the expected response format, context, and style, making the model adaptable to complex and diverse tasks without task-specific training data.
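
As a quick illustration, a few-shot prompt might look like the sketch below (the sentiment task, the prompt wording, and the llm() helper are illustrative assumptions, not from the lecture):

# Two labeled examples establish the expected format and style.
def llm(prompt: str) -> str:        # hypothetical stand-in for a model call
    raise NotImplementedError("replace with a real completion API")

few_shot_prompt = (
    "Classify the sentiment of each review as Positive or Negative.\n\n"
    "Review: The battery lasts all day.\n"
    "Sentiment: Positive\n\n"
    "Review: The screen cracked within a week.\n"
    "Sentiment: Negative\n\n"
    "Review: Setup was quick and painless.\n"
    "Sentiment:"
)
# llm(few_shot_prompt) should complete with "Positive".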

6. Zero-shot Prompting

Asking the model to perform a task with no examples; suited to more open-ended or subjective tasks than direct prompting.
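
For contrast, a zero-shot version of the same illustrative task carries only the instruction, with no examples:

# The instruction alone specifies the task; no demonstrations are given.
zero_shot_prompt = (
    "Classify the sentiment of this review as Positive or Negative.\n\n"
    "Review: Setup was quick and painless.\n"
    "Sentiment:"
)
print(zero_shot_prompt)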

7. Zero-shot Prompting Reference

Brown et al., 2020, 'Language Models are Few-Shot Learners'

8. Role-Based Prompting

Instructing the model to 'act as' a specific persona.
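
A role-based prompt simply prepends the persona before the task; the code-reviewer persona below is an illustrative assumption:

role_prompt = (
    "Act as a senior Python code reviewer.\n"
    "Point out bugs in the function below and suggest a fix.\n\n"
    "def mean(xs):\n"
    "    return sum(xs) / len(xs)\n"  # planted bug: divides by zero on empty input
)
print(role_prompt)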

9. Role-Based Prompting Reference

Usage began with GPT-3 applications (Brown et al., 2020).

10. Step-by-step Prompting

Breaking down a complex task into simpler, ordered instructions, guiding the model to complete each part sequentially.

11. Approach of Step-by-step Prompting

Explicitly structure each part of the task to make it easier for the model to handle in smaller pieces, often directing the model to complete one step before moving on to the next.

12. Typical Use Case of Step-by-step Prompting

Procedural tasks where each step is clearly defined.
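
A step-by-step prompt makes the ordering explicit; the summarization task below is an illustrative assumption:

# Each part of the task is spelled out as an ordered step.
step_prompt = (
    "Summarize the article below in three steps.\n"
    "Step 1: State the main topic in one sentence.\n"
    "Step 2: List two supporting facts.\n"
    "Step 3: Combine Steps 1 and 2 into a two-sentence summary.\n"
    "Complete each step before moving on to the next.\n\n"
    "Article: <article text here>\n"
)
print(step_prompt)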

13. Chain of Thought (CoT) Prompting

Guiding the model to explain its reasoning step-by-step.

14. Chain of Thought (CoT) Prompting Reference

Wei et al., 2022, 'Chain-of-Thought Prompting Elicits Reasoning in Large Language Models'

15. Benefits of CoT

Improves accuracy on reasoning and math problems by making intermediate steps explicit.
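
A sketch in the style of Wei et al.'s worked exemplars: one question is answered with its reasoning written out, which cues the model to reason the same way on the next question:

cot_prompt = (
    "Q: Roger has 5 tennis balls. He buys 2 more cans of 3 balls each.\n"
    "How many tennis balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 balls is 6 balls.\n"
    "5 + 6 = 11. The answer is 11.\n\n"
    "Q: A cafeteria had 23 apples. They used 20 and bought 6 more.\n"
    "How many apples do they have?\n"
    "A:"
)
# The model is expected to write out its reasoning first,
# then conclude "The answer is 9." (23 - 20 + 6 = 9).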

16. Program-of-Thoughts

The model generates Python code and delegates computations to an external interpreter.
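
A minimal sketch of the idea, with a simulated model reply (the question and the prompt wording are illustrative assumptions):

pot_prompt = (
    "Write Python code that computes the answer; do not compute it yourself.\n"
    "The code will be executed separately.\n\n"
    "Question: What is the sum of the squares of the integers from 1 to 100?\n"
)
model_reply = "print(sum(i * i for i in range(1, 101)))"  # simulated reply
exec(model_reply)  # prints 338350; never exec untrusted model output in practice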

17. Chain-of-Symbol

Expresses intermediate reasoning with compact symbols and formulas rather than verbose natural language.

18. Structured Chain-of-Thought

Organizes the reasoning into bullet points, numbered lists, or labeled sections.

19. Tree-of-Thoughts

Allows the reasoning to branch into multiple candidate paths, then evaluates the branches and selects among them.
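
A toy sketch of the branch/evaluate/select loop; propose() and score() stand in for model calls and are pure assumptions:

# Frontier search over partial "thoughts": branch each one, score all
# candidates, and keep only the most promising for the next round.
def propose(thought):
    return [thought + "A", thought + "B"]   # branch a partial solution two ways

def score(thought):
    return thought.count("A")               # toy evaluator: prefer more "A"s

frontier = [""]
for _depth in range(3):
    candidates = [t for partial in frontier for t in propose(partial)]
    frontier = sorted(candidates, key=score, reverse=True)[:2]   # selection
print(frontier)  # -> ['AAA', 'AAB']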

20. Instruction Tuning

Fine-tuning on diverse prompts phrased as instructions; reduces need for prompt engineering.

21. Instruction Tuning Reference

Chung et al., 2022, 'Scaling Instruction-Finetuned Language Models'

22. Self-Consistency Prompting

Sampling multiple answers (reasoning paths) for the same question and keeping the most consistent one for reliability.
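
A minimal sketch: sample several answers and keep the majority; sample_answer() simulates a model called with nonzero temperature:

from collections import Counter
import random

def sample_answer(question: str) -> str:
    return random.choice(["11", "11", "11", "12"])   # simulated noisy samples

answers = [sample_answer("How many tennis balls ...?") for _ in range(9)]
final_answer, votes = Counter(answers).most_common(1)[0]
print(final_answer, votes)   # the majority answer, usually "11"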

23. Self-Consistency Prompting Reference

Wang et al., 2022, 'Self-Consistency Improves Chain of Thought Reasoning in Language Models'

24. ReACT (Reasoning + Acting) Prompting

Combining reasoning and action steps within prompts.

25. ReACT (Reasoning + Acting) Prompting Reference

Yao et al., 2022, 'ReAct: Synergizing Reasoning and Acting in Language Models'

26. Reasoning Step (ReACT)

Logically thinking through the task, breaking it down and analyzing each part to generate a structured thought process.

27. Action Step (ReACT)

Based on the reasoning, the model performs a specific action, such as providing an answer, making a choice, or executing a command.
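
A toy ReAct trace showing how Thought / Action / Observation lines interleave; lookup() is a stand-in for a real search tool, and the example question is an illustrative assumption:

def lookup(entity: str) -> str:
    facts = {"Colorado orogeny": "a mountain-building episode in Colorado"}
    return facts.get(entity, "no result")

trace = [
    "Thought: I need to find out what the Colorado orogeny is.",
    "Action: Search[Colorado orogeny]",
    f"Observation: {lookup('Colorado orogeny')}",   # tool result fed back in
    "Thought: I have enough information to answer.",
    "Action: Finish[a mountain-building episode in Colorado]",
]
print("\n".join(trace))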

28. Reinforcement Learning from Human Feedback (RLHF)

Using human feedback to refine model responses; important for alignment.

29. Reinforcement Learning from Human Feedback (RLHF) Reference

Ouyang et al., 2022, 'Training Language Models to Follow Instructions with Human Feedback'

30. Alignment

Matching the user's intent/goal/purpose, meeting ethical and safety standards, and maintaining contextual appropriateness.

31. Example Prompt w/ Structured Input

Please provide details of the event in the following format: {"Event": "Event Name", "Date": "Event Date", "Location": "Event Location"}

32. Formats for Structured Input

Key-value pairs, CSV, JSON, XML, RDF
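
A sketch of requesting and parsing structured output in Python; the model reply here is simulated:

import json

prompt = (
    "Please provide details of the event in the following format:\n"
    '{"Event": "Event Name", "Date": "Event Date", "Location": "Event Location"}'
)
reply = '{"Event": "PyCon", "Date": "2025-05-14", "Location": "Pittsburgh"}'  # simulated
event = json.loads(reply)   # fails loudly if the reply drifts from the schema
print(event["Event"], event["Date"], event["Location"])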

33. Data Augmentation with Synthetic Prompting

Generating synthetic data to improve generalization.

34. Data Augmentation with Synthetic Prompting Reference

Gao et al., 2021, 'Making Pre-trained Language Models Better Few-shot Learners'

35. Synthetic Prompts Example

'List some advantages of regular physical activity', 'What positive effects does exercise have on health?', 'Why is exercising good for you?'
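
A sketch of turning such paraphrases into synthetic training pairs; the reference answer is an illustrative assumption:

# Pair each paraphrased prompt with one reference answer to grow the dataset.
paraphrases = [
    "List some advantages of regular physical activity.",
    "What positive effects does exercise have on health?",
    "Why is exercising good for you?",
]
answer = "It improves cardiovascular health, mood, and sleep."
dataset = [{"prompt": p, "completion": answer} for p in paraphrases]
print(len(dataset), "synthetic examples")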

36. Active Prompt Adjustment

The user dynamically refines prompts based on the model's responses.

37. Error Identification and Correction Prompting

Asking the model to identify and correct errors.
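
One way to phrase such a prompt (the statement and its planted errors are illustrative assumptions):

error_prompt = (
    "Identify and correct any errors in the statement below. "
    "Explain each correction briefly.\n\n"
    "Statement: The Great Wall of China is visible from the Moon, "
    "and it was completed in a single decade.\n"
)
print(error_prompt)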

38. Error Identification and Correction Prompting Reference

PaLM models; Chowdhery et al., 2022, 'PaLM: Scaling Language Modeling with Pathways'

39. Confidence Scoring

PaLM uses confidence scoring mechanisms to gauge its certainty about a given response.

40. Step-by-Step Verification

For multi-step reasoning tasks, PaLM can evaluate each step’s validity before proceeding to the next.

41. Redundant Answer Generation

PaLM sometimes generates multiple responses for the same query and evaluates these responses for consistency.