Unit-5 Generative Models for Text: Key Vocabulary

Description

Vocabulary flashcards covering essential terms from the Unit-5 lecture on Generative Models for Text, Transformers, BERT, GPT, prompt engineering, and LLM issues.

51 Terms

1. Large Language Model (LLM)

A deep-learning system, usually Transformer-based, trained on massive text corpora to understand, generate, and manipulate human language across many tasks.

2. Language Model

An AI model that assigns probabilities to sequences of words, enabling applications like text prediction, translation, summarization, and Q&A.

3. Statistical Language Model (SLM)

Early language model that estimates word sequence probabilities with statistical methods such as n-grams.

4. Neural Network-based Language Model

Model that uses neural networks (e.g., RNNs, LSTMs) to capture complex word dependencies beyond simple statistics.

5. Transformer-based Language Model

State-of-the-art model that employs the Transformer architecture and self-attention to process sequences in parallel and capture long-range context.

6. Recurrent Neural Network (RNN)

Neural architecture that processes sequences step-by-step, carrying hidden states but struggling with long-term dependencies.

7. Long Short-Term Memory (LSTM)

RNN variant with gating mechanisms designed to remember information over longer sequences.

8. Transformer

Neural architecture built on self-attention and feed-forward layers, enabling parallel processing of tokens and superior context handling.

9. GPT (Generative Pre-trained Transformer)

Decoder-only Transformer model trained with causal language modeling to predict the next token and generate coherent text.

10. BERT (Bidirectional Encoder Representations from Transformers)

Encoder-only Transformer pre-trained with Masked Language Modeling and Next Sentence Prediction for deep bidirectional context understanding.

11. Token

The basic unit (word, sub-word, or character) on which a language model operates after text is tokenized.

12. Embedding

Dense vector representation of a token that captures semantic and syntactic information in continuous space.

13. Frequency-based Embedding

Vector derived from word occurrence statistics, e.g., Bag of Words, TF-IDF, or co-occurrence matrices.

14. Prediction-based Embedding

Vector learned by training a model to predict a word from its context (or vice versa); examples include Word2Vec and GloVe.

15. Contextualized Word Embedding

Dynamic vector that changes with sentence context, produced by models like ELMo, BERT, or GPT.

16. Word2Vec

Prediction-based embedding model using CBOW or Skip-gram to learn word vectors from local context.

17. GloVe (Global Vectors)

Embedding method that factorizes a global word co-occurrence matrix to capture statistical information.

18. FastText

Embedding model that represents words as bags of character n-grams, improving handling of rare or morphologically rich words.

19. ELMo (Embeddings from Language Models)

Deep bi-directional LSTM model that produces context-sensitive word embeddings by considering entire sentence context.

20. Bag of Words (Count Vectorization)

Frequency-based representation where each document is a vector of raw word counts.
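
For illustration, a minimal sketch of count vectorization, assuming scikit-learn is available; the two-sentence corpus is a made-up toy example.

```python
# Minimal Bag-of-Words sketch using scikit-learn (illustrative toy corpus).
from sklearn.feature_extraction.text import CountVectorizer

corpus = [
    "transformers process tokens in parallel",
    "rnns process tokens step by step",
]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(corpus)        # sparse document-term matrix

print(vectorizer.get_feature_names_out())   # vocabulary order of the columns
print(X.toarray())                          # raw word counts per document
```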

21. TF-IDF

Weighting scheme that scales word counts by inverse document frequency to emphasize informative terms.
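
A common formulation weights each term as tf(t, d) × log(N / df(t)). Below is a minimal sketch with scikit-learn's TfidfVectorizer (assumed available); note that scikit-learn adds smoothing and length normalization, so its weights differ slightly from the textbook formula.

```python
# TF-IDF sketch with scikit-learn on a tiny illustrative corpus.
from sklearn.feature_extraction.text import TfidfVectorizer

corpus = [
    "the cat sat on the mat",
    "the dog sat on the log",
]

tfidf = TfidfVectorizer()
X = tfidf.fit_transform(corpus)

print(tfidf.get_feature_names_out())
print(X.toarray().round(2))   # terms unique to one document (cat, dog, mat, log) get higher idf
```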

22. Co-occurrence Matrix

Square matrix counting how often words appear near each other within a selected window size.
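
A minimal plain-Python sketch of symmetric co-occurrence counting with a window of 2; the sentence and window size are arbitrary illustrations.

```python
# Toy symmetric co-occurrence counts with a context window of 2.
from collections import defaultdict

tokens = "the quick brown fox jumps over the lazy dog".split()
window = 2

counts = defaultdict(int)
for i, word in enumerate(tokens):
    for j in range(i + 1, min(i + window + 1, len(tokens))):
        pair = tuple(sorted((word, tokens[j])))   # symmetric: (a, b) == (b, a)
        counts[pair] += 1

print(counts[("brown", "quick")])   # times the pair co-occurs within the window
```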

23. Continuous Bag of Words (CBOW)

Word2Vec variant that predicts a target word from its surrounding context words.

24. Skip-gram

Word2Vec variant that predicts surrounding context words from a single target word.
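
A minimal sketch contrasting the two Word2Vec variants with gensim (assumed installed): sg=1 selects Skip-gram, sg=0 selects CBOW; the toy sentences are illustrative only.

```python
# Word2Vec sketch with gensim; sg=1 trains Skip-gram, sg=0 trains CBOW.
from gensim.models import Word2Vec

sentences = [
    ["transformers", "use", "self", "attention"],
    ["rnns", "process", "tokens", "sequentially"],
    ["attention", "captures", "long", "range", "context"],
]

model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=1)

vec = model.wv["attention"]                 # dense embedding for one token
print(model.wv.most_similar("attention"))   # nearest neighbours in vector space
```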

25. Self-Attention

Mechanism where each token attends to all other tokens in the same sequence to compute contextual representations.

26. Cross-Attention

Decoder mechanism that attends to encoder outputs, aligning generated tokens with source inputs.

27. Query-Key-Value (QKV)

Triplet of vectors used in attention: queries are matched against keys to produce attention weights, which are then applied to the values.
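
The standard scaled dot-product form is Attention(Q, K, V) = softmax(QKᵀ / √d_k) V. A minimal NumPy sketch with toy random matrices:

```python
# Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V, on toy shapes.
import numpy as np

def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # query-key similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V                              # weighted sum of values

rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))   # 4 tokens, dimension 8
print(attention(Q, K, V).shape)                          # (4, 8)
```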

28. Multi-Head Attention

Parallel set of attention heads allowing a model to capture different relationship types simultaneously.
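
A minimal PyTorch sketch (assuming torch is available) showing self-attention with 4 heads over a toy batch; the dimensions are arbitrary.

```python
# Multi-head self-attention with PyTorch (embed_dim split across 4 heads).
import torch
from torch import nn

mha = nn.MultiheadAttention(embed_dim=64, num_heads=4, batch_first=True)

x = torch.randn(2, 10, 64)          # batch of 2 sequences, 10 tokens, dim 64
out, attn_weights = mha(x, x, x)    # self-attention: query = key = value = x

print(out.shape)                    # torch.Size([2, 10, 64])
print(attn_weights.shape)           # torch.Size([2, 10, 10]), averaged over heads
```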

29. Encoder (in Transformers)

Stack of self-attention and feed-forward layers that converts input sequence into contextual representations.

30. Decoder (in Transformers)

Stack that generates output tokens using masked self-attention plus cross-attention to encoder representations.

31. Causal Language Modeling (CLM)

Training objective where a model predicts the next token using only left-context; basis for GPT-style models.
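
A minimal sketch with the Hugging Face transformers pipeline, assuming the library is installed and the gpt2 weights can be downloaded:

```python
# Causal LM sketch: GPT-2 continues the text left-to-right.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
out = generator("The Transformer architecture is", max_new_tokens=20)
print(out[0]["generated_text"])
```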

32. Masked Language Modeling (MLM)

Pre-training task where random tokens are masked and the model predicts them from both left and right context.
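
A minimal sketch with the Hugging Face fill-mask pipeline, assuming transformers is installed and bert-base-uncased can be downloaded:

```python
# Masked LM sketch: BERT predicts the [MASK] token from both sides of the context.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")
for pred in unmasker("The capital of France is [MASK]."):
    print(pred["token_str"], round(pred["score"], 3))
```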

33. Next Sentence Prediction (NSP)

BERT pre-training task where the model classifies whether two sentences are sequential in the original text.

34. ChatGPT

OpenAI conversational system based on GPT-3.5/4, further tuned with supervised fine-tuning and RLHF for dialogue.

35. Prompt

User-provided text or instruction that guides an AI model to perform a specific task.

36. Prompt Engineering

Process of designing, testing, and refining prompts to elicit desired responses from language models.

37. Zero-Shot Learning Prompt

Prompting method where the model receives only task instructions with no examples.

38. One-Shot Learning Prompt

Prompting method that includes a single example to illustrate the desired output format.

39. Few-Shot Learning Prompt

Prompting method that supplies several examples (typically 2–5) to guide the model’s response.
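
An illustrative few-shot prompt (the reviews are made up), written here as a Python string:

```python
# Few-shot sentiment prompt: two labelled examples, then the new input to classify.
few_shot_prompt = """Classify the sentiment of each review as Positive or Negative.

Review: "The battery lasts all day." -> Positive
Review: "The screen cracked within a week." -> Negative
Review: "Setup was quick and painless." ->"""
```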

40. Chain-of-Thought Prompting

Technique that asks the model to reveal intermediate reasoning steps to improve complex task performance.
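
An illustrative chain-of-thought prompt, again as a Python string:

```python
# Chain-of-thought prompt: the model is asked to show intermediate reasoning.
cot_prompt = (
    "A shop sells pens at 3 for $2. How much do 12 pens cost?\n"
    "Let's think step by step, then state the final answer on its own line."
)
```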

41. Iterative Prompting

Strategy of refining a prompt over multiple rounds based on previous outputs to converge on better results.

42. Negative Prompting

Prompting approach that explicitly states what content or style should be avoided in the response.

43. Hybrid Prompting

Combining multiple prompting techniques (e.g., few-shot + chain-of-thought) to optimize outputs.

44. Prompt Chaining

Breaking a complex task into sequential prompts where each output feeds into the next step.
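
A minimal sketch of a two-step chain; call_llm is a hypothetical placeholder for whatever model API is actually used.

```python
# Prompt chaining sketch; call_llm is a hypothetical stand-in for a real model API.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("replace with an actual model call")

def summarize_then_translate(article: str) -> str:
    # Step 1: the first prompt produces a summary...
    summary = call_llm(f"Summarize the following article in three sentences:\n{article}")
    # Step 2: ...which becomes the input to the second prompt.
    return call_llm(f"Translate this summary into French:\n{summary}")
```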

45. Hallucination (in LLMs)

Phenomenon where a language model generates confident but factually incorrect or fabricated content.

46. Bias (in LLMs)

Tendency of a model’s outputs to reflect societal or data-driven stereotypes and unfair representations.

47. Language AI

Subfield of artificial intelligence dedicated to understanding, processing, and generating human language.

48. Foundation Model

Large, versatile model pre-trained on broad data that can be fine-tuned for many downstream tasks.

49. Residual Connection

Shortcut connection that adds a layer's input to its output, stabilizing the training of deep Transformers.

50. Layer Normalization

Normalization technique applied across features in each layer to improve training stability in Transformers.

51. GeLU Activation

Gaussian Error Linear Unit, a smooth non-linear activation function commonly used in Transformer feed-forward networks.
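
A minimal PyTorch sketch tying the last three terms together: a Transformer-style feed-forward sublayer with GeLU, wrapped in a residual connection and layer normalization (post-norm layout; all dimensions are illustrative).

```python
# Feed-forward sublayer sketch: GELU activation, residual connection, layer norm.
import torch
from torch import nn

class FeedForwardBlock(nn.Module):
    def __init__(self, d_model: int = 64, d_ff: int = 256):
        super().__init__()
        self.ff = nn.Sequential(
            nn.Linear(d_model, d_ff),
            nn.GELU(),
            nn.Linear(d_ff, d_model),
        )
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.norm(x + self.ff(x))   # residual add, then layer normalization

x = torch.randn(2, 10, 64)                 # batch of 2 sequences, 10 tokens each
print(FeedForwardBlock()(x).shape)         # torch.Size([2, 10, 64])
```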