Fine-Tuning Large Language Models – Key Vocabulary

Key terms, acronyms and concepts you must know to understand and discuss fine-tuning, deploying and safeguarding Large Language Models.

1. Large Language Model (LLM)

A neural network with billions of parameters trained on vast text corpora to understand and generate human-like language.

2. Fine-Tuning

Further training a pre-trained model on a smaller, domain-specific dataset to specialize it for a new task.

3. Parameter-Efficient Fine-Tuning (PEFT)

Any technique that adapts an LLM by updating only a small subset of parameters, reducing compute and memory costs.

4. Low-Rank Adaptation (LoRA)

A PEFT method that inserts small low-rank matrices into weight layers and trains only these matrices while freezing the original weights.
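
As a minimal sketch (assuming the Hugging Face peft library and an illustrative GPT-2 checkpoint), only the inserted low-rank matrices become trainable while the base weights stay frozen:

```python
# LoRA sketch with peft; checkpoint and hyper-parameters are illustrative.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")

lora_cfg = LoraConfig(
    r=8,                        # rank of the low-rank update matrices
    lora_alpha=16,              # scaling applied to the update
    lora_dropout=0.05,
    target_modules=["c_attn"],  # GPT-2 attention projection layer
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_cfg)  # original weights stay frozen
model.print_trainable_parameters()      # only adapter weights are trainable
```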

5. QLoRA

A memory-efficient variant of LoRA that fine-tunes 4-bit quantised weights while training low-rank adapters.
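
A hedged sketch of a QLoRA-style setup with transformers, bitsandbytes and peft; the checkpoint name and hyper-parameters are illustrative and a CUDA GPU is assumed:

```python
# Load the base model in 4-bit (NF4) and train LoRA adapters on top.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb_cfg = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",              # NormalFloat4 quantisation
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute in bf16
    bnb_4bit_use_double_quant=True,
)

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",   # illustrative checkpoint
    quantization_config=bnb_cfg,
    device_map="auto",
)
base = prepare_model_for_kbit_training(base)

model = get_peft_model(base, LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM"))
```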

6. DoRA

Weight-Decomposed Low-Rank Adaptation; decomposes each pre-trained weight into a magnitude and a direction, applying LoRA-style low-rank updates to the directional component.

7. Half Fine-Tuning (HFT)

Technique that freezes half of a model’s parameters each round, preserving pre-trained knowledge while learning new tasks.

8. Mixture of Experts (MoE)

Architecture with multiple specialist sub-networks (experts); a router activates only a subset per token for efficiency.
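
A toy top-k routing layer in PyTorch, purely to illustrate that only k experts run per token; the sizes and routing details are simplified assumptions:

```python
# Toy mixture-of-experts layer: the router picks the top-k experts per token
# and combines their outputs with the (softmaxed) routing weights.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMoE(nn.Module):
    def __init__(self, d_model=64, n_experts=4, k=2):
        super().__init__()
        self.experts = nn.ModuleList(nn.Linear(d_model, d_model) for _ in range(n_experts))
        self.router = nn.Linear(d_model, n_experts)
        self.k = k

    def forward(self, x):                          # x: (tokens, d_model)
        scores = self.router(x)                    # (tokens, n_experts)
        weights, idx = scores.topk(self.k, dim=-1)
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.k):                 # only k experts run per token
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot:slot + 1] * expert(x[mask])
        return out

y = TinyMoE()(torch.randn(10, 64))
```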

9. Mixture of Agents (MoA)

Framework where several complete LLMs collaborate layer-by-layer, combining proposals and aggregations to improve output quality.

10. Retrieval-Augmented Generation (RAG)

Pipeline that retrieves external documents at query time and feeds them into an LLM to ground responses in fresh knowledge.
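
A toy sketch of the pipeline, using lexical overlap in place of a real embedding index; llm.generate is a hypothetical stand-in for any model call:

```python
# Minimal RAG flow: retrieve the most relevant document, then prepend it
# to the prompt so the model can ground its answer.
docs = [
    "LoRA trains small low-rank matrices while freezing the base weights.",
    "QLoRA fine-tunes 4-bit quantised models with low-rank adapters.",
]

def retrieve(query, documents):
    # crude lexical-overlap score instead of embedding similarity
    q = set(query.lower().split())
    return max(documents, key=lambda d: len(q & set(d.lower().split())))

def build_prompt(query):
    context = retrieve(query, docs)
    return f"Context: {context}\n\nQuestion: {query}\nAnswer:"

prompt = build_prompt("How does QLoRA save memory?")
# response = llm.generate(prompt)   # hypothetical LLM call
```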

11. Seven-Stage Fine-Tuning Pipeline

End-to-end process: Dataset Preparation, Model Initialization, Training Setup, Fine-Tuning, Evaluation, Deployment, Monitoring.

12. Dataset Preparation

Collecting, cleaning, formatting and splitting data plus handling imbalance, augmentation and annotation.

13. Model Initialization

Loading a pre-trained checkpoint and setting initial configurations before training or inference.

14. Training Environment Setup

Configuring hardware (GPU/TPU), software libraries, hyper-parameters, optimiser and loss functions for efficient training.

15. Hyper-parameter

A training setting (e.g., learning rate, batch size, epochs) chosen before training that governs model learning behaviour.

16. Gradient Descent

Optimisation algorithm updating weights by moving them opposite to the gradient of the loss function.
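
A small NumPy illustration of the update rule on a least-squares problem; the SGD and mini-batch variants defined next change only how many samples the gradient is estimated from:

```python
# One plain gradient-descent loop on a mean-squared-error loss.
import numpy as np

X = np.random.randn(100, 3)          # toy features
y = X @ np.array([1.0, -2.0, 0.5])   # toy targets
w = np.zeros(3)
lr = 0.1

for _ in range(200):
    grad = 2 * X.T @ (X @ w - y) / len(X)   # gradient of the MSE loss
    w -= lr * grad                          # move opposite to the gradient

print(w)   # approaches [1.0, -2.0, 0.5]
```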

17. Stochastic Gradient Descent (SGD)

Gradient descent variant that updates weights using one (or few) training samples per step, adding randomness.

18. Mini-Batch Gradient Descent

Updates parameters with gradients computed on small batches, balancing stability and speed.

19. Adam Optimizer

Adaptive learning-rate optimiser that combines momentum and RMSprop ideas; widely used for LLM fine-tuning.

20. AdamW

Adam variant that decouples weight decay from gradient updates, improving regularisation for transformers.
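
Typical usage in PyTorch; the learning rate and weight decay shown are common fine-tuning starting points, not prescriptions:

```python
# AdamW with decoupled weight decay (the "W") for transformer fine-tuning.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2")   # illustrative checkpoint
optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=2e-5,            # small learning rate typical for fine-tuning
    weight_decay=0.01,  # decay applied separately from the gradient update
)
```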

21. Quantisation

Technique that stores weights/activations in lower precision (e.g., 8-bit, 4-bit) to cut memory and speed up inference.
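
A minimal NumPy illustration of affine 8-bit quantisation (scale plus zero-point), ignoring the per-channel and calibration details real libraries add:

```python
# Map float weights to uint8 with a scale and zero-point, then dequantise.
import numpy as np

w = np.array([-0.62, 0.10, 0.47, 1.30], dtype=np.float32)

scale = (w.max() - w.min()) / 255.0        # one step of the 8-bit grid
zero_point = np.round(-w.min() / scale)    # integer that represents 0.0

q = np.clip(np.round(w / scale + zero_point), 0, 255).astype(np.uint8)
w_hat = (q.astype(np.float32) - zero_point) * scale   # dequantised weights

print(q, np.abs(w - w_hat).max())          # small reconstruction error
```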

22. Pruning

Removing unimportant weights, neurons or filters from a network to make it smaller and faster.
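
A short magnitude-pruning sketch using PyTorch's built-in pruning utilities on a single linear layer:

```python
# Zero out the 30% of weights with the smallest absolute value.
import torch
import torch.nn.utils.prune as prune

layer = torch.nn.Linear(128, 128)
prune.l1_unstructured(layer, name="weight", amount=0.3)   # adds a 0/1 mask
prune.remove(layer, "weight")                             # make pruning permanent

sparsity = (layer.weight == 0).float().mean().item()
print(f"fraction of zero weights: {sparsity:.2f}")        # ~0.30
```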

23. Cross-Entropy Loss

Primary objective for language models measuring divergence between predicted token distribution and true distribution.

24. Perplexity

Exponentiated cross-entropy; lower values mean the model is less ‘surprised’ and predicts text better.
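
A small PyTorch example tying the two previous terms together: perplexity is the exponential of the mean per-token cross-entropy (the values noted for random logits are approximate):

```python
# Cross-entropy and perplexity on a toy next-token prediction.
import torch
import torch.nn.functional as F

logits = torch.randn(5, 1000)           # 5 positions, vocabulary of 1000 tokens
targets = torch.randint(0, 1000, (5,))  # the "true" next tokens

ce = F.cross_entropy(logits, targets)   # mean negative log-likelihood per token
ppl = torch.exp(ce)                     # perplexity

print(ce.item(), ppl.item())            # random model: ce roughly ln(1000) ~ 6.9
```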

25. Safety Benchmark

Suite of tests (e.g., DecodingTrust) that probe toxicity, bias, privacy, hallucination and adversarial robustness of LLMs.

26. Llama Guard

Meta’s safeguard model that classifies prompts and responses into risk categories to filter unsafe content.

27. ShieldGemma

Moderation model built on Google's Gemma that filters hate speech, violence, sexually explicit material and other harmful content; released at several parameter scales.

28. WildGuard

Open-source multitask moderation model fine-tuned on adversarial datasets to detect harmful prompts, risky outputs and refusals.

29. Proximal Policy Optimisation (PPO)

Reinforcement-learning algorithm that aligns LLMs by maximising a learned reward while constraining policy updates.
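
The core of PPO's clipped surrogate objective as a standalone function; the inputs are assumed to be per-sample log-probabilities and advantage estimates:

```python
# PPO clipped objective: the new/old probability ratio is clipped so a
# single update cannot move the policy too far.
import torch

def ppo_clip_loss(logp_new, logp_old, advantages, clip_eps=0.2):
    ratio = torch.exp(logp_new - logp_old)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * advantages
    return -torch.min(unclipped, clipped).mean()

loss = ppo_clip_loss(torch.randn(8), torch.randn(8), torch.randn(8))
```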

30. Direct Preference Optimisation (DPO)

Alignment method that directly maximises the likelihood of preferred over rejected responses without a reward model.
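
A sketch of the DPO objective, assuming you already have summed log-probabilities of the chosen and rejected responses under the policy and a frozen reference model (beta is a temperature-like hyper-parameter):

```python
# DPO loss: push the policy to prefer the chosen response relative to the
# reference model, without training a separate reward model.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    chosen_ratio = policy_chosen_logp - ref_chosen_logp
    rejected_ratio = policy_rejected_logp - ref_rejected_logp
    return -F.logsigmoid(beta * (chosen_ratio - rejected_ratio)).mean()

# toy batch of 4 preference pairs
loss = dpo_loss(torch.randn(4), torch.randn(4), torch.randn(4), torch.randn(4))
```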

31. Odds-Ratio Preference Optimisation (ORPO)

Single-stage objective that boosts preferred answers and penalises disfavoured ones via a log-odds loss.

32. Adapters

Small trainable layers inserted into a frozen model; only adapter weights are updated during fine-tuning.

33. Soft Prompt Tuning

PEFT technique that learns a short sequence of virtual tokens prepended to every input instead of changing weights.
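
A minimal sketch with peft's prompt-tuning support; the checkpoint and the number of virtual tokens are illustrative:

```python
# Learn 20 virtual tokens prepended to every input; the base model is frozen.
from transformers import AutoModelForCausalLM
from peft import PromptTuningConfig, TaskType, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")
cfg = PromptTuningConfig(task_type=TaskType.CAUSAL_LM, num_virtual_tokens=20)

model = get_peft_model(base, cfg)
model.print_trainable_parameters()   # only the virtual-token embeddings train
```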

34. Data Augmentation

Creating synthetic examples (e.g., back-translation, paraphrasing) to enlarge training data and improve robustness.

35. Synthetic Data Generation

Using LLMs to produce new labelled samples that resemble target-domain data for fine-tuning.

36. Data Imbalance

Unequal class distribution in a dataset; mitigated via over-/under-sampling, class-weighted losses or focal loss.
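
One common mitigation sketched in PyTorch: weight the cross-entropy loss inversely to class frequency (the class counts here are made up):

```python
# Class-weighted loss so rare classes contribute more to the gradient.
import torch
import torch.nn as nn

class_counts = torch.tensor([900.0, 80.0, 20.0])   # skewed 3-class dataset
weights = class_counts.sum() / (len(class_counts) * class_counts)

criterion = nn.CrossEntropyLoss(weight=weights)
loss = criterion(torch.randn(16, 3), torch.randint(0, 3, (16,)))
```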

37. Federated Learning

Framework where models train across multiple devices holding local data, improving privacy by keeping data in place.

38. Differential Privacy

Mathematical guarantee that a training procedure limits what can be learned about any individual data point, typically enforced by injecting calibrated noise.

39. Fairness

Model characteristic of producing equitable performance across demographic groups, avoiding algorithmic bias.

40. Bias

Systematic error favouring certain outputs or groups, often inherited from training data.

41. Data Drift

Shift in input distribution over time that can degrade model performance post-deployment.

42. Influence Score

Metric estimating each training example’s effect on model predictions; useful for data pruning (e.g., DEFT).

43. Data-Efficient Fine-Tuning (DEFT)

Approach that prunes training data by influence and effort scores to fine-tune LLMs with minimal samples.

44. Sparse Fine-Tuning

Updating only a small set of high-impact parameters (e.g., SpIEL), reducing memory and compute cost.

45. AutoTrain

HuggingFace web service that automates data prep, hyper-parameter search, fine-tuning and deployment.

46. Transformers Library

HuggingFace Python package providing pre-trained models, tokenisers and Trainer API for fine-tuning.

47. Trainer API

High-level class in transformers that abstracts training loops, evaluation and distributed training setup.
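
A skeleton fine-tuning run on a tiny in-memory dataset; the checkpoint, example texts and hyper-parameters are illustrative assumptions:

```python
# Minimal Trainer usage: the Trainer handles the loop, batching and logging.
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

name = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)

texts = ["great product", "terrible service", "loved it", "waste of money"]
labels = [1, 0, 1, 0]
enc = tokenizer(texts, truncation=True, padding=True)
dataset = [{"input_ids": enc["input_ids"][i],
            "attention_mask": enc["attention_mask"][i],
            "labels": labels[i]} for i in range(len(texts))]

args = TrainingArguments(output_dir="out", num_train_epochs=1,
                         per_device_train_batch_size=2, learning_rate=2e-5)

Trainer(model=model, args=args, train_dataset=dataset).train()
```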

48. Optimum

HuggingFace toolkit that applies hardware-aware optimisation (quantisation, pruning, distillation) for efficient inference.

49. Amazon SageMaker JumpStart

AWS service offering ready LLMs and automated pipelines for fine-tuning and deploying on SageMaker.

50. Amazon Bedrock

Fully-managed AWS service giving API access to foundation models and tools for fine-tuning and RAG.

51. OpenAI Fine-Tuning API

Endpoint that lets users upload datasets and customise GPT-3.5/4 models via simple API calls.

52. NVIDIA NeMo

Framework and set of micro-services for training, customising and serving LLMs with GPU acceleration.

53. Generative AI

Field of AI focused on creating new content—text, code, images, audio—rather than just analysing data.

54. Multimodal LLM

Model that processes and generates across multiple modalities, e.g., text + images or audio.

55. Vision-Language Model (VLM)

Multimodal model jointly trained on images and text, enabling tasks like captioning and VQA.

56. Contrastive Learning

Technique that teaches models by bringing paired representations (e.g., image–text) closer and pushing mismatched ones apart.
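
A toy symmetric InfoNCE-style loss of the kind CLIP uses, where matching image–text pairs lie on the diagonal of the similarity matrix (the embeddings here are random stand-ins):

```python
# Symmetric contrastive loss over a batch of paired image/text embeddings.
import torch
import torch.nn.functional as F

img = F.normalize(torch.randn(8, 512), dim=-1)   # batch of image embeddings
txt = F.normalize(torch.randn(8, 512), dim=-1)   # matching text embeddings

logits = img @ txt.T / 0.07                      # cosine similarities / temperature
labels = torch.arange(8)                         # i-th image matches i-th text
loss = (F.cross_entropy(logits, labels) + F.cross_entropy(logits.T, labels)) / 2
```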

57. CLIP

OpenAI’s contrastive model that aligns image and text embeddings, enabling zero-shot vision tasks.

58. AdapterFusion

Method that learns to combine multiple task-specific adapters through a fusion layer, improving multi-task and transfer performance.

59. Data Cleaning

Removing noise, errors and inconsistencies from raw data to improve fine-tuning quality.

60. Tokenizer

Algorithm that splits raw text (or audio) into discrete tokens usable by a language model.
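
A quick round-trip with a Hugging Face tokenizer (checkpoint illustrative): text to token ids and back:

```python
# Tokenisation round-trip: text -> token ids -> text.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
ids = tok("Fine-tuning large language models")["input_ids"]

print(ids)                              # list of integer token ids
print(tok.convert_ids_to_tokens(ids))   # the sub-word pieces
print(tok.decode(ids))                  # "Fine-tuning large language models"
```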

61. PagedAttention

vLLM memory-management algorithm that stores key-value cache in paged blocks, reducing fragmentation.

62. vLLM

Inference engine using PagedAttention plus smart scheduling to serve LLMs with high throughput and low memory.
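
A minimal offline-inference sketch with vLLM's Python API; the model name is illustrative and a GPU is assumed:

```python
# Serve a small model with vLLM and generate a completion.
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")             # illustrative checkpoint
params = SamplingParams(temperature=0.8, max_tokens=64)

outputs = llm.generate(["Explain PagedAttention in one sentence."], params)
print(outputs[0].outputs[0].text)
```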

63. Petals

Decentralised framework that splits LLM layers across volunteer GPUs, enabling torrent-style inference/fine-tuning.

64. WebGPU

Browser API enabling GPU compute inside web apps, letting LLMs run locally via projects like WebLLM.

65. Quantised LLM

Model whose weights are stored in reduced precision (4/8-bit) to cut memory and accelerate inference.

66. Pruning Schedule

Planned strategy dictating when and how much of a model’s weights or neurons to prune during training.

67. Model Card

Standardised report documenting a model’s purpose, data, performance, limitations and ethical considerations.

68. GLUE Benchmark

Suite of nine NLP tasks used to gauge general language understanding of models.

69. MMLU

Massive Multitask Language Understanding; benchmark spanning 57 subjects that tests broad knowledge and reasoning.

70. DecodingTrust

Comprehensive framework assessing LLM trustworthiness in toxicity, bias, privacy, robustness and ethics.

71. LLM Guardrails

Intermediary policies or models that filter or rewrite prompts/responses to enforce safety and compliance.
