Large Language Model (LLM)
A sophisticated mathematical function that predicts the next word for any given text, often with hundreds of billions of parameters.
Pre-training
The initial training phase where a model learns to autocomplete text from a vast amount of data.
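To make the idea concrete, here is a minimal sketch of the pre-training objective in Python. The toy corpus, the vocabulary, and the uniform predict_next function are invented placeholders; a real model would learn far sharper probabilities from vastly more text and many more parameters.

```python
# A minimal sketch of the pre-training (next-word prediction) objective.
# The corpus and the predict_next function are made up for illustration.
import numpy as np

corpus = ["the", "cat", "sat", "on", "the", "mat"]
vocab = sorted(set(corpus))

def predict_next(prefix, vocab):
    """Hypothetical model: assigns a probability to every word in the vocabulary.
    For illustration it just predicts uniformly; training would improve this."""
    return np.full(len(vocab), 1.0 / len(vocab))

# Pre-training loss: average negative log-probability the model assigns to the
# word that actually comes next at each position in the corpus.
loss = 0.0
for i in range(len(corpus) - 1):
    probs = predict_next(corpus[: i + 1], vocab)
    actual = vocab.index(corpus[i + 1])
    loss -= np.log(probs[actual])
loss /= len(corpus) - 1
print(f"cross-entropy loss: {loss:.3f}")  # lower means better autocompletion
```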
Reinforcement Learning from Human Feedback (RLHF)
A follow-up training phase in which human feedback on the model's outputs is used to further adjust its parameters, so its predictions become more useful and more likely to be preferred by users.
Backpropagation
The algorithm used to adjust a model's parameters during training: it works backward from the gap between the model's predictions and the actual outcomes to determine how each parameter should change to reduce that gap.
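Below is a minimal Python sketch of this kind of gradient-based training on a one-parameter model (y = w * x). For a model this small, backpropagation reduces to the single chain-rule step in the gradient line; the data, starting weight, and learning rate are invented for illustration.

```python
# A minimal sketch of adjusting a parameter by following its gradient.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.0, 4.0, 6.0, 8.0])    # generated by the "true" w = 2

w = 0.0                               # start from an arbitrary parameter value
learning_rate = 0.01
for step in range(100):
    pred = w * x                      # forward pass: the model's prediction
    error = pred - y                  # compare prediction with actual outcome
    grad = 2 * np.mean(error * x)     # derivative of mean squared error w.r.t. w
    w -= learning_rate * grad         # nudge the parameter downhill
print(f"learned w ≈ {w:.3f}")         # approaches 2.0
```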
Transformer
A neural-network architecture, introduced in 2017, that underlies modern language models and processes all the words of a text in parallel rather than one at a time.
Attention
An operation used in transformers that allows the model to weigh the importance of different words in relation to each other.
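Here is a minimal sketch of single-head scaled dot-product attention with made-up numbers: 3 words, 4-dimensional vectors. In a real transformer the query, key, and value vectors come from learned projections of the word embeddings; here they are random for illustration.

```python
# A minimal sketch of scaled dot-product attention (single head).
import numpy as np

rng = np.random.default_rng(0)
seq_len, dim = 3, 4
Q = rng.normal(size=(seq_len, dim))   # queries: what each word is looking for
K = rng.normal(size=(seq_len, dim))   # keys: what each word offers
V = rng.normal(size=(seq_len, dim))   # values: information to pass along

scores = Q @ K.T / np.sqrt(dim)       # relevance of every word to every other word
weights = np.exp(scores)
weights /= weights.sum(axis=-1, keepdims=True)   # softmax: each row sums to 1
output = weights @ V                  # each word becomes a weighted mix of values
print(weights.round(2))               # the importance each word assigns the others
```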
Parameters/Weights
The continuous values inside a language model, set during training, that determine its behavior and predictions.
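A small illustrative sketch: the same input produces different outputs under different parameter settings, which is why training, i.e. choosing good values for the weights, matters. The matrices below are invented.

```python
# A minimal sketch of how weights determine behavior; all numbers are made up.
import numpy as np

x = np.array([1.0, 0.5])                     # a tiny input vector

W_a = np.array([[0.2, -1.0], [0.7, 0.3]])    # one setting of the parameters
W_b = np.array([[1.5,  0.1], [-0.4, 0.9]])   # a different setting

print(W_a @ x)   # output under the first parameters
print(W_b @ x)   # a different output from the same input
```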
GPU (Graphics Processing Unit)
Specialized computer chips optimized for performing many calculations simultaneously, enabling faster model training.
Prediction Probability
A score assigned by the model indicating how likely each possible next word is, based on the preceding text.
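A minimal sketch of how raw model scores (logits) become next-word probabilities via a softmax; the vocabulary and scores below are invented.

```python
# A minimal sketch of turning logits into a next-word probability distribution.
import numpy as np

vocab = ["mat", "moon", "banana"]
logits = np.array([3.1, 1.2, -0.5])     # raw scores for "the cat sat on the ..."

probs = np.exp(logits - logits.max())   # subtract the max for numerical stability
probs /= probs.sum()                    # normalise so the probabilities sum to 1
for word, p in zip(vocab, probs):
    print(f"{word}: {p:.2f}")           # "mat" receives the highest probability
```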
Contextual Encoding
The process by which a word's representation inside the model is adjusted by its surrounding words, so that the same word can take on different meanings in different contexts.