A set of vocabulary flashcards covering key concepts from the lecture notes on Large Language Models and their applications in AI.
Large Language Models (LLMs)
A type of AI that can understand and generate human-like text based on the input it receives.
Prompt Engineering
The practice of designing inputs to generative AI tools to yield better outputs.
Fine-tuning
The process of adapting pre-trained LLMs to specific tasks to improve their performance.
Retrieval Augmented Generation (RAG)
An AI framework that enhances LLM output with information retrieved from external sources without altering the model.
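The retrieve-then-generate flow can be sketched in a few lines. This is a toy illustration, not a real RAG stack: it ranks documents by simple word overlap (standing in for embedding-based retrieval) and assembles a grounded prompt; the function names and sample documents are invented for the example.

```python
def retrieve(question, documents, k=1):
    """Rank documents by word overlap with the question (a stand-in
    for embedding-based retrieval) and return the top k."""
    q_words = set(question.lower().split())
    ranked = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_rag_prompt(question, documents):
    """Prepend the retrieved context so the LLM answers from it,
    without altering the model itself."""
    context = "\n".join(retrieve(question, documents))
    return (f"Answer using only the context below.\n\n"
            f"Context:\n{context}\n\nQuestion: {question}")

docs = [
    "The Eiffel Tower is in Paris and opened in 1889.",
    "Photosynthesis converts sunlight into chemical energy.",
]
prompt = build_rag_prompt("When did the Eiffel Tower open?", docs)
```

The retrieved sentence about the Eiffel Tower ends up in the prompt; the unrelated document does not.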
Embeddings
Vector representations of words or phrases that capture their semantic meanings in high-dimensional space.
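Semantic similarity between embeddings is typically measured with cosine similarity. A minimal sketch with made-up 3-dimensional vectors (real embeddings have hundreds or thousands of dimensions):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors; 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy embeddings chosen so that related words point in similar directions.
embeddings = {
    "cat": [0.9, 0.8, 0.1],
    "dog": [0.85, 0.75, 0.2],
    "car": [0.1, 0.2, 0.9],
}

sim_cat_dog = cosine_similarity(embeddings["cat"], embeddings["dog"])  # high
sim_cat_car = cosine_similarity(embeddings["cat"], embeddings["car"])  # low
```

Because "cat" and "dog" point in nearly the same direction, their similarity is close to 1.0, while "cat" and "car" score much lower.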
Transformers
Deep learning models that utilize self-attention mechanisms to process input sequences for tasks like language understanding.
Quantization
A technique to reduce the size of LLMs by decreasing the precision of the weights and activations to save memory and processing power.
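The core idea can be shown with a simple affine quantization scheme: map floats onto 8-bit integers plus a scale and offset. This is a sketch of the concept, not any particular library's implementation:

```python
def quantize(weights, bits=8):
    """Map float weights onto integers in [0, 2**bits - 1] plus a
    (scale, offset) pair needed to recover approximate floats."""
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / (2 ** bits - 1) or 1.0  # avoid zero scale
    q = [round((w - lo) / scale) for w in weights]
    return q, scale, lo

def dequantize(q, scale, lo):
    """Recover approximate float weights from the integer codes."""
    return [v * scale + lo for v in q]

weights = [-0.51, 0.02, 0.33, 1.27]
q, scale, lo = quantize(weights)
restored = dequantize(q, scale, lo)
# Each integer code fits in one byte instead of four, and each restored
# weight is within one quantization step of the original.
```

The memory saving comes from storing the small integer codes instead of 32-bit floats; the cost is the small rounding error visible in `restored`.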
Distillation
A process for training a smaller, more efficient student model to replicate the behavior of a larger teacher model.
Attention Mechanism
A process within neural network architectures that allows models to focus on specific parts of input sequences, enhancing performance.
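For a single query vector, scaled dot-product attention (the form used in transformers) can be written out directly. A minimal sketch with toy 2-dimensional vectors:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    """Scaled dot-product attention for one query: score each key
    against the query, normalize the scores with softmax, then return
    the weighted sum of the value vectors."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

query = [1.0, 0.0]
keys = [[1.0, 0.0], [0.0, 1.0]]      # the first key matches the query
values = [[10.0, 0.0], [0.0, 10.0]]
out = attention(query, keys, values)
# The output is pulled toward the first value vector, because the
# first key scored higher against the query.
```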
ROUGE Score
A metric for evaluating the quality of text generated by a model compared to reference texts, using recall and precision of n-grams.
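The simplest variant, ROUGE-1, counts overlapping unigrams. A minimal sketch (real evaluations use tooling that also handles stemming and longer n-grams):

```python
from collections import Counter

def rouge_1(candidate, reference):
    """Unigram-overlap precision, recall, and F1 (ROUGE-1)."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    precision = overlap / max(sum(cand.values()), 1)
    recall = overlap / max(sum(ref.values()), 1)
    f1 = (2 * precision * recall / (precision + recall)) if overlap else 0.0
    return {"precision": precision, "recall": recall, "f1": f1}

scores = rouge_1("the cat sat on the mat", "the cat lay on the mat")
# Five of six reference unigrams are matched, so recall is 5/6.
```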
Zero-shot Prompting
A technique where the model is prompted without any examples to generate responses, relying only on its pre-training.
One-shot Prompting
A technique involving providing a single example in the prompt to help the model generate relevant responses.
N-shot Prompting
A technique involving multiple examples provided in the prompt to guide the model's output.
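The three prompting styles above differ only in how many examples are packed into the prompt. A minimal sketch of prompt assembly (the task text and examples are invented for illustration):

```python
def build_prompt(task, examples, query):
    """Assemble a prompt: zero-shot if `examples` is empty,
    one-shot with one example, n-shot with several."""
    parts = [task]
    for inp, out in examples:
        parts.append(f"Input: {inp}\nOutput: {out}")
    parts.append(f"Input: {query}\nOutput:")
    return "\n\n".join(parts)

task = "Classify the sentiment of each review as positive or negative."

zero_shot = build_prompt(task, [], "Great battery life!")
two_shot = build_prompt(
    task,
    [("I loved it.", "positive"),
     ("Total waste of money.", "negative")],
    "Great battery life!",
)
```

The model sees the examples as part of its input and is expected to continue the pattern; no weights change, which is what distinguishes prompting from fine-tuning.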
Vector Databases
Databases optimized for storing and querying vector embeddings, allowing efficient retrieval of similar items based on vector similarity.
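A toy in-memory version makes the idea concrete: store (id, vector) pairs and return the ids whose vectors are most similar to a query. This sketch does exact brute-force search; production vector databases use approximate indexes (e.g. HNSW) to scale to millions of vectors.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(x * x for x in b)))

class ToyVectorStore:
    """Minimal vector store: exact nearest-neighbor search by
    ranking every stored vector against the query."""

    def __init__(self):
        self.items = []  # list of (id, vector) pairs

    def add(self, item_id, vector):
        self.items.append((item_id, vector))

    def query(self, vector, k=1):
        ranked = sorted(self.items,
                        key=lambda it: cosine(vector, it[1]),
                        reverse=True)
        return [item_id for item_id, _ in ranked[:k]]

store = ToyVectorStore()
store.add("doc_cat", [0.9, 0.1])
store.add("doc_car", [0.1, 0.9])
nearest = store.query([0.8, 0.2], k=1)  # ['doc_cat']
```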
Transformer Architectures
The structural variants of the transformer model (encoder-only, decoder-only, and encoder-decoder) used in natural language processing tasks.
Generative Pre-trained Transformer (GPT)
A type of LLM developed by OpenAI that is pre-trained on a variety of internet text and fine-tuned for specific applications.