Introduction to AI, Large Language Models, GenAI

Description and Tags

Flashcards based on the concepts from the lecture on Artificial Intelligence and Generative AI.

86 Terms

1
New cards

Generative AI

A subset of AI that can create new content such as text, images, videos, and audio.

2
New cards

Large Language Models (LLMs)

Models trained on massive text datasets using self-supervised learning that can understand and generate human-like text.

3
New cards

Transfer Learning

A technique that leverages pre-trained models to enhance and expedite the development of new models.

4
New cards

Prompt Engineering

The process of crafting effective prompts to guide model outputs towards relevance and accuracy.

5
New cards

Generative Adversarial Networks (GANs)

A type of generative model architecture used for creating realistic content by pitting two neural networks against each other.

6
New cards

Variational Autoencoders (VAEs)

A generative model used to create new data instances, primarily in image generation and anomaly detection.

7
New cards

Fine-tuning

The process of adapting a pre-trained model to a specific task by continuing the training on a new dataset.

8
New cards

Neural Networks

Computational models inspired by the human brain that consist of interconnected nodes (neurons) for processing data.

9
New cards

Backpropagation

A training algorithm for neural networks that adjusts weights by propagating the error back through the network.
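
A minimal numeric illustration (not from the lecture): one gradient-descent step for a single linear neuron with squared-error loss, where the error is propagated back from the loss to the weights via the chain rule.

import numpy as np

# Toy setup: one linear neuron y_hat = w.x + b with squared-error loss.
x = np.array([1.0, 2.0])       # input
y = 3.0                        # target
w = np.array([0.5, -0.5])      # weights
b = 0.0                        # bias
lr = 0.1                       # learning rate

y_hat = w @ x + b              # forward pass
loss = 0.5 * (y_hat - y) ** 2  # loss value

# Backward pass: chain rule gives the gradients of the loss.
dL_dyhat = y_hat - y
dL_dw = dL_dyhat * x
dL_db = dL_dyhat

# Gradient-descent update using the back-propagated error.
w -= lr * dL_dw
b -= lr * dL_db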

10
New cards

Reinforcement Learning from Human Feedback (RLHF)

A technique used to align AI models with human values and ethics by incorporating human feedback into the training process.

11
New cards
12
New cards

OpenAI GPT (GPT-3.5, GPT-4)

Models that excel with clear instructions and contextual examples, using techniques like few-shot prompting, Chain-of-Thought (CoT) prompting, role-based instructions, iterative refinement, and system prompts.

13
New cards

Few-shot prompting

A technique that improves task-specific performance by providing examples within the prompt.
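
A hedged sketch of what a few-shot prompt might look like; the task and examples are invented for illustration only.

# Two labeled examples, then the new input the model should complete.
few_shot_prompt = """Classify the sentiment of each review as Positive or Negative.

Review: "The battery lasts all day." -> Positive
Review: "It broke after a week." -> Negative
Review: "Setup was quick and painless." ->"""

# This string would be sent to whichever LLM API is in use; the model is
# expected to continue the pattern with " Positive".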

14
New cards

Chain-of-Thought (CoT) prompting

A prompting method that enhances reasoning for complex tasks by breaking down problems into a series of steps.
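
For illustration, a CoT-style prompt asks the model to show its intermediate steps before answering; the exact wording below is an assumption, not a fixed formula.

cot_prompt = (
    "Q: A train travels 60 km in 1.5 hours. What is its average speed?\n"
    "A: Let's think step by step. "
    "Speed = distance / time = 60 km / 1.5 h = 40 km/h. The answer is 40 km/h.\n"
    "Q: A car travels 150 km in 2.5 hours. What is its average speed?\n"
    "A: Let's think step by step."
)
# The worked example shows the model the reasoning pattern it should imitate.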

15
New cards

Role-based instructions

Guidance in prompts that specifies the role of the model, such as 'You are a data scientist,' to influence behavior.

16
New cards

Iterative refinement

A process that ensures precision by allowing the model to revise its outputs.

17
New cards

System prompts

A specialized type of prompt used in AI language models that sets the overall context, behavior, or persona for the AI's responses.

EX:
Do not create images in the style of artists, creative professionals or studios whose latest work was created after 1912 (e.g. Picasso, Kahlo).

They guide the model's output by defining constraints and expectations.
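
In chat-style APIs the system prompt is typically sent as a separate "system" message; a generic sketch (the message-dict format is an assumption about the API in use, not a specific provider's schema).

messages = [
    {"role": "system",
     "content": "You are a concise assistant. Answer in at most two sentences "
                "and refuse requests for medical diagnoses."},
    {"role": "user",
     "content": "Summarize what a large language model is."},
]
# `messages` would then be passed to the chat-completion endpoint of the
# provider being used; the system entry constrains every later response.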

18
New cards

Google PaLM (PaLM 2)

Models that benefit from CoT for reasoning, few-shot prompting for accuracy, and generated knowledge prompting for multi-step queries.

19
New cards

Meta LLaMA (LLaMA 2, LLaMA 3)

Models that excel in in-context learning, structured dialogue prompts, text-to-SQL formatting, and prompt chaining.

20
New cards

Anthropic Claude (Claude 2, Claude 3)

Models that respond well to XML-tagged prompts, utilize step-by-step reasoning (CoT), assign roles, and leverage long context for handling extensive inputs.

21
New cards

Code LLaMA

Models that excel at few-shot examples for code generation, function calling prompts, and debugging workflows.

22
New cards

LoRA (Low-Rank Adaptation)

A finetuning framework that modifies only a fraction of a model’s parameters, using low-rank matrices to adjust large models efficiently without retraining the entire architecture.
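
A minimal numerical sketch of the core idea: a frozen weight matrix W plus a trainable low-rank update B·A scaled by alpha/r. This illustrates the math, not any particular library's API.

import numpy as np

d, r, alpha = 8, 2, 16            # model dim, LoRA rank, scaling factor
W = np.random.randn(d, d)         # pretrained weight, kept frozen
A = np.random.randn(r, d) * 0.01  # trainable low-rank factor (r x d)
B = np.zeros((d, r))              # trainable low-rank factor (d x r), zero-init

def lora_forward(x):
    # Effective weight is W + (alpha / r) * B @ A; only A and B are trained.
    return x @ (W + (alpha / r) * B @ A).T

x = np.random.randn(1, d)
y = lora_forward(x)               # identical to x @ W.T until B and A are updated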

23
New cards

QLoRA (Quantized Low-Rank Adaptation)

A finetuning approach that combines LoRA with a quantized (typically 4-bit) base model to cut memory and compute overhead while fine-tuning large language models.

24
New cards

Parameter-Efficient Fine-Tuning (PEFT)

Techniques that fine-tune a pretrained model by updating only a small subset of its parameters, offering improved computational efficiency.

25
New cards

Supervised Fine-Tuning (SFT)

The process of adapting a pretrained model on labeled data to align its responses with specific tasks or instructions.

26
New cards

Preference Tuning

A finetuning process aimed at aligning an LLM's behavior with desired human preferences through training based on a reward model.

27
New cards

Direct Preference Optimization (DPO)

A preference-tuning technique that optimizes the model directly on pairs of preferred and rejected outputs, without needing a separately trained reward model.

28
New cards

Proximal Policy Optimization (PPO)

A reinforcement learning algorithm used in training LLMs (e.g., in RLHF) to align outputs with human preferences; more complex than DPO because it relies on a separate reward model and a policy-optimization loop.

29
New cards

Benchmarking

The process of evaluating and comparing models across various metrics like accuracy, latency, and memory efficiency to assess finetuning success.

30
New cards

Metrics for evaluation (BLEU, F1-score, perplexity)

Quantitative measures used to evaluate the performance of generative models, providing insight on accuracy and quality.
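
As one worked example, perplexity can be computed as the exponential of the average negative log-likelihood per token; the per-token probabilities below are made up for illustration.

import math

# Hypothetical probabilities a model assigned to each token of a 4-token sequence.
token_probs = [0.25, 0.10, 0.50, 0.20]

avg_nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
perplexity = math.exp(avg_nll)   # lower is better; here roughly 4.5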

31
New cards

Human evaluation

Methods of assessing model outputs through human judgment, often providing the most comprehensive insight but being resource-intensive.

32
New cards

ROUGE

A metric used for evaluating text generation that compares the generated text to reference text, assessing recall and precision at the n-gram level.

33
New cards

MMLU (Massive Multitask Language Understanding)

A benchmark that assesses the proficiency of language models across a variety of tasks and domains.

34
New cards

GLUE (General Language Understanding Evaluation)

A benchmark that evaluates and measures the performance of language models across multiple language understanding tasks.

35
New cards

Chatbot Arena

A human evaluation method that uses structured comparisons to assess the quality and performance of conversational agents.

36
New cards

Retrieval-Augmented Generation (RAG)

A framework that enhances language models by integrating external information retrieval to improve the accuracy and relevance of generated content.
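
A minimal sketch of the RAG pattern, assuming a pre-existing `retrieve` function and a generic `llm` callable (both hypothetical stand-ins, not a specific library).

def rag_answer(question, retrieve, llm, k=3):
    """Retrieve top-k passages, stuff them into the prompt, and generate."""
    passages = retrieve(question, k=k)           # hypothetical retriever
    context = "\n\n".join(passages)
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )
    return llm(prompt)                           # hypothetical LLM call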

37
New cards

Semantic search

A search method that finds contextually relevant information by encoding user queries and documents into vector representations, identifying the most conceptually similar documents.
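
A toy sketch of the scoring step, assuming the query and documents have already been embedded; the vectors below are made up and the embedding step is omitted.

import numpy as np

def cosine_sim(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Pretend these are embedding vectors for three documents and one query.
doc_vecs = {"doc_a": np.array([0.9, 0.1, 0.0]),
            "doc_b": np.array([0.1, 0.8, 0.3]),
            "doc_c": np.array([0.0, 0.2, 0.9])}
query_vec = np.array([0.8, 0.2, 0.1])

# Rank documents by cosine similarity to the query vector.
ranked = sorted(doc_vecs, key=lambda d: cosine_sim(query_vec, doc_vecs[d]),
                reverse=True)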

38
New cards

Vector databases

Databases that store vector embeddings, allowing for efficient similarity searches and real-time updates as new information becomes available.

39
New cards

Indexing

The process of processing and structuring external data sources, splitting documents into smaller chunks, and converting these into embeddings for efficient retrieval.

40
New cards

Embedding

A vector representation of text or other data that captures its semantic meaning, allowing for similarity comparisons during retrieval.

41
New cards

Dense Retrieval

A retrieval method that maps text segments into a dense vector space of features for efficient searching.

42
New cards

Sparse Retrieval

A retrieval method that indexes text segments using specific words for exact matching.

43
New cards

Chunking

The process of dividing documents into smaller, more manageable segments for indexing and retrieval.
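
A simple character-based chunker with overlap; the chunk size and overlap values are arbitrary and would normally be tuned per corpus.

def chunk_text(text, chunk_size=500, overlap=50):
    """Split text into overlapping chunks of roughly chunk_size characters."""
    chunks, start = [], 0
    step = chunk_size - overlap
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += step
    return chunks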

44
New cards

Query Rewriting

The process of modifying a user’s query to improve accuracy and relevance in search results.

45
New cards

Hallucination

The tendency of language models to generate plausible-sounding but false or unsupported information that is not grounded in their training data or the provided context.

46
New cards

Chunking Optimization

The practice of experimenting with different chunk sizes and strategies to balance semantic coherence and noise reduction in retrieval tasks.

47
New cards

Knowledge Graph

A graph-structured database that represents entities and their relationships, facilitating dynamic retrieval of knowledge.

48
New cards

Data Dependency

The subset of data segments indispensable for addressing a query.

49
New cards

In-domain Data

Data from the same domain as the task at hand, improving the model's relevance and accuracy.

50
New cards

Zero-Shot Learning

A type of machine learning where a model can perform a task without having seen any specific examples of that task during training.

51
New cards

Few-Shot Learning

A type of machine learning where a model can learn a new task from only a few examples.

52
New cards

In-context Learning

The ability of a language model to learn from examples provided in the prompt, without requiring explicit fine-tuning.

53
New cards

IR (Information Retrieval)

The process of finding resources in a collection (for example, documents) that are relevant to an information need.

54
New cards

Breadth-First Search (BFS)

An algorithm for traversing or searching tree or graph data structures that explores all neighbors at the current depth before moving on to nodes at the next depth level.
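
One standard way to implement it, using a queue and an adjacency-list graph:

from collections import deque

def bfs(graph, start):
    """Return nodes in breadth-first order from `start` in an adjacency-list graph."""
    visited, order = {start}, []
    queue = deque([start])
    while queue:
        node = queue.popleft()
        order.append(node)
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(neighbor)
    return order

# Example: bfs({"A": ["B", "C"], "B": ["D"], "C": [], "D": []}, "A") -> ["A", "B", "C", "D"]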

55
New cards

Fine-tuning

The process of further training a pre-trained language model on a smaller, domain-specific dataset.

56
New cards

Instruction Tuning

Supervised fine-tuning using paired (instruction, output) data to infuse new capabilities into LLMs.

57
New cards

G-Evals

An offline evaluation technique for natural language generation (NLG) that uses GPT-4 as a judge, achieving better alignment with human judgments than traditional automatic metrics.

58
New cards

Agent

A system that leverages a language model to determine which actions it should take and in what order, often using external tools.

59
New cards

Chains

A LangChain concept for connecting LLMs with additional components like prompt templates, external tools, or other LLMs, extending their capabilities.

60
New cards

ConversationBufferMemory

A type of memory in LangChain that stores the entire conversation history for an LLM to reference.

61
New cards

ConversationBufferWindowMemory

A type of memory in LangChain that only retains the last k number of conversations in the history.
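
A plain-Python sketch of the idea (keep only the last k exchanges); this illustrates the concept, not LangChain's actual implementation.

class WindowMemory:
    """Keeps only the last k (user, assistant) exchanges."""
    def __init__(self, k=3):
        self.k = k
        self.turns = []

    def save(self, user_msg, ai_msg):
        self.turns.append((user_msg, ai_msg))
        self.turns = self.turns[-self.k:]      # drop anything older than k turns

    def as_prompt(self):
        return "\n".join(f"Human: {u}\nAI: {a}" for u, a in self.turns)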

62
New cards

ConversationSummaryMemory

A type of memory in LangChain that summarizes the entire conversation history to distill it into the main points, using another LLM for summarization.

63
New cards

GGUF

A file format for storing quantized models, often used with llama.cpp.

64
New cards

LangChain

A framework designed to simplify working with LLMs through useful abstractions and modular components.

65
New cards

LLMChain

A specific type of chain in LangChain that combines an LLM with a prompt and memory to streamline interactions.

66
New cards

Model I/O

The process of loading and working with LLMs, including handling input and output formats.

67
New cards

Open LLM Leaderboard

A ranking of open-source LLMs based on performance metrics.

68
New cards

Prompt Template

A predefined structure for formatting input prompts to an LLM, ensuring consistency and reusability.
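
In plain Python this can be as simple as a format string with named placeholders; LangChain's PromptTemplate wraps the same idea with extra validation.

TEMPLATE = (
    "You are a helpful tutor.\n"
    "Explain the concept of {concept} to a {audience} in three sentences."
)

# Reuse the same structure with different inputs.
prompt = TEMPLATE.format(concept="quantization", audience="new ML engineer")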

69
New cards

Quantization

A technique for reducing the number of bits required to represent the parameters of an LLM, resulting in faster run times and reduced memory usage.
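
A toy sketch of symmetric 8-bit quantization of a weight vector; real schemes such as GPTQ or NF4 are considerably more elaborate.

import numpy as np

weights = np.array([0.42, -1.30, 0.07, 2.15], dtype=np.float32)

scale = np.abs(weights).max() / 127.0            # map the largest magnitude to 127
q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)  # 8-bit storage
dequantized = q.astype(np.float32) * scale       # approximate reconstruction

error = np.abs(weights - dequantized).max()      # small, non-zero rounding error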

70
New cards

ReAct (Reasoning and Acting)

A framework for prompting LLMs that combines reasoning and acting in an iterative process of thought, action, and observation.
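
A heavily simplified sketch of the loop, with a hypothetical `llm` callable and a fake tool registry; real ReAct prompts also include few-shot examples of the Thought/Action/Observation format.

def react_loop(question, llm, tools, max_steps=5):
    """Alternate model 'Thought/Action' output with tool 'Observation' results."""
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        step = llm(transcript + "Thought:")          # model proposes thought + action
        transcript += "Thought:" + step + "\n"
        if "Final Answer:" in step:
            return step.split("Final Answer:")[-1].strip()
        if "Action:" in step:
            tool_name, _, tool_input = step.split("Action:")[-1].strip().partition(" ")
            result = tools[tool_name](tool_input)    # run the named tool
            transcript += f"Observation: {result}\n" # feed the result back in
    return None

# tools might look like: {"search": lambda q: "...stubbed search result..."}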

71
New cards

Sequential Chains

A series of connected chains in LangChain where the output of one chain is used as the input for the next, allowing complex tasks to be broken down into smaller subtasks.
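
Conceptually this is just composition, with each step's output piped into the next; a minimal hand-rolled sketch of the idea, not LangChain's API.

def run_sequential(chains, initial_input):
    """Run a list of single-argument steps, piping each output into the next."""
    value = initial_input
    for chain in chains:
        value = chain(value)
    return value

# Example with stand-in steps (llm and long_document are hypothetical):
# summarize = lambda text: llm(f"Summarize: {text}")
# translate = lambda text: llm(f"Translate to French: {text}")
# result = run_sequential([summarize, translate], long_document)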

72
New cards

Knowledge Graph

A powerful data structure that represents information as a network of interconnected entities and relationships, mimicking human-like understanding of concepts.

73
New cards

Nodes

Elements in a knowledge graph that represent entities.

74
New cards

Edges

Connections in a knowledge graph that represent relationships between entities.

75
New cards

Entity

An object or concept within a knowledge graph, represented by a node.

76
New cards

Relationship

The connection between entities in a knowledge graph, represented by an edge.

77
New cards

LLMs and Knowledge Graphs

Serve as a bridge between unstructured data and machine-readable formats, enhancing the reasoning capabilities of language models.

78
New cards

Multimodal Models

Models designed to process and integrate different types of data, such as text, images, and audio.

79
New cards

Vision-Enhanced Large Language Models (Vision LLMs)

Models that combine visual and textual information for tasks like image captioning and visual question answering.

80
New cards

Cross-Attention Mechanisms

Techniques used in vision LLMs to align visual features with textual representations.

81
New cards

Diffusion Models

Models primarily used for image generation that learn to reverse a noise-adding process to create realistic images from noise.

82
New cards

Image-to-Text Generation

The adaptation of diffusion models for generating text based on image content.

83
New cards

Dataset Preparation for Vision LLMs

Focuses on aligning visual and textual features, emphasizing data cleaning, normalization, and tokenization.

84
New cards

Dataset Preparation for Diffusion Models

Concentrates on image normalization and augmentation to enhance visual diversity.

85
New cards

Image Quality and Variety

Factors that diffusion models are particularly sensitive to, which affect the generation of realistic outputs.

86
New cards

Data Augmentation

Techniques used to artificially expand the size of a dataset by creating modified versions of existing data.