Vocabulary flashcards covering core AWS AI services, machine-learning concepts, responsible-AI principles, and key metrics for the AWS Certified AI Practitioner exam.
Accuracy
Metric that measures the proportion of correct predictions out of all predictions; can be misleading on imbalanced datasets.
Activation Function
Mathematical function in a neural-network node that introduces non-linearity (e.g., ReLU, Sigmoid, Tanh).
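
For intuition, a minimal NumPy sketch of two common activation functions (ReLU and Sigmoid); the example values in the comments are approximate.

```python
import numpy as np

def relu(x):
    # ReLU keeps positive values and zeroes out negatives
    return np.maximum(0, x)

def sigmoid(x):
    # Sigmoid squashes any real value into the range (0, 1)
    return 1 / (1 + np.exp(-x))

x = np.array([-2.0, 0.0, 3.0])
print(relu(x))     # [0. 0. 3.]
print(sigmoid(x))  # approx. [0.12 0.5 0.95]
```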
Agents for Amazon Bedrock
Bedrock feature that lets foundation models plan and execute multi-step tasks through API calls across company systems.
Amazon Bedrock
Fully managed AWS service providing API access to multiple high-performing foundation models for building generative-AI apps.
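
A minimal boto3 sketch of invoking a Bedrock text model; the model ID and request-body shape are illustrative (they vary by model family), so check the current Bedrock documentation before relying on them.

```python
import json
import boto3

# Inference calls go through the "bedrock-runtime" client
client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.invoke_model(
    modelId="amazon.titan-text-express-v1",  # illustrative model ID
    body=json.dumps({
        "inputText": "Summarize the benefits of managed AI services.",
        "textGenerationConfig": {"maxTokenCount": 256, "temperature": 0.5},
    }),
)

# Response body format also varies by model family
result = json.loads(response["body"].read())
print(result["results"][0]["outputText"])
```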
Amazon Comprehend
NLP service that extracts entities, key phrases, sentiment, language, and more from unstructured text.
Amazon Kendra
Enterprise search service that answers natural-language questions by searching across multiple content repositories.
Amazon Lex
Service for building chatbots and voice bots using automatic speech recognition and natural-language understanding.
Amazon Polly
Text-to-speech (TTS) service that converts written text into lifelike spoken audio in multiple voices and languages.
Amazon Rekognition
Computer-vision service for image and video analysis (object detection, facial analysis, content moderation, etc.).
Amazon SageMaker
End-to-end managed platform to build, train, and deploy machine-learning models at scale.
Amazon Titan
Family of AWS-built foundation models (text, image, embeddings) available exclusively through Amazon Bedrock.
Amazon Transcribe
Automatic speech-recognition (ASR) service that turns spoken audio into text.
Amazon Translate
Neural machine-translation service for fast, high-quality translation between languages.
Artificial Intelligence (AI)
Broad field of computer science aimed at creating systems that perform tasks requiring human intelligence.
Bias (AI)
Systematic error causing prejudiced outcomes, often due to non-representative or biased training data.
Chatbot
Software application that simulates human conversation via text or voice.
Classification
Supervised-learning task that predicts a categorical label (e.g., spam vs. not spam).
Clustering
Unsupervised-learning technique that groups similar data points without pre-existing labels.
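
As a quick illustration, a minimal scikit-learn K-Means sketch: the algorithm groups the points purely from their structure, with no labels supplied.

```python
import numpy as np
from sklearn.cluster import KMeans

# Two loose groups of 2-D points, with no labels attached
points = np.array([[1, 1], [1.2, 0.9], [0.8, 1.1],
                   [5, 5], [5.1, 4.8], [4.9, 5.2]])

# K-Means discovers the groups from the data alone
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
print(kmeans.labels_)  # e.g., [0 0 0 1 1 1] -- cluster IDs, not class labels
```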
Computer Vision
AI field enabling computers to interpret and derive information from images or videos.
Context Window
Maximum number of tokens a foundation model considers in one request (prompt + output).
Deep Learning
Subset of machine learning that uses multilayer neural networks to learn complex patterns.
Domain-Adaptation Fine-Tuning
Technique that adapts a foundation model to a specific domain by training on large unlabeled domain text.
Embeddings
Vector representations that capture semantic meaning of text, images, or other data for tasks like search.
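
A toy NumPy sketch of comparing embeddings with cosine similarity; the vectors here are made up, whereas a real embedding model (e.g., a Titan embeddings model) returns vectors with hundreds of dimensions.

```python
import numpy as np

def cosine_similarity(a, b):
    # Similar meanings -> vectors point in similar directions -> score near 1
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Made-up 4-dimensional embeddings for illustration
cat    = np.array([0.8, 0.1, 0.3, 0.0])
kitten = np.array([0.7, 0.2, 0.4, 0.1])
car    = np.array([0.0, 0.9, 0.1, 0.8])

print(cosine_similarity(cat, kitten))  # high: semantically close
print(cosine_similarity(cat, car))     # low: semantically distant
```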
Epoch
One complete pass through the entire training dataset during model training.
Explainability
Responsible-AI principle focused on understanding and communicating how a model makes its decisions.
Fairness
Responsible-AI pillar ensuring model predictions are not biased against protected subgroups.
Fine-Tuning
Further training a pre-trained foundation model on a smaller labeled dataset to specialize it.
Foundation Model (FM)
Large, pre-trained deep-learning model that can be adapted for many downstream tasks (e.g., GPT-4, Titan).
Generative AI
AI class that creates new content (text, images, code, etc.) resembling the data it was trained on.
Hallucination
When a generative model outputs false or nonsensical information while appearing confident.
Hyperparameter
User-set configuration (e.g., learning rate, batch size) that governs the training process but is not learned.
Inference
Using a trained model to generate predictions or decisions on new, unseen data.
Instruction-Based Fine-Tuning
Supervised fine-tuning method where training data is formatted as prompt/completion instruction pairs.
Knowledge Bases for Amazon Bedrock
Managed feature that implements Retrieval-Augmented Generation by linking Bedrock models to company data in S3.
Labeled Data
Dataset where each sample is tagged with the correct output; required for supervised learning.
Large Language Model (LLM)
Type of foundation model trained on massive text corpora to understand and generate human-like language.
Latency
Time delay between sending a prompt and receiving a model’s response; critical for real-time apps.
Machine Learning (ML)
AI subset where algorithms learn patterns from data to make predictions without explicit programming.
Modality
Type of data a model processes—text, images, audio, video; a model handling multiple types is multimodal.
Multi-Turn Messaging
Stateful dialogue with multiple back-and-forth exchanges, limited by the model’s context window.
Natural Language Processing (NLP)
AI field focused on enabling computers to understand, interpret, and generate human language.
Neural Network
Computing architecture inspired by the brain, consisting of interconnected layers of artificial neurons.
Overfitting
When a model learns noise in training data, harming its performance on new data.
Parameter
Internal value (weight or bias) learned by a model during training.
Precision
Percentage of positive predictions that are actually correct; True Positives ÷ (True Positives + False Positives).
Prompt
Input text provided to a foundation model instructing it to perform a task or generate content.
Prompt Engineering
Crafting and refining prompts to steer a foundation model toward desired outputs.
Provisioned Throughput
Bedrock feature that reserves dedicated inference capacity for consistent, scalable performance.
Recall
Percentage of actual positives correctly identified; True Positives ÷ (True Positives + False Negatives).
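
To tie the Accuracy, Precision, and Recall cards together, a small worked example with made-up confusion-matrix counts, showing why accuracy alone can look flattering on an imbalanced dataset.

```python
# Made-up confusion-matrix counts: true/false positives and negatives
tp, fp, fn, tn = 80, 10, 20, 890

accuracy  = (tp + tn) / (tp + tn + fp + fn)
precision = tp / (tp + fp)
recall    = tp / (tp + fn)

print(f"Accuracy:  {accuracy:.2f}")   # 0.97 -- inflated by the many true negatives
print(f"Precision: {precision:.2f}")  # 0.89 -- correctness of positive predictions
print(f"Recall:    {recall:.2f}")     # 0.80 -- coverage of actual positives
```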
Regression
Supervised-learning task that predicts a continuous numerical value.
Reinforcement Learning
ML paradigm where an agent learns to make sequential decisions by maximizing cumulative reward.
Responsible AI
Designing, developing, and deploying AI that is safe, ethical, fair, and transparent.
Retrieval-Augmented Generation (RAG)
Framework that improves LLM outputs by retrieving relevant knowledge and appending it to the prompt.
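
A minimal, self-contained sketch of the RAG pattern; the `retrieve` function is a hypothetical keyword matcher standing in for a real retriever (e.g., a vector store behind Knowledge Bases for Amazon Bedrock), and the final prompt would normally be sent to an LLM rather than printed.

```python
def retrieve(query, documents, top_k=2):
    # Hypothetical keyword retriever standing in for a real vector search
    scored = [(sum(w in doc.lower() for w in query.lower().split()), doc)
              for doc in documents]
    return [doc for score, doc in sorted(scored, reverse=True)[:top_k] if score > 0]

documents = [
    "Refunds are processed within 5 business days.",
    "Support is available Monday through Friday.",
    "Shipping is free on orders over $50.",
]

query = "How long do refunds take?"
context = "\n".join(retrieve(query, documents))

# Retrieved passages are prepended to the prompt before calling the model
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)
```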
Single-Turn Messaging
Stateless interaction consisting of one user input and one model output with no memory of past turns.
Supervised Learning
Machine-learning approach where models learn from labeled data; includes classification and regression.
Token
Basic unit of text (word, sub-word, or punctuation) processed by a language model.
Training
Process of feeding data to a model so it learns the values of its parameters for accurate predictions.
Transparency
Responsible-AI principle that provides clear information on an AI system’s purpose, workings, and limitations.
Unlabeled Data
Dataset without tags or annotations; used in unsupervised learning.
Unsupervised Learning
Machine-learning approach that discovers patterns in unlabeled data (e.g., clustering).
Validation Data
Subset of data used during training to evaluate model performance and tune hyperparameters, helping prevent overfitting.
EC2 Bootstrapping
Process of running scripts on an EC2 instance at first launch, typically via user data, to automatically install software and apply configurations without manual intervention.
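
A minimal boto3 sketch of bootstrapping via user data; the AMI ID is a placeholder and the script contents are illustrative (installing and starting Apache on an Amazon Linux-style instance).

```python
import boto3

# Illustrative bootstrap script passed as user data; it runs once at first boot
user_data = """#!/bin/bash
yum update -y
yum install -y httpd
systemctl enable --now httpd
"""

ec2 = boto3.client("ec2", region_name="us-east-1")
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    UserData=user_data,               # boto3 base64-encodes this automatically
)
```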