AWS AI Practitioner: Foundations - Domain 4 - Guidelines for Responsible AI

Last updated 10:23 PM on 3/26/26
26 Terms

1

Responsible AI

The set of principles and practices involved in designing, developing, and deploying ethical, trustworthy, and accountable AI systems. Ensures that AI technologies are fair, transparent, secure, and aligned with societal values. Key aspects include fairness, explainability, robustness, privacy and security, governance, and transparency.

2

Explainability (Responsible AI)

A responsible AI principle that refers to the ability to understand and interpret the reasoning behind an AI model's decisions or outputs. Allows for evaluation, auditing, and building trust in AI systems without necessarily exposing the full internal mechanics of the model. Distinct from interpretability: explainability relies on model-agnostic approaches, whereas interpretability requires deep access to the model's internal workings.

3

Robustness (Responsible AI)

A responsible AI principle that involves developing AI systems that are reliable, fault-tolerant, and capable of minimizing errors or failures. These systems operate consistently and predictably even in unexpected or adversarial situations.

4

Veracity (Responsible AI)

A feature of responsible AI requiring that AI systems produce truthful, accurate, and non-deceptive outputs. Related to the hallucination problem in generative AI: veracious systems are designed to minimize false or fabricated content.

5

Data Bias

A type of model bias that occurs when the training data used to build an AI model is biased or underrepresents certain groups, causing the resulting model to exhibit biased predictions or decisions. Example: an AI hiring system trained on historically skewed data perpetuates those skewed recommendations.
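A quick way to see data bias concretely is to compute simple representation metrics on a training set before any model is built. The sketch below uses fabricated hiring numbers and two common pre-training bias measures (group imbalance and the difference in positive-label rates between groups); the function names are illustrative, not an AWS API.

```python
# Toy illustration of detecting data bias before training.
# Metric names echo common pre-training bias measures; all data is fabricated.

def class_imbalance(group_a_count, group_d_count):
    """Normalized difference in group representation, in [-1, 1]."""
    total = group_a_count + group_d_count
    return (group_a_count - group_d_count) / total

def label_proportion_difference(pos_a, total_a, pos_d, total_d):
    """Difference in positive-label (e.g., 'hired') rates between two groups."""
    return pos_a / total_a - pos_d / total_d

# Hypothetical hiring dataset: 800 applicants from group A, 200 from group D.
ci = class_imbalance(800, 200)                          # 0.6: A overrepresented
dpl = label_proportion_difference(400, 800, 50, 200)    # 0.5 - 0.25 = 0.25

print(ci, dpl)
```

Values far from zero on either metric flag a skewed dataset before any training cost is incurred.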

6

Algorithm Bias

A type of bias introduced by the algorithms and model architectures used in AI systems, independent of the training data. Occurs because of inherent assumptions made by the algorithm, or when ML models are optimized for performance metrics rather than fairness objectives.

7

Interaction Bias

A type of bias that arises from the way humans use AI systems or the context in which the AI is deployed. Example: a facial recognition system tested primarily on one demographic group is deployed broadly and performs poorly on underrepresented groups.

8

Bias Amplification

A phenomenon where AI systems learn and perpetuate existing societal biases from training data, potentially amplifying them at scale. As AI adoption increases, amplified biases can cause unfair treatment or discrimination — especially through social media and recommendation systems.

9

Human Audits (AI Bias)

A responsible AI practice where human reviewers systematically evaluate AI system outputs, training data, and decision patterns to identify bias, fairness violations, or other ethical concerns that automated tools may miss.

10

Model Evaluation Jobs

A capability available through Amazon SageMaker Clarify and Amazon Bedrock that allows practitioners to assess and compare large language models across multiple evaluation dimensions, including bias/stereotyping, toxicity, factual accuracy, and semantic robustness, using built-in or custom prompt datasets scored automatically or by human evaluators.

11

Prompt Stereotyping (Evaluation Dimension)

An FM evaluation dimension that assesses whether a model generates outputs that reinforce or reproduce harmful stereotypes based on demographic or social characteristics embedded in prompts.

12

Toxicity (Evaluation Dimension)

An FM evaluation dimension that measures whether model outputs contain offensive, harmful, abusive, or otherwise toxic language — a key safety criterion for responsible AI deployment.

13

Factual Knowledge (Evaluation Dimension)

An FM evaluation dimension that assesses the accuracy of the model's factual claims and its ability to recall correct information from its training data.

14

Semantic Robustness (Evaluation Dimension)

An FM evaluation dimension that measures a model's ability to produce consistent, coherent outputs when given semantically equivalent inputs that are phrased differently — testing resilience to input variation.
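The idea can be illustrated with a toy consistency check: send paraphrases of one question to a model and measure how often the outputs agree. The stub model below is fabricated for illustration; real robustness evaluations use curated paraphrase datasets and semantic similarity rather than exact match.

```python
# Toy semantic-robustness check: does a model answer paraphrases of the
# same question consistently? The "model" here is a fabricated stub.

def stub_model(prompt: str) -> str:
    # Pretend model: answers capital-city questions about France.
    return "Paris" if "france" in prompt.lower() else "unknown"

def consistency_score(model, paraphrases):
    """Fraction of paraphrase pairs that yield identical outputs."""
    outputs = [model(p) for p in paraphrases]
    pairs = [(a, b) for i, a in enumerate(outputs) for b in outputs[i + 1:]]
    if not pairs:
        return 1.0
    return sum(a == b for a, b in pairs) / len(pairs)

prompts = [
    "What is the capital of France?",
    "Name France's capital city.",
    "The capital city of France is?",
]
print(consistency_score(stub_model, prompts))  # 1.0 for this stub
```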

15

Amazon Augmented AI (Amazon A2I)

An AWS service that combines ML model predictions with human review workflows. High-confidence predictions are returned automatically to the client application; low-confidence predictions are routed to human reviewers. Reviewer feedback is consolidated using weighted scores, stored in Amazon S3, and fed back into the model's training dataset to improve performance over time. Exemplifies a human-centered approach to AI.
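The routing logic at the heart of this pattern is simple to sketch: compare a prediction's confidence to a threshold, return high-confidence results directly, and queue the rest for human review. The threshold and data below are illustrative, not the A2I API.

```python
# Sketch of the confidence-routing pattern Amazon A2I implements:
# high-confidence predictions pass through; low-confidence ones are
# queued for human review. Threshold and data are illustrative.

CONFIDENCE_THRESHOLD = 0.90

def route(prediction: str, confidence: float, human_queue: list):
    """Return the automatic result, or enqueue the case for human review."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return prediction
    human_queue.append((prediction, confidence))
    return None  # caller awaits the human-reviewed result

queue = []
print(route("approved", 0.97, queue))  # 'approved' (auto-returned)
print(route("approved", 0.55, queue))  # None (sent to a human)
print(len(queue))                      # 1
```

In the real service, reviewed results would then flow back into the training set, closing the improvement loop the card describes.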

16

Transparent Model

An AI model that provides deep insight into its internal mechanics and decision-making processes through interpretable algorithms (e.g., linear regression, decision trees). Transparent models make it easier to identify potential biases or unfair decisions but may involve performance and security tradeoffs.

17

Explainable Model

An AI model that provides a high-level understanding of model behavior without exposing intricate internal details — typically using model-agnostic or black-box approaches. Explainable models prioritize making outputs understandable to stakeholders without requiring access to the model's full internal architecture.

18

Black-Box Approach (Explainability)

A model-agnostic explainability approach that treats the model's internal workings as opaque and analyzes inputs and outputs to generate human-understandable explanations of model behavior — without requiring access to internal model weights or architecture.
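The distinction is easy to demonstrate with permutation importance, a classic black-box technique: shuffle one feature's values and measure how much accuracy drops, querying the model only through its predictions. The model and data below are fabricated toys.

```python
import random

# Model-agnostic (black-box) explanation via permutation importance.
# We only query the model's predictions; we never inspect its internals.
# The "model" and dataset below are fabricated for illustration.

def model(row):
    # Hidden logic: the output depends only on feature 0.
    return 1 if row[0] > 0.5 else 0

def accuracy(model, rows, labels):
    return sum(model(r) == y for r, y in zip(rows, labels)) / len(rows)

def permutation_importance(model, rows, labels, feature_idx, seed=0):
    """Accuracy drop when one feature's values are shuffled across rows."""
    rng = random.Random(seed)
    base = accuracy(model, rows, labels)
    shuffled_vals = [r[feature_idx] for r in rows]
    rng.shuffle(shuffled_vals)
    permuted = [list(r) for r in rows]
    for r, v in zip(permuted, shuffled_vals):
        r[feature_idx] = v
    return base - accuracy(model, permuted, labels)

rows = [[0.9, 0.1], [0.2, 0.8], [0.7, 0.3], [0.1, 0.9]]
labels = [model(r) for r in rows]  # labels agree with the hidden logic

print(permutation_importance(model, rows, labels, 0))  # drop >= 0, usually large
print(permutation_importance(model, rows, labels, 1))  # 0.0: feature 1 is unused
```

Because the probe never opens the model, the same code works for any predictor, which is exactly what "model-agnostic" means.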

19

Interpretability vs. Explainability Tradeoff

The design tension between transparency (interpretability) and performance: interpretable models (decision trees, linear regression) offer clearer decision logic but may sacrifice accuracy; complex models (neural networks) achieve higher accuracy but are inherently less interpretable. Regulatory requirements may mandate interpretability even at the cost of performance.

20

Human-Centered AI

An approach to AI development that places human needs, values, and well-being at the forefront. It emphasizes prioritizing human requirements, incorporating diverse perspectives and expertise, and enhancing human capabilities rather than replacing them. It recognizes the importance of human oversight and collaboration, ensuring AI remains a tool to augment human agency.

21

AI Service Cards

AWS documentation resources that provide centralized transparency information for specific AWS AI services (including Amazon Comprehend, Amazon Transcribe, Amazon Rekognition, Amazon Textract, Amazon Titan, and Amazon Nova). Each card covers intended use cases and limitations, responsible AI design considerations, and deployment and performance optimization guidance.

22

Amazon SageMaker Model Cards

A SageMaker feature that enables practitioners to document the full lifecycle of a custom ML model — from design and training to evaluation — in a structured model card. When a model card is created in SageMaker AI, model details (training process, datasets, containers used) are automatically populated. Supports transparency and responsible deployment of custom models.

23

Feature Attributions (Explainability)

A model explainability technique that quantifies the contribution of each input feature to an ML model's prediction. Implemented in Amazon SageMaker Clarify using Global Shapley Additive Explanations (SHAP) values — which rank input features by their relative importance in driving model decisions across tabular, computer vision, and NLP models.
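The Shapley idea behind these attributions can be computed exactly for a tiny model: average each feature's marginal contribution over every possible feature ordering. The two-feature model, instance, and baseline below are fabricated for illustration; production tools like SageMaker Clarify approximate this computation at scale.

```python
from itertools import permutations
from math import factorial

# Exact Shapley attributions for a tiny 2-feature model, computed by
# averaging each feature's marginal contribution over all orderings.
# This is the additive-attribution idea SHAP implements at scale.

def f(a, b):
    return 2 * a + 3 * b  # a transparent stand-in for an ML model

def shapley_values(f, instance, baseline):
    n = len(instance)
    phi = [0.0] * n
    for order in permutations(range(n)):
        current = list(baseline)
        for i in order:
            before = f(*current)
            current[i] = instance[i]       # switch feature i on
            phi[i] += f(*current) - before  # its marginal contribution
    return [p / factorial(n) for p in phi]

phi = shapley_values(f, instance=[1, 1], baseline=[0, 0])
print(phi)  # [2.0, 3.0]: each feature's weight is recovered exactly
```

The attributions sum to the gap between the model's prediction and the baseline prediction, which is what makes them useful for ranking feature influence.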

24

SHAP Values (Shapley Additive Explanations)

A mathematical framework used by Amazon SageMaker Clarify's model explainability feature to calculate the relative importance of each input feature in driving an ML model's predictions. These values provide consistent, theoretically grounded attributions that can be visualized to identify the most influential features and detect potential biases.

25

Partial Dependence Plots (PDPs)

A model explainability visualization tool integrated into Amazon SageMaker Clarify that shows the marginal effect of one or two input features on the predicted outcome of a model — helping practitioners understand how changing a specific feature value impacts model predictions independently of other features.
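The mechanics can be sketched in a few lines: fix one feature at each grid value, average the model's predictions over the rest of the dataset, and the resulting curve is the partial dependence. The model and data below are toys, not SageMaker Clarify output.

```python
# Minimal partial dependence: sweep one feature along a grid while
# averaging predictions over the dataset. Model and data are fabricated.

def model(x, z):
    return 2 * x + z  # prediction depends linearly on both features

def partial_dependence(model, rows, feature_grid):
    """Average prediction as the first feature sweeps the grid."""
    curve = []
    for g in feature_grid:
        preds = [model(g, z) for (_x, z) in rows]
        curve.append(sum(preds) / len(preds))
    return curve

rows = [(0.1, 1.0), (0.4, 3.0), (0.9, 2.0)]        # (x, z) observations
print(partial_dependence(model, rows, [0, 1, 2]))  # [2.0, 4.0, 6.0]
```

The curve rises by 2 per unit of the swept feature, recovering the model's marginal effect independently of the other feature, which is exactly what a PDP visualizes.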

26

Amazon Bedrock Guardrails

A configurable Amazon Bedrock capability that allows developers to implement safeguards controlling the behavior of foundation models in deployed applications. Guardrails can block harmful content, enforce topic restrictions, redact PII, and prevent the model from generating outputs that violate defined responsible AI policies.
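As a hedged sketch of how a guardrail is attached at inference time, the helper below builds a request payload in the shape of the boto3 Converse API (a `guardrailConfig` carrying `guardrailIdentifier` and `guardrailVersion`, as I understand the interface). The model id and guardrail id are hypothetical, no AWS call is made, and the field names should be verified against current AWS documentation.

```python
# Hedged sketch: attaching a guardrail to a Bedrock runtime request.
# Field names follow the boto3 Converse API as understood here; verify
# against current AWS docs. No AWS call is made; ids are hypothetical.

def build_converse_request(model_id, user_text, guardrail_id, guardrail_version):
    return {
        "modelId": model_id,
        "messages": [{"role": "user", "content": [{"text": user_text}]}],
        "guardrailConfig": {
            "guardrailIdentifier": guardrail_id,
            "guardrailVersion": guardrail_version,
        },
    }

req = build_converse_request(
    "anthropic.claude-3-haiku-20240307-v1:0",  # example model id
    "Tell me about my account balance.",
    guardrail_id="gr-example123",              # hypothetical guardrail id
    guardrail_version="1",
)
print(sorted(req))  # ['guardrailConfig', 'messages', 'modelId']
```

With the guardrail referenced this way, Bedrock evaluates both the prompt and the model output against the configured policies before anything is returned to the application.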
