AWS AI Practitioner: Foundations - Domain 5 - Security, Governance, and Compliance for GenAI

Last updated 10:25 PM on 3/26/26
50 Terms

1. AWS Identity and Access Management (IAM)

An AWS service that enables granular control over who can access AWS resources and what actions they can perform. In AI/ML security, assigned roles and policies govern which users, services, and applications can interact with SageMaker, Bedrock, and other AI services. Supports the principle of least privilege by granting only the permissions required for a task.

2. Principle of Least Privilege

A security best practice requiring that each user, role, or service be granted only the minimum permissions necessary to perform its intended function. Applied in AI/ML deployments to prevent unauthorized access to training data, model artifacts, and inference endpoints.
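Least privilege is usually expressed concretely as a narrowly scoped IAM policy document. A minimal sketch, assuming a hypothetical training-data bucket name — the JSON grammar (`Version`, `Statement`, `Action`, `Resource`) is the standard IAM policy format, but the bucket and statement names are illustrative:

```python
import json

# Hypothetical least-privilege policy: read-only access to a single
# training-data bucket and nothing else. Bucket name is illustrative.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ReadTrainingDataOnly",
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-training-data",
                "arn:aws:s3:::example-training-data/*",
            ],
        }
    ],
}

print(json.dumps(policy, indent=2))
```

Attaching only such task-scoped policies to a SageMaker execution role — rather than broad `s3:*` permissions — is the practical application of this principle.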

3. Amazon SageMaker Role Manager

A SageMaker feature that streamlines the creation of IAM roles for different user personas and ML activities. Automates the role creation process to enforce least-privilege access without manual permission configuration. Provides three predefined personas: Data Scientist (data exploration, model training, experimentation), ML Operations/MLOps (model deployment, monitoring, pipeline management), and SageMaker AI Compute (permissions used by SageMaker compute resources, such as training jobs, to access data and other AWS resources).

4. Amazon Macie

An AWS data security service that uses ML to automatically discover, classify, and protect sensitive data stored in Amazon S3. In AI/ML contexts, it identifies PII and other sensitive content in training datasets and data lakes, supporting data privacy compliance and secure data engineering practices.
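Macie itself is a managed service, but the underlying idea — pattern-based classification of sensitive content — can be illustrated with a toy scanner. These regexes are deliberately simplified examples, not Macie's actual detection rules:

```python
import re

# Toy PII detector: simplified patterns for illustration only.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def find_pii(text: str) -> dict:
    """Return the PII categories (and matches) found in a text sample."""
    return {name: pat.findall(text)
            for name, pat in PII_PATTERNS.items()
            if pat.search(text)}

sample = "Contact jane.doe@example.com, SSN 123-45-6789."
print(find_pii(sample))
```

A real service layers many detectors (managed data identifiers, custom identifiers, ML classification) on top of this basic matching idea.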

5. Virtual Private Cloud (VPC)

A logically isolated section of the AWS cloud where users can launch AWS resources within a defined virtual network. For SageMaker AI, configuring a VPC with private subnets and VPC-only network access type enhances security by preventing internet exposure of training and inference workloads.

6. VPC Interface Endpoints (PrivateLink)

Private endpoints within a VPC that allow resources to connect to specific AWS services (such as Amazon S3, CloudWatch Logs, SageMaker Runtime, and the SageMaker API) without traversing the public internet. Implemented using elastic network interfaces; a key security control for isolating SageMaker AI Studio workloads.

7. AWS PrivateLink

An AWS networking service that enables private connectivity between VPCs and AWS services without exposing traffic to the public internet. In AI/ML deployments, it is used to securely access SageMaker, Bedrock, and other AI services from within a VPC, reducing data exposure risk.

8. Encryption at Rest

The protection of stored data by converting it into an unreadable format using cryptographic algorithms. Applied to AI/ML training datasets, model artifacts, and inference outputs stored in services such as Amazon S3, Amazon EFS, and Amazon EBS to prevent unauthorized access to sensitive data.

9. Encryption in Transit

The protection of data as it moves between systems using secure communication protocols (typically TLS/SSL). In AI/ML security, it secures data exchanged between ML clients, SageMaker endpoints, Amazon Bedrock APIs, and storage services.
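AWS SDK clients use TLS by default, so on the client side "encryption in transit" mostly means not weakening the defaults. A minimal stdlib sketch showing what a properly configured TLS client context enforces:

```python
import ssl

# A default client context enforces certificate validation and hostname
# checking -- the baseline for TLS-protected data in transit.
ctx = ssl.create_default_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # certificate validation on
print(ctx.check_hostname)                    # hostname verification on
```

Disabling either setting (a common shortcut in test code) silently removes the protection this card describes.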

10. AWS Key Management Service (AWS KMS)

An AWS managed service for creating and controlling cryptographic keys used to encrypt data across AWS services. Used to encrypt training data, model artifacts, and SageMaker notebook volumes, ensuring only authorized services and roles can decrypt AI/ML assets.

11. Data Lineage

The complete record of a data asset's origin, movement, transformation, and usage throughout the ML lifecycle. It enables organizations to trace how training data was collected, processed, and applied — supporting compliance, debugging, and governance of AI systems. Implemented in AWS via Amazon SageMaker ML Lineage Tracking and AWS Glue DataBrew.

12. Data Cataloging

The process of organizing and registering metadata about data assets in a centralized repository so they can be discovered, understood, and governed. The AWS Glue Data Catalog enables teams to find and reuse datasets, understand their schema and lineage, and enforce access policies.

13. Amazon SageMaker ML Lineage Tracking

A SageMaker feature that automatically creates a visual, end-to-end representation of ML workflows by tracking artifacts and their relationships: dataset artifacts, training job trial components, input data config artifacts, model artifacts, model package artifacts, and endpoint contexts. Enables model governance, reproducibility, compliance auditing, and the ability to identify all endpoints affected by a problematic dataset.

14. Amazon SageMaker Model Registry

A SageMaker feature that serves as a centralized repository for cataloging and managing ML models for production use. Key capabilities: catalog models for production, view and compare model version details, associate metadata (descriptions, tags, training metrics) with model versions, and manage model approval status (pending, approved, or rejected) for governance and compliance purposes.

15. Amazon SageMaker Model Dashboard

A centralized portal within the SageMaker AI console that provides a unified interface for model management. Key capabilities: access a centralized model portal, track SageMaker Model Monitor job status, view model lineage graphs, and view endpoint performance metrics — all in one consolidated console experience.

16. Version Control (ML Artifacts) & Applications

The systematic practice of tracking changes to ML artifacts over time to ensure reproducibility, support collaboration, and enable rollback. In ML workflows, it applies to: code (GitHub, GitLab, Bitbucket), datasets (Amazon S3 partitioning), containers (Amazon ECR), training jobs (hyperparameters and configurations), models (Amazon SageMaker Model Registry), and endpoints (configuration and deployment tracking).

17. AI System Vulnerabilities

Security weaknesses that can be exploited at various stages of the AI system lifecycle. Categories include: training data vulnerabilities (mislabeled or manipulated data causing incorrect model behavior), input data vulnerabilities (adversarial inputs designed to cause misclassification), output data vulnerabilities, model vulnerabilities (inversion and theft), and LLM vulnerabilities (susceptibility to prompt injection attacks).

18. Model Inversion Attack

A security threat in which an adversary makes repeated queries to an AI model and analyzes its outputs to reconstruct the training data or infer sensitive information used during training. Example: repeatedly querying a facial recognition model to reconstruct employee images. Mitigated through proper data governance, access controls, and output rate limiting.

19. Model Theft

A security threat in which an adversary uses a target model's input-output pairs to train a new, functionally equivalent model — effectively stealing the model's learned behavior without access to the original training data or weights. Mitigated through access controls, query rate limiting, and secure model deployment.

20. Adversarial Input Attack

A security threat in which subtle, intentional modifications are made to input data (e.g., imperceptible changes to an image) to cause a model to misclassify or misbehave. Example: manipulating an image to trick a facial recognition system into misidentifying an individual. Defended against using input validation and sanitization.

21. Input Validation and Sanitization

A security practice that verifies and cleanses incoming data before it is processed by an AI model. Helps defend against adversarial input attacks and prompt injection attacks by ensuring only well-formed, expected inputs reach the model.
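A minimal sketch of what such a validation layer might look like in front of a model endpoint. The length cap and the injection pattern are illustrative assumptions — real defenses are layered and model-specific, not a single regex:

```python
import re

MAX_LEN = 2000  # illustrative length cap

# Illustrative prompt-injection pattern; real filters use many signals.
SUSPICIOUS = re.compile(r"ignore (all )?previous instructions", re.IGNORECASE)

def sanitize_prompt(raw: str) -> str:
    """Strip control characters, collapse whitespace, enforce a length cap,
    and reject inputs matching a known-bad pattern."""
    cleaned = "".join(ch for ch in raw if ch.isprintable() or ch in "\n\t")
    cleaned = re.sub(r"\s+", " ", cleaned).strip()[:MAX_LEN]
    if SUSPICIOUS.search(cleaned):
        raise ValueError("rejected: possible prompt-injection pattern")
    return cleaned

print(sanitize_prompt("  Summarize\x00 this   report.\n"))
# → "Summarize this report."
```

The same shape applies to non-text models: verify type, range, and encoding before the input ever reaches inference.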

22. Adversarial Training

A model robustness technique that intentionally includes adversarial examples in the training data to make the model more resistant to adversarial input attacks. One of several proactive security measures recommended for securing AI systems throughout their lifecycle.

23. AWS CloudTrail

An AWS service that records API calls and account activity across AWS services, producing logs for security analysis, compliance auditing, and operational troubleshooting. In AI/ML governance, it logs access to SageMaker, Bedrock, and other AI services to maintain a tamper-evident audit trail.

24. AWS Config

An AWS service that continuously monitors and records the configurations of AWS resources and evaluates them against defined compliance rules. Used in AI governance to assess whether AI/ML infrastructure configurations comply with organizational security and compliance policies.

25. Amazon Inspector

An AWS automated vulnerability management service that continuously scans AWS workloads (EC2 instances, container images in ECR, Lambda functions) for software vulnerabilities and network exposures. Used in AI/ML security to identify and remediate vulnerabilities in the infrastructure hosting AI models.

26. AWS Audit Manager

An AWS service that helps organizations continuously audit their AWS usage to assess risk and maintain compliance with regulations and internal policies. Automates evidence collection for audits, mapping AWS resource usage to compliance framework requirements relevant to AI system deployments.

27. AWS Trusted Advisor

An AWS service that provides real-time guidance and recommendations to help optimize AWS infrastructure across five categories: cost optimization, performance, security, fault tolerance, and service limits. Used in AI/ML security posture management to identify potential security misconfigurations.

28. Vulnerability Management

A security practice involving the continuous identification, assessment, prioritization, and remediation of security vulnerabilities in AI systems and supporting infrastructure. Includes scanning infrastructure (Amazon Inspector), reviewing model architectures for attack surfaces, and updating dependencies to address known CVEs.

29. ISO 42001

An emerging international AI management system standard focused on assessing and managing risks specific to AI systems. Establishes a framework for organizations to systematically address and control risks related to AI development and deployment, emphasizing responsible AI practices and governance.

30. ISO 23894

An emerging international AI risk management standard that focuses specifically on AI risk assessment and treatment processes. Provides guidance for identifying, analyzing, and evaluating risks associated with AI systems.

31. EU Artificial Intelligence Act (EU AI Act)

A European Union regulation that categorizes AI applications based on risk levels: (1) Unacceptable risk — applications that are banned outright (e.g., social scoring, certain emotion-recognition systems); (2) High risk — applications subject to legal requirements for risk management, data governance, and compliance documentation (e.g., CV-scanning tools); (3) Unregulated — applications not explicitly banned or listed as high risk.

32. NIST AI Risk Management Framework (AI RMF)

A voluntary framework published by the National Institute of Standards and Technology (NIST) that provides structured guidance for governing, mapping, measuring, and managing AI risks. Helps organizations promote trustworthy and responsible AI system development and use through four core functions: Govern, Map, Measure, and Manage.

33. AI Risk Management

The structured process of identifying, assessing, and mitigating risks associated with AI systems. Involves estimating risk based on two key factors: probability of occurrence and severity of consequences. Risk levels range from very low to critical, with mitigation strategies prioritized accordingly. Referenced in frameworks such as the NIST AI RMF.

34. Risk Matrix

A tool used in AI risk management that categorizes risks by combining the likelihood of occurrence (ranging from rare to frequent) with the severity of impact (ranging from very low to extreme). Outputs a risk level (very low, low, medium, high, or critical) that guides prioritization and resource allocation for risk mitigation.
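The likelihood x severity mapping can be sketched as a small scoring function. The numeric scales and cutoffs below are illustrative, not taken from any specific standard:

```python
# Minimal risk-matrix sketch: likelihood x severity -> risk level.
# Scales and cutoffs are illustrative, not from a specific framework.
LIKELIHOOD = {"rare": 1, "unlikely": 2, "possible": 3, "likely": 4, "frequent": 5}
SEVERITY = {"very_low": 1, "low": 2, "moderate": 3, "high": 4, "extreme": 5}

def risk_level(likelihood: str, severity: str) -> str:
    score = LIKELIHOOD[likelihood] * SEVERITY[severity]
    if score >= 20:
        return "critical"
    if score >= 12:
        return "high"
    if score >= 6:
        return "medium"
    if score >= 3:
        return "low"
    return "very low"

print(risk_level("frequent", "extreme"))  # → "critical"
print(risk_level("rare", "very_low"))     # → "very low"
```

The output level then drives prioritization: critical risks get immediate mitigation and resourcing, low risks may simply be documented and monitored.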

35. System and Organization Controls (SOC)

A set of audit standards developed by the American Institute of CPAs (AICPA) that evaluate the security, availability, processing integrity, confidentiality, and privacy of cloud service provider systems. AWS SOC reports (SOC 1, SOC 2, SOC 3) provide third-party assurance that AWS infrastructure meets these standards — available via AWS Artifact.

36. ISO 9001

An international standard for quality management systems. AWS maintains ISO 9001 certification at a global level, demonstrating that its cloud services and processes meet rigorous quality management requirements relevant to AI/ML infrastructure.

37. ISO 27001

An international standard for information security management systems (ISMS). AWS ISO 27001 certification demonstrates that its security controls are systematically managed and aligned with international best practices for protecting information assets — including AI training data and model artifacts.

38. AWS Artifact

An AWS self-service compliance portal that provides on-demand access to AWS security and compliance reports (such as SOC reports and ISO certifications) and online agreements. Enables organizations to retrieve compliance documentation needed to demonstrate AI infrastructure compliance to auditors and regulators without delay.

39. AWS Compliance Center (3 components)

An AWS resource hub that helps organizations operating in regulated industries meet compliance, governance, and auditing requirements. Provides three key components: (1) Compliance stories from companies in regulated industries sharing real-world best practices; (2) Access to compliance technical papers and documentation covering common compliance questions, risk processes, and auditing checklists; (3) An auditor learning path designed to equip compliance professionals with the skills to assess and validate AWS environments.

40. Data Governance

The comprehensive framework for managing, utilizing, and protecting an organization's data assets throughout their lifecycle. Encompasses three key dimensions: (1) Curation — organizing and maintaining data to ensure quality, accuracy, and accessibility through data quality management, data integration, and data management; (2) Understanding — enabling users to discover and comprehend available data resources via centralized data catalogs; (3) Protection — balancing data privacy, security, and access through security controls and access management.

41. Data Steward

A data governance role held by subject matter experts from the business who have in-depth knowledge of specific data domains required to support key organizational initiatives. They are involved in day-to-day project tasks and help identify potential data-related challenges — bridging business requirements and technical data management.

42. Data Owner

A data governance role held by executive-level stakeholders who are responsible for making data policy decisions, including those related to regulatory compliance and access controls. Data owners determine who can access sensitive data (such as customer or financial information) and are accountable for data governance outcomes.

43. Data Lifecycle Management

The policies and processes governing how data is created, stored, used, archived, and deleted throughout its useful life. In AI governance, data lifecycle management ensures that training data, model outputs, and evaluation records are retained for appropriate periods, stored securely, and purged in compliance with regulatory requirements.

44. Data Residency

A data governance requirement specifying that certain types of data must be stored and processed within defined geographic boundaries (e.g., a specific country or region) to comply with local data protection regulations. Relevant to AI deployments where training data or model outputs involve citizen data subject to sovereignty requirements.

45. AWS Glue DataBrew (4 Key Features)

A visual, no-code AWS data preparation service that provides essential data governance capabilities for AI/ML workflows. Key features: (1) Data profiling — runs profiling jobs to reveal data shape, content, structure, and relationships; (2) Data characterization — analyzes dataset types, composition, and patterns; (3) Data quality rules — defines and enforces rules to maintain data integrity and consistency; (4) Data lineage tracking — provides visibility into data origin, transformations, and storage locations throughout the data lifecycle.
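DataBrew is a visual, no-code service, so there is no code to show for it directly; the sketch below only mimics the idea behind feature (3), declarative data-quality rules evaluated against rows of a dataset. Rule names and thresholds are illustrative:

```python
# Toy version of declarative data-quality rules (DataBrew itself is a
# visual service; this only illustrates the concept, not its API).
rules = {
    "age": lambda v: v is not None and 0 <= v <= 120,
    "email": lambda v: isinstance(v, str) and "@" in v,
}

rows = [
    {"age": 34, "email": "a@example.com"},
    {"age": -5, "email": "not-an-email"},
]

def check_quality(rows, rules):
    """Return per-rule failure counts across all rows."""
    failures = {name: 0 for name in rules}
    for row in rows:
        for name, rule in rules.items():
            if not rule(row.get(name)):
                failures[name] += 1
    return failures

print(check_quality(rows, rules))  # → {'age': 1, 'email': 1}
```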

46. AWS Glue Data Quality

A feature within AWS Glue that helps maintain high data quality standards for AI/ML training data and other datasets. Recommends quality rules based on best practices and can detect anomalies or deviations from expected data patterns — enabling proactive identification of data quality issues before they impact model training or inference.

47. Data Profiling

A data governance process that runs analytical jobs against datasets to reveal their shape, content, structure, distribution, and relationships. In AI/ML workflows, it precedes data quality management, providing the baseline understanding needed to define appropriate quality rules and identify anomalies.
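A minimal profiling sketch, assuming a small tabular dataset held as a list of dicts — real profiling jobs (e.g. in DataBrew) compute far richer statistics, but the shape of the output is similar:

```python
from collections import Counter

# Minimal data-profiling sketch: row count, null counts, and value
# types per column for a small in-memory tabular dataset.
def profile(rows: list[dict]) -> dict:
    columns = {key for row in rows for key in row}
    report = {"row_count": len(rows), "columns": {}}
    for col in sorted(columns):
        values = [row.get(col) for row in rows]
        report["columns"][col] = {
            "nulls": sum(v is None for v in values),
            "types": Counter(type(v).__name__ for v in values if v is not None),
        }
    return report

rows = [{"id": 1, "score": 0.9}, {"id": 2, "score": None}]
print(profile(rows))
```

The resulting baseline (null rates, type mixes, distributions) is what the subsequent data-quality rules are written against.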

48. Data Curation (Domain 5 Context)

The process within data governance that ensures the reliability and usability of data for AI analysis and decision-making. Encompasses three sub-processes: (1) Data quality management — addressing inaccuracies and inconsistencies identified during profiling; (2) Data integration — collecting and merging data from various sources into a unified dataset; (3) Data management — reconciling and harmonizing data stored across different systems to maintain consistency.

49. AI Governance Strategy (4 Steps)

The organizational framework for implementing and maintaining responsible AI practices across the enterprise. Comprises four key steps: (1) Determine scope — identify AI applications requiring governance, using tools such as the Generative AI Security Scoping Matrix; (2) Document and communicate policies — establish guidelines for data standards, access requests, and model transparency; (3) Define monitoring mechanisms — set thresholds and actions for policy compliance and bias detection; (4) Establish review cadence — periodically assess and improve governance processes.

50. Team Training Requirements (AI Governance)

A governance process requirement specifying that personnel involved in developing, deploying, and operating AI systems receive appropriate training on responsible AI principles, security practices, compliance obligations, and organizational AI policies. Ensures consistent application of governance protocols across AI projects.
