Generative AI (GAI)
A subclass of AI models that emulate input data structures to create derived synthetic content such as text, images, and audio.
NIST AI Risk Management Framework (AI RMF)
A framework intended for voluntary use to improve organizations' ability to integrate trustworthiness into AI product design, development, and evaluation.
Confabulation
The production of confidently stated but erroneous or false content by AI systems, potentially misleading users.
Data Privacy
The risks connected to unauthorized access, usage, or disclosure of personal or sensitive information, particularly when used in training AI models.
Harmful Bias
The amplification or perpetuation of historical, societal, or systemic biases in AI outputs due to biased training data.
Algorithmic Monocultures
A phenomenon in which widespread reliance on the same algorithms or models increases vulnerability and the risk of correlated failures across systems.
Deepfake
A form of synthetic media in which a person in an existing image or video is replaced with someone else's likeness.
Human-AI Configuration
The arrangements and interactions between humans and AI systems that can influence the effectiveness and outcomes of AI applications.
Information Integrity
The quality and reliability of information, measured by whether it is truthful, accurate, and verifiable within a given context.
Provenance Data Tracking
Methods used to trace the origin and history of content created by AI systems, ensuring authenticity and integrity.
Environmental Impact
The potential adverse effects that AI systems may have on the environment, particularly through resource consumption.
Incident Disclosure
The process of documenting and reporting occurrences where AI systems contribute to harms or failures, aiming to foster transparency.