1/17
Looks like no tags are added yet.
Name | Mastery | Learn | Test | Matching | Spaced | Call with Kai |
|---|
No analytics yet
Send a link to your students to track their progress
Red Teaming
The practice of deliberately attacking a system like a real adversary to find weaknesses before real attackers do.
Runtime Protection
Security that actively watches over an AI system while it is running, stopping suspicious or negative activity the moment it happens
LLM Data Leakage
when a large language model reveals sensitive, private, or restricted information that it should not, either through its responses or misuse
Exfiltration
The act of secretly or unauthorizedly taking data out of a system, usually by an attacker.
AI Discovery
the process of identifying, inventorying, and understanding all AI-related assets in an environment, so they can be secured, governed, and defended.
LLM Data Leakage
When a large language model unintentionally reveals sensitive or private information—such as user data or proprietary content—due to manipulated prompts, training data exposure, or access to connected data sources.
Extract LLM System Prompt
A security attack where an adversary tries to reveal a large language model’s hidden system instructions—often through prompt injection or misconfigured files—to access proprietary guidance that influences the model’s behavior.
Exfiltration via AI Agent Tool Invocation
A security attack where an adversary manipulates an AI agent to use its authorized tools (e.g., email, document creation, media generation) to secretly transmit sensitive data disguised as a legitimate action.
Discover LLM Hallucination
A technique where adversaries intentionally prompt large language models to identify fabricated or nonexistent entities (such as packages, URLs, or organizations) and use those hallucinations as potential targets for further exploitation.
Discover AI Agent Configuration
A reconnaissance technique where an adversary seeks to learn an AI agent’s configuration—such as its enabled tools, permissions, or accessible services—by probing the agent or accessing its configuration interfaces to better target attacks.
Discover AI Model Family
A reconnaissance technique where an adversary determines the general type or family of an AI model—through documentation or response analysis—to better understand its behavior and tailor attacks accordingly.
Data from Information Repositories
A collection technique where adversaries access shared information repositories (such as SharePoint, Confluence, or databases) to mine stored data that can directly provide sensitive information or support further attacks.
Data from AI Services
A collection technique where adversaries exploit access to AI-powered services or agents to gather proprietary or sensitive data, including information reachable through AI tools or data sources that normal users cannot directly access.
Data from Local System:
A collection technique where adversaries search a compromised local system—such as file systems, configuration files, or local databases—to locate sensitive information before exfiltrating it.
Physical Environment Access:
An attack technique where an adversary manipulates the physical environment or real-world data sources feeding an AI system to influence its behavior, enabling modified versions of digital attacks through physical access.
Physical Environment Access:
An attack technique where an adversary manipulates the physical environment or real-world data sources feeding an AI system to influence its behavior, enabling modified versions of digital attacks through physical access.
AI-Enabled Product or Service:
An access technique where adversaries interact with an AI-powered product or service to indirectly reach the underlying AI model and infer details about the model or its outputs through exposed logs, metadata, or behavior.
Full AI Model Access:
A scenario where an adversary gains complete “white-box” access to an AI model, including its architecture, parameters, and logic, enabling offline analysis and the creation or testing of tailored attacks.