MITRE ATLAS MATRIX



17 Terms

1

Red Teaming

The practice of deliberately attacking a system the way a real adversary would, in order to find weaknesses before real attackers do.

2

Runtime Protection

Security that actively watches over an AI system while it is running, stopping suspicious or malicious activity the moment it happens.
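
A minimal sketch of the idea, assuming a hypothetical model object with a generate(prompt) method and simple keyword rules; real runtime protection relies on far richer detection than this:

```python
import re

# Hypothetical rules; production systems use classifiers and threat intelligence.
SUSPICIOUS_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"reveal your system prompt",
]

def guarded_generate(model, prompt: str) -> str:
    """Screen the prompt before the model runs and the output before it is returned."""
    for pattern in SUSPICIOUS_PATTERNS:
        if re.search(pattern, prompt, re.IGNORECASE):
            return "[blocked: prompt matched a runtime protection rule]"
    output = model.generate(prompt)  # assumed model interface
    if re.search(r"\b\d{3}-\d{2}-\d{4}\b", output):  # e.g. SSN-like strings
        return "[blocked: output matched a data-leakage rule]"
    return output
```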

3

LLM Data Leakage

When a large language model reveals sensitive, private, or restricted information that it should not, whether through its own responses or through misuse.

4

Exfiltration

The unauthorized, usually covert, removal of data from a system, typically carried out by an attacker.

5

AI Discovery

The process of identifying, inventorying, and understanding all AI-related assets in an environment so that they can be secured, governed, and defended.
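
As a rough illustration (not a method ATLAS prescribes), one small piece of AI discovery is scanning a directory tree for model artifacts and for code that calls common AI SDKs. The extension list and SDK hints below are assumptions:

```python
from pathlib import Path

# Hypothetical markers of AI assets; a real inventory would also cover
# cloud endpoints, API keys, and deployed services.
MODEL_EXTENSIONS = {".onnx", ".pt", ".safetensors", ".gguf", ".h5"}
SDK_HINTS = ("openai", "anthropic", "transformers", "torch")

def discover_ai_assets(root: str) -> list[Path]:
    """Return files under `root` that look like AI models or AI-using code."""
    found = []
    for path in Path(root).rglob("*"):
        if path.suffix.lower() in MODEL_EXTENSIONS:
            found.append(path)
        elif path.suffix == ".py":
            try:
                text = path.read_text(errors="ignore")
            except OSError:
                continue
            if any(hint in text for hint in SDK_HINTS):
                found.append(path)
    return found

print(discover_ai_assets("."))
```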

6

LLM Data Leakage

When a large language model unintentionally reveals sensitive or private information—such as user data or proprietary content—due to manipulated prompts, training data exposure, or access to connected data sources.

7

Extract LLM System Prompt

A security attack where an adversary tries to reveal a large language model’s hidden system instructions—often through prompt injection or misconfigured files—to access proprietary guidance that influences the model’s behavior.
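
A minimal detection sketch for this attack, assuming a hypothetical SYSTEM_PROMPT string: flag any response that reproduces a long verbatim chunk of the hidden prompt.

```python
SYSTEM_PROMPT = "You are a support bot. Never discuss internal pricing."  # hypothetical

def leaks_system_prompt(output: str, min_overlap: int = 30) -> bool:
    """Flag responses that echo a long verbatim slice of the hidden system prompt."""
    text = output.lower()
    prompt = SYSTEM_PROMPT.lower()
    return any(prompt[i:i + min_overlap] in text
               for i in range(max(1, len(prompt) - min_overlap + 1)))

print(leaks_system_prompt(
    "Sure! My instructions say: You are a support bot. Never discuss internal pricing."
))  # True
```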

8

Exfiltration via AI Agent Tool Invocation

A security attack where an adversary manipulates an AI agent to use its authorized tools (e.g., email, document creation, media generation) to secretly transmit sensitive data disguised as a legitimate action.
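
One common mitigation is to gate every tool call behind a policy check before the agent executes it. The sketch below assumes a hypothetical send_email tool and a recipient-domain allowlist; a real agent runtime would enforce much more:

```python
ALLOWED_EMAIL_DOMAINS = {"example.com"}  # hypothetical internal domain

def approve_tool_call(tool_name: str, args: dict) -> bool:
    """Reject tool invocations that could smuggle data to an outside party."""
    if tool_name == "send_email":
        recipient = args.get("to", "")
        domain = recipient.rsplit("@", 1)[-1].lower()
        if domain not in ALLOWED_EMAIL_DOMAINS:
            return False  # possible exfiltration attempt
    return True

# The agent runtime would call this before executing any tool:
assert approve_tool_call("send_email", {"to": "alice@example.com"}) is True
assert approve_tool_call("send_email", {"to": "drop@attacker.net"}) is False
```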

9

Discover LLM Hallucination

A technique where adversaries intentionally prompt large language models to identify fabricated or nonexistent entities (such as packages, URLs, or organizations) and use those hallucinations as potential targets for further exploitation.
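
A minimal sketch of how a hallucinated dependency could be surfaced, assuming a list of model-suggested package names (hypothetical) and using the public PyPI JSON endpoint to check whether each name actually exists:

```python
import urllib.request
import urllib.error

def pypi_exists(package: str) -> bool:
    """Return True if `package` is published on PyPI."""
    url = f"https://pypi.org/pypi/{package}/json"
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:  # 404 for packages that do not exist
        return False

# Names an LLM suggested when asked for config-parsing libraries (hypothetical).
suggested = ["pyyaml", "configparse-plus", "tomlkit"]
hallucinated = [name for name in suggested if not pypi_exists(name)]
print("Nonexistent packages an attacker could register:", hallucinated)
```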

10

Discover AI Agent Configuration

A reconnaissance technique where an adversary seeks to learn an AI agent’s configuration—such as its enabled tools, permissions, or accessible services—by probing the agent or accessing its configuration interfaces to better target attacks.

11

Discover AI Model Family

A reconnaissance technique where an adversary determines the general type or family of an AI model—through documentation or response analysis—to better understand its behavior and tailor attacks accordingly.

12

Data from Information Repositories

A collection technique where adversaries access shared information repositories (such as SharePoint, Confluence, or databases) to mine stored data that can directly provide sensitive information or support further attacks.

13

Data from AI Services

A collection technique where adversaries exploit access to AI-powered services or agents to gather proprietary or sensitive data, including information reachable through AI tools or data sources that normal users cannot directly access.

14

Data from Local System

A collection technique where adversaries search a compromised local system—such as file systems, configuration files, or local databases—to locate sensitive information before exfiltrating it.

15

Physical Environment Access

An attack technique where an adversary manipulates the physical environment or real-world data sources feeding an AI system to influence its behavior, enabling modified versions of digital attacks through physical access.

16

AI-Enabled Product or Service

An access technique where adversaries interact with an AI-powered product or service to indirectly reach the underlying AI model and infer details about the model or its outputs through exposed logs, metadata, or behavior.

17

Full AI Model Access

A scenario where an adversary gains complete “white-box” access to an AI model, including its architecture, parameters, and logic, enabling offline analysis and the creation or testing of tailored attacks.
