CS6262 Lecture 17 - Data Poisoning and Model Evasion
45 Terms
1. What is data poisoning in the context of attacks on machine learning?
An attack where adversaries manipulate training data to cause the model to learn incorrect behavior.
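A minimal sketch of the idea on synthetic data, using a toy nearest-centroid classifier (the data, labels, and poison budget here are invented for illustration, not from the lecture):

```python
import numpy as np

rng = np.random.default_rng(0)

# Clean training data: two 1-D clusters (class 0 "benign" near 0.0,
# class 1 "malicious" near 4.0).
X = np.concatenate([rng.normal(0.0, 0.5, 100), rng.normal(4.0, 0.5, 100)])
y = np.concatenate([np.zeros(100), np.ones(100)])

def centroid_boundary(X, y):
    # Nearest-centroid classifier: the decision boundary sits at the
    # midpoint between the two class centroids.
    return (X[y == 0].mean() + X[y == 1].mean()) / 2

print("clean boundary:   ", centroid_boundary(X, y))

# Poisoning: inject points that look malicious (near 4.0) but carry the
# benign label, dragging the benign centroid, and hence the boundary,
# toward the malicious cluster.
X_p = np.concatenate([X, rng.normal(4.0, 0.5, 40)])
y_p = np.concatenate([y, np.zeros(40)])

print("poisoned boundary:", centroid_boundary(X_p, y_p))
# The boundary shifts right, so borderline malicious inputs now fall on
# the benign side: the model has learned incorrect behavior.
```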
2. What is model evasion in machine learning security?
An attack where adversaries learn the model's decision boundary and craft inputs to bypass detection.
3. What was a common criticism from security experts regarding early machine-learning-based security models?
They produced too many false positives, making them less useful in practice.
4. What major change in recent years improved machine learning for security?
A significant increase in available data, including malware samples and logged network data.
5. Why is it difficult to find standard datasets for machine-learning security research?
Privacy concerns and restrictions surrounding malware and sensitive user data.
6. What challenge drives the need for machine learning in security?
The need to analyze extremely large amounts of data that cannot be processed manually.
7. Why is applying machine learning in security more challenging than in other fields?
Attackers actively try to manipulate or evade models.
8. What is adversarial machine learning?
Machine learning applied in environments where attackers attempt to manipulate or evade models.
9. What is an exploratory attack in adversarial machine learning?
An attack where adversaries probe a model to learn its decision boundary and evade detection.
10. What is another name for an exploratory attack?
Evasion attack.
11. What is a causative attack in adversarial machine learning?
An attack that injects malicious training examples to corrupt the learned model.
12. What is another name for a causative attack?
Data poisoning attack.
13. Which evasion tactic enables malware to detect its environment before acting?
Environmental awareness.
14. Which evasion technique involves delaying execution or acting only at specific times?
Timing-based evasion.
15. Which evasion tactic confuses automated detection tools?
Obfuscating internal data.
16. What early example demonstrated evasion of anomaly detection models?
Attackers inserted normal system calls into malicious sequences to evade detection.
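A toy reconstruction of that evasion, assuming an n-gram system-call anomaly detector that flags a trace when its fraction of n-grams unseen in training exceeds a threshold (the call names, window size, and threshold are all hypothetical):

```python
def ngrams(seq, n=3):
    return [tuple(seq[i:i + n]) for i in range(len(seq) - n + 1)]

# Train on a normal system-call trace: remember its n-grams.
normal_trace = ["open", "read", "write", "close", "open", "read",
                "write", "close", "mmap", "read", "write", "close"]
known = set(ngrams(normal_trace))

def anomaly_ratio(trace, n=3):
    grams = ngrams(trace, n)
    foreign = sum(g not in known for g in grams)
    return foreign / len(grams)

THRESHOLD = 0.25  # flag traces whose foreign-n-gram ratio exceeds this

attack = ["open", "read", "execve", "write", "close"]
print(anomaly_ratio(attack) > THRESHOLD)   # True: n-grams around "execve" are foreign

# Evasion: interleave long runs of normal calls so the few foreign
# n-grams are diluted below the detector's threshold.
padding = ["open", "read", "write", "close"] * 5
padded = padding + attack + padding
print(anomaly_ratio(padded) > THRESHOLD)   # False: mostly known n-grams
```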
17. What technique can corrupt machine-generated worm signatures?
Inserting incorrect network data to produce invalid signatures.
18. What has recent research in adversarial machine learning focused on?
Developing frameworks and studying limitations of modern models like deep learning.
19. What is PAYL in the context of intrusion detection?
An anomaly detection system modeling byte frequency distributions in payloads.
20. What is the assumption underlying PAYL’s detection approach?
Normal traffic exhibits unique byte-frequency traits that differ from malicious traffic.
21. How does PAYL compute anomaly scores?
By comparing byte frequency distributions to a learned normal profile.
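A simplified sketch of that comparison. PAYL keeps per-port, per-length profiles and scores payloads with a simplified Mahalanobis distance; this toy version collapses everything into a single profile over made-up payloads:

```python
import numpy as np

def byte_freq(payload: bytes) -> np.ndarray:
    # Relative frequency of each of the 256 byte values in the payload.
    counts = np.bincount(np.frombuffer(payload, dtype=np.uint8), minlength=256)
    return counts / max(len(payload), 1)

# Learn the normal profile: per-byte mean and standard deviation.
normal_payloads = [b"GET /index.html HTTP/1.1",
                   b"GET /about.html HTTP/1.1",
                   b"GET /news.html HTTP/1.1"]
freqs = np.stack([byte_freq(p) for p in normal_payloads])
mean, std = freqs.mean(axis=0), freqs.std(axis=0)

ALPHA = 0.001  # smoothing so zero-variance bytes don't divide by zero

def anomaly_score(payload: bytes) -> float:
    # Simplified Mahalanobis distance between the payload's byte
    # distribution and the learned normal profile.
    return float(np.sum(np.abs(byte_freq(payload) - mean) / (std + ALPHA)))

print(anomaly_score(b"GET /home.html HTTP/1.1"))   # low: near the profile
print(anomaly_score(b"\x90\x90\x90\x90\xcc\xcc"))  # high: shellcode-like bytes
```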
22. What advantage does PAYL offer?
High efficiency during run-time processing.
23. What kinds of attacks can PAYL detect?
Zero-day and polymorphic attacks that alter appearance.
24. What characteristic defines a polymorphic attack?
It changes appearance each time to avoid signature detection.
25. Why do polymorphic attacks lack predictable signatures?
Each instance is modified, preventing fixed patterns from being used.
26. What is a polymorphic blending attack?
A polymorphic attack adjusted to resemble normal traffic distributions.
27. How does a blending attack evade detection?
By matching byte frequency statistics of legitimate traffic.
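A sketch of the padding step that does this matching. A full blending attack also re-encodes the attack body itself; the greedy filler selection below, with invented payloads, is an illustrative simplification:

```python
import numpy as np

def blend_pad(attack: bytes, target_freq: np.ndarray, pad_len: int) -> bytes:
    # Greedily append the byte whose observed frequency lags furthest
    # behind the target (normal) frequency, nudging the padded payload's
    # byte distribution toward the normal profile.
    counts = np.bincount(np.frombuffer(attack, dtype=np.uint8),
                         minlength=256).astype(float)
    total = len(attack) + pad_len
    pad = bytearray()
    for _ in range(pad_len):
        deficit = target_freq - counts / total
        b = int(np.argmax(deficit))
        pad.append(b)
        counts[b] += 1
    return attack + bytes(pad)

normal = b"GET /index.html HTTP/1.1" * 4
target = np.bincount(np.frombuffer(normal, dtype=np.uint8),
                     minlength=256) / len(normal)

blended = blend_pad(b"\x90\x90\x90\x90\xcc\xcc", target, pad_len=200)
# blended's byte distribution now tracks the normal profile closely, so a
# byte-frequency detector such as PAYL assigns it a much lower score.
```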
28. Why are simple IDS models easier to evade?
They rely on limited features that attackers can mimic.
29. What makes evasion harder against an advanced IDS?
More comprehensive features and more sophisticated modeling reduce an attacker's ability to blend in.
30. What is the primary objective of a polymorphic blending attack?
To match legitimate byte distributions and bypass anomaly detection.
31. What is an effective countermeasure to blending attacks?
Use more complex IDS features incorporating syntactic and semantic information.
32. What is a benefit of combining multiple randomized IDS models?
It makes it harder for attackers to predict or match detection profiles.
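A toy illustration of that benefit: several detectors, each watching a different random subset of byte values, vote on each payload. The detector design, subset size, and thresholds are invented for illustration:

```python
import random

class RandomSliceModel:
    """Toy detector that only watches a random subset of byte values."""
    def __init__(self, seed: int):
        rng = random.Random(seed)
        self.mask = frozenset(rng.sample(range(256), 128))
        self.seen = set()

    def train(self, payload: bytes):
        self.seen.update(b for b in payload if b in self.mask)

    def is_anomalous(self, payload: bytes) -> bool:
        watched = [b for b in payload if b in self.mask]
        if not watched:
            return False
        foreign = sum(b not in self.seen for b in watched)
        return foreign / len(watched) > 0.5

models = [RandomSliceModel(seed) for seed in range(5)]
for m in models:
    m.train(b"GET /index.html HTTP/1.1")

def ensemble_flags(payload: bytes) -> bool:
    # Majority vote across the randomized models; an attacker who blends
    # against one feature view is still likely to trip the others.
    return sum(m.is_anomalous(payload) for m in models) > len(models) // 2

print(ensemble_flags(b"GET /news.html HTTP/1.1"))   # False: close to training data
print(ensemble_flags(b"\x90\x90\x90\x90\xcc\xcc"))  # likely True across the models
```

Because each model's watched subset is random, the attacker cannot know which features to mimic, which is exactly what makes the combined detection profile hard to match.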
33. What is the key goal of a successful poisoning attack?
To degrade the model so attacks are not detected.
34. Why should poisoning attacks be subtle?
To remain undetected over long periods.
35. Why is permanence important in poisoning attacks?
To make corruption irreversible.
36. What lesson did the navigation-app poisoning attempt demonstrate?
When true signal outweighs noise, poisoning efforts fail.
37. What is the purpose of clustering in Polygraph’s workflow?
To separate worm flows from innocuous flows before generating signatures.
38. Why are signatures generated from fake invariants ineffective?
They fail to detect real worms and become useless.
39. What is a Bayes signature in Polygraph?
A signature based on tokens common within suspicious flows but rare in normal flows.
40. How is a Bayes signature used for classification?
By summing token scores and comparing to a threshold.
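A sketch of that classification rule with made-up flows, tokens, and threshold. Each token is scored by the log-ratio of how often it appears in suspicious versus innocuous flows, and a flow matches when its tokens' summed scores exceed the threshold:

```python
import math

suspicious_flows = ["GET /a HTTP/1.1 EXPLOIT-BYTES",
                    "GET /b HTTP/1.1 EXPLOIT-BYTES"]
innocuous_flows = ["GET /index HTTP/1.1", "GET /news HTTP/1.1",
                   "POST /form HTTP/1.1"]
tokens = ["GET", "HTTP/1.1", "EXPLOIT-BYTES"]

def frac_containing(flows, token, smooth=0.5):
    # Smoothed fraction of flows containing the token.
    return (sum(token in f for f in flows) + smooth) / (len(flows) + 1)

# Token score: log-ratio of suspicious-pool vs. innocuous-pool frequency.
scores = {t: math.log(frac_containing(suspicious_flows, t) /
                      frac_containing(innocuous_flows, t))
          for t in tokens}

THRESHOLD = 1.0

def matches(flow: str) -> bool:
    # Sum the scores of tokens present in the flow; compare to threshold.
    return sum(s for t, s in scores.items() if t in flow) > THRESHOLD

print(matches("GET /x HTTP/1.1 EXPLOIT-BYTES"))  # True: rare worm token dominates
print(matches("GET /index HTTP/1.1"))            # False: only common tokens
```

This also shows why the noise injection described in the next two cards works: fake anomalous flows stuffed with normal substrings inflate common tokens' suspicious-pool frequencies, so the learned signature either matches normal traffic (false positives) or loses the true invariant (false negatives).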
41. How can an attacker defeat Bayes signatures?
By injecting normal substrings into fake anomalous flows.
42. What impact does noise injection have on signature generation?
It produces signatures with high false positives or false negatives.
43. What is needed to mitigate noise-injection attacks?
A precise and reliable flow classifier.
44. Under what condition can data poisoning attacks be avoided?
When training data is tightly controlled and its integrity is ensured.
45. Why are poisoning attacks a risk when training in open environments?
Attackers can inject malicious or misleading data.