Lecture: Insightful Patterns & Explaining Neural Networks – Key Vocabulary


Description and Tags

Vocabulary flashcards summarising key terms from the lecture on robust rule mining, MDL-based pattern discovery, neural network explanation with rules, and differentiable pattern set mining.


36 Terms

1. Robust Rule

An association rule selected for its stability under noise, offering reliable insight into data relationships.

2. Association Rule Mining

The process of discovering implication relationships (e.g., A → B) between items in large datasets.
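A minimal sketch of the two standard rule-quality measures, support and confidence, on a hypothetical toy transaction database (the item names are illustrative):

```python
# Toy transaction database; each transaction is a set of items.
transactions = [
    {"bread", "butter"},
    {"bread", "butter", "milk"},
    {"bread"},
    {"milk"},
]

def support(itemset, db):
    # Fraction of transactions containing every item in `itemset`.
    return sum(itemset <= t for t in db) / len(db)

def confidence(head, tail, db):
    # How often the tail holds when the head holds: supp(head ∪ tail) / supp(head).
    return support(head | tail, db) / support(head, db)

supp = support({"bread", "butter"}, transactions)        # 2/4 = 0.5
conf = confidence({"bread"}, {"butter"}, transactions)   # 0.5 / 0.75 = 2/3
```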

3. Downward Closure Property

A pruning principle: if an itemset is infrequent, all of its supersets are also infrequent, so whole branches of the search space can be cut during exhaustive mining.
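The property can be illustrated with a minimal Apriori-style miner: a (k+1)-item candidate is only kept if all of its k-item subsets are frequent, so an infrequent itemset silently prunes every superset (a sketch; the function names and data are illustrative):

```python
from itertools import combinations

def frequent_itemsets(db, min_support):
    # Level-wise mining: frequent singletons first, then grow by one item.
    n = len(db)
    items = sorted({i for t in db for i in t})
    freq = {frozenset([i]) for i in items
            if sum(i in t for t in db) / n >= min_support}
    result = set(freq)
    k = 1
    while freq:
        candidates = {a | b for a in freq for b in freq if len(a | b) == k + 1}
        # Downward closure: drop candidates with any infrequent k-subset.
        candidates = {c for c in candidates
                      if all(frozenset(s) in freq for s in combinations(c, k))}
        freq = {c for c in candidates
                if sum(c <= t for t in db) / n >= min_support}
        result |= freq
        k += 1
    return result

db = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}]
fi = frequent_itemsets(db, min_support=0.5)   # all singletons and pairs, no triple
```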

4. Monotone Score

A quality measure that never decreases when items are added, allowing efficient exhaustive mining of rules.

5. Minimum Description Length (MDL) Principle

Model-selection rule stating the best model minimises the sum of its description length and the length of the data encoded with it.
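A toy sketch of the two-part score L(D, M) = L(M) + L(D | M). Here a "model" is just a list of rules costed at a fixed number of bits each, and the data cost indexes the transactions the model fails to cover; all constants are illustrative, not part of any real encoding:

```python
from math import log2

BITS_PER_RULE = 8.0   # illustrative fixed cost per rule in the model

def code_length(model, data_errors, n_transactions):
    l_model = BITS_PER_RULE * len(model)        # L(M)
    # L(D | M): identify each uncovered transaction among n_transactions.
    l_data = data_errors * log2(n_transactions)
    return l_model + l_data

# A richer model pays more bits for itself but can cover more of the data.
simple = code_length(model=["r1"], data_errors=40, n_transactions=1024)
richer = code_length(model=["r1", "r2"], data_errors=20, n_transactions=1024)
# MDL prefers the richer model here because its total code length is smaller.
```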

6. Code Length L(D, M)

In MDL, the total number of bits to describe model M and encode data D using M.

7. ΔL (Delta L)

The change in code length when a candidate rule is added to the current model; negative ΔL means better compression.

8. Induction by Compression

Learning paradigm that derives models (rules, patterns) by seeking maximal data compression under MDL.

9. GRAB Algorithm

Greedy search method that builds a compact set of robust rules by iteratively adding candidates with the largest ΔL gain.
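The greedy loop can be sketched as follows — a heavily simplified stand-in for GRAB's search, not its real encoding: repeatedly add the candidate with the most negative ΔL until no candidate improves compression (the score function and candidate names are illustrative):

```python
def greedy_grab(candidates, score):
    # Start from the baseline model and its code length.
    model = []
    current = score(model)
    while True:
        gains = [(score(model + [c]) - current, c)   # ΔL per candidate
                 for c in candidates if c not in model]
        if not gains:
            break
        delta, best = min(gains)      # most negative ΔL first
        if delta >= 0:                # no candidate compresses further: stop
            break
        model.append(best)
        current += delta
    return model

# Toy score: each added rule costs 5 bits but saves a rule-specific amount.
savings = {"r1": 12.0, "r2": 7.0, "r3": 3.0}
score = lambda m: 100.0 - sum(savings[r] - 5.0 for r in m)
final = greedy_grab(list(savings), score)   # "r3" never pays for itself
```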

10. Singleton Rule

A rule whose tail contains a single item, e.g., ∅ → A, used as the initial model in GRAB.

11. Rule Head

The antecedent (left-hand side) of a rule; may contain single items or disjunctions (ORs).

12. Rule Tail

The consequent (right-hand side) of a rule; may contain one or multiple conjunctive items (ANDs).

13. OR-Rule

A rule whose head is a disjunction of items/classes, capturing shared features, e.g., Cat ∨ Dog → u₁.

14. AND-Rule

A rule whose tail is a conjunction of items, expressing that multiple conditions jointly predict the head.

15. Noisy Rule

A rule that tolerates noise: only a subset of its head items (e.g., k of n) needs to be present for the rule to apply, making it robust to noisy data.

16. Candidate Generation (GRAB)

Step that forms new rule candidates by merging tails or merging tail with head of existing rules sharing the same head.

17. ExplainN

Framework that mines robust rules between layers of a neural network to explain how neurons encode class features.

18. Convolutional Neural Network (CNN)

Deep learning architecture with convolutional, pooling, and fully connected layers, widely used for image tasks.

19. Feature Visualization

Technique that depicts what input patterns maximise neuron activations, helping interpret CNN internals.

20. Model Distillation

Approach that approximates a complex model with a simpler, interpretable model (e.g., rules, additive models).

21. Activation Binarization

Conversion of neuron activations to 0/1 (or −1/1) values, simplifying computation and interpretation.
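A minimal sketch of one common binarization scheme, thresholding; the threshold of 0 and the {0, 1} target are illustrative choices (some settings use {−1, +1} via the sign function instead):

```python
import numpy as np

def binarize(activations, threshold=0.0):
    # Map each activation to 1 if it exceeds the threshold, else 0.
    return (np.asarray(activations) > threshold).astype(np.int8)

acts = np.array([-0.3, 0.0, 0.7, 2.1])
bits = binarize(acts)   # array([0, 0, 1, 1], dtype=int8)
```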

22. ImageNet Dataset

Large-scale image collection (ILSVRC-2012) with 1000 categories, commonly used to train and test CNNs.

23. Rule Chain

Sequence of rules over successive layers showing how low-level features combine into high-level concepts.

24. Pattern Set Mining

Task of selecting a small, informative collection of patterns that together describe data well under a global objective.

25. Boolean Matrix Factorization (BMF)

Decomposition of a binary matrix into binary factors whose OR product approximates the original data.
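The OR product mentioned here is the Boolean matrix product (A ∘ B)[i, j] = OR over k of (A[i, k] AND B[k, j]); with 0/1 integer matrices it reduces to asking whether the ordinary dot product is positive. A small sketch with illustrative factors:

```python
import numpy as np

def boolean_product(A, B):
    # OR of ANDs == "ordinary matrix product is nonzero" for 0/1 matrices.
    return (A @ B > 0).astype(np.int8)

# Two binary factors whose Boolean product reconstructs a 3x3 pattern.
A = np.array([[1, 0],
              [1, 1],
              [0, 1]])
B = np.array([[1, 1, 0],
              [0, 1, 1]])
D = boolean_product(A, B)
```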

26. BINAPS

Binarized Neural Architecture for Pattern Sets; differentiable model that learns conjunctive patterns via binarized weights.

27. Binarized Neural Network (BNN)

Neural network with weights (and often activations) constrained to binary values, enabling fast, memory-efficient inference.

28. Bernoulli Binarization

Sampling scheme that converts continuous weights in [0,1] to binary by treating them as Bernoulli probabilities.
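A sketch of the idea: each weight w ∈ [0, 1] is treated as the success probability of a Bernoulli draw, so the binarized weight equals w in expectation — which is what makes the scheme compatible with gradient-based training. The function name and seed are illustrative:

```python
import numpy as np

def bernoulli_binarize(weights, rng):
    # Sample each weight as Bernoulli(w): 1 with probability w, else 0.
    w = np.clip(weights, 0.0, 1.0)
    return (rng.random(w.shape) < w).astype(np.int8)

rng = np.random.default_rng(0)
w = np.array([0.0, 0.2, 0.9, 1.0])
samples = np.stack([bernoulli_binarize(w, rng) for _ in range(2000)])
means = samples.mean(axis=0)   # empirically close to [0.0, 0.2, 0.9, 1.0]
```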

29. Negative Bias Term

Learned negative offset applied before step activation so a neuron fires only when enough pattern evidence is present.
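With binary inputs and binary weights, the pre-activation simply counts matching pattern items, so a negative bias acts as a minimum-evidence threshold. A minimal sketch (weights, bias value, and item indices are illustrative):

```python
import numpy as np

def neuron_fires(x, w, bias):
    # Step activation: fire iff the matched-item count plus bias is positive.
    return int(x @ w + bias > 0)

w = np.array([1, 1, 1, 0])   # pattern over items 0, 1, 2
bias = -2.0                  # require all 3 pattern items (count > 2)

full = neuron_fires(np.array([1, 1, 1, 0]), w, bias)     # 3 matches -> fires
partial = neuron_fires(np.array([1, 0, 1, 0]), w, bias)  # 2 matches -> silent
```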

30. ADAM Optimizer

Stochastic gradient-based optimisation algorithm combining adaptive learning rates and momentum.

31. Frequent Pattern Explosion

Phenomenon where mining returns millions of patterns, many spurious, motivating concise pattern-set objectives.

32. Single Nucleotide Polymorphism (SNP)

Genomic position where individuals vary in a single nucleotide; basic element in human variation studies.

33. 1000 Genomes Project

Large database measuring genetic variation across ~2500 individuals, used to study SNP patterns.

34. Structural Property

Mathematical characteristic (e.g., monotonicity) of an objective that can be exploited to prune search.

35. Autoencoder

Neural network trained to reconstruct its input via a compressed embedding layer.

36. Pattern Hierarchy

Organised set of patterns where higher-level patterns are composed of lower-level ones, enabling multilevel explanations.