EEE-6512_Exam3_Lecture20

Last updated 2:03 PM on 4/17/26
99 Terms

1
New cards

What is a pattern according to Watanabe

The opposite of chaos; an entity, vaguely defined, that could be given a name

2
New cards

What is a pattern class

A collection of similar objects

3
New cards

What is intra class variability

Variation among patterns within the same class

4
New cards

What is inter class variability

Variation between patterns from different classes

5
New cards

What is a pattern class model

A mathematical description of a class such as a probability density function

6
New cards

What are the key objectives of pattern recognition

Hypothesize models that describe each pattern class and assign a novel pattern to the class associated with the best fitting model

7
New cards

What is the difference between classification and clustering

Classification uses known categories while clustering creates new categories

8
New cards

What is another name given for classification in the lecture

Recognition or supervised classification

9
New cards

What is another name given for clustering in the lecture

Unsupervised classification

10
New cards

What question motivates pattern recognition in the lecture

How do we assign a novel pattern to the most appropriate class

11
New cards

What are some example biometric patterns shown in the lecture

Fingerprint and iris and voice and face and hand and signature

12
New cards

What does the lecture say about examples of patterns outside images

Insurance or credit card applications and dating services and web documents can all be patterns

13
New cards

What is the main idea of template matching in pattern recognition

Match a pattern against a stored template while accounting for allowable pose and scale changes
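The matching step can be sketched with a toy example. Below is a minimal sliding-window search using sum of squared differences (SSD); the 1-D signal and template values are made up for illustration, not from the lecture.

```python
# Minimal sketch: slide a template over a 1-D signal and keep the offset
# with the lowest sum of squared differences (SSD). Values are illustrative.

def ssd_match(signal, template):
    best_offset, best_score = 0, float("inf")
    for off in range(len(signal) - len(template) + 1):
        score = sum((signal[off + i] - template[i]) ** 2
                    for i in range(len(template)))
        if score < best_score:
            best_offset, best_score = off, score
    return best_offset

signal = [0, 1, 5, 9, 5, 1, 0, 0]
template = [5, 9, 5]
print(ssd_match(signal, template))  # -> 2, where the template fits exactly
```

A real system would also search over allowable pose and scale changes, not just position.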

14
New cards

What assumption does template matching make according to the lecture

Small intra class variability

15
New cards

Why is learning difficult for template matching

Learning suitable templates is difficult for deformable objects

16
New cards

What is the main idea of statistical pattern recognition

Represent patterns in feature space and model each class statistically

17
New cards

What does statistical pattern recognition focus on

The statistical properties of patterns such as probability densities

18
New cards

What is the main idea of syntactic pattern recognition

Represent complicated patterns using simple primitives and describe them with logical rules or grammars

19
New cards

What is a weakness of syntactic pattern recognition

Primitive extraction is sensitive to noise and describing patterns in terms of primitives is difficult

20
New cards

What is the basic idea of artificial neural networks in this lecture

They are inspired by biological neural networks and use dense interconnections of simple computational elements

21
New cards

Why does the lecture motivate artificial neural networks with biology

Humans solve complex recognition tasks quickly which suggests massive parallelism is important

22
New cards

What numbers are given for biological neural systems in the lecture

About 10^10 to 10^12 neurons and about 10^3 to 10^4 interconnections per neuron and about 10^14 total interconnections

23
New cards

What property of artificial neural nodes is emphasized in the lecture

They are nonlinear

24
New cards

What is a multilayer ANN according to the lecture

A feed forward network with one or more hidden layers between the input and output nodes

25
New cards

What can a three layer neural network generate according to the lecture

Arbitrarily complex decision regions
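A small sketch of how nonlinear hidden units produce a decision region no single linear boundary can: the hand-picked XOR weights below are a standard textbook illustration, not values from the lecture, and backpropagation would normally learn them.

```python
import math

# Two-input, two-hidden-unit, one-output feed-forward network computing XOR.
# Weights are hand-picked for illustration; training would learn them.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(x1, x2):
    h1 = sigmoid(20 * x1 + 20 * x2 - 10)    # roughly OR
    h2 = sigmoid(-20 * x1 - 20 * x2 + 30)   # roughly NAND
    return sigmoid(20 * h1 + 20 * h2 - 30)  # roughly AND -> XOR overall

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, round(forward(a, b)))  # -> 0, 1, 1, 0
```

XOR is the classic example of a problem a single-layer network cannot solve, which is why the hidden layer matters.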

26
New cards

What training algorithm is mentioned for multilayer ANNs

Back propagation

27
New cards

What weakness of artificial neural networks is listed in the lecture

Parameter tuning and local minima in learning

28
New cards

What are the major pattern recognition approaches compared in the lecture

Template matching and statistical pattern recognition and syntactic pattern recognition and artificial neural networks

29
New cards

What is the first stage of the PR system shown in the diagram

Data acquisition and sensing

30
New cards

What is the second stage of the PR system

Pre processing

31
New cards

What is the third stage of the PR system

Feature extraction

32
New cards

What stage links features to categories during training

Model learning and estimation

33
New cards

What stage assigns a pattern to a category during recognition

Classification

34
New cards

What is the final stage after classification in the PR system

Post processing leading to a decision

35
New cards

What happens in data acquisition and sensing

Measurements of physical variables are collected

36
New cards

What issues are important in data acquisition and sensing

Bandwidth and resolution and sensitivity and distortion and SNR and latency

37
New cards

What happens in pre processing

Noise is removed and patterns of interest are isolated from the background

38
New cards

What happens in feature extraction

A new representation is found in terms of features

39
New cards

What happens in model learning and estimation

A mapping between features and pattern categories is learned

40
New cards

What happens in post processing

Confidence is evaluated and context can be exploited and experts can be combined

41
New cards

What example problem is used to illustrate the complexity of pattern recognition

Sorting fish on a conveyor belt

42
New cards

What two fish categories are used in the PR example

Sea bass and salmon

43
New cards

What preprocessing steps are listed in the fish classification example

Image enhancement and separating touching or occluding fish and finding the boundary of each fish

44
New cards

What feature is first suggested for distinguishing sea bass from salmon

Length

45
New cards

What prior knowledge motivates using length as a feature

A fisherman said a sea bass is generally longer than a salmon

46
New cards

Why is length alone not a perfect feature

The length distributions overlap, so for many fish a sea bass is not longer than a salmon

47
New cards

What second feature is tried in the fish example

Average lightness of the fish scales

48
New cards

Why is average lightness a better feature than length in the example

It seems easier to choose a threshold even though it still does not give perfect classification
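A single-feature classifier of this kind is just a threshold rule. The sketch below uses made-up lightness values and a hypothetical threshold, not lecture data.

```python
# Hypothetical one-feature classifier: fish lighter than the threshold are
# called salmon, the rest sea bass. Threshold and values are illustrative.

def classify_by_lightness(lightness, threshold=5.0):
    return "sea bass" if lightness > threshold else "salmon"

print(classify_by_lightness(7.2))  # -> sea bass
print(classify_by_lightness(3.1))  # -> salmon
```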

49
New cards

What two possible classification errors are described in the fish example

Calling a salmon a sea bass and calling a sea bass a salmon

50
New cards

Why does cost of error matter in classification

Different mistakes can have different practical consequences

51
New cards

How does the fish canning example illustrate unequal error costs

Customers buying salmon object strongly to sea bass in their cans while sea bass customers may tolerate occasional salmon

52
New cards

Why might we use more than one feature at a time

Single features may not give the best performance and feature combinations can improve recognition

53
New cards

What two features are combined in the fish decision boundary example

Lightness and width

54
New cards

What is a decision boundary

A boundary in feature space that partitions the space into class regions

55
New cards

What is the goal when choosing a decision boundary

Minimize classification error
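With two features, the simplest such boundary is a line in the (lightness, width) plane. The weights and feature values below are illustrative assumptions, not numbers from the lecture.

```python
# Hypothetical linear decision boundary w1*lightness + w2*width + b = 0 in the
# two-dimensional feature space; points on one side are labeled sea bass.

def classify(lightness, width, w1=1.0, w2=0.8, b=-6.0):
    score = w1 * lightness + w2 * width + b
    return "sea bass" if score > 0 else "salmon"

print(classify(7.0, 2.0))  # above the boundary -> sea bass
print(classify(3.0, 1.0))  # below the boundary -> salmon
```

More complex boundaries (curves rather than lines) can reduce training error further, which is where the model-complexity tradeoff in the later cards comes in.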

56
New cards

Do correlated features always improve performance

No; highly correlated features add little new information and do not necessarily improve performance

57
New cards

What are other drawbacks of adding more features

Some may be difficult to extract and computationally expensive

58
New cards

What is the curse of dimensionality according to the lecture

Adding too many features can paradoxically worsen performance

59
New cards

Why does the curse of dimensionality arise when each feature is divided into M intervals

The total number of cells becomes M^d and grows exponentially with the number of features
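The exponential growth is easy to see numerically; the sketch below just evaluates M^d for a few dimensionalities.

```python
# With M intervals per feature and d features, the feature space is divided
# into M**d cells, so the cell count grows exponentially with d.

def num_cells(M, d):
    return M ** d

for d in (1, 2, 3, 5, 10):
    print(d, num_cells(10, d))  # 10, 100, 1000, 100000, 10000000000
```

Ten features at ten intervals each already give ten billion cells, which motivates the next card on training-data requirements.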

60
New cards

Why does the amount of training data need to grow in high dimensions

Because each cell should contain at least one training point

61
New cards

What is model complexity in the lecture

The complexity of the classifier or decision model used to separate classes

62
New cards

Why can complex models be dangerous

They can fit the training data perfectly but fail to generalize to new data

63
New cards

What is overfitting

Good performance on training data but poor performance on novel data because the model is too complex
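A toy sketch of the idea with made-up one-feature fish data: a model that memorizes every training point gets zero training error, while a simpler threshold rule does better on novel data.

```python
# One mislabeled training point makes the memorizing (1-nearest-neighbour)
# model overfit; the simple threshold generalizes better. Numbers are made up.

train = [(1.0, "salmon"), (2.0, "salmon"), (6.0, "sea bass"),
         (2.9, "sea bass")]  # the last point is mislabeled noise

def memorize(x):
    return min(train, key=lambda p: abs(p[0] - x))[1]  # complex: store everything

def simple_threshold(x):
    return "sea bass" if x > 4.0 else "salmon"         # simple: one threshold

test = [(1.5, "salmon"), (2.8, "salmon"), (5.5, "sea bass")]
train_err = sum(memorize(x) != y for x, y in train)            # 0 on training
test_err_mem = sum(memorize(x) != y for x, y in test)          # 1, misled by noise
test_err_thr = sum(simple_threshold(x) != y for x, y in test)  # 0
print(train_err, test_err_mem, test_err_thr)
```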

64
New cards

What is generalization

The ability of a classifier to produce correct results on novel patterns

65
New cards

How can generalization performance be improved according to the lecture

Use more training examples and prefer simpler models

66
New cards

Why do more training examples help generalization

They provide better model estimates

67
New cards

Why do simpler models often help generalization

They are less likely to overfit the training data

68
New cards

What prior knowledge enters the design cycle according to the lecture

Invariances can guide feature choice and model choice

69
New cards

What are the steps in the design cycle shown in the lecture

Collect data and choose features and choose model and train classifier and evaluate classifier

70
New cards

What challenge is listed first in the lecture's challenge slide

Noise and segmentation

71
New cards

How can noise affect pattern recognition

Noise reduces the reliability of measured feature values

72
New cards

How can knowledge of the noise process help

It can be used to improve performance

73
New cards

Why is segmentation a challenge in pattern recognition

Individual patterns must be segmented and the correct elements must be grouped together

74
New cards

What is the data collection challenge in PR

Knowing whether the training and testing examples are adequately large and representative

75
New cards

What questions are raised under feature extraction in the challenge section

Which features are most promising and how many should be used and whether features can be learned automatically

76
New cards

What kinds of features should be favored according to the challenge slide

Features robust to noise and features that lead to simpler decision regions

77
New cards

What should pattern representations satisfy

Patterns from the same class should look similar and patterns from different classes should look dissimilar

78
New cards

What transformations should pattern representations ideally be invariant to

Translations and rotations and size and reflections and non rigid deformations

79
New cards

Why are missing features a challenge

Certain features may be unavailable such as under occlusion and the classifier must still be trained and make decisions

80
New cards

What question is raised under model selection

How to know when to reject one class of models and try another

81
New cards

What two issues are emphasized in the overfitting challenge slide

Choosing model complexity appropriately and finding principled ways to choose that complexity

82
New cards

When should domain knowledge be incorporated

When there is not sufficient training data

83
New cards

What is analysis by synthesis in the domain knowledge slide

Modeling how each pattern is generated

84
New cards

What OCR example is used for incorporating domain knowledge

Assuming characters are sequences of strokes

85
New cards

What is classifier combination according to the lecture

Using a pool of classifiers to improve performance

86
New cards

Why is classification error treated as a risk

Because different misclassification errors can have different costs

87
New cards

What question is asked under classification error

How to incorporate knowledge about different risks and whether the lowest possible risk can be estimated

88
New cards

What factors are listed under computational complexity

Number of feature dimensions and number of patterns and number of categories

89
New cards

Why are brute force methods often impractical

They may give perfect classification results but usually require too much time and memory

90
New cards

What tradeoff must be considered in computational complexity

The tradeoff between performance and computational cost

91
New cards

Why are general purpose PR systems difficult to design

Different tasks may require different features and different solutions and different tradeoffs

92
New cards

What higher order difference should you know between classification and clustering

Classification assigns patterns to known categories while clustering discovers new groupings without predefined labels

93
New cards

What higher order difference should you know between training accuracy and generalization

Training accuracy measures performance on seen data while generalization measures performance on novel data

94
New cards

What higher order lesson does the fish example teach about feature selection

A feature that seems reasonable may still be weak and combining better chosen features can improve decisions

95
New cards

What higher order lesson does the lecture give about adding more features

More features are not always better because correlated or costly features and high dimensionality can hurt performance

96
New cards

What higher order lesson does the lecture give about model complexity

A more complex model can reduce training error but may increase error on unseen data through overfitting

97
New cards

What kind of professor style question is likely from this lecture about pattern recognition approaches

Questions comparing template matching and statistical and syntactic and neural network approaches with their assumptions and weaknesses

98
New cards

What kind of professor style question is likely from this lecture about the fish example

Questions about feature thresholds and unequal misclassification costs and why multiple features improve decisions

99
New cards

What kind of professor style question is likely from this lecture about generalization and dimensionality

Questions about why more features can worsen performance and how simpler models and more training data improve generalization