Primary Purpose of Forming Categories according to Rational Analysis
Prediction
By placing an object into a category, we can predict its unobserved attributes (e.g., if a creature is "dangerous")
Memory
Goal is to retrieve and act on a specific past experience (e.g., where you parked your car)
Categorization
Goal is to make a prediction about a new object
Category Labels
In this view, a linguistic label is not the "definition" of a category; it is simply another feature to be predicted, no different from a physical trait like "can fly"
Disjoint Partitioning
The model assumes the world is naturally divided into disjoint (non-overlapping) sets
Modeled after the plant and animal kingdoms, where species are disjoint because they cannot crossbreed, and members share characteristic probabilities of displaying specific traits based on their common genetic code
Inferences
If an object belongs to a category k, it has a specific probability p_ij of displaying value j on dimension i
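This inference rule can be sketched in a few lines. The categories and probability values below are made-up illustrations, not figures from the model:

```python
# Hypothetical conditional probabilities p_ij: given that an object
# belongs to category k, the probability that dimension i shows value j.
p = {
    "bird":   {"flies": 0.9,  "lays_eggs": 1.0},
    "mammal": {"flies": 0.05, "lays_eggs": 0.01},
}

def predict(category, feature):
    """Probability that an object in `category` displays `feature`."""
    return p[category][feature]

flies_given_bird = predict("bird", "flies")
print(flies_given_bird)  # 0.9
```

Once an object is assigned to "bird", its unobserved attributes (whether it flies, lays eggs) can be predicted from the category's probability table.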
Similarity-Based Categorization
Categorization is determined by the overlap of superficial features
Theory-Based Categorization
Categorization is driven by underlying "theories" or reasons
For example, we categorize natural objects (like dogs) based on their constitution, but we categorize artifacts (like cups) based on their use
The Rational Stance of Categorization
A rational analysis predicts that humans will use theory whenever it improves the accuracy of their predictions
Basic Level Categories
This is the level in a hierarchy (e.g., "fish" vs. "salmon" or "animal") that maximizes predictability
Subordinate
Levels below the basic level (subordinate) do not provide many more properties for the extra effort
Superordinate
Levels above the basic level (superordinate) lose too many specific properties
Disjoint Partitioning
The categorization process strives for a partition that is mutually exclusive and most useful for predicting the environment
Prior Probabilities
The system starts with "priors," which are assumptions about how objects are divided into categories before seeing any data
Conditional Probabilities
The likelihood that specific features will be observed given that an object belongs to a certain category
Posterior Probability
The likelihood of a specific category structure given the observed features
Decision Rule for Bayesian Framework
To predict an unobserved feature, choose the value with the highest posterior probability, obtained by weighting each candidate category's prediction by that category's posterior probability and summing across categories
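The three probability terms and the decision rule fit together as below. This is a minimal sketch with invented priors and likelihoods, assuming features are conditionally independent given the category:

```python
# Priors: assumed category probabilities before seeing any data.
prior = {"bird": 0.5, "mammal": 0.5}

# Conditional probabilities: P(feature | category).
likelihood = {
    "bird":   {"small": 0.8, "flies": 0.9},
    "mammal": {"small": 0.4, "flies": 0.05},
}

def posterior(observed):
    """P(category | observed features), via Bayes' rule."""
    scores = {}
    for k, pk in prior.items():
        score = pk
        for f in observed:
            score *= likelihood[k][f]  # independence assumption
        scores[k] = score
    total = sum(scores.values())
    return {k: s / total for k, s in scores.items()}

def predict_feature(observed, feature):
    """P(unobserved feature | observed) = sum_k P(k | obs) * P(feature | k)."""
    post = posterior(observed)
    return sum(post[k] * likelihood[k][feature] for k in post)

post = posterior(["small"])          # bird: 2/3, mammal: 1/3
p_flies = predict_feature(["small"], "flies")
print(max(post, key=post.get))       # bird
```

Note that the prediction sums over every category rather than committing to the single best one; the highest-posterior value of the *feature* is what gets chosen.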
Computational Infeasibility
It is computationally infeasible for the human mind to reconsider every possible partition of all previously seen objects each time it makes a prediction; the number of partitions grows explosively with the number of objects
The Iterative Solution
Because of these limits, humans use an iterative algorithm that commits to a specific category structure for objects seen so far and does not reconsider them later
Categorizing New Objects
When a new, (m+1)th object is encountered, the algorithm calculates the probability that it belongs to each existing category k or to a completely new category (labeled 0)
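The assignment step can be sketched as follows. This shows only the prior over assignments, driven by a coupling parameter c (the value 0.3 is an arbitrary choice here); a full model would also multiply each option by the likelihood of the new object's features under that category:

```python
def assignment_priors(counts, c=0.3):
    """Prior probability that the (m+1)th object joins an existing
    category k (proportional to its size n_k) or starts a new
    category, labeled 0. `c` is the coupling parameter: the higher
    it is, the more likely the object joins an existing category."""
    m = sum(counts.values())            # objects seen so far
    denom = (1 - c) + c * m
    priors = {k: c * n / denom for k, n in counts.items()}
    priors[0] = (1 - c) / denom         # brand-new category
    return priors

# Two existing categories: "A" holds 3 objects, "B" holds 1.
pri = assignment_priors({"A": 3, "B": 1})
print(pri)
```

Larger categories attract new objects more strongly, and the probabilities over all options sum to 1. Once the winning assignment is made, it is never revisited, which is what keeps the algorithm tractable.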