PDP Basic Unit
Processing occurs through a large number of simple elements called units → These units send excitatory and inhibitory signals to one another
Parallelism
Unlike sequential (serial) computers, PDP models process many pieces of information—or constraints—simultaneously
The Microstructure of Cognition
While human thought may appear sequential at a macro level (ideas coming one after another), it is actually composed of vast numbers of parallel microsteps
Connection Strengths (Weights)
The "knowledge" in a PDP model is not a stored list of facts but the strengths of the connections between units
Local Representation
A system where one specific unit is reserved for one specific concept or pattern (e.g., one unit for "Lance" in the Jets and Sharks model)
Distributed Representation
Knowledge about a pattern is spread across the connections of many units. No single unit stands for a whole concept; instead, the concept is the pattern of activation across the network
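A minimal sketch of the contrast between the two coding schemes (the vectors and the names "Lance" and "Ralph" are illustrative, not the book's actual simulation): local coding reserves one unit per concept, while distributed coding makes each concept a pattern over all units.

```python
import numpy as np

# Local representation: one dedicated unit per concept.
local_lance = np.array([1, 0, 0, 0])   # unit 0 *is* "Lance"
local_ralph = np.array([0, 1, 0, 0])   # unit 1 *is* "Ralph"

# Distributed representation: each concept is a pattern across all units.
dist_lance = np.array([1,  1,  1, -1])  # "Lance" = this whole pattern
dist_ralph = np.array([1,  1, -1, -1])  # "Ralph" shares some units

# Similar concepts can overlap in distributed coding, never in local coding.
print(local_lance @ local_ralph)   # 0 -- dedicated units never overlap
print(dist_lance @ dist_ralph)     # 2 -- shared units encode similarity
```

The overlap (dot product) between distributed patterns is what later gives the network its generalization behavior: similar concepts literally share hardware.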
Active Representation
The current pattern of activation across units
Stored Representation
The connection strengths that allow that pattern to be re-created later
Learning as Connection Tuning
Learning is not about formulating explicit rules; it is the gradual adjustment of connection strengths based on experience so the right patterns of activation emerge under the right conditions
Hebb Rule
A simple learning principle: adjust the strength of a connection between two units in proportion to their simultaneous activation → This allows the system to teach itself without a central "programmer"
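A sketch of that principle in code (the learning rate and activation values are illustrative): the change in a weight is proportional to the product of the two units' simultaneous activations, so correlated units strengthen their connection with no central programmer.

```python
import numpy as np

def hebb_update(weights, pre, post, lr=0.1):
    """One Hebbian step: dW = lr * outer(post, pre).
    Each weight grows in proportion to the product of the activations
    of the units it connects."""
    return weights + lr * np.outer(post, pre)

pre = np.array([1.0, -1.0, 1.0])   # activations of sending units
post = np.array([1.0, 1.0])        # activations of receiving units

w = np.zeros((2, 3))               # start with no knowledge
w = hebb_update(w, pre, post)
print(w)  # units active together now have positive connections
```

After one step, every weight simply records whether its two units agreed (positive) or disagreed (negative) in sign.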
Content Addressability
The ability to retrieve a full memory from just a fragment of its attributes (e.g., remembering a person's name just by thinking of their hair color and where you met them)
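A toy demonstration, not the book's model: a tiny Hebbian autoassociator (Hopfield-style, with invented +1/-1 "attribute" patterns) that is probed with only a fragment of a stored pattern and fills in the rest.

```python
import numpy as np

# Two stored memories, coded as +1/-1 attribute vectors (illustrative).
memories = np.array([
    [1,  1, -1, -1],
    [1, -1,  1, -1],
])

# Hebbian storage: sum of outer products, no self-connections.
W = sum(np.outer(p, p) for p in memories)
np.fill_diagonal(W, 0)

# Probe with a fragment: only the first two attributes are known (nonzero).
cue = np.array([1, 1, 0, 0])

# Known attributes stay clamped; unknown ones are filled in by the net input.
filled = np.where(cue != 0, cue, np.sign(W @ cue))
print(filled)  # -> [ 1  1 -1 -1], the full first memory
```

The fragment acts as the "address": whichever stored pattern best matches the known attributes drives the remaining units to complete it.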
Graceful Degradation
Because information is distributed, the system does not crash if it receives slightly wrong information or if a few units are damaged; it simply produces a slightly weaker or "best guess" output
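This can be shown with a hedged sketch (the pattern and the "lesion" are illustrative): store one pattern in a Hebbian autoassociator, knock out a whole block of connections, and recall still succeeds because the stored pattern is spread redundantly across many weights.

```python
import numpy as np

p = np.array([1, -1, 1, 1, -1, -1, 1, -1], dtype=float)

W = np.outer(p, p)        # Hebbian storage of one pattern
np.fill_diagonal(W, 0)    # no self-connections

W[:3, :3] = 0.0           # "damage": zero out a block of connections

recalled = np.sign(W @ p) # net input is weaker, but signs survive
print(np.array_equal(recalled, p))  # True -- recall still correct
```

The damaged units receive a smaller net input, so the output is weaker rather than wrong, which is the "graceful" part of the degradation.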
Emergent Properties List
Spontaneous Generalization
Default Assignment
Rule-like Behavior
Spontaneous Generalization
If a system is probed with a general cue, it will naturally retrieve what is common to the memories matching that cue, effectively creating a "general concept" on the fly

Default Assignment
If information about a specific object is missing, the system fills in the blanks based on what it knows about similar instances (e.g., assuming a bird can fly because most other birds in its memory can)
Rule-like Behavior
PDP models can behave as if they follow linguistic or logical rules without those rules being explicitly written in the system
Ex. A model learning past tenses through examples can "overregularize" (saying "camed" instead of "came") just as children do, because it has extracted the statistical regularity of the "-ed" ending
Interactive Activation Model
Used for word recognition. It shows how higher-level knowledge (words) and lower-level features (letters/lines) constrain each other simultaneously
Jets and Sharks Model
A simulation of gang members used to demonstrate content addressability and default assignment
Pattern Associator
A simple network that learns to associate one pattern (like the sight of a rose) with another (the smell of a rose) using the Hebb Rule
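A minimal version of such an associator (the sight/smell vectors and learning rate are made up for illustration): one Hebbian pass stores the mapping, and presenting the sight pattern afterward reproduces the smell pattern.

```python
import numpy as np

sight = np.array([1, -1, 1, -1], dtype=float)  # input pattern ("sight of a rose")
smell = np.array([1, 1, -1, -1], dtype=float)  # target pattern ("smell of a rose")

lr = 0.25
W = lr * np.outer(smell, sight)   # one-shot Hebbian learning

recalled = np.sign(W @ sight)     # present the sight; read out the smell
print(recalled)                   # -> [ 1.  1. -1. -1.], the smell pattern
```

Because `sight` is correlated with itself and the weights are its outer product with `smell`, the matrix-vector product reproduces `smell` scaled by the input's self-correlation; `np.sign` recovers the clean pattern.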
Typing Model
Demonstrates that serial behavior (typing one letter after another) emerges from a parallel system where units for upcoming letters anticipate and inhibit each other