Neurons
Cells that fire electrical impulses along their axons; whether a neuron fires depends on the activity arriving at its synapses.
Excitatory Inputs
Inputs that promote neuron firing.
Inhibitory Inputs
Inputs that inhibit neuron firing.
Dendrite
The branching, tree-like part of a neuron that receives incoming signals from other neurons.
Myelin Sheath
A fatty, insulating layer that covers the axon of a neuron and speeds the transmission of electrical signals.
Axon
The long fiber of a neuron along which electrical impulses travel.
Weighted Inputs
Inputs from presynaptic neurons that can be excitatory or inhibitory, represented by numerical weights.
Activation Function
A function that determines the strength of a unit's output signal from the total weighted input it receives.
Binary Threshold-Activation Function
An activation function that outputs 1 when the total weighted input reaches a set threshold and 0 otherwise, modeling neurons that either fire or do not fire.
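A minimal sketch of a binary threshold unit, assuming illustrative weights and a threshold not taken from the source: it sums its weighted inputs (excitatory inputs carry positive weights, inhibitory inputs negative weights) and fires only if the total reaches the threshold.

```python
# A binary threshold unit: sums weighted inputs and fires (outputs 1)
# only if the total reaches the threshold. Weights and threshold below
# are illustrative, not taken from the source.

def threshold_unit(inputs, weights, threshold):
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Two excitatory inputs (weight +1.0) and one inhibitory input (weight -2.0).
weights = [1.0, 1.0, -2.0]

print(threshold_unit([1, 1, 0], weights, threshold=1.5))  # 1: enough excitation
print(threshold_unit([1, 1, 1], weights, threshold=1.5))  # 0: inhibition blocks firing
```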
Sigmoid Function
A nonlinear, S-shaped activation function that maps the total input to a value between 0 and 1, approaching 0 for strongly negative inputs and 1 for strongly positive inputs.
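A brief sketch of the sigmoid (logistic) function; the sample inputs are illustrative.

```python
import math

# The sigmoid squashes any input into the interval (0, 1), with a smooth
# transition around 0.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

for x in (-6, -2, 0, 2, 6):
    print(x, round(sigmoid(x), 3))
# -6 -> ~0.002, 0 -> 0.5, 6 -> ~0.998
```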
Single-Layer Networks
The earliest artificial neural networks, in which input units connect directly to output units through a single layer of modifiable weights.
Mapping Functions
Functions that map items from a domain to items in a range, with each input corresponding to one output.
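A minimal sketch of a mapping function in code; the particular domain and range used here are illustrative.

```python
# A mapping function pairs each item in the domain with exactly one item
# in the range. The example items are illustrative.

mapping = {"cat": "animal", "rose": "plant", "oak": "plant"}

def f(item):
    return mapping[item]

print(f("cat"), f("rose"), f("oak"))  # each input maps to exactly one output
```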
Boolean Functions
Functions that classify objects in the domain as TRUE or FALSE.
AND Function
A Boolean function that outputs TRUE only when both inputs are TRUE.
OR Function
A Boolean function that outputs TRUE when at least one input is TRUE.
XOR Function
A Boolean function that outputs TRUE when exactly one input is TRUE; it cannot be computed by any single-layer network.
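The sketch below shows AND and OR implemented as single threshold units, plus a crude brute-force search suggesting why no single unit computes XOR; the weights, thresholds, and search grid are illustrative assumptions.

```python
import itertools

# AND and OR as single threshold units; the weights and thresholds are
# illustrative choices.

def unit(x1, x2, w1, w2, threshold):
    return 1 if w1 * x1 + w2 * x2 >= threshold else 0

def AND(x1, x2):
    return unit(x1, x2, 1, 1, 2)   # fires only when both inputs are 1

def OR(x1, x2):
    return unit(x1, x2, 1, 1, 1)   # fires when at least one input is 1

for x1, x2 in itertools.product([0, 1], repeat=2):
    print(x1, x2, "AND:", AND(x1, x2), "OR:", OR(x1, x2))

# A crude search over candidate weights and thresholds finds no single unit
# that computes XOR, reflecting the fact that XOR is not linearly separable.
candidates = [n / 2 for n in range(-8, 9)]
xor_found = any(
    all(unit(x1, x2, w1, w2, t) == (x1 ^ x2)
        for x1, x2 in itertools.product([0, 1], repeat=2))
    for w1 in candidates for w2 in candidates for t in candidates
)
print("single unit computing XOR found:", xor_found)  # False
```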
Perceptron-Convergence Rule
A supervised learning algorithm for neural networks that adjusts weights and thresholds based on output error.
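A minimal sketch of the perceptron convergence rule, assuming an illustrative learning rate, starting weights, threshold, and training task (the OR function).

```python
# Perceptron learning: after each example, weights are adjusted in
# proportion to the error (target minus output), and the threshold moves
# in the opposite direction. All numeric settings are illustrative.

def predict(x, weights, threshold):
    return 1 if sum(w * xi for w, xi in zip(weights, x)) >= threshold else 0

training_data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]  # OR

weights = [0.0, 0.0]
threshold = 0.5
lr = 0.1  # learning rate

for epoch in range(20):
    for x, target in training_data:
        error = target - predict(x, weights, threshold)
        weights = [w + lr * error * xi for w, xi in zip(weights, x)]
        threshold -= lr * error

print(weights, threshold)
print([predict(x, weights, threshold) for x, _ in training_data])  # [0, 1, 1, 1]
```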
Linear Separability
The property of a function whose TRUE and FALSE cases can be separated by a straight line in input space (a plane or hyperplane in higher dimensions); single-layer networks can only compute linearly separable functions.
Hidden Units
Units in multilayer networks that lie between the input and output units, allowing the network to compute functions, such as XOR, that single-layer networks cannot.
Feedforward
The flow of activation forward through the network, from the input units through any hidden units to the output units, with no activation spreading backward.
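A sketch of a single feedforward pass through a small multilayer network; the hand-set weights and thresholds are assumptions chosen so that the network computes XOR.

```python
# Activation flows from two input units, through two hidden threshold units,
# to one output unit, with no backward connections. The weights are hand-set
# for illustration.

def step(x, threshold):
    return 1 if x >= threshold else 0

def feedforward(x1, x2):
    # Hidden unit 1 behaves like OR, hidden unit 2 like AND.
    h1 = step(1.0 * x1 + 1.0 * x2, 0.5)
    h2 = step(1.0 * x1 + 1.0 * x2, 1.5)
    # Output fires when OR is on but AND is off: exactly one input is 1.
    return step(1.0 * h1 - 2.0 * h2, 0.5)

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, "->", feedforward(x1, x2))  # XOR truth table
```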
Backpropagation Algorithm
A supervised learning algorithm for multilayer networks that computes the error at the output units and propagates it backward through the network to adjust the weights.
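A minimal sketch of backpropagation for a network with two inputs, two sigmoid hidden units, and one sigmoid output unit; the training task (OR), learning rate, initialization, and number of epochs are illustrative assumptions.

```python
import math
import random

# Backpropagation: a forward pass computes the output, the output error is
# propagated backward to the hidden units, and every weight is adjusted in
# proportion to the error it contributed to. Settings are illustrative.

random.seed(0)
sig = lambda x: 1.0 / (1.0 + math.exp(-x))

w_hidden = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
b_hidden = [random.uniform(-1, 1) for _ in range(2)]
w_out = [random.uniform(-1, 1) for _ in range(2)]
b_out = random.uniform(-1, 1)

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]  # OR
lr = 0.5

for epoch in range(5000):
    for x, target in data:
        # Forward pass.
        h = [sig(sum(w * xi for w, xi in zip(w_hidden[j], x)) + b_hidden[j]) for j in range(2)]
        out = sig(sum(w * hj for w, hj in zip(w_out, h)) + b_out)

        # Backward pass: output error, then hidden-unit error.
        delta_out = (target - out) * out * (1 - out)
        delta_h = [delta_out * w_out[j] * h[j] * (1 - h[j]) for j in range(2)]

        # Weight updates, proportional to error and incoming activation.
        for j in range(2):
            w_out[j] += lr * delta_out * h[j]
            for i in range(2):
                w_hidden[j][i] += lr * delta_h[j] * x[i]
            b_hidden[j] += lr * delta_h[j]
        b_out += lr * delta_out

for x, target in data:
    h = [sig(sum(w * xi for w, xi in zip(w_hidden[j], x)) + b_hidden[j]) for j in range(2)]
    out = sig(sum(w * hj for w, hj in zip(w_out, h)) + b_out)
    print(x, target, round(out, 2))  # outputs should end up close to the targets
```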
Hebbian Learning Rule
A learning rule, typically used in unsupervised learning, in which the weight of a connection increases when the units it joins are active at the same time.
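A minimal sketch of a Hebbian weight update; the learning rate and activation patterns are illustrative.

```python
# Hebbian learning: the weight between two units grows only when both are
# active together. Learning rate and activation patterns are illustrative.

lr = 0.1
weight = 0.0

# Pairs of (presynaptic activation, postsynaptic activation).
patterns = [(1, 1), (1, 1), (0, 1), (1, 0), (1, 1), (0, 0)]

for pre, post in patterns:
    weight += lr * pre * post  # strengthen only when both units are active
    print(pre, post, round(weight, 2))
```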
Localist Networks
Networks where information is represented by specific, distinct units.
Distributed Networks
Networks in which information is represented by patterns of activation distributed across many units and stored in the weights, with no single unit corresponding to a specific feature.
Physical Symbol System Hypothesis
The hypothesis that intelligent information processing consists in the manipulation of symbolic representations according to explicit rules.
Connectionist Networks
Networks that excel in pattern recognition tasks, often modeling cognitive abilities that are difficult to represent with rules.