Neural Networks
Inspired by the structure and function of the human brain; attempt to mimic how biological neurons process and transmit information
Input Layer
Receives raw data
Hidden Layers
Process and transform data by finding patterns
Output Layer
Produces the final result or classification
Neurons
Small computing units that combine inputs, weights, and activation functions to make simple decisions.
Weights
Connect neurons and represent how important each input is; adjusted during training to improve accuracy.
Bias
Allows flexibility by shifting the activation threshold up or down; helps fine-tune neuron decisions so the model doesn’t have to pass through the origin.
Input Layer function
data intake
Hidden Layer function
pattern recognition
Output Layer function
final prediction
Artificial neuron
A mathematical model that takes inputs, multiplies them by weights, adds bias, and applies an activation function
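A minimal sketch of that computation in Python (NumPy and the sigmoid activation are assumptions for illustration, not part of the card):

```python
import numpy as np

def sigmoid(z):
    # Squashes any real number into the range (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def artificial_neuron(inputs, weights, bias):
    # Weighted sum of the inputs, shifted by the bias term
    z = np.dot(inputs, weights) + bias
    # Activation function decides how strongly the neuron "fires"
    return sigmoid(z)

# Example: three inputs with hand-picked weights and bias
x = np.array([0.5, 0.2, 0.9])
w = np.array([0.4, -0.6, 0.3])
b = 0.1
print(artificial_neuron(x, w, b))  # a value between 0 and 1
```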
Shallow network
has no or few hidden layers
Deep learning
Neural networks with many hidden layers, capable of learning complex features.
Activation functions
Determine whether a neuron fires (e.g., sigmoid, ReLU, tanh); introduce nonlinearity, allowing networks to learn complex patterns
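A short sketch of the three named functions, assuming NumPy; which one to use is a design choice:

```python
import numpy as np

def sigmoid(z):
    # Output in (0, 1); common for binary outputs
    return 1.0 / (1.0 + np.exp(-z))

def relu(z):
    # Zero for negative inputs, identity for positive inputs
    return np.maximum(0.0, z)

def tanh(z):
    # Output in (-1, 1); a zero-centered alternative to sigmoid
    return np.tanh(z)

z = np.array([-2.0, 0.0, 2.0])
print(sigmoid(z), relu(z), tanh(z))
```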
Forward propagation
Data flows from input → hidden → output layer to generate a prediction.
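A minimal forward-pass sketch with one hidden layer; the layer sizes and random weights below are purely illustrative:

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative shapes: 3 inputs -> 4 hidden units -> 1 output
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def forward(x):
    hidden = relu(x @ W1 + b1)          # input -> hidden
    output = sigmoid(hidden @ W2 + b2)  # hidden -> output
    return output

x = np.array([0.5, 0.2, 0.9])
print(forward(x))  # the network's prediction
```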
Loss calculation
Compares predicted output to the true value to measure error
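One common way to measure that error is mean squared error; the sketch below assumes MSE, though cross-entropy is equally standard for classification:

```python
import numpy as np

def mse_loss(y_pred, y_true):
    # Average squared difference between prediction and target
    return np.mean((y_pred - y_true) ** 2)

print(mse_loss(np.array([0.8]), np.array([1.0])))  # ~0.04
```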
Backpropagation
Adjusts weights and biases in the direction opposite the error gradient to minimize the loss; this is the key learning process.
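A single-neuron sketch of the idea, assuming a sigmoid activation and squared error: compute the gradient of the loss for each weight, then step the other way (the learning rate is arbitrary):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, 0.2, 0.9])    # one training example
y_true = 1.0                     # its label
w = np.array([0.4, -0.6, 0.3])   # current weights
b = 0.1                          # current bias
lr = 0.5                         # learning rate (illustrative)

# Forward pass
z = np.dot(x, w) + b
y_pred = sigmoid(z)
loss = (y_pred - y_true) ** 2

# Backward pass: chain rule gives d(loss)/d(w) and d(loss)/d(b)
dloss_dypred = 2 * (y_pred - y_true)
dypred_dz = y_pred * (1 - y_pred)   # derivative of sigmoid
grad_w = dloss_dypred * dypred_dz * x
grad_b = dloss_dypred * dypred_dz

# Step opposite the gradient to reduce the error
w = w - lr * grad_w
b = b - lr * grad_b
```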
Training data
Input-output pairs used to “teach” the model.
Supervised Learning
The model learns with labeled examples (e.g., “This is a cat”).
Updating Weights
After each training example, the network slightly changes weights to reduce future errors.
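Tying the previous cards together, a toy training loop for a single sigmoid unit; the dataset, learning rate, and epoch count are all illustrative:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy labeled dataset: inputs and their target outputs (OR-like labels)
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
y = np.array([0.0, 1.0, 1.0, 1.0])

rng = np.random.default_rng(0)
w, b, lr = rng.normal(size=2), 0.0, 0.5

for epoch in range(100):
    for x_i, y_i in zip(X, y):
        y_pred = sigmoid(np.dot(x_i, w) + b)   # forward pass
        error = y_pred - y_i                   # how far off we are
        grad = error * y_pred * (1 - y_pred)   # chain rule
        w -= lr * grad * x_i                   # small weight update
        b -= lr * grad

print(sigmoid(X @ w + b))  # predictions improve after training
```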
Neural Network Learning from Errors
Networks improve by identifying errors, adjusting parameters, and iterating over many examples; this allows predictions to become more accurate over time
Image recognition
Detects edges, shapes, and complex objects.
Speech recognition
Converts spoken language into text.
Spam detection
Classifies emails as spam or not spam.
Autonomous Vehicles
Uses deep learning for perception and decision-making.
Medical diagnosis
Identifies disease patterns in scans or data
Multiple hidden layers
Allow the network to recognize complex, abstract features (e.g., pixels → edges → shapes → faces); deep learning models outperform shallow networks on complex tasks
Overfitting
Model memorizes training data and fails on new data; prevented using regularization, dropout, and more diverse training data
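A minimal sketch of dropout, one of the listed countermeasures: during training, randomly zero some activations and rescale the rest (the 0.5 rate is arbitrary):

```python
import numpy as np

def dropout(activations, rate=0.5, training=True, rng=np.random.default_rng()):
    # During training, silence a random fraction of units and rescale
    # the survivors ("inverted dropout"); at test time, pass through.
    if not training:
        return activations
    mask = rng.random(activations.shape) >= rate
    return activations * mask / (1.0 - rate)

hidden = np.array([0.7, 1.2, 0.3, 0.9])
print(dropout(hidden, rate=0.5))
```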
Underfitting
Model is too simple and can’t learn the patterns.
High computation needs
Deep learning requires strong hardware (GPUs).
Ethical issues
Data bias can lead to unfair or inaccurate outcomes.
Ethical concerns
Bias in training data, privacy issues, and accountability for AI decisions
GPUs and TPUs
accelerate deep learning computations