Sigmoid
How the graph looks:
It's an "S"-shaped curve that starts near 0, rises smoothly, and levels off near 1.
How to understand it:
It smoothly turns numbers into values between 0 and 1, like a dimmer that adjusts brightness instead of just on or off.
Output:
Always between 0 and 1. Useful for binary classification and representing probabilities.
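The 0-to-1 "S" curve described above is the sigmoid (logistic) function. A minimal NumPy sketch (the function name and the NumPy choice are mine, not from the notes):

```python
import numpy as np

def sigmoid(x):
    # Logistic function: 1 / (1 + e^(-x)), always in (0, 1)
    return 1.0 / (1.0 + np.exp(-x))
```

Note how it acts like a dimmer: `sigmoid(0.0)` is exactly 0.5, large negative inputs land near 0, and large positive inputs land near 1.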
Tanh
How the graph looks:
It also looks like an "S", but centered at zero, going from -1 to +1.
How to understand it:
It squashes numbers into the range (-1, 1), so large negatives end up near -1 and large positives near +1, keeping the data balanced around zero.
Output:
Between -1 and 1. Often used in hidden layers to keep activations centered.
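The zero-centered "S" curve is the hyperbolic tangent (tanh). A small sketch of its symmetry, using NumPy's built-in implementation (variable names are mine):

```python
import numpy as np

def tanh(x):
    # Squashes inputs into (-1, 1), symmetric about zero: tanh(-x) = -tanh(x)
    return np.tanh(x)

y = tanh(np.array([-2.0, 0.0, 2.0]))
```

Because the output is centered at zero, negative and positive inputs of equal size map to mirror-image outputs, which is why it is often preferred over sigmoid in hidden layers.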
ReLU
How the graph looks:
Flat at 0 for negative inputs, then a straight line increasing for positive inputs.
How to understand it:
It's like a door that stays closed (0) for negatives and opens linearly for positives.
Output:
0 for values below zero, and the same as the input for values above zero. Common in hidden layers for speed and simplicity.
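This hinge-shaped function is ReLU (rectified linear unit). A one-line NumPy sketch (the name `relu` is my label):

```python
import numpy as np

def relu(x):
    # Element-wise max(0, x): zero for negatives, identity for positives
    return np.maximum(0.0, x)

out = relu(np.array([-3.0, 0.0, 2.5]))
```

The "door" behavior is visible in `out`: the negative input is clamped to 0 while the positive input passes through unchanged.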
Softmax
How the graph looks:
Because it maps a whole vector of scores at once, it isn't a single curve; its output is a set of probabilities that sum to 1, showing which class is most likely.
How to understand it:
It turns raw scores into probabilities — higher numbers mean higher chances for that class.
Output:
A vector of probabilities adding up to 1. Used in output layers for multi-class classification.
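This vector-to-probabilities mapping is softmax. A common sketch subtracts the maximum score before exponentiating to avoid overflow (function and variable names are mine):

```python
import numpy as np

def softmax(scores):
    # Shift by the max score (doesn't change the result) for numerical stability,
    # exponentiate, then normalize so the outputs sum to 1
    exps = np.exp(scores - np.max(scores))
    return exps / exps.sum()

probs = softmax(np.array([1.0, 2.0, 3.0]))
```

Higher raw scores get higher probabilities, and the largest score always keeps the largest probability, which is why the argmax of the output matches the argmax of the input.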