CI

Last updated 8:25 AM on 5/15/26

162 Terms

5
New cards
Computational Intelligence (CI)
A branch of Artificial Intelligence that focuses on adaptive, learning-based, and nature-inspired techniques to solve complex problems.
6
New cards
Artificial Intelligence (AI)
The broader field of making machines act intelligently, such as reasoning, learning, planning, recognizing patterns, and making decisions.
7
New cards
Machine Learning (ML)
A branch of AI where computers learn patterns from data and use them to make predictions or decisions.
8
New cards
Relationship between AI, ML, and CI
AI is the broadest field. ML focuses on learning from data. CI includes adaptive techniques such as neural networks, fuzzy logic, genetic algorithms, and swarm intelligence.
9
New cards
Is CI the same as ML?
No. CI and ML overlap, especially in neural networks, but CI also includes fuzzy logic, genetic algorithms, and swarm intelligence.
10
New cards
Example of AI
A chatbot, self-driving car, face recognition system, or expert system.
11
New cards
Example of ML
A model that learns from Iris flower data to classify the flower species.
12
New cards
Example of CI
An artificial neural network, fuzzy logic controller, genetic algorithm, or particle swarm optimization system.
13
New cards
Application of CI in healthcare
CI can be used to detect diseases, classify medical images, or predict patient risk.
14
New cards
Application of CI in agriculture
CI can be used for fruit grading, plant disease detection, crop prediction, or smart irrigation.
15
New cards
Application of CI in transportation
CI can be used in traffic prediction, self-driving systems, and route optimization.
16
New cards
Application of CI in energy
CI can be used to predict electricity consumption and optimize energy usage.
17
New cards
Classification
A machine learning task where the output is a category or class.
18
New cards
Regression
A machine learning task where the output is a continuous numerical value.
19
New cards
Classification example
Predicting whether an Iris flower is Setosa, Versicolor, or Virginica.
20
New cards
Regression example
Predicting daily electricity consumption in kWh.
21
New cards
Main difference between classification and regression
Classification predicts a class/category, while regression predicts a numerical value.
22
New cards
Supervised learning
A learning method where the model learns from labelled data with known correct answers.
23
New cards
Unsupervised learning
A learning method where the model finds patterns in unlabelled data.
24
New cards
Labelled data
Data that includes both input values and the correct output.
25
New cards
Artificial Neural Network (ANN)
A machine learning model inspired by biological neurons, used to learn patterns from data.
26
New cards
Neural Network (NN)
Another name commonly used for Artificial Neural Network.
27
New cards
ANN as a model
ANN is a model or technique used in machine learning and computational intelligence.
28
New cards
Backpropagation
A learning algorithm used to train ANN by calculating errors and updating weights.
29
New cards
Gradient Descent
An optimization method used to adjust weights to reduce error.
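As a quick sketch of the idea (not part of the card set), gradient descent can be shown on the toy function f(w) = w², whose gradient is 2w; the function name and starting values below are illustrative:

```python
def gradient_descent(w=5.0, learning_rate=0.1, epochs=50):
    """Toy gradient descent: minimize f(w) = w**2."""
    for _ in range(epochs):
        grad = 2 * w                      # derivative of w**2
        w = w - learning_rate * grad      # move against the gradient
    return w

w_final = gradient_descent()
# w_final ends up very close to 0, the minimum of f(w) = w**2
```

Each step shrinks w by a factor of (1 − 2·learning_rate), which is exactly the "adjust weights to reduce error" idea in the card.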
30
New cards
Neuron
A processing unit in ANN that receives inputs, applies weights, calculates a net value, and produces an output.
31
New cards
Input node
A node that receives input data or features.
32
New cards
Hidden neuron
A neuron in the hidden layer that processes information between input and output layers.
33
New cards
Output neuron
A neuron that produces the final prediction of the network.
34
New cards
Weight
A value that controls the strength of connection between neurons.
35
New cards
Bias
An additional value added to the weighted sum to shift the activation function.
36
New cards
Input layer
The first layer of ANN that receives the input features.
37
New cards
Hidden layer
The middle layer of ANN that learns patterns from the input data.
38
New cards
Output layer
The final layer of ANN that produces the prediction or classification.
39
New cards
ANN architecture
The structure of ANN, usually consisting of input layer, hidden layer, and output layer.
40
New cards
Example ANN architecture for Iris classification
4 input nodes, hidden layer, and 3 output nodes.
41
New cards
Why Iris ANN has 4 input nodes
Because the Iris dataset uses sepal length, sepal width, petal length, and petal width.
42
New cards
Why Iris ANN has 3 output nodes
Because there are three Iris species: Setosa, Versicolor, and Virginica.
43
New cards
Example ANN architecture for electricity prediction
4 input nodes, hidden layer, and 1 output node.
44
New cards
Why electricity prediction ANN has 1 output node
Because the model predicts one numerical value: daily electricity consumption.
45
New cards
Activation function
A function that transforms the net input of a neuron into an output.
46
New cards
Sigmoid activation function
A function that converts input into a value between 0 and 1.
47
New cards
Sigmoid formula
f(x) = 1 / (1 + e^(-x))
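The formula translates directly to Python; this small sketch (function name and test values are illustrative) shows the 0-to-1 squashing behaviour:

```python
import math

def sigmoid(x):
    """Sigmoid activation: f(x) = 1 / (1 + e^(-x)), output in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

sigmoid(0)    # exactly 0.5
sigmoid(10)   # close to 1
sigmoid(-10)  # close to 0
```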
48
New cards
Step function
An activation function that outputs 1 if the input reaches the threshold, otherwise outputs 0.
49
New cards
Threshold
A boundary value used by a perceptron to decide whether the output is 0 or 1.
50
New cards
Net input
The weighted sum of inputs before applying the activation function.
51
New cards
Net input formula
net = x1·w1 + x2·w2 + ... + xn·wn (plus the bias, if one is used)
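A minimal Python sketch of the net-input calculation (the function name and sample numbers are made up for illustration):

```python
def net_input(inputs, weights, bias=0.0):
    """Weighted sum before activation: net = x1*w1 + x2*w2 + ... (+ bias)."""
    return sum(x * w for x, w in zip(inputs, weights)) + bias

net = net_input([1.0, 2.0, 3.0], [0.5, -0.25, 0.1])
# 1*0.5 + 2*(-0.25) + 3*0.1 = 0.5 - 0.5 + 0.3 = 0.3
```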
52
New cards
Learning rate
A value that controls how much the weights change during training.
53
New cards
Small learning rate effect
The model learns slowly because weights change only slightly.
54
New cards
Large learning rate effect
The model may overshoot the best solution and become unstable.
55
New cards
Epoch
One complete pass through the training dataset.
56
New cards
Target output
The correct or desired output used during training.
57
New cards
Actual output
The output produced by the model.
58
New cards
Error
The difference between target output and actual output.
59
New cards
Simple error formula
Error = Target - Output
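The error formula is what drives weight updates. This sketch combines it with the step function and threshold from the earlier cards into one perceptron-style training step; the update rule w + lr·error·x is the classic perceptron rule, used here as an illustration with made-up numbers:

```python
def step(net, threshold=0.5):
    """Step activation: 1 if net reaches the threshold, else 0."""
    return 1 if net >= threshold else 0

def train_step(weights, inputs, target, lr=0.1):
    net = sum(x * w for x, w in zip(inputs, weights))
    output = step(net)
    error = target - output  # Error = Target - Output
    # Perceptron rule: nudge each weight in proportion to the error and its input
    new_weights = [w + lr * error * x for w, x in zip(weights, inputs)]
    return new_weights, error

weights, error = train_step([0.0, 0.0], inputs=[1, 1], target=1)
# net = 0 -> output 0, so error = 1 - 0 = 1 and weights become [0.1, 0.1]
```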
60
New cards
Loss function
A function used to measure how wrong the model prediction is.
61
New cards
Training data
Data used to train the model.
62
New cards
Testing data
Data used to evaluate the model after training.
63
New cards
Validation data
Data used to tune model settings before final testing.
64
New cards
Overfitting
When the model performs well on training data but poorly on new data.
65
New cards
Underfitting
When the model is too simple and fails to learn the pattern in the data.
66
New cards
Generalization
The ability of a model to perform well on unseen data.
67
New cards
Normalization
Scaling input values to a similar range so the model can learn better.
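Min-max scaling is one common way to do this; a small sketch with illustrative values:

```python
def min_max_normalize(values):
    """Scale a list of numbers into the range [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

min_max_normalize([2.0, 4.0, 6.0])  # -> [0.0, 0.5, 1.0]
```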
68
New cards
Why normalize ANN input
To prevent features with larger values from dominating the learning process.
69
New cards
Hyperparameter tuning
The process of adjusting settings such as learning rate, hidden neurons, epochs, and activation function.
70
New cards
ANN process step 1
Prepare and clean the dataset.
71
New cards
ANN process step 2
Divide the dataset into training and testing sets.
72
New cards
ANN process step 3
Choose ANN architecture such as number of input, hidden, and output nodes.
73
New cards
ANN process step 4
Initialize weights, usually with small random values.
74
New cards
ANN process step 5
Perform forward propagation to calculate output.
75
New cards
ANN process step 6
Calculate error between target and actual output.
76
New cards
ANN process step 7
Use backpropagation to update weights.
77
New cards
ANN process step 8
Repeat training for several epochs until error is reduced.
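The eight steps above can be sketched end-to-end with a single sigmoid neuron trained by gradient descent; the dataset (logical OR), learning rate, and epoch count below are illustrative choices, not taken from the cards:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Steps 1-2: a tiny, already-clean labelled dataset (inputs, target)
data = [([0.0, 0.0], 0.0), ([0.0, 1.0], 1.0),
        ([1.0, 0.0], 1.0), ([1.0, 1.0], 1.0)]

# Steps 3-4: one sigmoid neuron, small random initial weights
random.seed(0)
weights = [random.uniform(-0.5, 0.5) for _ in range(2)]
bias = 0.0
lr = 0.5

for epoch in range(2000):                  # Step 8: repeat for several epochs
    for inputs, target in data:
        net = sum(x * w for x, w in zip(inputs, weights)) + bias
        output = sigmoid(net)              # Step 5: forward propagation
        error = target - output            # Step 6: error = target - output
        # Step 7: gradient-style update (sigmoid derivative = output*(1-output))
        delta = error * output * (1 - output)
        weights = [w + lr * delta * x for w, x in zip(weights, inputs)]
        bias += lr * delta

# After training, the rounded outputs match the targets
```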
78
New cards
Forward propagation
The process of passing input through the network to calculate the output.
79
New cards
Backpropagation
The process of sending error backward through the network to update weights.
80
New cards
K-Fold Cross Validation
A model evaluation method where data is split into K parts, and each part is used once as testing data.
81
New cards
Purpose of K-Fold Cross Validation
To evaluate model performance more reliably by testing the model on different data splits.
82
New cards
Example of 5-Fold Cross Validation
The dataset is split into 5 parts; the model trains on 4 parts and tests on 1 part, repeated 5 times.
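The split described above can be built by hand in plain Python (no library needed); the function name and the 10-sample example are illustrative:

```python
def k_fold_indices(n_samples, k):
    """Split sample indices into k folds; each fold serves as the test set once."""
    indices = list(range(n_samples))
    fold_size = n_samples // k
    folds = [indices[i * fold_size:(i + 1) * fold_size] for i in range(k)]
    splits = []
    for i in range(k):
        test = folds[i]
        train = [idx for j, fold in enumerate(folds) if j != i for idx in fold]
        splits.append((train, test))
    return splits

splits = k_fold_indices(10, 5)
# 5 splits; every one of the 10 samples appears in exactly one test fold
```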
83
New cards
Advantage of K-Fold Cross Validation
It reduces bias from using only one train-test split.
84
New cards
Confusion Matrix
A table that compares actual classes with predicted classes.
85
New cards
True Positive (TP)
The model predicts positive and the actual class is positive.
86
New cards
True Negative (TN)
The model predicts negative and the actual class is negative.
87
New cards
False Positive (FP)
The model predicts positive but the actual class is negative.
88
New cards
False Negative (FN)
The model predicts negative but the actual class is positive.
89
New cards
Accuracy
The proportion of total predictions that are correct.
90
New cards
Accuracy formula
Accuracy = (TP + TN) / (TP + TN + FP + FN)
91
New cards
Precision
The proportion of predicted positive cases that are actually positive.
92
New cards
Precision formula
Precision = TP / (TP + FP)
93
New cards
Recall
The proportion of actual positive cases that are correctly detected.
94
New cards
Recall formula
Recall = TP / (TP + FN)
95
New cards
F1 Score
A performance measure that balances precision and recall.
96
New cards
F1 Score formula
F1 = 2 × (Precision × Recall) / (Precision + Recall)
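The four formulas above can be checked with a short sketch; the TP/TN/FP/FN counts are made-up numbers for illustration:

```python
def confusion_metrics(tp, tn, fp, fn):
    """Compute accuracy, precision, recall, and F1 from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

accuracy, precision, recall, f1 = confusion_metrics(tp=8, tn=5, fp=2, fn=1)
# accuracy = 13/16 = 0.8125, precision = 8/10 = 0.8, recall = 8/9
```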
97
New cards
Precision simple meaning
When the model says positive, how often is it correct?
98
New cards
Recall simple meaning
Out of all actual positives, how many did the model find?
99
New cards
Accuracy simple meaning
Out of all predictions, how many were correct?
100
New cards
When accuracy can be misleading
When the dataset is imbalanced and one class is much larger than another.