FBLA - Data Science and AI

1. Mean

The average of a set of numbers, found by summing values and dividing by count.

2. Variance

A measure of how far data points spread out from the mean.

3. Precision

The proportion of true positives among all predicted positives.

4. Recall

The proportion of true positives among all actual positives.

5. F1-score

Harmonic mean of precision and recall, useful for imbalanced datasets.
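
A minimal Python sketch of how precision, recall, and F1 fall out of raw counts; the TP/FP/FN values are invented for illustration.

```python
# Hypothetical confusion-matrix counts for a binary classifier.
tp, fp, fn = 80, 20, 40

precision = tp / (tp + fp)   # 0.80: of predicted positives, how many are correct
recall = tp / (tp + fn)      # ~0.67: of actual positives, how many were found
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean, ~0.73

print(f"precision={precision:.2f} recall={recall:.2f} f1={f1:.2f}")
```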

6. Supervised Learning

Machine learning using labeled data to train models.

7. Unsupervised Learning

Machine learning using unlabeled data to find patterns.

8. Overfitting

When a model performs well on training data but poorly on new data.

9. Normalization

Scaling data to a standard range, often 0-1.

10. Neural Network

A computational model inspired by the human brain, consisting of layers of nodes.

11. Bias in AI

Systematic error introduced when training data misrepresents reality.

12. Bayes' Theorem

A formula for conditional probability: P(A|B) = P(B|A)P(A)/P(B).
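
A worked example with made-up numbers for a screening test (1% prevalence, 95% sensitivity, 10% false-positive rate), showing why a positive result can still mean a low posterior probability.

```python
# Hypothetical test characteristics.
p_disease = 0.01
p_pos_given_disease = 0.95
p_pos_given_healthy = 0.10

# Total probability of a positive: P(B) = P(B|A)P(A) + P(B|not A)P(not A)
p_pos = p_pos_given_disease * p_disease + p_pos_given_healthy * (1 - p_disease)

# Bayes' theorem: P(A|B) = P(B|A)P(A) / P(B)
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos
print(f"P(disease | positive) = {p_disease_given_pos:.3f}")  # ~0.088
```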

13. ROC Curve

Graph showing trade-off between true positive rate and false positive rate.

14. Feature Engineering

Creating new input variables to improve model performance.

15. Reinforcement Learning

Training agents through rewards and penalties for actions taken.

16. Population vs. sample

A population is the entire set of interest; a sample is a subset drawn from it to estimate population parameters.

17. Parameter vs. statistic

Parameters describe populations (e.g., μ, σ); statistics describe samples (e.g., x̄, s).

18. Mean

Sum of values divided by count; sensitive to outliers.

19. Median

Middle value in a sorted list; robust to outliers.

20. Mode

Most frequent value; can be multimodal.

21. Variance

Average squared deviation from the mean; population: σ², sample: s².

22. Standard deviation

Square root of variance; interpretable spread in original units.

23. Interquartile range (IQR)

Q3 − Q1; robust spread measure used in boxplots.

24. Skewness

Asymmetry of distribution; positive skew has a long right tail.

25. Kurtosis

Tail heaviness relative to normal; high kurtosis implies heavy tails.

26. Empirical rule

In normal distributions, ~68%, 95%, 99.7% within 1, 2, 3 SDs.

27. Z-score

Standardized value: (x - μ)/σ; compares across scales.
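
A quick numpy sketch of standardization; the exam scores are hypothetical.

```python
import numpy as np

# Hypothetical exam scores; standardize to compare across different scales.
scores = np.array([55, 60, 70, 80, 95], dtype=float)
z = (scores - scores.mean()) / scores.std()  # population SD; use ddof=1 for sample SD
print(z.round(2))  # each value is now in SD units; mean 0, SD 1
```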

28. Central limit theorem

The sampling distribution of the mean approaches normal as n increases, regardless of the population's distribution.

29. Law of large numbers

Sample average converges to population mean as sample size grows.

30. Correlation vs. causation

Correlation quantifies association; causation requires mechanisms and controls.

31. Pearson correlation

Linear association; sensitive to outliers; −1 to 1.

32. Spearman correlation

Rank-based; robust to nonlinearity and outliers.

33. Probability basics

P(A ∪ B) = P(A) + P(B) - P(A ∩ B); independence: P(A ∩ B)=P(A)P(B).

34. Conditional probability

P(A|B)=P(A ∩ B)/P(B).

35. Bayes' theorem

P(A|B)=P(B|A)P(A)/P(B).

36. Prior vs. posterior

Prior: belief before data; posterior: updated belief after observing evidence via Bayes.

37. Likelihood

Probability of data given parameters; central in ML (maximum likelihood).

38. Distributions: normal

Symmetric, bell-shaped; defined by μ, σ; ubiquitous in measurement data.

39. Distributions: binomial

Fixed n trials, success probability p; counts of successes; mean np, var np(1−p).

40. Distributions: Poisson

Counts of events over fixed interval with rate λ; mean = variance = λ.

41. Distributions: exponential

Memoryless waiting times; parameter λ; mean 1/λ.

42. Distributions: Bernoulli

Single trial with success/failure; mean p, variance p(1−p).
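
A numpy sanity check on the moments listed in the distribution cards above; the parameter choices (n=10, p=0.3, λ=4, λ=2) are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples = 100_000

# Draw large samples and compare sample moments to the theoretical values.
binom = rng.binomial(n=10, p=0.3, size=n_samples)      # mean np=3.0, var np(1-p)=2.1
pois = rng.poisson(lam=4.0, size=n_samples)            # mean = variance = 4.0
expo = rng.exponential(scale=1 / 2.0, size=n_samples)  # rate λ=2, so mean 1/λ=0.5

for name, x in [("binomial", binom), ("poisson", pois), ("exponential", expo)]:
    print(f"{name}: mean={x.mean():.2f} var={x.var():.2f}")
```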

43. Sampling methods

Simple random, stratified, cluster, systematic; impact bias and variance.

44. Bias types (stats)

Selection bias, survivorship bias, measurement bias, nonresponse bias.

45. Confidence intervals

Range likely containing parameter; depends on variability and sample size.

46. Hypothesis testing

Null vs. alternative; p-value assesses evidence against null.

47. Type I vs. Type II error

Type I: false positive (α); Type II: false negative (β); power = 1−β.
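
A simulation sketch, assuming scipy is available: when the null hypothesis is actually true, a test at α = 0.05 should reject about 5% of the time, which is exactly the Type I error rate. Sample size and seed are arbitrary.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# 1,000 experiments where the null (mean = 0) is true; count false rejections.
alpha, rejections, trials = 0.05, 0, 1_000
for _ in range(trials):
    sample = rng.normal(loc=0.0, scale=1.0, size=30)
    _, p_value = stats.ttest_1samp(sample, popmean=0.0)
    rejections += p_value < alpha

print(f"Type I error rate ≈ {rejections / trials:.3f}")  # should land near 0.05
```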

48. Data quality dimensions

Accuracy, completeness, consistency, timeliness, validity, uniqueness.

49. Data cleaning

Handle missing (drop, impute), fix types, de-duplicate, resolve outliers, enforce constraints.

50. Missing data mechanisms

MCAR, MAR, MNAR; guide imputation strategy.

51. Imputation methods

Mean/median, mode, KNN impute, regression impute, multivariate imputation (MICE).

52. Feature scaling

Normalization (min-max), standardization (z-score), robust scaling (IQR-based).
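
A sketch of all three scalers in plain numpy, on a hypothetical feature with one outlier to show how they react differently.

```python
import numpy as np

x = np.array([2.0, 4.0, 6.0, 100.0])  # hypothetical feature with an outlier

min_max = (x - x.min()) / (x.max() - x.min())   # normalization to [0, 1]
z_score = (x - x.mean()) / x.std()              # standardization: mean 0, SD 1

q1, q3 = np.percentile(x, [25, 75])
robust = (x - np.median(x)) / (q3 - q1)         # robust scaling: IQR-based, outlier-resistant

print(min_max.round(2), z_score.round(2), robust.round(2), sep="\n")
```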

53. Feature encoding

One-hot, ordinal, target encoding (use with caution to avoid leakage).

54. Feature selection

Filter (correlation, chi-squared), wrapper (RFE), embedded (L1/L2 regularization).

55. Dimensionality reduction

PCA (linear), t-SNE/UMAP (manifold visualization), autoencoders (nonlinear).

56. Data leakage

Train data includes information from test or future; inflates performance; avoid via strict splits.

57. Train/validation/test split

Typical: 60-20-20 or 70-15-15; the validation set tunes hyperparameters; the test set gives the final unbiased estimate.

58. Cross-validation

k-fold, stratified k-fold for classification; reduces variance of performance estimates.

59. Stratification

Preserve class proportions across folds/splits; critical in imbalanced data.
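
A scikit-learn sketch (assuming sklearn is installed) that combines the two ideas above: stratified 5-fold cross-validation on a bundled dataset, with each fold preserving the class balance.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Bundled binary-classification dataset; model choice is arbitrary.
X, y = load_breast_cancer(return_X_y=True)
model = LogisticRegression(max_iter=5000)

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(model, X, y, cv=cv, scoring="accuracy")
print(scores.round(3), scores.mean().round(3))  # per-fold scores and their average
```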

60. Supervised learning

Learn mapping from features X to labels y using labeled data.

61. Regression vs. classification

Regression predicts continuous values; classification predicts discrete classes.

62. Overfitting vs. underfitting

Overfit: memorizes noise; underfit: too simple; manage with regularization, more data.

63. Bias-variance tradeoff

High bias: underfit; high variance: overfit; aim for optimal complexity.

64. Regularization

L1 (lasso) sparsity, L2 (ridge) shrinkage; reduces overfitting.
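
A scikit-learn sketch on synthetic data; the alpha values are arbitrary, chosen just to show the contrast between L2 shrinkage and L1 sparsity.

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(0)

# Synthetic data: only the first 3 of 10 features actually matter.
X = rng.normal(size=(200, 10))
y = 3 * X[:, 0] - 2 * X[:, 1] + X[:, 2] + rng.normal(scale=0.5, size=200)

ridge = Ridge(alpha=1.0).fit(X, y)   # L2: shrinks all coefficients toward zero
lasso = Lasso(alpha=0.1).fit(X, y)   # L1: drives irrelevant coefficients to exactly zero

print("ridge:", ridge.coef_.round(2))
print("lasso:", lasso.coef_.round(2))
```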

65. Early stopping

Halt training when validation loss stops improving to prevent overfit.

66. Ensembles

Bagging (Random Forest), boosting (XGBoost), stacking; often superior generalization.

67. Linear regression

Minimize \(\sum (y - \hat{y})^2\); assumptions: linearity, homoscedasticity, normal errors, independence.
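
A minimal least-squares fit in numpy on synthetic data (true intercept 2.0, slope 0.5), solving the same objective as above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fit y = b0 + b1*x by minimizing the sum of squared residuals.
x = rng.uniform(0, 10, size=50)
y = 2.0 + 0.5 * x + rng.normal(scale=1.0, size=50)

A = np.column_stack([np.ones_like(x), x])     # design matrix with intercept column
(b0, b1), *_ = np.linalg.lstsq(A, y, rcond=None)
print(f"intercept={b0:.2f} slope={b1:.2f}")   # should be near 2.0 and 0.5
```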

68. Logistic regression

Sigmoid outputs probability; decision boundary via log-odds; interpretable coefficients.

69. KNN

Instance-based; choose k and distance metric; sensitive to scaling and noise.

70. Naive Bayes

Assumes feature independence; strong baseline for text; fast and robust.

71. Decision trees

Recursive splits; interpretable; prone to overfitting without pruning.

72. Random forest

Ensemble of trees via bagging; reduce variance; feature importance estimates.

73. Gradient boosting

Sequential trees fit residuals; powerful but sensitive to hyperparameters.

74. SVM

Maximize margin with kernels (linear, RBF); effective in high-dimensional spaces.

75. Clustering: K-means

Partition into k clusters; minimizes within-cluster variance; requires scaling; spherical clusters.
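
A scikit-learn sketch on two synthetic blobs, scaling first because k-means relies on Euclidean distance.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Two well-separated synthetic clusters of 100 points each.
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(5, 1, (100, 2))])
X_scaled = StandardScaler().fit_transform(X)

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X_scaled)
print(km.cluster_centers_.round(2))
print(np.bincount(km.labels_))  # roughly 100 points per cluster
```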

76. Clustering: hierarchical

Agglomerative/divisive; dendrogram visual; flexible but computationally heavy.

77. Clustering: DBSCAN

Density-based; finds arbitrary shapes and noise; requires eps/minPts tuning.

78. Topic modeling

LDA uncovers topics via word distributions; unsupervised text analysis.

79. Evaluation: accuracy

Proportion correct; misleading in imbalanced data.

80. Precision

TP / (TP + FP); how often positives predicted are correct.

81. Recall (sensitivity)

TP / (TP + FN); how many actual positives captured.

82. Specificity

TN / (TN + FP); true negative rate.

83. F1-score

Harmonic mean of precision and recall; balances both.

84. Confusion matrix

2×2 table for binary classification: TP, FP, TN, FN; foundation for precision, recall, and related metrics.

85. ROC curve

TPR vs. FPR across thresholds; AUC summarizes separability.

86. PR curve

Precision vs. recall; preferred in heavy class imbalance.

87. Regression metrics

MAE (robust), MSE (penalizes large errors), RMSE (scale-aware), \(R^2\) (variance explained).
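
All four metrics computed by hand on hypothetical predictions, to make the definitions concrete.

```python
import numpy as np

# Hypothetical true vs. predicted values.
y_true = np.array([3.0, 5.0, 7.0, 9.0])
y_pred = np.array([2.5, 5.0, 8.0, 12.0])

err = y_true - y_pred
mae = np.abs(err).mean()          # robust to outliers
mse = (err ** 2).mean()           # penalizes large errors heavily
rmse = np.sqrt(mse)               # back on the original scale
r2 = 1 - (err ** 2).sum() / ((y_true - y_true.mean()) ** 2).sum()  # variance explained

print(f"MAE={mae:.2f} MSE={mse:.2f} RMSE={rmse:.2f} R2={r2:.2f}")
```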

88. Calibration

Agreement between predicted probabilities and observed frequencies; reliability diagrams.

89. Threshold selection

Choose decision threshold optimizing metric of interest (F1, cost-sensitive, Youden's J).

90. Neural networks

Layers of neurons; weights and biases; nonlinear activations enable complex functions.

91. Activation functions

ReLU, Leaky ReLU, Sigmoid, Tanh, Softmax (for multiclass probabilities).
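
Minimal numpy implementations of three of the activations above, applied to an example vector.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)            # zero out negatives, pass positives

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))      # squash to (0, 1)

def softmax(x):
    e = np.exp(x - x.max())              # subtract max for numerical stability
    return e / e.sum()                   # probabilities summing to 1

z = np.array([-2.0, 0.0, 3.0])
print(relu(z), sigmoid(z).round(3), softmax(z).round(3), sep="\n")
```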

92. Backpropagation

Gradient-based weight updates via chain rule; paired with optimizers like SGD/Adam.
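
A single-neuron sketch showing the chain rule in action; the input, label, and learning rate are made up, and real networks apply the same idea layer by layer.

```python
import numpy as np

# One sigmoid neuron trained on a single example: backprop is just the chain rule.
x, y_target = 1.5, 1.0          # hypothetical input and label
w, b, lr = 0.2, 0.0, 0.5        # initial weight, bias, learning rate

for _ in range(50):
    z = w * x + b                               # forward pass
    y_hat = 1.0 / (1.0 + np.exp(-z))            # sigmoid activation
    # Backward pass for squared error 0.5*(y_hat - y)^2:
    # dL/dz = (y_hat - y) * sigmoid'(z), where sigmoid'(z) = y_hat*(1 - y_hat)
    dz = (y_hat - y_target) * y_hat * (1.0 - y_hat)
    w -= lr * dz * x                            # dL/dw = dL/dz * x
    b -= lr * dz                                # dL/db = dL/dz

print(f"prediction after training: {y_hat:.3f} (target {y_target})")
```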

93. Vanishing/exploding gradients

Gradients shrink or blow up in deep nets; mitigated with normalization and residual connections.

94. Batch normalization

Normalizes layer inputs per batch; stabilizes training.

95. Dropout

Randomly zeroes activations; regularizes by preventing co-adaptation.

96. CNNs

Convolutions for spatial features; pooling reduces dimensions; used in computer vision.

97. RNNs

Sequential modeling with recurrent connections; struggles with long dependencies.

98. LSTM/GRU

Gated RNNs; capture long-term dependencies more effectively than vanilla RNNs.

99. Transformers

Attention mechanisms model global dependencies; state-of-the-art in NLP and beyond.

100. Word embeddings

Dense vector representations (Word2Vec, GloVe); capture semantic similarity.
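
A toy illustration of semantic similarity via cosine similarity; the 3-d vectors are hand-picked stand-ins, whereas real embeddings are learned and typically 100-300+ dimensional.

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity: 1 = same direction, 0 = orthogonal (unrelated).
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# Hand-picked toy vectors standing in for learned word embeddings.
king = np.array([0.8, 0.6, 0.1])
queen = np.array([0.7, 0.7, 0.2])
banana = np.array([0.1, 0.2, 0.9])

print(f"king~queen:  {cosine(king, queen):.2f}")   # high: related meanings
print(f"king~banana: {cosine(king, banana):.2f}")  # low: unrelated
```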