Business Analytics

1
New cards

What is Business Analytics?

Business Analytics is the process of transforming data into actions through analysis and insights in the context of organizational decision making and problem-solving.

2
New cards

Three Disciplines of Business Analytics

Business Intelligence (collects and manages data), Statistics (analyzes data relationships), and Operations Research/Management Science (provides solutions using models).

3
New cards

Examples of Business Analytics Applications

Pricing, customer segmentation, merchandising, location analysis, supply chain design, staffing, and healthcare optimization.

4
New cards

Importance of Business Analytics

Organizations that effectively use analytics report improved decision-making, efficiency, productivity, and customer satisfaction.

5
New cards

Challenges in Business Analytics

Lack of understanding, competing priorities, insufficient analytical skills, poor data quality, and unclear ROI.

6
New cards

Descriptive Analytics

Analyzes historical data to understand trends and patterns. Methods include descriptive statistics, charts, and probability distributions.

7
New cards

Predictive Analytics

Uses statistical, information systems, and operations research methods to predict future outcomes. Examples: regression and data mining.

8
New cards

Prescriptive Analytics

Applies decision science and optimization models to recommend actions and allocate resources efficiently.

9
New cards

Business Analytics vs. Predictive Modeling

Business Analytics uses data-driven insights for decisions; Predictive Modeling uses statistical techniques to forecast unknown events.

10
New cards

Common Predictive Models

Decision Trees, Regression Models, Cluster Models, and Time Series Models.

11
New cards

Applications of Predictive Modeling

Forecasting financial performance, predicting consumer behavior, loan defaults, and product life cycles.

12
New cards

Sales-Promotion Decision Model

Predicts sales based on variables like price, coupons, and advertising to guide marketing decisions.

13
New cards

Regression in Predictive Modeling

Models the relationship between dependent and independent variables using equations like Y = f(X) + error.

14
New cards

Regression vs Classification

Regression predicts continuous values; Classification predicts categorical outcomes (e.g., fraud detection, spam filtering).

15
New cards

Elements of Prediction

Prediction involves assigning a value to an unknown target variable (y) based on known predictors (x).

16
New cards

Logic of Predictive Modeling

Original data reveals X-Y relationships; live data applies the model to predict unknown outcomes.

17
New cards

Simple Linear Regression Example

Predict hotel price (Y) from distance to city center (X) using a linear model: Price = β0 + β1 × Distance.
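
A rough sketch of this card in Python, assuming statsmodels is available (the distance and price values are made up for illustration):

```python
import numpy as np
import statsmodels.api as sm

# Illustrative data: distance to the city center (km) and hotel price.
distance = np.array([0.5, 1.0, 2.0, 3.5, 5.0, 7.5, 10.0])
price = np.array([220, 200, 185, 160, 140, 120, 100])

# Fit Price = b0 + b1 * Distance by ordinary least squares.
X = sm.add_constant(distance)          # adds the intercept column
model = sm.OLS(price, X).fit()
print(model.params)                    # intercept b0 and slope b1

# Point and interval prediction for a hotel 4 km out (see card 19).
new_X = sm.add_constant(np.array([4.0]), has_constant="add")
pred = model.get_prediction(new_X)
print(pred.summary_frame(alpha=0.05))  # point prediction, confidence interval (mean_ci),
                                       # and prediction interval (obs_ci)
```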

18
New cards

Types of Predictions

Quantitative prediction (numeric values), Probability prediction (likelihood), Classification (categories).

19
New cards

Point vs Interval Prediction

Point gives a single predicted value; Interval gives a range where the true value likely falls.

20
New cards

Forecasting

Predicts future values of a variable, often using time series data.

21
New cards

Steps of Simple Linear Regression

1) Prepare data, 2) Explore data, 3) Fit model, 4) Test significance, 5) Evaluate model, 6) Check assumptions, 7) Interpret coefficients.

22
New cards

Significance Testing in Regression

Includes coefficient tests, ANOVA table, confidence intervals, and F-statistics to validate model fit.

23
New cards

Prediction Error

The difference between actual and predicted values (ei = yi - ŷi). Measures prediction quality.

24
New cards

Loss Function

Translates prediction errors into a numeric measure for decision-making; MSE is the most common.

25
New cards

Mean Squared Error (MSE)

Average squared difference between actual and predicted values. Lower MSE means better accuracy.
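
A quick worked illustration of cards 23 to 25, with made-up actual and predicted values:

```python
import numpy as np

actual = np.array([10.0, 12.0, 15.0, 11.0])     # y_i
predicted = np.array([9.5, 12.5, 14.0, 11.5])   # y-hat_i

errors = actual - predicted                     # e_i = y_i - y-hat_i
mse = np.mean(errors ** 2)                      # mean squared error
print(errors)                                   # [ 0.5 -0.5  1.  -0.5]
print(mse)                                      # 0.4375
```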

26
New cards

Bias in Prediction

Error due to simplifying assumptions; high bias means the model is too simple (underfitting).

27
New cards

Variance in Prediction

Measures sensitivity to data changes; high variance means overfitting.

28
New cards

Overfitting

Model fits training data too well, capturing noise rather than trend.

29
New cards

Underfitting

Model too simple to capture underlying trends in data.

30
New cards

Bias-Variance Tradeoff

As bias decreases, variance increases. The goal is balancing both to minimize total error.

31
New cards

Model Evaluation Metrics

Use MSE, R-squared, and adjusted R-squared to compare models.

32
New cards

AIC and BIC

Criteria combining model fit and complexity; lower values indicate better balance between accuracy and simplicity.
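
A minimal sketch of reading these criteria off fitted models, assuming statsmodels (the data are simulated and the variable names arbitrary):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x1, x2 = rng.normal(size=(2, 100))
y = 3 + 2 * x1 + rng.normal(size=100)   # x2 is pure noise

small = sm.OLS(y, sm.add_constant(x1)).fit()
large = sm.OLS(y, sm.add_constant(np.column_stack([x1, x2]))).fit()

# Lower AIC/BIC favors the simpler model when the extra predictor adds nothing.
print(small.aic, small.bic)
print(large.aic, large.bic)
```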

33
New cards

Training vs Test Set

Training data builds the model; test data evaluates predictive accuracy.

34
New cards

Cross-Validation

Technique that splits data into k folds to estimate model performance more reliably.
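
A short scikit-learn sketch of k-fold cross-validation on simulated data (5 folds here; for LOOCV one would pass cv=LeaveOneOut() instead):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(size=50)

# 5-fold CV: each fold is held out once while the model trains on the rest.
cv = KFold(n_splits=5, shuffle=True, random_state=1)
scores = cross_val_score(LinearRegression(), X, y,
                         scoring="neg_mean_squared_error", cv=cv)
print(-scores.mean())   # average validation MSE across the 5 folds
```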

35
New cards

Leave-One-Out Cross Validation (LOOCV)

Special case of k-fold where k = n; each observation is used once as a validation sample.

36
New cards

K-Fold vs LOOCV

LOOCV has less bias but higher variance; k-fold is computationally efficient and more stable.

37
New cards

Best Model Selection

The best model minimizes prediction error while avoiding overfitting; uses validation or cross-validation results.

38
New cards

What is the purpose of model building for prediction?

To create a statistical model that best predicts the response variable using relevant explanatory variables while avoiding overfitting.

39
New cards

What is the main issue with high-dimensional data in regression models?

When there are too many predictors relative to observations, it can lead to overfitting, multicollinearity, and difficulty in interpretation.

40
New cards

What does the p-value of an explanatory variable represent in regression?

It indicates how helpful that variable is in explaining the variation in the response; lower p-values suggest stronger relationships.

41
New cards

When can insignificant variables be removed from a model?

If removing them does not significantly decrease adjusted R² or worsen prediction accuracy.

42
New cards

What are two conflicting goals in variable selection?

Including more predictors to improve model accuracy vs. limiting predictors to reduce variance and improve interpretability.

43
New cards

What is stepwise regression?

A step-by-step method of adding or removing predictor variables to identify the most effective set of predictors for the model.

44
New cards

What is forward selection in stepwise regression?

A bottom-up approach that begins with no predictors and adds the most significant ones sequentially based on criteria like p-value or AIC.
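
A simple, illustrative forward-selection loop using AIC as the criterion (the data and the forward_selection helper are invented for this sketch; statsmodels is assumed):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
X = pd.DataFrame(rng.normal(size=(100, 4)), columns=["x1", "x2", "x3", "x4"])
y = 2 * X["x1"] - 1.5 * X["x3"] + rng.normal(size=100)

def forward_selection(X, y):
    """Greedily add the predictor that lowers AIC the most; stop when none improves it."""
    selected, remaining = [], list(X.columns)
    best_aic = sm.OLS(y, np.ones(len(y))).fit().aic   # intercept-only model
    while remaining:
        aics = {c: sm.OLS(y, sm.add_constant(X[selected + [c]])).fit().aic
                for c in remaining}
        best_var = min(aics, key=aics.get)
        if aics[best_var] >= best_aic:                # no candidate improves AIC
            break
        selected.append(best_var)
        remaining.remove(best_var)
        best_aic = aics[best_var]
    return selected

print(forward_selection(X, y))   # likely ['x1', 'x3'] given how y was built
```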

45
New cards

What is backward elimination in stepwise regression?

A top-down approach that starts with all candidate variables and removes the least significant ones until only significant predictors remain.

46
New cards

What is the combined stepwise method?

A hybrid of forward and backward selection that allows adding and removing variables as the model updates dynamically.

47
New cards

What is the Akaike Information Criterion (AIC)?

A measure of model quality that balances model fit and complexity; lower AIC indicates a better model.

48
New cards

What is Mallows’ Cp statistic used for?

To evaluate model bias and variance; the best model has a Cp value close to the number of predictors plus the intercept.

49
New cards

What is the Best Subset approach?

A method that evaluates all possible combinations of predictors and selects the model with the best statistical criteria like R², AIC, Cp, or BIC.

50
New cards

What are advantages of stepwise regression?

It identifies the most relevant predictors, reduces overfitting, and provides interpretable insights.

51
New cards

What are disadvantages of stepwise regression?

It may overfit, struggle with multicollinearity, introduce selection bias, and assume linear relationships.

52
New cards

What is the Bayesian Information Criterion (BIC)?

Similar to AIC but imposes a larger penalty for model complexity; lower BIC indicates a more parsimonious model.

53
New cards

What is dimensionality reduction?

Techniques used to reduce the number of predictors, such as principal component analysis (PCA), factor analysis, or Lasso regression.

54
New cards

What is the omitted variable problem in regression?

It occurs when an important variable is left out of the model, causing bias in the estimated coefficients of included variables.

55
New cards

When does omitted variable bias occur?

When the omitted variable affects the response variable and is correlated with one or more included predictors.

56
New cards

What is an example of omitted variable bias?

Excluding 'education level' when modeling income with 'work experience' can bias results since education influences both income and experience.

57
New cards

How does multiple regression help prevent omitted variable bias?

By including multiple relevant predictors, it accounts for shared variation and isolates each variable’s true effect on the outcome.

58
New cards

What are the two main hypothesis tests in multiple regression?

1) The F-test for overall model significance and 2) t-tests for individual coefficients.

59
New cards

What does the F-test in regression evaluate?

Whether at least one predictor variable in the model has a nonzero coefficient, indicating the model provides explanatory power.

60
New cards

What does a t-test for regression coefficients evaluate?

Whether a specific predictor has a statistically significant effect on the response variable, holding others constant.

61
New cards

What is R² in regression analysis?

The proportion of variance in the dependent variable explained by the independent variables in the model.

62
New cards

What is adjusted R²?

A modified version of R² that accounts for the number of predictors, preventing artificial inflation when adding unnecessary variables.

63
New cards

What is the correlation coefficient (r)?

A measure of the linear relationship between two variables ranging from -1 to +1.

64
New cards

What does the regression intercept represent?

The expected value of the response variable when all predictors are zero.

65
New cards

How is the slope coefficient interpreted in multiple regression?

It represents the expected change in the response variable for a one-unit change in that predictor, holding all other variables constant.

66
New cards

Why can the same variable appear to have opposite effects in simple and multiple regression?

Because simple regression ignores other predictors, while multiple regression controls for them, revealing the variable’s true effect.

67
New cards

What are the four key assumptions of multiple regression?

Linearity, Normality of errors, Homoscedasticity (constant variance), and Independence of errors.

68
New cards

What additional assumptions are important in regression?

No significant outliers and correct model specification.

69
New cards

What does linearity mean in regression?

That the relationship between each predictor and the response variable is linear.

70
New cards

How can nonlinearity be detected?

Using scatterplots or residual plots showing curved or systematic patterns.

71
New cards

What are ways to correct nonlinearity?

Add polynomial terms, transform predictors (e.g., log or square root), or include interaction terms.

72
New cards

What is homoscedasticity?

The assumption that the variance of the residuals is constant across all levels of predicted values.

73
New cards

What is heteroscedasticity?

When residual variance changes with fitted values, often forming a funnel shape in residual plots.

74
New cards

How can heteroscedasticity be fixed?

By transforming the dependent variable (e.g., log), using weighted regression, or robust standard errors.
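
One way to illustrate these fixes, assuming statsmodels: simulate data whose error variance grows with x, then compare ordinary and heteroscedasticity-robust (HC3) standard errors:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
x = rng.uniform(1, 10, size=200)
y = 20 + 2 * x + rng.normal(scale=x, size=200)   # error spread grows with x

X = sm.add_constant(x)
ols = sm.OLS(y, X).fit()                    # ordinary standard errors
robust = sm.OLS(y, X).fit(cov_type="HC3")   # heteroscedasticity-robust errors

print(ols.bse)      # can be misleading under heteroscedasticity
print(robust.bse)   # robust standard errors

# Alternative: model log(y) to stabilize the variance (y is positive here).
log_fit = sm.OLS(np.log(y), X).fit()
```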

75
New cards

What does independence of errors mean?

That residuals are uncorrelated; one observation’s error does not predict another’s.

76
New cards

What causes correlated errors?

Time-series data or clustered samples where measurements are not independent.

77
New cards

How can correlated errors be detected?

Using residual lag plots, Durbin-Watson tests, or autocorrelation plots.

78
New cards

What is the Durbin-Watson test used for?

To detect autocorrelation in residuals; values near 2 indicate no autocorrelation, while values well below 2 (e.g., under 1.4) suggest positive autocorrelation.
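
A small sketch using statsmodels' durbin_watson function on residuals from a model with deliberately autocorrelated (AR(1)) errors:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson

rng = np.random.default_rng(4)
x = np.arange(100, dtype=float)
e = np.zeros(100)
for t in range(1, 100):
    e[t] = 0.8 * e[t - 1] + rng.normal()   # AR(1) errors: each error carries over
y = 1 + 0.5 * x + e

resid = sm.OLS(y, sm.add_constant(x)).fit().resid
print(durbin_watson(resid))   # well below 2 here, signaling positive autocorrelation
```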

79
New cards

What are outliers in regression?

Observations with extreme response values that do not follow the overall data pattern.

80
New cards

What are leverage points?

Observations with extreme predictor values that can strongly influence the regression line.

81
New cards

How can outliers be detected?

By examining standardized residuals (values beyond ±3), Cook’s distance, or DFFITS values.

82
New cards

What is Cook’s Distance?

A measure of how much a data point influences the fitted regression coefficients; values >1 suggest influential points.
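
An illustrative check, assuming statsmodels: plant one influential observation (extreme in both x and y) and read off Cook's distance and studentized residuals from the fitted model's influence measures:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
x = rng.normal(size=60)
y = 1 + 2 * x + rng.normal(scale=0.5, size=60)
x[0], y[0] = 8.0, -20.0   # extreme predictor (leverage) and extreme response (outlier)

fit = sm.OLS(y, sm.add_constant(x)).fit()
influence = fit.get_influence()

cooks_d = influence.cooks_distance[0]           # Cook's distance per observation
student = influence.resid_studentized_internal  # standardized residuals

print(np.where(cooks_d > 1)[0])                 # flagged as influential (Cook's D > 1)
print(np.where(np.abs(student) > 3)[0])         # residuals beyond ±3
```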

83
New cards

What are consequences of fitting a model with outliers?

Biased coefficients, inflated standard errors, and misleading p-values.

84
New cards

How can outliers be handled?

Investigate, remove, or model them separately; retrain the model iteratively to improve fit.

85
New cards

What is multicollinearity?

When two or more predictors are highly correlated, making it difficult to isolate individual effects.

86
New cards

How can multicollinearity be detected?

Using correlation matrices, scatterplots, or the Variance Inflation Factor (VIF).

87
New cards

What is the Variance Inflation Factor (VIF)?

A metric that quantifies how much the variance of a coefficient is inflated due to multicollinearity; VIF > 10 suggests severe collinearity.
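
A short sketch of VIF with statsmodels' variance_inflation_factor, on simulated data where x2 is nearly a copy of x1:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(6)
x1 = rng.normal(size=100)
x2 = x1 + rng.normal(scale=0.1, size=100)   # nearly duplicates x1: strong collinearity
x3 = rng.normal(size=100)
X = sm.add_constant(pd.DataFrame({"x1": x1, "x2": x2, "x3": x3}))

# VIF for each predictor (the constant column is skipped).
for i, name in enumerate(X.columns):
    if name != "const":
        print(name, variance_inflation_factor(X.values, i))   # x1, x2 >> 10; x3 near 1
```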

88
New cards

What are the effects of multicollinearity?

It causes inflated standard errors, unstable coefficients, and unreliable significance tests.

89
New cards

How can multicollinearity be mitigated?

By removing or combining correlated predictors, or using principal component or partial least squares regression.

90
New cards

What is the main goal of Principal Component Analysis (PCA)?

Reduce dimensionality by finding new orthogonal axes (principal components) that capture the most variance.

91
New cards

Name some uses of PCA.

Reduce dimensions for computation, visualize high-dimensional data, remove noise, find patterns, and identify outliers.

92
New cards

How are principal components ordered?

By decreasing explained variance: PC1 explains the most, PC2 the second-most, etc.

93
New cards

What does a high eigenvalue mean in PCA?

A principal component (eigenvector) with a high eigenvalue explains a large portion of the variance.

94
New cards

Why standardize variables before PCA?

To put variables on the same scale so variance contributions are comparable (especially when units differ).
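
A minimal PCA sketch with scikit-learn, standardizing first (the data and scales are simulated for illustration):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

rng = np.random.default_rng(7)
base = rng.normal(size=(100, 2))
# Five correlated features on very different scales.
X = np.column_stack([base[:, 0] * 1000,
                     base[:, 0] * 2 + rng.normal(size=100),
                     base[:, 1],
                     base[:, 1] * 5 + rng.normal(size=100),
                     rng.normal(size=100)])

X_std = StandardScaler().fit_transform(X)   # put every variable on the same scale
pca = PCA().fit(X_std)

print(pca.explained_variance_ratio_)        # PC1 explains the most, PC2 the second-most, ...
scores = pca.transform(X_std)[:, :2]        # project onto the first two components
```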

95
New cards

What problem do shrinkage (regularization) methods address?

High variance and overfitting when there are many predictors or multicollinearity.

96
New cards

Write the ridge regression loss function (conceptually).

Least squares loss plus λ times the sum of squared coefficients (L2 penalty).

97
New cards

Write the lasso regression loss function (conceptually).

Least squares loss plus λ times the sum of absolute coefficients (L1 penalty).

98
New cards

How does ridge regression affect coefficients?

Shrinks coefficients towards zero but does not set them exactly to zero.

99
New cards

How does lasso regression affect coefficients?

Can shrink some coefficients exactly to zero, performing variable selection.

100
New cards

What is an elastic net?

A combination of L1 (lasso) and L2 (ridge) penalties; balances shrinkage and variable selection.
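
A closing sketch comparing the three penalties with scikit-learn; here alpha plays the role of λ, and the data and penalty strengths are purely illustrative:

```python
import numpy as np
from sklearn.linear_model import Ridge, Lasso, ElasticNet

rng = np.random.default_rng(8)
X = rng.normal(size=(80, 10))
y = X[:, 0] * 3 - X[:, 1] * 2 + X[:, 2] + rng.normal(size=80)   # only 3 predictors matter

ridge = Ridge(alpha=1.0).fit(X, y)                     # L2: shrinks, never exactly zero
lasso = Lasso(alpha=0.1).fit(X, y)                     # L1: zeroes out weak predictors
enet = ElasticNet(alpha=0.1, l1_ratio=0.5).fit(X, y)   # mix of L1 and L2

print(np.round(ridge.coef_, 2))
print(np.round(lasso.coef_, 2))   # expect zeros on the irrelevant predictors
print(np.round(enet.coef_, 2))
```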