Statistics in Psychology (PSY-516)

Lesson 19: Confidence Interval - II

  • Confidence Interval for Population Means: To estimate population mean using sample measurements.

    • Requires sample mean, standard deviation, and observations count.

    • Sample mean does not affect confidence interval width.

    • Data forms a bell shape, with sample mean at the center.

  • Formula for Confidence Interval:

    • For Normal Distribution: CI = X̄ ± Z*(σ/√n)

    • For T-Distribution: replace Z* with t* when population standard deviation is unknown.

  • Standard Error of the Mean (SEM):

    • SEM = σ/√n

    • For a 95% Confidence Interval, use sample mean ± 2*SEM (if n >= 30); the 2 approximates the exact critical value of 1.96.

Lesson 20: Hypothesis Testing - I

  • Hypothesis Testing Process:

    • Step 1: State hypothesis about population (Null H0, Alternative H1).

    • Step 2: Set decision criteria, including alpha level for significance.

    • Step 3: Collect data and compute statistics.

    • Step 4: Make decision about H0 based on data and significance.

  • Key Concepts:

    • Null Hypothesis (H0): No effect or difference.

    • Alternative Hypothesis (H1): There is an effect or difference.

    • Alpha Levels: Commonly 0.05 (5%), 0.01 (1%).

Lesson 21: Hypothesis Testing - II

  • Assumptions of Hypothesis Testing:

    1. Random sampling.

    2. Independent observations.

    3. Population standard deviation remains unchanged.

    4. Normal sampling distribution.

  • Errors in Hypothesis Testing:

    • Type I Error: Rejecting a true null hypothesis (false positive).

    • Type II Error: Failing to reject a false null hypothesis (false negative).

Lesson 22: Hypothesis Testing - III

  • Factors Influencing Test Outcomes:

    • Variability of scores.

    • Sample size.

    • Statistical power (probability of correctly rejecting a false null hypothesis).

  • Power of a Test: 1 - β; factors affecting power include sample size, alpha level, and directionality of tests.

Lesson 23: Hypothesis Testing - IV

  • Sample Size and Significance:

    • Larger samples are more likely to yield statistically significant results, because they can detect even small differences.

    • Importance of calculating appropriate sample size using power analysis.

Lesson 24: T-Test - I

  • t-statistic: Used when population standard deviation is unknown.

    • Compares sample mean against population mean.

    • Assumes normal distribution of data.

Lesson 25: T-Test - II

  • Using t-test in SPSS: Describes steps to enter data and run a one-sample t-test comparing a sample mean to a population mean.

Lesson 26: Independent Sample T-Test - I

  • Independent T-Test: Compares two independent groups on a continuous measure. Assumes independent samples, normality, and equal variances.

Lesson 27: Independent Sample T-Test - II

  • Running T-Test in SPSS: Detailed process for entering data and interpreting results. Includes Post Hoc tests.

Lesson 28: Repeated Measure T-Test

  • Design: Same individuals measured under different conditions (within-subjects).

  • Wilcoxon Signed Rank Test: Non-parametric alternative to paired t-test.

Lesson 29: Analysis of Variance (ANOVA)

  • One-Way ANOVA: Compares three or more groups. Assesses whether group mean differences are significant.

Lesson 30: Two-Way ANOVA - I

  • Two-Way ANOVA: Examines interaction effects between two independent variables.

Lesson 31: Two-Way ANOVA - II

  • Interpreting Output: Examines main and interaction effects; analyzes significance across multiple groups.

Lesson 32: Two-Way ANOVA - III

  • Robustness of ANOVA: ANOVA tolerates moderate deviations from normality; the Type I error rate stays close to the chosen alpha level.

Lesson 33: Correlation Analysis - I

  • Correlation Definition: Assesses relationships between two variables using methods like Pearson's and Spearman's correlations.

Lesson 34: Correlation Analysis - II

  • Correlation Coefficient (r): Measures strength and direction of linear relationships between variables.

Lesson 35: Types of Correlation - I

  • Partial Correlation: Measures relationship while controlling for additional variables.

    • Point-Biserial Correlation: For one continuous and one dichotomous variable.

Lesson 36: Types of Correlation - II

  • Phi Coefficient: Measures correlation between two dichotomous variables.

Lesson 37: Introduction to Regression

  • Regression: Technique for predicting values of one variable based on another; uses linear equations.

Lesson 38: Simple Linear Regression

  • Simple Regression Steps: Data entry, analysis, and interpretation in SPSS. Evaluates predictor's influence on an outcome.

Lesson 39: Multiple Linear Regression

  • Multiple Regression Analysis: Involves multiple predictors to assess their cumulative effect on outcome variable.

Lesson 40: Non-Parametric Test

  • Introduction to Non-Parametric Tests: Used when assumptions of parametric tests are violated (e.g., small sample sizes).

Lesson 41: Chi-square Test for Independence - I

  • Chi-square Definition: Evaluates relationship between two categorical variables; requires frequency data.

Lesson 42: Chi-square Test for Independence - II

  • Interpreting Results: Assess association and calculate effect size.

Lesson 43: Mann-Whitney U-Test

  • Mann-Whitney U Test: Non-parametric alternative to independent sample t-test; compares independent groups.

Lesson 44: Wilcoxon Signed Rank Test

  • Wilcoxon Test: Used for comparing two related samples, based on ranking scores.

Lesson 45: Kruskal Wallis & Friedman Test

  • Kruskal-Wallis Test: Non-parametric alternative to one-way ANOVA; compares multiple independent groups.

  • Friedman Test: Non-parametric alternative to repeated-measures ANOVA; compares three or more related samples.

Statistics in Psychology (PSY-516)

Lesson 19: Confidence Interval - II

Confidence Interval for Population Means:

  • Purpose: To estimate the population mean based on sample measurements.

  • Requirements: To calculate a confidence interval, one needs the sample mean (X̄), the standard deviation (σ if the population value is known, otherwise the sample standard deviation s), and the number of observations (n).

  • Observation: The width of the confidence interval is not affected by the sample mean itself; rather, it is influenced by both the standard deviation and the sample size.

  • Data Representation: The distribution of data, when plotted, forms a bell-shaped curve with the sample mean positioned at the center.

  • Formula for Confidence Interval:

    • For Normal Distribution: CI = X̄ ± Z*(σ/√n)

    • For T-Distribution: When the population standard deviation is unknown, replace Z* with t*.

  • Standard Error of the Mean (SEM):

    • Given by SEM = σ/√n, which quantifies the amount of variability in the sample mean estimates of the population mean.

  • 95% Confidence Interval: Use sample mean ± 2*SEM when the sample size (n) is at least 30; the multiplier 2 approximates the exact critical value of 1.96. This means we are 95% confident that the interval captures the actual population mean.
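
The interval above can be sketched in Python (an illustration only; the course itself works in SPSS), using hypothetical scores and scipy's t-distribution helper:

```python
import numpy as np
from scipy import stats

# Hypothetical sample of n = 30 test scores
scores = np.array([12, 15, 14, 10, 13, 14, 16, 11, 13, 15,
                   12, 14, 13, 15, 12, 14, 16, 13, 12, 14,
                   15, 13, 14, 12, 13, 15, 14, 13, 12, 14])

mean = scores.mean()
sem = stats.sem(scores)  # s / sqrt(n), computed from the sample SD

# Exact 95% CI from the t-distribution (df = n - 1)
lo, hi = stats.t.interval(0.95, len(scores) - 1, loc=mean, scale=sem)

# The quick rule from the notes: mean +/- 2 * SEM
approx_lo, approx_hi = mean - 2 * sem, mean + 2 * sem
```

The exact interval is slightly wider than the 2*SEM rule because the critical value for df = 29 is about 2.045 rather than 2.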

Lesson 20: Hypothesis Testing - I

Hypothesis Testing Process:

  1. State Hypothesis: Formulate the null hypothesis (H0) which represents no effect or difference, and the alternative hypothesis (H1) that indicates the existence of an effect or difference.

  2. Set Decision Criteria: This includes establishing the alpha level (α) which denotes the threshold for significance, commonly set at 0.05 (5%) or 0.01 (1%).

  3. Collect Data: Gather relevant data and perform the necessary statistical computations.

  4. Decision Making: Draw conclusions about the null hypothesis based on the calculated statistics and the predetermined significance level.

Key Concepts:

  • Null Hypothesis (H0): The premise that states there is no statistically significant effect or difference.

  • Alternative Hypothesis (H1): The proposition that contradicts the null hypothesis, suggesting that there is an effect or difference.

  • Alpha Levels: Significance levels that indicate the probability of rejecting the null hypothesis incorrectly, with values of 0.05 and 0.01 commonly used in research.
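
The four steps can be traced with a hypothetical one-sample test in Python (illustrative only; scipy's `ttest_1samp` stands in for the SPSS procedure):

```python
import numpy as np
from scipy import stats

# Step 1: H0: mu = 100 (no difference), H1: mu != 100 (two-tailed)
# Step 2: decision criterion alpha = 0.05
alpha = 0.05

# Step 3: collect data and compute the test statistic
sample = np.array([104, 98, 110, 105, 99, 107, 103, 101, 106, 102])
t_stat, p_value = stats.ttest_1samp(sample, popmean=100)

# Step 4: decide about H0 by comparing p to alpha
reject_h0 = p_value < alpha
```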

Lesson 21: Hypothesis Testing - II

Assumptions of Hypothesis Testing:

  • Random Sampling: The samples must be drawn randomly from the population to avoid bias.

  • Independent Observations: The data points are assumed to be independent of each other.

  • Population Standard Deviation: It is assumed that the population standard deviation remains constant across samples.

  • Normal Distribution: The sampling distribution should be normal, especially in smaller sample sizes.

Errors in Hypothesis Testing:

  • Type I Error: Occurs when the null hypothesis is erroneously rejected when it is true; this is also known as a false positive.

  • Type II Error: Occurs when the null hypothesis is not rejected despite its being false; referred to as a false negative.

Lesson 22: Hypothesis Testing - III

Factors Influencing Test Outcomes:

  • Variability of Scores: The dispersion of data points affects the reliability of the statistical tests.

  • Sample Size: Larger sample sizes generally yield more reliable estimates and enhance the statistical power of the test.

  • Statistical Power: Refers to the probability of correctly rejecting a false null hypothesis, calculated as 1 - β; this power is impacted by sample size, alpha levels, and the directionality of the test.

Lesson 23: Hypothesis Testing - IV

Sample Size and Significance:

  • Significance Level: Larger sample sizes may lead to detecting even trivial differences that may not be practically significant.

  • Power Analysis: It is vital to calculate the appropriate sample size before conducting tests to ensure sufficient power to detect meaningful effects.
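
An a-priori power analysis of this kind can be sketched with statsmodels (a hypothetical scenario: a two-group study expecting a medium effect, Cohen's d = 0.5):

```python
from statsmodels.stats.power import TTestIndPower

# How many participants per group are needed to detect d = 0.5
# with alpha = 0.05 and 80% power in an independent-samples t-test?
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05,
                                   power=0.80, alternative='two-sided')
# conventionally reported as roughly 64 participants per group
```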

Lesson 24: T-Test - I

t-statistic:

  • Specifically utilized when the population standard deviation is unknown.

  • It compares the sample mean against a known population mean under the assumptions of normality.
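
The statistic is t = (X̄ − μ) / (s/√n). A minimal sketch with hypothetical data, checking the hand formula against scipy:

```python
import numpy as np
from scipy import stats

# Hypothetical sample tested against a claimed population mean of 5.0
sample = np.array([5.1, 4.8, 5.6, 5.0, 4.9, 5.3, 5.2, 4.7])
mu0 = 5.0

# t = (sample mean - mu0) / (s / sqrt(n)), with s the sample SD (ddof=1)
t_manual = (sample.mean() - mu0) / (sample.std(ddof=1) / np.sqrt(len(sample)))
t_scipy, p_value = stats.ttest_1samp(sample, popmean=mu0)
```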

Lesson 25: T-Test - II

Using t-test in SPSS:

  • Discusses the procedural steps to enter data into SPSS and execute a t-test to compare a sample mean to a population mean.

Lesson 26: Independent Sample T-Test - I

Independent T-Test:

  • Compares the means of two independent groups on a continuous measure.

  • It operates under the assumptions of independent samples, normality, and equal variances between groups.
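
Outside SPSS, the same test can be sketched in Python with hypothetical data, checking the equal-variances assumption first and falling back to Welch's version if it fails:

```python
from scipy import stats

# Hypothetical scores for two independent groups
group_a = [23, 21, 25, 22, 24, 26, 20, 23]
group_b = [27, 29, 26, 30, 28, 25, 31, 27]

# Levene's test probes the equal-variances assumption
lev_stat, lev_p = stats.levene(group_a, group_b)

# Student's t if variances look equal; Welch's t (equal_var=False) otherwise
t_stat, p_value = stats.ttest_ind(group_a, group_b,
                                  equal_var=(lev_p > 0.05))
```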

Lesson 27: Independent Sample T-Test - II

Running T-Test in SPSS:

  • Detailed instructions on how to input data and interpret results, including the execution of Post Hoc tests to further analyze significant differences.

Lesson 28: Repeated Measure T-Test

Design:

  • Measures the same individuals under different conditions (within-subjects), allowing for a comparison of how a score changes with varying conditions.

  • Wilcoxon Signed Rank Test: A non-parametric alternative used when the assumptions of the paired t-test are violated, applicable for ordinal data or non-normally distributed interval data.
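
Both the paired t-test and its non-parametric counterpart fit in one short Python sketch (hypothetical before/after scores for the same eight people):

```python
from scipy import stats

# Hypothetical anxiety scores before and after therapy (same participants)
before = [80, 75, 90, 85, 70, 88, 76, 92]
after = [85, 79, 94, 88, 75, 90, 80, 96]

t_stat, p_paired = stats.ttest_rel(before, after)   # parametric
w_stat, p_wilcoxon = stats.wilcoxon(before, after)  # non-parametric
```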

Lesson 29: Analysis of Variance (ANOVA)

One-Way ANOVA:

  • Facilitates the comparison of three or more groups, determining if any significant differences exist between the group means.
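
A one-way ANOVA on three hypothetical groups can be sketched as:

```python
from scipy import stats

# Hypothetical scores from three independent groups
g1 = [4, 5, 6, 5, 4]
g2 = [7, 8, 6, 7, 9]
g3 = [10, 9, 11, 10, 12]

# F compares between-group variance to within-group variance
f_stat, p_value = stats.f_oneway(g1, g2, g3)
```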

Lesson 30: Two-Way ANOVA - I

Two-Way ANOVA:

  • Examines interaction effects that may occur between two independent variables, providing insights into how they jointly affect the dependent variable.

Lesson 31: Two-Way ANOVA - II

Interpreting Output:

  • Involves examining the significance of main effects and interaction effects, analyzing how different groups compare across multiple dimensions.

Lesson 32: Two-Way ANOVA - III

Robustness of ANOVA:

  • The ANOVA method tolerates moderate deviations from normality; under such violations the Type I error rate remains close to the nominal alpha level.

Lesson 33: Correlation Analysis - I

Correlation Definition:

  • Evaluates the relationship between two variables, utilizing different methods including Pearson's correlation for linear relationships and Spearman’s for ranked data.

Lesson 34: Correlation Analysis - II

Correlation Coefficient (r):

  • Designed to measure both the strength and the direction of linear relationships between two variables, ranging from -1 to 1.
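
Both coefficients can be computed in one Python sketch on hypothetical data (study hours vs. exam scores):

```python
from scipy import stats

# Hypothetical study time (hours) and exam scores
hours = [1, 2, 3, 4, 5, 6, 7, 8]
score = [52, 55, 61, 60, 68, 70, 75, 80]

r, p_pearson = stats.pearsonr(hours, score)      # linear relationship
rho, p_spearman = stats.spearmanr(hours, score)  # rank-based relationship
```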

Lesson 35: Types of Correlation - I

Partial Correlation:

  • Measures the relationship between two variables while controlling for the influence of one or more additional variables, assisting in clarifying direct associations.

  • Point-Biserial Correlation: Specifically applied in cases involving one continuous variable and one dichotomous variable, thus evaluating relationships in two distinct types of data.
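
Both ideas can be illustrated in Python with hypothetical data; the partial correlation uses the standard first-order formula rather than a dedicated library call:

```python
import numpy as np
from scipy import stats

# Point-biserial: one dichotomous (0/1) and one continuous variable
passed = np.array([0, 0, 0, 1, 1, 1, 1, 0])
score = np.array([55, 60, 52, 78, 82, 75, 80, 58])
r_pb, p_pb = stats.pointbiserialr(passed, score)

# First-order partial correlation of x and y controlling for z:
# r_xy.z = (r_xy - r_xz * r_yz) / sqrt((1 - r_xz^2) * (1 - r_yz^2))
x = np.array([2.0, 3.1, 4.2, 5.0, 6.1, 7.2])
y = np.array([1.0, 2.2, 2.9, 4.1, 5.0, 6.2])
z = np.array([1.5, 1.7, 2.9, 3.5, 4.6, 5.8])
r_xy = stats.pearsonr(x, y)[0]
r_xz = stats.pearsonr(x, z)[0]
r_yz = stats.pearsonr(y, z)[0]
r_xy_z = (r_xy - r_xz * r_yz) / np.sqrt((1 - r_xz**2) * (1 - r_yz**2))
```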

Lesson 36: Types of Correlation - II

Phi Coefficient:

  • A method for measuring correlation between two dichotomous variables, providing insights into relationships in categorical data.
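
Since phi equals Pearson's r computed on 0/1 codes, a sketch needs no special function (hypothetical smoking/disease data):

```python
import numpy as np
from scipy import stats

# Two dichotomous variables coded 0/1
smoker = np.array([1, 1, 1, 0, 0, 0, 1, 0, 1, 0])
disease = np.array([1, 1, 0, 0, 0, 1, 1, 0, 1, 0])

# Phi is Pearson's r applied to the 0/1 codes
phi = stats.pearsonr(smoker, disease)[0]
# The 2x2 table here is a=4, b=1, c=1, d=4, so phi = (4*4 - 1*1)/25 = 0.6
```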

Lesson 37: Introduction to Regression

Regression:

  • A statistical technique used for predicting the values of one variable based on another, employing linear equations to establish relationships among variables.

Lesson 38: Simple Linear Regression

Simple Regression Steps:

  • Guide for data entry, analysis, and interpretation in SPSS focusing on evaluating the influence of a single predictor on the outcome variable.
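
The same model can be fit outside SPSS in a few lines of Python (hypothetical predictor and outcome):

```python
from scipy import stats

# Hypothetical predictor (study hours) and outcome (exam score)
study_hours = [1, 2, 3, 4, 5, 6]
exam_score = [50, 55, 65, 70, 72, 80]

res = stats.linregress(study_hours, exam_score)
# Prediction from the fitted line: score = intercept + slope * hours
predicted_4h = res.intercept + res.slope * 4
```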

Lesson 39: Multiple Linear Regression

Multiple Regression Analysis:

  • Involves the use of multiple predictors to assess their cumulative effect on a dependent outcome variable, enhancing predictive accuracy.

Lesson 40: Non-Parametric Test

Introduction to Non-Parametric Tests:

  • Utilize these tests when the assumptions underpinning parametric tests are violated, such as with small sample sizes or non-normal distributions.

Lesson 41: Chi-square Test for Independence - I

Chi-square Definition:

  • Evaluates the relationship between two categorical variables and requires analysis of frequency data, determining whether distributions of categorical variables differ from each other.

Lesson 42: Chi-square Test for Independence - II

Interpreting Results:

  • Examines the degree of association between variables and includes calculations of effect size, informing about the strength of the relationship.
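
The test and its effect size fit in one Python sketch on a hypothetical 2x2 frequency table:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical observed frequencies: rows = gender, columns = preference
observed = np.array([[30, 10],
                     [15, 25]])

chi2, p_value, dof, expected = chi2_contingency(observed)

# Effect size for a 2x2 table: phi = sqrt(chi2 / N)
phi = np.sqrt(chi2 / observed.sum())
```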

Lesson 43: Mann-Whitney U-Test

Mann-Whitney U Test:

  • Serves as a non-parametric alternative to the independent sample t-test, facilitating comparisons between independent groups when data do not meet required assumptions.
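
A minimal sketch with hypothetical ordinal ratings from two independent groups:

```python
from scipy.stats import mannwhitneyu

# Hypothetical ordinal ratings from two independent groups
group_a = [3, 4, 2, 5, 3, 4]
group_b = [7, 8, 6, 9, 7, 8]

u_stat, p_value = mannwhitneyu(group_a, group_b, alternative='two-sided')
```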

Lesson 44: Wilcoxon Signed Rank Test

Wilcoxon Test:

  • Implements ranking for scores when comparing two related samples, providing a non-parametric approach suitable for ordinal data.

Lesson 45: Kruskal Wallis & Friedman Test

Kruskal-Wallis Test:

  • Functions as a non-parametric alternative to one-way ANOVA, employed for comparing three or more independent groups when distributional assumptions are not met.

Friedman Test:

  • Serves as the non-parametric alternative to repeated-measures ANOVA, comparing three or more related samples on the basis of within-subject ranks.
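
Both the Kruskal-Wallis test (independent groups) and the Friedman test (the same cases measured under each condition) can be sketched on hypothetical data:

```python
from scipy.stats import kruskal, friedmanchisquare

# Hypothetical scores: three conditions, five cases each
c1 = [2, 3, 2, 4, 3]
c2 = [5, 6, 5, 7, 6]
c3 = [9, 8, 9, 10, 8]

h_stat, p_kw = kruskal(c1, c2, c3)            # independent groups
chi2_f, p_fr = friedmanchisquare(c1, c2, c3)  # same cases, repeated measures
```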