HPCS4031: Research Methods in Psychology - Lecture 7 and 8: t-test

p-value vs. Alpha Level

  • Establishing an Alpha Level

    • An alpha level is the probability of rejecting the null hypothesis when it is actually true.

    • For example, if a significant difference is observed between two groups, this could be attributed to random chance (noise) rather than a systematic difference caused by the independent variable (IV).

    • Statistics is fundamentally about probability; thus, nothing can ever be 100% certain.

    • In psychology, the common practice is to set the alpha level at 0.05 (5%):

    • This signifies that we accept a 5% risk of identifying a false significant effect (Type I error).

    • Alternative alpha levels:

    • More conservative: 0.01 (1%)

    • More liberal: 0.10 (10%) (rarely used)

Distribution of Sample Means

  • The distribution of sample means is divided into two sections:

    1. Sample means that are likely to be obtained if the null hypothesis (H0) is true.

    • These sample means are close to the null hypothesis and support it.

    2. Sample means that are very unlikely to be obtained if H0 is true.

    • These sample means are far different from the null hypothesis and lead to its rejection.

Alpha Level Explained

  • The alpha level or level of significance is a probability value that defines what is meant by "very unlikely" in hypothesis testing.

  • Commonly used alpha levels include:

    • 0.05 (5%)

    • 0.01 (1%)

    • 0.001 (0.1%)

  • Example with α = 0.05:

    • The most unlikely 5% of sample means (the extreme values) are separated from the most likely 95% (the central values). This extreme 5% constitutes the critical region.

Critical Region

  • Definition:

    • The critical region consists of the extreme sample values that are unlikely based on the established alpha level if the null hypothesis is true.

  • Boundaries:

    • Determined by the alpha level.

  • Decision Rule:

    • If sample data fall within this critical region, the null hypothesis is rejected.

Type I and Type II Errors

  • Type I Error:

    • Occurs when the null hypothesis is rejected when it is true.

    • Probability of Type I error is denoted by the alpha level (α).

  • Type II Error:

    • Happens when the null hypothesis is retained when it is false.

    • Probability of Type II error is referred to as Beta (β).

Comparing p-value to Alpha Level

  • p-value:

    • Represents the probability of obtaining a difference at least as extreme as the one observed if the null hypothesis (no true effect, only random noise) were true.

  • Connection between t-values and p-values: larger absolute t-values yield smaller p-values.

  • For instance, a p-value of 0.063 indicates a 6.3% probability of observing a difference this large by chance alone if the null hypothesis were true.

    • Given α = 0.05 (5%), this p-value leads to retaining the null hypothesis, since 6.3% exceeds the 5% risk we are willing to accept.

Rules for Hypothesis Testing

  • If p ≥ α, retain the null hypothesis (H0: μ1 = μ2).

  • If p < α, reject the null hypothesis in favor of the alternative (H1: μ1 ≠ μ2 for a two-tailed test).
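
These decision rules can be sketched as a small helper (a minimal sketch; the function name and the one-tailed conversion step are illustrative, not from the lecture):

```python
def decide(p_two_tailed, alpha=0.05, one_tailed=False):
    """Retain or reject H0 by comparing a p-value to the alpha level.

    For a directional (one-tailed) hypothesis, the two-tailed p-value
    is halved before the comparison, as described later in the notes.
    """
    p = p_two_tailed / 2 if one_tailed else p_two_tailed
    return "reject H0" if p < alpha else "retain H0"

print(decide(0.063))                   # 0.063 >= 0.05 -> retain H0
print(decide(0.063, one_tailed=True))  # 0.0315 < 0.05 -> reject H0
```

Note how the same data can cross the significance threshold once the p-value is halved, which is exactly why switching to a one-tailed test after the fact is frowned upon.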

SPSS Output for Independent Samples Test

  • Independent Samples Test Results:

    • t = -2.157

    • df = 8

    • Sig. (2-tailed) = 0.071

    • Mean Difference = -0.32000

    • Std. Error Difference = 0.14832

Calculating p-value Manually

  • After calculating the t-statistic, reference a t-distribution table to look up the corresponding p-value.

  • Requirements:

    1. Degrees of freedom (df)

    2. One-tailed or two-tailed test designation.
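
Instead of a printed t table, the two-tailed p-value can be approximated numerically from the t-distribution density (a sketch using only the standard library; the integration bound and step count are pragmatic choices, not part of the lecture):

```python
import math

def p_two_tailed(t, df, upper=60.0, steps=20000):
    """Approximate two-tailed p-value: twice the tail area beyond |t|."""
    # Normalizing constant of the t density with df degrees of freedom
    c = math.gamma((df + 1) / 2) / (math.sqrt(df * math.pi) * math.gamma(df / 2))

    def pdf(x):
        return c * (1 + x * x / df) ** (-(df + 1) / 2)

    # Trapezoid-rule integration of the density from |t| out to `upper`
    a = abs(t)
    h = (upper - a) / steps
    area = 0.5 * (pdf(a) + pdf(upper))
    for i in range(1, steps):
        area += pdf(a + i * h)
    return 2 * area * h

print(round(p_two_tailed(-2.157, 8), 3))  # the lecture's t = -2.157, df = 8
```

With t = −2.157 and df = 8 this lands near the 0.063 figure quoted earlier in the notes.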

Degrees of Freedom (df) Explained

  • For sample variance, the calculation uses n - 1.

    • This reflects that for n data points, n - 1 can vary freely; the last point is determined by the previous n - 1 points.

  • Thus, for a sample of n scores, the degrees of freedom are: df = n − 1.

    • Illustratively, with n = 3 data points and a mean of 20 (total = 60), df = 3 − 1 = 2: once any two scores are chosen, the third is fixed by the total.
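
The "last score is determined" idea can be demonstrated directly (a minimal sketch; the two free scores are made-up values for illustration):

```python
# With n = 3 scores and a known mean of 20, the total must equal 60.
n, mean = 3, 20
total = n * mean                       # 60

free_scores = [25, 15]                 # any n - 1 = 2 scores can vary freely (hypothetical)
last_score = total - sum(free_scores)  # the final score is forced: 60 - 40 = 20

print(last_score)                      # 20
print("df =", n - 1)                   # df = 2
```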

Independent Samples t-test and Degrees of Freedom Calculation

  • For independent measures t-statistic:

    • df = df1 + df2 = (n1 − 1) + (n2 − 1).

    • Example:

    • If both samples (n1, n2) are 5, then:
      df = (5 − 1) + (5 − 1) = 8.
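
The combined degrees of freedom are trivial to compute, but writing it out mirrors the formula above:

```python
def df_independent(n1, n2):
    # Combined degrees of freedom for an independent-measures t-test
    return (n1 - 1) + (n2 - 1)

print(df_independent(5, 5))   # (5 - 1) + (5 - 1) = 8, as in the example
```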

One-Tailed vs. Two-Tailed Tests

  • For example, in a comparison of online vs face-to-face learning, a two-tailed test is typically appropriate.

    • To convert a two-tailed p-value to a one-tailed, divide it by 2.

    • A one-tailed test is viewed as more liberal, since it makes significant results easier to obtain; switching to one after seeing the data is sometimes considered "cheating".

Between-Subjects/Independent-Measures Design

Goals of Independent-Measures Research Study
  • To evaluate the mean difference between two populations or treatment conditions.

  • Null Hypothesis:

    • H0: μ1 = μ2 (no difference between the population means).

    • Alternative Hypothesis:

    • H1: μ1 ≠ μ2 (indicating there is a mean difference).

Independent Measures t Formula Explanation

  • The independent-measures t statistic evaluates a hypothesis concerning the difference between two population means.

  • Notable components:

    • The formula is built on the mean difference M1 − M2 from the sample data.

    • The expected difference under the null hypothesis: μ1 − μ2 = 0.

Standard Error and Its Role

  • Standard error: indicates the expected discrepancy between the sample mean difference (M1 − M2) and the population mean difference (μ1 − μ2).

  • Two sources of error exist when evaluating the t statistic for independent measures:

    1. From sample one.

    2. From sample two.

Conclusion for t-statistic Calculation

  • To calculate the t-statistic accurately:

    • Utilize t = (M1 − M2) / s(M1 − M2), where the standard error s(M1 − M2) factors in the variance from both samples.

Example of Difference Evaluation

  • Analysis of significant differences must account for both signal (effect of treatment) and noise (measurement error, individual variations, etc.).

  • For instance, a course mode comparison reveals means of 3.06 and 3.38 respectively:

    • A difference of 0.32 could stem from either signal (actual treatment effects) or noise (random variations).

  • The interpretation supports either the null hypothesis (H0: μ1 = μ2) or the alternative hypothesis (H1: μ1 ≠ μ2).

Testing Procedures by Hand

  1. Compute the observed t statistic: t_obs = (X̄1 − X̄2) / s, i.e., the signal (mean difference) divided by the noise (standard error s).

  2. Check whether the two samples are of equal size:

    • If n1 = n2, the basic t-statistic formula can be used.

    • If the sample sizes differ, pool the two sample variances (weighted by their degrees of freedom) to obtain an unbiased estimate of the standard error.
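
The by-hand procedure can be sketched end to end. The two samples below are hypothetical scores chosen so the group means match the 3.06 vs. 3.38 course-mode example; their variances (and therefore the resulting t) are not taken from the SPSS output:

```python
import statistics

def independent_t(a, b):
    """Pooled-variance independent-measures t statistic (the by-hand steps)."""
    n1, n2 = len(a), len(b)
    m1, m2 = statistics.mean(a), statistics.mean(b)
    # Pool the two sample variances, weighted by their degrees of freedom
    sp2 = ((n1 - 1) * statistics.variance(a)
           + (n2 - 1) * statistics.variance(b)) / (n1 + n2 - 2)
    se = (sp2 / n1 + sp2 / n2) ** 0.5   # standard error of (M1 - M2)
    return (m1 - m2) / se, n1 + n2 - 2  # t_obs and df

face_to_face = [3.0, 2.9, 3.3, 3.1, 3.0]   # hypothetical scores, mean = 3.06
online       = [3.4, 3.2, 3.5, 3.6, 3.2]   # hypothetical scores, mean = 3.38
t_obs, df = independent_t(face_to_face, online)
print(round(t_obs, 3), df)
```

The resulting t is then compared against the critical value for df = 8 (±2.306 at α = 0.05, two-tailed).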

Steps for Example Observations

  1. Ensure hypothesis clarity before assessment.

  2. Execute systematic calculations for variances, mean differences, and standard error values to finalize t-statistics.

  3. Outcome interpretation rests on comparing the calculated t to critical t-values (e.g., the two-tailed critical t at α = 0.05 with df = 8 is ±2.306).

Summary of Testing Decisions

  • Compare the obtained t against the critical thresholds to decide whether to retain or reject H0, based on the critical regions defined by the alpha level.

  • The decision is governed by the significance level set at the outset of the study.
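
Comparing the obtained t against the critical threshold is a one-line rule (a sketch; the critical value itself must still come from a t table for the test's df and alpha):

```python
def critical_decision(t_obs, t_critical):
    # Two-tailed rule: reject H0 when t falls in either tail's critical region
    return "reject H0" if abs(t_obs) >= t_critical else "retain H0"

print(critical_decision(-2.157, 2.306))   # |t| < 2.306 -> retain H0
```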

Student Example for Practice

  • A study investigating the effect of background music on recall used a music group (n = 20) and a quiet group (n = 15), with mean recall scores of 12 and 16 respectively:

    • Using the reported sample variances, compute the t statistic to evaluate whether the group difference is statistically significant.
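
Since the exercise's variance values are not reproduced here, the sketch below plugs in hypothetical sample variances (s1² = 9 for the music group, s2² = 16 for the quiet group) purely to show the summary-statistics form of the calculation:

```python
def t_from_summary(m1, s1_sq, n1, m2, s2_sq, n2):
    """Pooled-variance t computed from summary statistics (means, variances, sizes)."""
    df = (n1 - 1) + (n2 - 1)
    sp2 = ((n1 - 1) * s1_sq + (n2 - 1) * s2_sq) / df   # pooled variance
    se = (sp2 / n1 + sp2 / n2) ** 0.5                  # standard error of (M1 - M2)
    return (m1 - m2) / se, df

# n = 20 (music, mean recall 12) vs. n = 15 (quiet, mean recall 16);
# the variances 9 and 16 are hypothetical stand-ins, not from the exercise.
t_obs, df = t_from_summary(12, 9, 20, 16, 16, 15)
print(round(t_obs, 2), "df =", df)
```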

Practice Questions

The following application-based multiple-choice questions are derived from the lecture content:

  1. A researcher conducts a study and obtains a p-value of 0.075. If the alpha level was set to 0.05, what is the correct statistical decision?
    A) Reject the null hypothesis, as the result is statistically significant.
    B) Retain the null hypothesis, as the p-value is greater than the alpha level.
    C) Reject the alternative hypothesis, as the p-value indicates no effect.
    D) The p-value is close to the alpha level, so more data is needed.

  2. A psychology student is analyzing data from an independent samples t-test comparing two groups. The SPSS output shows t = 2.50, df = 28, and Sig. (2-tailed) = 0.017. If the researcher initially hypothesized a directional difference (e.g., Group A scores higher than Group B), what would be the p-value for a one-tailed test, and what decision should be made if α = 0.05?
    A) p-value = 0.017; Retain the null hypothesis.
    B) p-value = 0.0085; Reject the null hypothesis.
    C) p-value = 0.034; Retain the null hypothesis.
    D) p-value = 0.0085; Retain the null hypothesis.

  3. In an experiment, Group 1 has n1 = 12 participants and Group 2 has n2 = 15 participants. Both groups are independent. What is the correct degrees of freedom (df) for an independent measures t-test?
    A) df = 27
    B) df = 25
    C) df = 11
    D) df = 14

  4. A research team publishes findings stating they rejected the null hypothesis with an alpha level of 0.01. However, it turns out that there was no real effect in the population. Which type of error did the research team commit?
    A) A Type II error.
    B) A Beta error.
    C) A Type I error.
    D) A standard error.

  5. A researcher wants to be very cautious about making a Type I error. Which of the following alpha levels would be most appropriate for their study?
    A) 0.10
    B) 0.05
    C) 0.01
    D) 0.001


Answer Key:

  1. B

  2. B

  3. B

  4. C

  5. D