Research Methods I Flashcards

Factor Analysis

  • Factor analysis is a variable reduction technique (Field #17.2, page 628).
  • It assesses how many unobserved constructs (latent factors) underlie the correlations among item scores, such as on a Big Five questionnaire.
  • It identifies groups of intercorrelated items within a dataset (see the sketch below).
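A minimal sketch of the starting point of factor analysis, assuming item scores sit in a pandas DataFrame (the file name big5_pilot.csv is hypothetical): the item intercorrelation matrix and its eigenvalues, which are what the scree plot later visualizes.

```python
import numpy as np
import pandas as pd

# Hypothetical: 'items' holds one column per questionnaire item and
# one row per respondent (file name assumed for illustration).
items = pd.read_csv("big5_pilot.csv")

# Factor analysis starts from the item intercorrelation matrix:
# clusters of highly correlated items suggest a shared latent factor.
R = items.corr()

# The eigenvalues of R show how much variance each potential factor
# explains; a few large eigenvalues point to a few underlying factors.
eigenvalues = np.linalg.eigvalsh(R.values)[::-1]  # descending order
print(eigenvalues)
```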

Example: Big Five Questionnaire

  • Items:
    • Is talkative
    • Is full of energy
    • Generates a lot of enthusiasm
    • Is helpful and unselfish to others
    • Has a forgiving nature
    • Is generally trusting
    • Does a thorough job
    • Is a reliable worker
    • Perseveres until the task is finished
    • Is depressed, blue
    • Can be tense
    • Can be moody
    • Is original, comes up with new ideas
    • Is curious about many different things
    • Is ingenious, a deep thinker
  • Potential Factors:
    • Extraversion
    • Agreeableness
    • Conscientiousness
    • Neuroticism
    • Openness

Practical Application of Factor Analysis

  • Task: Evaluate a newly developed Big Five Questionnaire (short version).
  • Data: Test scores from an initial pilot study.
  • Test-reference group: 200 students from the Central University of Achterveld, The Netherlands (105 male, 95 female; M age = 22, SD = 3).

Key Questions

  • Are there potential outliers?
  • How many factors are present? (Run a factor analysis and inspect the scree plot; see the sketch after this list.)
  • Which items should be excluded based on explained variance or factor loadings?
  • Are factors correlated?
  • What is the general reliability of the test?
  • Which individual items should be excluded based on reliability?
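The course runs these analyses in JASP; purely as an illustration, here is a sketch of the same questions in Python, assuming the third-party factor_analyzer package and a hypothetical data file (big5_short_pilot.csv):

```python
import pandas as pd
import matplotlib.pyplot as plt
from factor_analyzer import FactorAnalyzer  # third-party package

items = pd.read_csv("big5_short_pilot.csv")  # hypothetical pilot data

# Scree plot: plot the eigenvalues and look for the "elbow".
fa = FactorAnalyzer(rotation=None)
fa.fit(items)
ev, _ = fa.get_eigenvalues()
plt.plot(range(1, len(ev) + 1), ev, marker="o")
plt.xlabel("Factor number")
plt.ylabel("Eigenvalue")
plt.title("Scree plot")
plt.show()

# Extract five factors with an oblique rotation, so that the factors
# are allowed to correlate ("Are factors correlated?").
fa5 = FactorAnalyzer(n_factors=5, rotation="oblimin")
fa5.fit(items)
print(fa5.loadings_)  # items with uniformly low loadings are exclusion candidates
print(fa5.phi_)       # factor intercorrelations under the oblique rotation
```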

Formulating Conclusions

  • Interpret the results.
  • Are the results congruent with expectations?
  • Are there alternative explanations?
  • Should there be modifications to the test, such as excluding items based on:
    • Explained variance by the factors
    • Factor loadings
    • Reliability analysis

Ten Item Personality Measure (TIPI)

  • A very brief Big Five Questionnaire.
  • Used when time is limited or personality is not the primary focus.
  • Has adequate convergence with widely used Big-Five measures, test-retest reliability, patterns of predicted external correlates, and convergence between self and observer ratings.
  • Reference: http://gosling.psy.utexas.edu/scales-weve-developed/ten-item-personality-measure-tipi/

Evaluating the TIPI

  • Dataset available for evaluation.
  • Item scores range from 1 to 7.
  • Dataset contains raw, unprocessed responses; no items have been reverse coded yet.
  • Steps:
    1. Download the TIPI scale .pdf from Canvas and read it.
    2. Download the TIPI dataset and open it in JASP.
    3. Explore the dataset for the range of item scores and outliers.
    4. Recode/transform the item scores (reverse-code the negatively keyed items; see the sketch after this list).
    5. Explore the validity and reliability of the scale using factor analysis and reliability analysis.
    6. Draw conclusions about whether it is a good test, or could be after modifications like exclusion/replacement of items.
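Step 4 can be done in JASP's compute-column dialog; as a sketch of the arithmetic, in Python with hypothetical column names (tipi1 ... tipi10): on a 1 to 7 scale a reverse-coded score is 8 minus the raw score, and on the standard TIPI key the even-numbered items (2, 4, 6, 8, 10) are the reverse-scored ones (confirm against the scale .pdf).

```python
import pandas as pd

tipi = pd.read_csv("tipi_raw.csv")  # hypothetical export of the raw dataset

# On a 1-7 response scale, reverse coding maps 1<->7, 2<->6, 3<->5:
# reversed = 8 - raw. On the standard TIPI key, items 2, 4, 6, 8, 10
# are the reverse-scored ones; check the scale .pdf to confirm.
reverse_items = ["tipi2", "tipi4", "tipi6", "tipi8", "tipi10"]  # assumed names
tipi[reverse_items] = 8 - tipi[reverse_items]

# Each Big Five dimension is then the mean of its two items,
# e.g., Extraversion = mean(item 1, reversed item 6).
tipi["extraversion"] = tipi[["tipi1", "tipi6"]].mean(axis=1)
```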

Quality Assessment of Multiple Choice Tests

  • Response format inherent to MC tests: Selected Response Format.
  • Items should reflect the construct of interest (i.e., course-related material).
  • Scoring: criterion-referenced (proportion or percent correct → specific grade; a minimal sketch follows the example item).
  • Example item:
    • A scale
      • A) consists of several items that measure the same construct
      • B) is an effect size measure
      • C) is a reliability measure
      • D) All of the above
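A sketch of criterion-referenced scoring: the grade depends only on the proportion correct, not on how other test-takers performed. The linear mapping onto a 1 to 10 grade below is an illustrative assumption, not a cut-off scheme prescribed by the course.

```python
def grade(n_correct: int, n_items: int) -> float:
    """Map proportion correct to a 1-10 grade (illustrative linear cut-offs)."""
    return round(1 + 9 * (n_correct / n_items), 1)

print(grade(32, 40))  # 0.80 proportion correct -> grade 8.2
```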

Indices of Test Quality

  • Cronbach’s alpha: the items should measure the same thing (here, course-related knowledge).
  • Difficulty index: the proportion of test-takers who answer the item correctly.
  • Discrimination index (e.g., item-total correlation): an indicator of how well the item separates high performers from low performers (see page 270 of Cohen et al.).
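A minimal sketch of all three indices, assuming a hypothetical item-score matrix (rows = students, columns = items, entries 1 = correct, 0 = incorrect; the file name is made up):

```python
import pandas as pd

X = pd.read_csv("exam_items.csv")  # hypothetical 0/1 item-score matrix

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total).
k = X.shape[1]
alpha = (k / (k - 1)) * (1 - X.var(ddof=1).sum() / X.sum(axis=1).var(ddof=1))

# Difficulty index: proportion of students answering each item correctly.
p = X.mean()

# Discrimination (corrected item-total correlation): correlate each item
# with the total score of the *remaining* items, so an item is not
# correlated with itself.
item_total = {
    col: X[col].corr(X.drop(columns=col).sum(axis=1)) for col in X.columns
}

print(alpha, p, item_total, sep="\n")
```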

Exercise

  • Open the test data (sample_examdata.sav).
  • Assess internal consistency.
  • Determine the optimal item-difficulty index for a four-option multiple-choice test (see the worked sketch after this list).
  • Based on the item-difficulty index, decide whether any items should be excluded.
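One common psychometric rule of thumb places the optimal difficulty halfway between the chance level (1 divided by the number of response options) and 1.0; treat the exact rule as an assumption to be checked against Cohen et al.

```python
def optimal_difficulty(n_options: int) -> float:
    """Rule-of-thumb optimum: halfway between chance level and 1.0."""
    chance = 1 / n_options
    return (1 + chance) / 2

print(optimal_difficulty(4))  # (1 + 0.25) / 2 = 0.625 for a four-option item
```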

Adapting Tests

  • Ability is estimated from responses to items that vary in difficulty.
  • Correct responses lead to presentation of more difficult items; incorrect responses lead to easier ones (see the sketch below).
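A minimal sketch of this adaptive rule (real computerized adaptive tests instead re-estimate ability with an IRT model after every response):

```python
def next_difficulty(current: float, correct: bool, step: float = 0.5) -> float:
    """Step the difficulty of the next item up after a correct response,
    down after an incorrect one."""
    return current + step if correct else current - step

d = 0.0                        # start at average difficulty
d = next_difficulty(d, True)   # correct   -> harder item (d = 0.5)
d = next_difficulty(d, False)  # incorrect -> easier item (d = 0.0)
```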

Item Response Theory (IRT) in Adapting Tests

  • The probability of a correct response depends on the person’s ability and the item parameters (difficulty and discrimination).
  • Basic idea:
    • Difficulty only: 1-parameter logistic (1PL) / Rasch model
    • Difficulty and discrimination: 2-parameter logistic (2PL) / Birnbaum model
  • Items differ in both difficulty and discrimination (see the sketch below).
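A sketch of the 2PL response function; fixing every item's discrimination at 1 reduces it to the 1PL/Rasch model:

```python
import numpy as np

def p_correct(theta: float, b: float, a: float = 1.0) -> float:
    """2PL model: P(correct) = 1 / (1 + exp(-a * (theta - b))).

    theta = person ability, b = item difficulty, a = item discrimination.
    With a fixed at 1 for all items this is the 1PL/Rasch model.
    """
    return 1 / (1 + np.exp(-a * (theta - b)))

# An able person meets an easy item: high probability of a correct response.
print(p_correct(theta=1.0, b=-1.0, a=1.0))  # ~0.88
# A highly discriminating item separates nearby abilities more sharply.
print(p_correct(theta=0.5, b=0.0, a=2.5))   # ~0.78
```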