PSY3217 – Cultural Issues in Psychology: Cross-Cultural Research Methods

Lecture Outline

  • Today’s focus: research methodology in cultural psychology, especially how to design, carry out, and interpret cross-cultural work.
    • Types of research methods
    • Cross-cultural comparison studies
      • Typology of comparison
      • Designing the study ("unpackaging" culture)
      • Recognising and preventing bias
        • Conceptual
        • Method / measurement
        • Interpretational
        • Language & translation

Review – Core Goals of Cross-Cultural Psychology

  • Identify universals vs culture-specific aspects of human behaviour
  • Broaden samples beyond WEIRD (Western, Educated, Industrialised, Rich, Democratic) populations
  • Clarify how culture shapes behaviour and how behaviour maintains, reproduces, or changes culture
    • Bidirectional influence → dynamic systems view

Classic Illustration – Hudson (1960)

  • Participants from several African sub-cultural groups were shown a two-dimensional drawing (elephant, antelope, hunter, horizon line).
    • Questions: “Which animal is nearer?”, “What is the man doing?”
    • Some groups interpreted the picture in a flat, non-perspectival way (ignoring depth cues) → judged the elephant to be nearer or smaller, the hunter to be aiming at the elephant, etc.
    • Demonstrates that perception of pictorial depth is learned and culturally variable, not a strict universal.

Typology of Cross-Cultural Research

  • Method-validation studies
    • Purpose: confirm that an existing scale/measure assesses the same psychological construct in another culture.
    • Typical question: “Does item X load on the same factor across cultures?”
  • Indigenous cultural studies
    • Explore phenomenon within a single cultural context, using locally relevant concepts.
    • Avoids imposing outsider categories.
  • Cross-cultural comparisons
    • Compare two or more cultural groups on the same phenomenon.
    • Central question: “Are mean levels / structure of construct Y different across cultures?”

Cross-Cultural Comparisons – Four Design Axes

  1. Exploratory vs Hypothesis-testing
    • Exploratory: open search for any differences.
    • Hypothesis-testing: theory-driven predictions (e.g., collectivism → higher interdependent self-construal).
  2. Contextual factors
    • Must ask why a difference occurs. Is it genuinely cultural or due to a confound (e.g., SES, education, urbanicity)?
  3. Structure- vs Level-oriented
    • Structure: “Is the factorial configuration the same?” (qualitative form)
    • Level: “Is the mean score higher/lower?” (quantitative magnitude)
  4. Unit of analysis
    • Individual-level (within each culture)
    • Ecological-level (aggregated culture/nation means, norms, GDP, etc.)

Designing Comparative Studies & “Unpackaging” Culture

  • Research question (RQ) must link a cultural variable to a specific psychological variable.
    • Move beyond mere nationality labels → specify mediators/moderators (e.g., values, norms, institutions).
  • “Unpackaging” = identifying concrete mechanisms behind cultural differences instead of reifying “culture”.
  • Toolbox for unpackaging:
    • Experiments (e.g., manipulate independence vs interdependence primes)
    • Priming studies (cultural frame switching in biculturals)
    • Behavioural tasks, physiological measures, naturalistic observation
    • Context variables (parenting style, educational system, relational mobility, pathogen prevalence, etc.)

Illustrative Findings Mentioned

  • “White Americans have higher average IQ compared with African-Americans.”
    • Raises questions of measurement equivalence, socio-economic confounds, historical oppression, stereotype threat.
  • “Children in Colombia develop theory of mind later than children in Australia.”
    • Could reflect linguistic structure, schooling, parental mental-state talk.
  • “When recalling happiness, Japanese students cite socially interdependent situations; North Americans focus on personal achievement.”
    • Example of culturally shaped emotion concept & memory.

Bias & Equivalence – Overarching Framework

  • Conceptual bias: Is the theory/construct meaningful in all cultures tested?
  • Method bias
    • Sampling, linguistic, procedural, administration differences.
  • Measurement bias
    • Psychometric inequivalence, lack of structural equivalence.
  • Interpretational bias
    • Over-generalisation, cultural attribution errors.

Sampling Bias & Representativeness

  • Culture is often treated as the independent variable (Culture → Outcome), yet samples may differ on age, education, SES, rural/urban status, and internet literacy.
  • Remedies
    • Match samples on potential confounds.
    • Use multi-level modelling to partition individual vs cultural variance.
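The multi-level idea can be sketched without a modelling library: partition the total variance of a simulated dataset into within-culture and between-culture components and compute the intraclass correlation (ICC), i.e., the share of variance lying between cultures. A minimal sketch with invented numbers (20 cultures, 50 respondents each):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated scores: individual differences dominate (SD = 1.0),
# with a smaller cultural component (between-culture SD = 0.5).
n_groups, n_per = 20, 50
culture_effect = rng.normal(0, 0.5, n_groups)
scores = culture_effect[:, None] + rng.normal(0, 1.0, (n_groups, n_per))

# One-way variance components (method-of-moments estimates)
group_means = scores.mean(axis=1)
grand_mean = scores.mean()
ms_between = n_per * ((group_means - grand_mean) ** 2).sum() / (n_groups - 1)
ms_within = ((scores - group_means[:, None]) ** 2).sum() / (n_groups * (n_per - 1))
var_between = max((ms_between - ms_within) / n_per, 0.0)

# Intraclass correlation: between-culture variance / total variance.
# The generative truth here is 0.5**2 / (0.5**2 + 1.0**2) = 0.20.
icc = var_between / (var_between + ms_within)
print(f"ICC ≈ {icc:.2f}")
```

A low ICC is a reminder that most variance in such designs is usually individual, not cultural, which is exactly what a multi-level model quantifies formally.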

Linguistic Bias & Translation Issues

  • Lexical gaps: e.g., many Indigenous Arctic languages have numerous words for “snow”; English does not.
  • If a term has no direct counterpart, conceptual equivalence is threatened.
  • Back-translation procedure
    1. Translate original English instrument → target language.
    2. Independent bilingual translates it back to English.
    3. Compare versions, resolve discrepancies.
  • Humorous real-world mistranslations (illustrate stakes):
    • “Ladies, please leave your clothes here and spend the afternoon having a good time.” (Laundry, Rome)
    • “Drop your trousers here for best results.” (Bangkok dry cleaners)
    • “Specialist in women and other diseases.” (Italian doctor)
    • “We take your bags and send them in all directions.” (Dutch airline)
    • “The manager has personally passed all the water served here.” (Acapulco restaurant)

Measurement Bias & Structural Equivalence

  • Psychometric operationalisation: same indicators must relate to latent construct similarly across groups.
    • Techniques: Confirmatory Factor Analysis (CFA) for configural, metric, scalar invariance.
  • Structural equivalence: identical underlying factor structure.
    • Example: González-González et al. (2015) intercultural empathy scale validated for cross-cultural use.
  • Without invariance, comparing raw means is meaningless.
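The last point can be illustrated with a small simulation (all parameters invented): two cultures share the same latent trait distribution and identical loadings (metric invariance holds), but one culture's items have shifted intercepts (scalar invariance is violated), so raw scale means differ even though the "true" latent means are equal.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5000

# Identical latent construct distribution in both cultures
latent_a = rng.normal(0, 1, n)
latent_b = rng.normal(0, 1, n)

loadings = np.array([0.8, 0.7, 0.6, 0.9])        # same loadings in both groups
intercepts_a = np.zeros(4)
intercepts_b = np.array([0.4, 0.3, 0.5, 0.2])    # shifted intercepts in culture B

items_a = intercepts_a + latent_a[:, None] * loadings + rng.normal(0, 0.5, (n, 4))
items_b = intercepts_b + latent_b[:, None] * loadings + rng.normal(0, 0.5, (n, 4))

# Raw sum-score means differ purely because of item intercepts,
# not because the cultures differ on the underlying construct.
print(items_a.sum(axis=1).mean(), items_b.sum(axis=1).mean())
```

A researcher comparing the raw means here would wrongly conclude culture B is "higher" on the construct, which is why scalar invariance must be established (e.g., via multi-group CFA) before mean comparisons.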

Response Biases

  1. Extreme responding (tendency to choose endpoints)
    • Cultural scripts: collectivist norms (“The nail that sticks up gets pounded down”) may suppress extremes.
  2. Acquiescence bias (“yea-saying”)
  3. Socially desirable responding
    • a) Self-deceptive enhancement (“I’m not racist!”) – unconscious.
    • b) Impression management (“You mustn’t think I’m racist!”) – strategic.
  • Remedies: balanced keying, forced-choice formats, anonymity assurances, behavioural or implicit measures.
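A statistical complement to the design remedies above is within-person ("ipsative") standardisation, which removes each respondent's overall scale-use level and spread before comparing item profiles across cultures. A minimal numpy sketch with simulated 5-point Likert data (all numbers invented): culture A uses the full scale (extreme style), culture B clusters around the midpoint, but both share the same relative item pattern.

```python
import numpy as np

rng = np.random.default_rng(1)

true_pattern = np.array([1.0, 0.5, 0.0, -0.5, -1.0])   # item profile shared by both cultures
a = np.clip(np.round(3 + 1.8 * true_pattern + rng.normal(0, 0.3, (200, 5))), 1, 5)
b = np.clip(np.round(3 + 0.6 * true_pattern + rng.normal(0, 0.3, (200, 5))), 1, 5)

def ipsatize(x):
    """Standardise each respondent's answers within person (mean 0, SD 1)."""
    centred = x - x.mean(axis=1, keepdims=True)
    sd = x.std(axis=1, keepdims=True)
    sd[sd == 0] = 1.0   # guard: a respondent who gave identical answers throughout
    return centred / sd

# Raw spreads differ (response style); ipsatized item profiles agree closely.
print(a.std(), b.std())
print(np.corrcoef(ipsatize(a).mean(axis=0), ipsatize(b).mean(axis=0))[0, 1])
```

The trade-off is that ipsatized scores discard level information, so this correction suits structure-oriented comparisons rather than level-oriented ones.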

Interpretational Bias & Cultural Attribution Fallacies

  • Statistical significance p < .05 ≠ practical or theoretical significance.
    • A tiny mean gap may be statistically significant with large N but irrelevant in real life.
  • Visualisations of overlapping distributions (e.g., male vs female score distributions on an example slide) remind us that effect sizes matter.
  • Researchers must check their own cultural “blinkers” – the tendency to explain all differences as cultural even when they are economic or historical.
  • Cultural attribution fallacy: Ascribing causal power to culture per se without measuring mediators.
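The significance-vs-effect-size point is easy to demonstrate with simulated data (all numbers invented): with 100,000 respondents per "culture", a true difference of 0.05 SD is wildly significant yet negligible by conventional effect-size benchmarks.

```python
import math
import numpy as np

rng = np.random.default_rng(7)
n = 100_000                                  # very large samples per group

a = rng.normal(0.00, 1, n)
b = rng.normal(0.05, 1, n)                   # true gap: 0.05 SD (trivial)

diff = b.mean() - a.mean()
pooled_sd = math.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
d = diff / pooled_sd                         # Cohen's d

se = pooled_sd * math.sqrt(2 / n)
z = diff / se
p = math.erfc(abs(z) / math.sqrt(2))         # two-sided p, normal approximation

print(f"p = {p:.2g}, Cohen's d = {d:.3f}")   # tiny p, yet d far below "small" (0.2)
```

Reporting d (or η²) alongside p exposes exactly this mismatch, which is why the take-aways below insist on effect sizes and confidence intervals.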

Practical Take-aways for Researchers

  • Pre-register hypotheses & specify the cultural mechanism.
  • Use multi-method convergence (survey + behavioural + qualitative).
  • Check measurement invariance before mean comparisons.
  • Ensure rigorous translation/back-translation and pilot testing.
  • Report effect sizes (Cohen’s d, η²), confidence intervals, and contextual variables.
  • Interpret cautiously; avoid stereotyping or essentialising cultures.