UQ Extend Module 3


Ethics in Psychological Research

Definition:

  • Ethics = guidelines/principles for moral & just treatment of others.

  • In research: focus on how researchers treat participants, run studies, and conduct themselves.

  • Based on the Universal Declaration of Ethical Principles for Psychologists.

Four Guiding Ethical Principles

Respect for the Dignity of Persons and Peoples

  • Value, acknowledge, and treat all people equally regardless of origin, beliefs, or identity.

  • Special care for vulnerable groups (e.g., children, minorities).

  • Ensure equal opportunity to be seen, heard, acknowledged.

  • Protect anonymity and confidentiality.

  • Example: Evolving gender data collection → beyond male/female binary to non-binary & open responses to show respect and inclusion.

Competent Caring for the Well-Being of Persons and Peoples

  • Aim for research findings to enhance well-being.

  • Conduct research to benefit participants or at least cause no harm.

  • Plan for and mitigate possible harm.

  • “Competent” caring → researchers must have proper training for tools/tests used.

  • Case Study – Tuskegee Syphilis Study (1932–1972):

    • 600 African-American men (399 with syphilis) misled, denied treatment (penicillin available from 1947).

    • No informed consent → participants not told study details.

    • Ethics committees now prevent such abuse (require informed consent, minimal/no harm).

Integrity

  • Conduct research with objectivity and honesty, free from self-interest or outside influence.

  • Avoid exploitation and bias in reporting.

  • Example – Grossarth-Maticek Research:

    • Linked personality types to cancer/heart disease.

    • Allegations of data falsification (e.g., reclassifying participants, duplicating data).

    • Funded by tobacco companies → possible conflict of interest.

    • Findings not replicated → likely due to falsified data.

Responsibility to Society

  • Psychology should contribute to understanding the human condition and improving well-being.

  • Researchers must:

    • Understand and follow ethical conduct.

    • Reflect on and update research practices to stay ethical.

Stanford Prison Experiment (1971)

  • Conducted at Stanford University by Philip Zimbardo.

  • Setup: Mock prison in psychology building; participants randomly assigned as prisoners or guards.

  • Payment: $15/day.

  • Role of Zimbardo: Prison superintendent.

  • Informed Consent Issues:

    • Participants given vague info, not told specifics (e.g., surprise home arrest, strip search).

    • Guards encouraged to be aggressive (no physical harm) to instill fear.

  • Ethical Concerns:

    • Prisoners who wanted to leave were told they could not.

    • Planned 2 weeks → ended after 6 days when an outsider raised concerns.

    • Zimbardo admitted losing objectivity due to his role.

    • Formal debriefing not until years later.

Variables in Research

Key Variables

  1. Independent Variable (IV):

    • Manipulated by the experimenter (e.g., drug type, teaching method).

    • Some IVs cannot be directly manipulated and are instead selected (e.g., age groups).

  2. Dependent Variable (DV):

    • Measured outcome; depends on IV.

    • Example: Studying mental ability vs. age → IV = age, DV = IQ score.
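
A minimal Python sketch (with invented numbers) of how IV and DV data for the age-and-IQ example might be organised: the IV is the grouping column, the DV is the measured outcome.

```python
# Illustrative (invented) data: IV = age group, DV = IQ score.
# Each record pairs one level of the IV with one measured DV value.
data = [
    {"age_group": "20-29", "iq_score": 104},
    {"age_group": "20-29", "iq_score": 98},
    {"age_group": "60-69", "iq_score": 101},
    {"age_group": "60-69", "iq_score": 95},
]

# The DV is summarised separately for each level of the IV.
for level in ("20-29", "60-69"):
    scores = [d["iq_score"] for d in data if d["age_group"] == level]
    print(level, "mean IQ:", sum(scores) / len(scores))
```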

Unwanted (Extraneous) Variables

  • Variables that contaminate results and obscure the relationship between IV and DV.

  1. Situational Variables:

    • Environmental factors (temperature, noise, lighting, time of day).

    • Can affect participants differently and unpredictably.

  2. Individual Differences:

    • Natural variations between people (height, weight, motivation, anxiety).

    • Combine with situational variables to increase variability.

  3. Measurement Error:

    • Inconsistencies in recording data (e.g., misreading ruler, stopwatch error).

    • Linked to experimenter’s attention, training, or bias.

  • Effect:

    • Random variability can weaken or completely hide real relationships.

    • Example: In a teaching-method study, removing unwanted variables makes the difference between methods clear; with them present, results are less consistent (see the simulation sketch below).
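
A rough simulation sketch of the teaching-method example, using invented means (70 vs. 75) and treating all unwanted variables as a single noise term: with little noise the real 5-point difference shows up consistently; with a lot of noise the observed difference bounces around and can be hidden.

```python
import random

random.seed(1)

def observed_difference(noise_sd):
    # True means: Method A = 70, Method B = 75 (a real 5-point difference).
    # noise_sd stands in for situational variables, individual differences,
    # and measurement error combined.
    group_a = [random.gauss(70, noise_sd) for _ in range(20)]
    group_b = [random.gauss(75, noise_sd) for _ in range(20)]
    return sum(group_b) / 20 - sum(group_a) / 20

for sd in (2, 15):
    diffs = [round(observed_difference(sd), 1) for _ in range(5)]
    print(f"noise sd={sd:>2}: observed differences over 5 replications: {diffs}")
```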

Confounding Variables

  • Definition: Variables that vary systematically with IV, providing an alternative explanation for results → prevents establishing causation.

  • Example:

    • Testing two drugs on rats: all Drug A rats tested in the morning, all Drug B rats in the afternoon.

    • Time of day becomes a confounding variable.

Controlling Confounding Variables:

  1. Keep constant: Test all groups under same conditions (e.g., all in morning).

  2. Counterbalance: Spread variations evenly (e.g., half of each group in morning, half in afternoon).
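
A small Python sketch (hypothetical rats and labels) of the two control strategies for the drug example, making sure time of day does not vary systematically with the drug.

```python
import random

random.seed(0)
rats = [f"rat_{i:02d}" for i in range(16)]
random.shuffle(rats)
drug_a, drug_b = rats[:8], rats[8:]

# Option 1 - keep constant: every rat, regardless of drug, is tested in the morning.
keep_constant = {rat: ("A" if rat in drug_a else "B", "morning") for rat in rats}

# Option 2 - counterbalance: within each drug group, half the rats are tested
# in the morning and half in the afternoon.
counterbalanced = {}
for drug, group in (("A", drug_a), ("B", drug_b)):
    for i, rat in enumerate(group):
        time_of_day = "morning" if i < len(group) // 2 else "afternoon"
        counterbalanced[rat] = (drug, time_of_day)

print(counterbalanced)
```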

True Experimental Designs — Key Features

  1. At least two levels of the Independent Variable (IV)

    • One level can be absence of treatment (control group/placebo).

    • Other = presence of treatment (experimental group).

  2. Random Assignment

    • Equal chance of being in any group (coin flip, random number table, etc.).

    • Purpose: Distribute extraneous factors (motivation, ability, age, health) evenly so they don't vary systematically with the IV (see the sketch after this list).

  3. Control for Confounding Variables

    • Prevent alternative explanations for observed differences between conditions.
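
A minimal sketch of random assignment in Python (hypothetical participant IDs): shuffling and splitting gives everyone an equal chance of landing in either group, which is what spreads extraneous factors evenly.

```python
import random

participants = [f"P{i:02d}" for i in range(1, 21)]  # 20 hypothetical people
random.shuffle(participants)

control_group = participants[:10]       # absence of treatment (control/placebo)
experimental_group = participants[10:]  # presence of treatment

print("Control:     ", control_group)
print("Experimental:", experimental_group)
```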

Independent Groups Design (Between-Subjects)

  • Structure:

    • Two or more groups, each experiencing a different level of the IV.

    • Participants randomly assigned to one group.

    • Experimental group receives IV; control group does not.

  • Example — Tickling Experiment:

    • IV: Who does the tickling (robot vs. self).

    • DV: Ticklishness rating (1–10).

    • 32 participants → random assignment into 2 groups of 16.

    • Robot group tickled by robot; self group tickled themselves using robot arm.

    • Results: Robot tickle group generally rated higher ticklishness, but with some overlap (illustrated in the sketch after this list).

  • Drawbacks:

    • High variability from individual differences (e.g., natural differences in ticklishness).

    • Requires more participants than repeated measures.
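
Hypothetical between-subjects data for the tickling example (ratings invented to match the described pattern): two separate groups of 16, one rating per person, group means compared.

```python
# IV: who does the tickling (robot vs. self); DV: ticklishness rating, 1-10.
robot_group = [8, 7, 9, 6, 8, 7, 10, 9, 6, 8, 7, 9, 8, 5, 9, 7]
self_group  = [4, 5, 3, 6, 4, 5, 2, 6, 5, 4, 7, 3, 5, 4, 6, 5]

mean_robot = sum(robot_group) / len(robot_group)
mean_self = sum(self_group) / len(self_group)

# The means differ, but the ranges overlap: individual differences in
# ticklishness add variability on top of any effect of the IV.
print(f"Robot-tickle mean: {mean_robot:.1f}, range {min(robot_group)}-{max(robot_group)}")
print(f"Self-tickle mean:  {mean_self:.1f}, range {min(self_group)}-{max(self_group)}")
```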

Repeated Measures Design (Within-Subjects)

  • Structure:

    • Same participants tested in all conditions of the IV.

    • Fewer participants needed (e.g., 16 instead of 32 in tickling example).

    • Reduces random variability due to individual differences.

  • Order Effects:

    • Experiencing one condition may influence responses in the next.

    • Controlled via counterbalancing:

      • Half participants → Condition A then B.

      • Half participants → Condition B then A.

  • Example — Tickling Experiment:

    • All 16 participants experienced both robot and self tickling.

    • Order counterbalanced to control for order as a confound.

    • Results showed less spread in data → reduced variability from individual differences.
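
A sketch of counterbalancing order in the within-subjects version (hypothetical participant IDs): the same 16 people do both conditions, with half assigned to each order.

```python
import random

random.seed(2)
participants = [f"P{i:02d}" for i in range(1, 17)]
random.shuffle(participants)

# First half: robot tickle then self tickle; second half: the reverse.
orders = {}
for i, person in enumerate(participants):
    if i < 8:
        orders[person] = ("robot tickle", "self tickle")
    else:
        orders[person] = ("self tickle", "robot tickle")

for person in sorted(orders):
    print(person, "->", " then ".join(orders[person]))
```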

When Repeated Measures is NOT Suitable

  • If one condition permanently changes participant responses (e.g., learning effects).

  • Example: Comparing teaching methods for statistics → learning from first method influences performance in second method.

Summary Table

| Feature | Independent Groups (Between) | Repeated Measures (Within) |
| --- | --- | --- |
| Participants per condition | Different people in each group | Same people in all conditions |
| Randomization purpose | Equalize groups | Control order effects |
| Main advantage | No carryover/order effects | Reduces variability from individual differences; fewer participants |
| Main disadvantage | More participants needed; variability from individual differences | Risk of order/carryover effects |
| Key control method | Random assignment | Counterbalancing |

Observational Designs

Two types covered:

  1. Correlational Design

  2. Quasi-Experimental Design

Why not use true randomized experiments?

  • Sometimes impractical or unethical (e.g., cannot assign people to start smoking).

Example: Smoking & Health

Hypothesis: Smoking is bad for health.

Operational definition of health: Number of doctor visits per year.

Prediction: More smoking → more doctor visits.

Correlational Study

  • Method:

    • Observe people who already smoke/don’t smoke.

    • Measure:

      • Cigarettes smoked/day (IV)

      • Doctor visits/year (DV)

    • Example: Ask 200 people about both variables.

    • Create scatter plot: each point = one person’s data.

  • Observation: Positive relationship — heavier smokers see doctors more often.

  • Key point: No variables manipulated → just observation.

  • Limitation: Cannot conclude causation.
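
A toy correlational computation (invented numbers) for the smoking example: both variables are simply measured and their relationship summarised with Pearson's r (this uses statistics.correlation, available in Python 3.10+).

```python
from statistics import correlation  # Python 3.10+

cigarettes_per_day = [0, 0, 5, 10, 15, 20, 25, 30]
doctor_visits_per_year = [1, 2, 2, 3, 4, 5, 6, 8]

r = correlation(cigarettes_per_day, doctor_visits_per_year)
print(f"Pearson r = {r:.2f}")  # positive r: heavier smokers tend to visit more

# A positive r describes the pattern in the scatter plot; it does not show
# that smoking causes the extra doctor visits.
```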

Quasi-Experimental Design

  • Similar to true experiment, but no random assignment.

  • Method:

    • Form groups based on pre-existing characteristics (e.g., smoking habits).

    • Example:

      • Light smokers: 0–10 cigarettes/day

      • Heavy smokers: 20–30 cigarettes/day

      • DV = doctor visits/year

    • Plot results: heavy smokers have more doctor visits.

  • Key difference from true experiment: Grouping based on existing traits, not random assignment.

  • Other uses:

    • Age (e.g., young vs. older adults)

    • Health conditions (e.g., high blood pressure vs. normal)

    • Income levels (e.g., wealthy vs. middle class)
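
A sketch of the quasi-experimental grouping (invented data): people are placed into light- and heavy-smoker groups by their existing habits, not by random assignment, and mean doctor visits are compared.

```python
people = [
    {"cigs_per_day": 2,  "visits": 1},
    {"cigs_per_day": 8,  "visits": 2},
    {"cigs_per_day": 5,  "visits": 3},
    {"cigs_per_day": 22, "visits": 5},
    {"cigs_per_day": 27, "visits": 7},
    {"cigs_per_day": 30, "visits": 6},
]

light = [p["visits"] for p in people if p["cigs_per_day"] <= 10]        # 0-10/day
heavy = [p["visits"] for p in people if 20 <= p["cigs_per_day"] <= 30]  # 20-30/day

print("Light smokers (0-10/day), mean visits: ", sum(light) / len(light))
print("Heavy smokers (20-30/day), mean visits:", sum(heavy) / len(heavy))
```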

Causation?

  • From correlational or quasi-experimental designs → No causal conclusion possible.

  • Unknown third variables could explain results (e.g., drinking, diet).

Conditions for Causal Inference

(Only true randomized experiments can fully meet all three)

  1. Relationship: Regular & reliable changes in one variable associated with changes in the other.

  2. Time Order: Cause precedes effect.

  3. No Other Explanations: Alternative causes ruled out (via randomization).

Summary

  • Observational Designs = Correlational studies + Quasi-experiments.

  • Correlational study: Measures 2+ variables in same group, examines relationship.

  • Quasi-experiment: Like true experiment but without random assignment.

  • Limitation: Lack of randomization → cannot infer causation.

  • Only true experiments (with randomization) allow causal claims.