Introduction to Clinical Research

Topic overview

  • Clinical researchers aim to uncover nomothetic principles of abnormal psychological functioning using the scientific method.

  • They seek general laws or principles that apply across individuals (nomothetic understanding) rather than focusing on a single case (idiographic understanding).

  • Key goals:

    • Describe relationships between variables.

    • Determine whether changes in one variable relate to changes in another.

    • Test predictions and hypotheses about abnormality across many individuals.

  • Important terms:

    • Variable: any characteristic or event that can vary across time, place, or person. Examples include childhood upsets, present life experiences, moods, social functioning, and responses to treatment.

    • Independent variable (IV): the manipulated factor in an experiment.

    • Dependent variable (DV): the factor observed and measured for change.

    • Sample vs population: the subset studied vs. the larger group of interest.

    • Internal validity: accuracy in attributing observed effects to the manipulated variable.

    • External validity: generalizability of findings to other people and settings.

  • Three main methods of investigation used by clinical researchers (each best suited to different questions):

    • Case study

    • Correlational method

    • Experimental method

  • Core reason for using multiple methods: to form and test hypotheses and to draw broad conclusions about why certain variables are related.

  • Key caveat: scientific progress often requires testing ideas rather than relying on conventional wisdom alone (e.g., lobotomies and certain psychoanalytic ideas were once widely accepted as effective or true).

  • Ethical and practical constraints shape how researchers study abnormal psychology (e.g., measurement of elusive constructs, cultural variability, rights of participants).

The case study

  • Definition: a detailed description of a person’s life and psychological problems, including history, present circumstances, symptoms, possible causes, and sometimes treatment.

  • Uses and value:

    • Source of new ideas about behavior.

    • May open pathways for discoveries and generate hypotheses.

    • May provide tentative support for a theory or show the value of new therapies.

    • Useful for studying unusual problems that do not occur frequently enough for large samples.

  • Famous example: Freud’s Little Hans case study (1909) describing a 4-year-old boy with a fear of horses; includes material from Hans’s father and Freud’s interpretations, leading to insights about repression, castration anxiety, and the family dynamics surrounding a child’s phobia.

  • Other influential case studies: Oliver Sacks’ The Man Who Mistook His Wife for a Hat (1985) illustrating various neurological cases and broad psychological processes.

  • Case studies can also play nomothetic roles beyond the individual case, influencing theories and therapies.

  • Limitations of case studies:

    • Bias: cases are reported by therapists who may have a stake in outcomes; selection bias in what gets reported.

    • Subjectivity: evidence is often interpretive and depends on the therapist’s or client’s reports.

    • Internal validity: case studies typically have low internal validity because they do not control for confounding variables.

    • External validity: limited generalizability to other individuals or populations.

  • Notable limitations illustrated by famous cases: a single case cannot establish a general rule, and extrapolating from one individual to many is risky.

  • Notable discussion points from the case-study literature:

    • Case studies can inform theory and therapy, but they do not establish causality.

    • They may contribute to understanding patterns across families or cultures (e.g., dissociative identity disorder cases like the “Three Faces of Eve”).

The correlational method

  • Definition: a research procedure used to determine the degree to which two or more variables vary together.

  • Key ideas:

    • Correlation describes a relationship or association, not causation.

    • If two variables vary together, they may be related; this can be positive, negative, or non-existent.

  • Describing a correlation:

    • Scatterplots: each point represents a participant’s scores on two variables.

    • Line of best fit: the straight line that minimizes distance between data points and the line, illustrating the strength and direction of the relationship.

    • Direction of correlation:

      • Positive correlation: as one variable increases, the other tends to increase (line slopes upward).

      • Negative correlation: as one variable increases, the other tends to decrease (line slopes downward).

      • No correlation: no consistent relationship (line is flat).

    • Magnitude (strength) of correlation: described by the correlation coefficient r \in [-1, 1]. The closer |r| is to 1, the stronger the relationship; the closer it is to 0, the weaker the relationship (see the code sketch at the end of this section).

    • Examples:

      • Positive: life stress and depression tend to rise together (a strong positive trend, with points falling close to an upward-sloping line of best fit).

      • Negative: depression and activity level tend to move in opposite directions (higher depression associated with fewer activities; negative slope).

      • No correlation: intelligence and depression may show near-zero correlation in some samples.

  • Magnitude examples from the transcript:

    • A strong positive correlation example had data points falling close to the line of best fit (e.g., r around +0.53 in some long-running studies); strength is reflected in how tightly points cluster around the line, not in how steep the line is.

    • A moderately positive correlation example had points scattered more loosely around the line (e.g., r around +0.28).

    • A strong negative correlation example and a near-zero correlation example were also described.

  • Sample representativeness and external validity:

    • Sample should be representative of the population to generalize findings (external validity).

    • If a sample is not representative (e.g., only children), generalizability to adults may be compromised.

  • Statistical significance in correlational findings:

    • Researchers use probability to determine whether a correlation is unlikely due to chance.

    • Convention: if there is less than a 5% probability that the observed correlation occurred by chance, the result is considered statistically significant.

    • Notation: significance is often expressed as p < 0.05.

  • Practical interpretation:

    • Larger magnitude and larger sample sizes increase confidence that a correlation reflects a real association in the population.

  • Limitations:

    • Correlation does not imply causation; third variables or bidirectional influences may explain the association.

    • The method is strong for identifying relationships but weak for establishing causal directions.
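
  • Illustrative sketch (added for clarity; not part of the original notes): the Python snippet below uses made-up life-stress and depression scores to show how the ideas above — direction, magnitude, line of best fit, and statistical significance — are computed in practice. It assumes NumPy and SciPy are available; the data and variable names are hypothetical.

```python
# Minimal sketch: Pearson's r, its p-value, and a least-squares line of best fit
# for a small, made-up data set (10 hypothetical participants).
import numpy as np
from scipy.stats import pearsonr

life_stress = np.array([2, 4, 5, 7, 8, 10, 12, 13, 15, 18])   # recent stressors (hypothetical)
depression = np.array([5, 9, 8, 14, 13, 17, 20, 19, 24, 28])  # symptom scores (hypothetical)

# r is bounded by [-1, 1]: the sign gives the direction of the relationship,
# and |r| gives the strength (how tightly points cluster around the line).
r, p_value = pearsonr(life_stress, depression)

# Line of best fit: the slope and intercept that minimize squared vertical distances.
slope, intercept = np.polyfit(life_stress, depression, deg=1)

print(f"r = {r:+.2f}, p = {p_value:.4f}")
print(f"line of best fit: depression ~= {slope:.2f} * stress + {intercept:.2f}")
print("statistically significant by the p < 0.05 convention?", p_value < 0.05)
```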

The experimental method

  • Definition: a research procedure in which a variable is manipulated and the manipulation’s effect on another variable is observed.

  • Core concepts:

    • Independent variable (IV): the manipulated factor believed to cause an effect.

    • Dependent variable (DV): the outcome measured to assess the effect of the IV.

    • Random assignment: participants are assigned to groups in a way that each participant has an equal chance of being placed in any group, helping to control preexisting differences (illustrated in the sketch at the end of this section).

    • Control group: a group not exposed to the IV, used for comparison to determine the IV’s effect.

    • Experimental group: the group exposed to the IV.

    • Mask design (blinding): procedures to prevent participants or researchers from knowing group assignments to reduce bias.

  • How experiments test causal questions:

    • Example question: Does a particular therapy relieve symptoms of a disorder?

    • Therapy is the IV; psychological improvement is the DV.

  • Statistical significance and clinical significance:

    • Statistical significance: when the observed difference between groups is unlikely to have occurred by chance, typically p < 0.05.

    • Clinical significance: whether the amount of improvement is meaningful in the participant’s life, beyond statistical metrics.

  • Random assignment and confounds:

    • Random assignment helps ensure comparable groups and reduces preexisting differences as potential confounds.

    • Confounds: other variables that might influence the DV besides the IV (e.g., office location, soothing music, participant motivation).

    • To guard against confounds, experiments typically use a control group, random assignment, and mask design.

  • Practical considerations:

    • In clinical research, there are ethical and practical limits on manipulations; designs may be less than ideal and incorporate quasi-experimental elements.

    • Statistical analyses assess whether observed group differences are likely due to the IV rather than chance.

  • Key distinctions:

    • Statistical significance vs. clinical significance: a result can be statistically significant but not necessarily meaningful in real-world life quality improvements.

  • Deterministic vs probabilistic conclusions:

    • When the effect of the IV cannot be fully separated from other potential causes, an experiment yields probabilistic rather than deterministic conclusions, and the causal information it provides is limited.
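
  • Illustrative sketch (added for clarity; not part of the original notes): the Python snippet below simulates a tiny therapy experiment — random assignment of hypothetical participants to an experimental and a control group, a t-test for statistical significance, and an effect size as one rough aid when thinking about clinical significance. NumPy and SciPy are assumed; all data are simulated.

```python
# Minimal sketch of the experimental logic: random assignment, group comparison,
# and the statistical-vs-clinical significance distinction. Data are simulated.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(seed=42)

# Randomly assign 40 hypothetical participant IDs to two groups of 20 (the IV).
ids = np.arange(40)
rng.shuffle(ids)
therapy_ids, control_ids = ids[:20], ids[20:]

# Simulated post-treatment symptom scores (the DV); lower = fewer symptoms.
# Illustrative assumption: therapy lowers mean symptoms from about 20 to about 15.
therapy_scores = rng.normal(loc=15, scale=5, size=therapy_ids.size)
control_scores = rng.normal(loc=20, scale=5, size=control_ids.size)

# Statistical significance: is the group difference unlikely to be due to chance?
t_stat, p_value = ttest_ind(therapy_scores, control_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, significant (p < 0.05)? {p_value < 0.05}")

# Clinical significance is a separate judgment about real-life improvement;
# a standardized effect size (Cohen's d) is only a rough numerical aid.
pooled_sd = np.sqrt((therapy_scores.var(ddof=1) + control_scores.var(ddof=1)) / 2)
cohens_d = (control_scores.mean() - therapy_scores.mean()) / pooled_sd
print(f"Cohen's d = {cohens_d:.2f}")
```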

Alternative (quasi-experimental) research designs

  • When pure experiments are not feasible or ethical, researchers use quasi-experimental designs that mix elements of experimental and correlational methods:

    • Matched design

    • Natural experiment

    • Analog experiment

    • Single-case experiment

    • Longitudinal study

    • Epidemiological study

  • Matched design:

    • Researchers compare groups that already exist in the world (e.g., abused vs. non-abused children) and match participants on key characteristics (age, sex, race, socioeconomic status, etc.) to reduce confounds.

  • Natural experiments:

    • Nature manipulates the IV; researchers observe effects. Examples include studying effects of floods, earthquakes, plane crashes, or fires.

    • Useful for studying unusual or unpredictable events, but generalization can be limited because events are not controlled.

    • Classic examples discussed include the 2004 Sumatra tsunami and other disasters such as the 2010 Haiti earthquake, the 2011 Japan earthquake and tsunami, Hurricane Sandy in 2012, and the California wildfires of 2018–2019.

  • Analog experiments:

    • Researchers induce abnormal-like behavior in laboratory participants (humans or animals) and study the outcomes to shed light on real-world conditions.

    • Often used to explore causes of human depression (learned helplessness paradigm).

    • Major caveat: laboratory phenomena may not perfectly map onto real-world disorders; external validity may be limited.

  • Single-case experiments (single-subject design):

    • Focus on one participant with systematic manipulation of the IV and repeated measurements.

    • Baseline (A) data are collected before manipulation; the IV is then introduced (B) and changes are observed; the IV may be withdrawn to return to baseline (A) and reintroduced (B) to test reversibility (ABAB design; see the sketch at the end of this section).

    • Example: using rewards to reduce disruptive behavior; behavior improves when rewards are given, reverts when rewards stop, and improves again when rewards resume.

  • Longitudinal studies:

    • Track the same individuals over an extended period to observe changes and development.

  • Epidemiological studies:

    • Examine the distribution and determinants of disorders in populations; focus on prevalence (the total number of cases in a population during a given period) and incidence (the number of new cases that emerge during that period) across groups and time.

  • Practical note on quasi-designs:

    • These designs are often necessary due to ethical and practical constraints but generally provide weaker internal validity than randomized controlled trials.
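
  • Illustrative sketch (added for clarity; not part of the original notes): the Python snippet below lays out made-up observations for an ABAB single-case experiment — disruptive-behavior counts across baseline (A) and reward (B) phases — and compares the phase means, which is the core logic of the design.

```python
# Minimal ABAB single-case sketch with hypothetical daily behavior counts.
import numpy as np

phases = {
    "A1 (baseline)": np.array([9, 10, 8, 11, 9]),
    "B1 (rewards)":  np.array([5, 4, 4, 3, 4]),
    "A2 (reversal)": np.array([8, 9, 10, 9, 8]),   # behavior reverts when rewards stop
    "B2 (rewards)":  np.array([4, 3, 4, 3, 3]),    # improves again when rewards resume
}

for label, counts in phases.items():
    print(f"{label:14s} mean = {counts.mean():.1f}  observations = {counts.tolist()}")

# Causal logic: if the behavior changes each time the IV (rewards) is introduced
# or withdrawn, coincidence becomes an increasingly implausible explanation.
```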

Protecting and evaluating human participants

  • Human participants require careful ethical protections.

  • Institutional Review Board (IRB):

    • A committee (often five or more members) at a research facility that reviews, approves, and monitors studies involving human participants.

    • Possesses the power to require changes, disapprove, or stop a study if participant safety or rights are jeopardized.

    • In the U.S., IRBs are empowered by federal agencies such as the Office for Human Research Protections and the Food and Drug Administration.

  • Rights of participants and informed consent:

    • Participation should be voluntary.

    • Participants must be adequately informed about what the study entails before enrollment.

  • Conflicts of interest and data handling in modern research:

    • Reports indicate that many pharmaceutical-funded studies show favorable outcomes, while independent studies show fewer favorable results; this underscores potential bias in research sponsored by industry.

    • The American Psychological Association requires data sharing for replication and reanalysis, though not all authors comply (reasons include data misplacement, ethical concerns, or ongoing studies).

  • Ethical challenges in social media and online data:

    • Online research participants may differ from in-person participants (WEIRD concerns and sampling biases).

    • Studies have raised concerns about consent when data are public or collected through social platforms without explicit participant consent (e.g., Facebook mood manipulation study in 2014).

    • Debates over informed consent in digital data use and privacy concerns have led to initiatives like the Social Data Initiative (SSRC) to develop ethical guidelines.

  • WEIRD participants and generalizability:

    • More than 70% of psychology studies use college students as participants.

    • WEIRD: Western, Educated, Industrialized, Rich, Democratic populations; findings from WEIRD samples may not generalize to non-WEIRD populations.

    • WEIRD participants tend to be more educated, more individualistic, more narcissistic, and more risk-taking, among other differences, which can limit generalizability.

  • Internet vs in-person sampling differences:

    • Online samples: ~57% female; more racially diverse; less educated; older; poorer; more geographically diverse.

    • In-person samples: ~71% female; less racially diverse; more educated; younger; wealthier; less geographically diverse.

  • Replication and publication bias:

    • Replication is essential for establishing accuracy and generalizability.

    • A sizeable share of replication studies are unsupportive or weaker than original findings (53% unsupportive vs 47% supportive in some analyses).

    • Replication studies remain relatively uncommon, and negative replication findings are less likely to be published, which can distort the scientific record.

  • Data sharing and transparency:

    • While journals encourage data sharing, actual sharing rates may be low due to ethical concerns, ongoing studies, or misplacing data.

  • Potential ethical concerns in modern research:

    • Direct manipulation of social media content without informed consent can raise ethical issues and potential harm to participants (e.g., mood manipulation studies).

    • There is ongoing debate about privacy, consent, and the use of public data in research.

Confounds and research design integrity

  • Confound: a variable other than the IV that may be influencing the DV, leading to spurious conclusions.

    • Examples: location of the therapy office, background music, participant expectations, or other situational factors.

  • Strategies to guard against confounds:

    • Include a control group.

    • Use random assignment.

    • Implement mask (blinding) designs where possible.

  • Animal research and ethics:

    • Animal studies can provide insights but raise ethical concerns; institutions use guidelines to protect animals and require committees (e.g., IACUC) to review proposals and ensure humane treatment.

    • Rough estimates suggest that between 12 million and 27 million animals are used in U.S. research annually, with about 0.5 million of these being primates and other mammals; animal research has contributed to life expectancy gains and medical advances but remains controversial.

  • The rights and welfare of animals in research:

    • Public health and regulatory bodies have established guidelines to minimize harm and ensure humane treatment; the number of animals used has declined compared to previous decades.

  • Special case: animals in therapy and interventions:

    • Some studies explore animal-assisted therapy or companionship as calming interventions, illustrating the broad range of potential treatments.

Ethical and methodological synthesis

  • Research methods function best as a team: each approach has strengths and weaknesses.

  • Converging evidence from multiple methods strengthens conclusions; conflicting results indicate areas where knowledge is still limited.

  • Critical evaluation is essential before accepting findings: consider

    • Whether variables were properly controlled

    • Whether the sample is representative and large enough

    • Whether bias was minimized

    • Whether conclusions are justified by the data

    • Whether ethical standards were met

  • Key takeaways:

    • Case studies provide rich, detailed information but limited generalizability and internal validity.

    • Correlational studies identify associations and have strong external validity but cannot establish causation alone.

    • Experimental studies can establish causation and have high internal validity when well-controlled, but may face ethical and practical limits and potentially limited external validity.

    • Quasi-experimental designs are useful when true experiments are impractical, though they typically offer weaker causal inferences.

    • Protecting human participants is paramount, with IRBs and informed consent as foundational elements.

    • Replication, data transparency, and careful consideration of WEIRD sampling are critical for robust psychology.

Notable quotes and historical context (illustrative examples)

  • Challenging long-held misperceptions through research has driven scientific progress (e.g., mistaken beliefs, from Aristotle onward, about gender, communications technology, cloning, etc.).

  • The field emphasizes that beliefs require testing in representative samples to avoid harm from incorrect theories (e.g., lobotomy as a cure for schizophrenia).

  • General admonition: "If we knew what it was we were doing, it would not be called research" (paraphrase of Einstein’s sentiment about scientific inquiry).

Summary comparisons and practical implications

  • Case studies vs correlational vs experimental:

    • Case studies: detailed, idea-generating, therapy-refining; limited internal/external validity.

    • Correlational studies: identify relationships and generalize to populations; cannot establish causation; rely on representative samples and statistical significance to assess real-world relevance.

    • Experimental studies: establish causation with random assignment and control of confounds; robust for understanding mechanisms but often constrained by ethics and practicality.

  • Ethical and methodological integration:

    • In practice, multiple methods are used together to build a coherent understanding of abnormal functioning.

    • If all methods converge on similar conclusions, confidence increases; if results diverge, further investigation is needed.

  • Pragmatic takeaway for exam preparation:

    • Be able to define each method, list its strengths and limitations, and identify typical research questions suited to that method.

    • Recognize the role of control groups, random assignment, and masking in experiments.

    • Understand the difference between statistical significance (p-value) and clinical significance (meaningful real-world impact).

    • Be aware of WEIRD sampling issues, online vs in-person data collection differences, and replication concerns in modern psychology.

    • Remember the ethical scaffolding: informed consent, IRBs, and responsible data handling, including data sharing and conflict-of-interest awareness.

Key formulas and numeric references to memorize

  • Correlation coefficient domain:
    r \in [-1, 1]

  • Statistical significance threshold commonly used:
    p < 0.05

  • Relationship descriptors (direction and magnitude) relate to the sign and absolute value of r (e.g., strong positive: r close to +1; weak: |r| close to 0).

  • Animal usage estimate and impact (for context):

    • Animals used in U.S. research annually: roughly 12 to 27 million

    • Primates and other mammals: roughly 0.5 million

  • Institutional review and data sharing acronyms:

    • IRB: Institutional Review Board

    • IACUC: Institutional Animal Care and Use Committee

    • SSRC: Social Science Research Council

  • Notable qualitative statements to recall:

    • Major disasters have served as natural experiments revealing patterns of psychological reactions (e.g., acute stress, PTSD symptoms after disasters).

    • The replication landscape shows a shift toward more replication studies but ongoing concerns about publishing negative or contradictory results.

    • Social media research raises unique ethical challenges (informed consent, data privacy, manipulation concerns).