Chapter 1 Notes: Research Strategies in Psychology (Overview of the Scientific Method, Descriptive Methods, Correlation, Experimental Design, Ethics, Integrity, and Study Tips)

Scientific Method in Psychological Science

  • Psychology relies on the scientific method because human intuition and common sense are often biased or incorrect for big questions about behavior and brains.
  • The scientific method is a self-correcting process using observations, analysis, and peer review to evaluate ideas with minimal bias.
  • Key idea: predictions supported by data lead to confidence in a theory; if not supported, theories are revised or rejected.
  • Six core terms to know well:
    • Theory: a well-supported, integrated set of principles that explains observations and predicts behaviors or events. It is backed by substantial evidence and is not just a casual guess. Often only a few theories reach this level.
    • Hypothesis: a testable prediction derived from a theory about what will happen under specific conditions.
    • Operational definitions: precise, concrete statements of how variables will be measured or manipulated in a study to avoid ambiguity.
    • Replication: repeating a study to see if the same results are obtained, establishing reliability.
    • Pre-registration: publicly stating hypotheses and analysis plans before collecting data to prevent hindsight bias and data dredging.
    • Meta-analysis: combining data from multiple studies to obtain a more reliable estimate of a phenomenon; considered the gold standard for evidence.
  • The cycle: theory → generate hypotheses → design/perform experiments → collect data → analyze → revise theory → repeat.
  • Practical implication: theories may be revised many times; replication and meta-analysis improve confidence in findings.

Human biases and misinformation

  • Humans are prone to intuition-based errors that mislead our understanding:
    • Hindsight bias: after learning an outcome, we think we could have predicted it; the
      "I knew it all along" effect.
    • Overconfidence: people think they know more than they do; those less expert in a domain often show greater overconfidence.
    • Perceiving patterns in randomness: brains seek meaningful patterns even in random data (e.g., cloud shapes that resemble animals).
  • Misinformation distorts decision-making, spreads fear, and can cause real harm, including rationalizing wars and fostering conspiracy theories.
  • Misinformation tends to spread faster than real news on social media and is reinforced by:
    • repetition familiarity effects
    • trust in friends and social circles
    • vivid, memorable examples rather than boring data
    • confirmation of preexisting beliefs
    • echo chambers that amplify polarization
  • Gallup data (1992–2022) show a prevalent misperception: Americans often believe crime is rising despite a long-term decline in crime rates; this illustrates how misinformation can distort perceptions of reality.
  • Combatting misinformation with psychological science:
    • Understanding why people listen to misinformation helps tailor corrective strategies.
    • Basic steps to counter misinformation (Lewandowsky's four-step approach):
    1. Fact: present a clear, simple assertion (e.g., “In several studies, childhood vaccinations were found to be safe and effective.”)
    2. Acknowledge the myth: recognize the concerns that lead to the misinformation (e.g., “Some people worry that MMR vaccine increases autism risk.”)
    3. Acknowledge the fallacy: explain why the myth is incorrect and point to evidence (e.g., a Lancet study was fraudulent and later retracted).
    4. Restate the fact: reaffirm the truth (e.g., “Dozens of studies find no link between vaccines and autism.”)
  • Prebunking (preemptive inoculation): activates the “psychological immune system” and builds cognitive antibodies to reduce susceptibility to misinformation.
    • Analogy: prebunking works like a vaccine; it helps people resist false claims before they are encountered.
    • Poison parasite technique: prepare people to counter specific misinformation by embedding counter-arguments alongside anticipated misleading claims.
    • Prebunking is often more effective than debunking after misinformation has already spread.

The scientific method in depth: six key terms explored

  • Theory: robust, evidence-backed explanation that integrates observations and makes testable predictions.
  • Hypothesis: testable prediction derived from a theory, used to guide experiments.
  • Operational definitions: precise procedures and criteria for measuring/manipulating variables.
  • Replication: repeating studies to verify findings; lack of replication can undermine confidence in results.
  • Pre-registration: preregistering hypotheses and analysis plans to reduce bias and increase transparency.
  • Meta-analysis: statistical synthesis of results across multiple studies to derive a stronger, general conclusion.
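
The "statistical synthesis" behind a meta-analysis can be sketched with a simple inverse-variance (fixed-effect) weighted average: studies with smaller sampling variance get more weight in the pooled estimate. The effect sizes and variances below are invented for illustration, not real study data:

```python
import math

def fixed_effect_meta(effects, variances):
    """Inverse-variance weighted (fixed-effect) meta-analysis.

    effects: per-study effect sizes (e.g., Cohen's d)
    variances: per-study sampling variances
    Returns the pooled effect size and its standard error.
    """
    weights = [1 / v for v in variances]  # precise studies weigh more
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1 / sum(weights))      # pooled estimate is more precise
    return pooled, se

# Hypothetical effect sizes from three studies of the same phenomenon
pooled, se = fixed_effect_meta([0.30, 0.45, 0.20], [0.04, 0.09, 0.02])
```

Note that the pooled standard error is smaller than any single study's, which is why a meta-analysis yields a more reliable estimate than any one study alone.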

Descriptive and observational research

  • Descriptive research aims to observe and describe phenomena without manipulating variables.
  • Types:
    • Naturalistic observation: observing behavior in natural settings without interference.
    • Interviews and surveys: collecting data from people through questions; more scalable than other methods but offering less depth per participant.
    • Case studies: in-depth examination of a single case or a few cases, often about rare phenomena; not generalizable but useful for theory development.
  • Naturalistic observation example: analyzing social media posts to infer mood across times of day and days of week (e.g., Twitter mood patterns); later hypotheses can be tested with experiments.
  • The descriptive approach often leads to testable hypotheses that can be studied with more controlled methods.
  • Random sampling importance: to generalize findings, the sample must represent the population; random sampling, in which every member of the population has an equal chance of being selected, helps ensure representativeness.
  • Wording effects in surveys:
    • Phrasing can change responses (e.g., “aid to the needy” vs. “welfare”).
    • Terms like “undocumented workers” vs. “illegal aliens” elicit different reactions.
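
The idea that a random sample lets us estimate a population value can be shown with a short simulation. The population and its attribute here are invented for the sketch:

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

# Hypothetical population of 10,000 people; 80% hold a given opinion
population = [{"id": i, "supports_policy": (i % 5 != 0)} for i in range(10_000)]

# Simple random sample: every member has an equal chance of selection
sample = random.sample(population, k=500)

# The sample proportion estimates the population proportion
estimate = sum(p["supports_policy"] for p in sample) / len(sample)
true_rate = sum(p["supports_policy"] for p in population) / len(population)
```

With only 500 randomly chosen respondents, the estimate lands close to the true 80% rate; a non-random (convenience) sample offers no such guarantee.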

Correlation and its limits

  • Correlation measures how two variables vary together and the extent to which one can predict the other.
  • Numerical representation: the correlation coefficient is denoted r, with
    • range: r falls between −1 and +1
    • positive correlations: as one variable increases, the other tends to increase; as one decreases, the other tends to decrease
    • negative correlations: as one variable increases, the other tends to decrease, and vice versa
    • magnitude indicates strength; values near ±1 indicate strong linear relationships, values near 0 indicate weak or no linear relationship
  • Common caveat: correlation does not imply causation. There may be:
    • a causal relationship one way (A causes B)
    • the reverse (B causes A)
    • a third variable causing both (confounding variable)
  • Example: a well-documented correlation between mental illness and smoking could reflect various causal structures or a third factor like stress; correlation alone cannot establish causation.
  • To establish causation, researchers use experimental methods that control for confounding variables and allow manipulation of the independent variable.
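
The correlation coefficient described above can be computed directly: it is the covariance of two variables scaled by their spreads, which forces the result into [−1, +1]. The study-hours/score data below are made up to show the two extremes:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)  # scaling bounds r to [-1, +1]

# Hypothetical data illustrating perfect linear relationships
hours = [1, 2, 3, 4, 5]
score_up = [10, 20, 30, 40, 50]    # rises with hours: r = +1
score_down = [50, 40, 30, 20, 10]  # falls with hours: r = -1
```

Even a perfect r of +1 here would not show that studying *causes* higher scores; as the caveats above note, the causal arrow could run either way, or a third variable could drive both.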

Longitudinal, experimental, and quasi-experimental methods

  • Longitudinal studies: follow the same participants over an extended period to observe changes and development; rich data but costly and time-consuming; often observational though some experimental elements can be included.
  • Experimental method: gold standard for causation; involves manipulation of an independent variable and random assignment to conditions; control of confounding variables; measurement of a dependent variable.
  • Quasi-experimental methods: resemble experiments but lack random assignment; rely on naturally occurring groups; offer stronger causal inference than purely descriptive studies but not as strong as true experiments; used when random assignment is unethical or impractical.
  • Experimental design components:
    • Independent variable: the manipulated factor (e.g., caffeine vs no caffeine).
    • Dependent variable: the measured outcome (e.g., cognitive performance scores).
    • Confounding variables: unwanted factors that can influence the outcome; controlled through random assignment, standardized procedures, and operational definitions.
  • Double-blind procedures: neither participants nor researchers know group assignments to prevent bias; helps mitigate placebo effects and observer bias.
  • Placebo effect: improvement due to expectations rather than the treatment itself; necessitates control groups to isolate true treatment effects.
  • Basic experimental structure: random assignment to control and experimental groups; compare dependent variable outcomes to establish causal effects.
  • Ethical considerations in experimentation: certain manipulations may be unacceptable (e.g., exposing non-smokers to smoking conditions); quasi-experiments offer alternatives when randomization is unethical or impractical.
  • Example: US teen mental health study using quasi-experimental and longitudinal approaches showed increases in reported sadness/hopelessness, especially among girls, across a decade; subsequent experimentation could test factors like social media use.
  • Decision framework for choosing a design: start with the research question, consider cost/time/ethics, select the design that best addresses the question while meeting practical constraints.
  • Important caveat: laboratory experiments test theoretical principles in controlled settings and may not generalize perfectly to real-life behavior; the goal is theory advancement and principle identification, not exact everyday replication.
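
The basic experimental structure (random assignment to control and experimental groups, then comparing the dependent variable) can be sketched as a simulation. The +5 "true treatment effect" and the noise parameters are assumptions of this toy model, not real data:

```python
import random
import statistics

random.seed(0)  # fixed seed so the sketch is reproducible

# 100 hypothetical participants
participants = list(range(100))

# Random assignment: shuffle, then split into experimental and control groups,
# so preexisting differences spread evenly across conditions
random.shuffle(participants)
experimental, control = participants[:50], participants[50:]

def measure(person, treated):
    """Simulated dependent variable: noisy baseline plus a treatment effect.

    The +5 'true effect' is an assumption of this simulation.
    """
    baseline = random.gauss(50, 10)
    return baseline + (5 if treated else 0)

exp_scores = [measure(p, treated=True) for p in experimental]
ctl_scores = [measure(p, treated=False) for p in control]

# With random assignment, the group difference estimates the causal effect
diff = statistics.mean(exp_scores) - statistics.mean(ctl_scores)
```

Because assignment is random, the difference in group means is an unbiased estimate of the manipulated effect; without random assignment (as in quasi-experiments), confounding variables could account for the difference.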

Types of research questions and designs

  • Correlational research: identifies relationships and makes predictions but cannot infer causation.
  • Longitudinal studies: observe development over time; strong for observing change but not always causal.
  • Experimental methods: manipulate variables and randomly assign participants to establish causality; include control groups and blinding when possible.
  • Quasi-experimental methods: use existing groups when random assignment is not feasible or ethical; still aim to infer causality but with more caution.
  • At the intersection: a typical research progression might start with correlational findings, move to longitudinal data, and then use experimental or quasi-experimental designs to test causal claims.

Ethics in research: animals and humans

  • Animal research is common in psychology (e.g., reward learning in animals); safeguards exist to protect animal welfare, including housing, social needs, minimizing pain, and health monitoring.
  • Institutional oversight for animal research: institutions have committees to review and approve study designs; studies proceed only if approved, with ongoing monitoring.
  • Human research ethics: governed by codes from professional associations (e.g., APA) and national standards; reviewed by Institutional Review Boards (IRBs).
  • Informed consent: participants must be informed about the study’s purpose, procedures, risks, and benefits; participation is voluntary and can be withdrawn at any time.
  • Confidentiality: individual data are protected; results are reported in aggregate to prevent identification of participants.
  • Debriefing: after participation, researchers explain the study’s purpose and procedures, address questions, and correct any potential misconceptions; more detailed in cases involving deception or sensitive topics.
  • IRB role: ensures participant safety, privacy, and ethical treatment; can require changes or deny an experiment if risks outweigh benefits.

Scientific integrity and replication

  • Trust in science depends on integrity; faked or falsified data can cause societal harm (e.g., anti-vaccine movements fueled by the fraudulent Lancet paper).
  • Replication is essential to verify findings; single studies rarely prove a phenomenon; multiple replications across labs and populations strengthen confidence.
  • When replication fails, researchers should revisit methods, sample characteristics, and potential confounds to understand discrepancies.
  • Researchers must be aware of their own biases and strive for objectivity; transparency about methods, data, and analyses helps minimize bias.
  • Common bias in interpretation arises from exposure to a bias-inducing cue (e.g., a rabbit picture biasing responses to a duck-rabbit image); awareness of bias helps reduce its impact on research.

Real-world applications and evidence-based practice

  • Psychological science provides evidence-based strategies for everyday life and learning:
    • Time management for exams and study planning
    • Getting a full night’s sleep before exams to support memory
    • Regular exercise to support cognitive function
    • Setting long-term goals with daily tasks; breaking big goals into manageable steps
    • Developing a growth mindset: viewing failures as opportunities for growth
    • Prioritizing meaningful relationships to reduce stress and promote well-being
  • The goal is to extract general principles that help explain and predict behavior across many contexts, not to specify exact behavior for every individual.

Study tips and test-taking strategies (evidence-based)

  • Testing effect: actively retrieving information improves retention more than passive review.
  • Put more emphasis on output than input while studying:
    • Summarize from memory
    • Teach the material to someone else; explaining concepts reinforces learning
  • SQ3R method: Survey, Question, Read, Retrieve, Review to structure studying effectively.
  • Spaced practice: distribute study over time rather than cramming; improves long-term retention.
  • Combine distributed practice with occasional review right before an exam for a short-term boost.
  • Critical thinking: actively question material, connect to broader contexts, and relate to real-world implications.
  • Overlearning: study beyond the minimum to solidify understanding and reduce forgetting.
  • Balance study approaches to maximize both understanding and recall in exam situations.

Practical takeaways

  • Use evidence-based study and learning techniques to improve retention and performance.
  • When evaluating a study, consider population, sampling methods, operational definitions, and whether findings generalize to real-world settings.
  • In everyday life, apply the scientific mindset: question information, seek high-quality evidence, and be mindful of biases and misinformation.