Module 2 Notes: Research Methods in Psychological Science
What counts as research in psychology
- Everyday use of “research” vs psychology use
- Everyday: looking something up (e.g., “I researched it on Google”).
- In psychology: acquiring new knowledge by conducting studies to answer a question.
- “Study” is an umbrella term that includes experiments, case studies, naturalistic observation, correlational surveys, and more.
- No single method is universally best; methods have strengths and weaknesses and are chosen to fit the question, resources, and constraints.
The scientific method and empirical investigation
- Empirical: you have to go look; data collection is required to answer questions about the mind and behavior.
- The scientific method is an error-correcting knowledge mechanism: imperfect but progressively approaches truth through rigorous, repeated application.
- Science helps us understand psychology through careful testing and evidence rather than relying on intuition or testimony.
- The process is not about absolute truth but about converging on what is most supported by data.
Seven (approximate) steps of empirical inquiry
- Step 1: Start with a research question
- Questions should be focused and amenable to empirical testing.
- Example:
- Broad question: "What is the best way for college students to study?"
- Focused question: "Which is more effective for memory: rereading vs practice testing?"
- Step 2: Literature review
- Review published scientific research related to the topic.
- Helps you see what is already known and where to contribute.
- Can lead to revisions of the research question.
- Step 3: Hypothesis and null hypothesis
- Hypothesis (H1): tentative answer to the research question (e.g., practice testing yields better memory than rereading).
- Null hypothesis (H0): no difference between conditions.
- Notation often used: H1 (the research hypothesis) and H0 (the null); subscripts distinguish multiple hypotheses (e.g., H1, H2).
- Step 4: Operational definitions
- Define exactly how variables will be measured or manipulated so that others can replicate the study.
- Important for clarity and comparability across studies.
- Step 5: Variables and design
- Variables:
- Independent Variable (IV): manipulated by the researcher; levels are the conditions or treatments.
- Dependent Variable (DV): the outcome measured.
- Example: In a memory study, IV could be method of study; DV could be recall performance.
- Operational definitions provide concrete implementations of the IV and DV.
- Rationale for multiple IVs: real-world phenomena are messy; multiple IVs can reveal nuanced effects but add complexity.
- Step 6: Sampling and assignment
- Population vs. sample:
- Population: the entire group of interest (e.g., all college students).
- Target population: specific subgroup of interest (e.g., all college students with depression).
- Sample: the subset of the population actually studied; aim for a sample that represents the target population.
- Random sampling: best practice to obtain a representative sample; not always feasible in practice.
- Convenience samples: common in psychology (e.g., undergraduate participants, SONA platforms); acceptable with caveats about generalizability.
- Random assignment: allocate participants to groups randomly to control for preexisting differences.
- Confounding variables: variables that covary with the IV and could explain observed effects (e.g., gender if it is systematically different across groups).
- Extraneous/third variables: other variables that could influence the DV if not controlled; random assignment helps mitigate these.
- Control of environment: keep conditions (time of day, ambient light, etc.) consistent to isolate the effect of the IV.
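The random-assignment step above can be sketched in code. This is a minimal illustration (the function name and participant IDs are hypothetical, not from the lecture): shuffle the participant list, then deal participants round-robin into conditions so preexisting differences are spread evenly across groups.

```python
import random

def random_assignment(participants, conditions, seed=None):
    """Randomly allocate participants to conditions.

    Shuffling first, then dealing round-robin, yields equal-sized
    groups and spreads preexisting differences across conditions.
    """
    rng = random.Random(seed)  # seeded for reproducibility
    shuffled = participants[:]
    rng.shuffle(shuffled)
    groups = {c: [] for c in conditions}
    for i, p in enumerate(shuffled):
        groups[conditions[i % len(conditions)]].append(p)
    return groups

# 20 hypothetical participant IDs, two study-method conditions
groups = random_assignment(list(range(1, 21)),
                           ["rereading", "practice_testing"], seed=42)
```

Note that random assignment (who goes in which group) is distinct from random sampling (who gets into the study at all); this sketch handles only the former.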
- Step 7: Data collection, analysis, and interpretation; publication
- Experiments are the gold standard for determining causality because they manipulate the IV and control other factors.
- Correlational studies can describe associations but cannot establish causation (correlation does not imply causation).
- Data analysis should rely on formal statistics rather than eyeballing results; consider effect size and probability.
- Language: scientists avoid saying “proven” or “disproved”; they discuss support for hypotheses and limitations.
- After a study, write it up in an appropriate format and submit for publication; the process involves editors and peer reviewers.
- Publication process:
- Manuscript submitted to a journal; editors send it to expert reviewers (review is often blinded, so reviewers may not know the authors' identities).
- Reviewers critique theory, rationale, methods, analyses, and conclusions; provide comments for revision.
- Editor decides to accept, revise, or reject; revisions may require additional studies or analyses.
- Publication is not a guaranteed payoff; emphasis is on quality control and advancing knowledge.
- Limitations of publication:
- Reviewers and editors can have biases; there is a margin of error in sampling and interpretation.
- The process aims to improve quality, not to protect the author.
Operational definitions, IVs, DVs with a concrete example
- Memory study example used in lecture:
- IV1: Method of study with two levels
- Level A: reading and rereading (two 5-minute sessions)
- Level B: writing to generate recall without feedback (two 5-minute sessions)
- IV2: Retention interval with two levels
- Short interval (e.g., five minutes)
- Longer delay (e.g., multiple days)
- DV: Memory performance measured as recall proportion
- \(\text{Recall Proportion} = \frac{n_{\text{recalled}}}{n_{\text{total}}}\)
- Hypotheses:
- H1: generating recall (practice testing) yields better memory than rereading, especially at the longer retention interval.
- H0: no difference in recall between the two study methods.
- Why include multiple IVs? To gain a more nuanced picture and to see if effects depend on retention interval.
- Why extra emphasis on operational definitions? For replication and comparability across studies; different operational definitions can complicate meta-analyses.
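The DV from the memory example can be made concrete with a tiny helper. This is an illustrative sketch (the function name and the 18-of-24 figures are hypothetical): it simply implements the recall-proportion formula above.

```python
def recall_proportion(n_recalled, n_total):
    """DV: proportion of studied items correctly recalled."""
    if n_total <= 0:
        raise ValueError("n_total must be positive")
    return n_recalled / n_total

# e.g., a participant recalls 18 of 24 studied words
score = recall_proportion(18, 24)  # 0.75
```

Trivial as it is, writing the measure down this explicitly is the point of an operational definition: any other lab can compute exactly the same DV.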
The difference between experimental and correlational methods
- Experimental design (random sampling, random assignment, control groups) allows inference of causality (IV causes DV changes).
- Correlational design (surveys, observational data) can describe associations and predict outcomes but cannot prove causation.
- When experiments are not ethical or feasible, researchers rely on correlational designs to explore associations (e.g., surveys on time spent on Facebook and mental health).
- A key caution: correlation does not imply causation due to potential third variables (e.g., seasonality affecting ice cream consumption and water activity, or other confounds).
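The third-variable caution can be demonstrated with a small simulation. In this hypothetical sketch (all numbers invented for illustration), temperature drives both ice cream sales and swimming; the two outcomes end up strongly correlated even though neither causes the other.

```python
import random

random.seed(0)

# Third variable: daily temperature drives both outcomes
temps = [random.gauss(20, 8) for _ in range(500)]
ice_cream = [t + random.gauss(0, 3) for t in temps]  # sales track temperature
swimming = [t + random.gauss(0, 3) for t in temps]   # swimming tracks temperature

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

r = pearson_r(ice_cream, swimming)  # large positive r, yet no causal link
```

Here the correlation is entirely induced by the shared cause, which is exactly why a correlational result alone cannot license a causal claim.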
Third variables, confounds, and control of extraneous factors
- Confounding variable: covaries with the IV and could offer an alternative explanation for DV changes (e.g., gender differences if one group has more men than women).
- Extraneous/third variables: alternative factors that were not controlled or measured but could influence the DV.
- Random assignment helps to distribute individual differences (age, SES, prior experience, etc.) evenly across groups, reducing confounds.
The role of statistics and measurement in experiments
- It is not sufficient to look at means; formal statistical analyses determine whether observed differences could occur by chance.
- Consider effect size in addition to statistical significance to gauge practical importance.
- Beware of language suggesting proof; science typically speaks in terms of support, evidence, and probability.
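One common effect-size measure, Cohen's d (the standardized mean difference), can be computed by hand. This sketch uses hypothetical recall proportions for the two study methods (the data are invented for illustration, not lecture results):

```python
def cohens_d(group1, group2):
    """Cohen's d: mean difference divided by the pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    m1, m2 = sum(group1) / n1, sum(group2) / n2
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)  # sample variances
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    pooled_sd = ((v1 * (n1 - 1) + v2 * (n2 - 1)) / (n1 + n2 - 2)) ** 0.5
    return (m1 - m2) / pooled_sd

# Hypothetical recall proportions per participant
practice_testing = [0.80, 0.75, 0.85, 0.70, 0.78]
rereading = [0.60, 0.65, 0.55, 0.62, 0.58]
d = cohens_d(practice_testing, rereading)  # large, positive effect
```

A significance test answers "could this difference be chance?", while d answers "how big is the difference in standard-deviation units?"; both matter when interpreting results.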
The publication and peer-review landscape
- After a study, researchers submit a manuscript to a journal in the appropriate format (as described in the course); editors send it to peer reviewers who critique the theory, logic, methods, data analyses, and conclusions.
- Reviewers’ comments are considered by the editor; revisions may be requested, or the manuscript may be rejected.
- The process acts as quality control to improve or weed out weak studies; not all submitted articles are published.
- Publication biases exist; editors and reviewers can have biases, and disagreements can occur, sometimes leading to extra studies or revised hypotheses.
Practical/ethical considerations in research design
- The nature of the topic dictates the importance of random sampling; in highly generalizable domains (low-level perceptual processing), sampling concerns may be less critical than in more culturally influenced domains.
- In drug trials or clinical experiments, use of placebos and standard treatments as controls improves inference about the new intervention.
- Real-world generalizability (ecological validity) can be a limitation of laboratory experiments; researchers strive to design tasks that reflect real-world behavior while maintaining experimental control.
Course logistics (assignment structure mentioned in the transcript)
- Six short written assignments are offered over the semester; students must choose three, selecting one from each unit (e.g., one from the first five weeks and one from the second five weeks within a block).
- You cannot do all six; this distributes grading and workload; due dates are on the syllabus.
- Focus is on concise, targeted responses; assignments are described as short but not trivial, designed to assess understanding of unit content.
Connections to foundational principles and real-world relevance
- The research process mirrors how knowledge accumulates: narrow questions, build on prior evidence, test hypotheses, replicate, and publish to advance the field.
- The emphasis on randomization, control, and replication reflects the core goal of distinguishing causation from correlation and ensuring findings generalize beyond the sample.
- Understanding operational definitions and measurement is crucial for scientific communication and cumulative science; it enables meaningful comparisons across studies and meta-analyses.
Ethical, philosophical, and practical implications discussed
- Science as a self-correcting enterprise: openness to revision and critique is central to progress.
- Caution against overclaiming proof; scientific conclusions are probabilistic and contingent on accumulated evidence.
- The peer-review process embodies collective quality control, acknowledging that researchers should not take results personally but view feedback as a means to improve scientific quality.