Psychology Chapter 2: Psychological Research
2.1 Why is research important?
Learning objectives
Explain how scientific research addresses questions about behavior.
Discuss how scientific research guides public policy.
Explain how scientific research can guide sound personal decisions, and recognize when research evidence is being applied appropriately or inappropriately.
Emphasize that scientific research is a critical tool for navigating a complex world; avoid relying solely on intuition, authority, or luck.
The role of evidence vs opinion
History shows we can be very wrong when we ignore evidence (e.g., geocentric models, flat-earth ideas, possession as cause of mental illness).
Science aims to disprove preconceived notions and superstitions to gain objective understanding.
Psychology seeks to understand behavior and the underlying cognitive, mental, and neural processes.
Empirical knowledge vs. intuition
Scientific knowledge is empirical: grounded in observable, repeatable evidence.
Behavior is observable; the mind is not always directly observable, so researchers infer causes from observable data.
Use of research information in decision making
In public policy: evaluate which theories are accepted by the scientific community; consider credibility, consensus, and quality of evidence.
In education and technology: mixed findings exist; technology can both help and hinder learning, engagement, sleep, and time management.
Real-world example: governor deciding which early intervention programs to fund. Research shows many programs are effective long-term; guides funding decisions.
Everyday personal decisions and health
Parents may seek information about child development (e.g., speech delays) and encounter vast online sources.
Facts vs. opinions: facts are observable realities from empirical research; opinions are judgments that may not be supported by data.
Notable researchers and historical context
Margaret Floy Washburn — first woman to earn a PhD in psychology; animal behavior and cognition.
Mary Whiton Calkins — memory research; early experimental psychology lab founder; first female APA president in 1905.
Francis Sumner — first African American to receive a PhD in psychology (1920); founder of Howard University’s psychology department; father of Black psychology.
Inez Beverly Prosser — first African American woman to earn a PhD in psychology; education in segregated vs. integrated schools; influenced Brown v. Board of Education.
Global expansion of psychology labs: early labs in South America (e.g., Horacio Piñero in Buenos Aires); in India, Gunamudian David Boaz and Narendra Nath Sen Gupta established the first independent psychology departments.
APA history: founded in 1892 with an essentially all white, all male membership; Mary Whiton Calkins elected its first female president in 1905; by 1946, roughly one quarter of American psychologists were women.
Diversity and representation
Growth in diversity of researchers reflected in broader representation and potentially broader applicability of findings.
The scientific method in psychology
Scientific knowledge advances via a cyclical process: theory -> hypothesis -> empirical observation -> refinement of theory.
Deductive reasoning: start with a general theory/hypothesis and test predictions in the real world.
Inductive reasoning: derive generalizations from empirical observations.
In practice, scientists use both deductive and inductive reasoning; case studies lean toward inductive patterns, experiments toward deductive testing.
James-Lange theory example
Emotion as a result of physiological arousal; a hypothesis from the theory might be that someone unaware of arousal will not feel fear; this hypothesis is falsifiable and testable.
Falsifiability and Freud
A hypothesis must be falsifiable; Freud’s ideas were criticized for not yielding falsifiable predictions.
James-Lange theory yields testable predictions; some evidence shows that people still experience emotions even when awareness of physiological arousal is impaired, which challenges the theory's prediction and illustrates falsifiability in action.
The value of falsifiability
The ability to disconfirm a theory increases confidence in the knowledge produced.
Scientific claims are accepted after repeated testing and potential replication.
Key takeaways
Scientific research helps separate opinion from evidence.
The public realm (policy, education, health) benefits from robust, replicable research.
2.2 Approaches to research
Objectives
Describe strengths and weaknesses of case studies, naturalistic observation, surveys, and archival research.
Compare longitudinal and cross-sectional approaches.
Compare correlation and causation; understand how experiments establish causality.
Methods and their trade-offs
Case studies
Rich, in-depth information about a single person or a small group.
Strength: deep understanding of rare cases.
Weakness: limited generalizability to the broader population.
Often used for rare phenomena or initial exploration.
Naturalistic observation
Observing behavior in its natural context without manipulation.
Strength: high ecological validity; natural behavior.
Weakness: observer effects (people alter behavior when aware of being observed); hard to control variables; time-consuming; costly.
Example: Suzanne R. Fanger observing preschool playground interactions with inconspicuous observation and wireless microphones.
Animal studies use field observation to understand social structures (e.g., Jane Goodall on chimpanzees).
Structured observation
Observations in a controlled set of tasks or environment (e.g., the Strange Situation by Mary Ainsworth for infant attachment).
Balances naturalistic context with some control to elicit relevant behaviors.
Surveys
Questionnaires or interviews to collect data from large samples.
Strength: generalizability to a population when sample is representative.
Weakness: reliance on self-report; social desirability bias; misreporting; limited depth.
Central tendency measures: mode, median, mean
Outliers can skew the mean; consider distribution shape.
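The outlier effect on these three measures can be sketched with Python's standard library (the exam scores below are made up for illustration):

```python
from statistics import mean, median, mode

# Hypothetical exam scores with one low outlier (the 20)
scores = [85, 88, 90, 90, 92, 20]

print(mean(scores))    # 77.5 — dragged down by the outlier
print(median(scores))  # 89.0 — robust to the outlier
print(mode(scores))    # 90   — the most frequent score
```

The gap between mean and median is a quick signal that the distribution is skewed, which is why the notes suggest checking distribution shape before reporting a mean.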
Archival research
Analyze existing records/datasets without interacting with participants.
Strength: cost-effective; no participant interaction; historical trends.
Weakness: no experimental control; data quality varies; missing data; inconsistency across sources.
Correlation and causation
Correlation means a relationship between two or more variables, not necessarily causation.
Correlation coefficient (r) ranges from -1 to +1, where:
Positive: as one variable increases, the other increases.
Negative: as one variable increases, the other decreases.
Magnitude indicates strength: |r| closer to 1 means a stronger relationship; closer to 0 means weaker.
Example: ice cream consumption and crime rates both rise with higher temperatures; this is a positive correlation due to a third variable (temperature) rather than causation.
Illusory correlations: people perceive relationships that don’t exist (e.g., moon phases affecting behavior) due to confirmation bias or availability heuristics.
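The correlation coefficient can be computed directly from its definition (covariance scaled by both standard deviations); the temperature/ice cream/crime numbers below are invented for illustration:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation: covariance divided by the product of standard deviations."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Made-up data: ice cream sales and crime reports both track temperature,
# so they correlate strongly with each other without either causing the other.
temperature = [15, 20, 25, 30, 35]
ice_cream   = [12, 18, 25, 33, 41]
crime       = [30, 34, 41, 47, 55]

print(pearson_r(temperature, ice_cream))  # strongly positive (near +1)
print(pearson_r(ice_cream, crime))        # also strongly positive — yet not causal
</n```

The second result is the point of the ice-cream/crime example: a large |r| between two variables says nothing about which, if either, causes the other.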
From observation to theory
Inductive reasoning builds general theories from observed data; used heavily in case studies and naturalistic observations.
Deductive reasoning tests predictions derived from theories; used heavily in experiments.
Theoretical constructs become hypotheses; hypotheses become tested via observations/experiments; results refine theories.
Experimental design basics
Hypothesis: testable prediction (often in if-then form) about a relationship between variables.
Operational definitions: precise, testable definitions of variables (how you measure learning, for example).
Experimental vs control groups: manipulate one variable (independent variable) and observe effect on another (dependent variable).
Random assignment: each participant has an equal chance to be in either group; essential for reducing preexisting differences.
Blind designs: single-blind (participants unaware of group) and double-blind (participants and researchers unaware of group) to minimize bias.
Placebo effect: participants’ expectations cause changes in outcomes; a placebo control helps isolate the effect of the manipulation.
Ethical design limits: some questions cannot be ethically studied through direct manipulation (e.g., child abuse exposure).
Key terms in experimental design
Independent variable (IV): the manipulated variable.
Dependent variable (DV): the measured outcome.
Operationalization: turning abstract constructs into measurable operations.
Sampling concepts
Population: the entire group of interest.
Sample: a subset drawn from the population.
Random sampling: increases representativeness of the sample.
Random assignment: ensures comparable groups at the start of an experiment.
Practical example and interpretation
Technology in the classroom example: compare learning outcomes between computer-based instruction (IV) and traditional instructor-led learning (control); measure learning via a test (DV).
If a study uses random assignment and a well-defined operational definition of learning, observed differences can be attributed to the instructional method with statistical justification (see significance below).
Significance and limitations
Statistical significance: typically a p-value below .05, meaning less than a 5% probability of observing differences this large if chance alone were operating.
Quasi-experiments: when random assignment is not possible (e.g., sex differences), causal claims are weaker; ethical constraints may limit manipulation.
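One way to make the "occurred by chance" idea concrete is a permutation test: repeatedly shuffle the group labels and ask how often a mean difference as large as the observed one appears by chance. The test scores below are invented to mirror the classroom-technology example:

```python
import random

def permutation_p_value(group_a, group_b, n_perm=10_000, seed=0):
    """Two-tailed permutation test on the difference in group means.

    p = proportion of random label shuffles whose absolute mean difference
    is at least as extreme as the one actually observed.
    """
    rng = random.Random(seed)
    observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    extreme = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = abs(sum(pooled[:n_a]) / n_a
                   - sum(pooled[n_a:]) / (len(pooled) - n_a))
        if diff >= observed:
            extreme += 1
    return extreme / n_perm

# Hypothetical test scores: computer-based vs. traditional instruction
computer    = [82, 85, 88, 90, 91, 87]
traditional = [75, 78, 74, 80, 77, 79]
p = permutation_p_value(computer, traditional)
print(p < 0.05)  # True for this made-up data: unlikely under chance alone
```

This is a sketch, not the textbook's analysis; in practice researchers would report the test they used (t-test, ANOVA, etc.) alongside the p-value.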
Interpreting research quality
Peer-reviewed publication process helps ensure quality and replicability.
Replication crisis: concerns about reproducibility; some famous studies fail to replicate; discussions lead to improved methodologies and openness.
Transparency and replication: replication attempts reinforce or challenge original findings; strong science relies on converging evidence.
Real-world illustrations
Longitudinal and cross-sectional designs illustrate development and cohort effects.
2.4 Ethics in research
Human subjects research
Institutional Review Board (IRB): reviews proposals to protect participant safety and rights; required for federally funded research.
Informed consent: participants receive a description of what they will experience, potential risks, voluntary participation, and the right to withdraw without penalty; data confidentiality guaranteed.
Deception and debriefing: deception may be used to preserve study integrity when ethically justified; participants must receive a thorough debriefing after participation, explaining the true purpose and data collected.
Special considerations for minors: parents/guardians provide consent; assent from minors when appropriate.
Animal research
Humane treatment and welfare of animal subjects are required.
Institutional Animal Care and Use Committee (IACUC): reviews research proposals involving animals; conducts semiannual inspections of facilities; no project proceeds without IACUC approval.
Note: most animal research involves rodents or birds; many basic processes are similar across species, justifying animal models.
Ethical principles in practice
Respect for human dignity and safety; responsible conduct of research; minimizing harm and maximizing benefits.
Balancing scientific knowledge gains with participant welfare; ensuring confidentiality and minimizing risk.
Summary of ethical regulation
IRB handles human research ethics, consent, deception, and confidentiality.
IACUC handles animal research ethics and welfare.
Ethical review processes aim to prevent harm, ensure informed participation, and promote responsible reporting and replication of findings.
Key concepts and formulas to remember
Correlation coefficient (r): ranges from -1 to +1; sign indicates direction (positive or negative) and magnitude indicates strength.
Statistical significance: typically p < 0.05 as a threshold for rejecting the null hypothesis.
Deductive vs inductive reasoning: deductive testing of hypotheses derived from theory; inductive generation of theory from observations.
Independent vs dependent variables: IV is manipulated; DV is measured.
Operational definitions: precise, replicable definitions of how variables are measured.
Reliability vs validity:
Reliability: consistency of measurement across time or observers (e.g., inter-rater reliability, test-retest).
Validity: accuracy of measurement (ecological validity, construct validity, face validity).
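Inter-rater reliability can be sketched as simple percent agreement between two observers coding the same behaviors (the playground codes below are hypothetical; Cohen's kappa would additionally correct for chance agreement):

```python
def percent_agreement(rater_a, rater_b):
    """Proportion of items on which two raters assigned the same code."""
    assert len(rater_a) == len(rater_b), "raters must code the same items"
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

# Hypothetical codes two observers assigned to the same 8 playground interactions
coder_1 = ["help", "share", "tease", "share", "help", "tease", "share", "help"]
coder_2 = ["help", "share", "share", "share", "help", "tease", "share", "tease"]
print(percent_agreement(coder_1, coder_2))  # 0.75 — agreement on 6 of 8 items
```

Test-retest reliability follows the same logic across time instead of across observers: correlate or compare the same measure taken twice.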
Blind designs:
Single-blind: participants unaware of group assignment.
Double-blind: both participants and researchers unaware of group assignment.
Types of research designs:
Case studies, naturalistic observation, surveys, archival research, longitudinal studies, cross-sectional studies, and experiments.
Ethics acronyms:
IRB (Human subjects): institutional review board.
IACUC (Animal subjects): institutional animal care and use committee.
Examples mentioned in the lecture:
Moon phases and behavior: meta-analytic evidence shows no consistent relationship; demonstrates illusory correlations and the importance of meta-analysis.
Ice cream and crime: temperature as a confounding variable; illustrates correlation vs causation.
James-Lange theory: emotion following physiological arousal; falsifiability of hypotheses derived from the theory.
The Strange Situation: Mary Ainsworth’s assessment of infant attachment styles (structured observation).
Hogan twins case: twins connected at the head offering a unique natural experiment in sensory integration.
Longitudinal cancer studies: the American Cancer Society's Cancer Prevention Studies (CPS) linking cancer risk factors (e.g., smoking) to cancer outcomes over decades.
Note: Throughout, the transcript emphasizes the necessity of evidence-based conclusions, critical thinking, and the careful application of ethical guidelines when conducting psychological research.