Chapter 2 Notes (Psych 107)

Why science matters

  • Before: Earth considered flat; mental illness thought to be demonic possession.
  • Why it matters: Research validates claims; without it, intuition and baseless assumptions prevail.
  • Science requires a systematic process and verification.
  • Example: trephination, drilling a hole in the skull, was historically believed to let evil spirits escape and thereby cure mental disorders (Figure 2.2 shows a trephined skull with a circular hole), reflecting early beliefs that mental illness was caused by spirits.
  • Takeaway: Systematic inquiry helps separate myths from evidence-based conclusions.

Reasoning in the research process

  • Deductive reasoning:
    • Premise 1: All living things require energy to survive.
    • Premise 2: Humans are living things.
    • Conclusion: Humans require energy to survive.
    • Driven by logical analysis.
  • Inductive reasoning:
    • Based on observations: e.g., humans, dogs, and trees require energy to survive; AI programs require energy to run.
    • Conclusion drawn: AI must be a living thing (an inference from the observed data). The inference is plainly false, illustrating that inductive conclusions are probable rather than guaranteed.
  • Note: Deduction moves from theory to hypothesis; Induction moves from observations to theory.

Science uses both forms of reasoning

  • Hypotheses (ideas) are derived from theories through deductive reasoning.
  • Hypotheses tested through empirical observations.
  • Scientists form conclusions through inductive reasoning.
  • Conclusions lead to new theories, which generate new hypotheses, creating a cycle: theory → hypothesis → observations → theory.

Theory and Hypotheses

  • Theory: well-developed set of ideas that explains observed phenomena.
  • Hypothesis: tentative, testable statement about relationships between two or more variables.
    • Predicts how the world will behave if the theory is correct.
    • Usually an if-then statement.
    • Is falsifiable, i.e., can be shown incorrect via empirical methods.

Types of Research

  • Not all research is experimental.
  • In this course:
    1) The term “experiment” describes a very particular design.
    2) “Empirical” means researchers followed a specific methodology and collected their own data to observe, analyze, and describe.

Case studies

  • Focus on one individual.
  • The studied individual is often in an extreme or unique psychological circumstance.
  • Classic example: Phineas Gage.
  • Conclusions: Brain injury (frontal lobe) might impact behaviors and personality, but generalizing to the broader population requires caution.
  • Pros: Rich insight into a case.
  • Cons: Limited generalizability to the larger population.

Naturalistic observation

  • Observation of behavior in its natural setting.
  • Observing without interference reduces reactivity (people altering their behavior because they know they are being watched) and yields more genuine behavior.
  • Observer bias: observations may be skewed to fit observer expectations.
  • Mitigation: establish clear observation criteria.
  • Pros: Observes genuine behavior.
  • Cons: Susceptible to observer bias.
  • Example: Seeing a police car behind you may affect driving behavior.

Surveys

  • A list of questions delivered in various formats:
    • Paper-and-pencil
    • Electronically
    • Verbally
  • Used to gather data from a large sample of individuals from a larger population.
  • Pros: Efficient data collection from many people.
  • Cons: People may lie; depth of information is limited compared to interviews.
  • Data can be quantitative or qualitative.

Archival research

  • Uses past records or data sets to answer questions or explore patterns.
  • Pros: Data already collected; cost/time efficient.
  • Cons: Cannot change what information is available.
  • Researchers examine records (hardcopy or electronic).

Timing: cross-sectional vs. longitudinal

  • Cross-sectional research: compare multiple groups at a single point in time.
  • Longitudinal research: take multiple measurements from the same group over time.
  • Risk: attrition – participants dropping out over time.

Correlations

  • Correlation: relationship between two or more variables; when variables change together.
  • Correlation coefficient: a number from $-1$ to $+1$, denoted $r$, indicating the strength and direction of the relationship.
  • Visualization: scatterplots illustrate strength and direction; the more closely the points cluster around a straight line, the stronger the correlation (a short computational sketch follows below).
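  • A minimal Python sketch (made-up hours-of-sleep and exam-score numbers, not real data) of how $r$ is computed and read:

```python
import numpy as np

# Hypothetical data: hours of sleep and exam scores for ten students.
sleep = np.array([5, 6, 6, 7, 7, 8, 8, 8, 9, 9])
score = np.array([62, 65, 70, 71, 74, 78, 80, 83, 85, 88])

# Pearson's r: the covariance of the two variables divided by the product
# of their standard deviations; it always falls between -1 and +1.
r = np.corrcoef(sleep, score)[0, 1]
print(f"r = {r:.2f}")  # near +1 here, i.e., a strong positive relationship
```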

Correlation does not imply causation

  • Causation: a cause-and-effect relationship where changes in one variable cause changes in another.
  • Can only be established through experimental design.
  • Confounding variable: an unanticipated outside factor that affects both variables, creating a false impression of causation.
  • Example: Ice cream sales and drowning incidents tend to correlate because a third variable (hot weather) drives both; a simulated illustration follows this list.
  • Statistical note: significance is tested to determine if results could occur by chance.
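  • A simulated illustration (fabricated numbers, not data from any real study, assuming hot weather drives both outcomes) of how a confounding variable produces a correlation without causation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Confounding variable: daily temperature across a simulated 90-day summer.
temperature = rng.uniform(15, 35, size=90)  # degrees Celsius

# Both outcomes depend on temperature, not on each other.
ice_cream_sales = 20 * temperature + rng.normal(0, 50, size=90)
drownings = 0.3 * temperature + rng.normal(0, 2, size=90)

r = np.corrcoef(ice_cream_sales, drownings)[0, 1]
print(f"r = {r:.2f}")  # clearly positive, despite no causal link between the two
```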

Issues with correlational research

  • Illusory correlations: perceiving a relationship between variables when none actually exists.
  • Confirmation bias: the tendency to seek out evidence that supports preexisting beliefs and to ignore evidence that contradicts them.
  • Example: the belief that full moons influence behavior persists even though research finds no such relationship.

Cause-and-effect and experiments

  • Only experiments can conclusively establish causation.
  • Not all research is an experiment.
  • Experiments involve:
    • Experimental group: participants who experience the manipulated variable.
    • Control group: participants who do not experience the manipulated variable.
    • Purpose: provide a basis for comparison and control for extraneous factors.

Example experiment: the bystander effect

  • Participants randomly assigned to experimental or control group.
  • Difference between groups is the presence of others (the manipulation).
    • Operational definitions specify exactly how the researchers measure each study variable (e.g., whether a participant interprets the situation as an emergency is operationalized as whether the participant acts).
    • Scenario: confederates (actors working with the researchers, posing as fellow participants) are present in the experimental group; no others are present in the control group.

Other experimental design considerations

  • Aim to minimize bias and placebo effects.
  • Experimenter bias: researchers' expectations influence results.
  • Participant bias: participants' expectations influence results (e.g., placebo effect).
  • Solution: blinding.
    • Single-blind: participants do not know which group they’re in.
    • Double-blind: neither participants nor researchers interacting with participants know group assignments.
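    • A minimal sketch (hypothetical participant IDs and session codes) of how double-blind assignment can be handled: a coordinator holds the key linking codes to conditions, while participants and the researchers who interact with them see only opaque codes:

```python
import random

participants = ["P01", "P02", "P03", "P04", "P05", "P06"]
random.shuffle(participants)  # randomize the assignment order

# Coordinator's key: the only place group assignments are recorded.
key = {pid: ("experimental" if i % 2 == 0 else "control")
       for i, pid in enumerate(participants)}

# What the participant-facing researchers see: opaque session codes only,
# so neither they nor the participants know the group assignments.
blinded_schedule = {pid: f"SESSION-{n:03d}" for n, pid in enumerate(participants, 1)}
print(blinded_schedule)
```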

What are we studying? Variables

  • Variable: a characteristic that can vary among subjects.
  • Independent variable (IV): what researchers manipulate or control (e.g., group assignment).
  • Dependent variable (DV): what researchers measure; may be influenced by the IV.

Selecting participants

  • Participants are recruited from a population into a smaller subset called a sample.
  • Random sampling, in which every member of the population has an equal chance of being selected, is the gold standard for obtaining a representative sample and preventing selection bias (see the sketch below).
  • Goal: use a sample of a population to generalize findings.
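  • A minimal sketch, with a made-up roster of student IDs standing in for the population, of drawing a simple random sample:

```python
import random

# Hypothetical population: a registrar's roster of 5,000 student IDs.
population = [f"student_{i}" for i in range(5000)]

# Simple random sample of 100 participants; every ID is equally likely to be
# chosen, which is what supports generalizing from the sample to the population.
sample = random.sample(population, k=100)
print(len(sample), sample[:3])
```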

What do the results say? Statistics and significance

  • Data are analyzed with statistics to determine if results could have occurred by chance.
  • If the probability of the result happening by chance is very unlikely (usually $p < 0.05$), the results are considered statistically significant.
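  • A hedged sketch (fabricated scores for two hypothetical groups, using SciPy's independent-samples t-test as one common choice of statistic) of how a p-value is compared against 0.05:

```python
from scipy import stats

# Hypothetical DV scores (e.g., seconds before helping) for the two groups.
experimental = [42, 55, 61, 48, 70, 66, 59, 73]
control = [31, 28, 40, 35, 25, 38, 30, 33]

t, p = stats.ttest_ind(experimental, control)
print(f"t = {t:.2f}, p = {p:.4f}")
if p < 0.05:
    print("Statistically significant: unlikely to have occurred by chance.")
else:
    print("Not significant: the difference could plausibly be due to chance.")
```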

Reporting the findings

  • Scientific studies are typically published in peer-reviewed journals.
  • Peer review involves other scientists evaluating the study for quality and impact.
  • Provides anonymous feedback and improves research quality.

Recognizing good science

  • Measures and results should be Reliable (consistent over time, across situations, and across raters) and Valid (measuring what they are intended to measure).
  • Operational definitions of variables should yield measures that are both reliable and valid.
    • A valid measure is always reliable, but a reliable measure is not always valid (e.g., a scale that always reads five pounds too heavy is reliable but not valid).

Ethics in research

  • Research must follow ethical principles enforced by review boards/agencies.
  • Human subjects research:
    • Institutional Review Boards (IRBs) check informed consent, voluntary participation, awareness of risks, benefits, implications, and confidentiality.
    • Ensure risks vs. benefits are considered for participants.
  • Animal subjects research:
    • Institutional Animal Care and Use Committee (IACUC) checks humane treatment of animals.