Chapter 2 Notes: Psychological Research

Why it matters

  • Historically, people accepted unsupported claims, such as that the Earth is flat or that mental illness is caused by demonic possession.
  • Why study psychology scientifically? Research is necessary for validating claims; otherwise intuition and baseless assumptions may mislead.
  • Science requires a systematic process and verification of findings.
  • Trephination example: Some ancestors believed that making a hole in the skull would let evil spirits leave the body, curing mental illness and other disorders. (Figure 2.2; credit: taiproject/Flickr)

Reasoning in the research process

  • Deductive reasoning
    • Results are predicted based on a general premise.
    • Example (logical):
    • All living things require energy to survive (premise)
    • Humans are living things (premise)
    • Therefore, humans require energy to survive (conclusion)
    • Based on logical analysis.
  • Inductive reasoning
    • Conclusions are drawn from observations (empirical).
    • Example (empirical):
    • Humans require energy to survive
    • Dogs require energy to survive
    • Trees require energy to survive
    • AI programs require energy to run
    • Conclusion: AI must be a living thing (an incorrect conclusion, illustrating that inductive conclusions are probable rather than certain)

Science uses both forms of reasoning

  • Hypotheses (specific, testable ideas) are derived from theories through deductive reasoning.
  • Hypotheses are tested through empirical observations.
  • Scientists form conclusions through inductive reasoning.
  • Conclusions lead to new theories, which generate new hypotheses, continuing the cycle.

Key terms

  • Theory: a well-developed set of ideas that proposes an explanation for observed phenomena.
  • Hypothesis: a tentative and testable statement about the relationship between two or more variables.
    • Predicts how the world will behave if the theory is correct.
    • Usually an “if-then” statement: if X, then Y.
    • Is falsifiable (capable of being shown to be incorrect), usually using empirical methods.

Types of Research

  • Not all research is experimental.
  • In this class:
    1) The term “experiment” describes a very particular type of research design.
    2) “Empirical”: researchers followed a methodology and collected their own data to observe, analyze, and describe.

Case studies

  • Case studies focus on one individual.
  • The studied individual is typically in an extreme or unique psychological circumstance.
  • Classic example: Phineas Gage.
  • Conclusions: Brain injury (frontal lobe) might impact behaviors related to personality, but generalizing should be done with CAUTION.
  • Pros (PRO): Allows for rich insight into a case.
  • Cons (CON): Difficult to generalize results to the larger population.

Naturalistic observation

  • Naturalistic observation = observation of behavior in its natural setting.
  • Pros (PRO): Reduces the chance that people change their behavior because they know they are being watched; allows study of genuine behaviors.
  • Cons (CON): Observer bias; observations may be skewed to fit expectations.
  • Observer bias: bias in observations due to observer expectations.
  • Establishing clear criteria for observation helps reduce observer bias.
  • Example: Seeing a police car behind you may alter driving behavior. (credit: Michael Gil)

Surveys

  • A survey is a list of questions delivered in multiple formats: paper-and-pencil, electronic, or verbal.
  • Used to gather a large amount of data from a sample (subset of individuals) from a larger population.
  • Pros (PRO): Efficiently collects data from many people.
  • Cons (CON): People may lie; less depth per respondent.
  • Data may be quantitative (numerical) or qualitative (descriptive).

Archival research

  • Uses past records or data sets to answer research questions or identify patterns/relationships.
  • Pros (PRO): Data are already obtained, saving time and money.
  • Cons (CON): Cannot change what information is available.
  • Researchers examine records, whether hardcopy or electronic.

It’s all about the timing

  • Cross-sectional research: comparing multiple groups at a single point in time.
  • Longitudinal research: multiple measurements from the same group over time.
  • Risk of attrition: participants dropping out over time.

Correlations

  • Correlation: relationship between two or more variables; when two variables are correlated, one variable changes as the other does.

Correlation details

  • Correlation Coefficient: a number from -1 to +1, indicating the strength and direction of the relationship, usually represented by r (a worked sketch follows this list).
  • The more the data align with a straight line (points close to a line), the stronger the correlation.
  • Positive correlation: variables change in the same direction (both increase or both decrease).
  • Negative correlation: variables change in opposite directions (one increases, the other decreases).
  • Scatterplots visually display the strength and direction of correlations.
  • Stronger correlations have data points lying closer to a straight line.
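
A minimal Python sketch of how a correlation coefficient might be computed, using NumPy with made-up sleep and exam-score data (the variables and values are illustrative, not from the notes):

```python
import numpy as np

# Hypothetical data: hours of sleep and exam scores for six students
sleep_hours = np.array([5, 6, 6, 7, 8, 9])
exam_scores = np.array([62, 70, 68, 75, 80, 88])

# Pearson's r ranges from -1 to +1; np.corrcoef returns a 2x2 matrix,
# and the off-diagonal entry is the correlation between the two variables
r = np.corrcoef(sleep_hours, exam_scores)[0, 1]

print(f"r = {r:.2f}")  # a value near +1 indicates a strong positive correlation
```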

Correlation DOES NOT mean causation

  • Cause-and-effect relationship: changes in one variable cause changes in the other; can be established only through experimental design.
  • Confounding variable: an outside factor that affects both variables, creating a false impression of causality.
  • Example: Ice cream sales and drowning incidents both rise in hot weather; temperature is the confounding variable, so the correlation between ice cream sales and drownings is spurious rather than causal.

Issues with correlational research

  • Illusory correlations: perceiving a relationship where none exists.
  • Confirmation bias: tendency to seek out evidence that supports existing beliefs and to ignore evidence that contradicts them.
  • Example: the belief that a full moon affects behavior; research shows no reliable relationship.

Cause-and-effect

  • Can be conclusively established only with an experiment.
  • Not all research counts as an “experiment.”
  • Experiments involve:
    • Experimental group: participants who experience the manipulated variable.
    • Control group: participants who do not experience the manipulated variable; used for comparison and to control for chance factors.

Example experiment

  • Participants are randomly assigned to the control or experimental group; random assignment is what distinguishes a true experiment from other designs (a brief sketch follows this list).
  • Example: Bystander effect
    • Experimental group: confederates (fake participants) present.
    • Control group: no other people around.
  • Research question: How does the presence of others impact how people interpret an emergency?
  • Operational definitions: precise definitions of what is being studied and how it will be measured.
    • Example: interpretation of emergency, measured by whether participants act in response to the emergency.
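
A minimal Python sketch of random assignment under hypothetical assumptions (20 numbered participants split into two equal groups; the IDs and group labels are illustrative only):

```python
import random

# Hypothetical participant IDs
participants = list(range(1, 21))  # 20 participants

# Random assignment: shuffle so that chance, not the researcher,
# determines who ends up in each group, then split in half
random.shuffle(participants)
half = len(participants) // 2
experimental_group = participants[:half]  # e.g., confederates present
control_group = participants[half:]       # e.g., no other people around

print("Experimental:", sorted(experimental_group))
print("Control:", sorted(control_group))
```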

Other experimental design considerations

  • Aim to minimize bias and placebo effects.
  • Experimenter bias: researchers’ expectations skew results.
  • Participant bias: participants’ expectations skew results (e.g., placebo effect, power of expectations).
  • Solution: Blinding.
    • Single-blind: participants do not know which group they’re in.
    • Double-blind: neither participants nor researchers who interact with participants know group assignments.

What are we studying?

  • Variable: a characteristic on which subjects can vary.
  • Independent variable (IV): something researchers directly control in an experiment (e.g., which group).
  • Dependent variable (DV): something measured that may be influenced by the IV.

Selecting participants

  • Participants are recruited from a population into a smaller subset called a sample.
  • Random sampling is the “gold standard” → helps ensure the sample represents the population and minimizes bias (see the sketch after this list).
  • Goal: use a sample of a population to generalize findings to the population.
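
A minimal Python sketch of simple random sampling, assuming a hypothetical population of 10,000 student IDs and a sample size of 100 (both numbers are illustrative):

```python
import random

# Hypothetical population: 10,000 student IDs
population = list(range(10_000))

# Simple random sample: every member of the population has an equal
# chance of being selected, which helps the sample mirror the population
sample = random.sample(population, k=100)

print(len(sample), "participants sampled")
```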

What do the results say?

  • Data are analyzed with statistics to determine whether results could have occurred by chance (a random fluke) rather than reflecting a real effect of the variables under study.
  • Statistical significance: when results are very unlikely to have occurred by chance, typically defined as p < 0.05.
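
A minimal sketch of how a p-value might be obtained for a two-group comparison, using SciPy's independent-samples t-test on made-up scores (the p < 0.05 threshold mirrors the notes; the groups and numbers are invented for illustration):

```python
import numpy as np
from scipy import stats

# Hypothetical dependent-variable scores for two groups
experimental = np.array([4.1, 3.8, 5.0, 4.6, 4.2, 3.9, 4.4, 4.8])
control = np.array([5.2, 5.9, 4.8, 5.5, 6.1, 5.0, 5.7, 5.4])

# Independent-samples t-test: p estimates how likely a difference at least
# this large would be if chance alone were at work
t_stat, p_value = stats.ttest_ind(experimental, control)

print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Statistically significant at the conventional p < 0.05 threshold")
```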

Reporting the findings

  • Scientific studies are typically published in peer-reviewed journals.
  • Other scientists with knowledge on the topic review the study for quality and impact.
  • Feedback contributes to quality control and improvement of research.

Recognizing good science

  • Measures and results should be:
    • Reliable: consistent over time and across situations, raters, or observers.
    • Valid: measuring what the study intends to measure.
  • Relationship between reliability and validity:
    • A valid measure is always reliable, but a reliable measure is not always valid.

Ethics in research

  • Ethical principles are enforced by review boards/agencies.
  • Human subjects research: Institutional Review Boards (IRBs).
    • Check for informed consent: a participant's voluntary agreement to take part after being informed of the procedures, risks, benefits, implications, and confidentiality protections.
    • Check for risks vs. benefits to participants.
  • Animal subjects research: Institutional Animal Care and Use Committee (IACUC).
    • Check for humane treatment of animals.
  • Additional ethical considerations include confidentiality, minimizing harm, and voluntary participation.