Chapter 1
Core assumptions of science
- Science rests on two fundamental assumptions that guide all psychologists (and scientists):
- Everything is lawful: phenomena follow natural laws, and magical explanations are rejected (e.g., an apple floating for no reason would be dismissed; there is always an underlying cause).
- Science aims to explain why something happens: researchers make educated attempts to explain observations, with the understanding that explanations may be right or wrong and may require refinement over time.
- The role of bias and thinking about thinking (metacognition)
- It’s important to examine our own assumptions, beliefs, words, and thoughts to uncover potential biases that could influence scientific inquiry.
- Open-minded skepticism
- Be willing to test your expectations rather than assuming your experience defines the outcome; avoid tunnel vision around your hypothesis.
- The process of science is not about guaranteed correctness but about rigorous inquiry, testing, and revision.
The nature of hypothesis and explanation
- Hypothesis definition:
- A hypothesis is an educated guess about a relationship between variables, based on prior knowledge; it may be right or wrong.
- It is not the final verdict; it is a testable proposition that guides data collection and analysis.
- The goal of a hypothesis is to drive investigation, not to guarantee the correct outcome.
The scientific process: steps and flexibility
- The process can be described with varying numbers of steps (four, five, etc.), but the key is understanding the flow from question to conclusion to replication.
- A flexible outline often used:
- Step 1: Create a thought experiment (consider what you want to test).
- Step 2: Identify a question (or formulate a hypothesis).
- Step 3: Design a study and collect data.
- Step 4: Analyze the data to draw conclusions about the hypothesis.
- Step 5: Report findings and replicate the study to verify results.
- Whether the process is described in four steps or five is inconsequential; the important point is the progression from question to data to conclusion.
Hypothesis in practice
- A hypothesis is an educated guess about what will happen in a test scenario; outcomes may support or challenge the hypothesis.
- The emphasis is on testing and learning, not merely proving oneself correct.
Types of research designs: from experiments to surveys
- Ways to test a question:
- True experiments (randomized): manipulate an independent variable (IV) and observe a dependent variable (DV) while controlling for other factors.
- Correlational studies: assess relationships between variables without manipulating them (cannot establish causation).
- Surveys: collect self-reported data about behaviors, attitudes, or states.
- The research question guides whether you pursue an experiment, a survey, or a correlational study.
The core of experimental design
- Key components in experiments:
- Independent Variable (IV): the variable the researcher manipulates.
- Dependent Variable (DV): the outcome measured.
- Control group: does not receive the experimental manipulation.
- Experimental group: receives the manipulation.
- Confounding variables: other factors that could influence the DV and must be controlled.
- An ideal comparison requires all factors to be equal between groups except for the IV.
- Example (violence in media):
- Population: children.
- Randomly assign children to two groups: an experimental group that watches violent content and a control group that watches non-violent content (the type of content is the IV).
- Measure aggression on a playground (DV).
- Control potential confounds (room color, temperature, screen size, time of day, duration, etc.) so they remain identical across groups.
- Operational definitions are essential so that terms like “violence” are clearly defined for replication (e.g., punching, pushing, shoving, yelling); a brief sketch of this design appears below.
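To make the design concrete, here is a minimal Python sketch, assuming a hypothetical pool of 40 children and an illustrative list of behaviors: random assignment splits the pool by chance, and the operational definition turns “aggression” into a reproducible count of specific acts.

```python
import random

# Hypothetical participant pool (names are placeholders).
children = [f"child_{i}" for i in range(40)]

# Random assignment: shuffle, then split in half so each child has an
# equal chance of landing in either condition.
random.shuffle(children)
half = len(children) // 2
experimental_group = children[:half]  # watches violent content
control_group = children[half:]       # watches non-violent content

# Operational definition of "aggression" (the DV): a count of specific,
# observable playground behaviors, so any observer scores it the same way.
AGGRESSIVE_BEHAVIORS = {"punching", "pushing", "shoving", "yelling"}

def aggression_score(observed_acts):
    """Count observed acts that match the operational definition."""
    return sum(1 for act in observed_acts if act in AGGRESSIVE_BEHAVIORS)

# Illustrative observation log for one child: scores 2 (yelling, pushing).
print(aggression_score(["yelling", "sharing", "pushing", "laughing"]))
```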
Operational definitions and replicability
- Operational definitions specify exactly how concepts will be measured and observed in the study.
- Replication is crucial: studies must be repeatable by others to verify results.
- A cautionary example: the 1998 vaccine-autism claim:
- A researcher (Andrew Wakefield) published a study claiming a link between vaccines and autism.
- Hundreds of researchers attempted to replicate the finding.
- All replication attempts failed; the original data were later found to have been manipulated, and the paper was retracted.
- Result: the researcher lost his medical license; the episode underscored that science relies on verifiable evidence and repeatable results.
- Because of non-replicability, a single study is rarely enough to support a broad theory.
Theories vs. hypotheses vs. pseudoscience
- Theory as umbrella: a theory is a broad, well-supported framework that integrates many findings and aims to explain a wide range of phenomena. It is continually tested and refined.
- The Big Bang theory example: a well-supported theory with extensive supporting data; it remains a theory because science never offers absolute proof (no one observed the universe’s origin firsthand), and it is continually tested against new evidence.
- Pseudoscience: presented as science but not supported by rigorous testing or evidence; relies on testimonials or untested mechanisms rather than robust data.
- Example: balance bands claimed to increase balance/strength without credible evidence; ads and testimonials can mislead without controlled testing.
- Scientists test such claims under controlled conditions and may find no benefit beyond placebo, or no effect at all (a sketch of such a test follows).
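As an illustration of how such a claim can be put to a controlled test, the sketch below compares invented balance scores for a band group and a placebo group using a simple permutation test; the data and group labels are assumptions for illustration, and a real evaluation would also need blinding and a pre-specified design.

```python
import random

# Hypothetical balance scores (seconds balanced on one foot).
band_group = [31, 28, 35, 30, 29, 33, 27, 32]
placebo_group = [30, 29, 34, 28, 31, 33, 26, 32]

n = len(band_group)
observed_diff = sum(band_group) / n - sum(placebo_group) / n

# Permutation test: if the band does nothing, the group labels are
# arbitrary, so shuffled labels should often produce differences at
# least as large as the observed one.
pooled = band_group + placebo_group
extreme = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(pooled)
    diff = sum(pooled[:n]) / n - sum(pooled[n:]) / n
    if abs(diff) >= abs(observed_diff):
        extreme += 1

print(f"observed difference: {observed_diff:.2f}")
print(f"approximate p-value: {extreme / trials:.3f}")  # large p: no credible effect
```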
Research methods: naturalistic observation and case studies
- Naturalistic observation (people-watching): observing behavior in natural settings without interference.
- Case studies: in-depth investigations of a single person or small group, often when large-scale experiments are impractical or unethical.
- When to use each:
- Naturalistic observation or case studies are useful for exploring phenomena and generating hypotheses.
- They are not substitutes for controlled experiments when causal conclusions are required.
Correlation vs. causation
- Correlational studies examine whether two variables move together, but do not prove that one causes the other.
- Important facts about correlation:
- Direction: the sign of the correlation indicates direction of the relationship.
- Positive correlation: both variables move in the same direction; example: as attendance increases, engagement might increase.
- Negative correlation: variables move in opposite directions; example: more study time might relate to lower anxiety.
- Strength: the magnitude of the correlation coefficient indicates strength; stronger relationships have larger |r| values.
- Example: a correlation of r = 0.6 indicates a moderately strong positive relationship (see the computational sketch at the end of this section).
- The caveat: correlation does not imply causation; there may be a third variable driving both.
- Third-variable problem (a.k.a. confounding influence): a separate variable could account for the observed relationship between A and B (e.g., heat as a third variable linking ice cream consumption and crime rates).
- Practical note: headlines often report correlational findings as if causal; always check whether a study design supports causation.
- When interpreting correlations, consider that causation requires experimental manipulation and control of confounds.
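A small computational sketch of these points, using invented attendance and engagement numbers: Pearson's r carries direction in its sign and strength in its magnitude, and nothing in the computation says which variable, if either, causes the other.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation: the sign gives direction, |r| gives strength."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Hypothetical data: days attended vs. engagement rating.
attendance = [2, 5, 6, 8, 9, 10]
engagement = [3, 4, 6, 6, 8, 9]

r = pearson_r(attendance, engagement)
print(f"r = {r:.2f}")  # positive and strong here, yet still not proof of causation
```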
Sampling methods: getting the right participants
- Random sampling (random selection): every member of the population has an equal chance of being selected; aims to generalize findings to the population.
- Representative sampling: include subgroups in proportion to their presence in the population, i.e., a stratified approach (e.g., selecting freshmen, sophomores, juniors, and seniors in numbers that mirror the class composition).
- A simple random device, such as assigning participants by odd versus even numbers, can serve for either selection or assignment (see the sketch after this list).
- When causation is the goal, experiments with random assignment to IV conditions strengthen causal inferences.
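The sketch below, assuming a hypothetical student population with known class-year counts, contrasts the two approaches: a simple random sample gives every student an equal chance of selection, while a stratified draw keeps each class year's share of the sample proportional to its share of the population.

```python
import random
from collections import defaultdict

# Hypothetical population: (student_id, class_year) pairs.
population = ([(i, "freshman") for i in range(200)]
              + [(200 + i, "sophomore") for i in range(150)]
              + [(350 + i, "junior") for i in range(100)]
              + [(450 + i, "senior") for i in range(50)])

SAMPLE_SIZE = 50

# Simple random sample: every student has an equal chance of selection.
simple_sample = random.sample(population, SAMPLE_SIZE)

# Stratified (representative) sample: draw from each class year in
# proportion to its share of the population (20/15/10/5 here).
by_year = defaultdict(list)
for student in population:
    by_year[student[1]].append(student)

stratified_sample = []
for year, members in by_year.items():
    k = round(SAMPLE_SIZE * len(members) / len(population))
    stratified_sample.extend(random.sample(members, k))

print(len(simple_sample), len(stratified_sample))  # 50 50
```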
Ethics in research with human participants
- Informed consent: participants must be informed about what they will do, their rights, and potential risks; participation is voluntary.
- In education research, students can choose to participate in experiments or complete alternative tasks (e.g., article quizzes) to fulfill requirements.
- Deception: allowed only when justified, not harmful, and followed by debriefing; deception must not cause major distress or risk.
- Confidentiality: personal data must be protected; information should be kept private and secured.
- Minimal risk and protection: researchers must minimize potential harm; if deception is used, participants should be debriefed afterwards to explain the study’s purpose and their role.
- Historical ethics examples:
- The smoke-filled-room study (Latané and Darley, 1968): smoke was piped into a room during a staged session, and researchers measured how long participants waited to report it when passive others were present, to see whether conformity or diffusion of responsibility affected behavior.
- Another example is the 1979 Woolworths department store fire in Manchester, England, where survivors reported that social norms (e.g., waiting to pay a bill) affected their actions; the case highlighted group dynamics and safety considerations.
- Real-world note: deception and ethics are designed to protect participants while allowing researchers to study phenomena that cannot be observed without some manipulation or controlled setting.
Practice implications and real-world relevance
- Replication and scientific integrity affect human health and public policy (e.g., repeated failures to replicate the vaccine-autism claim helped debunk misinformation and protect public health).
- Operational definitions matter for reproducibility; clearly defining what counts as a variable (e.g., what constitutes “violent behavior”) ensures that different researchers measuring the same construct can compare results.
- Ethical conduct in research protects participants and preserves the credibility of science, which in turn informs better decisions in medicine, education, and public policy.
- Understanding correlation vs. causation helps critically evaluate media headlines and scientific claims encountered in daily life.
- A strong scientific mindset combines curiosity, skepticism, rigorous methods, transparent reporting, and a commitment to replication and refinement of knowledge.