Chapter 2: Psychological Research
Why Research?
- Research is used to validate claims with evidence rather than rely on intuition or untested beliefs.
- Historical examples show that beliefs can be wrong (e.g., earth was thought to be flat; mental illness attributed to possession).
- Scientific research is empirical: grounded in objective, tangible evidence that can be observed repeatedly by different observers.
- Psychology is a science; research provides verification and support for findings, not just exploration for exploration’s sake.
- Research is needed to move from groundless assumptions to proven ideas through study and testing.
The Research Process
- The process, step by step:
- Identify the Research Problem
- Review Existing Literature
- Formulate a Hypothesis or Research Question
- Choose a Research Design
- Select Participants and Sampling Method
- Data Collection
- Data Analysis
- Interpret Findings
- Draw Conclusions
- Before beginning, preparation and planning are emphasized; in-class activities (e.g., inductive/deductive reasoning practice) reinforce understanding of the process.
Approaches to Research
- Case Study (Clinical or Case Studies)
- Focus on one individual or a small group in an extreme or unique circumstance.
- Pros: rich, detailed insight; cons: limited generalizability to the larger population.
- Naturalistic Observation
- Observation of behavior in a natural setting.
- Behavior tends to be more genuine when people do not know they are being observed, avoiding performance biases.
- Observer bias: observations may be skewed by observer expectations.
- Remedy: establish clear observation criteria to reduce bias.
- Notable example: Jane Goodall’s work with chimpanzees (naturalistic observation).
- Surveys
- Use questions delivered in writing, electronically, or verbally to gather data from a large sample.
- Flexible administration methods (e.g., paper, online, in-person).
- Archival Research
- Examine existing records (hardcopy or electronic) to answer research questions.
- Longitudinal and Cross-Sectional Research
- Cross-Sectional: compares different segments of a population at a single time (e.g., age groups).
- Longitudinal: follows the same group over an extended period; attrition (dropout) is a common issue; researchers often recruit many participants initially to account for this.
Population, Sample, and Inferential Statistics
- Population vs Sample
- Population: the entire group of interest to the researcher.
- Sample: a subset of the population used to represent it.
- Inferential Statistics
- Using sample statistics to draw conclusions about population parameters.
- Process: data from a sample are used to draw inferences about the broader population (see the confidence-interval sketch below).
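A minimal sketch of inference from sample to population: a 95% confidence interval for a population mean, computed from a fabricated sample of reaction times. The data values and the reaction-time framing are illustrative assumptions, not from the chapter.

```python
import math
from scipy import stats

# Hypothetical sample of 25 reaction times in ms (fabricated for illustration).
sample = [212, 198, 230, 205, 220, 199, 215, 208, 225, 210,
          195, 218, 222, 203, 211, 207, 228, 201, 216, 209,
          219, 204, 213, 206, 221]

n = len(sample)
mean = sum(sample) / n
sd = math.sqrt(sum((x - mean) ** 2 for x in sample) / (n - 1))  # sample SD
sem = sd / math.sqrt(n)                                         # standard error

# 95% CI for the population mean, using the t distribution:
t_crit = stats.t.ppf(0.975, df=n - 1)
low, high = mean - t_crit * sem, mean + t_crit * sem
print(f"sample mean = {mean:.1f} ms, 95% CI = ({low:.1f}, {high:.1f})")
```

The interval is a statement about the unknown population parameter made from sample statistics alone, which is the core move of inferential statistics.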
- Sampling and Representation
- Random Sample: every member of the population has an equal chance of being selected.
- Random sampling helps ensure representativeness across variables such as sex, ethnicity, and socioeconomic status (see the sketch below).
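A minimal sketch of simple random sampling, assuming a hypothetical roster of student IDs; `random.sample` draws without replacement, so every member of the population has an equal chance of selection.

```python
import random

# Hypothetical population: 5,000 student IDs (size and labels are illustrative).
population = [f"student_{i}" for i in range(5000)]

random.seed(42)                             # fixed seed so the example repeats
sample = random.sample(population, k=100)   # draw 100 members, no replacement

print(len(sample), sample[:3])
```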
The Scientific Method and Reasoning
- Goals of Scientific Study (Describe, Identify, Classify, Explain, Propose reasons, Predict, Hypothesize, Influence/Use)
- Inductive vs Deductive Reasoning
- Inductive reasoning: start with observations and generalize to broader theories.
- Deductive reasoning: start with a theory/premise and derive specific predictions.
- The scientific process often moves: Inductive reasoning → Theory → Hypotheses (deduction from theory) → Empirical testing → Refined theory.
- Inductive vs Deductive Reasoning (Quick reminders)
- Inductive: observations lead to general conclusions; e.g., many observed fruits grow on trees → conclude all fruit grows on trees (illustrative, simplified).
- Deductive: general rule applied to a specific case; e.g., All living things require energy; a duck is a living thing; therefore a duck requires energy.
- The Process of Scientific Research: Inductive vs Deductive Reasoning
- Inductive reasoning forms theories; deductive reasoning tests hypotheses derived from theories.
- Conclusions can lead to new theories or broader generalizations.
- The Scientific Method overview (cycle)
- Observation → Theory/Idea generation → Hypothesis → Design a study → Data collection → Data analysis → Interpretation → Theory refinement.
- The method is not necessarily linear; it often loops as new evidence emerges.
Research Design and Measurement
- The nine-step process outlined above (identifying the research problem through drawing conclusions) serves as the roadmap for the design and measurement decisions in this section.
Defining Variables and Measurement
- Operational Definition
- A precise description of the operations used to manipulate the independent variable and measure the dependent variable.
- Independent Variable (IV)
- The variable that is manipulated or controlled by the experimenter; ideally the only important difference between experimental and control groups.
- Dependent Variable (DV)
- The variable measured to assess the effect of the IV.
- Examples of IV → DV pairings (see the operationalization sketch below):
- Meditation → stress reduction; sleep → memory performance; social media use → well-being; positive affirmations → self-esteem; classroom seating → participation.
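A minimal sketch of how the sleep → memory example might be operationalized. The sleep levels, the 15-word recall task, and the function name are illustrative assumptions, not a prescribed design; the point is that each variable is tied to a concrete, repeatable measurement.

```python
# Hypothetical operationalization of "sleep improves memory performance":
# IV (manipulated): hours of sleep allowed (4 vs. 8; illustrative levels).
# DV (measured): words recalled from a 15-word study list.

def memory_performance(words_recalled: int, list_length: int = 15) -> float:
    """DV: proportion of the study list correctly recalled (0.0 to 1.0)."""
    return words_recalled / list_length

# One participant per condition, purely illustrative numbers:
short_sleep_score = memory_performance(words_recalled=7)    # 4-hour condition
full_sleep_score = memory_performance(words_recalled=11)    # 8-hour condition
print(short_sleep_score, full_sleep_score)
```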
The Experimental Method and Biases
- Experimental Design Concepts
- Experimental Group: participants exposed to the manipulated variable.
- Control Group: participants not exposed to the manipulated variable.
- Random Assignment: each participant has an equal chance of being placed in either group, which helps equalize preexisting differences across groups (see the shuffle-and-split sketch below).
- Cause-and-effect: achieved when random assignment, manipulation of IV, and control of extraneous variables are in place.
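A minimal sketch of random assignment, assuming a hypothetical list of 20 recruited participants; shuffling and splitting gives each person an equal chance of landing in either group.

```python
import random

participants = [f"p{i:02d}" for i in range(20)]   # hypothetical recruits

random.seed(7)                # seed only so the example is repeatable
random.shuffle(participants)  # randomize order in place

midpoint = len(participants) // 2
experimental_group = participants[:midpoint]   # receives the manipulation
control_group = participants[midpoint:]        # does not

print(experimental_group)
print(control_group)
```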
- Bias and Placebo Effects
- Experimenter bias: researcher's expectations can skew results.
- Participant bias: participant expectations can skew results.
- Single-blind: participants unaware of group assignment; experimenter may know.
- Double-blind: neither participants nor researchers know group assignments.
- Placebo effect: participants' expectations alone can influence outcomes; giving the control group a placebo (an inert treatment) isolates the true treatment effect from expectation.
- Ethics in Experimentation
- Some questions cannot be tested with experiments due to ethical concerns (e.g., abuse exposure).
- In such cases, other methods like case studies or surveys may be used.
Interpreting Findings and Reporting
- Analyzing Findings
- Statistical analysis estimates how likely the observed differences would be if chance alone were at work.
- Significance: results are conventionally deemed statistically significant when p ≤ 0.05 (see the t-test sketch below).
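A minimal sketch of comparing two groups with SciPy's independent-samples t-test; the scores (e.g., stress ratings) are fabricated for illustration only.

```python
from scipy import stats

# Illustrative DV scores for two groups (fabricated data):
experimental = [12, 9, 11, 8, 10, 7, 9, 10]
control = [14, 13, 15, 12, 16, 13, 14, 12]

t_stat, p_value = stats.ttest_ind(experimental, control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
print("significant at alpha = 0.05" if p_value <= 0.05 else "not significant")
```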
- Reporting Findings
- Results are often published in scientific journals, usually peer-reviewed.
- Peer-reviewed journal article: reviewed by other scientists to assess quality and replicability.
- Replication checks reliability of results and can expand or challenge original findings.
- Bad Science and Retraction: Vaccine-Autism example
- A 1998 study claimed a vaccine-autism link; later, larger studies found no link, and the original study was retracted, in part due to undisclosed financial conflicts of interest.
- Public health consequences included decreased vaccination rates and measles outbreaks.
- Important example of why replication and scrutiny are essential in science.
Reliability, Validity, and Measurement Quality
- Reliability
- Consistency and reproducibility of results.
- Inter-rater reliability: the degree of agreement among observers recording the same events (see the agreement sketch below).
- A measure can be reliable without being valid.
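A minimal sketch of one simple inter-rater reliability index, percent agreement between two hypothetical observers coding the same ten events. (More rigorous indices, such as Cohen's kappa, also correct for chance agreement.)

```python
# Two hypothetical observers code the same 10 events (1 = behavior occurred).
rater_a = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0]
rater_b = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]

agreements = sum(a == b for a, b in zip(rater_a, rater_b))
percent_agreement = agreements / len(rater_a)
print(f"Percent agreement: {percent_agreement:.0%}")   # 80% in this example
```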
- Validity
- Accuracy of a result in measuring what it is intended to measure.
- A valid measure is always reliable, but a reliable measure is not necessarily valid.
Correlation, Causation, and Illusory Correlations
- Correlational Research
- Examines relationships between two or more variables.
- Correlation coefficient: r ranges from −1 to +1, indicating the strength and direction of the relationship (see the sketch after this list).
- Positive correlation: variables move in same direction.
- Negative correlation: variables move in opposite directions.
- Correlation does not imply causation: only experimental manipulation can establish causality.
- Confounding variable: an outside factor that influences both studied variables, potentially creating a spurious association.
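A minimal sketch computing Pearson's r with SciPy on fabricated paired data; the hours-studied and exam-score values are illustrative assumptions.

```python
from scipy import stats

hours_studied = [1, 2, 3, 4, 5, 6, 7, 8]          # fabricated data
exam_scores = [55, 60, 58, 70, 72, 75, 80, 85]

r, p_value = stats.pearsonr(hours_studied, exam_scores)
print(f"r = {r:.2f} (range -1 to +1), p = {p_value:.4f}")
# Positive r: the variables rise together; causation is NOT established.
```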
- Illusory Correlations and Confirmation Bias
- Illusory correlations: perceiving a relationship where none exists.
- Confirmation bias: tendency to favor evidence that supports preconceptions and ignore contradictory data.
- These biases can contribute to prejudicial attitudes and discriminatory behavior.
- Case Study Activity Example (Ice Cream vs Shark Attacks)
- Demonstrates that a strong correlation can be observed even when no causation exists; a plausible confounding variable is warm weather, which increases both ice cream sales and beach attendance (and thus shark incidents); see the simulation sketch below.
- Question prompts encourage identifying how to interpret findings and what confounds might be involved.
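A minimal simulation of the ice cream/shark example: a hypothetical confound (daily temperature) drives both variables, producing a strong correlation with no causal link between them. All numbers and coefficients are fabricated for illustration.

```python
import random
from scipy import stats

random.seed(1)
temps = [random.uniform(15, 35) for _ in range(100)]   # confound: daily temp (C)

# Both variables depend on temperature plus independent noise;
# neither variable depends on the other.
ice_cream_sales = [50 * t + random.gauss(0, 100) for t in temps]
shark_incidents = [0.2 * t + random.gauss(0, 1) for t in temps]

r, _ = stats.pearsonr(ice_cream_sales, shark_incidents)
print(f"r = {r:.2f}  (strong correlation, zero causation)")
```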
Ethics: Human and Animal Research
- Institutional Review Board (IRB)
- A committee that reviews research proposals involving human participants; exists at institutions receiving federal support for human-subjects research.
- IRB approval is generally required before proceeding.
- Informed Consent
- Process of informing participants about the study, risks, implications, voluntariness, and confidentiality; participants provide consent.
- Deception and Debriefing
- Deception is sometimes used to prevent bias, but requires debriefing afterward to inform participants about the true nature of the study.
- Example: the Tuskegee Syphilis Study (1932–1972), in which diagnosis and treatment were unethically withheld from participants; the case spurred the evolution of modern ethical guidelines.
- Animal Research Ethics
- IACUC: Institutional Animal Care and Use Committee reviews proposals for non-human animal research.
- Most psychology research with animals uses rodents or birds; animals are used when experiments would be unethical in humans.
- Core aim is to minimize pain or distress.
Additional Concepts and Terms
- The placebo effect
- The expectation of improvement can cause real changes in experience; testing with a placebo helps isolate true treatment effects.
- Operationalization and measurement design
- Clear operational definitions ensure that variables are observable and measurable in a consistent way across observers and contexts.
- Reliability vs Validity visual example
- A target diagram (Targets A/B/C) illustrates how a measurement can be reliable (shots tightly clustered) yet vary in validity (how close the cluster is to the bullseye).
- Statistical significance threshold commonly used: p ≤ 0.05
- Correlation coefficient range: r ∈ [−1, 1]
- Conceptual distinction: correlation does not imply causation; causality established via experimental design with random assignment and controlled manipulation
Connections to Foundational Principles and Real-World Relevance
- Emphasizes empirical validation to avoid intuition-based errors in everyday claims (e.g., advertising claims must be scrutinized for evidence).
- Critical thinking framework: assess expertise, potential gains, evidence justification, and peer consensus before accepting claims.
- Ethical considerations are central to research design, ensuring participants’ rights and welfare while enabling scientifically valid conclusions.
Ethical and Practical Implications Highlight
- Research aims to balance scientific rigor with ethical responsibility, especially when involving humans or animals.
- Missteps (e.g., biased results, deceptive practices without proper debriefing, or unethical experiments) can erode trust and have real-world health consequences.
- Replication and transparency are essential for building a reliable body of knowledge.
Summary of Core Concepts
- Research is essential to validate claims with empirical evidence.
- The research process is iterative and multidimensional, encompassing problem definition, literature review, design, sampling, data collection, analysis, interpretation, and reporting.
- Multiple data-collection approaches exist (case studies, naturalistic observation, surveys, archival research, longitudinal/cross-sectional).
- Inductive and deductive reasoning form the backbone of theory development and hypothesis testing.
- Experimental design (IV, DV, random assignment, control groups) is necessary to establish causality.
- Reliability and validity determine measurement quality; both are crucial for credible findings.
- Correlation does not equal causation; beware confounding variables and illusory correlations.
- Ethics govern all stages of research, with IRBs, informed consent, debriefing, and, when applicable, animal welfare considerations.
- Reporting and replication underpin scientific progress; bad science can have harmful real-world consequences.