Unit 0: Research Methods and Data Interpretation - Comprehensive Notes

Psychology Science Foundations

  • Psychology is presented as a science that uses the scientific method to study human behavior and mental processes.
  • Key tension: intuition/common sense vs empirical science; both aid understanding but can be fallible.
  • Many ideas come from everyday experience, but rigorous methods help reduce bias and error.
  • Ethical and practical implications are central to methodology and interpretation of findings.

Intuition, Common Sense, and Limits of Intuition

  • Intuition: knowledge or belief gained without conscious reasoning or inference; a “hunch”.
  • Common Sense: sound practical judgment not grounded in specialized knowledge.
  • Both intuition and common sense can aid but are error-prone; psychology emphasizes systematic testing to overcome biases.
  • Examples and concerns: personal interviews can be biased by gut feelings; reliance on intuition alone can mislead conclusions.

The Need for Psychological Science

  • Core claim: intuition and common sense alone are insufficient for understanding human nature.
  • Science uses methods to overcome bias and generate reliable, generalizable knowledge.
  • The scientific attitude is essential to translate observations into reliable knowledge.

The Scientific Attitude

  • Curiosity: Does it work? Can predictions be confirmed?
  • Skepticism: What do you mean? How do you know? Distinguish reality from fantasy; not cynical or gullible.
  • Humility: Be willing to be surprised; follow new ideas even when they contradict current beliefs.
  • Examples of scientific attitude in practice: evaluating whether facial expressions or body postures affect feelings; relationships between stress and health; debates about parental influence on sexual orientation; recognition that some claims (e.g., extrasensory mind-reading) lack evidence.

The Scientific Method

  • Theories: Integrated explanations that organize and predict behavior or events (e.g., low self-esteem feeds depression).
  • Hypotheses: Testable predictions prompted by a theory (e.g., people with low self-esteem are more likely to feel depressed).
  • Research and Observations: Administer tests, collect data to see if predictions hold.
  • Example relationship: low self-esteem predicts higher depression; test via data on self-esteem and depression scores.
  • The goal: generate or refine theories based on consistent, repeatable results from well-designed studies.

Operational Definitions and Replication

  • Operational Definition: precise, explicit definition of how a variable is measured or manipulated (e.g., how depression or self-esteem is measured).
  • Replication: essential to confirm findings; others should be able to replicate methods and obtain similar results.
  • Clear operational definitions enable replication and verification.

The Scientific Method in Practice

  • The Theory-Hypothesis-Experiment cycle:
    • (1) Theories generate hypotheses.
    • (2) Hypotheses lead to predictions tested by experiments.
    • (3) Research/Observations test the hypotheses.
  • Example flow: Theory → Hypothesis (low self-esteem → more depression) → Test (administer self-esteem and depression measures) → Observe results → Revise theory if necessary.

Experimental vs Non-Experimental Methods

  • Experimental methods: manipulate one or more factors to observe effects on behavior/mental processes.
    • Independent Variable (IV): what is manipulated (e.g., vitamin D vs placebo).
    • Dependent Variable (DV): what is measured (e.g., depression levels).
    • Experimental Group: receives treatment.
    • Control Group: does not receive the treatment; typically given a placebo instead, for comparison.
    • Random Assignment: groups formed by chance to minimize preexisting differences.
    • Double-Blind: both participants and researchers are unaware of group assignments to reduce bias.
    • Single-Blind: only one party, typically the participants, is unaware of the treatment assignment.
  • Non-Experimental Methods: do not involve random assignment or manipulation of variables.
    • Case Studies: in-depth study of one individual; rich detail but limited generalizability and potential bias.
    • Surveys and Questionnaires: self-reports; efficient but vulnerable to biases (e.g., wording, social desirability).
    • Meta-Analysis: synthesis of data from multiple independent studies to identify overall trends.
    • Observations: watching subjects; may introduce observer bias.
  • Example: Vitamin D and depression study setup illustrates experimental design with IV (Vitamin D vs placebo) and DV (depression levels).
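
As a minimal sketch of random assignment (the participant IDs and group sizes are hypothetical, for illustration only):

```python
import random

def random_assignment(participants, seed=None):
    """Split participants into experimental and control groups by chance,
    so preexisting differences are spread evenly across the two groups."""
    rng = random.Random(seed)
    shuffled = participants[:]      # copy so the original list is untouched
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

# Hypothetical participant IDs
participants = [f"P{i:02d}" for i in range(1, 21)]
experimental, control = random_assignment(participants, seed=42)
print("Experimental (Vitamin D):", experimental)
print("Control (placebo):      ", control)
```

In a double-blind design, neither the participants nor the researchers scoring the DV would see this assignment list until after data collection.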

Measurement and Data Types

  • Qualitative Data: in-depth narrative data not reduced to numbers (e.g., interviews, open-ended responses).
  • Quantitative Data: numerical data (e.g., scale scores, test results).
  • Central Tendency (measures that summarize data with a single value):
    • Mode: most frequently occurring score.
    • Mean: average score.
    • Median: middle score in a distribution.
  • Variation: how much scores differ from each other.
    • Range: difference between highest and lowest scores.
    • Standard Deviation (SD): a measure of how far scores typically fall from the mean (the square root of the average squared deviation).
    • Formula: s = \sqrt{\frac{1}{n-1} \sum_{i=1}^{n} (x_i - \overline{x})^2}
  • Normal Distribution (bell-shaped curve): most scores cluster around the mean; about 68% fall within one standard deviation of it (\mu \pm \sigma).
  • Distributional shape matters for interpretation of statistics.
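
The measures above can be computed directly with Python’s statistics module (the quiz scores are hypothetical, chosen for illustration):

```python
from statistics import mean, median, mode, stdev

# Hypothetical quiz scores
scores = [4, 7, 7, 8, 9, 10, 10, 10, 12, 13]

print("Mode:  ", mode(scores))                 # most frequent score -> 10
print("Mean:  ", mean(scores))                 # average -> 9.0
print("Median:", median(scores))               # middle value -> 9.5
print("Range: ", max(scores) - min(scores))    # 13 - 4 -> 9
print("SD:    ", round(stdev(scores), 2))      # sample SD (n-1 denominator) -> 2.62
```

Note that stdev uses the sample formula with the n−1 denominator, matching the formula above.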

Statistical Significance and Inference

  • Statistical Significance: how unlikely the observed difference would be if chance alone were at work.
  • Threshold: p < 0.05 is the conventional cutoff; it means that if there were no real effect, a difference this large would arise by chance less than 5% of the time. It does not mean there is a 95% probability that the manipulated variable caused the difference.
  • Important distinction: significance does not prove causation.
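
One way to make the “how likely by chance?” idea concrete is a permutation test: repeatedly shuffle the group labels and see how often a difference as large as the observed one appears. This is a sketch with hypothetical depression scores, not a specific study’s data:

```python
import random

def permutation_p_value(group_a, group_b, n_shuffles=10_000, seed=1):
    """Estimate how often a mean difference at least as large as the
    observed one would occur if group labels were meaningless (the null)."""
    rng = random.Random(seed)
    observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
    pooled = group_a + group_b
    n_a = len(group_a)
    extreme = 0
    for _ in range(n_shuffles):
        rng.shuffle(pooled)
        a, b = pooled[:n_a], pooled[n_a:]
        if abs(sum(a) / len(a) - sum(b) / len(b)) >= observed:
            extreme += 1
    return extreme / n_shuffles

# Hypothetical depression scores (lower = less depressed)
treatment = [10, 12, 9, 11, 8, 10, 9, 7]
placebo = [14, 13, 15, 12, 16, 13, 14, 15]
p = permutation_p_value(treatment, placebo)
print(f"p = {p:.4f}; significant at 0.05: {p < 0.05}")
```

Even a tiny p only tells us chance is an unlikely explanation; causal claims still depend on the design (random assignment, control of confounds).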

Correlation and Causation

  • Correlation (r): measures the degree to which two variables relate.
    • Range: r runs from -1 (perfect negative relationship) to +1 (perfect positive relationship).
    • Sign indicates direction; magnitude indicates strength, but not causation.
    • Interpretation examples:
      • Positive correlation: as one variable increases, the other tends to increase.
      • Negative correlation: as one variable increases, the other tends to decrease.
      • Zero correlation: no linear relationship.
  • Scatterplots illustrate the relationship between two variables.
  • Important caveats:
    • Correlation does not imply causation (third-variable problem, directionality problem).
    • Directionality: even if A and B are correlated, A might cause B, B might cause A, or a third variable C could influence both.
    • Illusory correlations: perceiving a relationship where none exists (common in misinterpreted data).
  • Examples discussed include patterns that look causal but are not, such as improvement after an extreme performance, which often reflects regression toward the mean rather than any intervention.
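
The correlation coefficient can be computed from its definition; the self-esteem and depression scores below are hypothetical, matching the running example from the theory section:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient: direction and strength of the
    linear relationship between two variables, always between -1 and +1."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical scores: self-esteem vs depression
self_esteem = [2, 3, 4, 5, 6, 7, 8]
depression = [9, 8, 8, 6, 5, 4, 2]
r = pearson_r(self_esteem, depression)
print(f"r = {r:.2f}")  # negative: higher self-esteem goes with lower depression
```

Even a strong r like this cannot say whether low self-esteem causes depression, depression lowers self-esteem, or a third variable drives both.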

Regression Toward the Mean and Illusory Correlations

  • Regression toward the mean: extreme scores tend to move toward the average on subsequent measurements.
  • Example: after a coach yells at a team for a poor performance, the team may perform better next game due to regression toward the mean, not the scolding.
  • Illusory correlations can arise when random coincidences are mistaken for meaningful relationships.
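
The coach example can be simulated under a simple hypothetical model (score = stable skill + random luck); no intervention occurs between games, yet the worst performers improve:

```python
import random

rng = random.Random(7)

# Hypothetical model: each game score = stable skill + game-to-game luck.
skills = [rng.gauss(100, 10) for _ in range(1000)]
game1 = [s + rng.gauss(0, 15) for s in skills]
game2 = [s + rng.gauss(0, 15) for s in skills]

# Select the 50 players with the worst game-1 scores.
worst = sorted(range(1000), key=lambda i: game1[i])[:50]
avg1 = sum(game1[i] for i in worst) / 50
avg2 = sum(game2[i] for i in worst) / 50

# With nothing done to them at all, the extreme group moves back toward the mean.
print(f"Worst 50: game-1 average {avg1:.1f} -> game-2 average {avg2:.1f}")
```

A coach who scolded the team between games would be tempted to credit the scolding for an improvement that the model produces by chance alone.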

The Normal Curve and Variation in Data

  • Normal distribution: symmetrical bell-shaped curve; most data near the mean.
  • About 68% of scores fall within one standard deviation of the mean.
  • Standard deviation reflects dispersion around the mean.
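
The 68% rule can be checked by sampling from a normal distribution; the IQ-style mean and SD here are a conventional illustration, not data from the notes:

```python
import random

rng = random.Random(0)
mu, sigma = 100, 15  # hypothetical IQ-style scale

# Draw many scores from a normal distribution and count those within one SD.
scores = [rng.gauss(mu, sigma) for _ in range(100_000)]
within_1sd = sum(mu - sigma <= x <= mu + sigma for x in scores) / len(scores)
print(f"Share within one SD of the mean: {within_1sd:.3f}")  # close to 0.68
```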

Bias, Validity, and Ethics in Research

  • Bias: any factor that unfairly influences results; researchers should minimize bias.
  • Social Desirability Bias: participants respond in a way they think will be viewed favorably.
  • The Hawthorne Effect: participants alter behavior because they know they are being observed.
  • Experimenter Bias: researchers’ expectations influence outcomes (conscious or unconscious).
  • Sampling Bias: some members of the population are less likely to be included in the sample; reduces representativeness.
  • Random Sampling: every member of the population has an equal chance of being included; reduces bias.
  • Ethics in Research:
    • Informed Consent: participants must be informed about the research, risks, and right to withdraw; assent may be required for minors.
    • Right to be Protected from Harm and Discomfort: research should minimize risk; harm allowed only under strict conditions.
    • Confidentiality: participant data should not be disclosed.
    • Debriefing: participants should receive a full explanation of the study after participation, especially if deception was involved.
    • Institutional Review Board (IRB): reviews and monitors biomedical and behavioral research with humans; approval or modification may be required.
    • FDA regulations and IRB oversight: IRBs have authority to approve, modify, or disapprove research.
  • Animal Research Ethics:
    • Reasons for using animals: easier to control conditions, genetics, shorter lifespans.
    • Ethical considerations: minimize pain and distress; ensure humane treatment; ensure scientific justification.
    • APA guidelines emphasize comfort, health, humane treatment, and minimizing infection and pain.
    • IACUCs (Institutional Animal Care and Use Committees) oversee animal research; NIH/OLAW policies shape these practices; each institution with federally funded research must have an IACUC.

Notable Ethical Violations in History (Infamous Studies)

  • Informed consent violations: subjects misled about participation (e.g., tutoring method studies with course-entrance deception).
  • Deception without debriefing, or with ongoing harm: e.g., driving-simulator scenarios with deceptive stimuli; participants learned of the deception only later and experienced distress during the experiment.
  • Lack of confidentiality or observation without consent: observing individuals in contexts (e.g., eye chart distance test) without informing them.
  • These cases illustrate why ethical guidelines (informed consent, protection from harm, confidentiality, debriefing) exist and how IRBs monitor research practices.

Putting It All Together: What Makes Psychology a Science

  • Psychological science relies on the scientific method, testable theories, hypotheses, and empirical data.
  • It emphasizes careful measurement, replication, and cautious interpretation to avoid overgeneralization from limited data.
  • It requires ethical considerations that protect participants and ensure credible research.
  • The field integrates qualitative and quantitative data and uses various methods to triangulate findings (case studies, surveys, experiments, meta-analyses, observations).

Quick Reference: Key Terms and Concepts

  • Theory: A comprehensive explanation that organizes and predicts behavior.
  • Hypothesis: A testable prediction derived from a theory.
  • IV (Independent Variable): The manipulated variable.
  • DV (Dependent Variable): The measured outcome.
  • Operational Definition: Specific, replicable definition of how a variable is measured or manipulated.
  • Random Assignment: Randomly placing participants into experimental or control groups to reduce preexisting differences.
  • Double-Blind: Neither participants nor researchers know group assignments.
  • Single-Blind: Only one party, typically the participants, is unaware of group assignments.
  • Case Study: In-depth study of a single individual or small group.
  • Survey: Self-reports; can use Likert scales to measure attitudes/behaviors.
  • Meta-Analysis: Synthesis of multiple studies to identify overall effects.
  • Correlation Coefficient (r): Measures strength and direction of a relationship between two variables; ranges from -1 to +1.
  • Causation vs Correlation: Correlation implies a relationship but not causation; causation requires controlled manipulation and ruling out confounds.
  • Significance (p-value): Probability of obtaining results at least this extreme if chance alone were operating; p < 0.05 is the commonly used threshold for significance.
  • Normal Distribution: Bell-shaped curve; most data cluster near the mean; about 68% within one SD.
  • Standard Deviation (SD): A measure of dispersion around the mean; formula shown above.
  • Regression Toward the Mean: Extreme scores tend to move closer to the average on subsequent measures.
  • Bias and Descriptive Pitfalls: Social desirability, Hawthorne effect, experimenter bias, sampling bias, illusory correlations.
  • Ethics Codes: Informed consent, protection from harm, confidentiality, debriefing; IRB oversight; animal care under IACUC and APA guidelines.

End of Unit Reflections

  • Psychology integrates theory, method, and ethics to build robust, applicable knowledge.
  • Students should be able to distinguish intuition from evidence, design clean experiments, interpret results with awareness of biases, and apply ethical standards in research.