Psychology 9/10: Notes on Correlation, Causation, and Experimental Design
Variables and measurement
- A variable is anything that can be measured in psychology. Examples discussed: eye color, love, and having thoughts about stress are all variables.
- In research, two big terms are correlation and causation:
- Correlation: a relationship or connection between two variables. As one variable changes, the other tends to change in a related way (positive or negative).
- Causation: a cause-and-effect relationship where one variable directly influences the other.
- The phrase you’ll hear a lot: correlation does not equal causation. Just because two variables are correlated does not mean one causes the other.
- Example from the transcript: ice cream sales ↑ in summer and crime rates ↑ in summer. The correlation exists, but ice cream sales do not cause crime; both rise due to hotter weather and more outdoor activity.
- Why study correlations? They help us predict outcomes and plan actions (e.g., predicting budget needs when time with a partner increases, or adjusting patrols when crime tends to rise in the summer).
- Quick practical tip from lecture: correlation coefficients quantify strength and direction of relationships; the closer the coefficient is to +1 or -1, the stronger the relationship; closer to 0 means weaker relationship.
Correlation coefficients and interpretation
- The mathematical symbol for a correlation is r; in psychology reports you’ll see statements like r = -0.89 or r = +0.75.
- The range of r is from -1 to +1.
- Strength and direction:
- Positive correlation: as one variable increases, the other also increases; or as one decreases, the other also decreases.
- Negative correlation: as one variable increases, the other decreases.
- If one variable increases while the other decreases (e.g., time with a partner increases, money decreases), that’s a negative correlation.
- Examples discussed:
- Positive: more positive feedback ↗ higher confidence.
- Tricky case: less time spent at the gym ↘ less muscle mass. Both variables decrease together (they move in the same direction), so this is actually a positive correlation; a true negative correlation pairs an increase in one variable with a decrease in the other, as in the time-with-partner vs. money example above.
- Example values mentioned:
- Very weak/near zero: r = -0.001 (negative but extremely small, effectively no relationship).
- Moderate to strong positive: r = +0.75 (strong positive relationship).
- Strong negative: r ≈ -0.89 (a strong relationship; the negative sign indicates the two variables move in opposite directions).
- Quick math intuition:
- Positive × Positive = Positive (both variables increase together).
- Negative × Negative = Positive (both variables decrease together; two negatives still mean they move in the same direction).
- Positive × Negative = Negative (one goes up while the other goes down).
- Reminder: r is the standard symbol for the correlation coefficient in psychology reports (a short computation sketch follows below).
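To make these r values and sign rules concrete, here is a minimal sketch (not from the lecture) that computes Pearson's r with numpy; the feedback and confidence numbers are invented purely for illustration.

```python
import numpy as np

# Hypothetical data (invented for illustration): amount of positive feedback
# received and self-reported confidence for eight people.
feedback = np.array([1, 2, 2, 3, 4, 5, 6, 7])
confidence = np.array([3, 4, 4, 5, 6, 6, 8, 9])

# np.corrcoef returns a 2x2 correlation matrix; the off-diagonal entry is r.
r = np.corrcoef(feedback, confidence)[0, 1]
print(f"r = {r:+.2f}")  # close to +1: a strong positive correlation
```

Reversing the direction of one variable (for example, correlating feedback with -confidence) flips the sign of r but leaves its strength |r| unchanged, which matches the sign rules above.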
Confounding variables
- A confounding variable is a third variable that can influence the observed relationship between the two studied variables, potentially biasing the strength or even the direction.
- Examples from discussion:
- Ice cream sales and crime rates: confounded by summertime/hot weather.
- Time with a partner and money: other factors like work hours and coffee purchases can influence money, not just relationship time.
- Why it matters: confounds can make a correlational relationship appear stronger or weaker than it truly is.
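A rough simulation of the ice-cream-and-crime confound can make this point concrete. The sketch below is an illustration with invented numbers, not data from the lecture: temperature drives both variables, so they end up correlated even though neither causes the other.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented data: daily temperature (the confound) drives BOTH ice cream sales
# and crime counts, with random noise added to each.
temperature = rng.uniform(5, 35, size=200)                  # degrees Celsius
ice_cream = 10 + 2.0 * temperature + rng.normal(0, 5, 200)  # sales per day
crime = 3 + 0.5 * temperature + rng.normal(0, 3, 200)       # incidents per day

# The two outcomes are strongly correlated without any causal link between them.
print("r(ice cream, crime) =", round(np.corrcoef(ice_cream, crime)[0, 1], 2))
```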
Confirmation bias
- Defined: the tendency to pay more attention to information that supports your beliefs and to disregard information that contradicts them.
- Why it matters in psychology: it can skew interpretation of data and lead to self-fulfilling prophecies or biased conclusions.
- Examples from the lecture:
- Personal stories about truck drivers in Texas (initial belief that truck drivers are rude) and how expectancies can color interpretation of behavior.
- A roommate’s belief about a dating partner influencing how they interpreted that person’s actions.
- The monsters-under-the-bed fear example and placebo-style reassurance ("monster spray") as a demonstration of belief shaping experience.
- Related concept mentioned: observer bias was touched on, but the main focus here is confirmation bias as a general cognitive bias.
- Video prompt shown to illustrate confirmation bias and its consequences (including potential self-fulfilling prophecies and ethical concerns).
Experiments: designing to establish causation
- The central claim: experiments are the only way to prove causation (a cause-and-effect relationship).
- Steps to set up an experiment:
1) Develop a specific, testable hypothesis (often based on observation; relates to inductive/deductive reasoning).
2) Assign operational definitions to the variables (precise, measurable definitions so others can replicate).
3) Form two groups: experimental and control. Participants are the people in the study. The experimental group is exposed to the manipulated variable; the control group is not.
4) Consider experimenter bias: the researcher’s expectations, experiences, or beliefs might influence interpretation of observations.
5) Distinguish between confirmation bias (a broad, general bias) and experimenter bias (specific to conducting experiments).
6) Understand randomization: randomly assign participants to the control or experimental group to reduce systematic differences between the groups (a short randomization sketch follows the example scenarios below).
7) Ethical considerations: informed consent, potential deception, debriefing, and animal research ethics.
- Key terms:
- Hypothesis: testable statement about a relationship between variables.
- Operational definitions: precise definitions of variables to be measured.
- Participants: people in the study.
- Experimental group: receives the manipulated variable.
- Control group: does not receive the manipulation.
- Independent variable (IV): the variable the researcher manipulates or changes.
- Dependent variable (DV): the variable measured to observe the effect.
- Confounding variables: external factors that may influence the DV or the IV and must be controlled or acknowledged.
- Example scenario walkthroughs from the lecture:
- Ice cream consumption and life satisfaction: IV = ice cream consumption frequency; DV = life satisfaction. Control group = no ice cream; Experimental group = defined ice cream intake (e.g., one cone per day). The discussion emphasized the need for precise operational definitions (e.g., type of ice cream, brand, portion size, etc.).
- Sleep and patience in parents: IV = amount of sleep; DV = patience level (operationalized via Likert scale, body language cues, heart rate). Randomly assign participants to normal sleep vs. four hours of sleep; discuss potential measurement challenges.
- Teenagers and screen time vs anxiety: IV = screen time (>4 hours daily vs 0 hours); DV = anxiety (operationalized via Likert scale or a standard anxiety measure).
- Toy size and enjoyment in children: IV = toy size (e.g., 12-inch vs. larger; control often the smaller size in two-size scenarios); DV = enjoyment (operationalized via duration of interaction, play behavior, or explicit ratings).
- Monday vs Friday memory: IV = day of week (manipulated by testing on Monday vs Friday); DV = memory/recall (operationalized via quizzes).
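As a rough illustration of step 6 (randomization) applied to the sleep-and-patience scenario above, the sketch below randomly assigns a small pool of hypothetical participants to the control (normal sleep) or experimental (four hours of sleep) condition; the participant IDs and group sizes are assumptions for illustration only.

```python
import random

# Hypothetical participant pool (IDs invented for illustration).
participants = ["P01", "P02", "P03", "P04", "P05", "P06", "P07", "P08"]

random.seed(42)               # fixed seed so the example output is reproducible
random.shuffle(participants)  # shuffling removes any systematic ordering

# Split the shuffled pool in half: first half control, second half experimental.
half = len(participants) // 2
groups = {
    "control (normal sleep)": participants[:half],
    "experimental (4 hours of sleep)": participants[half:],
}

for condition, members in groups.items():
    print(condition, "->", members)
```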
- Types of designs:
- Single-blind study: participants do not know if they’re in the control or experimental group.
- Double-blind study: neither participants nor the researchers interacting with them know which group participants are in.
- Placebo effect: participants’ expectations can produce real changes in experience, even when the treatment is inert.
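One practical way to support a single- or double-blind design is to hide what the group labels mean from whoever runs the sessions. The sketch below is an assumption-based illustration (not a procedure described in the lecture): the real condition key is written to a separate file, and the experimenter only ever works with neutral codes.

```python
import json
import random

random.seed(7)

participants = ["P01", "P02", "P03", "P04", "P05", "P06"]

# A third party maps neutral codes to the real conditions and stores the key
# in a file that the experimenter running the sessions never opens.
key = dict(zip(["A", "B"], random.sample(["treatment", "placebo"], k=2)))
with open("condition_key.json", "w") as f:
    json.dump(key, f)

# Balanced, randomized assignment using only the neutral codes.
random.shuffle(participants)
assignments = {p: ("A" if i % 2 == 0 else "B") for i, p in enumerate(participants)}
print(assignments)
```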
- Placebo effect in depth:
- Classic example: a fake painkiller made participants report less pain.
- Historical context: placebos were used to test treatments; ethical concerns have reduced their use in some contexts.
- Modern view: placebos can confound results and raise ethical questions about deception; sometimes they serve as a control to compare new vs. old/alternative treatments.
- Debriefing and deception in experiments:
- Informed consent is required; deception is allowed if it does not harm participants and if participants are debriefed afterwards.
- Debriefing explains the true nature of the study and addresses any short- or long-term effects.
- Reliability and validity:
- Reliability: consistency of results across repeated trials or different samples.
- Validity: accuracy of the measurements—whether the study actually measures what it intends to measure.
- Example: using a ruler to measure eye color does not measure what it claims to measure, so it is a validity problem (no matter how consistent the ruler readings are).
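As a rough sketch of how reliability might be checked in practice (test-retest reliability is a standard approach, though it was not demonstrated in the lecture), scores from the same measure given twice can be correlated; the anxiety-scale scores below are invented.

```python
import numpy as np

# Hypothetical anxiety-scale scores (invented) for the same six people,
# measured two weeks apart.
time1 = np.array([12, 18, 25, 9, 30, 22])
time2 = np.array([13, 17, 27, 10, 28, 21])

# A high test-retest correlation suggests the measure is reliable (consistent).
# Note that a reliable measure can still be invalid (measuring the wrong thing).
r = np.corrcoef(time1, time2)[0, 1]
print(f"test-retest r = {r:.2f}")
```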
- Ethics in research:
- Informed consent: participants must be told what the study is about and agree to participate in writing.
- Deception: allowed in some cases if it is non-harmful, with debriefing afterward.
- Debriefing: explains the study’s purpose and potential side effects or risks after participation.
- Animal research: tightly regulated, with review boards to protect animals; more oversight than some human studies because animals cannot consent.
- Quasi-experiments:
- When a true experiment is not possible (e.g., you cannot randomly assign characteristics such as biological sex or race), researchers may conduct quasi-experiments that compare pre-existing groups.
Practice and group activity insights from the lecture
- The instructor had six hypotheses and students were grouped to identify:
- The two variables under study (IV and DV).
- Operational definitions for those variables.
- Which variable is independent and which is dependent.
- Who is in the control vs the experimental group.
- Example from the session: Parents’ sleep and patience; Teenagers’ screen time and anxiety; Big toys vs small toys; Day of the week and memory.
- Emphasis on being specific in operational definitions: the more precise, the easier replication by other researchers.
- Emphasis on random assignment as a way to reduce systematic differences between groups (random sampling, by contrast, is about drawing a representative sample), and the idea that ethical considerations may limit certain manipulations (leading to quasi-experiments).
Real-world relevance and takeaways
- Correlation helps with prediction and planning but cannot establish causation by itself.
- Experimental design is required to claim causation, with careful attention to operational definitions, randomization, and control conditions.
- Biases (confirmation bias, experimenter bias) can distort interpretation; strategies like single/double-blind designs and placebo controls help mitigate bias.
- Ethics are central: informed consent, deception safeguards, debriefing, and consideration of animal welfare.
- Reliability and validity are foundational for trustworthy research; researchers must choose measurement tools and procedures that maximize both.
- The role of confounding variables reminds us that observed relationships may be due to multiple interacting factors; careful design and analysis are needed to draw valid conclusions.
Quick recap: key terms to remember
- Variable: anything that can be measured.
- Correlation: relationship between two variables (association, not causation).
- Causation: one variable causes the other.
- Correlation coefficient: symbolized as r, range -1 to +1; strength increases as |r| approaches 1.
- Independent variable (IV): the manipulated variable.
- Dependent variable (DV): the measured outcome.
- Operational definition: precise, replicable definition of a variable.
- Confounding variable: third variable that distorts the observed relationship.
- Placebo effect: improvement due to expectations; not due to the treatment itself.
- Single-blind / Double-blind: methods to reduce bias by concealing group assignment.
- Reliability: consistency of results.
- Validity: accuracy of what is being measured.
- Informed consent / Debriefing / Deception: ethical considerations in experiments.
- Quasi-experiment: a study that resembles an experiment but lacks random assignment or manipulation of the IV.
- Random sampling / Random assignment: strategies to reduce bias and achieve representative groups.
- Ethics in animal research: rigorous regulatory oversight.