A comprehensive set of vocabulary flashcards based on lecture notes on evaluating the strength of findings in psychological research.
Descriptive Terms
Subjective terms such as 'large', 'important', and 'dramatic', which psychologists use with no strict rules.
Scientific Terms
Objective terms like 'significant', which have a strict statistical meaning.
Null-Hypothesis Significance Testing (NHST)
A method that computes the probability of obtaining the observed results if no real effect exists.
p-value
The probability of the observed data (or more extreme) occurring if the null hypothesis is true.
p = .05 → a 5% chance of obtaining a result this extreme through random variation alone, assuming no real effect
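The idea behind a p-value can be made concrete with a small simulation, which is a sketch of my own rather than part of the lecture notes. Suppose we observe 60 heads in 100 coin flips: the p-value asks how often a fair coin (the null hypothesis) produces a result at least that far from 50.

```python
# Illustrative sketch (not from the notes): estimating a two-sided p-value
# by simulating the null hypothesis of a fair coin.
import random

def simulated_p_value(observed_heads, n_flips=100, n_sims=10_000, seed=42):
    rng = random.Random(seed)  # fixed seed so the sketch is reproducible
    extreme = 0
    for _ in range(n_sims):
        heads = sum(rng.random() < 0.5 for _ in range(n_flips))
        # Count simulated results at least as far from the expected 50
        # heads as the observed result (two-sided test).
        if abs(heads - n_flips / 2) >= abs(observed_heads - n_flips / 2):
            extreme += 1
    return extreme / n_sims

# An observation of exactly 50 heads is never "extreme", so its p-value
# is 1.0; more lopsided results yield smaller p-values.
```

Note what the returned number is: the probability of data this extreme *given* the null hypothesis, exactly the distinction the misinterpretation card below warns about.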
Type I Error
A false positive, where an effect is detected when none exists.
Common Misinterpretation
WRONG: “p = .05 means 95% chance hypothesis is true”
✅ CORRECT:
It’s the probability of the data given the null hypothesis, NOT the probability the hypothesis is true
🔹 3. Problems with NHST
Major Issues:
Confusing logic
Even experts misunderstand it
Misinterpretation is common
Only addresses Type I error
Ignores Type II error
❗ Types of Errors
Type I Error:
False positive (detect effect when none exists)
Type II Error:
False negative (miss real effect)
📌 Bottom Line:
NHST is:
Widely used ✔
BUT flawed ❗
Moving toward:
Effect size
Replication
Type II Error
A false negative, where a real effect is missed.
Effect Size
A measure of how big or strong a result is, indicating practical importance.
Why It Matters
Significance ≠ importance
Effect size tells:
Magnitude of effect
Practical importance
📌 Key Idea
Required by the APA because:
p-values don’t show effect strength
✅ Common Effect Size Measures:
Correlation coefficient (r)
Cohen’s d
Beta weights
Odds ratios
Correlation Coefficient (r)
A measure of the strength and direction of a relationship between two variables.
Even small correlations can matter over time
📌 Example:
Baseball: r = .06 per at-bat → huge seasonal impact
✅ Interpretation (Funder & Ozer Rule of Thumb)
| r value | Size |
|---|---|
| .05 | Very small |
| .10 | Small |
| .20 | Medium |
| .30 | Large |
| .40 | Very large |
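The rule-of-thumb cutoffs can be captured in a small helper function. This is my own sketch, assuming the thresholds act as lower bounds for each label:

```python
# Illustrative helper (not from the notes): map a correlation to the
# Funder & Ozer size labels, treating each cutoff as a lower bound.
def effect_size_label(r: float) -> str:
    r = abs(r)  # direction does not affect size
    if r >= .40:
        return "Very large"
    if r >= .30:
        return "Large"
    if r >= .20:
        return "Medium"
    if r >= .10:
        return "Small"
    return "Very small"

# The baseball example (r = .06) would be labeled "Very small",
# yet it can still matter enormously in aggregate.
```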
Confidence Intervals (CI)
A range that estimates where the true population value likely lies.
🔹 7. Problems with “Variance Explained” (r²)
Common Method:
Square correlation:
r = .30 → r² = .09 (9%)
⚠ Issue:
Makes effects look too small
Misleading interpretation
✅ Better Approach:
Use:
Raw correlation (r)
Context
Real-world impact
Variance Explained (r²)
The square of the correlation which indicates the proportion of variance accounted for, but can be misleading.
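A one-line sketch (mine, not the notes') makes the concern concrete: squaring a respectable correlation yields a percentage that sounds negligible.

```python
# Sketch contrasting a raw correlation with "variance explained" (r squared),
# showing why squaring can make the same effect look much smaller.
def variance_explained(r: float) -> float:
    return r ** 2

r = 0.30  # a "large" correlation by the Funder & Ozer rule of thumb
print(f"r = {r:.2f} -> r^2 = {variance_explained(r):.2f}")  # about 0.09, i.e. 9%
```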
3. Why Effect Size Matters
Significance (p-value) ≠ importance
Small effect sizes can accumulate to produce large outcomes (Abelson, 1985)
Example: A correlation of r = .30 translates to being correct about 2 out of 3 times (Rosenthal & Rubin, 1982)
Knowing effect size helps determine practical, theoretical relevance
4. Problems of Ignoring Effect Size
Misjudging importance of results
Social psychologists may lose perspective on what actually matters
Classic example: the victim's proximity in the Milgram obedience study had r ≈ .30, an effect previously left unquantified
The Binomial Effect Size Display (BESD)
The BESD is a simple method of converting correlations to a metric that is easier to interpret (Funder mentions this in passing in Chapter 4). Effectively, the BESD lets us grasp the magnitude of an effect more easily.
The BESD formula we will use assumes that we have two groups with the same number of people in each group:

BESD = .50 + r/2
Assume we know that more conscientious people tend to have more health check-ups, and that the correlation between conscientiousness and number of health check-ups per year is .45. What would the corresponding BESD be?
(Note: make sure to do the division first, then the addition. Recall from school that division and multiplication are carried out before addition or subtraction.)
BESD = .50 + .45/2 = .50 + .225 = .725
This value, .725, means that 72.5% of people who score above average on conscientiousness will also be above average in yearly health check-ups.
Significance vs Importance
Significance does not equate to importance; effect size provides insight into practical relevance.
Evaluation of Effect Size
Assessing research accuracy, comparing findings, and understanding real-world impacts.
Memory Trick for p-value
Remember that p represents the probability of data, not the hypothesis.
Importance of Small Effects
Small effects can accumulate to produce substantial outcomes and should not be underestimated.
Science Reporting Recommendation
Science should assess and report effect sizes carefully to avoid misjudgments of result importance.