Attrition bias
When people drop out of a study or some data goes missing. This can mess with results and make the evidence less reliable—like only finishing a puzzle with the pieces you didn’t lose.
Echo chambers
When you only hear opinions that match your own because of who you follow, your friends, or algorithms that show you what you already agree with. It creates a feedback loop where your views get louder and alternatives disappear.
Evidence for H
A fact counts as evidence for a hypothesis (H) if it would be more likely to happen if H were true than if it weren’t. That means we should trust H a little more because of this fact.
Evidence test
To check whether a fact is evidence for a hypothesis, ask: would this fact be more likely if the hypothesis were true than if it were false? If yes, it counts as evidence for the hypothesis; if it would be less likely, it counts as evidence against; if it's equally likely either way, it's independent of the hypothesis.
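In symbols, a minimal restatement of the test above (writing E for the fact and H for the hypothesis):

```latex
\[
  P(E \mid H) > P(E \mid \neg H) \;\Longrightarrow\; E \text{ is evidence for } H
\]
% Reverse the inequality and E is evidence against H;
% if the two sides are equal, E is independent of H.
```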
File-drawer effect
Researchers sometimes don’t share studies with boring or negative results—they just leave them in their “file drawer.” This creates a misleading picture because we mostly see exciting or positive results that get published.
Heads I win, tails we’re even
This mistake happens when someone treats a fact as supporting their belief, but if the opposite happened, they’d just ignore it. It’s an unfair way of judging evidence, because it only accepts support, not challenge.
Hypothesis (H)
A claim or idea that you’re trying to test to see whether it’s true.
Independent of H
If a fact is just as likely whether a hypothesis is true or not, it doesn’t support or oppose the hypothesis—it’s independent of it.
Media bias
When media content is shaped by what grabs attention or pleases certain audiences. This includes political bias but also the way algorithms tailor content to keep us engaged, even if it leaves out the full truth.
One-sided strength testing
A mistake where we only ask how likely a fact is if our hypothesis is true—without asking how likely it would be if it were false. That can make something seem like good evidence when it might not be.
Opposite evidence rule
To avoid being biased, ask "If the opposite fact had happened, would I count it as evidence for my view?" If yes, then the actual fact should count against your view. Treat opposites fairly.
Publication bias
Academic journals prefer publishing studies that show surprising or attention-grabbing results. This means ordinary or unexciting studies often get left out, which can skew what we think the overall evidence says.
Selection effect
When something filters or limits what we observe, it can make our evidence biased without us realizing it. We only see what gets through the filter.
Selective noticing
We tend to notice and remember facts that support a hypothesis we’re already thinking about—and overlook facts that don’t. This can make it feel like we’re seeing more support than we really are.
Serial position effect
People remember the first and last items in a series better than the middle ones.
Strength factor
A number that tells us how strong a piece of evidence is for a hypothesis. It’s based on how much more likely the evidence is if the hypothesis is true than if it’s false. Higher number = stronger evidence.
Strength test
To find how strong a piece of evidence is, compare how likely it is if the hypothesis is true vs. false. The ratio is the strength factor.
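A minimal sketch of the strength test in Python; the probabilities are invented purely for illustration:

```python
def strength_factor(p_e_given_h: float, p_e_given_not_h: float) -> float:
    """How many times more likely the evidence is if the hypothesis is true
    than if it is false (the ratio described above)."""
    return p_e_given_h / p_e_given_not_h

# Hypothetical numbers: a symptom appears in 80% of people with a condition
# but only 10% of people without it.
print(strength_factor(0.80, 0.10))  # 8.0 -> fairly strong evidence
# A factor near 1 means weak or no evidence; below 1, evidence against.
```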
Survivor bias
A type of selection effect where we only see the “survivors” of a process and forget about those who didn’t make it. Like thinking smoking is safe because you've only met old smokers—not the ones who died young.
Base rate
How common something is in a group overall, before you look at any specific evidence about a particular case (e.g., how many people in the population have a disease in the first place).
Central tendency
A way to describe what’s typical in a dataset. This includes the mean (average), the median (middle value), and the mode (most common value).
Confidence interval
A range of numbers we’re fairly sure includes the true population value. If we say a 95% confidence interval, it means we’re 95% confident the truth lies in that range.
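A rough sketch of computing a 95% confidence interval for a sample mean with made-up data; it uses the 1.96 normal approximation, whereas a careful analysis of a sample this small would use a t-value instead:

```python
import math
import statistics

sample = [21, 25, 19, 30, 22, 27, 24, 26]                 # invented measurements
mean = statistics.mean(sample)
sem = statistics.stdev(sample) / math.sqrt(len(sample))   # standard error of the mean

low, high = mean - 1.96 * sem, mean + 1.96 * sem          # ~95% confidence interval
print(f"95% CI: roughly {low:.1f} to {high:.1f}")
```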
Convenience sample
A small group chosen because it’s easy to access, not because it represents the whole population. It can lead to weak or biased results.
Heuristic
A mental shortcut that helps you make quick decisions. It’s useful, but it can lead to predictable mistakes.
Law of large numbers
The bigger your sample, the more accurately it reflects the population. Small samples can easily give weird results by chance.
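A quick simulation of the law of large numbers with fair coin flips; since the true proportion of heads is 0.5, larger samples should land closer to it:

```python
import random

random.seed(0)
for n in (10, 100, 10_000):
    heads = sum(random.random() < 0.5 for _ in range(n))
    print(f"n = {n:>6}: proportion of heads = {heads / n:.3f}")
# Tiny samples can stray far from 0.5 by chance; big samples hug it.
```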
Loose generalization
A vague claim about a group, without knowing how many in the group actually fit the description (e.g., “Canadians are polite”).
Margin of error
The amount by which a reported number might be off. It goes hand-in-hand with confidence intervals.
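One common textbook formula for a survey's margin of error, assuming a simple random sample of size n and an observed proportion p̂; at 95% confidence, z ≈ 1.96, which is where the familiar "about 1 over the square root of n" rule of thumb comes from:

```latex
\[
  \text{margin of error} \approx z \sqrt{\frac{\hat{p}\,(1-\hat{p})}{n}}
  \qquad\text{which is at most } \frac{z}{2\sqrt{n}} \approx \frac{1}{\sqrt{n}}
  \text{ when } z \approx 1.96 .
\]
```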
Outlier
A number way different from the rest of your data. It can affect the average a lot and might be worth examining separately.
Participation bias
When the people who choose to respond to a survey are different from those who don’t. This can distort results.
Representative sample
A group that mirrors the population in all important ways (like age, race, etc.). It makes your results more trustworthy.
Representativeness heuristic
A shortcut where we judge things based on how much they match our idea of a category, even if that’s not statistically accurate.
Response bias
When people give survey answers they think are expected or more acceptable, rather than honest answers.
Sampling bias
When the way we choose our sample makes certain types of people more or less likely to be included, leading to skewed results.
Statistical generalization
Using data from a sample to make a claim about the larger group it came from.
Statistical inference
Using specific data to support a more general claim—or sometimes, using general knowledge to say something about a specific case.
Statistical instantiation
The opposite of generalization—applying what we know about the population to guess about a sample.
Stereotype
A broad claim about a group that often lacks clear evidence or numbers. It’s usually oversimplified or unfair.
Stratified random sampling
A way to make sure your sample has the same proportions of important subgroups as the full population (e.g., same mix of genders, races, etc.).
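A bare-bones sketch of stratified random sampling, assuming a hypothetical population list in which each person is tagged with a subgroup:

```python
import random

random.seed(1)
# Hypothetical population: 600 people in group_a, 400 in group_b
population = [(f"person{i}", "group_a" if i < 600 else "group_b") for i in range(1000)]

def stratified_sample(pop, sample_size):
    """Draw randomly within each subgroup, in proportion to its share of the population."""
    strata = {}
    for person, group in pop:
        strata.setdefault(group, []).append(person)
    chosen = []
    for members in strata.values():
        k = round(sample_size * len(members) / len(pop))
        chosen.extend(random.sample(members, k))
    return chosen

sample = stratified_sample(population, 100)
print(len(sample))   # ~100: about 60 from group_a and 40 from group_b
```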
Summary statistics
Key numbers that give an overview of your data—like average, median, range, or standard deviation.
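A small example computing those summary statistics with Python's standard library; the data values are made up, and the deliberately included outlier (31) shows why the mean and the median can disagree:

```python
import statistics

data = [4, 7, 7, 8, 9, 10, 12, 31]                  # invented data; 31 is an outlier

print("mean:   ", statistics.mean(data))            # pulled upward by the outlier
print("median: ", statistics.median(data))          # barely affected by it
print("range:  ", max(data) - min(data))
print("std dev:", round(statistics.stdev(data), 2))
```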
Causal argument
An argument whose conclusion is that one thing causes another. A correlation alone isn’t enough; a good causal argument also gives reasons to rule out mere chance, reverse causation, and a common cause.
Causal mechanism
The explanation for how one thing causes another. It’s the step-by-step path from cause to effect.
Clustering illusion
When we see a pattern in data that’s actually random. Our brain is just good at finding patterns—even if they’re not real.
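A tiny simulation of why the clustering illusion is so easy to fall for: purely random coin flips still produce eye-catching streaks.

```python
import random

random.seed(2)
flips = [random.choice("HT") for _ in range(60)]
print("".join(flips))

# Length of the longest run of identical outcomes
longest = current = 1
for prev, nxt in zip(flips, flips[1:]):
    current = current + 1 if nxt == prev else 1
    longest = max(longest, current)
print("longest streak:", longest)   # runs of 5 or more are routine in pure randomness
```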
Common cause
When two things happen together not because one causes the other, but because a third factor causes both. For example, hot weather causes both more ice cream sales and more drownings.
Correlation
When two things tend to happen together. But remember: correlation by itself doesn’t show causation. The link could be due to mere chance, a common cause, or reverse causation.
Double-blind study
A study where neither the participants nor the researchers know who’s in the test group or the control group. This helps avoid bias.
Immediate cause
Direct cause of something (e.g., touching fire = burn).
Distal cause
An earlier or background factor that led to the outcome (e.g., leaving the stove on = fire later).
Mere chance (as an explanation)
Sometimes things happen together just by coincidence—no cause, no pattern, just random luck.
Pattern-seeking
Our natural tendency to see patterns in noise or randomness, even when there’s no real meaning there.
Placebo-controlled
A study where one group gets the real treatment and the other gets a fake (placebo), to see if the real one actually works.
Placebo effect
When people feel better just because they think they got a helpful treatment—even if it was fake.
Post hoc ergo propter hoc
A fallacy that means “after this, therefore because of this.” Just because one thing happened after another doesn’t mean the first caused the second.
Randomized controlled trial
An experiment where people are randomly assigned to a treatment or control group. This helps ensure the groups are fair and makes the results more reliable.
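A bare-bones sketch of the random-assignment step of a randomized controlled trial; the participant names are placeholders:

```python
import random

random.seed(3)
participants = [f"participant{i}" for i in range(20)]

random.shuffle(participants)                  # randomize the order
half = len(participants) // 2
treatment_group = participants[:half]         # gets the real treatment
control_group = participants[half:]           # gets the placebo

print(len(treatment_group), len(control_group))   # 10 10
```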
Regression to the mean
Extreme results tend to be followed by more average ones. For example, someone with a super high test score one time might score closer to normal next time.
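A small simulation of regression to the mean, under the assumption that each test score is stable skill plus random luck; the people who top the first test tend to score closer to average on the retest (all numbers are arbitrary):

```python
import random
import statistics

random.seed(4)
skill = [random.gauss(100, 10) for _ in range(1000)]     # stable ability
test1 = [s + random.gauss(0, 10) for s in skill]         # ability + luck
test2 = [s + random.gauss(0, 10) for s in skill]         # same ability, fresh luck

top = sorted(range(1000), key=lambda i: test1[i], reverse=True)[:100]   # top 10% on test 1
print(round(statistics.mean(test1[i] for i in top), 1))  # well above 100
print(round(statistics.mean(test2[i] for i in top), 1))  # noticeably closer to 100
```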
Reverse causation
When you think A causes B, but really B causes A. For example, maybe happier people exercise more—not the other way around.
Robust evidence
Evidence that holds up across many different studies and methods. It’s less likely to be a fluke.
Side effect (as an explanation)
Sometimes two things are related not through a cause-effect path, but because one causes a third thing that makes them look connected.
Statistical significance
A result is statistically significant if it’s unlikely to have happened just by chance. Usually this means that, if chance alone were at work, a result at least this extreme would show up less than 5% of the time.
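A rough illustration of statistical significance by simulation: the question is how often chance alone would produce a result at least as extreme as the one observed. The scenario (62 heads in 100 flips of a supposedly fair coin) is invented.

```python
import random

random.seed(5)
observed_heads = 62          # hypothetical result from 100 flips
trials = 10_000

# Simulate 100 fair-coin flips many times and count how often chance alone
# gives a result at least as far from 50 heads as the observed result.
extreme = sum(
    abs(sum(random.random() < 0.5 for _ in range(100)) - 50) >= abs(observed_heads - 50)
    for _ in range(trials)
)
print(extreme / trials)      # around 0.02: under 0.05, so "statistically significant"
```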