Quiz 2 study guide - easy definitions

55 Terms

1. Attrition bias

When people drop out of a study or some data goes missing. This can mess with results and make the evidence less reliable—like only finishing a puzzle with the pieces you didn’t lose.

2. Echo chambers

When you only hear opinions that match your own because of who you follow, your friends, or algorithms that show you what you already agree with. It creates a feedback loop where your views get louder and alternatives disappear.

3. Evidence for H

A fact counts as evidence for a hypothesis (H) if it would be more likely to happen if H were true than if it weren’t. That means we should trust H a little more because of this fact.

4. Evidence test

To check whether a fact is evidence for a hypothesis, ask: would this fact be more likely if the hypothesis were true than if it were false? If yes, it counts as evidence for the hypothesis; if no, it doesn't.

5. File-drawer effect

Researchers sometimes don’t share studies with boring or negative results—they just leave them in their “file drawer.” This creates a misleading picture because we mostly see exciting or positive results that get published.

6. Heads I win, tails we’re even

This mistake happens when someone treats a fact as supporting their belief, but if the opposite happened, they’d just ignore it. It’s an unfair way of judging evidence, because it only accepts support, not challenge.

7. Hypothesis (H)

A claim or idea that you’re trying to test to see whether it’s true.

8. Independent of H

If a fact is just as likely whether a hypothesis is true or not, it doesn’t support or oppose the hypothesis—it’s independent of it.

9. Media bias

When media content is shaped by what grabs attention or pleases certain audiences. This includes political bias but also the way algorithms tailor content to keep us engaged, even if it leaves out the full truth.

10. One-sided strength testing

A mistake where we only ask how likely a fact is if our hypothesis is true—without asking how likely it would be if it were false. That can make something seem like good evidence when it might not be.

11. Opposite evidence rule

To avoid being biased, ask "If the opposite fact had happened, would I count it as evidence for my view?" If yes, then the actual fact should count against your view. Treat opposites fairly.

12. Publication bias

Academic journals prefer publishing studies that show surprising or attention-grabbing results. This means ordinary or unexciting studies often get left out, which can skew what we think the overall evidence says.

13. Selection effect

When something filters or limits what we observe, it can make our evidence biased without us realizing it. We only see what gets through the filter.

14. Selective noticing

We tend to notice and remember facts that support a hypothesis we’re already thinking about—and overlook facts that don’t. This can make it feel like we’re seeing more support than we really are.

15. Serial position effect

People remember the first and last items in a series better than the middle ones.

16. Strength factor

A number that tells us how strong a piece of evidence is for a hypothesis. It’s based on how much more likely the evidence is if the hypothesis is true than if it’s false. Higher number = stronger evidence.

17. Strength test

To find how strong a piece of evidence is, compare how likely it is if the hypothesis is true vs. false. The ratio is the strength factor.
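To make the arithmetic concrete, here's a minimal Python sketch of the strength test; the 80% and 10% probabilities are invented for illustration, not from the course:

    # Minimal sketch: the strength factor is the ratio
    # P(evidence | H true) / P(evidence | H false).
    def strength_factor(p_e_given_h: float, p_e_given_not_h: float) -> float:
        """How many times more likely the evidence is under H than under not-H."""
        return p_e_given_h / p_e_given_not_h

    # Made-up example: a symptom shows up in 80% of people with a condition
    # (H true) but only 10% of people without it (H false).
    print(strength_factor(0.80, 0.10))      # 8.0 -> fairly strong evidence for H

    # The evidence test is the special case of asking whether the factor
    # is above 1: if so, the fact counts as evidence for H at all.
    print(strength_factor(0.80, 0.10) > 1)  # True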

18. Survivor bias

A type of selection effect where we only see the “survivors” of a process and forget about those who didn’t make it. Like thinking smoking is safe because you've only met old smokers—not the ones who died young.

Base rate

How common something is in a population overall, before you look at any specific evidence about a particular case.

19. Central tendency

A way to describe what’s typical in a dataset. This includes the mean (the average), the median (the middle value), and the mode (the most common value).

20. Confidence interval

A range of numbers we’re fairly sure includes the true population value. If we say a 95% confidence interval, it means we’re 95% confident the truth lies in that range.

21. Convenience sample

A small group chosen because it’s easy to access, not because it represents the whole population. It can lead to weak or biased results.

22. Heuristic

A mental shortcut that helps you make quick decisions. It’s useful, but it can lead to predictable mistakes.

23. Law of large numbers

The bigger your sample, the more accurately it reflects the population. Small samples can easily give weird results by chance.
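A quick simulation makes this visible; the fair-coin setup here is an invented example:

    # Law of large numbers: small samples wobble, big samples settle
    # near the true value (0.5 for a fair coin).
    import random

    random.seed(1)
    for n in (10, 100, 10_000):
        flips = [random.random() < 0.5 for _ in range(n)]
        print(n, sum(flips) / n)
    # The proportion of heads drifts toward 0.5 as n grows.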

24. Loose generalization

A vague claim about a group, without knowing how many in the group actually fit the description (e.g., “Canadians are polite”).

25. Margin of error

The amount by which a reported number might be off (e.g., 52% support, plus or minus 3 points). It goes hand-in-hand with confidence intervals: the reported number plus or minus the margin of error gives the interval.
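As an illustration, here's one standard way pollsters compute a 95% margin of error for a percentage (the normal approximation); the poll numbers below are made up:

    # 95% margin of error for a sample proportion: 1.96 * sqrt(p*(1-p)/n).
    import math

    p, n = 0.52, 1000  # invented poll: 52% support in a sample of 1,000
    moe = 1.96 * math.sqrt(p * (1 - p) / n)
    print(f"{p:.0%} +/- {moe:.1%}")                   # 52% +/- 3.1%
    print(f"95% CI: ({p - moe:.1%}, {p + moe:.1%})")  # the confidence interval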

26. Outlier

A number way different from the rest of your data. It can affect the average a lot and might be worth examining separately.
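A tiny example of that effect, with invented salary data, showing how one outlier drags the mean while the median barely moves:

    # One outlier shifts the mean a lot; the median is more resistant.
    from statistics import mean, median

    salaries = [42, 45, 47, 50, 52]   # in thousands (made-up numbers)
    with_outlier = salaries + [900]   # one CEO-sized outlier
    print(mean(salaries), median(salaries))          # 47.2 47
    print(mean(with_outlier), median(with_outlier))  # ~189.3 48.5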

27. Participation bias

When the people who choose to respond to a survey are different from those who don’t. This can distort results.

28. Representative sample

A group that mirrors the population in all important ways (like age, race, etc.). It makes your results more trustworthy.

29. Representativeness heuristic

A shortcut where we judge things based on how much they match our idea of a category, even if that’s not statistically accurate.

30. Response bias

When people give survey answers they think are expected or more acceptable, rather than honest answers.

31. Sampling bias

When the way we choose our sample makes certain types of people more or less likely to be included, leading to skewed results.

32. Statistical generalization

Using data from a sample to make a claim about the larger group it came from.

33. Statistical inference

Using specific data to support a more general claim—or sometimes, using general knowledge to say something about a specific case.

34. Statistical instantiation

The opposite of generalization—applying what we know about the population to guess about a sample.

35. Stereotype

A broad claim about a group that often lacks clear evidence or numbers. It’s usually oversimplified or unfair.

36. Stratified random sampling

A way to make sure your sample has the same proportions of important subgroups as the full population (e.g., same mix of genders, races, etc.).
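Here's a minimal sketch of proportional stratified sampling in Python; the population sizes and group names are invented:

    # Sample from each subgroup in proportion to its share of the population.
    import random

    random.seed(0)
    population = {"under_30": list(range(600)), "30_plus": list(range(400))}
    sample_size = 50
    total = sum(len(members) for members in population.values())

    sample = []
    for name, members in population.items():
        k = round(sample_size * len(members) / total)  # keep the proportions
        sample.extend(random.sample(members, k))
    print(len(sample))  # 50 total: 30 from under_30, 20 from 30_plus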

37. Summary statistics

Key numbers that give an overview of your data—like average, median, range, or standard deviation.
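For reference, Python's standard library can compute all of these; the quiz scores below are made up:

    # Common summary statistics on a small, invented list of quiz scores.
    from statistics import mean, median, stdev

    scores = [61, 72, 75, 78, 80, 83, 95]
    print(mean(scores))               # average
    print(median(scores))             # middle value: 78
    print(max(scores) - min(scores))  # range: 34
    print(stdev(scores))              # spread around the mean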

Causal argument

An argument that one thing causes another, usually based on a correlation plus reasons to rule out other explanations like mere chance, a common cause, or reverse causation.

38. Causal mechanism

The explanation for how one thing causes another. It’s the step-by-step path from cause to effect.

39. Clustering illusion

When we see a pattern in data that’s actually random. Our brain is just good at finding patterns—even if they’re not real.
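A quick simulation shows why: pure chance reliably produces streaks that look like patterns. This sketch counts the longest run of identical flips in a random coin sequence:

    # Random sequences contain streaks. 200 fair-coin flips almost always
    # include a surprisingly long run of straight heads or tails.
    import random

    random.seed(7)
    flips = "".join(random.choice("HT") for _ in range(200))
    longest = current = 1
    for prev, cur in zip(flips, flips[1:]):
        current = current + 1 if cur == prev else 1
        longest = max(longest, current)
    print(longest)  # typically 6 or more: a "cluster" made by pure chance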

40. Common cause

When two things happen together not because one causes the other, but because a third factor causes both. For example, hot weather causes both more ice cream sales and more drownings.

41. Correlation

When two things tend to happen together. But remember: correlation on its own doesn’t prove that one causes the other.

42. Double-blind study

A study where neither the participants nor the researchers know who’s in the test group or the control group. This helps avoid bias.

43. Immediate cause

The most direct and recent cause of an outcome (e.g., touching fire = burn).

44. Distal cause

An earlier or background factor that led to the outcome (e.g., leaving the stove on = fire later).

45. Mere chance (as an explanation)

Sometimes things happen together just by coincidence—no cause, no pattern, just random luck.

46. Pattern-seeking

Our natural tendency to see patterns in noise or randomness, even when there’s no real meaning there.

47. Placebo-controlled

A study where one group gets the real treatment and the other gets a fake (placebo), to see if the real one actually works.

48. Placebo effect

When people feel better just because they think they got a helpful treatment—even if it was fake.

49. Post hoc ergo propter hoc

A fallacy that means “after this, therefore because of this.” Just because one thing happened after another doesn’t mean it was caused by it.

50. Randomized controlled trial

An experiment where people are randomly assigned to a treatment or control group. This helps ensure the groups are fair and makes the results more reliable.
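The core move is just random assignment. A minimal sketch, with invented participant IDs:

    # Randomly split participants into treatment and control groups,
    # so the groups start out alike on average.
    import random

    participants = ["P01", "P02", "P03", "P04", "P05", "P06", "P07", "P08"]
    random.seed(3)
    random.shuffle(participants)
    half = len(participants) // 2
    treatment, control = participants[:half], participants[half:]
    print("treatment:", treatment)
    print("control:  ", control)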

51. Regression to the mean

Extreme results tend to be followed by more average ones. For example, someone with a super high test score one time might score closer to normal next time.
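A small simulation of the idea, with made-up numbers: each test score is true ability plus luck, so the top scorers on test 1 tend to fall back toward the average on test 2:

    # Regression to the mean: the top scorers on test 1 were often lucky,
    # so their test 2 scores sit closer to the overall average.
    import random

    random.seed(5)
    ability = [random.gauss(70, 5) for _ in range(1000)]
    test1 = [a + random.gauss(0, 10) for a in ability]  # ability + luck
    test2 = [a + random.gauss(0, 10) for a in ability]  # fresh luck

    top = sorted(range(1000), key=lambda i: test1[i], reverse=True)[:50]
    print(sum(test1[i] for i in top) / 50)  # well above the mean of 70
    print(sum(test2[i] for i in top) / 50)  # pulled back toward 70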

52. Reverse causation

When you think A causes B, but really B causes A. For example, maybe happier people exercise more—not the other way around.

53. Robust evidence

Evidence that holds up across many different studies and methods. It’s less likely to be a fluke.

54. Side effect (as an explanation)

Sometimes two things are related not through a cause-effect path, but because one causes a third thing that makes them look connected.

55. Statistical significance

A result is statistically significant if it’s unlikely to have happened just by chance. Usually this means that if chance alone were at work, a result at least this extreme would turn up less than 5% of the time.
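One way to see what that 5% threshold means is to simulate it: if pure chance (a fair coin) were at work, how often would we get a result at least as extreme as the one observed? The observed count below is invented:

    # Estimate a p-value by simulation: how often does a fair coin give
    # at least as many heads as we actually observed?
    import random

    random.seed(2)
    observed = 62   # made-up result: 62 heads in 100 flips
    trials = 10_000
    extreme = sum(
        sum(random.random() < 0.5 for _ in range(100)) >= observed
        for _ in range(trials)
    )
    print(extreme / trials)  # ~0.01: under 5%, so "statistically significant"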