Exam 1 - 203

105 Terms

1

Statistics as Social Constructions

Subjective choices in defining variables, measurement, and sampling shape statistical findings.

2

Critical Evaluation of Statistics

Involves identifying issues like bad statistics, mutant statistics, soft statistics, and the concept of the dark figure.

3

Conceptual Definitions

Abstract meanings defining variables in research.

4

Operational Definitions

Measurable specifications for defining variables in research.

5

Sampling Bias

Systematic differences between a sample and the population.

6

Selection Bias

Occurs when certain individuals are more likely to be included in a sample.

7

Volunteer Bias

Participants who volunteer differ from those who do not volunteer for a study.

8

Convenience Sampling Bias

Bias introduced when participants are chosen because they are easy to reach, which limits the diversity of the sample.

9

Undercoverage Bias

When certain groups are underrepresented in the sample.

10

Nonresponse Bias

Occurs when participants who decline to respond differ systematically from those who participate.

11

Survivorship Bias

Only the outcomes of 'survivors' are analyzed, overlooking non-survivors.

12

Healthy User Bias

Participants in research are generally healthier than the general population.

13

Recall Bias

Inaccuracies in data stemming from participants' memories.

14

Sampling Error

Random differences between a sample statistic and the true population parameter that reduce with larger sample sizes.

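Illustrative sketch (Python with NumPy assumed; the population values are hypothetical) showing sampling error shrinking as sample size grows:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 100, 15  # hypothetical population mean and standard deviation

for n in (10, 100, 1000, 10000):
    # draw many samples of size n and track how far each sample mean lands from mu
    errors = [abs(rng.normal(mu, sigma, n).mean() - mu) for _ in range(2000)]
    print(f"n={n:>5}  average |sample mean - mu| = {np.mean(errors):.3f}")
```
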
15

Measurement Scales

Types include nominal, ordinal, interval, and ratio scales.

16

Categorical vs. Continuous Data

Categorical data (nominal, sometimes ordinal) vs. continuous data (sometimes ordinal, interval, ratio); determines choice of summary statistics and visualizations.

17

Describing Categorical Data

Use frequency distributions and bar charts; the mode is used as a measure of central tendency.

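A minimal example (plain Python; the data are hypothetical) of a frequency distribution and mode for a nominal variable; a bar chart would typically be drawn from the same counts:

```python
from collections import Counter

majors = ["psych", "bio", "psych", "econ", "psych", "bio"]  # hypothetical nominal data

freq = Counter(majors)                 # frequency distribution
mode, count = freq.most_common(1)[0]   # most frequent category = mode
print(freq)                            # Counter({'psych': 3, 'bio': 2, 'econ': 1})
print("mode:", mode)                   # psych
```
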
18

Describing Continuous Data

Use histograms or density plots to assess the shape of the distribution.

19

Symmetric Distributions

Distributions in which the mean, median, and mode coincide; the normal and uniform distributions are common examples.

20

Asymmetric Distributions

Distributions characterized by skewness with mean differing from the median.

21

z Scores

Standardized scores that allow for meaningful comparisons between different distributions.

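A short sketch (Python; the scores are hypothetical) of how z scores put values from differently scaled distributions on a common footing:

```python
def z_score(x, mean, sd):
    """Standardize a raw score: how many standard deviations x lies from the mean."""
    return (x - mean) / sd

# hypothetical scores from two exams with different scales
print(z_score(85, mean=75, sd=5))      # 2.0  -> 2 SDs above the mean
print(z_score(620, mean=500, sd=100))  # 1.2  -> 1.2 SDs above the mean
```
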
22

Normal Distribution

A symmetric, bell-shaped distribution where most scores cluster around the mean.

23

p-value

The probability of obtaining a result at least as extreme as the observed data if the null hypothesis is true.

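A minimal computation (Python with SciPy assumed; the test statistic is hypothetical) of a two-tailed p-value for a z statistic:

```python
from scipy import stats

z = 2.10                                   # hypothetical observed test statistic
p_two_tailed = 2 * stats.norm.sf(abs(z))   # P(|Z| >= 2.10) if H0 is true
print(round(p_two_tailed, 4))              # ~0.0357
```
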
24

Type I Error

Rejecting the null hypothesis when it is actually true.

25

Type II Error

Retaining the null hypothesis when it is actually false.

26

Effect Size

A measure of the magnitude of an observed difference or relationship in research.

27

Cohen's d

A standardized measure of effect size indicating mean differences expressed in standard deviation units.

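A sketch (Python with NumPy; the group scores are hypothetical) of Cohen's d for two independent groups using the pooled standard deviation:

```python
import numpy as np

def cohens_d(x, y):
    """Mean difference expressed in pooled-standard-deviation units."""
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
    return (np.mean(x) - np.mean(y)) / np.sqrt(pooled_var)

treat = [12, 14, 15, 13, 16, 14]   # hypothetical treatment scores
ctrl  = [10, 11, 13, 12, 11, 12]   # hypothetical control scores
print(round(cohens_d(treat, ctrl), 2))
```
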
28

Statistical Power

The probability of correctly rejecting a false null hypothesis.

29

Confidence Intervals

Quantitative ranges that estimate the precision of sample estimates for population parameters.

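A minimal sketch (Python with SciPy; the sample is hypothetical) of a 95% t-based confidence interval for a mean:

```python
import numpy as np
from scipy import stats

scores = np.array([98, 102, 95, 110, 105, 99, 101, 104])  # hypothetical sample
m, se = scores.mean(), stats.sem(scores)                   # sample mean and standard error
low, high = stats.t.interval(0.95, len(scores) - 1, loc=m, scale=se)
print(f"95% CI: [{low:.1f}, {high:.1f}]")                  # narrower interval = more precise estimate
```
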
30

Replication Crisis

A situation where many published research findings fail to replicate.

31

Meta-Analysis

A systematic review method that quantitatively synthesizes research findings to provide objective evidence.

32

Describing Continuous Data

Use histograms or density plots to assess the shape of the distribution.

33

Symmetry, the Mean, and the Median

If the mean and median are equal, the distribution is symmetric; if they differ, the distribution is asymmetric, with the mean pulled toward the tail.

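A quick demonstration (Python with NumPy; simulated data) that the mean is pulled toward the tail while the median is not:

```python
import numpy as np

rng = np.random.default_rng(1)
symmetric = rng.normal(50, 10, 10_000)      # mean and median roughly equal
right_skewed = rng.exponential(10, 10_000)  # long right tail pulls the mean above the median

for name, x in [("symmetric", symmetric), ("right-skewed", right_skewed)]:
    print(f"{name:>12}: mean = {np.mean(x):.2f}, median = {np.median(x):.2f}")
```
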
34

Standard Scores

Transform raw scores into a common scale; z scores are the most widely used standard scores.

35

Estimating vs. Calculating Percentiles

Visual inspection of graphs helps estimate percentiles before using the unit normal table.

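A one-line check (Python with SciPy) that the cumulative normal distribution reproduces the unit normal table:

```python
from scipy import stats

z = 1.00
percentile = stats.norm.cdf(z) * 100  # proportion of the normal distribution below z
print(round(percentile, 1))           # ~84.1, matching the unit normal table entry for z = 1.00
```
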
36

Statistical Testing

Moves beyond describing data to evaluating whether observed patterns reflect true effects or random variation.

37

Null and Alternative Hypotheses

The null hypothesis (H0) assumes no effect, while the alternative hypothesis (H1) suggests a true effect or difference.

38

Sampling Distributions

Describe the expected variation in sample statistics under the null hypothesis, forming the basis for statistical decision-making.

39

Alpha Level (α)

The pre-set probability threshold (typically 0.05) for defining statistical significance.

40

One-Tailed vs. Two-Tailed Tests

One-tailed tests allocate the entire alpha level to one extreme of the sampling distribution, while two-tailed tests split it across both extremes.

41

Critical Value and Critical Region

The critical value marks the boundary for significance; the critical region consists of results so extreme they would occur with probability less than α under H0.

42

Confidence Intervals as Precision Estimates

Confidence intervals (CIs) quantify the precision of sample estimates by providing a range of plausible values for a population parameter.

43

Challenges to Reproducibility

Many published research findings fail to replicate, undermining trust in scientific conclusions.

44

Meta-Analysis as a Systematic Review Method

A quantitative approach that synthesizes research findings to provide a more objective and reproducible summary of evidence.

45

Effect Size for Associations

Measures the strength of relationships rather than group differences.

46

Rules of Thumb for Effect Sizes

Guidelines for interpreting effect sizes vary by context.

47

Contextual Interpretation of Effect Sizes

Even small effects can be meaningful when the outcomes are important.

48

Statistical Power

The probability of correctly rejecting a false null hypothesis.

49

Factors That Influence Statistical Power

Three main factors determine power: sample size, effect size, and the decision threshold (α level).

50

Statistical Testing

Moves beyond describing data to evaluating whether observed patterns reflect true effects or random variation.

51

Modeling Chance

Establishes expectations for data under the assumption that no real effect exists; comparisons to these expectations determine significance.

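A sketch of modeling chance with a permutation-style simulation (Python with NumPy; the two groups are hypothetical): shuffle the group labels many times to see how large a mean difference arises when no real effect exists:

```python
import numpy as np

rng = np.random.default_rng(2)

group_a = np.array([5.1, 6.0, 5.8, 6.4, 5.5, 6.2])  # hypothetical data
group_b = np.array([4.8, 5.2, 5.0, 5.6, 4.9, 5.3])
observed = group_a.mean() - group_b.mean()

pooled = np.concatenate([group_a, group_b])
null_diffs = []
for _ in range(10_000):
    rng.shuffle(pooled)                       # under H0, group labels are interchangeable
    null_diffs.append(pooled[:6].mean() - pooled[6:].mean())

p = np.mean(np.abs(null_diffs) >= abs(observed))  # two-tailed proportion as extreme as observed
print(f"observed difference = {observed:.2f}, p = {p:.4f}")
```
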
52

Null and Alternative Hypotheses

The null hypothesis (H0) assumes no effect, while the alternative hypothesis (H1) suggests a true effect or difference.

53

Sampling Error

Random variability causes sample statistics to differ from population parameters; statistical tests account for this variability when assessing significance.

54

Sampling Distributions

Describe the expected variation in sample statistics under the null hypothesis, forming the basis for statistical decision-making.

55

Alpha Level (α)

The pre-set probability threshold (typically 0.05) for defining statistical significance.

56

One-Tailed vs. Two-Tailed Tests

One-tailed tests allocate the entire alpha level to one extreme of the sampling distribution, while two-tailed tests split it across both extremes; two-tailed tests are standard to avoid bias.

57

Critical Value and Critical Region

The critical value marks the boundary for significance; the critical region consists of sample results so extreme that they would occur with probability less than α under H0, leading to its rejection.

58

Interpreting Results

If the test statistic falls inside the critical region, reject H0; the result is statistically significant. If outside, retain H0; the result is not statistically significant.

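A minimal decision rule (Python with SciPy; the observed statistic is hypothetical) for a two-tailed z test at α = 0.05:

```python
from scipy import stats

alpha = 0.05
z_obs = 2.30                              # hypothetical test statistic
z_crit = stats.norm.ppf(1 - alpha / 2)    # two-tailed critical value, about 1.96

if abs(z_obs) > z_crit:
    print("Inside the critical region -> reject H0 (statistically significant)")
else:
    print("Outside the critical region -> retain H0 (not statistically significant)")
```
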
59

Statistical vs. Practical Significance

Statistical significance indicates whether results are unlikely due to chance, but does not address whether they are meaningful in practical terms.

60

Practical Significance

A statistically significant result may lack practical importance due to flawed study design or small effect size.

61

Effect Size

Measures the magnitude of an observed difference or relationship, independent of sample size.

62

Why Effect Size Matters

Helps compare findings across studies, interpret unfamiliar metrics, and assess the impact of research results.

63

Standardized Effect Sizes for Mean Comparisons

Cohen’s d expresses mean differences in standard deviation units, and η² indicates how much variability in the dependent variable is accounted for by group differences.

64

Effect Sizes for Associations

Measures the strength of relationships rather than group differences; includes Pearson’s correlation coefficient and coefficient of determination.

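A short sketch (Python with NumPy; the paired measurements are hypothetical) of Pearson's r and the coefficient of determination:

```python
import numpy as np

hours = np.array([1, 2, 3, 4, 5, 6, 7, 8])          # hypothetical predictor
score = np.array([52, 55, 61, 60, 68, 70, 75, 79])  # hypothetical outcome

r = np.corrcoef(hours, score)[0, 1]      # Pearson correlation coefficient
print(f"r = {r:.2f}, r^2 = {r**2:.2f}")  # r^2 = proportion of variance shared
```
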
65

Rules of Thumb for Effect Sizes

Guidelines for interpreting effect sizes vary by context; common benchmarks include Cohen’s d and r values.

66

Contextual Interpretation of Effect Sizes

Even small effects can be meaningful; costs and benefits, accumulation over time, and generality of effects must be considered.

67

Statistical Power

The probability of correctly rejecting a false null hypothesis; higher power allows for greater detection of true effects.

68

Typical Power Levels in Research

Studies often have insufficient power; small effect power is ~0.23, medium effect ~0.62, large effect ~0.84.

69

Factors That Influence Statistical Power

Sample size, effect size, and decision threshold (α level) determine power.

70

Power Analysis

Conducted during study planning to ensure adequate power and determine sample size needed.

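A rough planning sketch (Python with SciPy) using the common normal-approximation formula n ≈ 2 × ((z_(1−α/2) + z_power) / d)^2 per group for a two-sample, two-tailed test; a full power analysis would normally use dedicated software:

```python
from math import ceil
from scipy import stats

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate sample size per group (two-sample, two-tailed, normal approximation)."""
    z_alpha = stats.norm.ppf(1 - alpha / 2)
    z_power = stats.norm.ppf(power)
    return ceil(2 * ((z_alpha + z_power) / d) ** 2)

print(n_per_group(0.5))  # medium effect -> ~63 per group
print(n_per_group(0.2))  # small effect  -> ~393 per group
```
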
71

How to Maximize Power

Increase sample size, collect more data, use within-subjects designs, measure variables precisely, and avoid dichotomizing continuous variables.

72

Challenges to Reproducibility

Many published research findings fail to replicate, undermining trust in scientific conclusions.

73

False Findings in Research

John Ioannidis argued that most published findings are false due to high false-positive rates, small sample sizes, flexibility in study designs and analyses, conflicts of interest, and competitive research environments.

74

Replication Crisis

Replication is a cornerstone of science, but replications are rare due to a focus on novelty in academic publishing. Large-scale replication efforts in psychology found that fewer than half of published findings were successfully replicated.

75

Questionable Research Practices (QRPs)

Researchers engage in flexible analysis and reporting strategies that can increase false positives.

76

Researcher Degrees of Freedom

The flexibility researchers have in designing studies, analyzing data, and reporting results can lead to inflated false-positive rates.

77

p Hacking

Conducting multiple analyses and only reporting those that produce significant results.

78

HARKing (Hypothesizing After the Results are Known)

Presenting post hoc explanations as if they were planned in advance.

79

Selective Reporting

Failing to report all experimental conditions, variables, or analyses, leading to biased literature.

80

Bias in Peer Review

Peer review has systemic flaws that can undermine its role as a quality control mechanism.

81

Volunteer Nature of Reviewing

Reviewers are unpaid, leading to variable effort and care in evaluations.

82

Anonymity and Accountability in Peer Review

Anonymous reviews encourage honesty but reduce accountability and recognition, leading to inconsistent diligence.

83

Reviewer Errors

Careful reviewers may fail to detect undisclosed multiple statistical tests or subtle questionable research practices.

84

Shared and Idiosyncratic Biases

Reviewers may favor research aligned with their own views or overlook methodological flaws in studies that support a preferred narrative.

85

Resistance to Criticizing Common Practices

Reviewers may avoid critiquing methods they use, such as convenience sampling or reliance on online participant pools.

86

Strategies to Improve Reproducibility

Encouraging replication studies, pre-registration, registered reports, open science practices, and comprehensive reporting to verify research results.

87

Replication Studies

Encouraging direct and conceptual replications to verify results.

88

Pre-Registration

Publicly posting hypotheses, methods, and analysis plans before data collection to prevent p hacking and HARKing.

89

Registered Reports

Journals accept studies for publication based on methodological quality before results are known, reducing publication bias.

90

Open Science Practices

Making data, analysis code, and materials publicly available for verification and reanalysis.

91

Comprehensive Reporting

Requiring full disclosure of all analyses, experimental conditions, and results to ensure transparency.

92

Limitations of Narrative Reviews

Traditional literature reviews rely on subjective impressions, which can lead to bias and inconsistency.

93

Qualitative Impressions vs. Statistical Aggregation

Reviewers form qualitative impressions rather than aggregating data statistically.

94

Variability in Primary Studies

Small primary studies have high variability, making it hard to detect true effects.

95

Statistical Tools for Interpretation

Differences in study findings are difficult to interpret without statistical tools.

96

Moderator Analysis

Investigates factors that influence effect sizes across studies, such as differences in study design, participant characteristics, or measurement techniques.

97

Publication Bias

Studies with significant results are more likely to be published, skewing the literature.

98

Funnel Plots

A visual diagnostic tool for detecting asymmetry in study distribution, which may indicate missing studies.

99

Trim-and-Fill Method

A statistical adjustment to estimate the true effect size in the presence of publication bias.

100

The Eight Core Elements of a Meta-Analysis

Include 1) Clearly defined research question, 2) Systematic literature search, 3) Effect size extraction, 4) Weighting of studies, 5) Computation of summary effect size, 6) Assessment of heterogeneity, 7) Moderator analysis, 8) Evaluation of publication bias.

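As an illustration of elements 4 and 5 (weighting of studies and the summary effect), a minimal fixed-effect sketch (Python with NumPy; the study effect sizes and variances are hypothetical):

```python
import numpy as np

d = np.array([0.30, 0.45, 0.10, 0.55, 0.25])       # hypothetical study effect sizes
v = np.array([0.020, 0.050, 0.015, 0.080, 0.030])  # their sampling variances

w = 1 / v                               # inverse-variance weights: precise studies count more
summary = np.sum(w * d) / np.sum(w)     # weighted summary effect size
se = np.sqrt(1 / np.sum(w))
print(f"summary d = {summary:.2f}, 95% CI = [{summary - 1.96*se:.2f}, {summary + 1.96*se:.2f}]")
```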