Exam 1 - 203


105 Terms

1
New cards

Statistics as Social Constructions

Subjective choices in defining variables, measurement, and sampling shape statistical findings.

2
New cards

Critical Evaluation of Statistics

Involves identifying issues like bad statistics, mutant statistics, soft statistics, and the concept of the dark figure.

3
New cards

Conceptual Definitions

Abstract meanings defining variables in research.

4
New cards

Operational Definitions

Measurable specifications for defining variables in research.

5
New cards

Sampling Bias

Systematic differences between a sample and the population.

6
New cards

Selection Bias

Occurs when certain individuals are more likely to be included in a sample.

7
New cards

Volunteer Bias

Participants who volunteer differ from those who do not volunteer for a study.

8
New cards

Convenience Sampling Bias

Bias that arises when participants are chosen because they are easy to reach, limiting the diversity of the sample.

9
New cards

Undercoverage Bias

When certain groups are underrepresented in the sample.

10
New cards

Nonresponse Bias

Occurs when participants who decline to respond differ systematically from those who participate.

11
New cards

Survivorship Bias

Only the outcomes of 'survivors' are analyzed, overlooking non-survivors.

12
New cards

Healthy User Bias

Participants in research are generally healthier than the general population.

13
New cards

Recall Bias

Inaccuracies in data stemming from participants' memories.

14
New cards

Sampling Error

Random differences between a sample statistic and the true population parameter that reduce with larger sample sizes.
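A quick simulation can make this concrete. The sketch below draws repeated samples from a hypothetical normal population (mean 100, SD 15 — illustrative values, not from the cards) and shows that the spread of sample means shrinks as the sample size grows:

```python
import random
import statistics

random.seed(42)
# Hypothetical population for illustration: mean 100, SD 15
population = [random.gauss(100, 15) for _ in range(100_000)]

def se_of_mean(sample_size, n_reps=2000):
    """Empirical standard error: the SD of many sample means."""
    means = [
        statistics.mean(random.sample(population, sample_size))
        for _ in range(n_reps)
    ]
    return statistics.stdev(means)

se_small = se_of_mean(10)    # wide spread of sample means
se_large = se_of_mean(100)   # narrower spread: error shrinks with n
```

Larger samples give sample means that cluster more tightly around the population mean, which is exactly why sampling error decreases with sample size.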

15
New cards

Measurement Scales

Types include nominal, ordinal, interval, and ratio scales.

16
New cards

Categorical vs. Continuous Data

Categorical data (nominal, sometimes ordinal) vs. continuous data (sometimes ordinal, interval, ratio); determines choice of summary statistics and visualizations.

17
New cards

Describing Categorical Data

Use frequency distributions and bar charts; the mode is used as a measure of central tendency.

18
New cards

Describing Continuous Data

Use histograms or density plots to assess the shape of the distribution.

19
New cards

Symmetric Distributions

Distributions where the mean and median coincide (and equal the mode when the distribution is unimodal), such as the normal or uniform distribution.

20
New cards

Asymmetric Distributions

Distributions characterized by skewness with mean differing from the median.

21
New cards

z Scores

Standardized scores that allow for meaningful comparisons between different distributions.
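The standardization formula is z = (x − mean) / SD. A minimal sketch (with made-up exam scores) showing how z scores let you compare results from two differently scaled distributions:

```python
def z_score(x, mean, sd):
    """Standardize a raw score: distance from the mean in SD units."""
    return (x - mean) / sd

# Hypothetical scores on two tests with different scales:
z_math = z_score(80, mean=70, sd=5)     # 2.0 SDs above the mean
z_verbal = z_score(85, mean=75, sd=10)  # 1.0 SD above the mean
```

Although 85 is the higher raw score, the math score of 80 is more exceptional relative to its own distribution.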

22
New cards

Normal Distribution

A symmetric, bell-shaped distribution where most scores cluster around the mean.

23
New cards

p-value

The probability of obtaining a result at least as extreme as the observed data if the null hypothesis is true.
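For a z statistic, a two-tailed p-value can be computed from the standard normal CDF, which the stdlib `math.erf` function gives directly. A small sketch:

```python
from math import erf, sqrt

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

def two_tailed_p(z):
    """Probability of a result at least as extreme as z if H0 is true."""
    return 2 * (1 - phi(abs(z)))

p = two_tailed_p(1.96)  # ≈ 0.05, the conventional significance boundary
```

Larger observed statistics yield smaller p-values, meaning the data would be more surprising under the null hypothesis.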

24
New cards

Type I Error

Rejecting the null hypothesis when it is actually true.

25
New cards

Type II Error

Retaining the null hypothesis when it is actually false.

26
New cards

Effect Size

A measure of the magnitude of an observed difference or relationship in research.

27
New cards

Cohen's d

A standardized measure of effect size indicating mean differences expressed in standard deviation units.
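Cohen's d divides the difference between group means by the pooled standard deviation. A self-contained sketch with two small hypothetical groups:

```python
import statistics

def cohens_d(group1, group2):
    """Mean difference expressed in pooled-standard-deviation units."""
    n1, n2 = len(group1), len(group2)
    v1, v2 = statistics.variance(group1), statistics.variance(group2)
    pooled_sd = (((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)) ** 0.5
    return (statistics.mean(group1) - statistics.mean(group2)) / pooled_sd

d = cohens_d([5, 6, 7, 8], [3, 4, 5, 6])  # a large effect by Cohen's benchmarks
```

Because d is standardized, the same value means the same relative separation between groups regardless of the original measurement units.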

28
New cards

Statistical Power

The probability of correctly rejecting a false null hypothesis.

29
New cards

Confidence Intervals

Quantitative ranges that estimate the precision of sample estimates for population parameters.
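A common large-sample approximation is mean ± 1.96 × SE for a 95% CI. A sketch with hypothetical scores (using the normal critical value 1.96, which assumes a reasonably large sample):

```python
from math import sqrt
import statistics

def ci_95(sample):
    """Approximate 95% CI for the population mean: mean ± 1.96 * SE."""
    m = statistics.mean(sample)
    se = statistics.stdev(sample) / sqrt(len(sample))
    return m - 1.96 * se, m + 1.96 * se

scores = [10, 12, 11, 13, 9, 11, 12, 10, 11, 12]  # hypothetical data
low, high = ci_95(scores)  # a narrower interval = a more precise estimate
```

The width of the interval is the precision estimate: more data or less variability shrinks it.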

30
New cards

Replication Crisis

A situation where many published research findings fail to replicate.

31
New cards

Meta-Analysis

A systematic review method that quantitatively synthesizes research findings to provide objective evidence.

33
New cards

Symmetry, the Mean, and the Median

If the mean and median are equal, the distribution is symmetric; if they differ, the distribution is asymmetric, with the mean pulled toward the tail.
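A tiny worked example with made-up right-skewed data (one large value in the upper tail) shows the mean being pulled toward the tail while the median resists:

```python
import statistics

# Hypothetical right-skewed data: one extreme value in the upper tail
incomes = [30, 32, 35, 38, 40, 45, 200]

mean = statistics.mean(incomes)      # 60: dragged toward the high tail
median = statistics.median(incomes)  # 38: resistant to the outlier
```

The gap between mean (60) and median (38) signals the asymmetry, and its direction tells you which tail is longer.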

34
New cards

Standard Scores

Transform raw scores into a common scale; z scores are the most widely used standard scores.

35
New cards

Estimating vs. Calculating Percentiles

Visual inspection of graphs helps estimate percentiles before using the unit normal table.

36
New cards

Statistical Testing

Moves beyond describing data to evaluating whether observed patterns reflect true effects or random variation.

37
New cards

Null and Alternative Hypotheses

The null hypothesis (H0) assumes no effect, while the alternative hypothesis (H1) suggests a true effect or difference.

38
New cards

Sampling Distributions

Describe the expected variation in sample statistics under the null hypothesis, forming the basis for statistical decision-making.

39
New cards

Alpha Level (α)

The pre-set probability threshold (typically 0.05) for defining statistical significance.

40
New cards

One-Tailed vs. Two-Tailed Tests

One-tailed tests allocate the entire alpha level to one extreme of the sampling distribution, while two-tailed tests split it across both extremes.

41
New cards

Critical Value and Critical Region

The critical value marks the boundary for significance; the critical region consists of results so extreme they would occur with probability less than α under H0.
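The decision rule can be sketched in a few lines, assuming a two-tailed z test at α = 0.05, where the critical value is 1.96:

```python
Z_CRITICAL = 1.96  # two-tailed critical value for alpha = 0.05

def decide(z_statistic, z_crit=Z_CRITICAL):
    """Reject H0 only if the statistic falls in the critical region."""
    return "reject H0" if abs(z_statistic) > z_crit else "retain H0"

decide(2.3)  # in the critical region -> "reject H0"
decide(1.2)  # not extreme enough    -> "retain H0"
```

Any statistic beyond ±1.96 would occur with probability less than 0.05 under H0, which is what places it in the critical region.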

42
New cards

Confidence Intervals as Precision Estimates

Confidence intervals (CIs) quantify the precision of sample estimates by providing a range of plausible values for a population parameter.

43
New cards

Challenges to Reproducibility

Many published research findings fail to replicate, undermining trust in scientific conclusions.

44
New cards

Meta-Analysis as a Systematic Review Method

A quantitative approach that synthesizes research findings to provide a more objective and reproducible summary of evidence.

45
New cards

Effect Size for Associations

Measures the strength of relationships rather than group differences.

46
New cards

Rules of Thumb for Effect Sizes

Guidelines for interpreting effect sizes vary by context.

47
New cards

Contextual Interpretation of Effect Sizes

Even small effects can be meaningful when outcomes are significant.

49
New cards

Factors That Influence Statistical Power

Three main factors determine power: sample size, effect size, and the decision threshold (α level).

51
New cards

Modeling Chance

Establishes expectations for data under the assumption that no real effect exists; comparisons to these expectations determine significance.

53
New cards

Sampling Error

Random variability causes sample statistics to differ from population parameters; statistical tests account for this variability when assessing significance.

56
New cards

One-Tailed vs. Two-Tailed Tests

One-tailed tests allocate the entire alpha level to one extreme of the sampling distribution, while two-tailed tests split it across both extremes; two-tailed tests are standard to avoid bias.

57
New cards

Critical Value and Critical Region

The critical value marks the boundary for significance; the critical region consists of sample results so extreme that they would occur with probability less than α under H0, leading to its rejection.

58
New cards

Interpreting Results

If the test statistic falls inside the critical region, reject H0; the result is statistically significant. If outside, retain H0; the result is not statistically significant.

59
New cards

Statistical vs. Practical Significance

Statistical significance indicates whether results are unlikely due to chance, but does not address whether they are meaningful in practical terms.

60
New cards

Practical Significance

A statistically significant result may lack practical importance due to flawed study design or small effect size.

61
New cards

Effect Size

Measures the magnitude of an observed difference or relationship, independent of sample size.

62
New cards

Why Effect Size Matters

Helps compare findings across studies, interpret unfamiliar metrics, and assess the impact of research results.

63
New cards

Standardized Effect Sizes for Mean Comparisons

Cohen’s d expresses mean differences in standard deviation units, and η² indicates how much variability in the dependent variable is accounted for by group differences.

64
New cards

Effect Sizes for Associations

Measures the strength of relationships rather than group differences; includes Pearson’s correlation coefficient and coefficient of determination.
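Pearson's r can be computed from deviations about each mean, and squaring it gives the coefficient of determination. A sketch with hypothetical perfectly related data:

```python
import statistics

def pearson_r(xs, ys):
    """Strength of the linear association between two variables."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson_r([1, 2, 3, 4], [2, 4, 6, 8])  # perfect positive: r = 1.0
r_squared = r ** 2  # coefficient of determination: shared variance
```

r ranges from −1 to +1, and r² is interpreted as the proportion of variance in one variable accounted for by the other.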

65
New cards

Rules of Thumb for Effect Sizes

Guidelines for interpreting effect sizes vary by context; common benchmarks include Cohen’s d and r values.

66
New cards

Contextual Interpretation of Effect Sizes

Even small effects can be meaningful; costs and benefits, accumulation over time, and generality of effects must be considered.

67
New cards

Statistical Power

The probability of correctly rejecting a false null hypothesis; higher power allows for greater detection of true effects.

68
New cards

Typical Power Levels in Research

Studies often have insufficient power; small effect power is ~0.23, medium effect ~0.62, large effect ~0.84.

69
New cards

Factors That Influence Statistical Power

Sample size, effect size, and decision threshold (α level) determine power.

70
New cards

Power Analysis

Conducted during study planning to ensure adequate power and determine sample size needed.
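A planning-stage sketch, using a normal-approximation power formula for a two-tailed one-sample z test (an illustrative simplification; real power analyses depend on the specific test). It searches for the smallest n that reaches the conventional 0.80 power target:

```python
from math import erf, sqrt

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1 + erf(z / sqrt(2)))

def power_z_test(d, n, z_crit=1.96):
    """Approximate power of a two-tailed one-sample z test
    for standardized effect size d and sample size n."""
    shift = d * sqrt(n)
    return phi(shift - z_crit) + phi(-shift - z_crit)

def n_for_power(d, target=0.80):
    """Smallest n reaching the target power -- the planning question."""
    n = 1
    while power_z_test(d, n) < target:
        n += 1
    return n

n_needed = n_for_power(0.5)  # medium effect: a few dozen participants
```

Running the search shows how quickly required sample size grows as the expected effect shrinks, which is why planning for small effects demands much larger studies.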

71
New cards

How to Maximize Power

Increase sample size, collect more data, use within-subjects designs, measure variables precisely, and avoid dichotomizing continuous variables

73
New cards

False Findings in Research

John Ioannidis argued that most published findings are false due to high false-positive rates, small sample sizes, flexibility in study designs and analyses, conflicts of interest, and competitive research environments.

74
New cards

Replication Crisis

Replication is a cornerstone of science, but replications are rare due to a focus on novelty in academic publishing. Large-scale replication efforts in psychology found that fewer than half of published findings were successfully replicated.

75
New cards

Questionable Research Practices (QRPs)

Researchers engage in flexible analysis and reporting strategies that can increase false positives.

76
New cards

Researcher Degrees of Freedom

The flexibility researchers have in designing studies, analyzing data, and reporting results can lead to inflated false-positive rates.

77
New cards

p Hacking

Conducting multiple analyses and only reporting those that produce significant results.

78
New cards

HARKing (Hypothesizing After the Results are Known)

Presenting post hoc explanations as if they were planned in advance.

79
New cards

Selective Reporting

Failing to report all experimental conditions, variables, or analyses, leading to biased literature.

80
New cards

Bias in Peer Review

Peer review has systemic flaws that can undermine its role as a quality control mechanism.

81
New cards

Volunteer Nature of Reviewing

Reviewers are unpaid, leading to variable effort and care in evaluations.

82
New cards

Anonymity and Accountability in Peer Review

Anonymous reviews encourage honesty but reduce accountability and recognition, leading to inconsistent diligence.

83
New cards

Reviewer Errors

Careful reviewers may fail to detect undisclosed multiple statistical tests or subtle questionable research practices.

84
New cards

Shared and Idiosyncratic Biases

Reviewers may favor research aligned with their own views or overlook methodological flaws in studies that support a preferred narrative.

85
New cards

Resistance to Criticizing Common Practices

Reviewers may avoid critiquing methods they use, such as convenience sampling or reliance on online participant pools.

86
New cards

Strategies to Improve Reproducibility

Encouraging replication studies, pre-registration, registered reports, open science practices, and comprehensive reporting to verify research results.

87
New cards

Replication Studies

Encouraging direct and conceptual replications to verify results.

88
New cards

Pre-Registration

Publicly posting hypotheses, methods, and analysis plans before data collection to prevent p hacking and HARKing.

89
New cards

Registered Reports

Journals accept studies for publication based on methodological quality before results are known, reducing publication bias.

90
New cards

Open Science Practices

Making data, analysis code, and materials publicly available for verification and reanalysis.

91
New cards

Comprehensive Reporting

Requiring full disclosure of all analyses, experimental conditions, and results to ensure transparency.

92
New cards

Limitations of Narrative Reviews

Traditional literature reviews rely on subjective impressions, which can lead to bias and inconsistency.

93
New cards

Qualitative Impressions vs. Statistical Aggregation

Reviewers form qualitative impressions rather than aggregating data statistically.

94
New cards

Variability in Primary Studies

Small primary studies have high variability, making it hard to detect true effects.

95
New cards

Statistical Tools for Interpretation

Differences in study findings are difficult to interpret without statistical tools.

96
New cards

Moderator Analysis

Investigates factors that influence effect sizes across studies, such as differences in study design, participant characteristics, or measurement techniques.

97
New cards

Publication Bias

Studies with significant results are more likely to be published, skewing the literature.

98
New cards

Funnel Plots

A visual diagnostic tool for detecting asymmetry in study distribution, which may indicate missing studies.

99
New cards

Trim-and-Fill Method

A statistical adjustment to estimate the true effect size in the presence of publication bias.

100
New cards

The Eight Core Elements of a Meta-Analysis

1) A clearly defined research question; 2) a systematic literature search; 3) effect size extraction; 4) weighting of studies; 5) computation of a summary effect size; 6) assessment of heterogeneity; 7) moderator analysis; 8) evaluation of publication bias.
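The weighting and summary steps can be sketched with a fixed-effect (inverse-variance) model, where more precise studies get more weight. The three studies below are hypothetical, and fixed-effect weighting is one common choice (random-effects models are the usual alternative when heterogeneity is high):

```python
from math import sqrt

def fixed_effect_summary(effects, variances):
    """Inverse-variance weighted summary effect (fixed-effect model)."""
    weights = [1 / v for v in variances]  # precise studies weigh more
    summary = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = sqrt(1 / sum(weights))  # standard error of the summary effect
    return summary, se

# Three hypothetical studies: (effect size d, sampling variance)
summary, se = fixed_effect_summary([0.40, 0.55, 0.30], [0.04, 0.08, 0.02])
```

The summary lands closest to the most precise study (the one with variance 0.02), which is exactly what inverse-variance weighting is designed to do.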