Memorization, lists & technical definitions
I² cut-offs
0 - 25% = small
25 - 50% = moderate
50 - 75% = substantial
75 - 100% = large
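For reference, the I² percentage is derived from Cochran’s Q (standard Higgins & Thompson definition, not part of the card itself): I² = max(0, (Q − (k − 1)) / Q) × 100%, where k is the number of studies pooled.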
GRADE acronym and technical definition
Grading of Recommendations, Assessment, Development and Evaluations, a system used to assess the quality of evidence and the strength of recommendations in healthcare and psychological research.
GRADE confidence criteria
Evidence of a dose-response relationship (strengthens support for the clinical recommendation)
Large effect size (reduces the likelihood that additional research will overturn the finding)
Acceptable risk of bias, as determined by an independent assessment of bias
Confidence is either high, moderate, low, or very low (uncertain)
PRISMA acronym and technical definition
Preferred Reporting Items for Systematic Reviews and Meta-Analyses, a set of guidelines designed to improve the transparency and quality of systematic reviews and meta-analyses.
Criteria for consideration as Evidence-Based Treatment
Random assignment
Intent-to-treat analysis (if applicable)
Two independent replications of significant findings
For single-subject designs: nine or more validated, significant cases
GRADE threats to confidence
Unacceptable risk of bias (too high)
Imprecision
Inconsistency
Indirectness
Odds ratio effect size cut-offs
0 - 3.5 = small
3.5 - 9.0 = moderate
9.0+ = large
Pearson r strength cut-offs
0 - 0.3 = small
0.3 - 0.5 = moderate
0.5+ = large
Cohen’s d effect size cut-offs
0 - 0.2 = small
0.2 - 0.8 = moderate
0.8+ = large
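For reference (standard definition, not from the card): Cohen’s d = (M1 − M2) / pooled SD, i.e. the difference between two group means expressed in pooled standard deviation units.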
Steps for conducting a meta-analysis
Literature review
Coding of variables
Deciding on a measure of effect size
Generate a forest plot
Trim and fill (estimate and adjust for publication bias)
Calculate the mean effect size (see the sketch after this list)
Calculate fail-safe N (assess robustness to publication bias)
Calculate Q statistic or I² (heterogeneity)
Conduct moderator analyses as needed when the Q statistic is significant
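A minimal sketch of the later computational steps (mean effect size, Q, I²) under a fixed-effect, inverse-variance model. The effect sizes and variances are hypothetical, and a real analysis would normally use a dedicated meta-analysis package; this is only meant to show how the quantities relate.

```python
# Hypothetical per-study effect sizes (e.g., Cohen's d) and sampling variances.
effect_sizes = [0.30, 0.55, 0.20, 0.45]
variances = [0.04, 0.05, 0.03, 0.06]

# Inverse-variance weights under a fixed-effect model.
weights = [1 / v for v in variances]

# Weighted mean effect size.
mean_es = sum(w * es for w, es in zip(weights, effect_sizes)) / sum(weights)

# Cochran's Q: weighted sum of squared deviations from the mean effect.
q = sum(w * (es - mean_es) ** 2 for w, es in zip(weights, effect_sizes))
df = len(effect_sizes) - 1

# I^2: percentage of total variation attributable to between-study heterogeneity.
i_squared = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

print(f"Mean effect size = {mean_es:.2f}")
print(f"Q = {q:.2f} (df = {df}), I^2 = {i_squared:.1f}%")
```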
Measures of effect size in meta-analysis
Cohen’s d
Pearson’s r
Hedges’ g
Odds ratio
Hedges’ g cut-offs
0 - 0.2 = small
0.2 - 0.8 = moderate
0.8+ = large
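As a companion to the two cut-off cards above, a minimal Python sketch of how Cohen’s d and Hedges’ g relate. The group statistics are illustrative, and the correction factor is the standard small-sample approximation 1 − 3/(4N − 9).

```python
import math

def cohens_d(m1, sd1, n1, m2, sd2, n2):
    # Pooled standard deviation across the two groups
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    # Hedges' g = Cohen's d multiplied by a small-sample correction factor
    d = cohens_d(m1, sd1, n1, m2, sd2, n2)
    correction = 1 - 3 / (4 * (n1 + n2) - 9)
    return d * correction

# Illustrative group statistics: treatment (M=24, SD=5, n=20) vs. control (M=20, SD=5, n=20)
print(round(cohens_d(24.0, 5.0, 20, 20.0, 5.0, 20), 3))  # 0.8 -> "large" by the cut-offs above
print(round(hedges_g(24.0, 5.0, 20, 20.0, 5.0, 20), 3))  # ~0.784 after correction
```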
Symbols utilized in depicting experimental designs
X = manipulation of independent variable
Xp = placebo, no manipulation of independent variable
XTAU = treatment as usual, standard treatment
XT = active component of treatment only
XT + A = combined control group
Xfull = full dose of treatment
Xminus = dismantled control group
O = observation/measurement
R = random assignment
Non-R = no random assignment
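For example (an illustration using only the symbols above), a randomized pretest-posttest comparison of a full treatment against a placebo control could be depicted as:
R O Xfull O
R O Xp O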
Manipulation checks - examples and technical definition
Methods of verifying that manipulations of the independent variable are being implemented as intended. Examples include:
Training of clinicians
Regular supervision of clinicians
Audio/video recording of clinicians
Tests for competency of clinicians
Threats to statistical conclusion validity
Family-wise error (Type I error)
Restricted range of dependent variable (e.g. floor/ceiling effect)
Outliers in the data
Model misspecification
Uncontrolled confounds
Subject heterogeneity (Type II error)
Types of non-probability sampling
Convenience - intact groups
Availability - self-selection of participants
Screened - criterion/threshold
Snowball - self-selection of participants through peers of other participants
Quota - non-probability stratified sampling
Methods for addressing confounding variables
Random assignment - distributes error caused by the confound non-systematically across groups
Holding constant - restricting the confound to a single level across all groups
Matching - enables the confound to be held constant across groups at more than one level
Operationalization - treats the confound like an independent variable, with defined levels
Split the file - dataset divided into segments based on levels of the confound (see the sketch after this list)
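A minimal sketch of the "split the file" approach in Python with pandas; the dataset, column names, and values are hypothetical and only illustrate analyzing the group difference separately at each level of the confound.

```python
import pandas as pd

# Hypothetical dataset: outcome by treatment group, with gender as a potential confound
df = pd.DataFrame({
    "group":   ["tx", "tx", "ctrl", "ctrl", "tx", "ctrl"],
    "gender":  ["F", "M", "F", "M", "M", "F"],
    "outcome": [12, 9, 8, 7, 10, 6],
})

# "Split the file": compute group means separately within each level of the confound
for level, subset in df.groupby("gender"):
    group_means = subset.groupby("group")["outcome"].mean()
    print(level, group_means.to_dict())
```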
What to assess in a confirmatory factor analysis table
χ² value (lower = better)
Goodness-of-fit indices (higher = better, 1.00 = max)
Comparative fit indices, e.g. CFI and Tucker-Lewis (higher = better, 1.00 = max)
RMSEA (lower = better, 0.00 = min)
Information criterion (lower = better)
Most common standard score systems include
z-scores: mean = 0, SD = 1
MMPI T-scores: mean = 50, SD = 10
WAIS/WISC IQ scores: mean = 100, SD = 15
Wechsler subscale scores: mean = 10, SD = 3
Academic achievement scores: mean = 500, SD = 100
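Because each system is a linear transformation of z, scores convert directly via standard score = mean + (SD × z); e.g. (illustrative arithmetic), z = +1.0 corresponds to T = 60, a Wechsler IQ of 115, a subscale score of 13, and an academic achievement score of 600.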
List the common comparative fit indices
Tucker-Lewis index (TLI)
Comparative Fit Index (CFI)
Normed Fit Index (NFI)
Incremental Fit Index (IFI)
List the common information criteria
Akaike Information Criterion (AIC)
Consistent Akaike Information Criterion (CAIC)
Bayesian Information Criterion (BIC)
Hannan-Quinn Information Criterion (HQIC)
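For reference (standard definitions, not from the card): AIC = 2k − 2ln(L) and BIC = k·ln(n) − 2ln(L), where k is the number of estimated parameters, n the sample size, and L the maximized likelihood; in both cases lower values indicate better relative fit.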
List the kappa cut-offs and meanings
-1.00 - 0.00 = poor agreement
0.01 - 0.20 = slight agreement
0.21 - 0.40 = fair agreement
0.41 - 0.60 = moderate agreement
0.61 - 0.80 = substantial agreement
0.81 - 1.00 = almost perfect agreement
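For reference, Cohen’s kappa is κ = (p_o − p_e) / (1 − p_e), where p_o is the observed proportion of agreement and p_e the agreement expected by chance; e.g. (illustrative numbers), p_o = 0.80 and p_e = 0.50 give κ = 0.60, "moderate agreement" by the cut-offs above.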
CONSORT acronym
Consolidated Standards of Reporting Trials, a set of guidelines that standardizes how data from randomized controlled trials (RCTs) are reported.