Psychometrics & Research Design - Episodic

Description and Tags

Memorization, lists & technical definitions


23 Terms

1

I² cut-offs

0 - 25% = small

25 - 50% = moderate

50 - 75% = substantial

75 - 100% = large
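
As a worked illustration (not part of the card), a minimal Python sketch of how I² is typically derived from the Q statistic, assuming df = k − 1 for k studies:

```python
def i_squared(q_statistic: float, num_studies: int) -> float:
    """I-squared = (Q - df) / Q * 100, floored at 0, with df = k - 1."""
    df = num_studies - 1
    if q_statistic <= 0:
        return 0.0
    return max(0.0, (q_statistic - df) / q_statistic) * 100

# Example: Q = 20 across 6 studies -> (20 - 5) / 20 = 75% (substantial per the card)
print(i_squared(20.0, 6))  # 75.0
```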

2

GRADE acronym and technical definition

Grading of Recommendations, Assessment, Development and Evaluations, a system used to assess the quality of evidence and the strength of recommendations in healthcare and psychological research.

3

GRADE confidence criteria

Evidence of a dose-response relationship = evidence supporting clinical recommendations.

Effect size = the larger the effect, the less likely that additional research will nullify the evidence for it.

Acceptable amount of bias, as determined by an independent measure of bias.

Confidence is rated as high, moderate, low, or very low (uncertain).

4

PRISMA acronym and technical definition

Preferred Reporting Items for Systematic Reviews and Meta-Analyses, a set of guidelines designed to improve the transparency and quality of systematic reviews and meta-analyses.

5

Criteria for consideration as Evidence-Based Treatment

  1. Random assignment

  2. Intent-to-treat analysis (if applicable)

  3. Two independent replications of significant findings

  4. For single-subject designs: nine or more validated, significant cases

6

GRADE threats to confidence

  1. Inexcusable amount of bias (too high)

  2. Imprecision

  3. Inconsistency

  4. Indirectness

7

Odds ratio effect size cut-offs

0 - 3.5 = small

3.5 - 9.0 = moderate

9.0+ = large
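
A minimal sketch, using a hypothetical 2×2 table of made-up counts, of how an odds ratio is computed before being judged against these cut-offs:

```python
# Hypothetical 2x2 outcome table (counts are invented for illustration):
#                 improved   not improved
# treatment           40             10
# control             20             30
def odds_ratio(a: int, b: int, c: int, d: int) -> float:
    """OR = (a/b) / (c/d) = (a*d) / (b*c)."""
    return (a * d) / (b * c)

print(odds_ratio(40, 10, 20, 30))  # 6.0 -> moderate by the cut-offs above
```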

8

Pearson r strength cut-offs

0 - 0.3 = small

0.3 - 0.5 = moderate
0.5+ = large
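
For illustration, a short sketch using invented paired scores to show where a Pearson r lands against these cut-offs:

```python
import numpy as np

# Hypothetical paired scores (made-up data for illustration).
x = np.array([2, 4, 5, 7, 9], dtype=float)
y = np.array([1, 3, 6, 6, 10], dtype=float)

r = np.corrcoef(x, y)[0, 1]   # Pearson correlation coefficient
print(round(r, 2))            # ~0.96 -> large by the cut-offs above
```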

9

Cohen’s d effect size cut-offs

0 - 0.2 = small
0.2 - 0.8 = moderate
0.8+ = large
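
A minimal sketch, using made-up group means and SDs, of Cohen's d as a standardized mean difference with a pooled SD:

```python
import math

def cohens_d(mean1: float, mean2: float, sd1: float, sd2: float,
             n1: int, n2: int) -> float:
    """d = (M1 - M2) / pooled SD."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

# Hypothetical groups: treatment M=24, SD=5, n=30; control M=20, SD=5, n=30.
print(round(cohens_d(24, 20, 5, 5, 30, 30), 2))  # 0.8 -> large by the cut-offs above
```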

10

Steps for conducting a meta-analysis

  1. Literature review

  2. Coding of variables

  3. Deciding a measure of effect size

  4. Generate a forest plot

  5. Trim & fill (estimate publication bias)

  6. Calculate mean effect size

  7. Calculate fail-safe n (verify publication bias)

  8. Calculate Q statistic or I² (heterogeneity)

  9. Conduct moderator analysis for significant Q statistic as needed
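
A minimal sketch of steps 6 and 8 under an inverse-variance, fixed-effect model, using invented effect sizes and variances; forest plots, trim-and-fill, and fail-safe N would normally come from a dedicated meta-analysis package:

```python
import numpy as np

# Hypothetical per-study effect sizes (d) and variances; values are illustrative only.
effects   = np.array([0.30, 0.55, 0.20, 0.70, 0.45])
variances = np.array([0.04, 0.06, 0.05, 0.08, 0.03])

weights = 1.0 / variances                                # inverse-variance weights
mean_es = np.sum(weights * effects) / np.sum(weights)    # step 6: mean effect size
q = np.sum(weights * (effects - mean_es) ** 2)           # step 8: Q statistic
df = len(effects) - 1
i_sq = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0    # step 8: I²

# ≈ 0.41, 2.61, 0.0 for these made-up values (no heterogeneity beyond chance)
print(round(mean_es, 2), round(q, 2), round(i_sq, 1))
```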

11

Measures of effect size in meta-analysis

  1. Cohen’s d

  2. Pearson’s r

  3. Hedges' g

  4. Odds ratio

12

Hedges' g cut-offs

0 - 0.2 = small

0.2 - 0.8 = moderate

0.8+ = large
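
A minimal sketch of the small-sample correction that turns Cohen's d into Hedges' g, using one common approximation of the correction factor:

```python
def hedges_g(d: float, n1: int, n2: int) -> float:
    """Hedges' g: Cohen's d with a small-sample bias correction."""
    correction = 1 - 3 / (4 * (n1 + n2) - 9)
    return d * correction

# With d = 0.80 from two groups of 10, g shrinks slightly:
print(round(hedges_g(0.80, 10, 10), 3))  # ~0.766
```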

13

Symbols utilized in depicting experimental designs

X = manipulation of independent variable

X_P = placebo, no manipulation of independent variable

X_TAU = treatment as usual, standard treatment

X_T = active component of treatment only

X_T+A = combined control group

X_full = full dose of treatment

X_minus = dismantled control group

O = observation/measurement

R = random assignment

Non-R = no random assignment

14

Manipulation checks - examples and technical definition

Methods of verifying that manipulations of the independent variable are being implemented as intended. Examples include:

  1. Training of clinicians

  2. Regular supervision of clinicians

  3. Audio/video recording of clinicians

  4. Tests for competency of clinicians

15

Threats to statistical conclusion validity

  1. Family-wise error (Type I error)

  2. Restricted range of dependent variable (e.g. floor/ceiling effect)

  3. Outliers in the data

  4. Model misspecification

  5. Uncontrolled confounds

  6. Subject heterogeneity (Type II error)

16

Types of non-probability sampling

  1. Convenience - intact groups

  2. Availability - self-selection of participants

  3. Screened - criterion/threshold

  4. Snowball - self-selection of participants through peers of other participants

  5. Quota - non-probability stratified sampling

17

Methods for addressing confounding variables

  1. Random assignment - distributes error caused by the confound non-systematically across groups

  2. Holding constant - restricts the confound to a single level across all groups

  3. Matching - enables confounds to be held constant across groups at more than one level

  4. Operationalization - treats confound like an independent variable, establishes levels

  5. Split the file - dataset divided into segments based on levels of confound

18

What to assess in a confirmatory factor analysis table

  1. χ² value (lower = better)

  2. Goodness-of-fit indices (higher = better, 1.00 = max)

  3. Comparative fit indices, e.g. Tucker-Lewis (higher = better, 1.00 = max)

  4. RMSEA (lower = better, 0.00 = min)

  5. Information criterion (lower = better)
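
For illustration, a minimal sketch of two of these indices (RMSEA and CFI) computed from hypothetical model and baseline χ² values; exact formulas vary slightly across software:

```python
import math

def rmsea(chi2_m: float, df_m: int, n: int) -> float:
    """Root mean square error of approximation (one common formulation)."""
    return math.sqrt(max(0.0, (chi2_m - df_m) / (df_m * (n - 1))))

def cfi(chi2_m: float, df_m: int, chi2_b: float, df_b: int) -> float:
    """Comparative fit index relative to the baseline (null) model."""
    num = max(chi2_m - df_m, 0.0)
    den = max(chi2_b - df_b, chi2_m - df_m, 0.0)
    return 1.0 - num / den

# Hypothetical model: chi2 = 85 on df = 40, N = 300; baseline chi2 = 900 on df = 55.
print(round(rmsea(85, 40, 300), 3))    # ~0.061 (lower = better)
print(round(cfi(85, 40, 900, 55), 3))  # ~0.947 (higher = better, max 1.00)
```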

19

Most common standard score systems include

z-scores: mean = 0, SD = 1

MMPI T-scores: mean = 50, SD = 10

WAIS/WISC IQ scores: mean = 100, SD = 15

Wechsler subscale scores: mean = 10, SD = 3

Academic achievement scores: mean = 500, SD = 100
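
A minimal sketch of converting a z-score into the other standard-score metrics listed above (the conversion is mean + z × SD):

```python
def from_z(z: float, mean: float, sd: float) -> float:
    """Convert a z-score to another standard-score metric."""
    return mean + z * sd

z = 1.0  # one SD above the mean
print(from_z(z, 50, 10))    # T-score            = 60
print(from_z(z, 100, 15))   # Wechsler IQ        = 115
print(from_z(z, 10, 3))     # subscale score     = 13
print(from_z(z, 500, 100))  # achievement score  = 600
```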

20

List the common comparative fit indices

  1. Tucker-Lewis index (TLI)

  2. Comparative Fit Index (CFI)

  3. Normed Fit Index (NFI)

  4. Incremental Fit Index (IFI)

21

List the common information criteria

  1. Akaike Information Criterion (AIC)

  2. Consistent Akaike Information Criterion (CAIC)

  3. Bayesian Information Criterion (BIC)

  4. Hannan-Quinn Information Criterion (HQIC)
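
For illustration, a minimal sketch of these criteria computed from a model's log-likelihood, number of parameters k, and sample size n (all values invented); lower values indicate better relative fit:

```python
import math

def aic(log_lik: float, k: int) -> float:
    return 2 * k - 2 * log_lik

def bic(log_lik: float, k: int, n: int) -> float:
    return k * math.log(n) - 2 * log_lik

def caic(log_lik: float, k: int, n: int) -> float:
    return k * (math.log(n) + 1) - 2 * log_lik

def hqic(log_lik: float, k: int, n: int) -> float:
    return 2 * k * math.log(math.log(n)) - 2 * log_lik

# Hypothetical model: log-likelihood = -250, k = 6 parameters, n = 200 cases.
ll, k, n = -250.0, 6, 200
print("AIC ", round(aic(ll, k), 1))
print("BIC ", round(bic(ll, k, n), 1))
print("CAIC", round(caic(ll, k, n), 1))
print("HQIC", round(hqic(ll, k, n), 1))
```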

22

List the kappa cut-offs and meanings

-1.00 to 0.00 = poor agreement

0.01 - 0.20 = slight agreement

0.21 - 0.40 = fair agreement

0.41 - 0.60 = moderate agreement

0.61 - 0.80 = substantial agreement

0.81 - 1.00 = almost perfect agreement
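
A minimal sketch of Cohen's kappa for two raters and nominal categories, using invented ratings, to show how a value is obtained before applying these cut-offs:

```python
from collections import Counter

def cohens_kappa(rater_a: list, rater_b: list) -> float:
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical diagnostic ratings from two clinicians (illustrative data only).
a = ["yes", "yes", "no", "no", "yes", "no", "yes", "no", "yes", "no"]
b = ["yes", "no",  "no", "no", "yes", "no", "yes", "no", "yes", "yes"]
print(round(cohens_kappa(a, b), 2))  # 0.6 -> moderate agreement
```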

23

CONSORT acronym

Consolidated Standards of Reporting Trials, a system that standardizes how data from randomized clinical trials are reported.