Research Design - Episodic

Description

Memorization/rehearsal-based, lists & technical definitions

18 Terms

1

I² cut-offs

0 - 25% = small

25 - 50% = moderate

50 - 75% = substantial

75 - 100% = large
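
A minimal sketch of how I² relates to Cochran's Q, assuming the standard Higgins & Thompson formula with df = k - 1 for k studies (the function name and example values below are illustrative):

```python
def i_squared(q: float, df: int) -> float:
    """Percentage of between-study variation due to heterogeneity
    rather than chance: I^2 = (Q - df) / Q, truncated at zero."""
    if q <= 0:
        return 0.0
    return max(0.0, (q - df) / q) * 100

# Hypothetical example: Q = 10 across k = 5 studies (df = 4)
print(i_squared(10, 4))  # 60.0 -> "substantial" by the cut-offs above
```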

2

GRADE acronym and technical definition

Grading of Recommendations, Assessment, Development and Evaluations, a system used to assess the quality of evidence and the strength of recommendations in healthcare and psychological research.

3

GRADE confidence criteria

Evidence of a dose-response relationship = evidence supporting clinical recommendations.

Effect size = how likely it is that additional research will nullify the evidence of a dose-response relationship.

Acceptable amount of bias, as determined by an independent measure of bias.

Confidence is rated as high, moderate, low, or very low (uncertain).

4

PRISMA acronym and technical definition

Preferred Reporting Items for Systematic Reviews and Meta-Analyses, a set of guidelines designed to improve the transparency and quality of systematic reviews and meta-analyses.

5

Criteria for consideration as Evidence-Based Treatment

  1. Random assignment

  2. Intent-to-treat analysis (if applicable)

  3. Two independent replications of significant findings

  4. For single-subject designs: nine or more validated, significant cases

6

GRADE threats to confidence

  1. Inexcusable amount of bias (too high)

  2. Imprecision

  3. Inconsistency

  4. Indirectness

7

Odds ratio effect size cut-offs

0 - 3.5 = small

3.5 - 9.0 = moderate

9.0+ = large
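
A minimal sketch of computing an odds ratio from a 2 x 2 outcome table (the cell counts below are hypothetical):

```python
def odds_ratio(a: int, b: int, c: int, d: int) -> float:
    """Odds ratio from a 2x2 table:
    a = treatment, outcome present;  b = treatment, outcome absent
    c = control, outcome present;    d = control, outcome absent
    OR = (a / b) / (c / d) = (a * d) / (b * c)."""
    return (a * d) / (b * c)

# Hypothetical counts: 30/10 in the treatment group vs. 15/25 in the control group
print(odds_ratio(30, 10, 15, 25))  # 5.0 -> "moderate" by the cut-offs above
```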

8

Pearson r strength cut-offs

0 - 0.3 = small

0.3 - 0.5 = moderate

0.5+ = large
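
A minimal sketch of computing Pearson's r from paired scores (the data below are hypothetical):

```python
import statistics as st

def pearson_r(x: list, y: list) -> float:
    """Pearson correlation: sample covariance of x and y divided by
    the product of their sample standard deviations."""
    mx, my = st.mean(x), st.mean(y)
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / (len(x) - 1)
    return cov / (st.stdev(x) * st.stdev(y))

# Hypothetical paired scores on two measures
print(round(pearson_r([1, 2, 3, 4, 5], [2, 1, 4, 3, 5]), 2))  # 0.8 -> "large" by the cut-offs above
```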

9

Cohen’s d effect size cut-offs

0 - 0.2 = small

0.2 - 0.8 = moderate

0.8+ = large
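
A minimal sketch of computing Cohen's d from group summary statistics (the values below are hypothetical):

```python
import math

def cohens_d(m1: float, s1: float, n1: int, m2: float, s2: float, n2: int) -> float:
    """Cohen's d: difference in group means divided by the pooled standard deviation."""
    pooled_var = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)
    return (m1 - m2) / math.sqrt(pooled_var)

# Hypothetical treatment vs. control summary statistics
print(cohens_d(m1=105, s1=10, n1=30, m2=100, s2=10, n2=30))  # 0.5 -> "moderate" by the cut-offs above
```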

10

Steps for conducting a meta-analysis

  1. Literature review

  2. Coding of variables

  3. Deciding a measure of effect size

  4. Generate a forest plot

  5. Trim & fill (estimate publication bias)

  6. Calculate mean effect size

  7. Calculate fail-safe n (verify publication bias)

  8. Calculate Q statistic or I² (heterogeneity; see the sketch after this list)

  9. Conduct moderator analysis for significant Q statistic as needed
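
A minimal fixed-effect sketch of steps 6 and 8 (inverse-variance-weighted mean effect size, Cochran's Q, and I²), assuming each study contributes one effect size and its variance; the values below are hypothetical:

```python
def fixed_effect_summary(effects: list, variances: list) -> tuple:
    """Inverse-variance-weighted mean effect size (step 6) plus
    Cochran's Q and I-squared heterogeneity (step 8)."""
    weights = [1 / v for v in variances]
    mean_es = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    q = sum(w * (e - mean_es) ** 2 for w, e in zip(weights, effects))
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return mean_es, q, i2

# Hypothetical per-study effect sizes (e.g. Hedges' g) and their variances
effects = [0.30, 0.55, 0.20, 0.45]
variances = [0.04, 0.05, 0.06, 0.03]
print(fixed_effect_summary(effects, variances))
```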

11

Measures of effect size in meta-analysis

  1. Cohen’s d

  2. Pearson’s r

  3. Hedge’s g

  4. Odds ratio

12

Hedge’s g cut-offs

0 - 0.2 = small

0.2 - 0.8 = moderate

0.8+ = large
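
A minimal sketch of Hedges' g as Cohen's d with a small-sample correction (the d and group sizes below are hypothetical):

```python
def hedges_g(d: float, n1: int, n2: int) -> float:
    """Hedges' g: Cohen's d multiplied by the small-sample correction
    factor J = 1 - 3 / (4 * (n1 + n2) - 9)."""
    j = 1 - 3 / (4 * (n1 + n2) - 9)
    return d * j

# Hypothetical d = 0.5 from two groups of 10 participants each
print(round(hedges_g(0.5, 10, 10), 2))  # 0.48 -> "moderate" by the cut-offs above
```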

13

Symbols utilized in depicting experimental designs

X = manipulation of independent variable

X_P = placebo, no manipulation of independent variable

X_TAU = treatment as usual, standard treatment

X_T = active component of treatment only

X_T+A = combined control group

X_full = full dose of treatment

X_minus = dismantled control group

O = observation/measurement

R = random assignment

Non-R = no random assignment
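
As a hypothetical illustration of how these symbols combine, a randomized pretest-posttest design comparing treatment with placebo could be depicted as:

R O X O

R O X_P O

Each row is a group, read left to right: random assignment, pretest observation, manipulation (or placebo), posttest observation.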

14

Manipulation checks - examples and technical definition

Methods of verifying that manipulations of the independent variable are implemented as intended; examples include:

  1. Training of clinicians

  2. Regular supervision of clinicians

  3. Audio/video recording of clinicians

  4. Tests for competency of clinicians

15

Threats to statistical conclusion validity

  1. Family-wise error (Type I error; see the sketch after this list)

  2. Restricted range of dependent variable (e.g. floor/ceiling effect)

  3. Outliers in the data

  4. Model misspecification

  5. Uncontrolled confounds

  6. Subject heterogeneity (Type II error)
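
A minimal sketch of why uncorrected multiple testing inflates family-wise error, assuming m independent tests each run at the same alpha (the alpha and m below are hypothetical):

```python
def familywise_error_rate(alpha: float, m: int) -> float:
    """Probability of at least one Type I error across m independent tests:
    FWER = 1 - (1 - alpha)^m."""
    return 1 - (1 - alpha) ** m

# Hypothetical example: 10 independent tests, each at alpha = .05
print(round(familywise_error_rate(0.05, 10), 2))  # 0.4 -> about a 40% chance of at least one false positive
```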

16

Types of non-probability sampling

  1. Convenience - intact groups

  2. Availability - self-selection of participants

  3. Screened - criterion/threshold

  4. Snowball - self-selection of participants through peers of other participants

  5. Quota - non-probability stratified sampling

17

Methods for addressing confounding variables

  1. Random assignment - spreads error caused by the confound non-systematically across groups

  2. Holding constant - dimensionally reducing the confound to one level across groups

  3. Matching - enables confounds to be held constant across groups at more than one level

  4. Operationalization - treats confound like an independent variable, establishes levels

  5. Split the file - dataset divided into segments based on levels of confound

18

Types of error

Systematic - bias, confounds, covariates

Unsystematic - random variation in participants or conditions, noise