Reliability


20 Terms

1
New cards

Reliability

refers to consistency of the data collected using a given tool

observed score = true score + measurement error

2
New cards

reliability coefficient

true-score variance / observed-score variance

→ the shared variance between the true score and the observed score

r_tt

>0.90 → excellent reliability

>0.70 → good reliability
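The decomposition on the previous card can be checked numerically. A minimal Python/NumPy sketch, with an illustrative (made-up) sample size and standard deviations:

```python
import numpy as np

# Sketch of the classical test theory model: observed = true + error.
# The reliability coefficient is then estimated as
# true-score variance / observed-score variance.
# n, means and SDs below are illustrative assumptions.
rng = np.random.default_rng(0)
n = 10_000
true_scores = rng.normal(50, 10, n)   # true-score SD = 10 (variance 100)
errors = rng.normal(0, 5, n)          # measurement-error SD = 5 (variance 25)
observed = true_scores + errors

r_tt = true_scores.var() / observed.var()
print(round(r_tt, 2))  # ≈ 100 / 125 = 0.80
```

With these settings the expected value is 100 / (100 + 25) = 0.80, so the simulation recovers the "good reliability" range.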

3
New cards

experiments assessing reliability

  • Test-Retest Agreement

  • Inter-Rater Agreement

  • Agreement between methods

e.g.:

  • Pearson's Correlation Coefficient

  • Intraclass Correlation Coefficient

  • Cronbach's Alpha

4
New cards

test-retest

correlation between repeated applications over time
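As a sketch, test-retest reliability can be computed as the Pearson correlation between two administrations of the same instrument; the scores below are made-up illustrative data:

```python
import numpy as np

# Test-retest reliability: Pearson correlation between scores from
# two administrations of the same instrument (illustrative data).
time1 = np.array([12, 15, 9, 20, 17, 11, 14, 18])
time2 = np.array([13, 14, 10, 19, 18, 10, 15, 17])

r = np.corrcoef(time1, time2)[0, 1]
print(round(r, 3))
```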

5
New cards

equivalent forms, split-half, internal consistency

correlation across different variations of the instrument

6
New cards

inter-rater reliability

correlation from different observers

7
New cards

test-retest choosing intervals

  • too short interval → recall of the previous responses

  • too long interval → change in the subject's state

8
New cards

intraclass correlation coefficient

ICC = n·(MS_patients − MS_error) / [n·MS_patients + k·MS_repetitions + (n·k − n − k)·MS_error]

where n = number of patients, k = number of repetitions, and the MS terms are the mean squares (between patients, between repetitions, error) from a two-way ANOVA
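A sketch of this computation in Python/NumPy, building the two-way ANOVA mean squares from a patients × repetitions score matrix (the data are a widely reproduced illustrative example with ICC(2,1) ≈ 0.29):

```python
import numpy as np

# ICC from the mean-square form on this card:
# ICC = n(BMS - EMS) / (n*BMS + k*JMS + (n*k - n - k)*EMS)
# scores: rows = patients, columns = repetitions (illustrative data).
scores = np.array([
    [9.0, 2.0, 5.0, 8.0],
    [6.0, 1.0, 3.0, 2.0],
    [8.0, 4.0, 6.0, 8.0],
    [7.0, 1.0, 2.0, 6.0],
    [10.0, 5.0, 6.0, 9.0],
    [6.0, 2.0, 4.0, 7.0],
])
n, k = scores.shape
grand = scores.mean()
row_means = scores.mean(axis=1)   # per-patient means
col_means = scores.mean(axis=0)   # per-repetition means

bms = k * ((row_means - grand) ** 2).sum() / (n - 1)  # between patients
jms = n * ((col_means - grand) ** 2).sum() / (k - 1)  # between repetitions
resid = scores - row_means[:, None] - col_means[None, :] + grand
ems = (resid ** 2).sum() / ((n - 1) * (k - 1))        # error mean square

icc = n * (bms - ems) / (n * bms + k * jms + (n * k - n - k) * ems)
print(round(icc, 2))  # → 0.29
```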

9
New cards

variability between patients

each patient is different

10
New cards

variability within patients

measurement error

11
New cards

residual variability

scores vary for other causes

12
New cards

equivalent forms reliability

the agreement between scores when using two alternative instruments

  • eliminates learning effects and memory bias

  • difficult to construct truly equivalent versions

13
New cards

parallelism assumption

scores should have the same variability and possibly the same mean

14
New cards

split-half

dividing the questionnaire into two halves

  • instruments have to have relatively large number of items

  • eliminates the learning effect

  • difficult to construct two halves truly parallel
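A sketch of the split-half procedure: split the items into odd and even halves, correlate the half scores, then step the correlation up with the Spearman-Brown formula (since each half is only half the length of the full scale). The response matrix is simulated for illustration:

```python
import numpy as np

# Split-half reliability with Spearman-Brown correction.
# Simulated data: 200 respondents, 8 noisy items sharing one factor
# (all parameters are illustrative assumptions).
rng = np.random.default_rng(1)
latent = rng.normal(0, 1, (200, 1))
items = latent + rng.normal(0, 1, (200, 8))

odd_half = items[:, 0::2].sum(axis=1)    # items 1, 3, 5, 7
even_half = items[:, 1::2].sum(axis=1)   # items 2, 4, 6, 8
r_half = np.corrcoef(odd_half, even_half)[0, 1]

# Spearman-Brown: predicted reliability at double the length (x = 2)
r_full = 2 * r_half / (1 + r_half)
print(round(r_half, 2), round(r_full, 2))
```

The corrected value r_full is always at least as large as r_half, reflecting the gain from the full-length scale.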

15
New cards

internal consistency

the extent to which items on a psychometric scale are correlated with each other

  • Cronbach’s α coefficient — a conservative (lower-bound) estimate of reliability

16
New cards

Cronbach’s α

α = (k / (k − 1)) · (1 − sum of item variances / variance of the sum), where k is the number of items

  • scales developed to maximize reliability tend to be long
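The α formula can be sketched directly from an items matrix; the data below are simulated for illustration:

```python
import numpy as np

# Cronbach's alpha: alpha = k/(k-1) * (1 - sum(item variances) / var(sum)).
# Simulated data: 300 respondents, 6 items sharing one factor
# (all parameters are illustrative assumptions).
rng = np.random.default_rng(2)
latent = rng.normal(0, 1, (300, 1))
items = latent + rng.normal(0, 1, (300, 6))

def cronbach_alpha(x):
    k = x.shape[1]
    item_vars = x.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = x.sum(axis=1).var(ddof=1)     # variance of the sum score
    return k / (k - 1) * (1 - item_vars / total_var)

alpha = cronbach_alpha(items)
print(round(alpha, 2))
```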

17
New cards

item selection using Cronbach’s α

if Cronbach's α does not decrease when an item is omitted, that item is a candidate for elimination from the scale
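This "α if item deleted" screening can be sketched as follows; the data include one deliberately uncorrelated (pure-noise) item so that the rule flags it:

```python
import numpy as np

# "Alpha if item deleted": recompute Cronbach's alpha with each item
# left out; an item whose removal does not lower alpha is a candidate
# for elimination. Data and threshold logic are illustrative.
def cronbach_alpha(x):
    k = x.shape[1]
    return k / (k - 1) * (1 - x.var(axis=0, ddof=1).sum()
                          / x.sum(axis=1).var(ddof=1))

rng = np.random.default_rng(3)
latent = rng.normal(0, 1, (300, 1))
good = latent + rng.normal(0, 1, (300, 5))  # 5 items on the construct
bad = rng.normal(0, 1, (300, 1))            # 1 pure-noise item
items = np.hstack([good, bad])

base = cronbach_alpha(items)
for i in range(items.shape[1]):
    reduced_alpha = cronbach_alpha(np.delete(items, i, axis=1))
    flag = "candidate for elimination" if reduced_alpha >= base else "keep"
    print(i, round(reduced_alpha, 2), flag)
```

Removing the noise item (the last column) raises α above the baseline, while removing any construct-related item lowers it.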

18
New cards

the Spearman-Brown formula

predicts the gain in α that is expected by increasing the number of items

predicted α = x·α / (x·α + (1 − α)), where x is the factor by which the number of items is multiplied
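A quick worked example of the prediction, starting from an illustrative α = 0.70:

```python
# Spearman-Brown prediction: predicted = x*alpha / (x*alpha + (1 - alpha)),
# where x is the factor by which the number of items is multiplied.
# The starting alpha is an illustrative assumption.
alpha = 0.70
for x in (1, 2, 3):
    predicted = x * alpha / (x * alpha + (1 - alpha))
    print(x, round(predicted, 2))  # 1 -> 0.7, 2 -> 0.82, 3 -> 0.88
```

Doubling the scale lifts a 0.70 reliability to about 0.82, but each further doubling yields a smaller gain.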

19
New cards

internal consistency method

based on the principle that each question in the questionnaire is a parallel form of the other questions

→ the overall reliability of the questionnaire is the average correlation between all possible pairs of items

  • items must be homogeneous

20
New cards

construction of reliable questionnaire

  • Construct simple, precise, concrete questions to reduce ambiguity

  • Increase the length of the scale → random error decreases as the number of items increases (per the Spearman-Brown relationship)

  • Increase the homogeneity of items (statistical packages suggest which questions to eliminate to increase reliability)

  • Control the administration of the questionnaire, which must be done under standard conditions and according to standard procedures