Chapter 9 - Reliability of Measurements


44 Terms

1

reliability

the extent to which a measured value can be obtained consistently during repeated assessment of an unchanging behavior; can be conceptualized as reproducibility or dependability (consistent responses under stable conditions); reliability estimates how much of a measure is attributable to an accurate reading and how much to error

2

classical measurement theory

an observed score can be thought of as a function of a fixed, true score (which is unknown) plus or minus an unknown error component: observed score = true score ± error
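The decomposition above can be made concrete with a short simulation; a minimal sketch (the true score and error values below are illustrative — in real measurement both are unknown):

```python
# Classical measurement theory: observed score = true score +/- error.
# In practice the true score and the error are both unknown; here we
# simulate them (illustrative values) to make the decomposition concrete.
import random

random.seed(0)

true_score = 50.0                                   # fixed, hypothetical true score
errors = [random.gauss(0, 2) for _ in range(5)]     # random error component
observed = [true_score + e for e in errors]         # what we actually record

for x, e in zip(observed, errors):
    sign = "+" if e >= 0 else "-"
    print(f"observed = {x:6.2f}  (true 50.00 {sign} {abs(e):.2f})")
```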

3

measurement error

any difference between the true value and the observed value

4

systematic errors

a type of measurement error; predictable, constant errors in the same direction; not considered a statistical problem for reliability, but can hurt validity; an example is a tape measure that is incorrectly marked

5

random errors

a type of measurement error; unpredictable errors due to chance or variability; affects reliability by moving values further from the true value; over- and underestimates should occur with equal frequency over the long run, so averaging trials helps; examples: errors due to fatigue, mechanical inaccuracies, a patient moving during a height measurement
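Because random over- and underestimates cancel over the long run, averaging trials pulls the result toward the true value; a minimal simulation sketch (the height value and noise level are illustrative):

```python
# Random error is unpredictable, but over- and underestimates occur
# with roughly equal frequency, so the mean of repeated trials lands
# closer to the true value than a typical single trial does.
import random

random.seed(42)
TRUE_VALUE = 170.0      # hypothetical true height in cm
NOISE_SD = 1.5          # random error (fatigue, movement, etc.)

trials = [TRUE_VALUE + random.gauss(0, NOISE_SD) for _ in range(100)]
single_trial_error = abs(trials[0] - TRUE_VALUE)
mean_error = abs(sum(trials) / len(trials) - TRUE_VALUE)

print(f"error of a single trial: {single_trial_error:.3f} cm")
print(f"error of 100-trial mean: {mean_error:.3f} cm")
```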

6

the individual taking the measurement, the measuring instrument, variability of the characteristic being measured

List the three general sources of error within a measurement system.

7

variance

a measure of the variability among scores within a sample

8

greater

A larger variance indicates a LESSER/GREATER dispersion of scores.

9

relative

RELATIVE/ABSOLUTE reliability coefficients reflect true variance as a proportion of total variance.

10

general reliability ratio (coefficient)

true score variance / (true score variance + error variance)
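This ratio can be computed directly; a minimal sketch with illustrative variance values:

```python
def reliability_coefficient(true_var: float, error_var: float) -> float:
    """General reliability ratio: true score variance / total variance."""
    return true_var / (true_var + error_var)

# Illustrative values: less error variance -> higher reliability.
print(reliability_coefficient(9.0, 1.0))   # 0.9
print(reliability_coefficient(9.0, 0.0))   # 1.0 -> perfect reliability
print(reliability_coefficient(9.0, 9.0))   # 0.5
```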

11

true

T/F: 1.00 indicates perfect reliability.

12

intraclass correlation coefficient (ICC), Kappa coefficients

List the two most common relative reliability coefficients.

13

absolute

RELATIVE/ABSOLUTE reliability indicates how much of an actual measured value is likely due to error.

14

standard error of measurement

What is the most common index of absolute reliability?
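These cards don't state the formula, but a standard form of the SEM is SD × √(1 − reliability), where SD is the sample standard deviation and the reliability coefficient is often an ICC; a minimal sketch:

```python
import math

def sem(sample_sd: float, reliability: float) -> float:
    """Standard error of measurement: SD * sqrt(1 - reliability)."""
    return sample_sd * math.sqrt(1.0 - reliability)

# Higher reliability -> smaller error band around an observed score.
print(sem(10.0, 0.75))              # 5.0
print(round(sem(10.0, 0.90), 2))    # 3.16
```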

15

ICC

What relative reliability coefficient is used for continuous scales?

16

Kappa coefficients

What relative reliability coefficient is used for categorical scales?

17

number/timing of trials

Which of the following considerations on reliability are PTs most concerned about in our setting?

subject characteristics

training and skill of examiners

setting

number/timing of trials

18

false

T/F: Reliability is an all-or-none property.

19

test-retest, rater, alternate forms, internal consistency

List the four types of relative reliability.

20

test-retest, rater

List the two types of relative reliability most used in PT.

21

test-retest reliability

used to establish that an instrument is capable of measuring an unchanging variable with consistency; one sample is measured two times (at least), keeping all testing conditions as constant as possible

22

carryover effects

consists of learning and practice, and can affect the second measurement in test-retest reliability

23

testing effects

another consideration that can affect the second measurement in test-retest reliability, such as fatigue

24

true

T/F: If scores are reliable, they should be similar.

25

ICC

ICC/KAPPA are used for quantitative measures when considering test-retest reliability.

26

kappa

ICC/KAPPA are used for categorical data when considering test-retest reliability.

27

intra, inter

List the two types of rater reliability.

28

rater reliability

to establish this type of reliability, we must assume the instrument and response variable are considered stable, so differences between scores can be attributed to ________ error

29

intrarater reliability

the stability of data recorded by one individual; best established with two or more recordings; essentially the same as test-retest reliability when rater skill is relevant to the accuracy of the test; should be established FIRST

30

interrater reliability

concerns variation between two or more raters who measure the same characteristic; when it is not established, it limits generalizability of study results; best assessed when all raters assess the exact same trial, simultaneously and independently

31

alternate forms

when multiple versions of a measurement instrument are considered equivalent; equivalence is assessed by giving both versions of the test to the same group in the same sitting and correlating the results

32

correlation

Most reliability coefficients are based on _________ metrics.

33

correlation

the degree of association between two sets of data; it cannot, by itself, tell us the extent of agreement between the two sets

34

internal consistency

generally applicable to surveys, questionnaires, written examinations, and interviews; reflects the extent to which items homogeneously measure various aspects of the same characteristic

35

internal consistency

commonly tested by evaluating the correlation between each item and the summative score, Cronbach's alpha, and split-half reliability
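Cronbach's alpha has a standard closed form, α = k/(k−1) × (1 − Σ item variances / total-score variance); a minimal sketch with illustrative item data (not from the chapter):

```python
def cronbach_alpha(items: list[list[float]]) -> float:
    """Cronbach's alpha for a set of items.

    `items` is a list of items, each a list of scores (one per
    respondent). Uses the standard formula
        alpha = k/(k-1) * (1 - sum(item variances) / total-score variance)
    with population (n-denominator) variances.
    """
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]   # per-respondent sums
    item_var_sum = sum(var(item) for item in items)
    return k / (k - 1) * (1 - item_var_sum / var(totals))

# Three items answered by four respondents (illustrative data).
items = [
    [3, 4, 3, 5],
    [3, 5, 3, 4],
    [2, 4, 3, 5],
]
print(round(cronbach_alpha(items), 3))
```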

36

Cronbach's alpha

a relative reliability index used to quantify internal consistency

37

split-half reliability

combining two sets of items testing the same content into one long instrument with redundant halves; the two halves are then scored separately and the results correlated

38

change

When evaluating _______, we need to have confidence that the instrument is reliable so we can assume that the observed difference represents true _______.

39

change scores

measure the difference between the first and a subsequent measurement (ex: pretest to posttest)

40

regression to the mean

extreme scores can reflect substantial error, and tend to move closer to the expected average score when retested; extreme scores are a concern when classifying subjects based on score

41

minimum detectable change

the amount of change in a variable that must be achieved to reflect some true difference (outside of measurement error)

42

smaller

The greater the reliability, the SMALLER/LARGER the MDC.
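One common way to see this relationship is the standard formula MDC95 = 1.96 × √2 × SEM (the √2 reflects error in both the first and second measurement); a minimal sketch with illustrative values:

```python
import math

def mdc95(sample_sd: float, reliability: float) -> float:
    """Minimum detectable change at 95% confidence.

    Common form: MDC95 = 1.96 * sqrt(2) * SEM, where
    SEM = SD * sqrt(1 - reliability).
    """
    sem = sample_sd * math.sqrt(1.0 - reliability)
    return 1.96 * math.sqrt(2.0) * sem

# Greater reliability -> smaller MDC for the same spread of scores.
for icc in (0.70, 0.85, 0.95):
    print(f"ICC {icc:.2f}: MDC95 = {mdc95(10.0, icc):.2f}")
```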

43

smaller

MDC is generally SMALLER/LARGER than MCID.

44

MCID

minimal clinically important difference; the amount of change that is considered meaningful