Chapter 9: Reliability of Measurements

58 Terms

1

What is the definition of reliability in measurement?

The extent to which a measured value is obtained consistently during repeated assessment of unchanging behavior.

2

How can reliability be conceptualized?

Reproducibility or dependability

3

According to classical measurement theory, what are the two components of an observed score?

A fixed true score and an unknown error component.

4

In classical measurement theory, how is measurement error defined?

Any difference between the true value and the observed value.

5

Why is it impossible to calculate the exact error component of a measurement?

The true score is unknown.

6

Which measurement theory accounts for specific, identifiable sources of error in addition to random error?

Generalizability theory

7

What are the two types of measurement error?

Systematic errors
Random errors

8

What characterizes a systematic measurement error?

It is a predictable, constant amount of error that occurs in the same direction every time.

9

How does systematic error typically affect measurement statistics for reliability?

It is not considered a statistical problem for reliability: because the error is constant, it adds no variability across repeated measures.

10

While systematic error does not typically hurt reliability, what measurement quality does it negatively affect?

Validity

11

What is an example of a tool causing systematic error?

A tape measure that is incorrectly marked.

12

What defines a random measurement error?

An unpredictable error due to chance or variability.

13

Why does taking the average of several trials help mitigate random error?

Over- and under-estimates should occur with equal frequency and cancel out over the long run.
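
A minimal simulation sketch (Python) makes the cancellation concrete; the true score of 50, error SD of 5, and the choice of 5-trial averages are all invented for illustration.

```python
import random
import statistics

random.seed(1)
TRUE_SCORE = 50.0  # hypothetical unchanging true score
ERROR_SD = 5.0     # hypothetical spread of the random error

def observe():
    """One observed score = fixed true score + random error."""
    return TRUE_SCORE + random.gauss(0, ERROR_SD)

# 10,000 simulated sessions, measured once vs. averaged over 5 trials
singles = [observe() for _ in range(10_000)]
averaged = [statistics.mean(observe() for _ in range(5)) for _ in range(10_000)]

print(statistics.stdev(singles))   # ~5.0: the full random error remains
print(statistics.stdev(averaged))  # ~2.2: error shrinks by about 1/sqrt(5)
```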

14

How are random error and reliability related?

As random errors diminish, the measure becomes more reliable.

15

What are the three general sources of error within a measurement system?

The individual taking the measure, the instrument, and the variability of the characteristic being measured.

16

Define reliability.

An estimate of the extent to which a score is free from error.

17

In the context of reliability, what does variance measure?

The variability among scores within a sample.

18

What does a larger variance indicate?

A greater dispersion of scores.

19

What do relative reliability coefficients reflect?

True variance as a proportion of total variance.

20

What is the formula for the general reliability ratio (coefficient)?

reliability = true score variance / (true score variance + error variance)
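
As a worked example with invented numbers: if true score variance is 90 and error variance is 10, reliability = 90 / (90 + 10) = 0.90. The same arithmetic as a one-function Python sketch:

```python
def reliability(true_var: float, error_var: float) -> float:
    """General reliability ratio: true variance over total variance."""
    return true_var / (true_var + error_var)

print(reliability(90.0, 10.0))  # 0.9 -> 90% of observed variance is true variance
```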

21

What is the numerical range of relative reliability coefficients?

0.00 to 1.00

22

What does a relative reliability coefficient of 1.00 indicate?

Perfect reliability

23

What are the two most common types of relative reliability coefficients?

Intraclass Correlation Coefficients (ICC) and Kappa coefficients.

24

How does absolute reliability differ from relative reliability?

It expresses how much of an actual measured value is likely due to error, in the units of the measurement itself, rather than as a proportion of variance.

25

What is the most common metric used to express absolute reliability?

Standard error of the measurement (SEM).
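
The card does not give a formula; one common formulation derives the SEM from the sample standard deviation and a relative reliability coefficient, SEM = SD × √(1 − r). A sketch with invented numbers:

```python
import math

def sem(sd: float, reliability: float) -> float:
    """Standard error of measurement: SEM = SD * sqrt(1 - r)."""
    return sd * math.sqrt(1.0 - reliability)

# Invented example: score SD of 8 with an ICC of 0.84
print(sem(8.0, 0.84))  # 3.2 -> a typical score is 'off' by about 3.2 units
```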

26

According to the provided guidelines, a reliability coefficient (α) ≥0.9 is considered _____.

Excellent

27

According to the provided guidelines, a reliability coefficient (α) below 0.5 is considered _____.

Unacceptable

28

What are some factors that can affect reliability?

Subject characteristics
Training and skill of examiners
Setting
Number/timing of trials

29

What are the four primary approaches to relative reliability testing?

Test-retest, Rater, Alternate forms, and Internal consistency.

30

What is the purpose of test-retest reliability?

To establish that an instrument can measure an unchanging variable with consistency.

31

Why must test-retest intervals be carefully timed?

To be far enough apart to avoid fatigue/learning effects, but close enough to avoid true changes in the variable.

32

What are 'carryover effects' in the context of repeated measurements?

Changes in the second measurement caused by practice or learning from the first measurement.

33

What is the difference between carryover effects and 'testing effects'?

Carryover effects come from practice or learning, whereas testing effects occur when the test itself is responsible for observed changes in the variable.

34

Which coefficient is used for quantitative measures in test-retest reliability?

Intraclass correlation coefficient (ICC).
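
The deck does not specify which ICC form is intended. As one possibility, the sketch below computes ICC(2,1) (two-way random effects, absolute agreement, single measure) from the standard ANOVA mean squares; the test-retest scores are made up.

```python
def icc_2_1(scores):
    """ICC(2,1): two-way random effects, absolute agreement, single measure.
    `scores` has one row per subject; columns are the repeated trials/raters."""
    n, k = len(scores), len(scores[0])
    grand = sum(sum(row) for row in scores) / (n * k)
    row_means = [sum(row) / k for row in scores]
    col_means = [sum(row[j] for row in scores) / n for j in range(k)]

    ss_total = sum((x - grand) ** 2 for row in scores for x in row)
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ms_r = ss_rows / (n - 1)                                     # between subjects
    ms_c = ss_cols / (k - 1)                                     # between trials
    ms_e = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))  # residual error

    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

# Invented test-retest data: 5 subjects, each measured twice
data = [[10, 11], [14, 15], [18, 17], [22, 24], [30, 29]]
print(round(icc_2_1(data), 3))  # 0.986 -> almost all variance is between subjects
```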

35

Which statistics are used for categorical data in test-retest reliability?

Percent agreement and the Kappa statistic.
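
A minimal sketch of both statistics; the category labels and two-occasion ratings are invented.

```python
from collections import Counter

def percent_agreement(a, b):
    """Share of cases where the two sets of ratings match."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Chance-corrected agreement: kappa = (p_o - p_e) / (1 - p_e)."""
    n = len(a)
    p_o = percent_agreement(a, b)
    freq_a, freq_b = Counter(a), Counter(b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical gait ratings by the same clinician on two occasions
t1 = ["normal", "impaired", "normal", "impaired", "normal", "normal"]
t2 = ["normal", "impaired", "normal", "normal",   "normal", "normal"]
print(percent_agreement(t1, t2))          # ~0.83 raw agreement
print(round(cohens_kappa(t1, t2), 3))     # 0.571 once chance agreement is removed
```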

36

What assumption is made to establish rater reliability?

The instrument and the response variable are stable, meaning score differences are attributed to rater error.

37
New cards

What are the two main types of rater reliability?

Intra-rater
Inter-rater

38

What is intra-rater reliability?

The stability of data recorded by one individual across two or more recordings.

39

What are the major concerns when a rater is not blinded to their previous scores?

Carryover and practice effects
Rater bias

40

What is inter-rater reliability?

The variation between two or more raters who measure the same characteristic.

41

Which type of rater reliability should be established first?

Intra-rater reliability.

42

How is inter-rater reliability best assessed?

When all raters assess the exact same trial simultaneously and independently.

43

What is alternate forms reliability?

Establishing equivalence between multiple versions of a measurement instrument.

44

How is alternate forms reliability typically achieved?

Giving both versions of a test to the same group in one sitting and correlating the results.

45

What is a limitation of using correlation coefficients to describe reliability?

Correlation measures the degree of association but not the extent of agreement between data sets.

46

What are most reliability coefficients based on?

Correlation metrics.

47

To what kinds of instruments is internal consistency generally applicable?

Surveys, questionnaires, written examinations, and interviews.

48

What does internal consistency reflect in a survey or questionnaire?

The extent to which items homogeneously measure various aspects of the same characteristic.

49

What is Cronbach’s alpha?

A relative reliability index used to measure internal consistency.
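
As a sketch, Cronbach's alpha can be computed straight from its definition, α = k/(k−1) × (1 − Σ item variances / variance of total scores); the survey responses below are invented.

```python
from statistics import pvariance

def cronbach_alpha(items):
    """alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores).
    `items` is a list of columns: one list of respondent scores per item."""
    k = len(items)
    item_vars = sum(pvariance(col) for col in items)
    totals = [sum(scores) for scores in zip(*items)]
    return k / (k - 1) * (1 - item_vars / pvariance(totals))

# Hypothetical 3-item survey answered by 5 respondents
items = [
    [4, 3, 5, 2, 4],  # item 1
    [5, 3, 4, 2, 4],  # item 2
    [4, 2, 5, 3, 5],  # item 3
]
print(round(cronbach_alpha(items), 2))  # 0.89 -> items hang together well
```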

50

How is split-half reliability conducted?

Combine two sets of items testing the same content into one long instrument with redundant halves
Score each half and correlate the results
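
The card stops at correlating the halves; a common companion step (not stated here) is the Spearman-Brown correction, which projects the half-test correlation up to the full-length instrument. A sketch with invented half-scores (statistics.correlation requires Python 3.10+):

```python
from statistics import correlation  # Pearson correlation, Python 3.10+

def spearman_brown(r_half):
    """Project half-test reliability up to the full-length instrument."""
    return 2 * r_half / (1 + r_half)

# Hypothetical half-scores for 6 examinees
half_a = [12, 15, 9, 18, 14, 11]
half_b = [13, 14, 10, 17, 15, 10]
r = correlation(half_a, half_b)
print(round(spearman_brown(r), 3))  # ~0.97 estimated full-test reliability
```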

51

What must be established when assessing change?

Confidence that the instrument is reliable, so that an observed difference can be assumed to represent true change.

52

What is a change score?

The difference between a first measure and a subsequent measure (e.g., pretest to posttest).

53

What is 'regression to the mean'?

The tendency for extreme scores to move closer to the expected average score when re-tested.

54

What is Minimum Detectable Change (MDC)?

The amount of change in a variable required to reflect a true difference rather than measurement error.

55

What is the relationship between an instrument's reliability and its Minimum Detectable Change (MDC)?

The greater the reliability, the smaller the MDC.
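
One common formulation (not given on the card) is MDC95 = 1.96 × √2 × SEM, with SEM = SD × √(1 − r). The invented numbers below also illustrate this card's point: raising reliability shrinks the MDC.

```python
import math

def mdc95(sd, reliability):
    """MDC at 95% confidence: 1.96 * sqrt(2) * SEM, with SEM = SD * sqrt(1 - r)."""
    sem = sd * math.sqrt(1.0 - reliability)
    return 1.96 * math.sqrt(2.0) * sem

# Hypothetical example: same score SD, two instruments of differing reliability
print(round(mdc95(8.0, 0.84), 1))  # 8.9 -> need ~9 units to claim real change
print(round(mdc95(8.0, 0.96), 1))  # 4.4 -> higher reliability, smaller MDC
```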

56

How does the Minimum Detectable Change (MDC) generally compare to the Minimally Clinically Important Difference (MCID)?

The MDC is generally smaller than the MCID.

57

Why is reliability considered population-specific?

Reliability estimates from one population (e.g., healthy) may not apply to another (e.g., pathologic).

58

Name practical steps that can be taken to improve reliability in a clinical setting.

Standardize measurement protocols
Train raters
Pilot the procedures
Calibrate and improve the instrument
Take multiple measures