SI - Understanding Reliability and Validity in Measurements

22 Terms

1. Reliability

The degree to which repeated measurements of the same event produce consistent results.

2. Types of Measurement Error

Includes systematic errors (predictable, consistent) and random errors (unpredictable, inconsistent).

3. Systematic Errors

Errors that are predictable and consistent, such as improper use of landmarks or improperly calibrated tools.

4. Random Errors

Errors that are unpredictable and inconsistent, such as examiner fatigue or environmental disruptions.

5. Test-retest Reliability

The consistency of a test's results when it is administered to the same subjects on separate occasions.
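
A minimal sketch of how test-retest consistency might be checked in Python, using a Pearson correlation between two administrations of the same test; the score lists are made-up example data, and in practice an ICC is often reported instead.

```python
import numpy as np

# Scores from the same subjects on two occasions (hypothetical data)
test   = [12, 15, 11, 18, 14, 16, 13]   # first administration
retest = [13, 14, 11, 19, 15, 16, 12]   # second administration

# Pearson correlation as a simple index of consistency over time
r = np.corrcoef(test, retest)[0, 1]
print(f"test-retest r = {r:.2f}")
```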

6. Internal Consistency

The correlation among items within a test, commonly measured using Cronbach's alpha.

7. Intra-rater Reliability

The stability of one rater's measurements over time.

8. Inter-rater Reliability

The agreement between two or more raters.

9. Intraclass Correlation Coefficient (ICC)

Ideal for continuous data because it reflects both relationship and agreement; values above 0.90 indicate excellent reliability and values below 0.75 indicate poor reliability.
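
A minimal sketch of a two-way random-effects ICC for absolute agreement, ICC(2,1), computed from scratch with NumPy; the ratings matrix (subjects x raters) is made-up example data, and dedicated statistics packages provide fuller implementations.

```python
import numpy as np

def icc_2_1(ratings):
    # Two-way random-effects, absolute-agreement, single-measurement ICC
    n, k = ratings.shape                       # subjects, raters
    grand = ratings.mean()
    ss_rows = k * ((ratings.mean(axis=1) - grand) ** 2).sum()
    ss_cols = n * ((ratings.mean(axis=0) - grand) ** 2).sum()
    ss_err = ((ratings - grand) ** 2).sum() - ss_rows - ss_cols
    msr = ss_rows / (n - 1)                    # between-subjects mean square
    msc = ss_cols / (k - 1)                    # between-raters mean square
    mse = ss_err / ((n - 1) * (k - 1))         # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical ratings: 5 subjects measured by 3 raters
ratings = np.array([
    [10.2, 10.5, 10.1],
    [12.4, 12.0, 12.6],
    [ 9.1,  9.4,  9.0],
    [11.8, 11.9, 12.1],
    [10.9, 10.7, 11.0],
])
print(f"ICC(2,1) = {icc_2_1(ratings):.2f}")
```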

10. Cronbach's alpha

A measure of internal consistency.
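
A minimal sketch of computing Cronbach's alpha with NumPy from its standard formula, alpha = k/(k-1) * (1 - sum of item variances / variance of total scores); the item-score matrix is made-up example data.

```python
import numpy as np

def cronbach_alpha(scores):
    # scores: rows = respondents, columns = test items
    k = scores.shape[1]                           # number of items
    item_vars = scores.var(axis=0, ddof=1)        # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)    # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: 5 respondents answering 4 items on a 1-5 scale
scores = np.array([
    [4, 5, 4, 5],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 5],
])
print(f"Cronbach's alpha = {cronbach_alpha(scores):.2f}")
```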

11. Kappa statistic

A chance-corrected statistic used to assess agreement between raters on categorical data.
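
A minimal sketch of Cohen's kappa for two raters assigning categorical labels, computed as (observed agreement - chance agreement) / (1 - chance agreement); the rater label lists are made-up example data.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    # Observed proportion of agreement
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal category proportions
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    categories = set(rater_a) | set(rater_b)
    p_chance = sum((counts_a[c] / n) * (counts_b[c] / n) for c in categories)
    return (p_obs - p_chance) / (1 - p_chance)

# Hypothetical categorical judgments from two examiners
rater_a = ["positive", "negative", "positive", "positive", "negative", "positive"]
rater_b = ["positive", "negative", "negative", "positive", "negative", "positive"]
print(f"kappa = {cohens_kappa(rater_a, rater_b):.2f}")
```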

12. Face Validity

The extent to which a test appears, on the surface, to measure what it is intended to measure.

13. Content Validity

The extent to which a test covers all aspects of the concept being measured.

14. Construct Validity

The extent to which a test measures the abstract construct it is intended to measure, such as intelligence or personality.

15. Criterion-related Validity

Compares a test to an established gold standard.

16. Concurrent Validity

The degree to which a test's results correlate with an established measure administered at the same time.

17. Predictive Validity

The degree to which a test's results predict future outcomes or performance.

18. Effect Size (ES)

Quantifies the magnitude of a difference between groups or of change over time; by convention, 0.8 is a large effect, 0.5 moderate, and 0.2 small.
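
A minimal sketch of Cohen's d, a common effect-size index that divides the difference between two group means by a pooled standard deviation; the score lists are made-up example data, treated as two independent groups for simplicity, and the 0.2/0.5/0.8 cutoffs follow the conventional benchmarks.

```python
import numpy as np

def cohens_d(group1, group2):
    g1, g2 = np.asarray(group1, float), np.asarray(group2, float)
    n1, n2 = len(g1), len(g2)
    # Pooled variance weighted by each group's degrees of freedom
    pooled_var = ((n1 - 1) * g1.var(ddof=1) + (n2 - 1) * g2.var(ddof=1)) / (n1 + n2 - 2)
    return (g1.mean() - g2.mean()) / np.sqrt(pooled_var)

# Hypothetical baseline and follow-up scores
baseline  = [22, 25, 19, 30, 27, 24]
follow_up = [28, 31, 25, 35, 30, 29]
d = cohens_d(follow_up, baseline)
label = "large" if abs(d) >= 0.8 else "moderate" if abs(d) >= 0.5 else "small" if abs(d) >= 0.2 else "trivial"
print(f"d = {d:.2f} ({label})")
```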

19. Key Differences Between Reliability and Validity

Reliability is necessary for validity, but reliability alone does not guarantee validity.

20. Minimizing Errors in Measurement

Strategies include using operational definitions, training examiners properly, inspecting and calibrating equipment regularly, and using blinded assessments.

21. Knee Outcome Survey

Demonstrated a high ICC (0.97), indicating excellent reliability and a strong association between baseline and follow-up scores.

22. Responsiveness

The ability to detect meaningful changes over time in clinical outcomes.