Assessing Validity and Reliability of Diagnostic and Screening Tests

Description and Tags

Flashcards covering the concepts of validity and reliability in diagnostic and screening tests, including sensitivity, specificity, predictive values, and different types of testing and variation.

Last updated 11:56 PM on 9/24/25

24 Terms

1

What is the primary objective of Module 5: Assessing Validity and Reliability of Diagnostic and Screening Tests?

To assess the quality of newly available screening and diagnostic tests and make reasonable decisions about their use and interpretation.

2

How is the validity of a test defined?

Its ability to distinguish between those who have a disease and those who do not.

3

What are the two types of validity for screening tests?

Sensitivity and Specificity.

4

What is sensitivity in the context of screening tests?

The ability of the test to correctly identify those with the disease (True Positive / (True Positive + False Negative)).

5

What is specificity in the context of screening tests?

The ability of the test to correctly identify those without the disease (True Negative / (True Negative + False Positive)).

6

Given 80 True Positives, 20 False Negatives, 100 False Positives, and 800 True Negatives, what is the sensitivity?

80 / (80 + 20) = 80%.

7

Given 80 True Positives, 20 False Negatives, 100 False Positives, and 800 True Negatives, what is the specificity?

800 / (800 + 100) ≈ 88.9%.
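The two worked examples above can be checked with a short script using the same 2x2 table (TP = 80, FN = 20, FP = 100, TN = 800):

```python
# Worked example from the cards above: 2x2 table with
# TP = 80, FN = 20, FP = 100, TN = 800.
TP, FN, FP, TN = 80, 20, 100, 800

sensitivity = TP / (TP + FN)  # ability to correctly identify the diseased
specificity = TN / (TN + FP)  # ability to correctly identify the non-diseased

print(f"Sensitivity: {sensitivity:.1%}")  # prints "Sensitivity: 80.0%"
print(f"Specificity: {specificity:.1%}")  # prints "Specificity: 88.9%"
```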

8

What are the potential burdens of a false positive test result?

More expensive follow-up tests, added burden on the healthcare system, anxiety and worry for the individual, and possible effects on insurability and employment.

9

What is Sequential (Two-Stage) Testing used for?

To reduce false positives by performing a less expensive/invasive test first, and recalling only those who screen positive for further, more accurate testing.

10

When using two simultaneous tests, what is the impact on net sensitivity and net specificity compared to either test alone?

There is a net gain in sensitivity and a net loss in specificity, because a positive result on either test counts as an overall positive.
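The trade-offs in cards 9 and 10 can be illustrated numerically. This is a minimal sketch with hypothetical sensitivities and specificities (not from the cards), assuming the two tests err independently, using the standard net-sensitivity/net-specificity formulas for the "either positive" (simultaneous) and "both positive" (sequential) rules:

```python
# Hypothetical test characteristics, assuming independent errors.
se1, sp1 = 0.80, 0.90   # test A: sensitivity, specificity
se2, sp2 = 0.90, 0.85   # test B: sensitivity, specificity

# Simultaneous testing: overall positive if EITHER test is positive.
net_se_sim = se1 + se2 - se1 * se2   # net gain in sensitivity
net_sp_sim = sp1 * sp2               # net loss in specificity

# Sequential testing: overall positive only if BOTH tests are positive.
net_se_seq = se1 * se2               # net loss in sensitivity
net_sp_seq = sp1 + sp2 - sp1 * sp2   # net gain in specificity

print(f"Simultaneous: net sensitivity {net_se_sim:.1%}, net specificity {net_sp_sim:.1%}")
print(f"Sequential:   net sensitivity {net_se_seq:.1%}, net specificity {net_sp_seq:.1%}")
```

With these numbers, simultaneous testing raises sensitivity to 98% at the cost of specificity (76.5%), while requiring both tests to be positive does the reverse (72% sensitivity, 98.5% specificity).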

11

What is Positive Predictive Value (PPV)?

The probability that a patient has the disease if their test result is positive (TP / (TP + FP)).

12

What is Negative Predictive Value (NPV)?

The probability that a patient does not have the disease if their test result is negative (TN / (TN + FN)).

13

Given 80 True Positives, 20 False Negatives, 100 False Positives, and 800 True Negatives, what is the Positive Predictive Value (PPV)?

80 / (80 + 100) = 44.4%.

14

Given 80 True Positives, 20 False Negatives, 100 False Positives, and 800 True Negatives, what is the Negative Predictive Value (NPV)?

800 / (800 + 20) = 97.6%.
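The predictive-value examples in cards 13 and 14 can be verified with the same 2x2 table used for sensitivity and specificity:

```python
# Same 2x2 table as the cards above: TP = 80, FN = 20, FP = 100, TN = 800.
TP, FN, FP, TN = 80, 20, 100, 800

ppv = TP / (TP + FP)  # P(disease | positive test)
npv = TN / (TN + FN)  # P(no disease | negative test)

print(f"PPV: {ppv:.1%}")  # prints "PPV: 44.4%"
print(f"NPV: {npv:.1%}")  # prints "NPV: 97.6%"
```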

15

How does disease prevalence affect Positive Predictive Value (PPV)?

The higher the prevalence, the higher the PPV will be, making screening programs more productive and cost-effective when directed to high-risk populations.
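The prevalence effect can be seen by holding the test fixed and varying prevalence. A small sketch using the sensitivity (80%) and specificity (800/900 ≈ 88.9%) from the cards above, with PPV computed from Bayes' theorem; note the cards' table corresponds to a prevalence of 100/1000 = 10%:

```python
# Fixed test characteristics, taken from the worked example above.
se = 0.80        # sensitivity
sp = 800 / 900   # specificity (~88.9%)

def ppv(prevalence):
    """PPV via Bayes' theorem for a given disease prevalence."""
    true_pos = se * prevalence                 # diseased who test positive
    false_pos = (1 - sp) * (1 - prevalence)    # healthy who test positive
    return true_pos / (true_pos + false_pos)

for p in (0.001, 0.01, 0.10, 0.50):
    print(f"prevalence {p:.1%}: PPV {ppv(p):.1%}")
```

At 10% prevalence this reproduces the 44.4% PPV from card 13; at a prevalence of 0.1% the same test's PPV falls below 1%, which is why screening high-risk populations is more productive.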

16

When disease prevalence is low, which measure has a higher effect on PPV: sensitivity or specificity?

Specificity has a higher effect on PPV.

17

What is reliability (repeatability) of tests?

Whether the test result obtained can be replicated if the test is repeated.

18

What factors affect the reliability of tests?

Intra-subject variation, intra-observer variation, and inter-observer variation.

19

What is intra-subject variation?

Variation in measured human characteristics over time, even within short periods, such as changes in blood pressure readings.

20

What is intra-observer variation?

A variation that occurs when the same observer reads the same test results differently at two different times, often due to subjective factors.

21

What is inter-observer variation?

A variation that occurs when two different examiners do not give the same result for the same test.

22

What is the purpose of the Kappa statistic?

It quantifies the extent to which the observed agreement between two observers exceeds the agreement expected by chance alone.

23

According to Landis and Koch, what Kappa value represents excellent agreement beyond chance?

A kappa greater than 0.75.
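The kappa calculation in cards 22 and 23 can be sketched for the two-observer, positive/negative case. The agreement table below is hypothetical (not from the cards); kappa compares observed agreement with the agreement expected by chance from each observer's marginal rates:

```python
# Hypothetical 2x2 agreement table for two observers reading the same tests.
# Rows: observer A's call; columns: observer B's call.
a_pos_b_pos, a_pos_b_neg = 40, 10
a_neg_b_pos, a_neg_b_neg = 5, 45

n = a_pos_b_pos + a_pos_b_neg + a_neg_b_pos + a_neg_b_neg
observed = (a_pos_b_pos + a_neg_b_neg) / n   # proportion of reads that agree

# Chance agreement from each observer's marginal "positive" rate.
a_pos_rate = (a_pos_b_pos + a_pos_b_neg) / n
b_pos_rate = (a_pos_b_pos + a_neg_b_pos) / n
expected = a_pos_rate * b_pos_rate + (1 - a_pos_rate) * (1 - b_pos_rate)

# Kappa: how far observed agreement exceeds chance, as a fraction of
# the maximum possible excess.
kappa = (observed - expected) / (1 - expected)
print(f"kappa = {kappa:.2f}")  # prints "kappa = 0.70"
```

A kappa of 0.70 falls in the Landis and Koch "intermediate to good" band (0.40 to 0.75), just short of the >0.75 threshold for excellent agreement.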

24

What is the relationship between test reliability and validity for an individual?

When the reliability (repeatability) of a test is poor, its validity for a given individual may also be poor.