Flashcards covering reliability, validity, and related concepts (CVR/CVI, triangulation, pilot studies) from the lecture notes.
What does reliability refer to in measurement?
The consistency of a measure and whether the results can be reproduced under the same conditions.
What does validity refer to?
The accuracy of a measure—whether it measures what it is supposed to measure.
Name four common types of reliability.
Internal consistency, test-retest, inter-rater, and parallel-forms reliability.
Internal consistency reliability (Cronbach's alpha)
A measure of how consistently items on a test assess the same construct; commonly estimated with Cronbach's alpha, where a value ≥0.70 is often considered acceptable.
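The alpha card above can be sketched numerically. A minimal Python implementation of the standard formula, alpha = (k/(k−1))·(1 − Σ item variances / variance of totals), using illustrative item scores (not data from the lecture):

```python
# Sketch: Cronbach's alpha for a small item-response matrix.
# Rows are respondents, columns are test items (data are illustrative).
def cronbach_alpha(rows):
    k = len(rows[0])                      # number of items
    def var(xs):                          # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    item_vars = [var([r[i] for r in rows]) for i in range(k)]
    total_var = var([sum(r) for r in rows])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

scores = [
    [4, 4, 5, 4],
    [3, 3, 3, 2],
    [5, 4, 5, 5],
    [2, 2, 3, 2],
    [4, 3, 4, 4],
]
alpha = cronbach_alpha(scores)   # high here, since items move together
```

Because these illustrative items rise and fall together across respondents, the resulting alpha is well above the 0.70 threshold.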
Cronbach's alpha interpretation scale
Excellent >0.90; Good 0.80–0.89; Acceptable 0.70–0.79; Questionable 0.60–0.69; Poor 0.50–0.59; Unacceptable <0.50.
Inter-rater reliability
The degree of agreement among different raters; assessed with statistics like kappa or intraclass correlation (ICC).
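Cohen's kappa, one of the agreement statistics named on this card, can be computed by hand. A minimal Python sketch with illustrative ratings from two hypothetical raters:

```python
# Sketch: Cohen's kappa for agreement between two raters.
# kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement
# and p_e is agreement expected by chance (ratings are illustrative).
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n   # observed agreement
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    # chance agreement: product of each category's marginal proportions
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (p_o - p_e) / (1 - p_e)

a = ["yes", "yes", "no", "yes", "no", "no", "yes", "no"]
b = ["yes", "no",  "no", "yes", "no", "yes", "yes", "no"]
kappa = cohens_kappa(a, b)
```

Here the raters agree on 6 of 8 cases (p_o = 0.75) against a chance level of 0.50, giving kappa = 0.5, i.e. moderate agreement beyond chance.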
Test-retest reliability
Stability of scores over time when no treatment occurs between tests; measured by correlation between two administrations.
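The "correlation between two administrations" on this card is typically a Pearson correlation. A minimal Python sketch with illustrative scores from two test sessions:

```python
# Sketch: test-retest reliability as the Pearson correlation between
# scores from two administrations of the same test (data are illustrative).
import math

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

time1 = [10, 12, 9, 15, 11]   # first administration
time2 = [11, 12, 9, 14, 12]   # retest, no treatment in between
r = pearson_r(time1, time2)   # close to 1 -> stable scores over time
```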
Parallel-forms reliability
Correlation between two equivalent forms of the same test to assess consistency.
Face validity
Subjective judgment about whether a test appears to measure the intended construct; focuses on appearance, readability, and formatting.
Content validity
The degree to which test items represent the content domain and include all essential items while excluding irrelevant ones.
Criterion validity
The extent to which a measure relates to an outcome or criterion; includes predictive and concurrent validity.
Predictive validity
The ability of a measure to predict future performance or outcomes.
Concurrent validity
The correlation between the measure and a criterion assessed at the same time.
Postdictive validity
The extent to which scores on a current measure relate to events or outcomes that occurred in the past (post hoc inference from present data).
Convergent validity
Constructs that should be related are indeed related; high correlation between related measures.
Discriminant validity
Constructs that should not be related show low correlation, indicating distinctness.
Construct validity
The overall measurement validity; includes convergent and discriminant validity, assessing whether the test measures the intended construct.
Content validity ratio (CVR)
A formula: CVR = (ne − N/2) / (N/2), where ne is the number of experts indicating 'essential' and N is the total number of experts.
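The CVR formula on this card translates directly to code. A minimal Python sketch (the panel size and ratings are illustrative):

```python
# Sketch: Lawshe's content validity ratio from the flashcard formula,
# CVR = (n_e - N/2) / (N/2), where n_e experts rate the item 'essential'
# out of N experts on the panel (numbers below are illustrative).
def cvr(n_essential, n_experts):
    half = n_experts / 2
    return (n_essential - half) / half

value = cvr(8, 10)   # 8 of 10 experts say 'essential': (8 - 5) / 5 = 0.6
```

CVR ranges from −1 (no expert rates the item essential) through 0 (exactly half do) to +1 (all do).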
Content validity index (CVI)
The average of CVR values across items; reflects overall content validity (e.g., CVI = 0.31 indicates limited validity).
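The CVI described on this card is just the mean of the per-item CVR values. A minimal Python sketch (the item CVRs below are illustrative, not the lecture's data):

```python
# Sketch: content validity index as the mean of per-item CVR values
# (item CVRs below are illustrative).
def cvi(cvr_values):
    return sum(cvr_values) / len(cvr_values)

item_cvrs = [0.6, 0.2, 1.0, -0.2, 0.0]
index = cvi(item_cvrs)   # 1.6 / 5 = 0.32
```

A low CVI like this one signals that several items lack expert consensus and the instrument's content coverage should be revised.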
Triangulation
Using multiple datasets, methods, theories, or investigators to strengthen validity and credibility and reduce bias.
Types of triangulation
Data triangulation, methodological triangulation, investigator triangulation, and theory triangulation.
Pilot study
A small-scale study conducted before the main project to assess feasibility, recruitment, and procedures.
Relationship between validity and reliability
Reliability is necessary but not sufficient for validity; a test can be reliable without being valid, and both are required for sound measurement.
Rice measurement example for reliability
If you measure a cup of rice three times and obtain the same result, the measurement is reliable; validity would require that the result matches the true standard (e.g., 5 grams).
What does content validity ensure in test design?
Ensures the instrument covers all essential items for the construct and excludes irrelevant items.
Why is triangulation used in research?
To strengthen the validity and credibility of findings by using multiple data sources, methods, or perspectives and to reduce bias.