Readability Level
Conducted to determine the participants' ability to read and comprehend the items on an instrument.
Reliability
Denotes the consistency of the measures it obtains of an attribute, concept, or situation in a study or clinical practice.
Reliability Testing
Examines the amount of random error in an instrument that is used in a study.
Stability Reliability Testing
Is concerned with the consistency of repeated measures of the same variable or attribute with the same scale or measurement method over time.
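A minimal illustration (not from the source), assuming the test-retest (stability) coefficient is computed as the Pearson correlation between two administrations of the same scale to the same participants; all scores are hypothetical.

```python
import numpy as np

# Hypothetical scores from the same six participants on the same scale,
# administered at two points in time (e.g., two weeks apart)
time1 = np.array([10, 14, 9, 17, 12, 15])
time2 = np.array([11, 13, 10, 16, 12, 14])

# Test-retest (stability) coefficient: correlation between the two administrations
r = np.corrcoef(time1, time2)[0, 1]
print(round(r, 2))
```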
Poor Reliability
Less than 0.50
Moderate Reliability
0.50-0.75
Good Reliability
0.75-0.90
Excellent Reliability
Greater than 0.90
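The cutoffs above can be read as a simple lookup; a minimal sketch labeling a reliability coefficient by those ranges:

```python
def reliability_label(coefficient: float) -> str:
    """Map a reliability coefficient to the categories listed above."""
    if coefficient > 0.90:
        return "Excellent"
    if coefficient >= 0.75:
        return "Good"
    if coefficient >= 0.50:
        return "Moderate"
    return "Poor"

print(reliability_label(0.82))  # -> "Good"
```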
Equivalence Reliability
Compares two versions of the same scale or instrument or two observers measuring the same event.
Interrater (interobserver) reliability
Comparison of two observers.
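A minimal sketch, assuming interrater agreement is summarized with Cohen's kappa (one common index, not necessarily the source's choice); the categorical ratings from the two observers are hypothetical.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical codes assigned by two observers to the same ten events
rater_a = ["agitated", "calm", "agitated", "agitated", "calm",
           "agitated", "calm", "agitated", "agitated", "calm"]
rater_b = ["agitated", "calm", "agitated", "calm", "calm",
           "agitated", "calm", "agitated", "agitated", "agitated"]

# Kappa corrects raw percent agreement for agreement expected by chance
print(round(cohen_kappa_score(rater_a, rater_b), 2))  # 1.0 = perfect, 0 = chance-level
```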
Alternate-Forms Reliability
Comparison of two versions of a test or scale.
Internal Consistency
Also known as homogeneity reliability testing.
Cronbach's Alpha Coefficient
Is the most commonly used measure of internal reliability for scales with multiple items that are at least at the interval level of measurement.
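A minimal sketch (not the source's procedure) computing Cronbach's alpha from item scores, where rows are respondents and columns are items at the interval level; the data are hypothetical.

```python
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    k = item_scores.shape[1]                         # number of items on the scale
    item_vars = item_scores.var(axis=0, ddof=1)      # variance of each item
    total_var = item_scores.sum(axis=1).var(ddof=1)  # variance of the summed scale score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: 5 participants on a 4-item scale
scores = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 5, 4, 5],
    [3, 3, 3, 4],
    [1, 2, 1, 2],
])
print(round(cronbach_alpha(scores), 2))  # compare against the reliability cutoffs above
```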
Validity
A measurement method is valid if it accurately reflects the concept it was developed to measure.
Content Validity
Examines the extent to which the measurement method includes all the major elements relevant to the concept being measured.
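One common way to quantify content validity is a content validity index (CVI) based on expert relevance ratings; a minimal sketch with hypothetical ratings on a 4-point relevance scale (3 or 4 = relevant):

```python
# Hypothetical relevance ratings (1-4) for each item from five content experts
ratings = {
    "item_1": [4, 4, 3, 4, 3],
    "item_2": [4, 3, 4, 4, 4],
    "item_3": [2, 3, 2, 3, 2],  # weaker item: most experts rate it not relevant
}

for item, scores in ratings.items():
    # Item-level CVI: proportion of experts rating the item as relevant (3 or 4)
    i_cvi = sum(score >= 3 for score in scores) / len(scores)
    print(item, round(i_cvi, 2))  # values near 1.0 indicate strong expert agreement on relevance
```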
Construct Validity
Focuses on determining whether the instrument actually measures the theoretical construct that it purports to measure.
Convergent Validity
Comparison of a newer instrument with an existing instrument that measures the same concept or construct.
Divergent Validity
Scores from an existing instrument are correlated with the scores from an instrument measuring an opposite concept.
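A minimal sketch, assuming both forms of validity evidence are summarized with correlations: a strong positive correlation with an established instrument measuring the same construct supports convergent validity, and a negative correlation with an instrument measuring an opposite concept supports divergent validity; all scores are hypothetical.

```python
import numpy as np

new_scale         = np.array([22, 30, 18, 27, 25, 33, 20])  # new instrument
established_scale = np.array([40, 55, 35, 50, 47, 60, 38])  # existing instrument, same construct
opposite_scale    = np.array([15, 8, 19, 10, 12, 6, 17])    # instrument measuring an opposite concept

# Convergent: expect a strong positive correlation
print("convergent r:", round(np.corrcoef(new_scale, established_scale)[0, 1], 2))
# Divergent: expect a negative correlation
print("divergent r:", round(np.corrcoef(new_scale, opposite_scale)[0, 1], 2))
```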
Validity from Factor Analysis
If the factor analysis results identify the essential elements of the concept to be measured by an instrument, then the validity of the instrument is strengthened.
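A minimal sketch (an illustration, not the source's procedure), running exploratory factor analysis on simulated item responses to see whether items load onto the expected factors:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
# Simulate 100 respondents on 6 items: items 1-3 driven by one latent factor, items 4-6 by another
f1 = rng.normal(size=100)
f2 = rng.normal(size=100)
items = np.column_stack(
    [f1 + rng.normal(scale=0.5, size=100) for _ in range(3)]
    + [f2 + rng.normal(scale=0.5, size=100) for _ in range(3)]
)

fa = FactorAnalysis(n_components=2).fit(items)
# Loadings: each row is a factor, each column an item; items should load on their expected factor
print(np.round(fa.components_, 2))
```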
Validity from Contrasting (Known) Groups
Is tested by identifying groups that are expected or known to have contrasting scores on an instrument and then asking the groups to complete the instrument.
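A minimal sketch, assuming the contrasting-groups comparison is made with an independent-samples t-test on hypothetical scores from two groups expected to differ on the construct:

```python
from scipy import stats

# Hypothetical scores on the instrument from two groups expected to differ
group_expected_high = [28, 31, 25, 30, 27, 33]
group_expected_low  = [12, 15, 10, 14, 11, 13]

t, p = stats.ttest_ind(group_expected_high, group_expected_low)
# A large difference in the expected direction supports known-groups validity
print(round(t, 2), round(p, 4))
```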
Successive Verification Validity
Is achieved when an instrument is used in several studies with a variety of study participants in various settings.
Criterion-Related Validity
Is examined by using a study participant's score on an instrument or scale to infer his or her performance on a criterion (Predictive Validity and Concurrent Validity).
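A minimal sketch, assuming predictive validity (one form of criterion-related validity) is estimated by correlating baseline instrument scores with a criterion measured later; all values are hypothetical:

```python
import numpy as np

baseline_scores = np.array([72, 85, 60, 90, 78, 66])          # instrument administered at baseline
later_criterion = np.array([2.9, 3.6, 2.5, 3.8, 3.2, 2.7])    # criterion performance measured later

# Validity coefficient: correlation between instrument score and criterion
print(round(np.corrcoef(baseline_scores, later_criterion)[0, 1], 2))
```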