Internal Consistency/Homogeneity Testing
examines the extent to which all the items in a multiple-item instrument or scale consistently measure a variable
what is internal consistency measured with
Cronbach's alpha (for Likert and semantic differential scales) and the Kuder-Richardson formula (KR-20) (for dichotomous or nominal data)
what do multi-item scales usually include
subscales and internal consistency is determined for these subscales and the total scale
Cronbach’s Alpha
• A score of 1.00 reflects perfect reliability
• Perfect reliability is never reported in a study because all instruments have some form of error
• A score of 0.00 indicates no scale reliability
• Strong internal reliability: ≥ 0.80
• Moderate internal reliability: 0.70 to 0.79
• Low internal reliability: < 0.60
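Both coefficients named above can be computed directly from item-level data. A minimal sketch of the standard variance-based formulas (illustrative code, not from the source):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a (respondents x items) matrix of scale scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                         # number of items
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def kr20(items):
    """Kuder-Richardson 20 for a (respondents x items) matrix of 0/1 items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    p = items.mean(axis=0)                     # proportion scoring 1 on each item
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - (p * (1 - p)).sum() / total_var)
```

KR-20 is algebraically Cronbach's alpha restricted to dichotomous items, which is why the two functions share the same structure.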
how is random error identified
by squaring the Cronbach's alpha value and subtracting it from 1.00
general formula for random error
Random error = 1.00 − (Cronbach's α)²
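As a worked example (the numbers are illustrative, not from the source): for a scale with α = 0.80, random error = 1.00 − (0.80)² = 1.00 − 0.64 = 0.36, so roughly 36% of the variance in scores is attributable to random error.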
Content-Related Validity
examines the extent to which the measurement method contains all the major elements relevant to the concept being measured
what is Content-Related Validity determined by
• 1) review of the literature;
• 2) construction of the scale based on the literature and researcher expertise;
• 3) review by experts (completeness, conciseness, clarity and readability)
• Readability is an essential component of both validity and reliability
Construct Validity
the focus is on determining whether the instrument actually measures the theoretical construct that it is expected to measure
Examines the fit between the conceptual and operational definitions of a variable
convergent validity
examined by comparing a newer instrument with an existing instrument that measures the same concept or construct (comparing results from the new and the established measure)
Validity of both instruments is strengthened when the values obtained have a
moderate to strong positive relationship (strong agreement suggests both are measuring the same thing)
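A hedged sketch of how this comparison is typically checked in practice, using Pearson's r (the scores below are made up for illustration):

```python
import numpy as np

# Hypothetical scores for the same participants on a new scale and an
# established scale measuring the same construct.
new_scale = np.array([12, 15, 9, 20, 18, 11, 16, 14])
old_scale = np.array([14, 16, 10, 22, 17, 12, 18, 13])

# A moderate to strong positive r supports convergent validity.
r = np.corrcoef(new_scale, old_scale)[0, 1]
print(round(r, 2))
```

The same correlation check is the usual pattern for test-retest reliability (scores at two time points) and for concurrent criterion comparisons (scores on two related instruments).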
divergent validity
examined by comparing scores from an instrument with scores from an established instrument measuring the opposite concept
Two scales measuring opposite constructs (e.g., hope vs. hopelessness) can be examined for divergent validity; a strong negative relationship between their scores supports the validity of both
Criterion-related Validity
validity from the prediction of future events; achieved when the scores on an instrument can be used to predict future behaviors, attitudes, and events (e.g., fall risk)
assesses how well a test's results correlate with an external criterion or "gold standard" that is related to the construct being measured
when is criterion-related validity strengthened
when a study participant's score on an instrument can be used to infer his or her performance on another variable or criterion
validity from prediction of concurrent events can be tested by
examining the ability to predict the concurrent value of one instrument on the basis of the value obtained on an instrument measuring another concept (e.g., you might use the results of a self-esteem scale to predict the results of a confidence scale)
accuracy
the accuracy of physiological and biochemical measures is similar to the validity of scales used in research
what does accuracy involve
determining the closeness of the agreement between the measured value and the true value of the physiological variable being measured
New measurement devices are compared with established ones
There should be a very strong positive correlation between their values
precision
degree of consistency or reproducibility of measurements made with physiological measures of the same variable or object under specified conditions
what is precision most often determined by
the manufacturer and is in part controlled by the agency using the device
Physiological equipment should be recalibrated as indicated by
the manufacturer
higher levels of precision (0.90 to 0.99) are more important when
monitoring critical physiological functions
validity
determines whether the measurement method accurately reflects the concept it was developed to measure
construct
the abstract concept or trait that a test or measurement instrument is intended to measure
refusal rate formula
number of people refusing to participate ÷ total number of people asked (approached)
attrition rate formula
(number of people who dropped out ÷ total sample size) × 100
attrition rate
Percent of people who dropped out of the study
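A minimal sketch of both rate formulas (the counts in the example are made up):

```python
def refusal_rate(num_refused, num_approached):
    """Proportion of people approached who refused to participate
    (multiply by 100 to express as a percent)."""
    return num_refused / num_approached

def attrition_rate(num_dropped, total_sample):
    """Percent of enrolled participants who dropped out of the study."""
    return num_dropped / total_sample * 100

# Hypothetical study: 120 people approached, 15 refused;
# 100 enrolled, and 10 later dropped out.
print(refusal_rate(15, 120))    # 0.125 (12.5%)
print(attrition_rate(10, 100))  # 10.0
```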
reliability
indicates the consistency of the measures it obtains of an attitude, concept, or situation in a study or clinical practice
reliability testing
examines the amount of measurement error in an instrument that is used in a study
to look at the amount of error, we test and retest
stability reliability
concerned with the consistency of repeated measures of the same variable or attribute
test- retest reliability
repeated measurement of a variable over time
equivalence reliability
compares 2 versions of the same scale or instrument or 2 observers measuring the same event
interrater reliability
comparison of 2 observers
e.g., if one observer gives you a 100 and the other gives you a 60, the measure is not reliable
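One simple way to quantify agreement between two observers is percent agreement (a hedged illustration; the card above only describes the comparison, not a specific statistic):

```python
def percent_agreement(rater_a, rater_b):
    """Percent of observations on which two raters agree exactly."""
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a) * 100

# Hypothetical ratings of the same 10 events by two observers
# (1 = behavior observed, 0 = not observed).
a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
b = [1, 0, 0, 1, 0, 1, 1, 1, 1, 1]
print(percent_agreement(a, b))  # 80.0
```

Chance-corrected statistics such as Cohen's kappa are also commonly used for categorical ratings.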
alternate forms reliability / parallel forms reliability
comparison of 2 comparable versions (forms) of the same scale
successive verification validity
achieved when an instrument is used in additional studies with a variety of subjects