3 common ways to operationalize a construct
Self-report measures
Observational/behavioral measures
Physiological measures
When you operationalize a variable, it must have at least 2 levels
Levels of categorical variables → categories (nominal)
Levels of quantitative variables → meaningful numbers (ordinal, scale)
Scales of Measurement: Nominal
Non-ordered categorical responses
Allow us to determine whether 2 individuals are different, but we cannot make quantitative comparisons
Scales of Measurement: Ordinal
Ordered categorical responses
We can determine whether 2 individuals are different and determine the direction of difference
We cannot determine the magnitude of the difference
Scales of Measurement: Interval
Numerical categories that are equally spaced
Can determine magnitude of difference
No absolute zero, so we cannot make ratio statements (e.g., 20 °C is not twice as hot as 10 °C)
Scales of Measurement: Ratio
Can determine magnitude of difference
Can make ratio statements because there is an absolute zero (e.g., a reaction time of 400 ms is twice as long as 200 ms)
Reliability
If you measure something multiple times under the same conditions, you should get the same results each time. It's about being consistent and dependable.
true score + measurement error = observed score
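To make this identity concrete, here is a minimal NumPy sketch (the true score of 100 and the error SD of 5 are invented for illustration): each observed score is the true score plus random error, and averaging repeated measurements cancels much of that error.

```python
import numpy as np

rng = np.random.default_rng(0)

true_score = 100.0                  # hypothetical true score
errors = rng.normal(0, 5, size=10)  # random measurement error (mean 0, SD 5)
observed = true_score + errors      # observed score = true score + error

print(observed.round(1))            # individual scores scatter around 100
print(observed.mean().round(1))     # the average sits much closer to 100
```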
Assessing Reliability (all 3 use the correlation coefficient)
Test-retest
Internal consistency
Inter-rater
Correlation coefficient (r)
Indicates strength (ranges from -1 to +1) and direction (+ or -, i.e., the sign of the slope)
Test-Retest Reliability
Indicates that the scores on a test will be similar when participants complete the test more than once
A strong reliability coefficient for test-retest reliability is…
0.5
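As a concrete illustration, this sketch applies the correlation coefficient to hypothetical scores from two administrations of the same test (all values are invented):

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical scores for 8 participants who took the same test twice
time1 = np.array([12, 15, 9, 20, 17, 11, 14, 18])
time2 = np.array([13, 14, 10, 19, 18, 10, 15, 17])

r, p = pearsonr(time1, time2)      # r: strength and direction of the relationship
print(f"test-retest r = {r:.2f}")  # close to +1 here, i.e. consistent scores
```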
Internal Consistency
Tests relationships between scores on different items of a survey
Cronbach’s alpha
Average correlation between scores on all pairs of items on a survey
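A minimal sketch of one standard formula for Cronbach's alpha, computed from item variances and the variance of the total score (the ratings matrix is invented):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: rows are respondents, columns are survey items."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point ratings: 6 respondents x 4 items
ratings = np.array([[4, 5, 4, 4],
                    [2, 3, 2, 3],
                    [5, 5, 4, 5],
                    [3, 3, 3, 2],
                    [4, 4, 5, 4],
                    [1, 2, 2, 1]])
print(f"alpha = {cronbach_alpha(ratings):.2f}")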
Split-half reliability
Divide the test in half and correlate scores from each half
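A matching sketch for split-half reliability, using the same invented ratings: correlate the totals from the two halves, then apply the Spearman-Brown formula, which is commonly used to estimate full-length reliability from a half-test correlation.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical ratings: 6 respondents x 4 items
ratings = np.array([[4, 5, 4, 4],
                    [2, 3, 2, 3],
                    [5, 5, 4, 5],
                    [3, 3, 3, 2],
                    [4, 4, 5, 4],
                    [1, 2, 2, 1]])

half1 = ratings[:, ::2].sum(axis=1)   # total on odd-numbered items
half2 = ratings[:, 1::2].sum(axis=1)  # total on even-numbered items

r, _ = pearsonr(half1, half2)
# Spearman-Brown corrects for the halves being shorter than the full test
print(f"split-half r = {r:.2f}, corrected = {2 * r / (1 + r):.2f}")
```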
What is considered strong for internal consistency and inter-rater reliability?
0.7
Inter-Rater Reliability
Degree of agreement between observers who are measuring the same behaviors
Can be assessed by examining the judgments/ratings made by multiple observers
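A minimal sketch, assuming two observers independently counted the same target behavior across seven sessions (all counts invented):

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical behavior counts from two independent observers
rater_a = np.array([3, 7, 5, 2, 8, 6, 4])
rater_b = np.array([4, 7, 5, 3, 7, 6, 4])

r, _ = pearsonr(rater_a, rater_b)
print(f"inter-rater r = {r:.2f}")  # a high r means the observers largely agree
```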
5 different types of validity (does the test measure what it is intended to measure?)
Face, content, criterion, convergent, discriminant
Face Validity
A measurement procedure superficially appears to measure what it claims to measure
Content Validity
How well do the items represent the entire universe of items for the construct (variable, concept)?
Criterion Validity
Does the measure under consideration associate with a concrete behavioral outcome that it should be associated with?
Convergent Validity
A measure should correlate more strongly with other measures of the same or similar constructs
Discriminant Validity
A measure should not correlate strongly with other measures of unrelated constructs
Convergent and Discriminant validity
Often evaluated together by examining patterns of correlations among self-report measures
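A sketch of that pattern with simulated data (measure names and values are invented): two scales of the same construct should correlate strongly with each other and only weakly with a measure of an unrelated construct.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical self-report data for 50 participants
esteem_scale_a = rng.normal(50, 10, 50)
esteem_scale_b = esteem_scale_a + rng.normal(0, 4, 50)  # same construct, different scale
reading_speed = rng.normal(250, 30, 50)                 # unrelated construct

corr = np.corrcoef([esteem_scale_a, esteem_scale_b, reading_speed])
print(corr.round(2))
# Expect a strong A-B correlation (convergent validity) and weak
# correlations with reading speed (discriminant validity)
```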
Reliability and validity
Can be reliable without being valid
Can't be valid without being reliable