Mean
the arithmetic average; the balance point of a distribution of scores
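As a formula, for a sample of n scores x_1, …, x_n:
$$\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i$$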
standard deviation
how far each score is “on average” from the mean
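As a formula (shown here with n in the denominator; some texts use n − 1 for the sample estimate):
$$s = \sqrt{\frac{1}{n}\sum_{i=1}^{n}(x_i - \bar{x})^2}$$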
z score
the number of standard deviations a score falls from the mean; a way of standardizing deviations
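As a formula, for a score x with mean \bar{x} and standard deviation s:
$$z = \frac{x - \bar{x}}{s}$$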
standard scores
a transformed score with a known mean and standard deviation (like a z score rescaled so values come out as positive whole numbers)
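One common example is the T score convention, which rescales z to a mean of 50 and a standard deviation of 10:
$$T = 50 + 10z$$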
correlation coefficient
measures the strength and direction of the linear relationship between two variables.
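As a formula, the Pearson correlation between variables x and y:
$$r_{xy} = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{n}(x_i - \bar{x})^2}\,\sqrt{\sum_{i=1}^{n}(y_i - \bar{y})^2}}$$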
psychological construct
a concept that cannot be observed directly; a theoretical attribute that must be inferred from behavior (e.g., introversion)
operational definition
the specific procedure used to measure the construct
operationalism
the philosophical position that a construct is defined by the procedures used to establish it: “you can’t let the theoretical definition get too far ahead of the measurement tools.”
reliability
the consistency of measurement
retest reliability
the correlation between scores on the same test given twice; the more time between administrations, the lower the test-retest reliability
alternate form reliability
correlation between 2 versions of the same test
internal consistency coefficient
estimating the reliability of a sum of many part scores from the correlations among the part scores; Cronbach’s alpha is an example.
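Cronbach’s alpha for k items, where \sigma_i^2 is the variance of item i and \sigma_X^2 is the variance of the total score:
$$\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma_i^2}{\sigma_X^2}\right)$$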
true score
a hypothetical average of many measurements with a particular test, assuming no carryover effects (a reliability concept)
construct score
the Platonic true score; a person’s true level on a latent construct, independent of any measurement procedure
observed score
true score plus the measurement error
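In classical test theory notation, with X the observed score, T the true score, and E the measurement error:
$$X = T + E$$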
measurement error
the deviation of an observed score from the true score; the difference between what is observed and the true value of the quantity being measured
split-half reliability
the correlation between scores on two halves of the same test (usually corrected upward to estimate the reliability of the full-length test)
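The half-test correlation r_{hh} is typically stepped up with the Spearman–Brown formula to estimate full-length reliability:
$$r_{\text{full}} = \frac{2\,r_{hh}}{1 + r_{hh}}$$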
carryover effects
practice effects and fatigue effects
construct validity
the totality of evidence supporting the use of a test
how many forms of validity? name them!
9! construct, content, test, face, concurrent, convergent, predictive, incremental, discriminant
face validity
just by inspecting the items of a measure, do they seem like measures of the intended construct?
convergent validity
does the test correlate with the variables it’s predicted to correlate with?
discriminant validity
is the test uncorrelated with variables it should not correlate with?
concurrent validity
the correlation between two measures of the same construct taken at the same time; the new test’s results match those of an established test.
predictive validity
how well one measure predicts a future outcome; can the test “guess the future”?
incremental validity
when multiple predictors are used together, does each predictor give us new information that the others don’t already provide?
statistical bias
systematic error, as applied to test validity: scores are consistently off in a particular direction. Unlike random error, it never cancels out and compounds with each measurement. Test bias does not equal test unfairness.
central tendency bias
never giving extreme answers to anything; often seen in sophisticated adults. One remedy is to soften extreme response options (“almost never” vs. “never”).
jingle fallacy
assuming two measures with the same (or similar-sounding) name measure the same thing when they actually measure different things
jangle fallacy
assuming two measures with different names measure different things when they are actually measuring the same thing
bloated specifics
highly similar items that correlate well with each other but are mostly redundant (inflates reliability without adding new information)
examples of bad test items that seem obvious?
double-barreled questions, leading questions, negative or extreme wording, opting in vs. opting out, presumptuous questions, ranked responses
response bias
systematically give inaccurate answers
leniency bias/severity bias
excessively forgiving/harsh
social desirability bias
making oneself look good. includes impression management and self-deception.
standardized test
test with known mean and standard deviation