Construct validity (look all these up in the book)
Adequacy of the operational definition of variables: the extent to which the operational definition of a variable reflects the true theoretical meaning of the variable.
Predictive validity
Demonstrated when research shows that scores on the measure accurately predict the behavior or outcome they are intended to predict.
Concurrent Validity
Research that examines the relationship between the measure and a criterion behavior at the same time (e.g., a study in which two or more groups differ on the measure in expected ways).
Convergent Validity
The extent to which scores on the measure in question are related to scores on other measures of the same construct or similar constructs.
Discriminant validity
When a measure is not related to variables with which it should not be related.
Reactivity of a measure of behavior and ways to minimize it
When awareness of being measured changes an individual's behavior; it can be minimized by using unobtrusive (nonreactive) measures or by giving participants time to become accustomed to being observed.
Nominal scales (this card and the next three cover the properties of the four scales)
Have no numerical or quantitative properties; instead, categories or groups simply differ from one another (e.g., being left-handed, right-handed, or ambidextrous: being left-handed does not imply a greater amount of handedness).
Ordinal scales
Rank-order the levels of the variable being studied; categories can be ordered from first to last. Letter grades and movie-star ratings are good examples.
Interval scales
The difference between the numbers on the scale is meaningful; specifically, the intervals between numbers are equal in size, but there is no absolute zero point.
A thermometer is a good example: 0° does not mean the absence of temperature.
Ratio scales
Have an absolute zero point that indicates the absence of the variable being measured.
Examples include length, weight, and time (a person who weighs 220 pounds weighs twice as much as a person who weighs 110 pounds).
Internal consistency reliability
The assessment of reliability using responses at only one point in time; because all items measure the same variable, they should yield similar or consistent results.
Interrater reliability
The extent to which raters agree in their observations: if two raters are judging whether behaviors are aggressive, high interrater reliability means they make the same judgments.
Measurement error
The difference between an unobservable true state (the true score) and the measured value (the observed score).
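In classical test theory this relationship is conventionally written as

X = T + E

where X is the observed score, T is the true score, and E is random measurement error; a measure is reliable to the extent that the error component is small.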
Cronbach's alpha
The most commonly used indicator of reliability based on internal consistency.
Provides the average of all possible split-half reliability coefficients.
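For k items with individual item variances \sigma_i^2 and total-score variance \sigma_X^2, the standard formula is \alpha = \frac{k}{k-1}\left(1 - \frac{\sum_i \sigma_i^2}{\sigma_X^2}\right). Below is a minimal sketch of the computation in Python; the function name and the sample scores are hypothetical, and scores are assumed to be arranged as respondents × items.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha from an (n_respondents, k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    # Sum of the individual item variances (sample variances, ddof=1).
    item_variances = items.var(axis=0, ddof=1).sum()
    # Variance of each respondent's total score across all items.
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Five respondents answering a four-item scale (hypothetical data).
scores = [[4, 5, 4, 4],
          [2, 3, 2, 3],
          [5, 5, 4, 5],
          [3, 3, 3, 2],
          [4, 4, 5, 4]]
print(cronbach_alpha(scores))  # ~0.93: items yield consistent results
```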
Cohen’s kappa
Cohen's kappa coefficient (κ, lowercase Greek kappa) is a statistic used to measure interrater reliability for qualitative (categorical) items. It is generally considered more robust than a simple percent-agreement calculation because κ accounts for the possibility of agreement occurring by chance.
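The statistic compares the observed proportion of agreement p_o with the agreement p_e expected by chance: \kappa = \frac{p_o - p_e}{1 - p_e}. Below is a minimal sketch, assuming each rater's judgments are given as an equal-length sequence of category labels; the function name and example data are hypothetical.

```python
import numpy as np

def cohens_kappa(rater1, rater2):
    """Cohen's kappa for two raters' categorical judgments of the same items."""
    rater1, rater2 = np.asarray(rater1), np.asarray(rater2)
    categories = np.union1d(rater1, rater2)
    # Observed agreement: proportion of items both raters labeled the same.
    p_o = np.mean(rater1 == rater2)
    # Chance agreement: product of each rater's marginal proportions, summed.
    p_e = sum(np.mean(rater1 == c) * np.mean(rater2 == c) for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Two raters judging whether ten behaviors are aggressive (1) or not (0).
r1 = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
r2 = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]
print(cohens_kappa(r1, r2))  # ~0.58: agreement well beyond chance
```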
Demand characteristics
Any feature of an experiment that might inform participants of the purpose of the study.
Social desirability
A response set that leads the individual to answer in the most socially desirable way.
Scales of measurement
The scale of measurement determines the appropriate analysis: comparing group percentages, correlating individuals' scores on two variables, and comparing group means.
Heisenberg uncertainty principle
A fundamental limit to how precisely certain pairs of physical properties of a particle, such as position and momentum, can be known simultaneously.
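For position and momentum the limit is commonly stated as

\Delta x \, \Delta p \ge \frac{\hbar}{2}

where \Delta x and \Delta p are the standard deviations of position and momentum and \hbar is the reduced Planck constant.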
Observer effect
Being observed changes the responses of those being observed.
Experimenter effect
The tendency for a researcher's actions, expectations, or characteristics to unintentionally influence the results of a study.
True score
Someone's real value on a given variable.