Construct
the thing a questionnaire is supposed to measure
Preconditions for validity
discrimination, reliability, and structure
Validity
pattern of links to external constructs
Preconditions
necessary but not sufficient for validity
Observed score
the score you actually obtain when you measure a construct
True score
the score you would get if measurement were completely valid; no construct is ever measured perfectly, there is always some error, so the true score is hypothetical
Hypothetical correlation between them
If the link between the observed score and the true score were perfect, you would have a perfect measurement
Two sources of invalidity
Systematic error and random error
Systematic error
a potentially knowable bias that pushes scores in one direction; a directional confound
Random error
many unknown, miscellaneous influences that push scores every which way; jittery noise in no particular direction, caused by lots of different things you can't identify
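A quick way to see the two error types is to simulate them. The sketch below is not from the original cards; the true score, the constant bias standing in for systematic error, and the zero-mean noise standing in for random error are all assumed values.

```python
import numpy as np

rng = np.random.default_rng(0)

true_score = 50.0          # hypothetical true score (assumed value)
systematic_error = 3.0     # constant directional bias (assumed value)
random_error = rng.normal(loc=0.0, scale=2.0, size=1000)  # zero-mean jitter

# Observed score = true score + systematic error + random error
observed = true_score + systematic_error + random_error

print(observed.mean())  # sits ~3 points above the true score (the bias)
print(observed.std())   # spread reflects only the random error
```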
5 steps to achieving construct or measurement validity for a questionnaire:
Item design
Item analysis
Reliability analysis
Factor analysis
Scale validation
Item
the name for one question in a questionnaire
scale
all the items together
inventory
a questionnaire that contains more than one scale
Item design: types of items
open-ended and close-ended
Face validity
If an item looks like it measures some construct, it is probably more likely to do so than if it didn’t. But intuition is not a perfect index of validity
Response Options: How many options are optimal?
More than two options gives you more information; 5 to 7 is optimal, with no advantage to more
Should you include a neutral option?
It can increase accuracy but can also increase laziness, so there is no overall benefit
Item Analysis
What should an item do?
Discriminate between different people, and discriminate to different degrees
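One common check on whether an item discriminates is the corrected item-total correlation. A minimal sketch with pandas; the item names and responses are made-up assumptions.

```python
import pandas as pd

# Made-up responses: rows = respondents, columns = items (assumed data)
items = pd.DataFrame({
    "item1": [1, 2, 4, 5, 3, 4],
    "item2": [2, 2, 5, 4, 3, 5],
    "item3": [3, 3, 3, 3, 3, 3],   # no variance -> cannot discriminate
})

for col in items.columns:
    rest = items.drop(columns=col).sum(axis=1)   # total of the other items
    r = items[col].corr(rest)                    # corrected item-total correlation
    print(col, round(r, 2) if pd.notna(r) else "undefined (no variance)")
```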
Three primary types of reliability analysis
Between items, over time, and between scorers
reliability analysis : between items
items should assess the same thing to the same degree (internal consistency)
reliability analysis : over time
(test-retest reliability). This should be high over short periods of time for personality traits
reliability analysis: between scorers
(inter-rater reliability). Only an issue for open-ended items; asks whether two or more coders agree on the meaning of a response
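Sketches of two of these checks, using scipy for test-retest correlation and scikit-learn's Cohen's kappa for coder agreement; all scores and codes here are illustrative, not from the source.

```python
from scipy.stats import pearsonr
from sklearn.metrics import cohen_kappa_score

# Test-retest reliability: correlate scores from time 1 and time 2 (made-up scores)
time1 = [12, 18, 25, 30, 22, 15]
time2 = [13, 17, 27, 29, 21, 16]
r, _ = pearsonr(time1, time2)
print("test-retest r:", round(r, 2))

# Inter-rater reliability: agreement between two coders on open-ended responses
coder_a = ["pos", "neg", "pos", "neu", "neg"]
coder_b = ["pos", "neg", "pos", "pos", "neg"]
print("Cohen's kappa:", round(cohen_kappa_score(coder_a, coder_b), 2))
```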
How should items relate to one another?
Related = on the same team, distinct = in different positions
Parts should relate to the whole so an item should
neither measure something unrelated to what the scale as a whole measures, nor overlap completely with what another item does
Assessing internal consistency
the overall form of reliability; assessed with alpha (Cronbach's alpha), a scale-level index
Overall internal consistency: alpha
A scale-level index
Increases with the number of items
Lower alphas limit the correlations a scale can show with other measures
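A minimal numpy sketch of the usual alpha formula, alpha = k/(k-1) * (1 - sum of item variances / variance of the total score), on made-up data; everything here is illustrative.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Made-up responses for a 4-item scale (assumed data)
scores = np.array([
    [4, 5, 4, 5],
    [2, 2, 3, 2],
    [5, 4, 5, 4],
    [3, 3, 2, 3],
    [1, 2, 1, 2],
])
print(round(cronbach_alpha(scores), 2))
```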
A factor is
a single underlying dimension
Why does a factor analysis have to be conducted?
Because it is possible to have two or more factors underlying a highly reliable scale
Essence of factor analysis
Where do the correlations clump?
Three stages of FA
factor extraction, factor rotation, factor interpretation
Factor extraction
Use Principal Axis Factoring or Maximum Likelihood
Use the gap in the scree plot during factor extraction
to infer the number of factors
Eigenvalue is
a number associated with each factor
The higher the eigenvalue, the
more important the factor
An eigenvalue must exceed 1 for the factor to be retained (the Kaiser criterion)
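A minimal sketch of pulling eigenvalues out of an item correlation matrix with numpy; the simulated responses and the eigenvalue-greater-than-1 cut-off are used purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up responses: 200 respondents x 6 items (assumed data)
responses = rng.normal(size=(200, 6))
responses[:, :3] += rng.normal(size=(200, 1))   # items 1-3 share one underlying factor
responses[:, 3:] += rng.normal(size=(200, 1))   # items 4-6 share another

corr = np.corrcoef(responses, rowvar=False)     # item correlation matrix
eigenvalues = np.linalg.eigvalsh(corr)[::-1]    # sorted largest first, as in a scree plot

print(np.round(eigenvalues, 2))
print("factors with eigenvalue > 1:", int((eigenvalues > 1).sum()))
```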
Factor Rotation
Use Orthogonal Rotation:
Assumes factors are independent; solution is more interpretable
Factor rotation: Use Oblique Rotation
Allows factors to be correlated; solution is less interpretable
Factor Interpretation
An item loads on a factor when its loading is high (typically > .35)
An item does not load on a factor when its loading is low (typically < .35)
These correlations are called
loadings; an item is said to load on a factor
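A sketch of extraction, varimax (orthogonal) rotation, and interpretation using scikit-learn's FactorAnalysis; this is one possible tool rather than the one the cards assume, and the data and the .35 cut-off are illustrative.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(2)

# Made-up responses: items 1-3 driven by one factor, items 4-6 by another (assumed data)
f1, f2 = rng.normal(size=(200, 1)), rng.normal(size=(200, 1))
responses = np.hstack([f1 + rng.normal(scale=0.5, size=(200, 3)),
                       f2 + rng.normal(scale=0.5, size=(200, 3))])

fa = FactorAnalysis(n_components=2, rotation="varimax")  # orthogonal rotation
fa.fit(responses)

loadings = fa.components_.T                    # items x factors
for i, row in enumerate(loadings, start=1):
    factors = np.where(np.abs(row) > 0.35)[0] + 1   # interpretation: loading > .35
    print(f"item{i} loads on factor(s): {factors}")
```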
Confirmatory Factor Analysis
Hypothesize links between variables
Check how well it fits the data
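A sketch of a one-factor confirmatory model, assuming the third-party semopy package and its lavaan-style model syntax; the construct name, item names, and data are all made up for illustration.

```python
import numpy as np
import pandas as pd
import semopy  # assumption: semopy is installed and accepts lavaan-style descriptions

rng = np.random.default_rng(3)
latent = rng.normal(size=200)

# Made-up indicators of a single hypothesized construct (illustrative names)
data = pd.DataFrame({
    "item1": latent + rng.normal(scale=0.5, size=200),
    "item2": latent + rng.normal(scale=0.5, size=200),
    "item3": latent + rng.normal(scale=0.5, size=200),
})

# Hypothesized measurement model: one latent factor measured by three items
desc = "construct =~ item1 + item2 + item3"

model = semopy.Model(desc)
model.fit(data)            # estimate loadings and check fit to the data
print(model.inspect())     # parameter estimates
```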
Item Response Theory
Models responding as a function of the person (their trait level) and the item (its many aspects)
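One standard IRT model, not specifically named in the cards, is the two-parameter logistic (2PL): the probability of endorsing an item depends on the person's trait level theta and the item's difficulty b and discrimination a. The values below are illustrative.

```python
import numpy as np

def p_endorse(theta: float, a: float, b: float) -> float:
    """2PL model: P(endorse) given trait theta, item discrimination a, difficulty b."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# Illustrative values: a low-trait and a high-trait person on an easy vs. hard item
for theta in (-1.0, 1.0):                      # person trait levels
    for a, b in ((1.5, -0.5), (1.5, 1.0)):     # (discrimination, difficulty)
        print(f"theta={theta:+}, b={b:+}: P={p_endorse(theta, a, b):.2f}")
```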