Flashcards created from notes on basic psychometrics and how to construct a questionnaire.
What key question must be addressed regarding construct validity in questionnaires?
Does a questionnaire actually measure what its proponents claim it measures?
What are the two further questions related to construct validity?
Are the preconditions for validity met? Is validity itself present?
What are the two sources of invalidity mentioned in the lecture?
Systematic error and random error.
What is systematic error?
A potentially knowable bias that consistently pushes scores in one direction (either up or down).
What does random error refer to in the context of construct validity?
Many unknown, miscellaneous influences that push scores unpredictably in both directions.
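The two error sources can be summarized with the classical test theory decomposition of an observed score. The notation below (X for the observed score, T for the true score, b for systematic bias, e for random error) is a common convention used here for illustration, not notation taken from the lecture.

```latex
% Classical test theory decomposition (illustrative notation, assumed here):
% X = observed score, T = true score, b = systematic error, e = random error.
X = T + b + e, \qquad \mathbb{E}[e] = 0
% Over repeated measurements random error averages out, systematic error does not:
\mathbb{E}[X] = T + b
```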
What are the five steps in the standard process of constructing a questionnaire?
1) Item design 2) Item analysis 3) Reliability analysis 4) Factor analysis 5) Scale validation
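Step 3 (reliability analysis) is typically operationalized with an internal-consistency coefficient such as Cronbach's alpha. The sketch below is a minimal illustration of that calculation; the function name, the toy Likert data, and the use of numpy are assumptions for the example and are not part of the lecture notes.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a respondents x items score matrix.

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))
    """
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # sample variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Toy data: 5 respondents answering 4 Likert items (1-5).
responses = np.array([
    [4, 5, 4, 5],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [1, 2, 1, 2],
])
print(round(cronbach_alpha(responses), 2))  # values near 1 suggest internal consistency
```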
What types of items are involved in item design for questionnaires?
Open-ended and close-ended items.
What type of item encourages qualitative feedback and can be quantitatively coded later?
Open-ended items.
What is face validity in the context of questionnaire item design?
The item appears on its face to measure what it is supposed to measure; face validity alone is not sufficient to establish construct validity.
What should the number of forward-scored and reverse-scored items be in a questionnaire?
They should be roughly equal.
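Before totals or reliability statistics are computed, reverse-scored items must be recoded so that high values point in the same direction as the forward-scored items. A minimal sketch follows, assuming a Likert scale with known minimum and maximum; the function name and example values are illustrative.

```python
def reverse_score(raw: int, scale_min: int = 1, scale_max: int = 5) -> int:
    """Recode a reverse-keyed Likert response so high values mean the same
    thing as on forward-keyed items: reversed = (min + max) - raw."""
    return (scale_min + scale_max) - raw

# Example: on a 1-5 scale, a raw response of 2 to a reverse-keyed item becomes 4.
print(reverse_score(2))                               # 4
print([reverse_score(x) for x in [1, 2, 3, 4, 5]])    # [5, 4, 3, 2, 1]
```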
What is a bad example of item design from the Oxford Capacity Analysis?
Leading questions that bias responses, and items that are vague or overly complex.