Main Methods of Assessment
observer ratings
interviews (structured and unstructured)
self-reports
implicit assessments
Observer Ratings
Evaluations made by another person, based on observation
structured interviews
Uses pre-set questions; yields more consistent data
unstructured interviews
Open-ended questions; allows for flexibility but less reliable
self-reports
Individuals report on their own behaviors, feelings, or thoughts
self-report formats
True/False
Multi-point rating scales
Likert scales
observations
Conducted in natural or classroom settings to assess real-time behavior
implicit assessments
Techniques that do not directly ask about traits but infer them
example: Thematic Apperception Test, Rorschach inkblots
subjective assessment
Based on interpretation, so susceptible to bias; individuals may also try to manage the impression they give
objective assessment
Standardized; less interpretive, but not immune to misreading
(e.g., "What's wrong?" "Nothing").
reliability
Consistency of a Measurement
The extent to which an assessment produces stable and consistent results.
internal consistency
Repeated items in a self-report should align if measuring the same trait.
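Internal consistency is commonly quantified with Cronbach's alpha, which compares the variance of individual items to the variance of total scores. A minimal sketch in plain Python (the function name and toy data are illustrative, not from these notes):

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha for internal consistency.

    items: one list of respondent scores per item (all the same length).
    """
    k = len(items)
    item_vars = sum(variance(col) for col in items)       # variance of each item
    totals = [sum(scores) for scores in zip(*items)]      # each respondent's total
    return k / (k - 1) * (1 - item_vars / variance(totals))
```

When every item measures the same trait perfectly (identical columns), alpha reaches its maximum of 1.0; uncorrelated items pull it toward 0.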
inter-rater reliability
Different raters should produce similar scores when observing the same behavior.
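Inter-rater reliability for categorical ratings is often summarized with Cohen's kappa, which corrects raw agreement for agreement expected by chance. A minimal sketch (illustrative names and data):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters' category labels."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    # agreement expected if both raters assigned labels independently at random
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)
```

Kappa is 1.0 for perfect agreement and 0 when agreement is no better than chance.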
stability across time
A reliable assessment should give similar results when repeated over time.
Personality is generally stable, but certain events can cause changes
validity
The degree to which a test measures what it claims to measure
types of validity
Construct Validity
Predictive/External Validity
Convergent Validity
Discriminant Validity
Face Validity
Construct Validity
The measure reflects the theoretical trait (e.g., depression)
Predictive/External Validity
The measure predicts relevant outcomes (e.g., success, risk)
Convergent Validity
The measure is related to other measures of similar constructs
Discriminant Validity
The measure does not overlap with unrelated constructs
Face Validity
On the surface, the measure appears to assess what it is intended to
culture and assessment
Cultural context affects interpretation of items and behaviors.
Language differences and cultural norms must be considered.
Avoid over-pathologizing (e.g., MMPI-2 vs. MMPI-2-RF in diverse populations).
types of bias in assessment
Response Sets
Acquiescence
Social Desirability
Malingering
Response Sets
Tendency to respond in a set way regardless of content.
Acquiescence
Tendency to agree with items
Social Desirability
Responding in a way that makes one look good
Malingering
Exaggerating or faking symptoms for secondary gain
types of empirical approaches
Empirical Assessment
Criterion Keying
Empirical Assessment
Data-driven; classifies individuals according to their response patterns
Criterion Keying
Identifies items that differentiate between groups (e.g., MMPI items answered differently by clinical vs. non-clinical groups).
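Criterion keying can be sketched as selecting items whose endorsement rates differ between a criterion group and a comparison group; the function name, 0/1 answer coding, and threshold below are illustrative assumptions, not from the MMPI itself:

```python
def criterion_key(clinical, nonclinical, threshold=0.25):
    """Return indices of items that separate the two groups.

    clinical / nonclinical: lists of respondents, each a list of 0/1 answers.
    threshold: minimum difference in endorsement proportion (arbitrary here).
    """
    def endorsement_rates(group):
        n = len(group)
        return [sum(resp[i] for resp in group) / n for i in range(len(group[0]))]

    rates_c = endorsement_rates(clinical)
    rates_n = endorsement_rates(nonclinical)
    return [i for i, (c, nc) in enumerate(zip(rates_c, rates_n))
            if abs(c - nc) >= threshold]
```

Items that both groups answer alike are dropped; only the discriminating items are keyed into the final scale, regardless of what the items appear to measure on their face.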