What is Psychological Assessment?
A comprehensive process of gathering information about an individual using multiple methods (tests, interviews, observations) to answer a referral question or make a decision.
Differentiate Psychological Testing from Psychological Assessment.
Psychological testing is the process of administering a test and obtaining a score, while assessment is a broader, problem-solving process that integrates test data with other information (e.g., history, observations) to make a diagnosis or inform interventions.
Define a Psychological Test.
A standardized procedure for sampling behavior and describing it with categories or scores, often with norms or standards for interpretation.
What is meant by "Standardization" in psychological testing?
The process of ensuring uniformity in the procedures for administering, scoring, and interpreting a test, including specified instructions, time limits, and scoring rubrics.
What are "Test Norms"?
The average or typical scores on a test for a particular group, providing a basis for comparison to understand an individual's score relative to others.
What is "Reliability" in psychological testing?
The consistency or stability of a test score, indicating the extent to which the test yields the same results on repeated trials or with different items measuring the same construct.
Name and briefly describe four common types of Reliability.
Test-retest reliability (stability of scores across two administrations of the same test), parallel/alternate-forms reliability (consistency of scores across equivalent versions of a test), internal consistency (the degree to which items within a test measure the same construct, e.g., split-half or Cronbach's alpha), and inter-rater reliability (agreement between different scorers or observers). A sketch of one of these appears below.
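To make internal consistency concrete, here is a minimal sketch of Cronbach's alpha computed from an item-by-person score matrix. It assumes NumPy, and the response data are invented for illustration:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)       # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical responses: 5 people x 4 Likert items
scores = np.array([[4, 5, 4, 5],
                   [2, 2, 3, 2],
                   [5, 4, 5, 4],
                   [3, 3, 2, 3],
                   [1, 2, 1, 2]])
print(round(cronbach_alpha(scores), 2))
```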
What is "Validity" in psychological testing?
The extent to which a test measures what it claims to measure, and how appropriately the test scores can be interpreted for a specific purpose.
What is "Content Validity"?
The extent to which test items adequately sample the entire domain or content area that the test is designed to measure.
Explain "Criterion-related Validity" and its two types.
The extent to which test scores are related to an external criterion measure.
What is "Construct Validity"?
The most fundamental type of validity, referring to how well a test measures an abstract psychological construct (e.g., intelligence, anxiety). It involves accumulating evidence from various sources.
Differentiate between "Objective" and "Projective" personality tests.
Objective tests use structured response formats (e.g., True/False, Likert scales) and empirically derived scoring. Projective tests use ambiguous stimuli, allowing for open-ended responses that are interpreted for underlying personality dynamics.
Provide one example each of an Objective and a Projective personality test.
Objective: Minnesota Multiphasic Personality Inventory (MMPI), Revised NEO Personality Inventory (NEO-PI-R). Projective: Rorschach Inkblot Test, Thematic Apperception Test (TAT).
What is a "Raw Score"?
The initial score obtained by an individual on a test before any interpretation or transformation is applied (e.g., number of correct answers).
Explain the purpose of converting raw scores to "Standard Scores."
Standard scores (e.g., Z-scores, T-scores, IQ scores) transform raw scores into a common scale, indicating an individual's performance relative to the norm group, allowing for comparison across different tests.
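As an illustration of these conversions, here is a small Python sketch; the raw score, mean, and SD are hypothetical values:

```python
def to_standard_scores(raw: float, mean: float, sd: float) -> dict:
    """Convert a raw score to common standard-score metrics."""
    z = (raw - mean) / sd            # z-score: mean 0, SD 1
    return {
        "z": z,
        "T": 50 + 10 * z,            # T-score: mean 50, SD 10
        "IQ": 100 + 15 * z,          # deviation IQ: mean 100, SD 15
    }

# A raw score of 30 on a test normed at mean 25, SD 5
print(to_standard_scores(30, mean=25, sd=5))  # z=1.0, T=60, IQ=115
```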
How is a "Percentile Rank" interpreted?
A percentile rank indicates the percentage of individuals in the norm group who scored at or below a given raw score. For example, a percentile rank of 75 means the person scored as well as or better than 75% of the norm group.
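A minimal sketch of the "at or below" calculation, using an invented norm group:

```python
def percentile_rank(score: float, norm_scores: list[float]) -> float:
    """Percentage of the norm group scoring at or below the given score."""
    at_or_below = sum(1 for s in norm_scores if s <= score)
    return 100 * at_or_below / len(norm_scores)

norm_group = [10, 12, 14, 15, 15, 17, 18, 20]
print(percentile_rank(15, norm_group))  # 62.5
```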
What is the "Standard Error of Measurement (SEM)"?
An estimate of the amount of error inherent in an observed test score, representing the typical distance between the observed score and the true score.
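The usual CTT formula is SEM = SD × √(1 − reliability); a small sketch with illustrative values:

```python
import math

def sem(sd: float, reliability: float) -> float:
    """Standard Error of Measurement: SD * sqrt(1 - reliability)."""
    return sd * math.sqrt(1 - reliability)

# A test with SD = 15 and reliability .91 (illustrative values)
print(round(sem(15, 0.91), 2))  # 4.5
```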
Define "Intelligence Testing."
The measurement of an individual's cognitive abilities and intellectual potential, often encompassing areas such as verbal comprehension, perceptual reasoning, working memory, and processing speed.
Differentiate "Aptitude Tests" from "Achievement Tests."
Aptitude tests measure an individual's potential or capacity to learn a new skill or perform a specific task. Achievement tests measure what an individual has already learned or mastered.
What are key ethical principles that must guide psychological assessment?
Competence, informed consent, confidentiality, beneficence and non-maleficence, integrity, and justice.
Why is "Informed Consent" crucial in psychological assessment?
It ensures that individuals voluntarily agree to participate in assessment after being fully informed about its purpose, procedures, potential risks and benefits, confidentiality limits, and their right to withdraw.
What is "Test Bias"?
A systematic error in a test score that results in an unfair disadvantage or advantage for a particular group of test-takers, often related to cultural or demographic factors.
What is the purpose of a "Clinical Interview" in the assessment process?
To gather comprehensive subjective information about a client's history, symptoms, experiences, and current functioning, complementing objective test data.
What role does a Psychometrician play in the assessment process?
Psychometricians are involved in the administration, scoring, and initial interpretation of psychological tests under the supervision of a licensed psychologist, ensuring adherence to standardized procedures.
Define "Response Style" or "Set" in testing.
A consistent way of responding to test items that is unrelated to the content of the items, such as a tendency to agree (acquiescence), socially desirable responding, or malingering.
How does Classical Test Theory (CTT) conceptualize an observed test score?
In CTT, an observed score (X) is conceptualized as the sum of a true score (T) and random error (E), expressed as X = T + E. The goal is to estimate the true score by minimizing error.
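A brief simulation of this decomposition (assuming NumPy; the true-score and error SDs are arbitrary) shows how reliability emerges as the ratio of true-score variance to observed-score variance:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
true_scores = rng.normal(100, 15, n)   # T: latent true scores
error = rng.normal(0, 5, n)            # E: random error with mean 0
observed = true_scores + error         # X = T + E

# Reliability in CTT is true-score variance over observed-score variance
print(true_scores.var() / observed.var())  # ~ 15^2 / (15^2 + 5^2) = .90
```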
What is the primary difference between Classical Test Theory (CTT) and Item Response Theory (IRT)?
CTT focuses on the total test score and treats measurement error as constant across individuals. IRT instead models the relationship between an individual's latent trait (ability) and the probability of a particular response to each item, estimating item-specific difficulty and discrimination parameters; in theory, its person and item parameters are sample-independent.
Explain Item Characteristic Curves (ICCs) in the context of Item Response Theory (IRT).
ICCs graphically represent the probability of a test-taker with a given level of ability (latent trait) answering an item correctly (or endorsing it in a certain way). They illustrate an item's difficulty, discrimination, and sometimes guessing parameters.
Describe the concept of Item Difficulty (b parameter) in IRT.
The item difficulty (b parameter) in IRT indicates the point on the latent trait continuum where a test-taker has a 0.50 probability of answering the item correctly. A higher b value means a more difficult item.
Describe the concept of Item Discrimination (a parameter) in IRT.
The item discrimination (a parameter) in IRT indicates how well an item differentiates between test-takers with different levels of the latent trait. A steeper slope on the ICC indicates higher discrimination.
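Putting the b and a parameters together, here is a sketch of a 3PL item characteristic curve (setting the guessing parameter c to 0 reduces it to the 2PL model); the parameter values are arbitrary:

```python
import math

def icc(theta: float, a: float, b: float, c: float = 0.0) -> float:
    """3PL item characteristic curve: P(correct | theta).
    a = discrimination (slope), b = difficulty (location),
    c = pseudo-guessing lower asymptote (0 gives the 2PL model)."""
    return c + (1 - c) / (1 + math.exp(-a * (theta - b)))

# At theta == b, a 2PL item is answered correctly with probability .50
print(icc(theta=0.0, a=1.5, b=0.0))  # 0.5
print(icc(theta=1.0, a=1.5, b=0.0))  # higher ability -> higher probability
```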
What is the significance of the Standard Error of Estimate (SEE) in psychological assessment?
The SEE is a measure of the accuracy of predictions made by a regression equation. In psychological assessment, it quantifies the typical distance between the predicted score on a criterion (e.g., job performance) and the actual observed score, reflecting the error in prediction.
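The standard formula is SEE = SD_y × √(1 − r²); a small sketch with illustrative numbers:

```python
import math

def see(sd_criterion: float, validity_r: float) -> float:
    """Standard Error of Estimate: SD_y * sqrt(1 - r^2)."""
    return sd_criterion * math.sqrt(1 - validity_r ** 2)

# Criterion SD = 10, validity coefficient r = .60 (illustrative)
print(round(see(10, 0.60), 2))  # 8.0
```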
How is a Confidence Interval (CI) for a test score constructed and what does it represent?
A CI for a test score is constructed from the observed score and the Standard Error of Measurement (SEM), typically as observed score ± z × SEM. It represents a range of scores within which the individual's "true score" is likely to fall at a specified level of confidence (e.g., a 95% CI means we can be 95% confident that the true score lies within that range).
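A minimal sketch of this construction, reusing the SEM value from the earlier example:

```python
def confidence_interval(observed: float, sem: float, z: float = 1.96):
    """Band around the observed score likely to contain the true score."""
    return observed - z * sem, observed + z * sem

# Observed IQ 110 with SEM 4.5: 95% CI of roughly 101 to 119
print(confidence_interval(110, 4.5))  # (101.18, 118.82)
```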
Differentiate between Convergent Validity and Discriminant Validity as facets of Construct Validity.
Convergent validity demonstrates that a test is highly correlated with other measures of the same construct. Discriminant validity (or divergent validity) shows that a test is not significantly correlated with measures of theoretically different constructs.
What is Face Validity and why is it generally not considered a strong form of validity evidence?
Face validity refers to whether a test appears to measure what it's supposed to measure to the test-taker or layperson. It's not a strong form of validity because it's subjective and based on superficial appearance rather than empirical evidence, though it can impact test-taker motivation.
Explain "Incremental Validity" in test selection.
Incremental validity refers to the extent to which a new test or assessment procedure adds to the predictive power beyond that which can be achieved with existing assessment methods. It assesses the unique contribution of a new predictor.
What are the four levels of measurement scales, and provide a psychological example for each.
Nominal: Categories without order (e.g., gender, diagnostic categories like ADHD/Autism).
Ordinal: Categories with a meaningful order but unequal intervals (e.g., Likert scale responses: strongly disagree, disagree, neutral, agree, strongly agree).
Interval: Ordered categories with equal intervals but no true zero point (e.g., IQ scores, Celsius temperature).
Ratio: Ordered categories with equal intervals and a true zero point (e.g., reaction time, number of correct answers on a test).
Define Adverse Impact in the context of employment testing.
Adverse impact occurs when a selection process or test disproportionately disadvantages members of a protected group (e.g., racial minorities, women), even if the test appears neutral. It is typically identified using the 80% rule (or 4/5ths rule).
What is the "80% Rule" (or "4/5ths Rule") used for in employment testing?
The 80% Rule is a guideline used to determine if adverse impact exists. It states that a selection rate for any race, sex, or ethnic group that is less than 80% (or 4/5ths) of the rate for the group with the highest selection rate is generally regarded as evidence of adverse impact.
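A small sketch of the 4/5ths comparison, with invented applicant counts:

```python
def adverse_impact_flag(rate_focal: float, rate_highest: float) -> bool:
    """True if the focal group's selection rate falls below 4/5 of the
    highest group's rate (the 80% rule flags potential adverse impact)."""
    return (rate_focal / rate_highest) < 0.80

# Group A: 60 of 100 applicants selected; Group B: 40 of 100 selected
rate_a, rate_b = 60 / 100, 40 / 100
print(rate_b / rate_a)                   # 0.67
print(adverse_impact_flag(rate_b, rate_a))  # True -> evidence of adverse impact
```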
What is the concept of "Malingering" in psychological assessment, and how do assessors generally address it?
Malingering is the intentional exaggeration or feigning of psychological or physical symptoms for external incentives (e.g., avoiding work, financial compensation). Assessors address it by using specific validity scales on tests (e.g., MMPI-2 F, Fb, L, K scales), consistency checks across different data sources, and careful clinical observation.
How does Cultural Competence apply to psychological assessment?
Cultural competence in assessment means understanding and considering the influence of cultural background (e.g., language, values, beliefs, acculturation) on test performance and interpretation. It involves selecting appropriate tests, adapting administration procedures if necessary, and interpreting results within the client's cultural context.
Explain the concept of a Response Set (or Response Style) and provide examples other than those previously mentioned.
A response set is a tendency to respond to test items in a particular way that is unrelated to the specific content of the items. Examples include extreme responding (favoring the endpoints of a rating scale), central tendency responding (favoring the midpoint), and random or careless responding.
What is the role of Factor Analysis in establishing the construct validity of a test?
Factor analysis is a statistical method used to identify underlying dimensions or "factors" that explain patterns of correlations among a set of observed variables (test items). It helps to confirm if test items group together in a way that aligns with the theoretical construct the test is supposed to measure.
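As a rough illustration (assuming scikit-learn and NumPy; the data are simulated rather than real test responses), a two-factor model recovers the intended item groupings:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n = 500
# Simulate two latent traits, each driving three observed "items"
anxiety = rng.normal(size=n)
verbal = rng.normal(size=n)
items = np.column_stack([
    anxiety + rng.normal(0, .5, n), anxiety + rng.normal(0, .5, n),
    anxiety + rng.normal(0, .5, n), verbal + rng.normal(0, .5, n),
    verbal + rng.normal(0, .5, n), verbal + rng.normal(0, .5, n),
])

fa = FactorAnalysis(n_components=2).fit(items)
# Loadings: items 1-3 should load on one factor, items 4-6 on the other
print(fa.components_.round(2))
```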
Explain Authentic Assessment and provide an example.
Authentic assessment involves evaluating a test-taker's abilities in real-world or highly realistic contexts, often utilizing tasks that simulate actual situations or problems. An example is a clinical simulation where a psychology student interviews a "standardized patient" to assess their diagnostic skills.
What is the difference between a Norm-Referenced Test and a Criterion-Referenced Test?
A norm-referenced test interprets an individual's score relative to the performance of a norm group (e.g., percentile ranks), whereas a criterion-referenced test interprets the score against a fixed standard or mastery criterion (e.g., a passing cutoff on a licensing exam), regardless of how others perform.
Discuss the ethical consideration of Confidentiality versus Privilege in psychological assessment.
Confidentiality is an ethical principle requiring psychologists to protect client information from unauthorized disclosure. Privilege is a legal concept that protects certain communications between a client and psychologist from being disclosed in a legal proceeding, unless specific exceptions (e.g., danger to self/others, child abuse) apply or the client waives it.
How do test developers typically engage in Item Analysis during test construction?
Item analysis involves statistical procedures used to evaluate item quality after initial piloting. It includes assessing item difficulty (proportion of correct answers), item discrimination (how well items differentiate between high and low scorers), and identifying distractor effectiveness for multiple-choice items. This helps refine or remove poor items.
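A minimal sketch of classical item analysis on a hypothetical 0/1 scoring matrix, computing item difficulty (p) and corrected item-total discrimination:

```python
import numpy as np

# Hypothetical 0/1 scoring matrix: rows = examinees, columns = items
responses = np.array([[1, 1, 0, 1],
                      [1, 0, 0, 1],
                      [1, 1, 1, 1],
                      [0, 0, 0, 1],
                      [1, 1, 0, 0]])

# Item difficulty (p): proportion answering each item correctly
p = responses.mean(axis=0)

# Item discrimination: correlation of each item with the total score on
# the remaining items (corrected item-total correlation)
totals = responses.sum(axis=1)
discrimination = [np.corrcoef(responses[:, j], totals - responses[:, j])[0, 1]
                  for j in range(responses.shape[1])]
print(p, np.round(discrimination, 2))
```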
What is the purpose of Cross-Validation in test development and predictive validity?
Cross-validation is a procedure where a prediction equation or test validity is established on one sample (the "development sample") and then applied to a new, independent sample (the "cross-validation sample") to determine if the predictive accuracy or validity coefficient holds up. It guards against overfitting and shrinkage.
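A brief simulation of the shrinkage that cross-validation is designed to detect (assuming NumPy; sample sizes and effect sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 60, 5
X = rng.normal(size=(n, k))               # five predictors (e.g., test scores)
y = 0.4 * X[:, 0] + rng.normal(0, 1, n)   # criterion driven by one predictor

# Fit regression weights on the development sample (intercept included)
dev, cv = slice(0, 30), slice(30, 60)
A_dev = np.column_stack([np.ones(30), X[dev]])
w, *_ = np.linalg.lstsq(A_dev, y[dev], rcond=None)

# Apply the same weights unchanged to the independent cross-validation sample
A_cv = np.column_stack([np.ones(30), X[cv]])
r_dev = np.corrcoef(y[dev], A_dev @ w)[0, 1]
r_cv = np.corrcoef(y[cv], A_cv @ w)[0, 1]
print(round(r_dev, 2), round(r_cv, 2))    # r_cv is typically smaller (shrinkage)
```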
Briefly explain the concept of a Flesch-Kincaid Readability Test and its relevance to psychological assessment.
The Flesch-Kincaid Readability Test is a formula used to assess the readability of written text, providing a score that correlates to a U.S. grade level. In psychological assessment, it's relevant for ensuring that assessment materials (e.g., consent forms, instructions, test items) are comprehensible to the intended population, especially those with varying literacy levels.
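The grade-level formula is 0.39 × (words/sentences) + 11.8 × (syllables/words) − 15.59; a small sketch with invented counts:

```python
def fk_grade_level(words: int, sentences: int, syllables: int) -> float:
    """Flesch-Kincaid Grade Level from text counts."""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

# A consent form with 300 words, 20 sentences, 450 syllables
print(round(fk_grade_level(300, 20, 450), 1))  # ~ grade 8
```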
What are the key components typically included in a comprehensive Psychological Assessment Report?
A comprehensive psychological assessment report typically includes: identifying information and the referral question, tests and procedures administered, relevant background history, behavioral observations during testing, test results and their interpretation, diagnostic impressions, and a summary with recommendations.
Define Test Utility and explain its importance in selecting assessment instruments.
Test utility refers to the practical value or usefulness of a test for making decisions. It considers factors such as the test's reliability and validity, cost (time, money), administrative ease, and the base rate of the phenomenon being predicted. A test with high utility provides significant benefits that outweigh its costs.