Vocabulary flashcards covering key terms from Dr. Kliatchko’s 298-term glossary in Psychological Assessment. Use them to review definitions and reinforce conceptual understanding.
Absolute decisions
Decisions made by determining who meets the minimum required score needed to qualify
Abstract attributes
Attributes that are more difficult to describe using behaviors because people disagree on which behaviors represent the attribute; examples include personality, intelligence, creativity, and aggressiveness
Accessibility
The degree to which a test allows test takers to demonstrate their standing on the construct the test was designed to measure without being disadvantaged by other individual characteristics such as age, race, gender, or native language.
Acculturation
Degree to which an immigrant/minority member has adapted to mainstream culture.
Achievement tests
Tests that are designed to measure a person’s previous learning in a specific academic area.
Acquiescence
The tendency of some test takers to agree with any ideas or behaviors presented.
Adaptive testing
Using tests developed from a large test bank in which the test questions are chosen to match the skill and ability level of the test taker
Age norms
Norms that allow test users to compare an individual’s test score with scores of people in the same age group.
Alternate forms
Two forms of a test that are alike in every way except for the questions; used to overcome problems such as practice effects; also referred to as parallel forms.
Anchors
Numbers or words on a rating scale that the rater chooses to indicate the category that best represents the employee’s performance on the specified dimension.
Anonymity
Collecting data without obtaining participants’ identities.
Aptitude tests
Tests that are designed to assess the test taker’s potential for learning or the individual’s ability to perform in an area in which he or she has not been specifically trained.
Area transformations
A method for changing scores for interpretation purposes that changes the unit of measurement and the unit of reference, such as percentile ranks.
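As a sketch of one common area transformation, a percentile rank can be computed from a frequency distribution along these lines (the values below are hypothetical):

```latex
PR = 100 \times \frac{cf_{\text{below}} + 0.5\,f}{N}
% e.g., if 28 of 40 people score below a given raw score and 4 score exactly at it:
% PR = 100 \times (28 + 0.5 \times 4)/40 = 75
```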
Assessment center
A large-scale replication of a job that requires test takers to solve typical job problems by role playing or to demonstrate proficiency at job functions such as making presentations and fulfilling administrative duties; used for assessing job-related dimensions such as leadership, decision making, planning, and organizing
Attenuation due to unreliability
The reduction in an observed validity coefficient due to unreliability of either the test or the criterion measure
Authentic assessment
Assessment that measures a student’s ability to apply in real-world settings the knowledge and skills he or she has learned.
Autism spectrum disorders
Developmental disabilities that affect communication and social interaction and involve restricted interests and stereotyped, repetitive patterns of behavior.
b weight
In regression, the slope of the regression line, or the expected change in the criterion (Y) for a one unit change in the predictor (X).
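In simple linear regression the prediction equation and slope take the standard form below; the numbers in the comment are hypothetical and would come from the validation sample:

```latex
\hat{Y} = a + bX, \qquad b = r_{XY}\,\frac{s_Y}{s_X}
% A b weight of 0.5 means a one-unit increase in X predicts
% a 0.5-unit increase in the criterion Y.
```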
Behavior
An observable and measurable action.
Behavior observation tests
Tests that involve observing people’s behavior to learn how they typically respond in a particular context.
Behavioral checklist
When a rater evaluates performance by rating the frequency of important behaviors required for the job.
Behavioral interviews
Interviews that focus on behaviors rather than on attitudes or opinions
Behaviorally Anchored Rating Scale (BARS)
A type of performance appraisal that uses behaviors as anchors; the rater rates by choosing the behavior that is most representative of the employee’s performance
Bivariate analyses
Analyses that provide information on two variables or groups.
Categorical data
Data grouped according to a common property
Categorical model of scoring
A test scoring model that places test takers in a particular group or class.
Central tendency errors
Rating errors that result when raters use only the middle of the rating scale and ignore the highest and lowest scale categories.
Certification
A professional credential individuals earn by demonstrating that they have met predetermined qualifications (e.g., that they have specific knowledge, skills, and/or experience)
Class intervals
A way of grouping adjacent scores to display them in a table or graph
Cluster sampling
A type of sampling that involves selecting clusters of respondents and then selecting respondents from each cluster.
Coefficient of determination
The amount of variance shared by two variables being correlated, such as a test and a criterion, obtained by squaring the validity coefficient.
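A brief worked example using a hypothetical validity coefficient:

```latex
r_{XY} = .40 \quad\Rightarrow\quad r_{XY}^{2} = .16
% The test and the criterion share 16% of their variance.
```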
Coefficient of multiple determination
A statistic that is obtained through multiple regression analysis, which is interpreted as the total proportion of variance in the criterion variable that is accounted for by all the predictors in the multiple regression equation. It is the square of the multiple correlation coefficient, R.
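With two predictors (a hypothetical case), the multiple regression equation and its coefficient of multiple determination look like this:

```latex
\hat{Y} = a + b_1 X_1 + b_2 X_2, \qquad
R^2 = \text{proportion of criterion variance explained by } X_1 \text{ and } X_2
% R^2 is the square of the multiple correlation R between predicted and observed Y.
```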
Cognitive impairments
Mental disorders that include mental retardation, learning disabilities, and traumatic brain injuries.
Cognitive tests
Assessments that measure the test taker’s mental capabilities, such as general mental ability tests, intelligence tests, and academic skills tests.
Cohen’s kappa
An index of agreement for two sets of scores or ratings.
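Kappa is usually computed as chance-corrected agreement; a small worked example with hypothetical proportions:

```latex
\kappa = \frac{p_o - p_e}{1 - p_e}
% With observed agreement p_o = .85 and chance-expected agreement p_e = .50:
% kappa = (.85 - .50)/(1 - .50) = .70
```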
Comorbid disorders
Presence of additional mental health disorders alongside a primary one (e.g., depression).
Comparative decisions
Decisions that are made by comparing test scores to see who has the best score.
Competency modeling
A procedure that identifies the knowledge, skills, abilities, and other characteristics most critical for success for some or all the jobs in an organization.
Computerized Adaptive Rating Scales (CARS)
Testing in which the computer software, as in computerized adaptive testing, selects behavioral statements for rating based on the rater’s previous responses.
Computerized Adaptive Testing (CAT)
Testing in which the computer software chooses and presents the test taker with harder or easier questions as the test progresses, depending on how well the test taker answered previous questions.
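A minimal sketch of the item-selection logic, assuming a toy item bank and a crude step-size ability update rather than the IRT-based estimation a real CAT engine would use:

```python
# Hypothetical item bank: item id -> difficulty on an arbitrary scale.
ITEM_BANK = {"q1": -2.0, "q2": -1.0, "q3": 0.0, "q4": 1.0, "q5": 2.0}

def next_item(ability, administered):
    """Pick the unused item whose difficulty is closest to the current estimate."""
    candidates = {i: d for i, d in ITEM_BANK.items() if i not in administered}
    return min(candidates, key=lambda i: abs(candidates[i] - ability))

def run_cat(answer_fn, n_items=3):
    ability, administered = 0.0, []
    for _ in range(n_items):
        item = next_item(ability, administered)
        administered.append(item)
        correct = answer_fn(item)
        # Crude update: move the estimate up after a correct answer, down otherwise.
        ability += 0.5 if correct else -0.5
    return ability

# Example: a test taker who answers every item correctly drifts upward.
print(run_cat(lambda item: True))
```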
Concrete attributes
Attributes that can be described in terms of specific behaviors, such as the ability to play the piano
Concurrent evidence of validity
A method for establishing evidence of validity based on a test’s relationships with other variables in which test administration and criterion measurement happen at roughly the same time
Confidence interval
A range of scores that the test user can feel confident includes the true score.
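The interval is typically built from the standard error of measurement; the standard deviation and reliability below are hypothetical:

```latex
SEM = SD\sqrt{1 - r_{xx}}, \qquad \text{95\% CI} = X \pm 1.96 \times SEM
% e.g., X = 100, SD = 15, r_xx = .91:
% SEM = 15\sqrt{.09} = 4.5, so the interval is roughly 91 to 109.
```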
Confidentiality
The assurance that all personal information will be kept private and not be disclosed without explicit permission.
Confirmatory factor analysis
Testing whether hypothesized factors fit the data using factor analysis.
Construct
Abstract attribute inferred from behaviors.
Construct explication
Three-step process for defining/explaining a psychological construct.
Construct validity
Evidence that a test measures the intended theoretical construct.
Content areas
Knowledge, skills, and attributes a test is designed to assess.
Content validity
Extent to which test items represent the target domain.
Content validity ratio
Index of how essential each item is to measuring its intended attribute.
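Lawshe's formula is the usual index; a worked example with hypothetical expert ratings:

```latex
CVR = \frac{n_e - N/2}{N/2}
% n_e = number of experts rating the item "essential", N = total experts.
% If 8 of 10 experts rate an item essential: CVR = (8 - 5)/5 = .60
```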
Convenience sampling
Using an available group to represent the population.
Convergent evidence of validity
High correlations with other measures of the same construct.
Correction for attenuation
Statistical adjustment estimating correlation free of measurement error.
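The classic correction divides the observed correlation by the square root of the product of the two reliabilities (the values below are hypothetical):

```latex
r_{\text{corrected}} = \frac{r_{xy}}{\sqrt{r_{xx}\,r_{yy}}}
% e.g., r_xy = .30, r_xx = .80, r_yy = .90:
% corrected r = .30 / sqrt(.72) ≈ .35
```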
Correlation
Statistic describing strength and direction of linear relationship between variables.
Correlation coefficient
Numeric index of relationship between two sets of scores.
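The Pearson product-moment coefficient, the most common form, in standard notation:

```latex
r = \frac{\sum (X - \bar{X})(Y - \bar{Y})}
         {\sqrt{\sum (X - \bar{X})^{2}\,\sum (Y - \bar{Y})^{2}}}
% Values range from -1.00 (perfect negative) through 0 (no linear
% relationship) to +1.00 (perfect positive).
```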
Criterion
Outcome measure expected to relate to test scores.
Criterion contamination
Criterion measures dimensions beyond those measured by the test.
Criterion-referenced tests
Compare scores to an objective standard of mastery rather than to norms.
Criterion-related validity
Evidence that test scores correlate with or predict relevant behaviors.
Cross-validation
Re-testing to confirm validation study results on a new sample.
Cumulative model of scoring
Assumes more "correct" responses indicate more of the attribute; raw score is total correct.
Cut scores
Score thresholds dividing pass/fail categories.
Database
Spreadsheet matrix of participant responses (rows) by items (columns).
Decennial census survey
U.S. Census conducted every 10 years to measure population.
Descriptive research techniques
Methods that describe situations or phenomena without inferring causality.
Descriptive statistics
Numbers summarizing distribution properties (mean, SD, etc.).
Diagnostic assessment
In-depth evaluation to identify characteristics for treatment or enhancement.
Differential validity
When validity coefficients differ significantly across subgroups.
Discriminant evidence of validity
Low correlations with measures of unrelated constructs.
Discrimination index
Statistic comparing high and low scorers’ performance on individual items.
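A common computation compares the proportion of high scorers and low scorers who answer the item correctly (the proportions below are hypothetical):

```latex
D = p_{\text{upper}} - p_{\text{lower}}
% If 90% of the top group and 40% of the bottom group answer correctly:
% D = .90 - .40 = .50
```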
Distractors
Incorrect answer choices in multiple-choice items.
Double-barreled question
Survey or test question asking two things at once.
Emotional intelligence
Ability to perceive, use, understand, and regulate emotions effectively.
Empirically based tests
Classification decisions made solely on statistical predictor-criterion relationships.
Essay questions
Subjective items requiring extended written responses.
Ethical standards
Professional guidelines on appropriate/inappropriate practices.
Ethics
Considerations of right and wrong in decision making.
Evidence-based practice
Integration of best research with clinical expertise and patient characteristics.
Evidence-based treatment method
Therapies with documented research demonstrating effectiveness.
Experimental research techniques
Designs providing evidence for cause-and-effect relationships.
Experts
Individuals knowledgeable about or affected by the topic.
Exploratory factor analysis
Factor analysis without predefined hypotheses to identify underlying components.
Face validity
Extent to which a test appears to measure what it claims, as judged by examinees.
Face-to-face surveys
Interviewer administers survey questions in person.
Factor analysis
Statistical procedure using correlations to identify underlying factors.
Factors (testing)
Underlying commonalities measured by groups of items.
Faking
Deliberate answering to obtain desired outcome or impression.
False positive
Incorrectly classifying an innocent person as guilty.
Field test
Large-scale tryout of a survey or test to locate administration/item problems.
Five-factor model
Personality theory with five dimensions: extraversion, neuroticism, agreeableness, conscientiousness, openness.
Focus group
Discussion with participants similar to target respondents to explore survey issues.
Forced choice
Item format requiring selection among equally acceptable options.
Forced distribution
Ranking method assigning set numbers of employees to rating categories.
Forced ranking
Managers rank employees on predetermined criteria.
Formative assessments
Assessments during instruction to gauge ongoing learning.
Frequency distribution
Ordered listing of scores with counts of occurrences.
Generalizability theory
Framework analyzing multiple error sources to improve measurement consistency.
Generalizable
Expecting similar results across different administration settings.
Grade norms
Compare a student’s score with others in the same grade.