standardization
uniformity in procedures for administering and scoring predictors
norms
distribution of scores by a representative sample; compares applicant scores to the distribution
reliability
consistency/stability of measurement results
test-retest reliability
one group is tested at two different times and the scores are correlated
advantage: simple way to calculate reliability
disadvantage: practice effects
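Test-retest reliability is just the Pearson correlation between the same group's scores at two administrations. A minimal sketch with made-up scores (all numbers are illustrative, not from the source):

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two lists of scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

time1 = [80, 75, 90, 60, 85]   # hypothetical scores, first administration
time2 = [82, 73, 91, 62, 84]   # same people, second administration
r = pearson_r(time1, time2)    # r near +1 => stable, reliable measurement
```

The same correlation function applies to equivalent-forms reliability (form A vs. form B) and to split-half reliability (half 1 vs. half 2).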
equivalent forms reliability
one group is tested on two different but equivalent forms of measure and the scores are correlated
advantage: reduces practice effects
disadvantage: difficult/expensive to develop equivalent forms
split half
administer once, divide the items into two halves (e.g., even vs. odd items), and correlate scores on one half with scores on the other half
coefficient alpha
average of split half reliability coefficients
internal consistency (split half and coefficient alpha)
advantage: easiest, one administration
disadvantage: not appropriate for speed tests
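Coefficient (Cronbach's) alpha, the average of all possible split-half coefficients, can be computed directly from an items-by-person score matrix. A sketch with invented data:

```python
def cronbach_alpha(item_scores):
    """item_scores: one list per item, each holding every person's score."""
    k = len(item_scores)                  # number of items
    n = len(item_scores[0])               # number of people

    def var(xs):                          # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    sum_item_var = sum(var(item) for item in item_scores)
    totals = [sum(item[p] for item in item_scores) for p in range(n)]
    return (k / (k - 1)) * (1 - sum_item_var / var(totals))

# 3 items answered by 4 people (hypothetical ratings)
items = [[3, 4, 5, 2], [3, 5, 5, 1], [4, 4, 5, 2]]
alpha = cronbach_alpha(items)   # high alpha => items hang together
```

One administration is enough, which is why internal consistency is listed as the easiest approach above.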
causes of unreliability
testing conditions, personal factors, task sampling
reliability standards
applied settings minimum of .90, but ideally .95
validity
the degree to which the predictor measures what it claims to and allows for accurate inferences
criterion related validity
correlate predictor scores with job performance
predictive criterion validity
administered to applicants and correlated with later job performance (as employees)
advantage: less biased
disadvantage: takes 6+ months
concurrent criterion validity
administer to current employees and is correlated with performance data
advantage: faster results
disadvantage: employee motivation and experience differences
construct validity
if a predictor can measure the characteristic it is intended to
convergent construct validity
positive correlation with measures of similar constructs
divergent construct validity
zero or negative correlation with measures of different constructs (the test measures a unique trait that does not overlap with other constructs)
content validity
if a predictor gives a representative sample of knowledge, skills, and behaviors related to the domain being measured (judgement based)
psychological test scores are used for:
selection (choosing hires), placement (assigning roles), and training (identifying needs)
cognitive ability standardized tests
measures general aptitude, g (general capacity to learn), and specific aptitudes (verbal, spatial, quantitative)
psychomotor ability standardized tests
assess physical attributes (strength, flexibility, dexterity)
self report standardized personality tests
big 5 (on agree to disagree scale), but shows limited job performance prediction
projective standardized personality tests
ambiguous stimuli to assess unconscious traits, but is costly
forced choice standardized personality tests
statements are presented and applicant ranks them from most to least agreed with
advantage of personality tests
no group differences as a function of race (though possibly by sex), and personality is independent of g
work samples
simulates aspects of the job for potential employees
high fidelity work samples
direct simulation of job skills (e.g. CPR performance, flight simulator)
low fidelity work samples
makes an attempt to simulate job scenarios (assessment centers, inbasket, leaderless group, business games)
assessment centers
standardized tests, interviews, and simulations to test applicants suitability for the job
inbasket
decision making simulation with a follow up interview to understand why those decisions were made
leaderless group discussion
competitive problem solving group discussion where everyone is given a secret position to argue for beforehand
business games
group formulates business strategies and employees are evaluated based on their contribution
adv and dis of work samples
advantage: shows relationship between test and job performance, and has good predictive accuracy
disadvantage: difficult to score and shows ability, not potential
unstructured interviews
no constraints on types of questions, shows early judgement and similar-to-me effects, low predictive accuracy
structured interviews
specific topic of conversation with standardized scoring, same questions are asked in the same order for all applicants
situational interview
hypothetical (what would you do) and historical (describe a time) questions to improve prediction accuracy
selection battery
multiple predictors used and chosen based on job analysis to make hiring decisions
content validity
extent to which a predictor accurately represents all constructs it aims to measure but it is not statistically sufficient alone; if content validity evidence is strong, it is sufficient to defend the use of those test scores
validating standardized tests
must demonstrate reliable scores and that the scores predict relevant criteria
criterion related validity
shows that test scores predict desired criteria
validity coefficient
correlation between scores on a test and the criterion; the closer to +1, the more valid the predictor
predictive criterion related validity
test as applicants and then later as employees and validity coefficient is computed
concurrent criterion related validity
test applicants when criterion data already exists and compute validity coefficient
only content validity is available
predictor is almost always kept
what rule is made to cull predictors in standardized tests
single decision: is the increase in predictive accuracy statistically significant? yes = keep, no = drop
multiple regression
model relationship between multiple predictors and job performance
y = a + b1x1 + b2x2 + b3x3
a = y-intercept, b = weight, x = individual's score on that predictor, y = predicted criterion
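The regression equation can be sketched directly; the intercept and weights below are made-up values standing in for ones that would really come from fitting the model to validation data:

```python
a = 1.0                      # y-intercept (hypothetical)
weights = [0.4, 0.25, 0.1]   # b1, b2, b3 for three predictors (hypothetical)
scores  = [7, 5, 8]          # one applicant's x1, x2, x3

# y = a + b1*x1 + b2*x2 + b3*x3
predicted = a + sum(b * x for b, x in zip(weights, scores))
# predicted criterion (job performance) score for this applicant
```

Because every term is summed, a high score on one predictor can offset a low score on another, which is exactly the compensatory model described next.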
compensatory
high scores on one predictor can offset low scores on another
non-compensatory
all predictor cut offs must be met
single hurdle
one yes/no decision
multiple hurdle
multiple yes/no decisions; the first predictor is administered and its scores are used to make the first yes/no decision; if passed, the remaining predictors are administered and used to make the next yes/no decision
single hurdle compensatory
final overall job performance criterion has a cutoff
single hurdle non-compensatory
each predictor (e.g. IQ and extraversion) has a cut off and applicant must score above all
multiple hurdle compensatory
predictors in one regression model (hurdle 1) has a cut off score that must be met to go on to the next regression model (hurdle 2) which also has a cut off
multiple hurdle non-compensatory
if hurdle 1 is passed, then move on to the next regression model that has separate predictors (e.g. interpersonal skills and persistence) that have different cut off that also have to be passed
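A multiple-hurdle, non-compensatory screen can be sketched as a sequence of cutoff checks, where every predictor in a hurdle must be met before the next hurdle is even administered. Predictor names and cutoffs here are hypothetical, chosen to match the examples in the cards above:

```python
hurdles = [
    {"iq": 100, "extraversion": 3},          # hurdle 1 cutoffs (hypothetical)
    {"interpersonal": 4, "persistence": 3},  # hurdle 2 cutoffs (hypothetical)
]

def passes(applicant, hurdles):
    """Non-compensatory: every cutoff in every hurdle must be met."""
    for cutoffs in hurdles:
        if any(applicant[p] < cut for p, cut in cutoffs.items()):
            return False      # fails this hurdle; later hurdles never given
    return True

applicant = {"iq": 110, "extraversion": 4,
             "interpersonal": 5, "persistence": 2}
passes(applicant, hurdles)    # fails hurdle 2: persistence below its cutoff
```

In the compensatory variant, each hurdle would instead feed its predictors into a regression model and apply the cutoff to the combined score.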
predictive accuracy
function of percentage of times the selection battery makes incorrect hiring decisions
true positives
hits; successful hires
true negatives
correct rejections; properly rejected applicants
false positives
false alarms; unsuccessful hires
false negatives
misses; rejected applicants who would have succeeded
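The four outcomes can be tallied from (hired, later-successful) pairs, and predictive accuracy is the share of correct decisions. The decision data below are invented for illustration:

```python
# (hired, successful) pairs for six applicants (hypothetical)
decisions = [(True, True), (True, False), (False, False),
             (False, True), (True, True), (False, False)]

tp = sum(1 for hired, ok in decisions if hired and ok)          # hits
tn = sum(1 for hired, ok in decisions if not hired and not ok)  # correct rejections
fp = sum(1 for hired, ok in decisions if hired and not ok)      # false alarms
fn = sum(1 for hired, ok in decisions if not hired and ok)      # misses

accuracy = (tp + tn) / len(decisions)   # proportion of correct decisions
```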
utility
function of predictors usefulness in hiring success
Office of Federal Contract Compliance
Jimmy Carter 1978; enforce compliance of nondiscrimination laws by federal contractors
Civil Rights Act
1964, 1991; prohibits discrimination and allowed for jury trials and punitive damages
EO Act
1972; allows race and sex conscious selection plans and ensures fair treatment in employment
uniform guidelines
ensures hiring and selection process is lawful (compliance monitoring)
adverse impact
determines if a selection process harms a protected group
80% rule
if the selection rate for a group is less than 80% of the rate of the group with the highest selection rate, adverse impact is indicated
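The four-fifths (80%) rule comes down to comparing each group's selection rate against the highest group's rate. A sketch with made-up applicant counts:

```python
# Hypothetical selection data: selected / applied per group
selected = {"group_a": 40, "group_b": 24}
applied  = {"group_a": 80, "group_b": 80}

rates = {g: selected[g] / applied[g] for g in selected}   # selection rates
highest = max(rates.values())                             # best-treated group

# adverse impact indicated where a group's rate < 80% of the highest rate
adverse_impact = {g: r / highest < 0.80 for g, r in rates.items()}
# group_b: 0.30 / 0.50 = 60% of the highest rate => adverse impact indicated
```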