Reliability
The extent to which a score from a test is consistent and free from error.
Test-retest Reliability
The scores from the first administration of the test are correlated with scores from the second to determine whether they are similar.
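A minimal sketch of that correlation step, using hypothetical scores for the same ten applicants at the two administrations (the numbers are invented for illustration):

```python
import numpy as np

# Hypothetical scores for the same ten applicants at two administrations.
time1 = np.array([78, 85, 92, 60, 71, 88, 95, 67, 74, 81])
time2 = np.array([80, 83, 90, 58, 75, 86, 97, 65, 70, 84])

# The test-retest reliability coefficient is the Pearson correlation
# between the two sets of scores.
r_test_retest = np.corrcoef(time1, time2)[0, 1]
print(f"Test-retest reliability: {r_test_retest:.2f}")
```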
Alternate-Forms Reliability
Two forms of the same test are constructed and administered to the same group of people, with the order of the forms counterbalanced. This counterbalancing of test-taking order is designed to eliminate any effects that taking one form of the test first may have on scores on the second form.
Scorer Reliability
Discussed in terms of interrater reliability. That is, will two interviewers give an applicant similar ratings, or will two supervisors give an employee similar performance ratings?
Internal Reliability
Looks at the consistency with which an applicant responds to items measuring a similar dimension or construct (e.g., personality trait, ability, area of knowledge).
Split-half method and Cronbach’s Coefficient Alpha
What are the methods used to determine internal consistency?
Split-half method
the easiest method for determining internal consistency: the items on a test are split into two groups (usually, all of the odd-numbered items in one group and all of the even-numbered items in the other), and the scores on the two halves are correlated.
Spearman-Brown
When the split-half method is used, each half-score is based on only half as many items as the full test. Because the number of items in the test has been reduced, researchers have to use a formula called the ____________ prophecy formula to adjust the correlation.
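A standard statement of that correction, where r_half is the correlation between the two halves of the test:

```latex
r_{\text{corrected}} = \frac{2\, r_{\text{half}}}{1 + r_{\text{half}}}
```

For example, if the two halves correlate at .60, the corrected (full-test) reliability estimate is (2)(.60)/(1 + .60) = .75.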
Cronbach's Coefficient Alpha
This method of determining internal consistency can be used not only for dichotomous items but also for tests containing interval and ratio items, such as five-point rating scales.
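The usual form of coefficient alpha, where k is the number of items, σ²ᵢ is the variance of item i, and σ²ₓ is the variance of total test scores:

```latex
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma_i^2}{\sigma_X^2}\right)
```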
Kuder-Richardson Formula 20 (KR-20)
is used for tests containing dichotomous items (e.g., yes/no, true/false)
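KR-20 is the same idea restricted to dichotomous items, where pᵢ is the proportion of test takers answering item i correctly (or "yes"), qᵢ = 1 − pᵢ, and σ²ₓ is again the variance of total scores:

```latex
KR_{20} = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} p_i q_i}{\sigma_X^2}\right)
```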
Validity
It refers to how well a test actually measures what it was created to measure
Content Validity, Construct Validity, Criterion Validity
Three types of Validity
Content Validity
The extent to which test items sample the content that they are supposed to measure.
Criterion Validity
The extent to which a test score is related to some measure of job performance called a criterion.
Concurrent Validity Design and Predictive Validity Design
Two criterion validity research designs: with a concurrent design, test scores and criterion measures are collected from current employees at the same time; with a predictive design, applicants are tested and their scores are later correlated with measures of their job performance.
Construct Validity
The extent to which a test actually measures the construct that it purports to measure. It is concerned with inferences about test scores.
Face Validity
The extent to which a test appears to be job related
Cost-efficiency
If two or more tests have similar validities, then cost should be considered
Taylor-Russell tables
These tables are designed to estimate the percentage of future employees who will be successful on the job if an organization uses a particular test. Using them requires the test's criterion validity, the selection ratio, and the base rate of current success.
Proportion of correct decisions
This method is easier to use but less accurate than the Taylor-Russell tables. The only information needed to determine the proportion of correct decisions is employees' test scores and their scores on the criterion.
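A minimal sketch of the computation, assuming hypothetical scores and cutoffs (a decision counts as correct when the test and the criterion agree: hired and successful, or rejected and would have performed poorly):

```python
import numpy as np

# Hypothetical test scores and later criterion (performance) scores for
# the same ten employees, plus assumed passing scores on each measure.
test = np.array([70, 72, 80, 43, 90, 66, 38, 75, 60, 85])
criterion = np.array([3.1, 4.0, 4.5, 2.2, 4.8, 3.6, 2.0, 4.2, 3.7, 4.6])
test_cutoff, criterion_cutoff = 65, 3.5

predicted_success = test >= test_cutoff
actual_success = criterion >= criterion_cutoff

# Correct decisions are the cases where prediction and outcome agree
# (true positives plus true negatives).
correct = np.sum(predicted_success == actual_success)
print(f"Proportion of correct decisions: {correct / len(test):.2f}")  # 0.80
```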
The Lawshe tables
These tables are used to estimate the probability that a particular applicant will be successful on the job.
Brogden-Cronbach-Gleser Utility Formula
Another way to determine the value of a test in a given situation is by computing the amount of money an organization would save if it used the test to select employees.
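A commonly cited form of the formula (a general statement; the symbols are not defined elsewhere in this deck): n is the number of employees hired per year, t their average tenure, r the test's validity, SDy the standard deviation of job performance in dollars, and Z̄s the mean standardized test score of those hired.

```latex
\text{Savings} = (n)(t)(r_{xy})(SD_y)(\bar{Z}_s) - \text{cost of testing all applicants}
```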
Adverse Impact, Single-group validity, Differential validity
Determining the Fairness of a Test
Adverse Impact
Determining whether a test has adverse impact is the first step in assessing its potential bias. There are two basic ways to do this: looking at actual test results or anticipating adverse impact prior to administering the test.
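One standard way of "looking at test results" is the four-fifths rule: compare each group's selection ratio with the highest group's ratio and flag ratios below 80%. A minimal sketch with hypothetical hiring counts (the groups and numbers are invented for illustration):

```python
# Hypothetical applicant and hire counts by group.
groups = {
    "Group A": {"applicants": 100, "hired": 40},  # selection ratio .40
    "Group B": {"applicants": 80, "hired": 20},   # selection ratio .25
}

ratios = {name: g["hired"] / g["applicants"] for name, g in groups.items()}
highest = max(ratios.values())

for name, ratio in ratios.items():
    impact_ratio = ratio / highest
    flag = "potential adverse impact" if impact_ratio < 0.80 else "ok"
    print(f"{name}: selection ratio {ratio:.2f}, "
          f"impact ratio {impact_ratio:.2f} ({flag})")
```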
Single-group validity
the test will significantly predict performance for one group and not others.
Differential validity
a test is valid for two groups but more valid for one than for the other.
Unadjusted Top-down selection
Applicants are rank ordered on the basis of their test scores. Selection is then made by starting with the highest score and moving down until all openings have been filled.
Rule of three (or Rule of Five)
The names of the top three (or five) scorers are given to the person making the hiring decision, who can then choose any of them based on the immediate needs of the employer.
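A minimal sketch contrasting the two approaches above, with hypothetical applicants and scores:

```python
# Hypothetical applicants and their test scores.
applicants = {"Avery": 92, "Blake": 88, "Casey": 95, "Drew": 81, "Ellis": 90}
openings = 2

# Unadjusted top-down selection: rank by score, fill openings from the top.
ranked = sorted(applicants, key=applicants.get, reverse=True)
hired_top_down = ranked[:openings]

# Rule of three: refer the top three names; the hiring manager may choose
# any of them based on the employer's immediate needs.
referred = ranked[:3]

print("Top-down hires:", hired_top_down)    # ['Casey', 'Avery']
print("Rule-of-three referral:", referred)  # ['Casey', 'Avery', 'Ellis']
```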
Passing Scores
• With a multiple-cutoff approach, the applicants would be administered all of the tests at one time.
• With a multiple-hurdle approach, the applicant is administered one test at a time, usually beginning with the least expensive.
multiple-cutoff approach
the applicants would be administered all of the tests at one time.
multiple-hurdle approach
the applicant is administered one test at a time, usually beginning with the least expensive.
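A minimal sketch of the difference, using hypothetical tests, passing scores, and costs ordered from least to most expensive; under the hurdle approach, an applicant who fails an early test never takes (or costs the organization) the later ones:

```python
# Hypothetical tests: (name, passing score, cost per applicant), cheapest first.
tests = [("application blank", 50, 5),
         ("cognitive test", 70, 40),
         ("work sample", 80, 300)]

def multiple_cutoff(scores):
    # Every test is administered; the applicant must pass all of them.
    cost = sum(c for _, _, c in tests)
    passed = all(scores[name] >= cutoff for name, cutoff, _ in tests)
    return passed, cost

def multiple_hurdle(scores):
    # Tests are administered one at a time; stop at the first failure.
    cost = 0
    for name, cutoff, test_cost in tests:
        cost += test_cost
        if scores[name] < cutoff:
            return False, cost
    return True, cost

applicant = {"application blank": 60, "cognitive test": 65, "work sample": 85}
print(multiple_cutoff(applicant))  # (False, 345) -- all tests paid for
print(multiple_hurdle(applicant))  # (False, 45)  -- stopped at the cognitive test
```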
Banding
_____ takes into consideration the degree of error associated with any test score. Thus, even though one applicant might score two points higher than another, the two-point difference might be the result of chance (error) rather than actual differences in ability.
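A minimal sketch of one common banding approach, assuming the band is built from the standard error of measurement (SEM = SD·√(1 − reliability)) and a 1.96 multiplier; the specific values and this particular banding method are assumptions for illustration, not part of the deck:

```python
import math

# Assumed test statistics.
sd, reliability = 10.0, 0.90
top_score = 94

# Standard error of measurement, and the standard error of the difference
# between two scores.
sem = sd * math.sqrt(1 - reliability)   # ~3.16
sed = math.sqrt(2) * sem                # ~4.47

# Scores within 1.96 SEDs of the top score are treated as equivalent.
band_floor = top_score - 1.96 * sed     # ~85.2
print(f"Scores from {band_floor:.1f} to {top_score} fall within the same band")
```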