EVALUATING SELECTION TECHNIQUES AND DECISIONS

32 Terms

1
New cards

Reliability

The consistency or stability of a measure.

2
New cards

Test-retest Reliability

The scores from the first administration of the test are correlated with scores from the second to determine whether they are similar.
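Correlating the two sets of scores can be sketched in a few lines of Python. The score lists below are hypothetical, used only to illustrate the computation:

```python
# Test-retest reliability as the Pearson correlation between scores
# from two administrations of the same test (hypothetical data).
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

first_admin  = [78, 85, 92, 64, 70, 88, 95, 73]
second_admin = [80, 83, 90, 66, 72, 85, 96, 70]

# A correlation near 1.0 indicates the scores are similar across administrations
print(round(pearson(first_admin, second_admin), 3))
```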

3
New cards

Alternate-Forms Reliability

Two forms of the same test are constructed and administered to the same group of people, with the order of administration counterbalanced. Counterbalancing the test-taking order is designed to eliminate any effects that taking one form of the test first may have on scores on the second form.

4
New cards

Scorer Reliability

Discussed in terms of interrater reliability. That is, will two interviewers give an applicant similar ratings, or will two supervisors give an employee similar performance ratings?

5
New cards

Internal Reliability

Looks at the consistency with which an applicant responds to items measuring a similar dimension or construct (e.g., personality, trait, ability, area of knowledge).

6
New cards

Split-half method and Cronbach’s Coefficient Alpha

What are the methods used to determine internal consistency?

7
New cards

Split-half method

The easiest method to use to determine internal consistency: items on a test are split into two groups. Usually, all of the odd-numbered items are in one group and all of the even-numbered items are in the other group.

8
New cards

Spearman-Brown

Because splitting a test in half reduces the number of items, researchers use a formula called the ____________ prophecy to adjust the split-half correlation upward and estimate the reliability of the full-length test.
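The prophecy formula itself is simple: the estimated full-test reliability is twice the split-half correlation divided by one plus that correlation. A minimal sketch:

```python
# Spearman-Brown prophecy formula: adjusts a split-half correlation
# upward to estimate the reliability of the full-length test.
def spearman_brown(r_half):
    return (2 * r_half) / (1 + r_half)

# A split-half correlation of .70 implies a full-test reliability of about .82
print(round(spearman_brown(0.70), 3))  # 0.824
```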

9
New cards

Cronbach's Coefficient Alpha

This method of determining internal consistency can be used not only for dichotomous items but also for tests containing interval and ratio items, such as five-point rating scales.
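The coefficient compares the sum of the individual item variances with the variance of people's total scores. A sketch using hypothetical five-point ratings:

```python
# Cronbach's coefficient alpha: internal consistency from item variances
# and the variance of total scores (population variances throughout).
from statistics import pvariance

def cronbach_alpha(items):
    """items: one list of scores per test item (same people in each list)."""
    k = len(items)
    item_vars = sum(pvariance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]  # each person's total score
    return (k / (k - 1)) * (1 - item_vars / pvariance(totals))

# Hypothetical five-point ratings: 3 items answered by 5 people
items = [
    [4, 3, 5, 2, 4],
    [5, 3, 4, 2, 5],
    [4, 2, 5, 3, 4],
]
print(round(cronbach_alpha(items), 3))
```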

10
New cards

Kuder-Richardson formula 20 (KR20)

Used for tests containing dichotomous items (e.g., yes/no, true/false).
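KR20 replaces the item variances in alpha with p(1 − p) for each dichotomous item, where p is the proportion answering correctly. A sketch with hypothetical true/false results:

```python
# Kuder-Richardson formula 20 for dichotomous (0/1) items.
from statistics import pvariance

def kr20(responses):
    """responses: one 0/1 answer list per test item (same people in each)."""
    k = len(responses)
    pq = 0.0
    for item in responses:
        p = sum(item) / len(item)          # proportion answering correctly
        pq += p * (1 - p)
    totals = [sum(scores) for scores in zip(*responses)]  # per-person totals
    return (k / (k - 1)) * (1 - pq / pvariance(totals))

# Hypothetical results: 4 true/false items answered by 5 people
responses = [
    [1, 0, 1, 1, 0],
    [1, 0, 1, 1, 1],
    [1, 1, 1, 0, 0],
    [1, 0, 1, 1, 0],
]
print(round(kr20(responses), 3))
```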

11
New cards

Validity

It refers to how well a test actually measures what it was created to measure.

12
New cards

Content Validity, Construct Validity, Criterion Validity

The three types of validity.

13
New cards

Content Validity

The extent to which test items sample the content that they are supposed to measure.

14
New cards

Criterion Validity

The extent to which a test score is related to some measure of job performance called a criterion.

15
New cards

Concurrent Validity Design and Predictive Validity Design

Two types of criterion validity that assess the relationship between test scores and job performance at different times, indicating the test's effectiveness in predicting outcomes.

16
New cards

Construct Validity

The extent to which a test actually measures the construct that it purports to measure. It is concerned with inferences about test scores.

17
New cards

Face Validity

The extent to which a test appears to be job related.

18
New cards

Cost-efficiency

If two or more tests have similar validities, then cost should be considered when choosing between them.

19
New cards

Taylor-Russell tables

This selection device is designed to estimate the percentage of future employees who will be successful on the job if an organization uses a particular test.

20
New cards

Proportion of correct decisions

This method is easier to compute but less accurate than the Taylor-Russell tables. The only information needed to determine the proportion of correct decisions is employees' test scores and their scores on the criterion.
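The computation counts the decisions that would have been correct: people above both the test cutoff and the criterion cutoff (good hires), plus people below both (good rejections), divided by the total. A sketch with hypothetical scores and cutoffs:

```python
# Proportion of correct decisions from test scores and criterion scores,
# given a test cutoff and a criterion cutoff (hypothetical values).
def proportion_correct(test_scores, criterion_scores, test_cut, crit_cut):
    correct = 0
    for t, c in zip(test_scores, criterion_scores):
        hired_ok    = t >= test_cut and c >= crit_cut   # would hire, succeeds
        rejected_ok = t <  test_cut and c <  crit_cut   # would reject, would fail
        if hired_ok or rejected_ok:
            correct += 1
    return correct / len(test_scores)

tests    = [90, 55, 72, 40, 85, 60, 78, 35]
criteria = [4.5, 2.0, 3.8, 2.5, 4.0, 3.9, 4.2, 1.8]
print(proportion_correct(tests, criteria, test_cut=70, crit_cut=3.5))  # 0.875
```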

21
New cards

The Lawshe tables

This selection device is designed to estimate the probability that a particular applicant will be successful on the job.

22
New cards

Brogden-Cronbach-Gleser Utility Formula

Another way to determine the value of a test in a given situation is by computing the amount of money an organization would save if it used the test to select employees.
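The formula multiplies the number of people hired, their expected tenure, the test's validity, the standard deviation of performance in dollars, and the mean standardized test score of those hired, then subtracts the cost of testing. A sketch with hypothetical figures:

```python
# Brogden-Cronbach-Gleser utility: estimated dollar savings from using a test.
def bcg_utility(n_hired, tenure_years, validity, sd_y, mean_z_selected,
                cost_per_applicant, n_applicants):
    savings = n_hired * tenure_years * validity * sd_y * mean_z_selected
    return savings - cost_per_applicant * n_applicants

# 10 hires staying 2 years, validity .35, SDy $20,000, mean z of hires 1.0,
# 100 applicants tested at $25 each (all values hypothetical)
print(bcg_utility(10, 2, 0.35, 20000, 1.0, 25, 100))  # 137500.0
```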

23
New cards

Adverse Impact, Single-group validity, Differential validity

Determining the Fairness of a Test

24
New cards

Adverse Impact

the first step in determining a test's potential bias. There are two basic ways to determine this: looking at test results or anticipating adverse impact prior to the test.
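When looking at test results, a common standard is the four-fifths (80%) rule: adverse impact is suspected when one group's selection rate is less than 80% of the highest group's rate. A sketch with hypothetical applicant counts:

```python
# Four-fifths (80%) rule check for adverse impact in selection rates.
def adverse_impact(selected_minority, applied_minority,
                   selected_majority, applied_majority):
    minority_rate = selected_minority / applied_minority
    majority_rate = selected_majority / applied_majority
    ratio = minority_rate / majority_rate
    return ratio, ratio < 0.80   # True -> potential adverse impact

# Hypothetical: 10 of 50 minority applicants hired vs. 40 of 100 majority
ratio, flagged = adverse_impact(10, 50, 40, 100)
print(round(ratio, 2), flagged)  # 0.5 True
```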

25
New cards

Single-group validity

A test significantly predicts performance for one group but not for others.

26
New cards

Differential validity

A test is valid for two groups but more valid for one than for the other.

27
New cards

Unadjusted Top-down selection

Applicants are rank ordered on the basis of their test scores. Selection is then made by starting with the highest score and moving down until all openings have been filled.

28
New cards

Rule of three (or Rule of Five)

The names of the top three (or top five) scorers are given to the person making the hiring decision, who can then choose any of them based on the immediate needs of the employer.

29
New cards

Passing Scores

• With a multiple-cutoff approach, the applicants would be administered all of the tests at one time.

• With a multiple-hurdle approach, the applicant is administered one test at a time, usually beginning with the least expensive.

30
New cards

multiple-cutoff approach

the applicants would be administered all of the tests at one time.

31
New cards

multiple-hurdle approach

the applicant is administered one test at a time, usually beginning with the least expensive.

32
New cards

Banding

_____ takes into consideration the degree of error associated with any test score. Thus, even though one applicant might score two points higher than another, the two-point difference might be the result of chance (error) rather than actual differences in ability.
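One common way to set the band is from the standard error of measurement: scores within the band width of the top score are treated as statistically equivalent. The SD, reliability, and scores below are hypothetical:

```python
# Banding via the standard error of the difference (SED): scores within
# the band of the top score are treated as equivalent.
from math import sqrt

def band_width(sd, reliability, confidence_multiplier=1.96):
    sem = sd * sqrt(1 - reliability)   # standard error of measurement
    sed = sem * sqrt(2)                # standard error of the difference
    return confidence_multiplier * sed

scores = [96, 94, 93, 89, 85]
width = band_width(sd=5.0, reliability=0.90)
top = max(scores)
band = [s for s in scores if s >= top - width]   # scores treated as equivalent
print(round(width, 2), band)
```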