CLP 4433 Final


169 Terms

1

Traditional interview

Interviewer pursues different areas of inquiry with each job candidate.

2

Differential validity

Test results mean different things for different groups.

3

Single-group validity

Test results are valid for one group but not for another.

4

Organizational Settings

Employment interview, Personality inventories, Situational Judgment Test (SJT), Pre-employment testing

5

Structured interview

Standardized, with the same questions asked of each job candidate.

6

Behavioral interview

Focus on past behaviors rather than attitudes or opinions.

8

Formative assessments

Help teachers determine what information students are and are not learning during the instructional process.

– Helps teachers identify where students need help and decide whether it is appropriate to move to the next unit of instruction.

9

Norm-Referenced Tests

Standardized tests in which one test taker's score is compared with the scores of a group of test takers who took the test previously.

– Ex: Top 25% passing

10

Criterion-Referenced Tests

Tests that compare a test taker's scores with an objectively stated standard of achievement.

– Ex: Need a 70 to pass
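The two passing rules above can be contrasted in a short sketch. The score list is invented for illustration: the norm-referenced rule passes whoever lands in the top 25% of the comparison group, while the criterion-referenced rule passes anyone at or above the fixed standard of 70.

```python
# Invented scores for eight test takers -- illustration only.
scores = [55, 62, 68, 71, 74, 78, 83, 90]

# Norm-referenced: passing depends on how others did (top 25% pass).
cutoff_index = int(len(scores) * 0.75)        # boundary of the top quarter
norm_cutoff = sorted(scores)[cutoff_index]    # score needed to be in top 25%
norm_pass = [s for s in scores if s >= norm_cutoff]

# Criterion-referenced: passing depends only on a fixed cutoff of 70.
criterion_pass = [s for s in scores if s >= 70]

print(norm_pass)       # only the top quarter of the group
print(criterion_pass)  # everyone who reached the fixed standard
```

Note how the same score (e.g., 74) can pass under one rule and fail under the other: the norm-referenced standard moves with the comparison group, while the criterion-referenced standard does not.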

11

Authentic assessment

Assessing a student's ability to perform real-world tasks by applying the knowledge and skills he or she has learned.

12

Cognitive testing

Use of tests that measure global and narrow intellectual abilities.

13

Memory testing

Specific questions about memory functioning.

– Often used with older adults who report memory concerns.

14

Personality inventories

– Often used as part of pre-employment testing
– Individual personality features may not be as important
■ Overall personality has predictive validity for a variety of outcomes
– Provide incremental validity (e.g., above cognitive ability)

15

Validity coefficient (r)

Correlation between test scores (predictors) and performance (criterion), representing the strength of the validity evidence.

16

Range restriction

The reduction in the range of scores that results when some people are dropped from a validity study, such as when low performers are not hired, causing the validity coefficient to be lower than it would be if all persons were included in the study.
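A quick simulation makes the definition concrete. The applicant data below are invented; a small helper computes the validity coefficient r for the full applicant pool and again after low scorers are dropped (only scores of 70 or above are "hired"), and the restricted-range coefficient comes out lower.

```python
from statistics import mean, pstdev

def pearson_r(x, y):
    """Validity coefficient: correlation of predictor with criterion."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)
    return cov / (pstdev(x) * pstdev(y))

# Hypothetical applicant pool: test score and later job performance.
scores =      [40, 50, 55, 60, 70, 75, 80, 90, 95, 100]
performance = [1.0, 2.2, 1.8, 2.9, 2.6, 3.4, 3.1, 3.9, 3.6, 4.2]

full_r = pearson_r(scores, performance)

# Suppose only applicants scoring >= 70 are hired; the validity study
# can then observe performance only for this restricted group.
hired = [(s, p) for s, p in zip(scores, performance) if s >= 70]
restricted_r = pearson_r([s for s, _ in hired], [p for _, p in hired])

print(round(full_r, 2), round(restricted_r, 2))  # restricted r is lower
```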

17

Coefficient of determination (R²)

– The amount of shared variance between predictor and criterion
– Squared correlation
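As a worked example (the r value below is invented): squaring a validity coefficient converts it into the proportion of criterion variance the predictor accounts for.

```python
# Coefficient of determination: square the validity coefficient to get
# the proportion of shared variance between predictor and criterion.
r = 0.30                       # hypothetical validity coefficient
r_squared = round(r ** 2, 2)
print(r_squared)               # 0.09 -> the test explains 9% of criterion variance
```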

18

Employment interview

– Traditional interview: interviewer pursues different areas of inquiry with each job candidate.
– Structured interviews: standardized, with the same questions asked of each job candidate.
– Behavioral interviews: focus on past behaviors rather than attitudes or opinions.

21

Situational Judgment Test (SJT)

– Written or video-based scenarios on work-related dilemmas
– Asked to identify an effective course of action
– May ask to rank actions from most to least effective

22

Pre-employment testing

Who should we hire for a job?

■ What individual characteristics are indicative of
– successful performance on the job
– satisfaction with the job
– successful performance while training for the job
– remaining committed to the job and organization
– staying with the job for the long term

23

Clinical/Counseling Settings

Cognitive and Memory Testing; Comprehensive, Clinically Oriented Self-Report; Symptom Self-Report Tests; Symptom Checklists; Behavior Rating Scales; Interviews

24

Cognitive and Memory Testing example

Wechsler's

25

Comprehensive, Clinically Oriented Self-Report

– Gather information on symptoms, functioning, personality, and more
– Often used to help diagnose and plan treatment
– Not frequently used to monitor progress
– Examples: MMPI, PAI, MCMI-III

26

Symptom Self-Report Tests

More specific, narrow tests.

Ex: Likert-type scale:
0 I do not feel sad.
1 I feel sad.
2 I am sad all the time and I can't snap out of it.
3 I am so sad and unhappy that I can't stand it.

27

Symptom Checklist

– Client answers questions about their symptoms
– Can cover a broad area of symptoms

Ex: Generalized Anxiety Disorder and CBU

28

Behavior Rating Scales

– Often used with children
– Outside informant (parent, teacher, etc.) answers questions about the individual's behavior

29

Semi-structured interviews

Some structure, but allow more flexibility.

– Sometimes questions can change based on previous answers

30

Interviews

Allow the psychologist to gather information by asking the client questions.

31

Structured clinical interviews

Require the interviewer to follow a fixed format in asking questions.

– Often accompanied by a formal scoring plan

32

Evidence-based practice

The integration of the best available research with clinical expertise in the context of patient characteristics, culture, and preferences.

33

Educational Settings

Placement, Formative, Diagnostic, Summative, and Authentic Assessments; Norm- and Criterion-Referenced Tests

34

Authentic assessment examples

■ Dissertations
■ Journal writing
■ Projects
■ Presentations
■ Experiments

35

Traditional assessments

Norm- and criterion-referenced tests

36

Summative assessments

– Determine what students do and do not know
– Gauge student learning
– Assign earned grades

37

Decisions made at the end of instruction

Summative assessments

38

Diagnostic assessment

Assessment that involves an in-depth evaluation of an individual to identify characteristics for treatment or enhancement.

39

Formative assessment example

Midterm

40

Decisions made during instruction

Formative and diagnostic assessments

41

Placement assessments are

assessments that are used to:
– determine whether students have the skills or knowledge necessary to understand new material
– determine how much information students already know about the new material

42

Decisions made at the beginning of instruction

Placement assessments

43

Making decisions in the classroom

– What knowledge and skills do students already possess?
– What knowledge and skills are my students learning?
– Where do my students need help?
– What knowledge and skills have my students learned?
– What grade have my students earned?

44

Range restriction

A reason not to simply raise the standard for better accuracy.

45

Only doing something once

Higher chance of error.

46

Simple linear regression

Y = a + bX

– Where:
■ Y = predicted criterion score
■ a = intercept
■ b = slope
■ X = predictor score
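The equation above can be estimated and applied in a few lines. The predictor/criterion pairs below are invented for illustration; the least-squares slope is b = cov(X, Y) / var(X), and the intercept is a = mean(Y) − b·mean(X).

```python
from statistics import mean

# Hypothetical calibration sample: predictor scores (X, e.g., an
# admissions test) and criterion scores (Y, e.g., first-year GPA).
X = [500, 550, 600, 650, 700]
Y = [2.4, 2.7, 2.9, 3.3, 3.5]

mx, my = mean(X), mean(Y)
b = sum((x - mx) * (y - my) for x, y in zip(X, Y)) / sum((x - mx) ** 2 for x in X)
a = my - b * mx  # intercept: line passes through (mean X, mean Y)

def predict(x):
    """Y-hat = a + bX: predicted criterion score for predictor score x."""
    return a + b * x

print(round(predict(625), 2))  # predicted GPA for a predictor score of 625
```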

47

Simple

One predictor

48

Line of best fit

Use this line to make predictions.

– That is, the regression equation

49

True positives

Predicted to succeed and were successful

50

False negatives

Predicted to fail and were successful

51

High Standard of Accuracy

Ex: Malingering

52

The best way: increase validity

The only way to maximize accuracy

53

Calibration sample (a.k.a. Training Set)

Sample for which regression parameters are estimated

54

Measurement bias

Scores on a test taken by different subgroups in the population (e.g., men and women; minority and majority) need to be interpreted differently because of some characteristic of the test not related to the construct being measured.

55

The source of bias could be

in the criterion

– E.g., a rater systematically rates women higher than men

56

Measurement Bias in Cognitive Ability Testing

A controversial topic.

– Some studies have found subpopulation differences in intelligence test scores

57

Why do these differences exist?

Subjective vs. objective job performance criteria; bias in performance ratings.

■ Subjective ratings may favor one group of participants
■ Objective ratings may favor another group of participants

58

Validity is a

statistical concept

59

Group differences do not

always indicate unfair testing

60

Fairness is a

social concept

61

Purpose of testing is

to identify individual differences

– But people should be evaluated on what the test purports to measure, and nothing more.

62

Why is measurement bias / bias in performance ratings important?

– Relative validity differences seem small in the absolute value of the correlations, but are large in relative percentages
– Differences in who gets selected for a job, accepted into college, receives services, etc.

64

While the differences discussed exist, it is important to note:

– There is more variation within subpopulation groups than between groups
– Bias inherent in a test may also cause differences

65

The source of bias may also

represent true subpopulation differences

– E.g., men and women differ in spatial rotation ability

66

The source of bias is

not always the test

67

Types of Measurement Bias

Differential and single-group validity

68

Validation sample (a.k.a. Test Set)

Sample used to predict criterion scores

69

Cross-validation

– The process of administering a test to another sample of test takers, representative of the target population
■ Because of chance factors that contribute to random error, this second administration can be expected to yield lower correlations with criterion measures.
– Can also simply gather a large enough dataset and randomly split it into two samples.
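The random-split approach in the last point can be sketched in a few lines. The 200-record sample is an invented stand-in; the split yields a calibration (training) half for estimating the regression parameters and a validation (test) half for checking how well those estimates predict new criterion scores.

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible

sample = list(range(200))   # stand-in for 200 examinees' records
random.shuffle(sample)      # randomize before splitting

calibration = sample[:100]  # estimate regression parameters here
validation = sample[100:]   # evaluate the predictions here

print(len(calibration), len(validation))
assert not set(calibration) & set(validation)  # no examinee is in both sets
```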

70

Low Standard of Accuracy

Ex: TSA being overly cautious

71

Why not just raise the standard for better accuracy?

It depends

72

False positives

Predicted to succeed and were failures

73

True negatives

Predicted to fail and were failures
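The four outcomes above (true/false positives/negatives) form a 2×2 selection table: the prediction (succeed/fail) crossed with the actual result. A minimal sketch with invented hire predictions and outcomes:

```python
# Invented selection data: prediction of success vs. actual success.
predicted_success = [True, True, True, False, False, False, True, False]
actual_success =    [True, True, False, False, True, False, True, False]

pairs = list(zip(predicted_success, actual_success))
tp = sum(p and a for p, a in pairs)          # predicted success, succeeded
fp = sum(p and not a for p, a in pairs)      # predicted success, failed
fn = sum(not p and a for p, a in pairs)      # predicted failure, succeeded
tn = sum(not p and not a for p, a in pairs)  # predicted failure, failed

accuracy = (tp + tn) / len(pairs)            # proportion of correct predictions
print(tp, fp, fn, tn, accuracy)
```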

74

Psychology

Describe, explain, predict

75

Discriminant validity

Test scores are not related to unrelated constructs

76

Example of discriminant validity

Self-esteem and intelligence

77

How do we know different measures of the same construct are truly measuring the same thing?

– E.g., two different tests of intelligence
– E.g., two different tests of personality

78

Traditional view of validity

Content and criterion-related

79

What is validity?

Evidence that the interpretations being made from the scores on a test are appropriate for their intended purpose

80

A test must first be

reliable before it can be valid

1. Reliability → 2. Validity

81

Traditional model of validity

"Measures what it is designed to measure"

82

There are truly no "types" of validity,

only different sources of validity evidence

83

Careful of validity inference

E.g., a valid measure of personality may not predict sales performance.

84

Three forms of validity evidence

– Content validity
– Construct validity
– Criterion-related validity

85

Validity is

a unitary/single concept

86

Sources of Validity Evidence

– Evidence based on test content
– Evidence based on response process
– Evidence based on internal structure
– Evidence based on relationships with other variables
– Evidence based on the consequences of testing

87

Evidence based on test content

What is actually on the test

88

Evidence based on response process

Observations or interviews to understand the mental processes that test takers use to respond

89

Factor analysis

Is the test accounted for by one factor?

90

Criterion-related validity

The extent to which scores on a test correlate with scores on a measure of performance or behavior.

– Evidence that test scores correlate with or predict independent behaviors, attitudes, or events
– Correlate test scores with other measures

91

Evidence based on relationships with other variables

Criterion-related validity: correlate test scores with other measures

92

Evidence based on internal structure

Factor analysis: is the test accounted for by one factor?

93

Evidence based on the consequences of testing

Intended and unintended consequences: is the test biased?

94

Content validity (evidence based on test content)

The extent to which the questions on a test are representative of the material that should be covered by the test

95

Logically examining and evaluating the content of a test (including the test questions, format, wording, and tasks required of test takers)

to determine the extent to which the content is representative of the concepts that the test is designed to measure, without either underrepresenting those concepts or including elements that are irrelevant to their measurement. (Content validity)

96

Construct

– An attribute, trait, or other characteristic that is abstracted from observable behavior
– Attribute, trait, or characteristic the test is designed to measure

97

Example of Construct

Aggression

98

Examples of content validity

– Psych Tests and Measurement Final: items should assess knowledge of course material

99

Example of Construct in a test

– FIRO-B: test of interpersonal interaction in organizations
■ Inclusion
■ Control
■ Affection

100

Purpose of most tests is to

make predictions

SAT → College grades
Personality → Job performance
