Chapter 6 - PSYC 385 Exam 2

33 Terms

1. Validity

a judgement or estimate of how well a test measures what it is supposed to measure within a particular context

2. Validation

the process of gathering and evaluating evidence about validity

  • both test developers and test users may play a role in the validation of a test

  • test users may validate a test with their own group of test takers (local validation)

3. Validity is often conceptualized in terms of which three categories?

  1. content validity

  2. criterion-related validity

  3. construct validity

4. Content Validity

evaluation of the subjects, topics, or content covered by the items in the test

  • how well a test samples behaviors that are representative of the broader set of behaviors it was designed to measure

    • do the test items adequately represent the content that should be included in the test?

5. Criterion-Related Validity

evaluating the relationship of scores obtained on the test to scores on other tests or measures

6. Construct Validity

the ability of a test to measure a theorized construct (intelligence, aggression, personality, etc.) that it aims to measure

  • this is a measure of validity that is arrived at by executing a comprehensive analysis of:

    • how scores on the test relate to other test scores and measures

    • how test scores can be interpreted within a theoretical framework that explains the construct the test was designed to measure

7. Face Validity

a judgement concerning how relevant the test items appear to be

  • if a test appears to measure what it is supposed to be measuring “on the face of it,” it could be said to be high in face validity

    • a perceived lack of face validity may contribute to a lack of confidence in the test

8. Test Blueprint

a plan regarding the types of information to be covered by the items

  • the number of items tapping each area of coverage

  • the organization of the items in the test

9. How is Content Validity established?

by recruiting a team of subject-matter experts, obtaining their ratings of how important each item is, and having them scrutinize what is missing from the measure

  • important to remember that content validity of a test varies across cultures and time

10. Criterion

the standard against which a test or a test score is evaluated

11. Characteristics of a Criterion

an adequate criterion is relevant for the matter at hand, valid for the purpose for which it is being used, and uncontaminated, meaning it is not part of the predictor

12. Concurrent Validity

an index of the degree to which a test score is related to some criterion measure obtained at the same time (concurrently)

13. Predictive Validity

an index of the degree to which a test score predicts some criterion, or outcome, measure in the future

  • tests are often evaluated in terms of their predictive validity

14. Predictive Validity Considerations

  • base rate

  • hit rate

  • miss rate

    • false-positive

    • false-negative

15. Base Rate

the extent to which the phenomenon exists in the population

  • how likely it is to occur

16. Hit Rate

accurate identification (true-positive or true-negative)

  • how accurate the measure is at predicting the criterion

  • it is easier to achieve a high hit rate when the base rate is high

17. Miss Rate

failure to identify accurately

  • false-positive → same as type 1 error

  • false-negative → same as type 2 error

18. Type 1 Error

concluding that something is significant when it is not, or predicting that something will happen when it does not

  • false-positive

19. Type 2 Error

failing to detect something that actually happened

  • false-negative
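
A quick way to tie base rate, hit rate, miss rate, and the two error types together is to count outcomes directly. The sketch below uses invented screening results (the numbers are illustrative, not from the course materials):

```python
# Minimal sketch (hypothetical data): tally hits and misses for a screening
# test that predicts whether a condition is present.
actual    = [1, 0, 1, 1, 0, 0, 0, 1, 0, 0]   # 1 = condition truly present
predicted = [1, 0, 0, 1, 1, 0, 0, 1, 0, 0]   # 1 = test flags the condition

true_pos  = sum(a == 1 and p == 1 for a, p in zip(actual, predicted))
true_neg  = sum(a == 0 and p == 0 for a, p in zip(actual, predicted))
false_pos = sum(a == 0 and p == 1 for a, p in zip(actual, predicted))  # Type 1 error
false_neg = sum(a == 1 and p == 0 for a, p in zip(actual, predicted))  # Type 2 error

n = len(actual)
base_rate = sum(actual) / n              # how common the condition is
hit_rate  = (true_pos + true_neg) / n    # accurate identifications
miss_rate = (false_pos + false_neg) / n  # failures to identify accurately

print(base_rate, hit_rate, miss_rate)    # 0.4 0.8 0.2 with these made-up data
```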

20. The Validity Coefficient (criterion-related validity)

a correlation coefficient between test scores and scores on the criterion measure

  • validity coefficients are affected by restriction or inflation of range
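
Because the validity coefficient is just a correlation, it can be computed, and the effect of range restriction demonstrated, in a few lines. All scores below are invented for illustration:

```python
# Minimal sketch with invented scores: the validity coefficient is the
# correlation between test scores and criterion scores.
import numpy as np

test      = np.array([50, 55, 60, 65, 70, 75, 80, 85, 90, 95])            # predictor test
criterion = np.array([2.0, 2.6, 2.2, 2.9, 2.5, 3.2, 2.8, 3.5, 3.1, 3.8])  # e.g., later GPA

r_full = np.corrcoef(test, criterion)[0, 1]

# Restriction of range: if only high scorers can be followed up (as with
# admitted applicants), the coefficient shrinks.
high_scorers = test >= 75
r_restricted = np.corrcoef(test[high_scorers], criterion[high_scorers])[0, 1]

print(round(r_full, 2), round(r_restricted, 2))   # about 0.87 vs 0.62 here
```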

21. Incremental Validity (criterion-related validity)

the degree to which an additional predictor explains something about the criterion measure that is not explained by predictors already in use

  • to what extent does a test predict the criterion over and above other variables?
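
One way to picture incremental validity is as the gain in explained criterion variance (R squared) when the new test is added to a predictor already in use. The sketch below simulates data and fits ordinary least squares; the predictor names and effect sizes are assumptions made only for illustration:

```python
# Minimal sketch (simulated data): incremental validity as the gain in R^2
# when a new test is added to an existing predictor.
import numpy as np

def r_squared(X, y):
    """Proportion of criterion variance explained by a least-squares fit."""
    X = np.column_stack([np.ones(len(y)), X])        # add an intercept column
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

rng = np.random.default_rng(0)
n = 200
old_predictor = rng.normal(size=n)                   # predictor already in use
new_test      = rng.normal(size=n)                   # the test being evaluated
criterion     = 0.5 * old_predictor + 0.4 * new_test + rng.normal(scale=0.8, size=n)

r2_old  = r_squared(old_predictor.reshape(-1, 1), criterion)
r2_both = r_squared(np.column_stack([old_predictor, new_test]), criterion)

print(round(r2_both - r2_old, 2))                    # the increment: extra variance explained
```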

22. Evidence of Construct Validity

  • evidence of homogeneity

  • evidence of changes

  • evidence of pre-test/post-test changes

  • evidence from distinct groups

  • convergent evidence

  • discriminant evidence

  • factor analysis

23. Evidence of Homogeneity (construct validity evidence)

how uniform a test is in measuring a single concept
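
As a rough illustration (simulated item responses; the one-factor item model below is an assumption, not a procedure from the course), a homogeneous test shows consistently positive correlations among its items because every item taps the same concept:

```python
# Minimal sketch (simulated responses): items that all measure one concept
# should correlate positively with one another.
import numpy as np

rng = np.random.default_rng(3)
n_people, n_items = 400, 6
trait = rng.normal(size=(n_people, 1))                            # the single concept
items = trait + rng.normal(scale=1.0, size=(n_people, n_items))   # each item = trait + noise

r = np.corrcoef(items, rowvar=False)                 # item-by-item correlation matrix
off_diag = r[~np.eye(n_items, dtype=bool)]
print(round(off_diag.mean(), 2))                     # average inter-item correlation (about .5 here)
```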

24. Evidence of Changes (construct validity evidence)

some constructs are expected to change over time (e.g., reading rate)

25. Evidence of Pre-Test / Post-Test Changes (construct validity evidence)

test scores change as a result of some experience between a pre-test and a post-test (e.g., therapy)

26. Evidence from Distinct Groups (construct validity evidence)

scores on a test vary predictably as a function of membership in some groups (e.g., scores on the Psychopathy Checklist for prisoners vs. civilians)

27. Convergent Evidence (construct validity evidence)

test scores correlate highly, in the predicted direction, with scores on previously established, psychometrically sound tests designed to measure the same (or similar) constructs

  • similar to concurrent validity

28. Discriminant Evidence (construct validity evidence)

showing little relationship between test scores and other variables with which scores on the test should not theoretically be correlated
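
Both kinds of evidence can be read off a small correlation table. The sketch below uses simulated scores, and the measure names are placeholders rather than real instruments:

```python
# Minimal sketch (simulated scores): convergent evidence = a strong correlation
# with a measure of the same construct; discriminant evidence = little
# correlation with a theoretically unrelated measure.
import numpy as np

rng = np.random.default_rng(2)
n = 300
trait = rng.normal(size=n)                                  # latent construct
new_test          = trait + rng.normal(scale=0.5, size=n)
similar_measure   = trait + rng.normal(scale=0.5, size=n)   # taps the same construct
unrelated_measure = rng.normal(size=n)                      # taps something else entirely

r_convergent   = np.corrcoef(new_test, similar_measure)[0, 1]    # expect high
r_discriminant = np.corrcoef(new_test, unrelated_measure)[0, 1]  # expect near zero

print(round(r_convergent, 2), round(r_discriminant, 2))
```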

29. Factor Analysis (construct validity evidence)

a new test should load on a common factor with other tests of the same construct
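
As a hedged sketch of the idea (simulated data; scikit-learn's FactorAnalysis is used only for illustration, since the course does not specify a tool), a new test and two established tests of the same construct should all show sizable loadings on one common factor:

```python
# Minimal sketch (simulated data): three tests of the same construct load on
# a single common factor.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
n = 500
construct = rng.normal(size=n)                   # latent trait (unobserved in practice)
established_1 = 0.8 * construct + rng.normal(scale=0.6, size=n)
established_2 = 0.7 * construct + rng.normal(scale=0.7, size=n)
new_test      = 0.75 * construct + rng.normal(scale=0.65, size=n)

X = np.column_stack([established_1, established_2, new_test])
fa = FactorAnalysis(n_components=1).fit(X)

print(fa.components_)   # all three loadings should be sizable and share a sign
```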

30. Bias

a factor inherent in a test that systematically prevents accurate, impartial measurement

  • implies systematic variation in test scores

  • prevention during test development is the best cure for test bias

31. Rating Error

a judgement resulting from the intentional or unintentional misuse of a rating scale

  • raters may be too lenient, too severe, or reluctant to give ratings at the extremes (central tendency error)

  • halo effect

32. Halo Effect (rating error)

a tendency to give a particular person a higher rating than he or she objectively deserves because of a favorable overall impression

33. Fairness

the extent to which a test is used in an impartial, just, and equitable way