Reliability


59 Terms

1
New cards

Consistency in measurement; a reliable test yields similar results under consistent conditions.

What is reliability in psychological testing?

2
New cards

A numerical index showing the proportion of true score variance to total test variance.

What is a reliability coefficient?

3
New cards

A large portion of the score variance reflects true differences rather than error.

What does a high reliability coefficient indicate?

4
New cards

A statistical measure of variability in test scores, which includes both true and error variance.

What is variance in testing?

5
New cards

The portion of total variance caused by actual differences in the trait being measured.

What is true variance?

6
New cards

The part of total variance caused by factors unrelated to the measured trait (random or systematic errors).

What is error variance?

7
New cards

Total Variance = True Variance + Error Variance

Formula: What makes up total variance?
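
A worked sketch of this decomposition, using made-up variance figures (not data from any real test), tying the total-variance formula to the reliability coefficient defined in card 2:

```python
# Hypothetical variance components for a test (illustrative numbers only).
true_variance = 81.0   # variance due to real differences in the trait
error_variance = 19.0  # variance due to random/systematic error

# Classical test theory decomposition: total = true + error
total_variance = true_variance + error_variance

# Reliability coefficient: proportion of total variance that is true variance
reliability = true_variance / total_variance

print(total_variance)  # 100.0
print(reliability)     # 0.81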

8
New cards

The test is more reliable, with less interference from random or systematic error.

What does high true variance mean in a test?

9
New cards

Any influence on a score that is not part of what the test is trying to measure.

What is measurement error?

10
New cards

Random error and systematic error.

Two types of measurement error?

11
New cards

Unpredictable, inconsistent influences on test scores (e.g., testtaker's sudden illness).

What is random error in testing?

12
New cards

A testtaker experiencing a sudden blood pressure surge during testing.

What is an example of random error?

13
New cards

A consistent or proportional error that affects scores in the same direction.

What is systematic error?

14
New cards

A ruler that is always one-tenth inch too long, causing consistent mismeasurement.

Example of systematic error?

15
New cards

To reduce error variance and improve the reliability and accuracy of score interpretations.

Why is understanding error important in testing?

16
New cards

Test construction, test administration, and test scoring/interpretation.

What are the three main sources of error variance?

17
New cards

Variability in test scores caused by differences in the test items selected or constructed.

What is item or content sampling?

18
New cards

A testtaker may perform better or worse depending on which items were included in the test.

How does item sampling affect scores?

19
New cards

Scoring higher on a test that includes familiar or expected questions versus a different version of the same test.

Give an example of item sampling error.

20
New cards

Environmental distractions like noise, lighting, or temperature affecting testtaker performance.

What is a source of error in test administration?

21
New cards

Fatigue, illness, emotional distress, or lack of sleep.

What are testtaker variables that can cause error?

22
New cards

Errors caused by the examiner deviating from test procedures or influencing responses.

What are examiner-related variables?

23
New cards

An examiner unintentionally nodding to indicate a correct answer during an oral test.

Example of examiner-related error?

24
New cards

Poor lighting, ventilation, or uncomfortable settings can distract the testtaker.

How can the testing environment introduce error?

25
New cards

Technical glitches, subjective judgments, or inconsistencies in applying scoring rules.

What is a source of error in test scoring?

26
New cards

When scorers must interpret open-ended or creative responses without clear criteria.

What increases error in subjective scoring?

27
New cards

Different scorers disagreeing on which creative block designs deserve credit.

Example of subjective scoring error?

28
New cards

By using standardized, computerized, or well-documented scoring procedures.

How can objectivity reduce scoring error?

29
New cards

To improve test reliability and ensure accurate, fair assessment outcomes.

Why is identifying sources of error variance important?

30
New cards

The consistency or stability of test scores over time, forms, or items.

What does reliability measure in psychometrics?

31
New cards

A value from 0 to 1 that reflects the proportion of true score variance in observed scores.

What is a reliability coefficient?

32
New cards

90% of score variance is due to true differences; 10% is due to error.

What does a reliability coefficient of 0.90 mean?

33
New cards

Inversely related: as reliability increases, the proportion of error variance decreases.

What is the relationship between reliability and error variance?
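
A small sketch of the same point, using assumed reliability values to show how the true/error split of variance changes:

```python
# Hypothetical reliability coefficients (illustrative values only).
for reliability in (0.70, 0.80, 0.90, 0.95):
    true_proportion = reliability        # share of variance due to true differences
    error_proportion = 1 - reliability   # share of variance due to measurement error
    print(f"r = {reliability:.2f}: {true_proportion:.0%} true, {error_proportion:.0%} error")
```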

34
New cards

Correlation between scores on two test administrations with the same individuals.

What is test-retest reliability?
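
A minimal sketch of a test-retest estimate, assuming hypothetical scores for the same five testtakers on two administrations and correlating them:

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical scores for the same five testtakers at time 1 and time 2.
time_1 = [12, 15, 9, 20, 17]
time_2 = [13, 14, 10, 19, 18]

# Test-retest reliability = correlation between the two administrations.
print(round(pearson(time_1, time_2), 3))
```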

35
New cards

For stable traits like intelligence or personality.

When is test-retest reliability appropriate?

36
New cards

Test-retest reliability when the time gap between administrations is over 6 months.

What is the coefficient of stability?

37
New cards

Longer intervals reduce reliability due to greater chances for error variance.

How does time affect test-retest reliability?

38
New cards

Forms with equal means and variances that measure the same construct.

What are parallel forms of a test?

39
New cards

Correlation between different but equivalent versions of a test.

What is alternate-forms reliability?

40
New cards

Reliability estimate between alternate or parallel forms of a test.

What is the coefficient of equivalence?

41
New cards

Item sampling, motivation, fatigue, practice, and therapy effects.

What errors affect alternate-form reliability?

42
New cards

The degree of correlation among items on the same test.

What is internal consistency reliability?

43
New cards

When evaluating how well test items measure the same construct.

When is internal consistency most useful?

44
New cards

Correlating scores of two halves of a single test administered once.

What is split-half reliability?

45
New cards

Spearman-Brown formula.

What formula adjusts split-half reliability?

46
New cards

To estimate the reliability of a full-length test based on split-half data.

What is the Spearman-Brown formula used for?
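
A rough sketch of the split-half procedure with the Spearman-Brown correction, using invented right/wrong item scores and an odd-even split:

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical item scores (rows = testtakers, columns = items) from one administration.
items = [
    [1, 1, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0],
    [1, 1, 1, 1, 1, 1],
    [0, 0, 0, 1, 0, 0],
    [1, 0, 1, 1, 1, 1],
]

# Odd-even split: one half-score from odd-numbered items, one from even-numbered items.
odd_half  = [sum(row[0::2]) for row in items]
even_half = [sum(row[1::2]) for row in items]

r_half = pearson(odd_half, even_half)

# Spearman-Brown correction estimates full-length reliability from the half-test correlation.
r_full = (2 * r_half) / (1 + r_half)
print(round(r_half, 3), round(r_full, 3))
```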

47
New cards

A type of split-half reliability where odd-numbered items are compared to even-numbered ones.

What is odd-even reliability?

48
New cards

It may introduce bias due to fatigue, anxiety, or uneven item difficulty.

Why avoid splitting a test in the middle for split-half reliability?

49
New cards

For tests with dichotomous (right/wrong) items with varying difficulty.

When is KR-20 used?

50
New cards

When all test items are assumed to have equal difficulty.

When is KR-21 used?
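
A sketch of both Kuder-Richardson formulas on invented dichotomous item data; the scores (and the use of population variance) are assumptions for illustration only:

```python
from statistics import pvariance, mean

# Hypothetical dichotomous item scores (1 = correct, 0 = wrong); rows = testtakers.
items = [
    [1, 1, 0, 1, 1],
    [0, 1, 0, 0, 1],
    [1, 1, 1, 1, 1],
    [0, 0, 0, 1, 0],
    [1, 0, 1, 1, 1],
    [1, 1, 0, 1, 0],
]

k = len(items[0])                     # number of items
totals = [sum(row) for row in items]  # total score per testtaker
var_total = pvariance(totals)         # variance of total scores

# KR-20: uses each item's proportion correct (p) and incorrect (q = 1 - p).
p = [mean(col) for col in zip(*items)]
sum_pq = sum(pi * (1 - pi) for pi in p)
kr20 = (k / (k - 1)) * (1 - sum_pq / var_total)

# KR-21: simpler formula that assumes all items are equally difficult.
m = mean(totals)
kr21 = (k / (k - 1)) * (1 - (m * (k - m)) / (k * var_total))

print(round(kr20, 3), round(kr21, 3))
```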

51
New cards

A reliability index for tests with nondichotomous items, measuring internal consistency.

What is coefficient alpha (Cronbach's alpha)?

52
New cards

0.70–0.90 is acceptable; over 0.90 may indicate redundancy.

What is a good Cronbach's alpha value?
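
A minimal sketch of coefficient alpha on invented Likert-type responses (nondichotomous items), using the ratio of summed item variances to total-score variance:

```python
from statistics import pvariance

# Hypothetical 1-5 Likert responses; rows = respondents, columns = items.
items = [
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 4, 5],
    [2, 1, 2, 2],
    [4, 4, 5, 4],
    [3, 2, 3, 3],
]

k = len(items[0])
item_variances = [pvariance(col) for col in zip(*items)]
total_variance = pvariance([sum(row) for row in items])

# Cronbach's alpha: internal consistency from item variances vs. total-score variance.
alpha = (k / (k - 1)) * (1 - sum(item_variances) / total_variance)
print(round(alpha, 3))  # roughly 0.70-0.90 is the commonly cited acceptable range
```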

53
New cards

It generally increases — longer tests tend to be more reliable.

What happens to alpha as the number of items increases?

54
New cards

A measure of internal consistency based on the average difference between scores on test items.

What does Average Proportional Distance (APD) measure?

55
New cards

≤ 0.20 is considered excellent; ≤ 0.25 is acceptable.

What is an excellent APD value?
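
One possible way to compute APD, assuming the reading suggested by the cards above (average the absolute differences between item scores, then scale by the number of response options minus one); both the data and this exact formulation are assumptions:

```python
from itertools import combinations
from statistics import mean

# Hypothetical 1-7 Likert responses; rows = respondents, columns = items.
items = [
    [5, 6, 5],
    [3, 3, 4],
    [7, 6, 7],
    [2, 3, 2],
    [5, 5, 6],
]
n_options = 7  # assumed number of response options per item

# For each respondent, average the absolute differences between every pair of item scores.
person_distances = [
    mean(abs(a - b) for a, b in combinations(row, 2)) for row in items
]

# Average across respondents, then divide by (options - 1) to make the distance proportional.
apd = mean(person_distances) / (n_options - 1)
print(round(apd, 3))  # lower values indicate greater internal consistency
```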

56
New cards

Agreement between two or more scorers on the same test.

What is inter-scorer reliability?

57
New cards

When tests involve subjective judgment (e.g., essays, creativity tasks).

When is inter-scorer reliability most important?
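
A simple sketch using a correlation between two scorers' ratings as the agreement index (percent agreement or kappa are common alternatives); the essay scores are invented:

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical essay scores assigned independently by two scorers to the same six essays.
scorer_a = [4, 3, 5, 2, 4, 3]
scorer_b = [4, 2, 5, 3, 4, 3]

# Inter-scorer reliability estimate: correlate the two scorers' ratings.
print(round(pearson(scorer_a, scorer_b), 3))
```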

58
New cards

An estimate of how much a test score deviates from the true score due to measurement error.

What is the Standard Error of Measurement (SEM)?

59
New cards

SEM is inversely related to reliability — higher reliability means lower SEM.

How is SEM related to reliability?
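
A worked example of the usual SEM formula, SEM = SD x sqrt(1 - reliability), with assumed values for the test's standard deviation and reliability:

```python
from math import sqrt

# Hypothetical test: standard deviation of 15 and a reliability coefficient of 0.90.
sd = 15.0
reliability = 0.90

# Standard error of measurement: expected spread of observed scores around the true score.
sem = sd * sqrt(1 - reliability)
print(round(sem, 2))  # about 4.74; higher reliability gives a smaller SEM
```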