Measurement & Reliability – Lecture Review

Description and Tags

Fifteen Q&A flashcards covering key concepts of reliability, its consequences, types, calculations, and practical considerations from the lecture.

15 Terms

1

What is the Classical Test Theory equation that relates observed, true, and error scores?

X = T + E (Observed score = True score + Error).

2

How does large random error variance affect the ability to understand real differences between people?

Greater random error masks true differences, making it harder to interpret results and lowering reliability.
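
A minimal sketch (not from the lecture; all values are illustrative) of cards 1–2: observed scores are simulated as X = T + E, and the share of observed variance that reflects true differences, var(T)/var(X), shrinks as the random error variance grows.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
true_scores = rng.normal(loc=50, scale=10, size=n)  # T: each person's true score

for error_sd in (2, 10, 30):
    errors = rng.normal(loc=0, scale=error_sd, size=n)  # E: random measurement error
    observed = true_scores + errors                     # X = T + E
    # Reliability in Classical Test Theory: var(T) / var(X)
    print(f"error SD = {error_sd:2d} -> reliability ≈ "
          f"{true_scores.var() / observed.var():.2f}")
```

With an error SD equal to the true-score SD, only about half of the observed variance reflects real differences between people.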

3

What impact does low reliability have on the reproducibility of research findings?

Low reliability makes results unstable and seemingly random, reducing the chance of obtaining the same outcome on replication.

4

According to the lecture, what is the simplest way to improve an instrument’s reliability?

Collect multiple measurements and average them; as the number of measurements increases, the random errors tend to cancel out and the average converges toward the true score.
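
A companion sketch (same simulated setup as above, illustrative only): averaging k independent measurements divides the error variance by k, so reliability climbs toward 1 as k grows.

```python
import numpy as np

rng = np.random.default_rng(1)
n, error_sd = 10_000, 10
true_scores = rng.normal(50, 10, size=n)

for k in (1, 4, 16):
    # k independent measurements per person, each with fresh random error
    measurements = true_scores[:, None] + rng.normal(0, error_sd, size=(n, k))
    averaged = measurements.mean(axis=1)  # averaging cancels much of the error
    print(f"k = {k:2d} -> reliability ≈ {true_scores.var() / averaged.var():.2f}")
```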

5

What reliability coefficient value is generally considered the minimum acceptable for research tools?

A reliability coefficient of 0.70 or higher.

6

List the four main operational types of reliability introduced in the lecture.

1) Test–retest (stability), 2) Parallel/alternate forms (equivalence), 3) Internal consistency (split-half, Cronbach’s α), 4) Inter-rater reliability.

7

What does a high correlation between two administrations of the same test indicate?

High test–retest reliability, showing the measure is stable over time.
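
A hypothetical two-administration example: test–retest reliability is simply the Pearson correlation between scores from the two occasions (computed here with np.corrcoef).

```python
import numpy as np

rng = np.random.default_rng(2)
trait = rng.normal(100, 15, size=200)        # stable underlying trait
time1 = trait + rng.normal(0, 7, size=200)   # first administration
time2 = trait + rng.normal(0, 7, size=200)   # second administration, new error

r = np.corrcoef(time1, time2)[0, 1]
print(f"test–retest r ≈ {r:.2f}")  # expected near 225 / (225 + 49) ≈ 0.82
```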

8

Give one major limitation of using test–retest reliability.

Participants may remember or learn from the first administration, creating carry-over effects that bias the second test.

9

Why is the Spearman–Brown formula applied after calculating split-half reliability?

Because a half-length test is less reliable than the full test, the split-half correlation underestimates the full test’s reliability; the Spearman–Brown formula scales the correlation up to estimate reliability at full length.
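
The formula itself, as a small function (the split-half correction is the k = 2 case of the general Spearman–Brown prophecy formula):

```python
def spearman_brown(r: float, k: float = 2.0) -> float:
    """Predicted reliability when a test is lengthened by factor k.

    The split-half correction is the k = 2 case: r_full = 2r / (1 + r).
    """
    return k * r / (1 + (k - 1) * r)

print(spearman_brown(0.60))        # half-test r = .60 -> full test = 0.75
print(spearman_brown(0.70, k=3))   # tripling a test with r = .70 -> 0.875
```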

10

Which reliability index is most appropriate for a Likert-type questionnaire measuring a single construct?

Cronbach’s alpha, which assesses internal consistency among items.
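
A from-scratch computation on hypothetical Likert data (statistics packages provide this too): α = k/(k−1) · (1 − Σ item variances / variance of the total score).

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, n_items) matrix of item scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 1–5 Likert responses: 6 respondents, 4 items on one construct
data = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 5, 4, 5],
    [3, 3, 3, 4],
    [1, 2, 2, 1],
    [4, 4, 5, 4],
])
print(f"alpha ≈ {cronbach_alpha(data):.2f}")  # high here: items move together
```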

11

According to common thresholds, what Cronbach’s alpha value is classified as "excellent"?

An alpha (α) greater than 0.90.

12

Why is a perfect correlation among items in a questionnaire not desirable even though it implies consistency?

It suggests item redundancy; the items may all ask the same thing and fail to capture different aspects of the construct.

13

What is inter-rater reliability and when is it typically used?

It is the degree of agreement among two or more judges; used when scoring involves human judgment rather than objective instruments.

14

Which statistical coefficient is commonly employed to quantify agreement among multiple raters?

The intraclass correlation coefficient (ICC); the Spearman–Brown formula can also be applied when ratings are averaged across raters.
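
A sketch of the one-way random-effects variant, ICC(1,1) in Shrout–Fleiss notation (hypothetical data; which ICC form applies depends on the rating design):

```python
import numpy as np

def icc_oneway(ratings: np.ndarray) -> float:
    """ICC(1,1) from an (n_subjects, k_raters) matrix of ratings."""
    n, k = ratings.shape
    subject_means = ratings.mean(axis=1)
    # One-way ANOVA mean squares: between subjects vs. within subjects
    ms_between = k * ((subject_means - ratings.mean()) ** 2).sum() / (n - 1)
    ms_within = ((ratings - subject_means[:, None]) ** 2).sum() / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Hypothetical: 5 subjects, each scored by the same 3 raters
ratings = np.array([
    [9, 8, 9],
    [6, 5, 6],
    [8, 8, 7],
    [4, 5, 4],
    [7, 6, 7],
])
print(f"ICC(1,1) ≈ {icc_oneway(ratings):.2f}")  # ≈ 0.89 for this data
```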

15

When selecting a reliability assessment method, what is one crucial question to ask about the trait being measured?

Is the trait stable over time (i.e., will repeated measurements reflect the same underlying value)?