Psychometrics - Semantic 1

Description and Tags

Definitions, facts, general knowledge

30 Terms

1. Selection ratio

The proportion of applicants selected for a particular position or program compared to the total number of applicants. It is often used to assess the effectiveness of a selection process.
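
For reference, this is commonly expressed as: selection ratio = number of applicants selected ÷ total number of applicants (e.g., selecting 5 of 50 applicants gives a ratio of 0.10).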

2. Predictive value

Indicates how well a test or assessment predicts the likelihood of a certain outcome, typically used in tests with dichotomous items.

3. Norm-referenced test

A type of test that assesses an individual's performance in relation to a defined group or norm, often used to compare test-taker scores to the average performance of a peer group.

4. Goodness of fit index

A statistical measure used to evaluate how well a model fits a set of observations; it reflects the proportion of variance accounted for by the model.

It ranges from 0.00 - 1.00 and is ideally >0.90.

5. Adjusted goodness of fit index

A modified version of the goodness of fit index that decreases as the number of factors increases.

It ranges from 0.00 - 1.00 and is ideally >0.90.

6. Parsimony goodness of fit index

A modified version of the goodness of fit index that decreases as the degrees of freedom increase.

It ranges from 0.00 - 1.00 and is ideally >0.90.

7. Rank-order method

A technique of comparing the performance of two or more individuals in a way that accounts for their performance across items of varied difficulty.

8. Standard error of the measure

The average distance an obtained score is expected to fall from a test-taker’s true score.
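
For reference, the standard formula (not stated on this card) is:

SEM = SD × √(1 − r_xx)

where SD is the standard deviation of obtained scores and r_xx is the test's reliability coefficient.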

9. Reliable change index

Used to determine whether a change in an individual's score on a psychological test is reliable (i.e., larger than would be expected from measurement error alone); it is calculated as the ratio of the difference between pre- and post-test scores to the estimated error in the obtained scores.
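
One common formulation (Jacobson & Truax) is:

RCI = (X_post − X_pre) / SE_diff, with SE_diff = SEM × √2

where SEM is the standard error of measurement; an RCI beyond ±1.96 is typically taken as a reliable change.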

10. λ/λmax

The proportion of the total variance among the variables accounted for by a factor, calculated as the factor's eigenvalue divided by the number of principal components (variables).
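
In formula terms: proportion of variance accounted for = λ / λmax = λ / p, where λ is the factor's eigenvalue and p is the number of variables (principal components).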

11. Exploratory factor analysis

A statistical technique used to identify the underlying relationships between variables by grouping them into factors. It helps to reduce data dimensionality and discover latent constructs.

12. Developmental norms

Standardized benchmarks used to assess the typical development of children across various domains, such as physical, cognitive, and social-emotional skills. They are measured on an ordinal scale.

13. Test bias

A difference in performance across groups that is not due to a difference in the groups' true scores; the proportion of between-group score variance not attributable to true score variance.

14. Comparative fit index

A measure of goodness-of-fit for statistical models that evaluates how well the proposed model approximates the observed data compared to a “null” model (where all variables are assumed to be uncorrelated to one another).

It ranges from 0.00 - 1.00 and is ideally >0.90.
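
For reference, a common presentation (assuming the usual chi-square-based definition) is:

CFI = 1 − (χ²_model − df_model) / (χ²_null − df_null)

with negative values truncated so the index stays between 0 and 1; values near 1 indicate the proposed model misfits far less than the null model.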

15. r₁₂

Split-half reliability: the proportion of score variance across two differently ordered halves of the same test that is attributable to true score variance; an index of internal consistency between the two halves.

Assessed using the Spearman-Brown formula, which corrects the half-test correlation to estimate full-length reliability.
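
For reference, the Spearman-Brown correction for a two-half split is:

r_full = 2 × r₁₂ / (1 + r₁₂)

where r₁₂ is the correlation between the two halves and r_full estimates the reliability of the full-length test.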

16. Intercept bias

Occurs when an item characteristic curve (ICC) places two or more groups (or people) at different performance levels even though their ability is known to be the same. In a linear regression model, the groups' regression lines have the same slope but cross the vertical axis at different points (different intercepts).

17. Content validity

The extent to which a test adequately represents the construct it aims to measure; it requires that all components of the construct are covered, and in the same proportions in which they occur in the construct itself.

18. Object

A measurable entity used to assess specific attributes or traits, often represented as a variable in testing.

19. λmax

The maximum possible eigenvalue in an exploratory factor analysis, equal to the number of principal components (variables).

20. Empirical criterion keying

A method of selecting test items based on their ability to distinguish between groups defined by a known, pre-established, chosen criterion.

21. Non-uniform bias

Item characteristic curves that differ between groups in both location (ability level) and slope, so the size of the group difference changes across ability levels. It is considered an interaction between group membership and ability level.

22. Confirmatory factor analysis

A test of whether a proposed model (taken from an exploratory factor analysis) is preferable to alternative models with varying numbers of factors.

23. Test information curve

A graphical representation of the amount of information a test provides across various trait levels. The ability level at the apex of the curve (highest information) is the ability level at which the test measures most precisely.

24. Within-group norms

Statistical benchmarks derived from test scores of individuals within a specific group, allowing for comparisons against the group's performance in the context of a specific trait.

25. Power test

In Item Response Theory, a test in which every item has the same item quality (discrimination) but is targeted at a different ability level.

In Classical Test Theory, a test in which the pass rate of each item varies and all items are meant to be completed within the time allotted.

26. Reliable clinically significant change

In a true experimental design, when a post-treatment score for an individual in the treatment group falls within one positive standard deviation of the control group mean.

27. Nested model

In a confirmatory factor analysis, a model that has a different number of factors (but the same number of principal components) than the proposed model created during exploratory factor analysis.

28. Test information

A measure in item response theory indicating the reliability and precision of the test across different ability levels, reflecting how much information the test provides about an examinee's ability. At each ability level, it is the sum of the item information values across all items.
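
For reference, at a given ability level θ the test information is the sum of the item information values, I(θ) = Σᵢ Iᵢ(θ), and the standard error of the ability estimate is SE(θ) = 1 / √I(θ).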

29. Item information curve

A graph of item information versus ability level. It describes which ability level a given test item provides the most information for, and how much information it provides at all levels of ability.
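
As an illustrative case (assuming a two-parameter logistic model, which this card does not specify), item information is Iᵢ(θ) = aᵢ² × Pᵢ(θ) × (1 − Pᵢ(θ)); information peaks near the item's difficulty and is larger for items with higher discrimination (aᵢ).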

30. Theta (IRT)

In item response theory, theta (θ) represents the latent trait or ability level of an examinee, estimated as the number of standard deviations from a mean of zero.