CA1 - Final Term
Test Development (5)
Test Conceptualization
Test Construction
Test Tryout
Test Analysis
Test Revision
Test Conceptualization (4)
What
Why
Who
How
What (Test Conceptualization)
What is the test design?
What will the test cover?
What will it measure?
What is the objective of the test?
How is it different from other similar tests that exist?
What is the format of your test? (MCQ / True or False, etc.)
Why (Test Conceptualization)
Why is there a need for your test?
To determine if there is a niche it fulfills.
Who (Test Conceptualization)
Who will benefit from this test?
Who will use the test?
Who will take the test?
What are the specific criteria?
How (Test Conceptualization)
Is it individual or group administration?
Computerized or paper and pencil?
How long will the test take to answer?
Test Construction (3)
Scaling
Writing items
Scoring items
Scaling
Process of setting rules for assigning numbers in measurement.
Likert Scale
Scale measuring degree of agreement, e.g., strongly disagree, disagree, neutral, agree, strongly agree (yields ordinal data).
Paired Comparisons
Presents two equally desirable options and forces the test taker to choose between them, reducing social desirability bias and faking good (since both choices look good).
Guttman Scale
A type of scale where questions are arranged from easiest to hardest, and if someone agrees with a harder statement, it’s assumed they also agree with all the easier ones.
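The cumulative logic of a Guttman scale can be sketched as a simple consistency check. This is a minimal illustration with hypothetical response patterns, not a standard scalogram-analysis routine:

```python
# Check whether a response pattern fits a Guttman scale.
# Items are ordered easiest -> hardest; 1 = agree, 0 = disagree.
def is_guttman_consistent(responses):
    # Once a respondent disagrees with an item, they should not
    # agree with any harder item that follows.
    seen_disagree = False
    for r in responses:
        if r == 1 and seen_disagree:
            return False
        if r == 0:
            seen_disagree = True
    return True

print(is_guttman_consistent([1, 1, 1, 0, 0]))  # True: cumulative pattern
print(is_guttman_consistent([1, 0, 1, 0, 0]))  # False: agrees with a harder item after disagreeing
```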
Writing Items (2)
Item pool
Item bank
Item Pool
The reservoir or well from which items will or will not be drawn for the final version of the test; initial list of items.
Untreated items (could have good or bad items)
All of the items from which you will add to or remove from the final version of the test.
Item Bank
A collection of test questions that have already been treated (analyzed and vetted).
Usually used in computerized adaptive testing, which reduces the floor effect and ceiling effect.
Floor Effect
When a test is too hard, so most people score at the very bottom, making it hard to see differences between them.
Ceiling Effect
When a test is too easy, so most people score at the very top, making it hard to see differences between them.
Item Branching
A testing method where the next question depends on how you answered the previous one, so the test adjusts to your level.
Scoring Items (3)
Cumulative model
Class scoring or category scoring
Ipsative scoring
Cumulative Model
The higher the score, the greater the ability or the stronger the trait being measured.
Class Scoring or Category Scoring
A way of scoring where answers are grouped into categories (like types or classes) instead of giving a continuous score.
Ipsative Scoring
Comparing a test taker’s score on one scale within a test to another scale within the same test.
Intraindividual Comparison
Comparing a person’s performance with their own past performance, instead of comparing them to other people.
Test Tryout
A stage in test development where a draft version of the test is given to a sample group to check how well the items work before the final test is used.
Item Analysis
A set of methods used to check how well test questions work, deciding which items stay in the item bank.
Item Difficulty
How easy or hard a question is, based on the number of people who answered it correctly.
0.30 to 0.70
What is the optimal range for item difficulty?
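Item difficulty is simply the proportion of test takers who answered the item correctly. A minimal sketch with hypothetical responses:

```python
# Item difficulty p = number of correct answers / number of test takers.
# Hypothetical item responses: 1 = correct, 0 = incorrect.
responses = [1, 1, 0, 1, 0, 1, 1, 0, 1, 0]

p = sum(responses) / len(responses)
print(p)  # 0.6 -> inside the optimal 0.30 to 0.70 range
```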
Item Discriminability
How well a question separates high scorers from low scorers on the overall test.
Extreme Group Method
Compare the proportion of correct answers between the top-scoring and bottom-scoring groups on the overall test.
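Under the extreme group method, a common discrimination index is the difference between the item's difficulty in the upper group and in the lower group. A minimal sketch with hypothetical data:

```python
# Extreme group method: discrimination index D = p_upper - p_lower,
# the difference in item difficulty between top and bottom scorers.
# Hypothetical responses to one item (1 = correct, 0 = incorrect).
upper_group = [1, 1, 1, 1, 0]   # top scorers on the whole test
lower_group = [1, 0, 0, 0, 0]   # bottom scorers on the whole test

p_upper = sum(upper_group) / len(upper_group)   # 0.8
p_lower = sum(lower_group) / len(lower_group)   # 0.2
D = p_upper - p_lower
print(round(D, 2))  # 0.6 -> the item separates high from low scorers well
```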
Point-Biserial Method
Correlate the dichotomous item score (correct/incorrect) with the total test score; a high positive correlation means item performance matches overall test performance.
Test Revision
A stage in test development where poor items are removed and others may be rewritten to improve the test.