What makes a good experiment
reliability
sensitivity
validity
Reliability
consistency/dependability of a measure (producing similar results on repeated administrations)
Internal reliability
The extent to which a measure is consistent within itself
Split half reliability
Compares the results of one half of a test with the other half.
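A minimal Python sketch of the idea, assuming a hypothetical 50-participant, 10-item pass/fail test and an odd/even split; the simulated data and the Spearman-Brown correction at the end are illustrations, not part of these notes:
```python
import numpy as np

rng = np.random.default_rng(0)
ability = rng.normal(size=(50, 1))                              # one latent trait per participant
scores = (rng.normal(size=(50, 10)) + ability > 0).astype(int)  # 50 people x 10 pass/fail items

half_a = scores[:, 0::2].sum(axis=1)          # total on odd-numbered items
half_b = scores[:, 1::2].sum(axis=1)          # total on even-numbered items

r_half = np.corrcoef(half_a, half_b)[0, 1]    # correlation between the two halves
r_full = 2 * r_half / (1 + r_half)            # Spearman-Brown: estimated reliability of the full test
print(f"split-half r = {r_half:.2f}, corrected = {r_full:.2f}")
```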
Cronbach's alpha
A statistic that splits the items into halves in every possible way, then correlates ALL halves with ALL other halves; should give a high correlation coefficient (> .70)
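A rough sketch of the usual alpha formula, alpha = (k / (k - 1)) * (1 - sum of item variances / variance of total scores), assuming a participants-by-items score matrix; the function name and toy data below are hypothetical:
```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """alpha = (k / (k - 1)) * (1 - sum of item variances / variance of total scores)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item across participants
    total_var = items.sum(axis=1).var(ddof=1)    # variance of participants' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(1)
shared = rng.normal(size=(100, 1))                    # common factor so the items inter-correlate
data = shared + rng.normal(scale=0.8, size=(100, 8))  # 100 participants x 8 items
print(f"alpha = {cronbach_alpha(data):.2f}")          # > .70 is the conventional cut-off
```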
External reliability
The extent to which a measure varies from one use to another (e.g. IQ test).
Test-retest reliability
The stability of a test over time: administer the test now, then give the same test to the same participants later; the scores should be consistent
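A minimal sketch, assuming Pearson's r is used as the index of stability; the scores below are invented:
```python
import numpy as np
from scipy.stats import pearsonr

time1 = np.array([98, 110, 105, 120, 93, 101, 115, 108])   # first administration
time2 = np.array([100, 112, 103, 118, 95, 99, 117, 110])   # same test, same people, later

r, p = pearsonr(time1, time2)
print(f"test-retest r = {r:.2f}")    # a high positive r indicates a stable measure
```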
Improving inter-rater reliability
clear categories/ definitions
training
Improving reliability
Improve quality of items
Increase/decrease number of items.
Increase sample size – control individual differences (outliers)
Choose appropriate sample – target population
Control conditions – keep things constant across participants
Sensitivity
Detecting even a small effect of the IV on the DV (affected by sample size, variability, and floor & ceiling effects)
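A small simulation sketch of the sample-size point, assuming a two-group design analysed with an independent-samples t-test; the effect size, group sizes and seed are arbitrary:
```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(3)
effect = 0.3                                    # small true effect of the IV on the DV
for n in (20, 200):
    control = rng.normal(0.0, 1.0, size=n)
    treatment = rng.normal(effect, 1.0, size=n)
    t, p = ttest_ind(treatment, control)
    print(f"n per group = {n}: p = {p:.3f}")    # the larger sample detects the same effect more easily
```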
Floor effect
When data points cluster at the lower end of a measurement scale; this happens when a test or measurement is too difficult
Ceiling effect
Occurs when data points cluster at the upper end of a measurement scale; this happens when a test or measurement is too easy
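One rough way to screen for both effects, assuming the common rule of thumb that flags a problem when a sizeable share of scores sits at the scale's minimum or maximum; the 15% threshold and example scores are assumptions, not from these notes:
```python
import numpy as np

def floor_ceiling_check(scores, scale_min, scale_max, threshold=0.15):
    """Flag a floor/ceiling effect when a large share of scores hit the scale's ends."""
    scores = np.asarray(scores)
    prop_floor = np.mean(scores == scale_min)
    prop_ceiling = np.mean(scores == scale_max)
    return {"floor_effect": prop_floor > threshold,
            "ceiling_effect": prop_ceiling > threshold,
            "prop_floor": prop_floor, "prop_ceiling": prop_ceiling}

# A test scored 0-20 that was too difficult: most people score at the bottom.
print(floor_ceiling_check([0, 0, 1, 0, 2, 0, 3, 0, 0, 5], scale_min=0, scale_max=20))
```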
What you want in an experiment
DV not too hard or easy
Wide range of scores
Objective
Ordinal
Appropriate sample
Control conditions
Right level of difficulty
Right number of questions
Test validity
The 'truthfulness' of a measure in that it measures what it claims to
Face validity
whether the test appears to measure what it claims to
Content validity
Whether it covers the full range of symptoms/facets of a construct
Construct validity
The degree to which a test measures the construct/psychological concept at which it is aimed
Convergent validity
The degree to which it correlates with other measurements assessing the same construct.
Divergent/discriminant validity
The degree to which it does not correlate with other measurements assessing different concepts.
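A toy sketch of the expected pattern, assuming two hypothetical anxiety scales (same construct) and one unrelated measure; all data are simulated:
```python
import numpy as np

rng = np.random.default_rng(2)
anxiety_scale_a = rng.normal(size=200)                                     # new questionnaire
anxiety_scale_b = 0.8 * anxiety_scale_a + rng.normal(scale=0.6, size=200)  # established anxiety measure
shoe_size = rng.normal(size=200)                                           # unrelated construct

convergent_r = np.corrcoef(anxiety_scale_a, anxiety_scale_b)[0, 1]   # expected to be high
discriminant_r = np.corrcoef(anxiety_scale_a, shoe_size)[0, 1]       # expected to be near zero
print(f"convergent r = {convergent_r:.2f}, discriminant r = {discriminant_r:.2f}")
```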
Criterion validity
Whether a test reflects a certain set of abilities, i.e. the degree to which a measurement can accurately predict specific criterion variables.
Concurrent validity
How well a test correlates with a previously validated measure, given at the same time.
Predictive validity
How well it predicts future performance.
The validity of a study/ experiment
external validity
internal validity
ecological validity
External validity
The extent to which the results of a study can be generalised to different populations, settings and conditions (often the real world). (must extend to new people/ situations, have high construct validity, use a representative sample, replicate with a new group)
Internal validity
When we can be confident that manipulating the IV affects the DV – there is a causal relationship
Criteria for causation
show co-variation/ correlation
show time-order relationships
eliminate all other possible causes
Key threats to internal validity
testing intact groups (non-random allocation)
order/ practice effects
fatigue/ boredom effects (overcome by counterbalancing; see the sketch after this list)
transfer effects (within-subjects design)
extraneous variables
unequal loss across groups
expectancy effects (need double-blind procedure)
demand characteristics
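A minimal counterbalancing sketch, assuming two hypothetical conditions A and B whose order alternates across participants so that practice and fatigue effects fall equally on both conditions:
```python
orders = [("A", "B"), ("B", "A")]               # the two counterbalanced orders
participants = [f"P{i + 1}" for i in range(6)]

for i, participant in enumerate(participants):
    first, second = orders[i % len(orders)]     # alternate AB / BA across participants
    print(f"{participant}: {first} then {second}")
```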
How to have internal and external validity
control = internal validity
'real world' DV = external validity
multi-method approach: Conduct a controlled experiment and a naturalistic study.
Ecological validity
How much a method of research resembles 'real life' and whether the results are generalisable (using large, representative and diverse samples)