How can something be reliable but not valid?
by consistently producing the same incorrect answer - the results are repeatable (reliable) but do not measure what they are supposed to (not valid)
Reliability definition
how consistent something is - if the same thing is measured twice, do we get the same results?
Ways of assessing reliability
Test-retest method
→ repeat the test with the same participants to check that the same results occur each time.
→ If the correlation coefficient between the 2 sets of results is +0.8 or more there is reliability → the test is producing consistent results over time
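The +0.8 check is simply a correlation between the two sets of scores. A minimal sketch in Python (hypothetical scores, SciPy assumed); the same calculation applies to inter-observer reliability, where the two sets of numbers are the two observers' tallies:

```python
# Hedged sketch: scores for the same 8 participants on two administrations
# of the same test (hypothetical data). A Pearson r of +0.8 or more is
# taken as evidence of test-retest reliability.
from scipy.stats import pearsonr

first_test  = [12, 18, 9, 15, 20, 11, 14, 17]
second_test = [13, 17, 10, 14, 19, 12, 15, 16]

r, _ = pearsonr(first_test, second_test)
print(f"test-retest correlation: r = {r:.2f}")
print("reliable" if r >= 0.8 else "below +0.8 - review the measure")
```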
Inter-observer reliability
→ two observers both collecting data, their data is then compared
→ avoids investigator bias and reduces the chances that the investigators may miss something
→ If the correlation between the two sets of data is +0.8 or more, inter-observer reliability is established.
→ If the correlation is low (well below +0.8), the observers are seeing things differently and the behavioural categories should be checked to make sure they are fully operationalised and easy to understand/ apply.
Ways of assessing internal reliability
Split-half technique
→ scores for one half of the test are correlated with scores for the other half.
→ if there is a significant positive correlation (+0.8 or more) there is internal reliability = the test items are consistent
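A minimal sketch of the split-half calculation, assuming hypothetical 10-item test data and SciPy: items are split into odd- and even-numbered halves, each participant's half-scores are totalled, and the two halves are correlated.

```python
# Hedged sketch: split a 10-item test into odd- and even-numbered items,
# total each participant's score on each half, then correlate the halves.
# A correlation of +0.8 or more suggests internal reliability.
from scipy.stats import pearsonr

# hypothetical item scores: one row per participant, one column per item
responses = [
    [1, 1, 0, 1, 1, 0, 1, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 0, 1, 0, 0],
    [1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
    [0, 0, 0, 1, 0, 0, 0, 0, 1, 0],
    [1, 0, 1, 1, 1, 1, 1, 0, 1, 1],
]

odd_half  = [sum(row[0::2]) for row in responses]   # items 1, 3, 5, 7, 9
even_half = [sum(row[1::2]) for row in responses]   # items 2, 4, 6, 8, 10

r, _ = pearsonr(odd_half, even_half)
print(f"split-half correlation: r = {r:.2f}")
```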
What are some ways of improving reliability?
have a second observer
standardise/ operationalise to make it more replicable
reduce confounding/ extraneous variables
repeat the study to check that the results are consistent
pilot study
more participants → larger sample size
avoid leading questions and use a script in interviews
Validity definition
the extent to which an observed effect is genuine - does it measure what it was supposed to measure, and can it be generalised beyond the research setting in which it was found?
→ in short, how accurate the measurement is
Internal validity
the extent to which a study is free of design faults that may affect the results, so we can be sure the IV (and nothing else) affected the DV
External validity
the extent to which findings can be applied beyond the current study, e.g. lab experiments tend to have poor external validity
Ecological validity
the extent to which findings from a research study can be generalised to other settings and situations → form of external validity
Population validity
whether the sample is representative of people beyond the research setting, so findings can be generalised to the wider population
Temporal validity
how relevant it is to the time - extent to which findings from a research study can be generalised to other historical times and eras.
What threatens internal validity?
leading questions, confounding variables, extraneous variables, demand characteristics and researcher bias
How to assess validity?
Face validity → does the test appear ‘on the face of it’ to measure what it is supposed to measure?
→ done by ‘eyeballing’ the measuring instrument or by passing it to an expert to check
→ refers to whether the way the DV is measured in the experiment looks like it measures what it claims to measure, e.g. the students must think that finding ten errors in a passage of text while being timed appears to measure concentration.
Concurrent validity → test results are measured against another recognised and well-established test to check if results are consistent. Close agreement between the two sets of data indicates that the new test has high concurrent validity → close agreement is indicated if the correlation between the two sets of scores exceeds +0.8
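Concurrent validity uses the same correlation check, this time between scores on the new test and scores on an established test. A minimal sketch with hypothetical questionnaire scores (NumPy assumed):

```python
# Hedged sketch: scores on a new questionnaire vs. the same participants'
# scores on an established, well-recognised scale (hypothetical data).
# r >= +0.8 would indicate concurrent validity.
import numpy as np

new_test         = [22, 30, 15, 27, 34, 19, 25]
established_test = [24, 31, 14, 26, 36, 20, 27]

r = np.corrcoef(new_test, established_test)[0, 1]
print(f"concurrent validity correlation: r = {r:.2f}")
```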
How to improve internal validity?
remove all confounding variables
include single-blind or double-blind tests to mitigate against investigator bias and demand characteristics
pilot studies
Link between internal and external validity
It's a trade-off.
→ the more you control confounding variables, the less external validity the study is likely to have; but the more you allow for external validity, the more likely it is that confounding variables creep in.
High internal validity often leads to low external validity.