What is reliability in psychology?
Reliability is the consistency of a measurement or study.
A test is reliable if it produces the same results when repeated under the same conditions.
Key word: Consistency
Example of reliability
A memory test that gives similar scores when taken by the same person on two different days shows high reliability.
What are the types of reliability?
Inter-observer reliability (IOR), test-retest reliability, internal reliability, and external reliability
What is inter-observer reliability?
Inter-observer reliability refers to the extent to which two or more observers produce consistent and agreed-upon results when recording the same behaviour.
If different observers record similar data, the observation is considered reliable.
How can we assess (test) whether we have IOR (inter-observer reliability)?
By correlating the scores of two different observers watching the same people, using the same behavioural checklist.
If the correlation coefficient is +0.8 or above, we have inter-observer reliability, as it suggests both observers are recording a similar number of observations for each category = reliable
If the correlation coefficient is less than +0.8, the observers are not reliable = they are seeing things differently.
How is inter-observer reliability achieved?
Operationalising behavioural categories clearly; so all observers know exactly what counts as each behaviour
Training observers; to use the behavioural categories in the same way
Conducting a pilot study; so observers can practise and refine their coding units
Having observers record behaviour independently, without discussing what they see
Comparing their results using a correlation (a coefficient of +0.8 or above is typically considered high inter-observer reliability)
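The final correlation step above can be sketched in Python; the `pearson_r` helper and the observer tallies below are illustrative stand-ins, not data from a real study.

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Hypothetical tallies: how often each observer ticked each of four
# behavioural categories while watching the same people.
observer_a = [12, 7, 5, 9]
observer_b = [11, 8, 5, 10]

r = pearson_r(observer_a, observer_b)
print(r >= 0.8)  # +0.8 or above = high inter-observer reliability
```

Here the two observers' tallies track each other closely, so r comes out well above +0.8 and the check passes.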
What is the test-retest method?
A way of measuring reliability
It assesses external reliability of a psychological test or measure. It involves giving the same test to the same participants on two separate occasions and then comparing the results.
If the scores from both testing sessions are highly similar or strongly correlated, the test is considered to have high test-retest reliability, meaning it produces consistent results over time.
How do researchers achieve test-retest reliability?
By:
Using standardised procedures; so the test is administered in the same way each time
Ensuring the time interval between tests is long enough to prevent recall of the first answers, but short enough that the behaviour being measured is unlikely to change
Comparing the two sets of scores using a correlation coefficient (a value of +0.8 or above indicates good reliability)
What is internal reliability?
Internal reliability is how consistent a test is within itself, e.g., all items on a questionnaire measure the same construct.
How do you test internal reliability?
Use the split-half technique: scores for half the test are correlated with the other half.
We know there is internal reliability if the two halves have a correlation coefficient of +0.8 or more.
This shows that the test items are consistent and that the test is measuring what it intends to
What is external reliability?
External reliability is how consistent a test is over time or across researchers.
Give some ways in which reliability can be improved
Operationalising questions/hypotheses
Using a larger sample size (to reduce inconsistencies)
Standardising the experiment
Controlling extraneous variables/the environment, including counterbalancing
Using objective/empirical measurements (empirical data)
Carrying out a pilot study
Taking more than one measurement
Using a script
Having a structured interview
Training observers, etc.
What is test-retest reliability?
Giving the same test to the same participants on two occasions. Similar results indicate high reliability.
What is inter-observer reliability?
The extent to which two or more observers produce consistent results when recording the same behaviour.
How do you assess inter-observer reliability?
Train observers, observe independently, then correlate results. A correlation of +0.8 or above indicates high reliability.
What is validity in psychology?
Validity refers to whether a test or study measures what it claims to measure.
The extent to which an observed effect is genuine
Key word: accuracy
Example of validity
A questionnaire designed to measure anxiety should measure anxiety, not stress or low mood.
What is internal validity?
Whether the study accurately measures the intended variables and whether results are due to the IV, not extraneous variables.
What is external validity?
The extent to which findings can be generalised beyond the study, including ecological, population, and temporal validity.
What is ecological validity?
Whether findings generalise to real-life settings.
What is population validity?
Whether findings generalise to different groups of people.
What is temporal validity?
Whether findings generalise across different time periods.
What is face validity?
Whether a test appears to measure what it claims to measure at face value.
Does the study/investigation make sense?
How is face validity carried out?
Ask an expert (e.g., another psychologist) whether, on the face of it, the test appears to measure what it claims to.
What is construct validity?
Whether a test truly measures the theoretical construct, such as intelligence or aggression.
What is concurrent validity?
Comparing results from a new test with those from an older, well-established test of the same thing.
How is concurrent validity carried out?
If both tests give the same results for a set of participants, they must be measuring the same thing (a correlation coefficient of +0.8 or above indicates concurrent validity).
Give some ways in which internal validity can be improved
Controlling extraneous and confounding variables so they do not influence the dependent variable.
Using standardised procedures, ensuring all participants experience the study in the same way.
Counterbalancing in repeated-measures designs to reduce order effects such as fatigue or practice.
Using single-blind or double-blind procedures to reduce demand characteristics and experimenter bias.
Improving the operationalisation of variables, making sure the IV and DV are clearly and precisely defined.
Conducting pilot studies to identify flaws in the design before running the full experiment.
These steps help ensure that any changes in the dependent variable are genuinely caused by the independent variable, increasing the study's internal validity.
How do you assess internal reliability?
Use split-half reliability by dividing the test into two halves and correlating the scores.
How do you assess external reliability?
Use test-retest or inter-observer reliability checks.
How do you assess concurrent validity?
Give participants the new test and an established test, then correlate the scores.
How do you assess construct validity?
Check whether the test aligns with existing theories and research on the construct.
How do you improve internal reliability?
Standardise instructions, rewrite unclear questions, and use precise behavioural categories.
How do you improve external reliability?
Train observers thoroughly, use clear operationalised categories, and run pilot studies.
How do you improve internal validity?
Control extraneous variables, standardise procedures, counterbalance, and use blind designs.
How do you improve external validity?
Use realistic tasks, conduct field experiments, and use more diverse samples.
How do you improve construct validity?
Base measures on established psychological theories and consult experts when designing tests.
How can something be reliable but not valid?
You may repeat a study and get the same results = reliable, but if the study doesn't measure what you intend it to, it lacks validity.
You can consistently measure the wrong thing