what are the components of ethical research?
Informed consent, deception, protection from physical and psychological harm, right to withdraw, confidentiality and anonymity, freedom from coercion (persuading someone using threats). Studies are followed by debriefing
define coercion
persuading someone using threats
define ‘construct’
a variable that you want to measure e.g. personality
Aim of measurement in psychology
to test theoretical hypotheses; the measurement/quantification of psychological constructs allows us to do this
Psychometrics
an area of psychology that focuses on the scientific measurement of individual differences in psychological constructs. Psychometrics sits at the intersection of statistics and psychology
What are psychometric tests?
standardised instruments for assessing an aspect of an individual, such as ability, aptitude, attitude or personality. The same (or statistically similar) questions are administered and scored in a consistent way every time, allowing comparison between individuals
Concerns of psychometrics
addressing the quality of scales and items that were designed to measure psychological constructs
Define ‘discriminatory power’
A test should be able to differentiate individuals across a range of profiles, not just pick out extreme cases
define ‘population comparison’
Allow its results to be applicable to a population and not only the individuals that were tested (e.g. through standardisation)
What 2 metrics are important for screeners/ assessments of disorders?
Sensitivity and specificity
define ‘sensitivity’
TRUE POSITIVE rate. Proportion of actual cases correctly identified by the test
what does a high sensitivity ensure?
most people with the trait/ condition are detected (low false negatives)
define ‘specificity’
TRUE NEGATIVE rate. Proportion of non-cases correctly classified by the test
What does a high specificity ensure?
most people without the trait/condition are correctly ruled out (low false positives)
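The two rates above can be computed directly from a screener's confusion counts. A minimal sketch with made-up counts (the numbers are illustrative, not from these cards):

```python
# Sensitivity and specificity from hypothetical screener counts.
# tp/fn/tn/fp values below are invented for illustration.

def sensitivity(tp, fn):
    """True positive rate: proportion of actual cases the test detects."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True negative rate: proportion of non-cases the test rules out."""
    return tn / (tn + fp)

# Example: 90 of 100 true cases flagged, 85 of 100 non-cases cleared.
print(sensitivity(tp=90, fn=10))   # 0.9  -> low false-negative rate
print(specificity(tn=85, fp=15))   # 0.85 -> low false-positive rate
```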
define reliability
consistency and stability of a measurement tool over time, across items, and between raters
when is a test reliable?
when it produces the same results under consistent conditions
what is the rule of reliability and validity?
An instrument can’t be valid unless it’s reliable. An instrument can be reliable and not valid
Why is reliability important?
ensures psychological tests measure constructs consistently
reduces measurement error, improving the accuracy of research findings and assessments
necessary for validity - a test cannot be valid if it is not reliable
What is external reliability?
assesses the degree to which a measure varies from one use to another
What is internal reliability?
assesses the degree to which the individual items within a measure are consistent with one another
What are the different types of reliability?
Test-retest reliability
Inter-rater reliability
Internal consistency
Parallel-forms reliability
define ‘test-retest’ reliability
the stability (consistency) of scores over time.
measured by giving the same test to the same group at different time points and computing the correlation
define inter-rater reliability
consistency of scores across different raters/observers
measured by assessing the degree of agreement between multiple raters
define internal consistency
the degree to which items on a test measure the same construct
measured using Cronbach’s alpha to assess how well test items correlate with one another
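Cronbach's alpha can be sketched in a few lines: alpha = (k/(k−1)) × (1 − Σ item variances / variance of total scores), where k is the number of items. The data layout below is a hypothetical example, not from the cards:

```python
# Minimal sketch of Cronbach's alpha (population variances).
# `items` is a list of per-item score lists, one score per respondent.

def cronbach_alpha(items):
    k = len(items)                       # number of items
    n = len(items[0])                    # number of respondents

    def var(xs):                         # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    # Total score per respondent across all items.
    totals = [sum(item[i] for item in items) for i in range(n)]
    item_var_sum = sum(var(item) for item in items)
    return (k / (k - 1)) * (1 - item_var_sum / var(totals))

# Three perfectly parallel items -> alpha of exactly 1.0.
print(cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]]))
```

In practice a library routine (e.g. from a stats package) would be used; this sketch just shows how the item and total-score variances combine.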
define ‘parallel-forms reliability’
consistency of 2 equivalent versions of a test
measured by correlating scores from 2 different forms of a test measuring the same construct
what should a good test-retest reliability correlation be?
.75 - .80
What does the correlation depend on?
The length of time between administrations of the test, especially with children (developmental stage), and the type of construct measured: stable (e.g. IQ) vs malleable (e.g. mood)
which ways are there to calculate internal reliability?
Cronbach’s alpha, split-half reliability, and Kuder-Richardson; the one you choose depends on the types of items you have and the format of responses
Split half reliability
split the test in half (either randomly or by assigning all odd items to one set and all even items to the other), then calculate the correlation between scores on the 2 halves of the test. Higher correlation = better internal reliability
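The odd/even split described above can be sketched with a Pearson correlation between half-scores. The response data here are hypothetical:

```python
# Sketch of split-half reliability: odd vs even items, Pearson r.
# `responses` is a list of per-person item-score lists (made-up data).

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def split_half(responses):
    odd = [sum(r[0::2]) for r in responses]   # items 1, 3, 5, ...
    even = [sum(r[1::2]) for r in responses]  # items 2, 4, 6, ...
    return pearson(odd, even)

# Halves move together perfectly here, so r = 1.0.
print(split_half([[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3]]))
```

Note this estimates the reliability of a half-length test; a correction (such as Spearman-Brown) is usually applied to estimate full-test reliability.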
What are the 5 factors affecting reliability?
test length - longer tests = more reliable
item quality - poorly written or ambiguous questions reduce reliability
testing conditions - environmental factors
score variability
participant variables
define validity
the extent to which a test measures what it is intended to measure. ensures test scores are meaningful and useful for decision making
Why is validity important?
ensures accurate interpretation of test results
increases confidence in using test scores for research, diagnosis and education
essential for fairness - valid tests reduce bias in psychological and educational assessment