What is parallel forms reliability, and how does it help minimize memory effects?
It compares results across different versions of a measure to reduce memory biases.
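A minimal sketch of how parallel forms reliability might be checked in practice, assuming two hypothetical score arrays from the same participants on two versions of a measure: the forms are correlated, and a high Pearson r suggests the versions are interchangeable.

```python
import numpy as np

# Hypothetical scores from the same participants on two versions of a measure
form_a = np.array([12, 15, 9, 20, 17, 14, 11, 18])
form_b = np.array([13, 14, 10, 19, 18, 13, 12, 17])

# Parallel forms reliability: correlate scores on Form A with scores on Form B.
# A high positive r suggests the two versions measure the construct equivalently.
r = np.corrcoef(form_a, form_b)[0, 1]
print(f"Parallel forms reliability (Pearson r) = {r:.2f}")
```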
define sample
a subset of individuals from the larger population (defined by sample statistics)
define population
entire collection of people that share a common characteristic (defined by population parameters)
what is multifactorial causation
when a phenomenon is determined by many interacting factors
how is concurrent validity used to test criterion validity
by comparing the new test with an established test administered at the same time
how is predictive validity used to measure criterion validity
whether the test predicts a future outcome on another variable
8 forms of validity
content validity
face validity
criterion validity
concurrent
predictive
construct validity
convergent validity
discriminant validity
what is conceptual replication and how does it contribute to reliability
collecting new data with previously reported methods but a different sample (e.g. younger participants) to check that the same conclusions are reached
how does internal consistency contribute to reliability
determines whether all items (e.g. in a questionnaire) measure the same construct
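Internal consistency is often quantified with Cronbach's alpha; below is a minimal sketch using made-up questionnaire data (rows = respondents, columns = items).

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items score matrix."""
    k = items.shape[1]                         # number of items
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-item questionnaire answered by 6 respondents
scores = np.array([
    [4, 5, 4, 4, 5],
    [2, 2, 3, 2, 2],
    [5, 5, 4, 5, 5],
    [3, 3, 3, 2, 3],
    [4, 4, 5, 4, 4],
    [1, 2, 1, 2, 1],
])
print(f"Cronbach's alpha = {cronbach_alpha(scores):.2f}")
```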
why are order effects an issue with test-retest reliability
participants may remember and repeat answers from earlier tests
4 forms of reliability
test-retest reliability
inter-rater reliability
parallel forms reliability
internal consistency
define ecological validity
the extent to which results gathered in a lab will generalise to other settings in real life
how does history threaten internal validity
uncontrolled events that take place between testing occasions
4 threats to internal validity + solution
selection → random allocation
history → pre-test, post-test design
maturation → counter-balancing
instrumentation → ?
What distinguishes precision from accuracy in measurement?
Precision refers to exactness (consistency - reliability), while accuracy refers to correctness (truthfulness - validity).
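A small numeric illustration of the distinction, using invented measurements of a known true value: one instrument is precise but inaccurate (low spread, biased mean), the other accurate but imprecise.

```python
import numpy as np

true_value = 100.0

precise_but_inaccurate = np.array([90.1, 90.0, 89.9, 90.2, 90.0])    # tight spread, wrong centre
accurate_but_imprecise = np.array([95.0, 106.0, 98.0, 104.0, 97.0])  # right centre, wide spread

for name, m in [("precise/inaccurate", precise_but_inaccurate),
                ("accurate/imprecise", accurate_but_imprecise)]:
    bias = m.mean() - true_value   # accuracy: how far the mean is from the truth
    spread = m.std(ddof=1)         # precision: how consistent repeated measurements are
    print(f"{name}: bias = {bias:+.1f}, SD = {spread:.1f}")
```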
What is the primary aim of research design in relation to extraneous variables + 2 methods to achieve
To eliminate or at least control the influence of extraneous variables
random allocation
counterbalancing
How can random allocation help prevent extraneous variables from becoming confounding variables?
It spreads the influence of extraneous variables to prevent them from disproportionately affecting outcomes.
What does maturation refer to in the context of internal validity?
Intrinsic changes in the characteristics of participants between different test occasions.
In what way do confounding variables threaten the validity of experiments?
They are a threat to internal validity.
What is the impact of constant or systematic errors on experimental results?
They bias the results; push measurements in the same direction
What are the methods to counteract reactivity in psychological research?
Blind procedures, either single or double.
How can uncontrolled events during testing affect the internal validity of a study?
Uncontrolled events can influence results between testing occasions.
What effects can confounding variables have on the measurement of dependent variables?
They can cause the appearance of effects where none exist or mask effects that do exist.
How does inter-rater reliability contribute to reliability?
It assesses agreement between different raters or observers measuring the same variable.
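One common way to quantify agreement between raters making categorical judgements is Cohen's kappa; the sketch below hand-computes it for two hypothetical raters coding the same observations.

```python
from collections import Counter

# Hypothetical categorical codes assigned by two raters to the same 10 observations
rater_1 = ["A", "A", "B", "B", "A", "C", "C", "A", "B", "A"]
rater_2 = ["A", "A", "B", "A", "A", "C", "B", "A", "B", "A"]

n = len(rater_1)
observed = sum(a == b for a, b in zip(rater_1, rater_2)) / n   # observed agreement

# Expected agreement by chance, from each rater's marginal category proportions
c1, c2 = Counter(rater_1), Counter(rater_2)
expected = sum((c1[k] / n) * (c2[k] / n) for k in set(rater_1) | set(rater_2))

kappa = (observed - expected) / (1 - expected)
print(f"Observed agreement = {observed:.2f}, Cohen's kappa = {kappa:.2f}")
```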
What are confounding variables and how do they differ from extraneous variables?
They affect one level of the IV more than the other levels, adding constant (systematic) error.
What is construct validity
It determines if the construct being measured exists and is supported by research evidence.
What is reactivity, and how can it threaten internal validity?
Reactivity is participants' awareness of being observed, which may alter their behaviour.
How do extraneous variables add error to the measurement of the dependent variable?
They can influence the measurement of the dependent variable other than the manipulation of interest.
What are extraneous variables and why are they undesirable in experiments?
They are undesirable variables that add error to measurement of the DV.
How do changes in measurement instrumentation impact a study's internal validity?
Changes in sensitivity or reliability of instruments can affect measurement consistency.
How can internal consistency be assessed in a questionnaire?
Using split-half reliability, where the items are divided into two halves within a single administration and scores on the two halves are correlated.
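A minimal sketch of split-half reliability on invented questionnaire data: items are split into odd and even halves, the half-scores are correlated, and the Spearman-Brown formula adjusts for the shortened test length.

```python
import numpy as np

# Hypothetical respondents-by-items matrix (6 respondents, 8 items)
scores = np.array([
    [4, 5, 4, 4, 5, 4, 5, 4],
    [2, 2, 3, 2, 2, 3, 2, 2],
    [5, 5, 4, 5, 5, 4, 5, 5],
    [3, 3, 3, 2, 3, 3, 2, 3],
    [4, 4, 5, 4, 4, 5, 4, 4],
    [1, 2, 1, 2, 1, 1, 2, 1],
])

# Split items into odd- and even-numbered halves and total each half
half_1 = scores[:, ::2].sum(axis=1)
half_2 = scores[:, 1::2].sum(axis=1)

r_half = np.corrcoef(half_1, half_2)[0, 1]   # correlation between the two halves
r_full = (2 * r_half) / (1 + r_half)         # Spearman-Brown correction to full length
print(f"Split-half r = {r_half:.2f}, Spearman-Brown corrected = {r_full:.2f}")
```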
How do reliability and validity differ in psychological tests?
Reliability is the consistency of results, whereas validity is the accuracy of measuring the intended construct.
What does content validity assess in a psychological test?
Whether the test measures the construct fully.
what is required to establish true causation
both necessity (the effect Y does not occur without cause X) and sufficiency (X alone is enough to produce Y) must be satisfied.
How do random errors affect the results of an experiment?
They obscure the results; tend to push the measurements up and down around the true value
Name two types of related artefacts that can result from reactivity.
Subject related e.g. demand characteristics
Experimenter related e.g. experimenter bias.
What are the two broad categories of measurement error?
Random error and constant (systematic) error.
How can convergent and discriminant validity be used to assess construct validity?
Convergent validity shows correlation with related constructs; discriminant validity shows no correlation with unrelated constructs.
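A compact sketch of that logic with invented data: a new anxiety measure should correlate strongly with an existing anxiety measure (convergent) and only weakly with an unrelated variable such as shoe size (discriminant). All variable names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

existing_anxiety = rng.normal(size=100)
new_anxiety = existing_anxiety + rng.normal(scale=0.5, size=100)   # related construct
shoe_size = rng.normal(size=100)                                   # unrelated construct

convergent = np.corrcoef(new_anxiety, existing_anxiety)[0, 1]   # expect high
discriminant = np.corrcoef(new_anxiety, shoe_size)[0, 1]        # expect near zero
print(f"Convergent r = {convergent:.2f}, discriminant r = {discriminant:.2f}")
```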
What is random allocation, and how does it help control extraneous variables?
It's a method that results in an even addition of error variance across levels of the independent variable.
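A minimal sketch of random allocation, assuming a hypothetical list of participant IDs: shuffling before splitting gives each participant an equal chance of landing in each condition, so extraneous participant characteristics are spread roughly evenly across levels of the IV.

```python
import random

participants = [f"P{i:02d}" for i in range(1, 21)]   # hypothetical participant IDs
conditions = ["control", "treatment"]

random.shuffle(participants)                          # random order removes selection bias
allocation = {cond: participants[i::len(conditions)]  # deal participants out evenly
              for i, cond in enumerate(conditions)}

for cond, group in allocation.items():
    print(cond, group)
```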
What types of threats to internal validity arise from selection of participants?
Bias resulting from the selection or assignment of participants to different levels of the IV due to inherent characteristics
What is criterion validity
whether the measure correlates with other established measures (criteria) of the same construct.
What is test-retest reliability, and why is it important for constructs?
It measures the consistency of results over time, indicating that the construct being measured is stable.
How is face validity defined?
It refers to whether a test appears to measure what it claims to
4 reasons why we sample a population
time
money
access
sufficiency
6 sampling methods
random sample
systematic
stratified
cluster
opportunity/convenience
snowball
“gold standard” method of sampling
random
briefly define systematic sampling
drawing from the population at fixed intervals
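A minimal sketch of systematic sampling from a hypothetical sampling frame: pick a random starting point, then take every k-th member.

```python
import random

population = [f"person_{i}" for i in range(1, 101)]   # hypothetical sampling frame
sample_size = 10
k = len(population) // sample_size                    # fixed sampling interval

start = random.randrange(k)          # random starting point within the first interval
systematic_sample = population[start::k]
print(systematic_sample)
```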
briefly define stratified sampling
dividing the population into strata, then randomly sampling within each stratum in proportion to its share of the population
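A minimal sketch of proportional stratified sampling with invented strata: sample at random within each stratum, in proportion to its share of the population.

```python
import random

# Hypothetical population split into strata (e.g. year of study)
strata = {
    "first_year": [f"1st_{i}" for i in range(60)],
    "second_year": [f"2nd_{i}" for i in range(30)],
    "third_year": [f"3rd_{i}" for i in range(10)],
}
population_size = sum(len(members) for members in strata.values())
sample_size = 20

stratified_sample = []
for name, members in strata.items():
    n = round(sample_size * len(members) / population_size)   # proportional allocation
    stratified_sample.extend(random.sample(members, n))       # random sampling within the stratum

print(len(stratified_sample), stratified_sample[:5])
```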
define cluster sample
sampling of naturally occurring groups
define external validity
ability to generalise our results
2 types of external validity
population and ecological validity
3 factors to consider when determining sample size
subjects design
experimental design
response rate