what is construct validity?
how well a measure relates to the theoretical concept you are studying.
this is the overarching type of validity.
what is the difference between a study and an experiment?
a study only counts as an experiment if something is manipulated - a defining feature of experiments is that they test the causal relationship between variables
what is internal validity?
how well the study isolates the effect of the manipulation, e.g., the strength of the causal inference and the extent to which confounding factors are ruled out
what are some factors that could lead to a decrease in internal validity?
participant selection
participant motivation
maturation
experimenter training
equipment decay/use
lack of random assignment
what is external validity?
the generalisation of findings - how well findings extend to conceptually similar people, settings, and circumstances
what may high external validity lead to e.g., in field experiments?
less control over confounding variables
what are the 3 types of external validity?
population validity (experimental sample to defined population)
ecological validity (experimental setting to real world/other settings)
multiple-treatment interference (sequence/carry-over effects between treatments)
in terms of population validity, what has been suggested to be the most used demographic in psych literature?
W.E.I.R.D. populations (Western, Educated, Industrialised, Rich, Democratic)
what makes a measure reliable?
if multiple measures taken are consistent across times, items and researchers
what is test-retest reliability?
consistent across times (test-retest correlation, Bland-Altman plot)
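A minimal sketch of both statistics in Python, using simulated scores (all numbers hypothetical): the Pearson correlation between two sessions, plus the Bland-Altman bias and 95% limits of agreement.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# hypothetical scores from the same 30 participants at two time points
time1 = rng.normal(50, 10, 30)
time2 = time1 + rng.normal(0, 5, 30)  # retest = first score plus measurement noise

# test-retest reliability: Pearson correlation between the two sessions
r, _ = stats.pearsonr(time1, time2)
print(f"test-retest r = {r:.2f}")

# Bland-Altman summary: mean difference (bias) and 95% limits of agreement
diff = time2 - time1
bias = diff.mean()
half_width = 1.96 * diff.std(ddof=1)
print(f"bias = {bias:.2f}, "
      f"limits of agreement = [{bias - half_width:.2f}, {bias + half_width:.2f}]")
```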
what is internal consistency?
consistent across items (split-half correlation, Cronbach's alpha)
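A minimal sketch of both indices on simulated questionnaire data (sample size and item count are made up); Cronbach's alpha is computed directly from its definition, k/(k-1) × (1 − sum of item variances / variance of the total score).

```python
import numpy as np

rng = np.random.default_rng(1)

# hypothetical questionnaire: 100 participants x 6 items measuring one construct
latent = rng.normal(0, 1, (100, 1))
items = latent + rng.normal(0, 1, (100, 6))  # each item = true score + noise

# split-half correlation: odd-numbered items vs even-numbered items
odd_total = items[:, ::2].sum(axis=1)
even_total = items[:, 1::2].sum(axis=1)
split_half_r = np.corrcoef(odd_total, even_total)[0, 1]

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total score)
k = items.shape[1]
alpha = (k / (k - 1)) * (1 - items.var(axis=0, ddof=1).sum()
                         / items.sum(axis=1).var(ddof=1))
print(f"split-half r = {split_half_r:.2f}, Cronbach's alpha = {alpha:.2f}")
```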
what is inter-rater reliability?
consistent across researchers (intraclass correlation, Cohen's kappa)
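A minimal sketch of Cohen's kappa for two raters coding the same observations (the codes here are invented): kappa = (observed agreement − chance agreement) / (1 − chance agreement).

```python
import numpy as np

# hypothetical categorical codes from two raters for the same 10 observations
rater_a = np.array(["yes", "yes", "no", "no", "yes", "no", "yes", "yes", "no", "yes"])
rater_b = np.array(["yes", "no", "no", "no", "yes", "no", "yes", "yes", "yes", "yes"])

labels = np.unique(np.concatenate([rater_a, rater_b]))
p_o = (rater_a == rater_b).mean()  # observed agreement

# chance agreement: product of each rater's marginal proportions, summed over labels
p_e = sum((rater_a == lab).mean() * (rater_b == lab).mean() for lab in labels)

kappa = (p_o - p_e) / (1 - p_e)
print(f"observed agreement = {p_o:.2f}, Cohen's kappa = {kappa:.2f}")
```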
what is the cycle for research practices and how they can be questionable?
generate and specify hypothesis - fail to control for bias
design study - low statistical power
conduct and collect data - poor quality control
analyse data and test hypothesis - p-hacking
interpret results - HARKing
publish and/or conduct next experiment - publication bias
what is p-hacking?
also known as data fishing, data dredging, or data snooping
the practice of manipulating data/analysis until statistically significant results are found, often by trying multiple tests or selectively reporting findings
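A small simulation of why this inflates false positives (assumed setup: two equal groups, ten outcome measures, no true effect anywhere): testing ten outcomes and reporting whichever comes out significant pushes the family-wise error rate from 5% to roughly 40%.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_sims, n_tests, n = 5000, 10, 30

false_positive = 0
for _ in range(n_sims):
    # 10 outcome measures, no true difference between the two groups
    a = rng.normal(0, 1, (n, n_tests))
    b = rng.normal(0, 1, (n, n_tests))
    pvals = stats.ttest_ind(a, b).pvalue  # one t-test per outcome column
    # "p-hacking": claim success if ANY of the 10 tests gives p < .05
    false_positive += (pvals < 0.05).any()

print(f"false-positive rate with 10 tests: {false_positive / n_sims:.2f}")  # ~0.40, not 0.05
```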
what is optional stopping?
data peeking - running analyses as you recruit participants and stopping data collection once the results look good
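A small simulation of optional stopping (assumed setup: peek after every 10 participants per group, up to 100, with no true effect): stopping as soon as p < .05 raises the Type I error rate well above the nominal 5%.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_sims, max_n, peek_every = 2000, 100, 10

stopped_significant = 0
for _ in range(n_sims):
    a = rng.normal(0, 1, max_n)  # two groups with no true difference
    b = rng.normal(0, 1, max_n)
    # peek at the data every 10 participants and stop as soon as p < .05
    for n in range(peek_every, max_n + 1, peek_every):
        if stats.ttest_ind(a[:n], b[:n]).pvalue < 0.05:
            stopped_significant += 1
            break

print(f"Type I error with peeking: {stopped_significant / n_sims:.2f}")  # well above .05
```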
what are some acceptable practices around looking at data before collection is complete?
assessing data quality
sequential analyses (planned interim analyses with adjusted significance thresholds)
what is meant by researcher degrees of freedom?
given the same research question, would all researchers do the same analysis?
same data, different conclusions
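A toy illustration of researcher degrees of freedom on one simulated dataset: four defensible analysis choices (correlation type crossed with an outlier-exclusion rule, both invented here) can return different p-values from the same data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# one hypothetical dataset, several defensible analysis choices
x = rng.normal(0, 1, 40)
y = 0.3 * x + rng.normal(0, 1, 40)
keep = np.abs(x) < 2  # one common choice: exclude "outliers" beyond 2 SD

results = {
    "pearson, all data": stats.pearsonr(x, y)[1],
    "spearman, all data": stats.spearmanr(x, y)[1],
    "pearson, outliers excluded": stats.pearsonr(x[keep], y[keep])[1],
    "spearman, outliers excluded": stats.spearmanr(x[keep], y[keep])[1],
}
for name, p in results.items():
    print(f"{name}: p = {p:.3f}")  # same data, potentially different conclusions
```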
what is meant by HARKing?
hypothesising after results are known
presenting post-hoc hypotheses as if they were a priori
what is the file drawer problem?
positive results more likely to be published than negative ones
many studies are not published, and cannot be used as background for later research
what are 2 approaches to avoid QRPs?
pre-registration - post a timestamped version of the methods and analysis plan online before conducting the study
registered reports - submit the introduction and method for peer review before data collection; the paper is accepted in principle, so publication does not depend on the results