what is null hypothesis significance testing (NHST)
a way of checking whether a statistical relationship in a sample reflects a real relationship in the population or is just due to chance – i.e. could the result have come about simply because we happened to select a weird sample?
how do we do NHST
we assume the H0 is true in the population -> how likely is the result of the sample
what is the P-value
the probability of obtaining data at least as extreme as those observed, assuming the null hypothesis is true (e.g. if alcohol doesn’t affect reaction times, what’s the probability that people who drank alcoholic beer would be 100 ms or more slower on average than those who drank non-alcoholic beer?)
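To make this concrete, here is a minimal sketch in Python, using made-up reaction times (not data from the beer study above) and an independent-samples t-test from SciPy as one common way of obtaining a p-value:

```python
# Illustrative sketch only: the reaction-time numbers below are made up.
from scipy import stats

# Reaction times (ms) for two hypothetical groups of participants
alcoholic = [520, 545, 560, 530, 575, 550, 540, 565]      # drank alcoholic beer
non_alcoholic = [450, 470, 440, 465, 455, 480, 445, 460]  # drank non-alcoholic beer

# Independent-samples t-test: how likely is a difference at least this big
# if the null hypothesis (no effect of alcohol) were true?
result = stats.ttest_ind(alcoholic, non_alcoholic)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")

# Decision rule from the cards: reject H0 if p < .05
if result.pvalue < 0.05:
    print("p < .05: statistically significant, reject H0 in favour of H1")
else:
    print("p >= .05: not significant, do not reject H0")
```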
what do we do if the probability is small
reject null hypothesis
what do we do if probability is large
don’t reject null hypothesis
what does it mean if we get a result far away from 0
a result that far from 0 would be very unlikely to occur by chance if the null hypothesis were true, so it suggests a real effect
How do we decide whether to reject the null hypothesis
use .05 or 5% as cut off
what does it mean if p < .05
it is statistically significant, reject H0 in favour of H1
what does it mean if p > .05
the result is not statistically significant, so we do not reject H0 – the data are consistent with the null hypothesis
what happens if we take a large sample from a supposedly white swan population and there’s one black swan in the sample
because we found one black swan in the sample, black swans must exist in the population too – so we reject the idea that the population is all white
what do we do if the null hypothesis is false
reject the null hypothesis – the correct decision, because a real effect exists and is found
what do we do if the null hypothesis is true
do not reject the null hypothesis – the correct decision, because no effect exists and none is found
what is a type one error
false positive – rejecting the null hypothesis when it is actually true (an effect is ‘found’ that doesn’t exist)
what is a type 2 error
false negative – failing to reject the null hypothesis when it is actually false (a real effect is missed)
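As a rough illustration of how type 1 errors arise, this sketch (all numbers arbitrary) simulates many experiments in which the null hypothesis is really true and counts how often a t-test comes out ‘significant’ purely by chance – the rate should sit close to the 5% cut-off:

```python
# Sketch: simulate experiments where H0 is really true (both groups come from
# the same population) and count how often p < .05 by chance alone.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_experiments, n_per_group = 10_000, 20

false_positives = 0
for _ in range(n_experiments):
    a = rng.normal(0, 1, n_per_group)   # group 1, no real effect
    b = rng.normal(0, 1, n_per_group)   # group 2, same population
    if stats.ttest_ind(a, b).pvalue < 0.05:
        false_positives += 1            # type 1 error: "effect found" where none exists

print(f"False positive rate: {false_positives / n_experiments:.3f}  (expected ~0.05)")
```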
what is a familywise error
the probability of making at least one false positive (type 1 error) across a set of tests – it increases the more tests you run simultaneously
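Assuming the tests are independent, a standard way to quantify this is the formula FWER = 1 − (1 − α)^m for m tests at level α; a quick sketch:

```python
# Familywise error rate for m independent tests at alpha = .05:
# P(at least one false positive) = 1 - (1 - alpha)^m
alpha = 0.05
for m in (1, 5, 10, 20):
    fwer = 1 - (1 - alpha) ** m
    print(f"{m:2d} tests -> familywise error rate = {fwer:.2f}")
# 1 test -> 0.05, 5 tests -> 0.23, 10 tests -> 0.40, 20 tests -> 0.64
```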
examples of questionable research practices (3)
p-hacking, logical fallacies, and low statistical power
what is p-hacking
failing to report all of a study’s dependent measures or all of a study’s conditions – cherry-picking what you present
what is wrong with rounding off a p-value (e.g. reporting p = .054 as if it were under .05) and calling it ‘marginally significant’
the 5% cut-off is arbitrary – in principle you could use 10% to show what you want to show – but you should choose the threshold in advance and stick to the 5%
what is the file drawer problem associated with p hacking
Selectively reporting studies that ‘worked’
what other two things fall under p-hacking
Deciding whether to collect more data after looking to see whether the results were significant
Stopping collecting data earlier than planned because one found the result that one had been looking for – interferes with integrity
what falls under logical fallacy (2)
HARKing and sharp-shooter fallacy
what is HARKing
Hypothesising After the Results are Known – in a paper, reporting an unexpected finding as having been predicted from the start, i.e. changing the hypothesis after the results have gone the opposite way from expected
what is sharp-shooter fallacy
when someone cherry-picks specific data points or patterns after the fact and then claims that those patterns were meaningful or significant
which error is low statistical power related to
type 2 error
what is low statistical power
a study with too few participants has a low probability of detecting a real effect – high participant numbers are good, while low numbers may lead to problems (missed effects)
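A rough simulation of why this matters (the effect size and sample sizes below are arbitrary illustrations, not values from the lecture): with a real effect present, small samples miss it far more often, i.e. they produce more type 2 errors:

```python
# Sketch: when a real effect exists, small samples often fail to detect it.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
effect_size = 0.5  # true difference of 0.5 SD between the groups (arbitrary)

for n in (10, 30, 100):
    detected = 0
    for _ in range(2_000):
        a = rng.normal(0, 1, n)            # control group
        b = rng.normal(effect_size, 1, n)  # group with a real effect
        if stats.ttest_ind(a, b).pvalue < 0.05:
            detected += 1
    print(f"n = {n:3d} per group -> power ~ {detected / 2_000:.2f}")
# Larger samples detect the real effect far more often.
```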
why do we need high numbers in samples (3)
1. Different studies need different sample sizes depending on the size of the effect (e.g. male/female height difference vs semantic priming) – the problem is using sample sizes that are too small for the effect
2. Best practice – determine the sample size before beginning data collection
3. Law of large numbers: the larger the sample, the more precise the estimate
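A quick sketch of the law-of-large-numbers point (the population mean and SD below are arbitrary): as the sample size grows, sample means cluster more and more tightly around the true mean:

```python
# Sketch: the larger the sample, the more precise the estimate of the mean.
import numpy as np

rng = np.random.default_rng(3)
true_mean, sd = 100, 15   # arbitrary IQ-like score

for n in (10, 100, 1_000, 10_000):
    # Draw many samples of size n and see how much the sample means vary
    sample_means = [rng.normal(true_mean, sd, n).mean() for _ in range(1_000)]
    print(f"n = {n:5d}: spread of sample means (SD) = {np.std(sample_means):.2f}")
# The spread shrinks roughly as 1/sqrt(n): bigger samples, more precise estimates.
```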
some questions to ask when conducting research (3)
1. What do I predict will happen?
2. How many people should I test?
3. Who should I test?
what is pre-registration
write down what you plan to do and what you predict before data collection starts, so it is on record in advance what you intended to do
what are registered reports
traditionally, studies are submitted for publication after the results have been analysed, which leads to publication bias; in a registered report, the study plan is reviewed and accepted before the data are collected
what’s the importance of plotting data
the way the data are plotted can influence how the story is told; how you choose to frame the plot (e.g. what range you zoom in on) can give a misleading impression of what the data actually show
what is the thing about bar graphs
if you have continuous variables and you plot only the mean, maybe it isn’t presenting what you want to present – a bar graph hides the individual data points and their distribution, so it doesn’t give the full story
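A small sketch with made-up data showing the point: two groups with the same mean look identical in a bar graph of means but very different once the individual points are shown (group names and numbers are purely illustrative):

```python
# Sketch: the same made-up data as a bar of the means vs as the raw points.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(4)
group_a = rng.normal(50, 5, 40)                                           # tight around 50
group_b = np.concatenate([rng.normal(30, 5, 20), rng.normal(70, 5, 20)])  # two clusters, same mean

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))

# Left: bar graph of the means only -- the two groups look identical
ax1.bar(["A", "B"], [group_a.mean(), group_b.mean()])
ax1.set_title("Means only (bar graph)")

# Right: individual data points -- group B is clearly bimodal
ax2.scatter(np.zeros_like(group_a), group_a, alpha=0.5, label="A")
ax2.scatter(np.ones_like(group_b), group_b, alpha=0.5, label="B")
ax2.set_xticks([0, 1])
ax2.set_xticklabels(["A", "B"])
ax2.set_title("Individual data points")

plt.tight_layout()
plt.show()
```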