Inferential statistics
Statistical tests that infer whether the results of a study are significant
Significant
When the results of a study are thought to be due to a real relationship between the variables that are being tested rather than due to chance
Statistical test
A mathematical formula used to determine whether the results of a study are significant (there are 8 different statistical tests, used for different designs and levels of measurement)
Probability
The likelihood that results are due to chance, known as the level of significance or p value
Steps to calculating a sign test
1. Add the signs to the sign of difference column (if there is neither a - nor a + difference, i.e. the score stays the same, the participant's data should be ignored and they should be excluded from the N value)
2. Find the least frequent sign; the frequency of the least frequent sign is the S value
3. Compare your calculated value (S) with the critical value (the table will be given to you)
4. To use the critical value table you need: the level of significance/p value (0.05 unless stated otherwise), N = the number of participants excluding those whose score stayed the same, and whether the test is one-tailed (directional) or two-tailed (non-directional) - see the sketch below
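A minimal sketch in Python of these steps, using made-up scores; the critical value shown is a hypothetical stand-in for the table entry you would actually look up. Note that for the sign test the result is significant when S is equal to or less than the critical value.

```python
# Minimal sketch of the sign test steps above. The scores and the
# critical value are made up for illustration; in practice the
# critical value is read from the table you are given.

before = [12, 15, 9, 14, 10, 11, 13, 8]   # scores in condition A
after  = [14, 15, 7, 16, 13, 10, 15, 11]  # scores in condition B

# Step 1: record the sign of each difference, ignoring ties
signs = []
for b, a in zip(before, after):
    if a > b:
        signs.append("+")
    elif a < b:
        signs.append("-")
    # if the score stays the same, the participant is excluded

n = len(signs)  # N excludes participants whose score stayed the same
s = min(signs.count("+"), signs.count("-"))  # Step 2: S = the less frequent sign's count

# Steps 3-4: compare S with the critical value for this N at p = 0.05,
# two-tailed (hypothetical value standing in for the real table entry)
critical_value = 0

if s <= critical_value:
    print(f"S = {s}, N = {n}: significant at p = 0.05")
else:
    print(f"S = {s}, N = {n}: not significant at p = 0.05")
```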
Calculated value
Result of your statistical test
Level of significance
Probability of results occurring due to chance
How to choose the right statistical test?
1. Is the research experimental (looking for a difference) or correlational (looking for a relationship)?
2. If the research is experimental, is it a related design (matched pairs or repeated measures) or an unrelated design (independent groups)?
3. What is the level of measurement of the data collected? (See the sketch below.)
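These three questions can be written out as a small decision function. A sketch assuming the usual table of the eight tests mentioned earlier (chi-squared, sign test, Mann-Whitney, Wilcoxon, Spearman's rho, Pearson's r, and the related and unrelated t-tests); check the mapping against the table you are taught.

```python
# Sketch of the decision steps above as a function. The test names and
# mapping are the usual set of eight; verify against your own table.

def choose_test(research: str, design: str, measurement: str) -> str:
    """research: 'difference' or 'relationship'
    design: 'related' (repeated measures/matched pairs) or
            'unrelated' (independent groups); ignored for correlations
    measurement: 'nominal', 'ordinal' or 'interval'"""
    if research == "relationship":
        return {"nominal": "chi-squared",
                "ordinal": "Spearman's rho",
                "interval": "Pearson's r"}[measurement]
    if design == "related":
        return {"nominal": "sign test",
                "ordinal": "Wilcoxon",
                "interval": "related t-test"}[measurement]
    return {"nominal": "chi-squared",
            "ordinal": "Mann-Whitney",
            "interval": "unrelated t-test"}[measurement]

print(choose_test("difference", "related", "nominal"))  # sign test
print(choose_test("relationship", "-", "ordinal"))      # Spearman's rho
```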
Type one error
We conclude our findings are significant when in fact they are not
Incorrectly accepting an alternative hypothesis
Incorrectly rejecting a NULL hypothesis
More common when the p value is higher/more lenient
Type two error
We assume our findings are not significant when in fact they are
Incorrectly accepting a NULL hypothesis
Incorrectly rejecting an alternative hypothesis
More common when the p value is lower/more stringent
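The link between the p value and the two error types can be shown with a quick simulation (an illustration, not from the cards): when the null hypothesis is actually true, p values are uniformly distributed, so the type 1 error rate simply matches whichever significance level you choose; making the level more stringent cuts type 1 errors but, for real effects, raises the chance of a type 2 error.

```python
# Illustration (assumption, not from the cards): under a true null
# hypothesis, p values are uniformly distributed, so a more lenient
# significance level directly produces more type 1 errors.
import random

trials = 100_000
for alpha in (0.10, 0.05, 0.01):
    # count how often a study with no real effect is wrongly called significant
    type1 = sum(random.random() < alpha for _ in range(trials))
    print(f"p value of {alpha}: type 1 error rate ~ {type1 / trials:.3f}")
```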
Why is 0.05 used as a p value?
Because it is a compromise that balances the risk of a type 1 error (a more lenient level) against the risk of a type 2 error (a more stringent level)
Why might we not use a p value of 0.05 in some cases?
When there could be serious consequences if the results are due to chance, e.g. in drug testing, we need to be more stringent and use a lower p value, because making a type 2 error is preferable to making a type 1 error