Mean (M/x̄)
The sum of scores in a distribution divided by the number of scores in the distribution.
Central tendency
Companion statistic: standard deviation, which shows how far scores vary from the average.
Median (Mdn)
The midpoint of a distribution: 50% of the scores fall above it and 50% fall below it.
With an odd number of scores, the median is the middle score; with an even number, it is the average of the two middle scores.
Mode (Mo)
The number that occurs most frequently in a distribution of score/numbers.
modal score
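The three measures of central tendency above can be sketched with Python's standard statistics module (made-up scores, not from the source):

```python
import statistics

scores = [2, 3, 3, 5, 7, 8, 8, 8, 10]  # made-up distribution of 9 scores

mean = statistics.mean(scores)      # sum of scores / number of scores -> 6
median = statistics.median(scores)  # middle (5th) score -> 7
mode = statistics.mode(scores)      # most frequent score -> 8
```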
Interquartile range (IQR)
A measure of statistical dispersion equal to the difference between the 3rd and 1st quartiles (IQR = Q3 − Q1).
Q1 = value below which the lowest 25% of the data fall
Q2 = the median; 50% of the data fall below it
Q3 = value below which 75% of the data fall (the highest 25% lie above it)
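A minimal sketch of the quartile cut points and IQR using Python's statistics.quantiles, with made-up data (note that different quantile methods can give slightly different cut points):

```python
import statistics

data = [1, 2, 3, 4, 5, 6, 7]  # made-up sorted data

# quantiles(..., n=4) returns the three cut points Q1, Q2, Q3
q1, q2, q3 = statistics.quantiles(data, n=4)

iqr = q3 - q1  # IQR = Q3 - Q1
```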
Range (Ra)
The difference between the highest and lowest scores in a distribution; measure of variability
Standard deviation (SD)
The most stable measure of variability; it takes into account each and every score in the distribution. It expresses how far individual scores vary, in standard unit lengths, from the distribution's midpoint of 0.
In a normal distribution, 95% of the area lies within 1.96 standard deviations of the mean.
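A short sketch of standard deviation (and its square, the variance) with Python's statistics module, using made-up scores:

```python
import statistics

scores = [2, 4, 4, 4, 5, 5, 7, 9]  # made-up distribution, mean = 5

# Population SD: square root of the mean squared deviation from the mean
population_sd = statistics.pstdev(scores)  # divides by n -> 2.0
# Sample SD: the usual estimate from a sample (divides by n - 1)
sample_sd = statistics.stdev(scores)
# Variance is simply the SD squared
population_variance = statistics.pvariance(scores)  # -> 4.0
```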
Variance (SD²)
A measure of the dispersion of a set of data points around their mean value; the mathematical expectation of the squared deviations from the mean.
Analysis of Covariance (ANCOVA)
A statistical technique for equating groups on one or more variables when testing for statistical significance using the F-test statistic.
Analysis of Variance (ANOVA)
A statistical technique for determining the statistical significance of differences among means; it can be used with two or more groups and uses the F-test statistic
Autoregressive integrated moving average (ARIMA)
The Box-Jenkins approach to time series analysis. Tests for changes in data patterns pre- and post-intervention within the context of analyzing the outcomes of a time series design.
Binomial Test
An exact test of the statistical significance of deviations from a theoretically expected distribution of observations into two categories.
Chi-square (χ²)
A nonparametric test of statistical significance appropriate when the data are in the form of frequency counts; it compares the frequencies actually observed in a study with the expected frequencies to see if they are significantly different.
Cochran's Q
Used to evaluate the relationship between two variables measured on a nominal scale; one of the variables may even be dichotomous, consisting of only two possible values.
Coefficient of Determination (r²)
The square of the correlation coefficient; it indicates the strength of the relationship as the proportion of variance in one variable potentially explained by the other.
Cohen's d
A standardized way of measuring the effect size of difference by comparing two means by a simple math formula.
can be used to accompany the reporting of a t-test or ANOVA result and is often used in meta-analysis.
The conventional benchmark scores for the magnitude of effect sizes
small d= 0.2
medium d= 0.5
large d= 0.8
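A hedged sketch of computing Cohen's d with a pooled standard deviation (the function name and data are illustrative, not from the source):

```python
import statistics

def cohens_d(group1, group2):
    """Cohen's d: mean difference divided by the pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    s1, s2 = statistics.variance(group1), statistics.variance(group2)
    pooled_sd = (((n1 - 1) * s1 + (n2 - 1) * s2) / (n1 + n2 - 2)) ** 0.5
    return (statistics.mean(group1) - statistics.mean(group2)) / pooled_sd

treatment = [5, 6, 7, 8, 9]  # made-up scores
control = [3, 4, 5, 6, 7]
d = cohens_d(treatment, control)  # about 1.26: large by the benchmarks above
```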
Cohen's kappa (κ)
A statistical measure of interrater agreement for qualitative (categorical) items.
Ranges from −1 to 1.
confidence interval (CI)
Quantifies the uncertainty in a measurement. It is usually reported as the 95% CI: the range of values within which one can be 95% certain that the true value for the whole population lies.
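A minimal sketch of a 95% CI for a mean via the normal approximation (made-up measurements; the 1.96 multiplier is the standard normal critical value):

```python
import statistics

sample = [12, 15, 14, 10, 13, 15, 11, 14]  # made-up measurements
n = len(sample)
mean = statistics.mean(sample)             # 13
sem = statistics.stdev(sample) / n ** 0.5  # standard error of the mean

# 95% CI: mean +/- 1.96 * SEM
ci_low, ci_high = mean - 1.96 * sem, mean + 1.96 * sem
```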
correlation coefficient (r)
A decimal number between −1 and +1 that indicates the degree to which two quantitative variables are related. The most commonly used is the Pearson product-moment correlation coefficient, or just the Pearson coefficient.
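A hedged sketch of the Pearson coefficient and its square, the coefficient of determination (function and data are illustrative):

```python
def pearson_r(x, y):
    """Pearson product-moment correlation coefficient of two sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

hours = [1, 2, 3, 4, 5]       # made-up study hours
score = [52, 55, 61, 64, 68]  # made-up test scores
r = pearson_r(hours, score)   # close to +1: strong positive relationship
r_squared = r ** 2            # coefficient of determination
```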
Cronbach's alpha coefficient
A coefficient of consistency that measures how well a set of variables or items measures a single unidimensional latent construct in a scale or inventory.
high ≥ .9
medium .7 to .89
low .55 to .69
Cumulative frequency Distribution
A graphic depiction of the running (cumulative) total of scores falling at or below each value in a sample
Dependent t-test
A data analysis procedure that assesses whether the means of two related groups are statistically different from each other.
Also called the paired-samples t-test.
Effect size (θ)
Any measure of the strength of a relationship between two variables.
Used to assess comparisons between correlations, percentages, mean differences, and probabilities.
Eta (η)
an index that indicates the degree of a curvilinear relationship
F-test (F)
a parametric statistical test of equality of the means of two or more samples
it compares the means and variances between and within groups over time
Factor analysis
a statistical method for reducing a set of variables to a smaller number of factors or basic components in a scale or instrument being analyzed
Fisher's exact test
A nonparametric statistical significance test used in the analysis of contingency tables where sample sizes are small. The test is useful for categorical data that result from classifying objects in two different ways.
Tests the significance of the association between the two kinds of classification.
Friedman two-way analysis of variance
a nonparametric inferential statistic used to compare two or more groups by ranks that are not independent
G²
This is a more conservative goodness-of-fit statistic than χ² and is used when comparing hierarchical models in a categorical contingency table.
Independent t-test
A statistical procedure for comparing measurements of mean scores in two different groups or samples
independent samples t-test
Kendall's tau (τ)
A nonparametric statistic used to measure the degree of correspondence between two rankings and to assess the significance of the correspondence
Kolmogorov-Smirnov (K-S) test
A nonparametric goodness-of-fit test used to decide whether a sample comes from a population with a specific distribution. The test is based on the empirical cumulative distribution function (ECDF).
Kruskal-Wallis one-way analysis of variance
A nonparametric inferential statistic used to compare two or more independent groups for statistical significance of differences.
Mann-Whitney U-test (U)
A nonparametric inferential statistic used to determine whether two uncorrelated groups differ significantly
McNemar's test
A nonparametric method used on nominal data to determine whether the row and column marginal frequencies are equal
Median test
A nonparametric test of the null hypothesis that the medians of the populations from which two samples are drawn are identical.
Multiple correlation (R)
A numerical index describing the relationship between predicted and actual scores using multiple regression. The correlation between a criterion and the best combination of predictors.
Multivariate analysis of covariance (MANCOVA)
An extension of ANOVA that incorporates two or more dependent variables in the same analysis. It is an extension of MANOVA where artificial dependent variables (DVs) are initially adjusted for differences in one or more covariates. It computes the multivariate F statistic
Multivariate analysis of variance (MANOVA)
It is an ANOVA with several dependent variables
Newman-Keuls test
A type of post hoc (a posteriori) multiple-comparison test that makes precise comparisons of group means after ANOVA has rejected the null hypothesis.
One-way analysis of variance (ANOVA)
An extension of the independent-groups t-test for more than two groups. It computes the differences in means and compares variability both between and within groups. Its parametric test statistic is the F-test.
Pearson Correlation Coefficient (r)
This is a measure of the correlation or linear relationship between two variables x and y giving a value between +1 and -1 inclusive. It is widely used in the sciences as a measure of the strength of linear dependence between two variables
pooled point estimate
An approximation of a point, usually a mean or variance, that combines information from two or more independent samples believed to have the same characteristics.
used to assess the effects of treatment samples versus comparative samples
post hoc test
Used at the second stage of an analysis of variance (ANOVA) or multivariate analysis of variance (MANOVA) if the null hypothesis is rejected.
Runs test
Used where measurements are made according to some well-defined ordering, in either time or space. It addresses whether the average value of the measurement differs at different points in the sequence.
Siegel-Tukey test
A nonparametric test, named after Sidney Siegel and John Tukey, which tests for differences in scale between two groups. Data measured must be at least ordinal.
Sign test
a test that can be used whenever an experiment is conducted to compare a treatment with a control on a number of matched pairs, provided the two treatments are assigned to the members of each pair at random
Spearman's rank order correlation (ρ)
A nonparametric test used to measure the relationship between two rank ordered scales. Data are in ordinal form
Standard error of mean (SEM)
An estimate of the amount by which an obtained mean may be expected to differ by chance from the true mean. It is an indication of how well the mean of a sample estimates the mean of a population
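The SEM is the sample SD divided by the square root of n; a minimal sketch with made-up data:

```python
import statistics

sample = [4, 7, 6, 5, 8, 6]  # made-up sample, mean = 6
n = len(sample)

# SEM shrinks as n grows, so larger samples estimate
# the population mean more precisely.
sem = statistics.stdev(sample) / n ** 0.5
```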
Statistical power
The capability of a test to detect a significant effect, or how often a correct interpretation about the effect could be reached if it were possible to repeat the test many times.
Student-Newman-Keuls (SNK)
A post hoc test performed after ANOVA. It is used to analyze the differences found after the F-test is found to be significant
to locate where differences truly occur between means
Student t-test
Any statistical hypothesis test in which the test statistic follows a Student's t distribution if the null hypothesis is true.
a t-test for paired or independent samples
t-distribution
a statistical distribution describing the means of samples taken from a population with an unknown variance
T-score
A standard score derived from a z-score by multiplying the z-score by 10 and adding 50. It is useful in comparing various test scores to each other, as it is a standard metric that reflects the cumulative frequency distribution of the raw scores.
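The z-to-T conversion above (T = 10z + 50) can be sketched directly:

```python
def t_score(z):
    """Convert a z-score to a T-score: T = 10z + 50."""
    return 10 * z + 50

t_score(0)     # a score at the mean -> T = 50
t_score(1.5)   # 1.5 SD above the mean -> T = 65
t_score(-2.0)  # 2 SD below the mean -> T = 30
```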
T-test for correlated means
A parametric test of statistical significance used to determine whether there is a statistically significant difference between the means of two matched or nonindependent samples. Often used for pre/post comparisons.
T-test for correlated proportions
A parametric test of statistical significance used to determine whether there is a statistically significant difference between two proportions based on the same sample or otherwise nonindependent groups.
T-test for independent means
A parametric test of significance used to determine whether there is a statistically significant difference between the means of two independent samples
T-test for independent proportions
A parametric test of statistical significance used to determine whether there is a statistically significant difference between two independent proportions.
Tukey's test of significance
A single-step multiple-comparison procedure and statistical test generally used in conjunction with an ANOVA to find which means are significantly different from one another. Named after John Tukey, it compares all possible pairs of means and is based on the studentized range distribution (q).
Wald-Wolfowitz test
A nonparametric statistical test used to test the hypothesis that a series of numbers is random.
Wilcoxon signed-rank test (W+)
A nonparametric statistical hypothesis test for the case of two related samples or repeated measurements on a single sample.
Used as an alternative to the paired Student's t-test when the population cannot be assumed to be normally distributed.
Wilks’s lambda
A general test statistic used in multivariate tests of mean differences among two or more groups.
A numerical index calculated when carrying out MANOVA or MANCOVA.
Z-scores
A score expressed in units of standard deviations from the mean
Standard score
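A minimal sketch of standardizing raw scores as z-scores (made-up data):

```python
import statistics

scores = [60, 70, 80, 90, 100]  # made-up raw scores
mean = statistics.mean(scores)  # 80
sd = statistics.pstdev(scores)  # population SD

def z_score(x):
    """How many standard deviations x lies from the mean."""
    return (x - mean) / sd

z_score(80)  # a score at the mean -> 0.0
```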
Z-test
A test of any of a number of hypotheses in inferential statistics that has validity if sample sizes are sufficiently large and the underlying data are normally distributed.