Purpose of independent samples t-tests (ISTT)
Compares the means of two independent groups to determine whether they differ significantly.
Characteristics of IV and DV in an ISTT
The independent variable is the grouping variable with two levels (the two groups); the dependent variable is the continuous outcome measured in each group.
Basic assumptions of the ISTT
The dependent variable is approximately normally distributed in each group, the samples are independent, and the variances of the two groups are equal (homogeneity of variance).
Stating the null hypothesis of the ISTT
There is no difference between the population means of the two groups (H0: μ1 = μ2).
Calculating the t statistics (tobs)
t_obs = (x̄1 − x̄2) / (s_p × √(1/n1 + 1/n2)), where s_p is the pooled standard deviation of the two groups.
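The t statistic above can be sketched in Python using only the standard library; the two groups here are made-up example data, not from the source.

```python
import math
from statistics import mean, variance

# Hypothetical scores for two independent groups (equal variances assumed)
group1 = [5.1, 6.3, 5.8, 7.0, 6.1]
group2 = [4.2, 5.0, 4.8, 5.5, 4.6]

n1, n2 = len(group1), len(group2)

# Pooled variance: weighted average of the two sample variances
sp2 = ((n1 - 1) * variance(group1) + (n2 - 1) * variance(group2)) / (n1 + n2 - 2)

# t_obs = (mean difference) / (pooled SD * sqrt(1/n1 + 1/n2))
t_obs = (mean(group1) - mean(group2)) / math.sqrt(sp2 * (1 / n1 + 1 / n2))

df = n1 + n2 - 2  # degrees of freedom for the independent samples t-test
```

Comparing t_obs against the critical t value at the chosen alpha level (with df = n1 + n2 − 2) then gives the hypothesis-test decision.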
Degrees of freedom
n1 + n2 − 2 for the independent samples t-test; n − 1 (number of pairs minus one) for the dependent samples t-test.
Levene's Test of Equality of Variance
Test used to assess whether the variances of two or more groups are equal; a significant result means equal variances cannot be assumed.
Effect size for an ISTT
The magnitude of the difference between groups, expressed in standard deviation units; Cohen's d = (x̄1 − x̄2) / s_p.
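Cohen's d follows directly from the same quantities as the t statistic; a minimal sketch with hypothetical data:

```python
import math
from statistics import mean, variance

# Hypothetical scores for two independent groups
group1 = [5.1, 6.3, 5.8, 7.0, 6.1]
group2 = [4.2, 5.0, 4.8, 5.5, 4.6]
n1, n2 = len(group1), len(group2)

# Pooled standard deviation
sp = math.sqrt(((n1 - 1) * variance(group1) + (n2 - 1) * variance(group2))
               / (n1 + n2 - 2))

# Cohen's d: mean difference in pooled-SD units
d = (mean(group1) - mean(group2)) / sp
```

By common convention, d around 0.2 is a small effect, 0.5 medium, and 0.8 or more large.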
Dependent Samples t-tests (DSTT)
Compares means from the same participants measured under two conditions or at two time points (paired scores).
Difference between a 'between subjects' and 'within subjects' design
Between subjects = different groups of participants in each condition; within subjects = the same participants tested in each condition.
Purpose of dependent samples t-tests (DSTT)
To determine whether there is a significant difference between two related (paired) sets of scores from the same participants.
Homogeneity of variance?
Not required, because both sets of scores come from the same participants, so the variances are expected to be comparable.
Basic assumptions of the DSTT
The differences between paired observations should be normally distributed.
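The dependent samples t-test works on the paired difference scores; a minimal sketch with made-up pre/post data (t_obs = mean difference divided by the standard error of the differences):

```python
import math
from statistics import mean, stdev

# Hypothetical pre/post scores from the same six participants
pre  = [10, 12, 9, 11, 13, 10]
post = [12, 14, 9, 13, 14, 12]

# Difference score for each participant
diffs = [b - a for a, b in zip(pre, post)]
n = len(diffs)

# t_obs = mean difference / (SD of differences / sqrt(n))
t_obs = mean(diffs) / (stdev(diffs) / math.sqrt(n))
df = n - 1  # degrees of freedom: number of pairs minus one
```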
Definition of validity
The extent to which a test measures what it claims to measure.
Definition of a construct
An abstract trait or concept (e.g., intelligence, anxiety) that cannot be observed directly and must be defined and measured through related observable indicators.
Basic purpose and types of validity
To ensure that a test accurately measures the intended construct; types include content, criterion, and construct validity.
Content vs criterion vs construct validity
Content validity assesses if the test covers the intended content, criterion validity compares test results with an external criterion, and construct validity evaluates if the test truly measures the theoretical construct.
Ecological vs. external validity
Ecological validity refers to the extent to which findings can be generalized to real-world settings, while external validity refers to the generalizability of findings beyond the study sample.
Assessing systematic bias
Evaluating whether a measurement consistently deviates from the true value in a particular direction.
The Pearson product moment correlation: interpretation
A measure of the strength and direction of the linear relationship between two continuous variables.
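Pearson's r can be sketched from its definition (covariance divided by the product of the deviations' magnitudes); the x/y values below are invented for illustration:

```python
import math
from statistics import mean

# Hypothetical paired measurements on two continuous variables
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 8.0, 9.8]

mx, my = mean(x), mean(y)

# r = sum of cross-products of deviations / sqrt(product of sums of squares)
num = sum((a - mx) * (b - my) for a, b in zip(x, y))
den = math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
r = num / den

r2 = r ** 2  # coefficient of determination: shared variance
```

r ranges from −1 (perfect negative linear relationship) through 0 (no linear relationship) to +1 (perfect positive linear relationship).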
The concept of error in measurements
difference between observed value and true value
Interpreting coefficient of determination
R² indicates the proportion of variance in one variable that is explained by (shared with) the other variable.
Definition of reliability
The consistency or repeatability of a measurement (dart-board analogy: a reliable test clusters its darts tightly, whether or not they hit the bullseye).
Factors impacting reliability
Test length, the magnitude of variability in the scores, and the testing conditions.
Observed scores vs true values
Observed scores are the actual measurements obtained, while true values are the scores that would be obtained without measurement error.
Reliability as a proportion
Reliability can be expressed as the ratio of true variance to total variance in the observed scores.
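This ratio can be shown with hypothetical variance components (the numbers are invented, not from the source):

```python
# Reliability as the proportion of observed-score variance
# that is true-score variance
var_true = 8.0   # hypothetical variance of true scores
var_error = 2.0  # hypothetical variance due to measurement error

# Total observed variance = true variance + error variance
reliability = var_true / (var_true + var_error)
```

Here reliability = 8 / 10 = 0.8, i.e., 80% of the observed variance reflects true differences and 20% reflects error.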
Variance resulting from error
Error variance is the portion of the total variance in scores that is due to measurement error.
Test factors impacting reliability
clarity of instructions, the environment in which the test is administered, and the characteristics of the test-takers.
Differences between intra-rater, inter-rater, test-retest, and split-half reliability
Intra-rater reliability assesses the consistency of one rater over time, inter-rater reliability assesses consistency between different raters, test-retest reliability measures the stability of scores over time, and split-half reliability evaluates internal consistency.
Definition of intra-class correlation
A correlation that quantifies the agreement among repeated or related measurements of the same subjects; commonly used to assess inter-rater and test-retest reliability.
Relationship of reliability and validity
Reliability is necessary for validity; a test can be reliable but not valid, but a valid test must be reliable.
Calculate and interpret the standard error of the measurement & coefficient of variation
The standard error of measurement quantifies the amount of error in an observed score, while the coefficient of variation expresses the standard deviation as a percentage of the mean.
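Both quantities follow from simple formulas, SEM = SD × √(1 − reliability) and CV = (SD / mean) × 100; a sketch with hypothetical summary statistics:

```python
import math

# Hypothetical summary statistics for a test
sd = 10.0           # standard deviation of observed scores
reliability = 0.91  # e.g., a test-retest reliability coefficient
m = 50.0            # mean observed score

# Standard error of measurement: expected error band around an observed score
sem = sd * math.sqrt(1 - reliability)

# Coefficient of variation: SD as a percentage of the mean
cv = (sd / m) * 100
```

With these numbers, SEM = 3.0 score points and CV = 20%; a more reliable test (reliability closer to 1) shrinks the SEM toward zero.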