Independent variable
is the dimension that the experimenter intentionally manipulates; it is the antecedent the experimenter chooses to vary.
treatments, manipulations, interventions, conditions
Levels of the independent variable
To meet the definition of an experiment, at least two different treatment conditions are required; thus, an IV must be given at least two possible values in every experiment. These values are called the ___.
Dependent variable
is the particular behavior we expect to change because of our experimental treatment; it is the outcome we are trying to explain.
measures, effects, outcomes, results
Operational definition
specifies the precise meaning of a variable within an experiment: It defines a variable in terms of observable operations, procedures and measurements.
Experimental operational definition
specifies the exact procedure for creating values of the independent variable.
Measured operational definition
specifies the exact procedure for measuring the dependent variable.
Hypothetical constructs
or concepts, which are unseen processes postulated to explain behavior.
cannot be observed directly
Reliability
means consistency and dependability. Good operational definitions are ___: If we apply them in more than one experiment, they ought to work in similar ways each time.
Test-retest reliability
estimates that are used to evaluate the error associated with administering a test at two different times.
This type of analysis is of value when we measure “traits.”
Ideally, the interval between administrations is six months or more.
Statistic used: Pearson’s r
Traits
relatively enduring characteristics that do not change much over time.
Coefficient of stability
When the interval between testings is greater than six months, the test-retest reliability estimate is referred to as the ___.
Parallel forms
compares two equivalent forms of a test that measure the same attribute.
The two forms are exactly the same statistically (equal score means and variances).
Alternate forms
The two forms use different items; however, the rules used to select items of a particular difficulty level are the same.
Coefficient of equivalence
assesses how consistently different versions of a test (or measurement tool) measure the same construct. It's determined by correlating scores from two equivalent forms of a test administered to the same group.
alternate-forms reliability
Split-half method
a test is given and divided into halves that are scored separately.
The results of one half of the test are then compared with the results of the other half.
An odd-even system is often used to split the items.
Spearman-Brown Formula
Formula used in the split-half method to estimate the reliability of the full test from the correlation between its halves
Inter-item consistency
refers to the degree of correlation among all the items on a scale.
Useful in assessing the homogeneity of the test
Coefficient alpha (Cronbach’s alpha) and KR-20/21
Methods used to obtain estimates of internal consistency.
Coefficient alpha
is a measure of internal consistency, that is, how closely related a set of items are as a group.
used when data are nondichotomous (e.g., multiple choice)
Kuder-Richardson 20 and 21
reliability coefficients used to assess the internal consistency of a test, with KR20 being a more general formula and KR21 a simplified version used when item difficulties are assumed to be equal.
KR-20
varying levels of item difficulty (equal difficulty not assumed)
for dichotomous data (true or false; yes or no)
KR-21
equal level of difficulty
for dichotomous data (true or false; yes or no)
Inter-rater reliability
the degree of agreement or consistency between two or more scorers (or judges or raters) with regard to a particular measure.
External validity
is the degree to which the results of an experiment generalize beyond the specific subjects, settings, and conditions of the study.
.70
acceptable inter-rater reliability
.60
acceptable inter-item reliability when there is good evidence of validity, a theoretical or practical reason for the measure, or the scale is short
Validity
is a judgment or estimate of how well a test measures what it purports to measure in a particular context.
Face validity
the judgment about the items’ appropriateness is made by the test taker rather than an expert in the domain (the test taker is involved).
Content validity
the type of validity that is important whenever a test is used to make inferences about the broader domain of knowledge and/or skills represented by a sample of items (a professional is involved).
Construct validity
refers to how well a test or tool measures the construct that it was designed to measure.
Convergent validity
measures correlating with the same construct (positive relationship).
Divergent validity
measures correlating with different constructs (negative correlation or no relationship).
Criterion-related validity
is a judgment of how adequately a test score can be used to infer an individual’s most probable standing on some measure of interest, the criterion.
Predictive validity
an index of the degree to which a test score predicts some criterion measure.
Concurrent validity
an index of the degree to which a test score is related to some criterion measure obtained at the same time.
Internal validity
is the degree to which changes in the dependent variable across treatment conditions were due to the independent variable.
establishes a cause-and-effect relationship between the independent and dependent variable.
History
occurs when an event outside the experiment threatens the internal validity by changing the dependent variable.
Maturation
is produced when physical or psychological changes in the subject threaten internal validity by changing the DV.
Testing
occurs when prior exposure to a measurement procedure affects performance on this measure during the experiment.
Instrumentation
occurs when changes in the measurement instrument or measuring procedure threaten internal validity.
Statistical regression
occurs when subjects are assigned to conditions on the basis of extreme scores, the measurement procedure is not completely reliable, and subjects are retested using the same procedure to measure change on the dependent variable.
Selection
occurs when individual differences are not balanced across treatment conditions by the assignment procedure.
Subject mortality
occurs when subjects drop out of experimental conditions at different rates.
Selection interactions
occur when a selection threat combines with at least one other threat (history, maturation, statistical regression, subject mortality, or testing).
Participants, Apparatus/materials, and Procedure
The Method section of an APA research report describes the ___.
Method section
This section provides the reader with sufficient detail (who, what, when, and how) to exactly replicate your study.