Chapter 7: The Basics of Experimentation

45 Terms

1. Independent variable

  • is the dimension that the experimenter intentionally manipulates; it is the antecedent the experimenter chooses to vary.

  • also called: treatment, manipulation, intervention, or condition

2. Levels of the independent variable

To meet the definition of an experiment, at least two different treatment conditions are required; thus, an IV must take on at least two possible values in every experiment. These values are called the levels of the independent variable.

3. Dependent variable

  • is the particular behavior we expect to change because of our experimental treatment; it is the outcome we are trying to explain.

  • also called: measures, effects, outcomes, or results

4. Operational definition

  • specifies the precise meaning of a variable within an experiment: it defines the variable in terms of observable operations, procedures, and measurements.

5. Experimental operational definition

specifies the exact procedure for creating values of the independent variable.

6. Measured operational definition

specifies the exact procedure for measuring the dependent variable.

7. Hypothetical constructs

  • also called concepts; these are unseen processes postulated to explain behavior.

  • cannot be observed directly

8. Reliability

means consistency and dependability. Good operational definitions are reliable: if we apply them in more than one experiment, they ought to work in similar ways each time.

9. Test-retest reliability

  • estimates used to evaluate the error associated with administering a test at two different times.

  • this type of analysis is of value when we measure “traits.”

  • ideally, the interval between administrations is six months or more.

  • statistic used: Pearson’s r (a minimal computational sketch follows this card).
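
A minimal sketch of how a test-retest coefficient could be computed, assuming two arrays of scores from the same examinees tested on two occasions; the data and variable names here are hypothetical, not taken from the chapter:

```python
import numpy as np

# Hypothetical scores for the same 8 examinees tested on two occasions.
time1 = np.array([12, 15, 9, 20, 17, 11, 14, 18], dtype=float)
time2 = np.array([13, 14, 10, 19, 18, 10, 15, 17], dtype=float)

# Pearson's r between the two administrations serves as the
# test-retest reliability estimate.
r = np.corrcoef(time1, time2)[0, 1]
print(f"Test-retest reliability (Pearson's r): {r:.2f}")
```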

10. Traits

  • characteristics that do not change over time.

  • relatively enduring

11. Coefficient of stability

when the interval between testings is greater than six months, the test-retest reliability estimate is referred to as the coefficient of stability.

12. Parallel forms

  • compares two equivalent forms of a test that measure the same attribute.

  • the forms are constructed to be exactly the same (statistically equivalent, with equal observed-score means and variances).

13. Alternate forms

The two forms use different items; however, the rules used to select items of a particular difficulty level are the same.

14. Coefficient of equivalence

  • assesses how consistently different versions of a test (or measurement tool) measure the same construct. It's determined by correlating scores from two equivalent forms of a test administered to the same group. 

  • also called alternate-forms reliability

15. Split-half method

  • a test is given once and then divided into halves that are scored separately.

  • The results of one half of the test are then compared with the results of the other half.

  • an odd-even system is commonly used to split the items (see the sketch after this card).
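
A minimal sketch of the odd-even split, assuming a small hypothetical matrix of dichotomous item scores (rows = examinees, columns = items); the half-test correlation is then stepped up with the Spearman-Brown formula described in the next card. All data are illustrative:

```python
import numpy as np

# Hypothetical item scores: 6 examinees x 8 items (1 = correct, 0 = incorrect).
scores = np.array([
    [1, 1, 0, 1, 1, 0, 1, 1],
    [0, 1, 0, 0, 1, 0, 0, 1],
    [1, 1, 1, 1, 1, 1, 1, 0],
    [0, 0, 0, 1, 0, 0, 1, 0],
    [1, 0, 1, 1, 1, 1, 0, 1],
    [1, 1, 1, 0, 1, 1, 1, 1],
], dtype=float)

# Odd-even system: odd-numbered items form one half, even-numbered items the other.
odd_half = scores[:, 0::2].sum(axis=1)
even_half = scores[:, 1::2].sum(axis=1)

# Correlate the two half-test totals, then apply the Spearman-Brown correction
# to estimate the reliability of the full-length test.
r_half = np.corrcoef(odd_half, even_half)[0, 1]
r_full = (2 * r_half) / (1 + r_half)
print(f"Half-test r = {r_half:.2f}, Spearman-Brown corrected = {r_full:.2f}")
```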

16. Spearman-Brown Formula

the formula used in the split-half method to step up the correlation between the two halves into an estimate of the reliability of the full-length test (shown below).
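
For reference, the standard Spearman-Brown formula, where r_hh is the correlation between the two half-tests and r_SB is the estimated full-test reliability; the general form adjusts reliability for a test lengthened by a factor of n:

```latex
r_{SB} = \frac{2\,r_{hh}}{1 + r_{hh}},
\qquad
r_{SB} = \frac{n\,r_{xx}}{1 + (n - 1)\,r_{xx}} \ \text{(general form)}
```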

17. Inter-item consistency

  • refers to the degree of correlation among all the items on a scale.

  • Useful in assessing the homogeneity of the test

18. Coefficient alpha/Cronbach alpha and KR 20/21

Methods used to obtain estimates of internal consistency:

19. Coefficient alpha

  • is a measure of internal consistency, that is, how closely related a set of items is as a group.

  • used when the data are nondichotomous (e.g., multiple-choice items); the usual formula is shown below.
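
For reference, the usual computational form of coefficient alpha, where k is the number of items, sigma_i^2 is the variance of item i, and sigma_X^2 is the variance of total test scores:

```latex
\alpha = \frac{k}{k - 1}\left(1 - \frac{\sum_{i=1}^{k} \sigma_i^2}{\sigma_X^2}\right)
```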

20. Kuder-Richardson 20 and 21

  • reliability coefficients used to assess the internal consistency of a test, with KR20 being a more general formula and KR21 a simplified version used when item difficulties are assumed to be equal. 

21. KR20

  • does not assume that items have an equal level of difficulty (item difficulties may vary).

  • for dichotomous data (true or false; yes or no)

22. KR21

  • assumes all items have an equal level of difficulty.

  • for dichotomous data (true or false; yes or no); formulas for KR-20 and KR-21 are shown below.
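
For reference, the standard formulas, where k is the number of items, p_i is the proportion of examinees answering item i correctly, q_i = 1 - p_i, X-bar is the mean total score, and sigma_X^2 is the total-score variance:

```latex
KR_{20} = \frac{k}{k - 1}\left(1 - \frac{\sum_{i=1}^{k} p_i\,q_i}{\sigma_X^2}\right),
\qquad
KR_{21} = \frac{k}{k - 1}\left(1 - \frac{\bar{X}\,(k - \bar{X})}{k\,\sigma_X^2}\right)
```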

23. Inter-rater reliability

  • the degree of agreement or consistency between two or more scorers (or judges or raters) with regard to a particular measure.

  • External validity
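
A minimal sketch of one way to estimate inter-rater reliability, assuming hypothetical numeric ratings of the same behaviors by two raters; Pearson's r is used here for simplicity, though other agreement indices (e.g., percent agreement, Cohen's kappa) are also common:

```python
import numpy as np

# Hypothetical ratings of the same 10 behaviors by two independent raters.
rater_a = np.array([3, 5, 4, 2, 5, 1, 4, 3, 2, 5], dtype=float)
rater_b = np.array([3, 4, 4, 2, 5, 2, 4, 3, 1, 5], dtype=float)

# Pearson's r between the two raters as an inter-rater reliability estimate;
# values of .70 or above are often treated as acceptable (see the next card).
r = np.corrcoef(rater_a, rater_b)[0, 1]
print(f"Inter-rater reliability (Pearson's r): {r:.2f}")
```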

24. .70

acceptable inter-rater reliability

25. .60

acceptable inter-item (internal consistency) reliability when there is good evidence for validity, a theoretical or practical reason, or the scale is short

26. Validity

is a judgment or estimate of how well a test measures what it purports to measure in a particular context.

27. Face validity

the judgment about the items’ appropriateness is made by the test taker rather than by an expert in the domain (the test taker is involved).

28. Content validity

the type of validity that is important whenever a test is used to make inferences about the broader domain of knowledge and/or skills represented by a sample of items (a professional in the domain is involved).

29. Construct validity

refers to how well a test or tool measures the construct that it was designed to measure

30. Convergent validity

the extent to which a measure correlates with other measures of the same construct (a positive relationship is expected).

31. Divergent validity

the extent to which a measure does not correlate with measures of different constructs (a negative correlation or no relationship is expected).

32. Criterion-related validity

is a judgment of how adequately a test score can be used to infer an individual’s most probable standing on some measure of interest, with that measure being the criterion.

33. Predictive validity

an index of the degree to which a test score predicts some criterion measure obtained at a later time.

34. Concurrent validity

an index of the degree to which a test score is related to some criterion measure obtained at the same time

35. Internal validity

  • is the degree to which changes in the dependent variable across treatment conditions were due to the independent variable.

  • establishes a cause-and-effect relationship between the independent and dependent variable.

36. History

occurs when an event outside the experiment threatens the internal validity by changing the dependent variable.

37. Maturation

is produced when physical or psychological changes in the subject threaten internal validity by changing the DV.

38. Testing

occurs when prior exposure to a measurement procedure affects performance on this measure during the experiment.

39. Instrumentation

occurs when changes in the measurement instrument or measuring procedure threaten internal validity.

40. Statistical regression

occurs when subjects are assigned to conditions on the basis of extreme scores, the measurement procedure is not completely reliable, and subjects are retested using the same procedure to measure change on the dependent variable.
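
A minimal simulation sketching why this threat arises, under the assumption of an imperfectly reliable measure (a stable true score plus random error at each testing): subjects selected for extreme pretest scores drift back toward the mean on retest even with no treatment. All numbers and variable names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate 10,000 subjects: a stable true score plus independent measurement
# error at each testing (i.e., the measure is not completely reliable).
true_score = rng.normal(100, 10, size=10_000)
pretest = true_score + rng.normal(0, 8, size=10_000)
posttest = true_score + rng.normal(0, 8, size=10_000)  # no treatment given

# Assign subjects to a condition on the basis of extreme pretest scores (top 5%).
extreme = pretest > np.percentile(pretest, 95)

# Their retest mean drifts back toward the overall mean: a regression artifact.
print(f"Extreme group pretest mean : {pretest[extreme].mean():.1f}")
print(f"Extreme group posttest mean: {posttest[extreme].mean():.1f}")
print(f"Overall mean               : {pretest.mean():.1f}")
```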

41. Selection

occurs when individual differences are not balanced across treatment conditions by the assignment procedure.

42. Subject mortality

occurs when subjects drop out of experimental conditions at different rates.

43. Selection interactions

occur when a selection threat combines with at least one other threat (history, maturation, statistical regression, subject mortality, or testing).

44. Participants, Apparatus/materials, and Procedure

The Method section of an APA research report describes the participants, the apparatus or materials, and the procedure.

45. Method section

This section provides the reader with sufficient detail (who, what, when, and how) to exactly replicate your study.