Research Design Elements and Controls in Experimental Psychology

83 Terms

1
New cards

Experimental design

A structured approach to planning and conducting manipulative or 'natural' experiments, intended to investigate relationships between variables.

2
New cards

Set of treatments

The set of treatments included in a research study.

3
New cards

Set of experimental units

The set of experimental units included in the study.

4
New cards

Rules and procedures

The rules and procedures by which the treatments are assigned to the experimental units (or vice versa).

5
New cards

Measurements

The measurements made on the experimental units after the treatments have been applied.

6
New cards

Experimental treatment

Any specific intervention or manipulation of an independent variable (IV) that is applied to experimental units.

7
New cards

Control treatment

Designed to eliminate alternate explanations of experimental results, specifically focusing on mitigating experimenter bias and experimental errors.

8
New cards

Negative control treatment

A basis of comparison that is identical to the experimental treatment except that the specific manipulation or intervention is not applied.

9
New cards

Positive control treatment

Used to verify that an experimental procedure is functioning as expected, where experimental units are treated in a way that is known to show an expected result.

10
New cards

Sham control treatment

An additional negative control treatment in which the experimental procedure is used, but the actual specific treatment is not applied.

11
New cards

Placebo

An inactive substance or treatment, often used in clinical trials, to compare against an active treatment.

12
New cards

No control treatment

A design that compares two or more levels of the experimental treatment against each other to test for a treatment effect.

13
New cards

Treatment

Any specific intervention or manipulation of an IV applied to units.

14
New cards

Levels

The different forms or quantities of the manipulation (e.g., different levels of hunger on aggressive behavior).

15
New cards

Experimental unit

The smallest entity to which a treatment is randomly applied and on which a response measurement is taken.

16
New cards

Replicate

Each independent experimental unit within a single treatment group.

17
New cards

Sample size (N)

Determined by the number of replicates per treatment group; research designs are considered legitimate only if they provide adequate replication.

18
New cards

Pseudoreplication

A statistical error that occurs when a researcher's analysis mistakenly treats non-independent samples as if they are independent replicates.

19
New cards

Spatial pseudoreplication

Taking multiple samples from within the same experimental unit.

20
New cards

Temporal pseudoreplication

Taking multiple measurements over time from the same experimental unit.

21
New cards

Type I statistical error

A false positive error where a true null hypothesis is incorrectly rejected.

22
New cards

Type II statistical error

A false negative error in which a false null hypothesis is not rejected (i.e., is incorrectly retained).
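
The two error types can be made concrete with a short simulation. The sketch below is illustrative only: the per-group sample size, effect size, number of trials, and alpha are invented assumptions, and NumPy and SciPy are assumed to be available. It counts how often a true null hypothesis is rejected (Type I) and how often a false null hypothesis is retained (Type II).

import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
alpha = 0.05          # significance threshold (assumed)
n, trials = 20, 2000  # per-group sample size and number of simulated experiments

type1 = 0  # rejections of a null hypothesis that is actually true
type2 = 0  # non-rejections of a null hypothesis that is actually false
for _ in range(trials):
    # Null true: both groups are drawn from the same distribution.
    a, b = rng.normal(0, 1, n), rng.normal(0, 1, n)
    if ttest_ind(a, b).pvalue < alpha:
        type1 += 1
    # Null false: the second group is genuinely shifted by 0.5 SD.
    a, b = rng.normal(0, 1, n), rng.normal(0.5, 1, n)
    if ttest_ind(a, b).pvalue >= alpha:
        type2 += 1

print(f"Type I rate  ~ {type1 / trials:.2f} (should sit near alpha = {alpha})")
print(f"Type II rate ~ {type2 / trials:.2f} (depends on effect size and n)")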

23
New cards

Blind designs

Participants do not know if they are in the treatment group or the control group.

24
New cards

Double-blind designs

Neither the participants nor the researchers who evaluate them know who is in the treatment group or the control group.

25
New cards

Triple blinding

An approach where patients, doctors, and statisticians are all kept unaware of which group is the control group until the analysis is complete.

26
New cards

Randomization

The technique of assigning experimental units to treatments by chance, so that no systematic pattern determines which unit receives which treatment.

27
New cards

Importance of randomization

Eliminates or reduces bias in the research design, preventing systematic biases between test groups.
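
A minimal sketch of how random assignment can be carried out in practice; the unit labels, group names, and sizes below are invented for illustration, and Python's standard-library random module is used.

import random

units = [f"subject_{i:02d}" for i in range(1, 21)]   # 20 hypothetical experimental units
treatments = ["control", "treatment"]

random.seed(42)        # fixed seed only so the example is reproducible
random.shuffle(units)  # removes any ordering pattern (arrival time, severity, etc.)

# Deal the shuffled units into equal-sized groups.
assignment = {t: units[i::len(treatments)] for i, t in enumerate(treatments)}
for group, members in assignment.items():
    print(group, members)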

28
New cards

Regression to the mean

The observation that subjects who stand out on an initial measurement will, on average, show measurements closer to their true mean on subsequent measurements.

29
New cards

How regression to the mean arises

Occurs when subjects are selected based on extreme scores, which reflect both true ability and temporary factors.

30
New cards

Example of regression to the mean

If a researcher selects patients specifically for high blood pressure, their initial high reading may be due to temporary factors, leading to improvement regardless of treatment.
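
The blood-pressure example can be reproduced with a short simulation (all numbers are invented for illustration, and NumPy is assumed): each person has a stable true level plus day-to-day noise, and only people whose first reading is extreme are selected. Their second reading improves on average even though no treatment was applied.

import numpy as np

rng = np.random.default_rng(7)
n = 10_000
true_level = rng.normal(130, 10, n)          # each person's stable blood pressure
reading1 = true_level + rng.normal(0, 8, n)  # first measurement = true level + temporary factors
reading2 = true_level + rng.normal(0, 8, n)  # second measurement, still no treatment applied

selected = reading1 > 150                    # enrol only people with an extreme first reading
drop = reading1[selected].mean() - reading2[selected].mean()
print(f"Average 'improvement' with no treatment at all: {drop:.1f} mmHg")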

31
New cards

Effect of pseudoreplication

Increases the likelihood of a Type I error.
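
A small simulation can show why pseudoreplication inflates the Type I error rate. The scenario below is invented for illustration (tanks of fish, a shared tank effect, an assumed alpha of 0.05, with NumPy and SciPy): measurements from fish in the same tank are correlated, so analysing each fish as an independent replicate rejects a true null hypothesis far more often than 5% of the time.

import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
alpha, trials = 0.05, 2000
tanks_per_group, fish_per_tank = 3, 10  # the tank is the true experimental unit

def sample_group():
    """One treatment group: fish values are correlated within their tank."""
    tank_means = rng.normal(0, 1, tanks_per_group)                # shared tank effect
    noise = rng.normal(0, 0.3, (tanks_per_group, fish_per_tank))  # fish-level noise
    return (tank_means[:, None] + noise).ravel()

false_positives = 0
for _ in range(trials):
    # There is no real treatment effect, yet every fish is (wrongly) treated as a replicate.
    if ttest_ind(sample_group(), sample_group()).pvalue < alpha:
        false_positives += 1

print(f"Observed Type I rate ~ {false_positives / trials:.2f} (nominal rate is {alpha})")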

32
New cards

Fisher's justification of randomization

The source does not detail Fisher's justification of randomization in his hypothetical tea-tasting experiment.

33
New cards

Lady Tasting Tea case study

The source does not explain which type of error would be 'worse' in that context.

34
New cards

Confounding factors

Factors that can systematically affect the results if randomization is not successful.

35
New cards

Statistical significance

When randomization is successful, a statistically significant result can be attributed to the intervention, because systematic alternative explanations have been ruled out.

36
New cards

Inflation of Type I error rate

Occurs when randomization is not used; for example, regression to the mean among subjects selected for extreme scores can be mistaken for a treatment effect.

37
New cards

Temporary factors

Elements like a particularly stressful week or measurement error that can affect initial measurements.

38
New cards

Independent samples

Samples that are not influenced by each other, which is necessary for valid statistical analysis.

39
New cards

Experimental units

The subjects or items to which treatments are assigned in an experiment.

40
New cards

Control group

The group in an experiment that does not receive the treatment, used for comparison.

41
New cards

Treatment group

The group in an experiment that receives the treatment being tested.

42
New cards

Simple random sampling

Individuals are assigned randomly to specific treatments.

43
New cards

Cluster sampling

Arbitrary groups or clusters of individuals are assigned randomly to specific treatments.

44
New cards

Stratified random sampling

Individuals belonging to particular categories or strata are assigned randomly to specific treatments in proportion to the size of each category.
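
The three random-assignment schemes above can be sketched in a few lines; the people, clusters, strata, and group sizes below are invented for illustration. Simple assignment shuffles individuals, cluster assignment shuffles whole groups, and stratified assignment randomizes separately within each stratum.

import random

random.seed(0)
treatments = ["A", "B"]

# Simple random assignment: shuffle individuals, then deal them out.
people = [f"p{i}" for i in range(8)]
random.shuffle(people)
simple = {t: people[i::2] for i, t in enumerate(treatments)}

# Cluster assignment: whole clusters (e.g., classrooms) are randomized, not individuals.
clusters = {"class1": ["p0", "p1"], "class2": ["p2", "p3"],
            "class3": ["p4", "p5"], "class4": ["p6", "p7"]}
names = list(clusters)
random.shuffle(names)
cluster = {t: [clusters[c] for c in names[i::2]] for i, t in enumerate(treatments)}

# Stratified assignment: randomize separately within each stratum (e.g., sex).
strata = {"female": ["p0", "p1", "p2", "p3"], "male": ["p4", "p5", "p6", "p7"]}
stratified = {t: [] for t in treatments}
for members in strata.values():
    random.shuffle(members)
    for i, t in enumerate(treatments):
        stratified[t].extend(members[i::2])

print(simple, cluster, stratified, sep="\n")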

45
New cards

Haphazard sampling

Selects items without a plan, attempting to avoid bias but still relying on convenience and personal judgment.

46
New cards

Random sampling

Uses a statistical method where every item has an equal chance of selection, making it more reliable and unbiased.

47
New cards

Constants

Potentially confounding variables that are maintained consistent or uniform across all replicates.

48
New cards

Independent Variable (IV)

A variable that is manipulated (as a 'treatment') in some way to test if it has an effect.

49
New cards

Dependent Variable (DV)

What is measured in response to the manipulation of the IV.

50
New cards

Categorical Variable

Variables whose levels are discrete categories (i.e., non-continuous).

51
New cards

Quantitative Variable

Variables whose levels represent numerical measurements on a continuous scale.

52
New cards

Ordinal Scale

Numerical measurements represent a ranked order, but the intervals between ranks may be unequal.

53
New cards

Interval Scale

Numerical measurements represent equal distances between intervals, but there is no true zero.

54
New cards

Ratio Scale

Numerical measurements represent equal distances between intervals, and zero represents 'none' of the variable being measured.

55
New cards

Column graph (Bar graph)

Recommended visualization for categorical IV and categorical DV.

56
New cards

Line graph

Recommended visualization for categorical IV and quantitative DV.

57
New cards

Box plot

Recommended visualization for quantitative IV and categorical DV.

58
New cards

Scatterplot

Recommended visualization for quantitative IV and quantitative DV.
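
The four recommendations above collapse into a small lookup keyed by variable types. The sketch below simply restates the cards; the function name is invented for illustration.

# Recommended plot type keyed by (IV type, DV type), restating the four cards above.
RECOMMENDED_PLOT = {
    ("categorical", "categorical"): "column (bar) graph",
    ("categorical", "quantitative"): "line graph",
    ("quantitative", "categorical"): "box plot",
    ("quantitative", "quantitative"): "scatterplot",
}

def recommend_plot(iv_type: str, dv_type: str) -> str:
    """Return the recommended visualization for the given variable types."""
    return RECOMMENDED_PLOT[(iv_type, dv_type)]

print(recommend_plot("quantitative", "quantitative"))  # -> scatterplot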

59
New cards

Correlation

Neither variable is dependent on the other; IV/DV cannot be defined.

60
New cards

Regression

One variable is identified as the IV and one as the DV, often based on temporal precedence.
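
A brief sketch of the distinction (invented data, NumPy assumed): correlation is symmetric, a single number describing how two variables co-vary, whereas regression designates x as the IV and y as the DV and fits a directional prediction equation.

import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(size=50)            # treated as the IV only in the regression framing
y = 2.0 * x + rng.normal(size=50)  # treated as the DV: depends on x plus noise

r = np.corrcoef(x, y)[0, 1]             # correlation: symmetric, corrcoef(y, x) gives the same r
slope, intercept = np.polyfit(x, y, 1)  # regression: y predicted from x, so direction matters

print(f"r = {r:.2f}")
print(f"y ~ {slope:.2f} * x + {intercept:.2f}")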

61
New cards

Association

A relationship between variables that is not necessarily causal.

62
New cards

Causation

A causal link between variables, which can be established only when the research design supports it.

63
New cards

Interpretation/Limitations of Correlation

An association is not the same as a causal link; associations are limited because they can be influenced by uncontrolled confounding variables.

64
New cards

Interpretation/Limitations of Regression

Requires three criteria to establish cause and effect: (1) purported cause precedes the effect, (2) cause and effect vary together, and (3) the research design eliminates alternative explanations.

65
New cards

Cohort Study

Good for studying the consequences of rare exposures. Can directly establish the relative risk of developing a disease.

66
New cards

Cohort Study Limitations

Expensive (especially prospective designs), requires a large sample size, takes a long time to complete, and is prone to attrition bias.

67
New cards

Cohort Study Data Collection

Starts in the past (retrospective) or present (prospective) and collects data moving forward in time. Compares the future incidence rate of developing an outcome between an exposed group and an unexposed group.
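
Relative risk from a cohort design reduces to simple arithmetic; the counts below are invented for illustration. It is the incidence of the outcome in the exposed group divided by the incidence in the unexposed group.

# Hypothetical cohort followed forward in time.
exposed_cases, exposed_total = 30, 1000      # developed the outcome among the exposed
unexposed_cases, unexposed_total = 10, 1000  # developed the outcome among the unexposed

risk_exposed = exposed_cases / exposed_total
risk_unexposed = unexposed_cases / unexposed_total
relative_risk = risk_exposed / risk_unexposed

print(f"Relative risk = {relative_risk:.1f}")  # 3.0: exposure triples the risk in this example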

68
New cards

Case-Control Study

Good for studying the causes of rare outcomes. Can determine odds ratios associated with increased or decreased risk.

69
New cards

Case-Control Study Data Collection

Starts in the present and collects data moving backward in time. Compares the exposure history of a group with a disease/condition (case group) to a control group without the disease/condition.

70
New cards

Case-Control Study Limitations

Difficult to obtain a comparatively appropriate control group. Historical data may be incomplete or of limited quality. Highly susceptible to confounding variables.
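
The case-control counterpart is the odds ratio, computed from the exposure history of cases and controls; the counts below are invented for illustration. Because subjects are selected on the outcome, incidence (and therefore relative risk) cannot be computed directly.

# Hypothetical case-control study looking backward at exposure history.
cases_exposed, cases_unexposed = 40, 60        # people with the disease
controls_exposed, controls_unexposed = 20, 80  # comparable people without the disease

odds_cases = cases_exposed / cases_unexposed
odds_controls = controls_exposed / controls_unexposed
odds_ratio = odds_cases / odds_controls

print(f"Odds ratio = {odds_ratio:.2f}")  # ~2.67: exposure is associated with higher odds of disease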

71
New cards

Internal Validity

The degree to which a research study successfully establishes a causal relationship between the independent and dependent variables.

72
New cards

Factors Affecting Internal Validity

High internal validity depends on the proper application of research design principles, such as legitimate control, randomization, and the identification of confounds.

73
New cards

Criteria for Establishing Causality

The purported cause must precede the effect, the cause and effect must vary together, and the design must eliminate alternative explanations.

74
New cards

External Validity

The degree to which the study results can be generalized across different times, populations, settings, and so on.

75
New cards

Enhancing External Validity

Conducting experiments in natural settings (e.g., in the field rather than in a lab) and replicating the study across different settings.

76
New cards

Relationship Between Validities

High internal validity does not guarantee high external validity, or vice versa. A study might have high internal validity but be irrelevant to the real world, or it might be highly relevant but produce results that are not trustworthy (low internal validity).

77
New cards

Base Rate Fallacy

The error of misinterpreting the p value (the probability of observing data assuming there is no effect) as the probability that the finding itself is a fluke or that the hypothesis is true.

78
New cards

Misinterpretation of p-value

People often wrongly assume a low p-value means the chance of error is similarly low (e.g., that p<0.05 implies a 95% chance the result is true).

79
New cards

Importance of Base Rate

To calculate the actual probability that a result is true, one must factor in the base rate (the prior probability that the hypothesis is correct).

80
New cards

False Positives in Research

In fields where the base rate of true associations is low (e.g., early drug trials), a large fraction of statistically significant results (sometimes 38% or more, depending on the conditions) are actually false positives; the base rate fallacy leads researchers to overlook how large this fraction is.

81
New cards

Issues Magnifying Base Rate Fallacy

This error is magnified by issues like multiple comparisons.

82
New cards

Interpreting Research Results

Understanding the base rate is essential to avoid being misled by low p-values.

83
New cards

Example of Base Rate Fallacy

Even a highly sensitive test (such as a mammogram) can return a positive result when the actual probability of disease remains low (e.g., about 9%), because the base rate of the disease in the tested population is low.
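
The roughly 9% figure follows from Bayes' theorem. The numbers below are illustrative assumptions chosen to land near the card's example (about 1% prevalence, 90% sensitivity, 91% specificity); they are not figures from the source.

prevalence = 0.01   # base rate: 1% of the tested population actually has the disease
sensitivity = 0.90  # P(positive test | disease)
specificity = 0.91  # P(negative test | no disease)

true_positive = prevalence * sensitivity
false_positive = (1 - prevalence) * (1 - specificity)

# P(disease | positive test), i.e. the positive predictive value.
ppv = true_positive / (true_positive + false_positive)
print(f"Chance of disease given a positive test: {ppv:.0%}")  # about 9%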