4.0. experimental method


1
New cards

demonstrating cause & effect

  • experiment

2
New cards

experimental method

  • demonstrates a relatively unambiguous connection between cause & effect (aim)

  • these connections are what science tries to establish.

  • a type of research in which the researcher carefully manipulates a limited number of factors (IVs) and measures the impact on other factors (DVs).

3
New cards

alternative interpretations of findings

  • even when data supports a hypothesis, we often cannot rule out competing explanations; that is, we cannot confidently point to a clear cause & effect relationship between events & human behaviour

4
New cards

independent variable (IV).

the variable the researcher manipulates independently of what the other variables are doing

5
New cards

dependent variable (DV).

the variable we expect to change, depending on the manipulation we’re doing

6
New cards

confounding variables

other variables that might have an effect on the dependent variable

7
New cards

experiments in psychology look at

  • the effect of the experimental change (IV) on a behavior (DV).

8
New cards

Experiments can meet the three causal rules

  • covariance

  • temporal precedence

  • internal validity

9
New cards

Covariance

  • signifies the direction of the linear relationship between the two variables (directly proportional or inversely proportional)

  • an increase in one variable may have a positive or negative impact on the other
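For reference (my addition, not part of the original card), the sample covariance that underlies this idea can be written as:

```latex
\mathrm{Cov}(X, Y) = \frac{1}{n-1} \sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})
```

A positive value means the two variables tend to rise and fall together; a negative value means they move in opposite directions.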

10
New cards

Temporal precedence

  • IV comes before DV

11
New cards

internal validity (very important)

  • the extent to which a study establishes a trustworthy cause-and-effect relationship between a treatment & an outcome

12
New cards

variation among scores recorded in an experiment can be divided into three sources

  • variance from the treatment (the effect under investigation)

  • systematic variance caused by confounding

  • unsystematic variance coming from random errors.
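A minimal sketch of how these three sources might combine into a single observed score (my own illustration; the scenario, variable names, and effect sizes are assumptions, not from the cards):

```python
import random

def simulate_score(in_treatment_group: bool, tested_in_noisy_room: bool) -> float:
    """Toy model: one participant's score as the sum of the three variance sources."""
    baseline = 50.0
    treatment_effect = 5.0 if in_treatment_group else 0.0    # variance from the treatment (IV)
    confound_effect = -3.0 if tested_in_noisy_room else 0.0  # systematic variance from a confound
    random_error = random.gauss(0, 2.0)                      # unsystematic (error) variance
    return baseline + treatment_effect + confound_effect + random_error

# e.g. an experimental-group participant tested in a quiet room
print(simulate_score(in_treatment_group=True, tested_in_noisy_room=False))
```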

13
New cards

error variance emerges when

  • behavior of participants is influenced by variables that the researcher does not examine (did not include in his or her study)

  • or through measurement error (errors made during measurement).

14
New cards

systematic variance

refers to that part of the total variance that can predictably be related to the variables that the researcher examines.

15
New cards

Three types of Groups in an experiment  

  • experimental group

  • control group

  • placebo group

16
New cards

experimental group

  • The one that is being manipulated

17
New cards

control group

  • Act as a BASELINE MEASURE of behaviour without treatment

  • A control group is not always necessary

18
New cards

placebo group

  • One group of patients is often given an inert substance (a ‘placebo’) → patients think they have had the treatment.

  • They are similar to a control group because they experience all the same conditions as the experimental group, except they do not receive the change in the independent variable that is expected to influence the dependent variable

19
New cards

limitations of the experimental method → it can sometimes be

  • inappropriate 

  • unethical 

  • artificial → limits the range of behaviours that can sensibly be studied

20
New cards

non-experimental methods suited for

  • naturally occurring phenomena: 

  • reactions to parental discipline

  • gender-specific behaviour

  • everyday health behaviour

21
New cards

Types of experimental design

  • single case experimental design

  • between group design

22
New cards

single case experimental design

  • Withdrawal or reversal designs (ABAB)

  • Multi-treatment design (ABCBC)

  • Multiple baseline design 

  • Alternating design 

  • Variable criteria design 

23
New cards

Between group designs

  • Parallel-group designs

  • Crossover or within groups design 

  • Cluster design 

  • Factorial design 

  • Dismantling design 

24
New cards

general dimensions of group design

  • Selection of the Sample: Random/non-random

  • Assignment to the groups: Random/non-random

  • Treatment Information: Blinded/open

25
New cards

Random sampling

  • selecting a pool of research participants that represents the population you’re trying to learn about

26
New cards

Random assignment

  • randomly allocating participants to control or experimental groups

  • controls all variables except the one you’re manipulating.

27
New cards

random sampling vs random assignment

  • first → random sampling

  • next → random assignment
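A minimal Python sketch of the two steps in sequence (my own illustration; the population list and group sizes are hypothetical):

```python
import random

# Hypothetical sampling frame of 1,000 people
population = [f"person_{i}" for i in range(1000)]

# Step 1: random sampling -- draw a sample that represents the population
sample = random.sample(population, k=40)

# Step 2: random assignment -- allocate the sampled participants to groups by chance
random.shuffle(sample)
experimental_group = sample[:20]
control_group = sample[20:]

print(len(experimental_group), len(control_group))  # 20 20
```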

28
New cards

single-blind study

  • participants are blinded.

29
New cards

double-blind study

  • both participants and experimenters are blinded.

30
New cards

triple-blind study

  • assignment is hidden from participants, experimenters, & the researchers analyzing the data.

31
New cards

illustration of parallel-group

32
New cards

No-Treatment Control

  • What it is: Participants get no treatment at all.

  • What happens: They are only assessed before and after the study.

  • Purpose: Shows how much change happens naturally, without any intervention.

33
New cards

Waitlist Control

  • What it is: Participants do not get treatment now, but will receive it later.

  • What happens: They are assessed while waiting, then treated after the experiment.

  • Purpose: Helps control for expectation effects (people may expect to improve just by knowing treatment is coming).

34
New cards

Attention-Placebo / Nonspecific Control

  • What it is: Participants get some kind of interaction (e.g., therapist attention) but not the real treatment.

  • What happens: They receive support/attention, but not the active therapeutic techniques.

  • Purpose: Controls for the fact that attention alone can improve symptoms.

35
New cards

Standard Treatment / Routine Care Control

  • What it is: Participants get the usual or current standard treatment, not the new experimental one.

  • What happens: They receive normal care, not the experimental intervention.

  • Purpose: Tests whether the new treatment is better than what is already commonly used.

36
New cards

Parallel-group or INDEPENDENT GROUPS DESIGNS → multilevel design

  • more than two levels of the IV

  • more realistic

  • non-linear effect can be discovered 

37
New cards

crossover or within groups design

38
New cards

Independent sample designs

39
New cards

within-subject design (crossover or within groups design)

  • same measure is repeated on each participant

  • under the various conditions of the IV

  • participants are the same for both conditions

  • all other variables are controlled 

  • differences within participants → effect of the manipulated IV

  • individuals → serve as their own control

40
New cards

multiple testing

  • is not the same as repeated measures.

41
New cards

illustrations of crossover designs / repeated design 

42
New cards

ORDER EFFECTS

  • Effects from the order in which people participate in conditions

43
New cards

how to deal with order effects?

  • Counterbalancing

  • Complex counterbalancing

  • Randomisation of condition order

  • Randomisation of stimulus items

  • Elapsed time

  • Using another design

44
New cards

Counterbalancing 

  • Having two conditions (A & B), one group does the AB order while the other group does the BA order.

45
New cards

Complex counterbalancing

  • To balance asymmetrical order effects, all participants take the conditions in the order ABBA

  • When there are more than 2 conditions, you divide the participants into as many groups as there are possible orders (see the order-generation sketch below)

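A small sketch of how counterbalanced condition orders could be generated (my own addition; the condition labels are illustrative):

```python
from itertools import permutations

# Simple counterbalancing with two conditions: half the group runs AB, the other half BA
two_condition_orders = [("A", "B"), ("B", "A")]

# ABBA ordering used to balance asymmetrical order effects (each participant does all four)
abba_order = ["A", "B", "B", "A"]

# With more than two conditions, full counterbalancing uses every possible order,
# and participants are divided across as many groups as there are orders
conditions = ["A", "B", "C"]
all_orders = list(permutations(conditions))
print(len(all_orders), all_orders)  # 3! = 6 orders
```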
46
New cards

Randomisation of condition order 

  • Present the conditions to each participant in a different randomly arranged order. 
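As an alternative to counterbalancing, a sketch of giving each participant their own randomly arranged order (my own illustration):

```python
import random

conditions = ["A", "B", "C"]

def random_condition_order(conditions):
    """Return a fresh, randomly shuffled order of the conditions for one participant."""
    order = list(conditions)
    random.shuffle(order)
    return order

print(random_condition_order(conditions))  # e.g. ['B', 'A', 'C']
```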

47
New cards

Randomisation of stimulus items

  • Present items from the different conditions in a single, randomly mixed sequence

48
New cards

Elapsed time 

Leave enough time between conditions for any learning or fatigue effects to dissipate

49
New cards

Using another design

Move to an independent samples design

50
New cards

INDEPENDENT SAMPLES DESIGNS

  • A larger sample is needed

  • Greater variance between participants makes the analysis harder

  • No contamination across independent variable levels

51
New cards

REPEATED MEASURES

  • Order effects

  • Effect of attrition

  • Taking both conditions creates a demand-characteristics bias

  • Practice effect

  • Need for equivalent stimuli

52
New cards

comparison between independent samples design & repeated measures

53
New cards

PARTICIPANT VARIABLES

  • Participant variables = individual differences

  • can pose a threat to internal validity

54
New cards

Independent groups designs

  • these differences can accidentally cause differences in the results.

  • This is a threat to internal validity

55
New cards

Ways to Control Participant Differences

  • Random Assignment

  • Matching

  • Pretest

56
New cards

Random assignment

  • Assign participants to groups by chance.

  • Makes groups equal on average → reduces bias

57
New cards

Matching

  • Pair participants based on a variable (e.g., IQ), then split pairs into groups

  • Ensures groups are equal on key characteristics
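A minimal sketch of matching on one variable and then splitting each pair across groups (my own illustration; the participant data and the matching variable are hypothetical):

```python
import random

# Hypothetical participants with an IQ score to match on
participants = [("p1", 95), ("p2", 118), ("p3", 102), ("p4", 120), ("p5", 99), ("p6", 104)]

# Rank by the matching variable, then pair adjacent participants
ranked = sorted(participants, key=lambda p: p[1])
pairs = [ranked[i:i + 2] for i in range(0, len(ranked), 2)]

group_a, group_b = [], []
for pair in pairs:
    random.shuffle(pair)      # within each matched pair, assign the two members by chance
    group_a.append(pair[0])
    group_b.append(pair[1])

print(group_a)
print(group_b)
```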

58
New cards

Pretest

  • Measure DV before & after treatment

  • Helps detect initial group differences

59
New cards

Factorial design 

  • A study with at least two IVs.

  • Each IV has at least two levels.

  • IVs are crossed with each other, creating all possible combinations of the levels.

  • IVs can be participant variables or manipulated variables.

  • IVs can be within-groups variables or independent-groups variables
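A small sketch of how crossing two IVs produces every cell of a factorial design (my own addition; the IV names and levels are hypothetical):

```python
from itertools import product

# Two hypothetical IVs, each with two levels (a 2 x 2 factorial design)
caffeine = ["no caffeine", "caffeine"]
sleep = ["normal sleep", "sleep deprived"]

# Crossing the IVs creates all possible combinations of their levels (the design's cells)
cells = list(product(caffeine, sleep))
for cell in cells:
    print(cell)
# ('no caffeine', 'normal sleep'), ('no caffeine', 'sleep deprived'),
# ('caffeine', 'normal sleep'), ('caffeine', 'sleep deprived')
```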

60
New cards

illustration of factorial design

61
New cards

Factorial design aims at

  • test theories

  • test limits

  • show interactions

62
New cards

main effect of factorial design in experiment & analysis

  • effect of an IV on a DV averaged across the levels of any other IV

  • used to distinguish main effects from interaction effects.
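A toy example of computing a main effect as an average across the levels of the other IV in a 2 x 2 design (my own numbers, purely illustrative):

```python
# Hypothetical cell means for a 2 x 2 design: cell_means[IV1 level][IV2 level]
cell_means = {
    "caffeine":    {"normal sleep": 80, "sleep deprived": 60},
    "no caffeine": {"normal sleep": 70, "sleep deprived": 50},
}

# Main effect of IV1: average each of its levels across the levels of the other IV
for level, cells in cell_means.items():
    marginal_mean = sum(cells.values()) / len(cells)
    print(level, marginal_mean)  # caffeine 70.0, no caffeine 60.0 -> main effect of 10 points
```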

63
New cards

when there is an interaction in the data

  • it is usually more important than any main effects you may find

64
New cards

essential tables to study

65
New cards

identifying factorial design 

  • in empirical journal articles:

  • the method section states that “this was a factorial design”

  • the results section describes the statistical tests for main effects and interactions

66
New cards

IVs in a factorial design can be manipulated

  • in one of three ways → as within-groups variables, between-groups variables, or a mix of the two (mixed design)

67
New cards

implications related to how IVs can be manipulated in factorial design

  • Implications for numbers of participants

  • Implications for statistical testing

68
New cards

Randomized controlled trials in clinical research

  • compares a proposed new treatment against an existing standard of care

  • →  these are then termed the 'experimental' and 'control' treatments, respectively.

69
New cards

What are the four main types of RCT study designs?

  • Parallel-group: Different groups receive different conditions (most common).

  • Crossover: Each participant receives multiple conditions in sequence.

  • Cluster: Pre-existing groups (e.g., classrooms, clinics) are randomly assigned to conditions.

  • Factorial: Participants receive combinations of interventions (tests multiple variables at once).

70
New cards

What is the purpose of randomization and blinding in RCTs?

  • Randomization: Makes groups comparable by balancing participant differences → reduces selection & allocation bias → improves internal validity.

  • Blinding: Prevents participants and/or researchers from knowing group assignments → reduces placebo effects, demand characteristics, and experimenter bias.

71
New cards

What is an RCT and why is it the gold standard?

  • RCT (Randomized Controlled Trial) compares a new treatment to a control/standard treatment.

  • Participants are randomly assigned to groups → this reduces selection bias & balances known & unknown participant differences.

  • Blinding (participants/researchers) reduces expectancy and experimenter bias.
    Therefore, RCTs provide strong evidence for causality.

72
New cards

Single-subject experiments

  • control has to be at an individual level

  • A B A B design

  • more subjects can be added (the design is not limited to a single participant)

73
New cards

A way to see the effect in a single-subject experiment

extending the time of the baseline without adding more variables. 

74
New cards

Validity in experiments (causal claim)

  • Construct validity → how well are the variables measured & manipulated?

  • External validity → to whom/what can the causal claim be generalized?

  • Statistical validity → how well does the data support the causal conclusion?

  • Internal validity → are there alternative explanations for the outcome?

75
New cards

Strengths of Validity in experiments

  • Can establish cause & effect because the IV & extraneous variables are controlled.

  • High internal validity: alternative explanations can be ruled out.

  • Replicable → results can be repeated to check reliability.

76
New cards

Weaknesses of Validity in experiments

  • Artificial setting → results may lack ecological validity.

  • Reactivity: participants may behave differently because they know they are being studied.

  • Limited in the range of real-world behaviors that can be studied, because variables must be tightly controlled.

  • Participants have little personal input, & this may raise ethical issues.