Chapter 11 Exam 3: Experimental Psychology


39 Terms

1

Double-blind study

A study design in which neither the participants nor the experimenters working with the participants know who is in the control group and who is in the experimental group.

2

Blind design

A study design in which the observers are unaware of the experimental conditions to which participants have been assigned

3

One-group, pretest/posttest design

An experiment in which a researcher recruits one group of participants, measures them on a pretest, exposes them to a treatment, intervention, or change, and then measures them on a posttest (a weak design, because it has no comparison group).

4

List of 12 threats to internal validity

  1. regression to the mean

  2. attrition

  3. testing

  4. instrumentation

  5. observer bias

  6. demand characteristics

  7. placebo effects

  8. design confound

  9. selection effects

  10. order effects

  11. maturation

  12. history

5

Regression to the mean

An experimental group whose average is extremely low (or high) at pretest will get better (or worse) over time because the random events that caused the extreme pretest scores do not recur the same way at posttest.
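A minimal sketch of this idea in a simulation (all numbers below — a population mean of 50, ability and luck spreads of 10, a sample of 10,000 — are made-up assumptions for illustration):

```python
import random

random.seed(42)

# Hypothetical model: each person's test score = stable ability + random luck.
N = 10_000
ability = [random.gauss(50, 10) for _ in range(N)]
pretest = [a + random.gauss(0, 10) for a in ability]    # ability + luck on day 1
posttest = [a + random.gauss(0, 10) for a in ability]   # same ability, fresh luck on day 2

# "Extreme" group: the lowest-scoring 10% at pretest.
cutoff = sorted(pretest)[N // 10]
low = [i for i in range(N) if pretest[i] <= cutoff]

pre_mean = sum(pretest[i] for i in low) / len(low)
post_mean = sum(posttest[i] for i in low) / len(low)

print(f"low group at pretest:  {pre_mean:.1f}")
print(f"low group at posttest: {post_mean:.1f}")  # closer to 50, with no treatment at all
```

The low group improves at posttest simply because the bad luck that helped put them in the extreme group does not repeat.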

6

Attrition

An experimental group changes over time, but only because the most extreme cases have systematically dropped out and their scores are not included in the posttest.

7

Testing

A type of order effect: an experimental group changes over time because repeated testing has affected the participants. Practice effects and fatigue effects are subtypes.

8

Instrumentation

An experimental group changes over time, but only because the measurement instrument has changed.

9

Observer bias

An experimental group’s ratings differ from those of a comparison group but only because the researcher expects the groups’ ratings to differ.

10

Demand characteristic

Participants guess what the study’s purpose is and change their behavior in the expected direction.

11

Placebo effect

Participants in an experimental group improve only because they believe in the efficacy of the therapy or drug they receive.

12

Design confound

A second variable that unintentionally varies systematically with the independent variable.

13

Selection effect

In an independent groups design, when the two independent variable groups have systematically different kinds of participants in them.

14

Order effect

In a repeated-measures design, when the effect of the independent variable is confounded with carryover from one level to the other, or with practice, fatigue, or boredom.

15

Maturation

An experimental group improves over time only because of natural development or spontaneous improvement.

16

History

An experimental group changes over time because of an external factor that affects all or most members of the group.

17

Which threats to internal validity especially apply to one-group, pretest/posttest designs?

maturation threats, history threats, regression threats, attrition threats, testing threats, and instrumentation threats (design confounds, selection effects, and order effects are threats that apply to other experimental designs)

18

Which threats to internal validity apply to all studies, including one-group pretest/posttest designs?

observer bias, demand characteristics, and placebo effects

19

Null hypothesis

The hypothesis that there is no difference or no effect; it is the counterpart of the alternative hypothesis. A null effect occurs when this hypothesis is retained (not rejected).

20

null effect

A finding that an independent variable did not make a difference in the dependent variable; there is no significant covariance between the two.

21

Selection-history threat

An outside event or factor systematically affects participants at only one level of the independent variable.

22

Selection-attrition threat

Participants in only one experimental group experience attrition.

23

Jane's possible causes for a null effect

-          Perhaps there is not enough between-groups difference

-          Perhaps within-groups variability obscured the group differences (too much variability within groups relative to the difference between groups)

-          Perhaps there really is no difference

24

Insensitive measures

Sometimes a null result occurs because the researchers haven't operationalized the dependent variable with enough sensitivity. (For example, "your prep course will improve students' scores by 20%" calls for a precise measure, rather than simply recording whether students pass or land in the high group.) The more quantifiable increments a measure has, the more sensitive the study is.

25

Ceiling and floor effects

Types of insensitive measures of the dependent variable: if the task or measure you are using is too easy or too hard, you will not obtain an effect, because all participants score about the same no matter what you do. For example, if you give a calculus test to second graders, they will all score at the floor unless you change the measure.
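The floor-effect idea can be sketched in a small simulation (the skill levels, item counts, and difficulty values below are made-up assumptions, not from the text):

```python
import random
import statistics

random.seed(1)

# Hypothetical setup: two groups truly differ in skill,
# but one version of the test is far too hard for both.
N = 100
group_a_skill = [random.gauss(30, 5) for _ in range(N)]   # weaker group
group_b_skill = [random.gauss(40, 5) for _ in range(N)]   # stronger group

def score(skill, difficulty):
    # Each of 20 items is solved only if skill + luck exceeds the difficulty.
    return sum(skill + random.gauss(0, 5) > difficulty for _ in range(20))

easy_a = [score(s, 35) for s in group_a_skill]   # well-matched difficulty
easy_b = [score(s, 35) for s in group_b_skill]
hard_a = [score(s, 80) for s in group_a_skill]   # far too hard: floor effect
hard_b = [score(s, 80) for s in group_b_skill]

print(statistics.mean(easy_b) - statistics.mean(easy_a))  # clear group difference
print(statistics.mean(hard_b) - statistics.mean(hard_a))  # difference vanishes at the floor
```

On the well-matched test the real skill difference shows up in the scores; on the too-hard test both groups pile up near zero, so the same real difference produces a null result.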

26

Manipulation checks

Used to help detect weak manipulations and ceiling or floor effects. In an experiment, an extra dependent variable that researchers can include to determine how well a manipulation worked.

27

Design confound acting in reverse

You can deliberately introduce a confound (something you know will have an effect) across the levels of your independent variable to check whether your manipulation of the independent variable and your operational definition of the dependent variable are sensitive enough to detect an effect.

28

The book's possible causes of null effects

Not enough variability between levels

Too much variability within levels

If the study was sound, place it in the context of the body of evidence

29

Not enough variability between levels (null effect)

-          Ineffective manipulation of independent variable

-          Insufficiently sensitive measurement of dependent variable

-          Ceiling or floor effects on independent variable

-          Ceiling or floor effects on dependent variable

30

Too much variability within levels

-          Measurement error

-          Individual differences

-          Situation noise

31

If the study was sound, place it in the context of the body of evidence

-          The independent variable could, in truth, have almost no effect on the dependent variable

-          The independent variable could have a true effect on the dependent variable, but because of random errors of measurement or sampling, this one study didn’t detect it.

32

Floor effect

An experimental design problem in which independent variable groups score almost the same on a dependent variable, such that all scores fall at the low end of their possible distribution.

33

Ceiling effect

An experimental design problem in which independent variable groups score almost the same on a dependent variable, such that all scores fall at the high end of their possible distribution.

34

Reverse confound

A confound that counteracts or reverses a true effect of the independent variable, so the study shows a null effect even though the independent variable really does matter.

35

Error variance

Unsystematic variability among the members of a group in an experiment, which might be caused by situation noise, individual differences, or measurement error.

36

Does a between-groups design (across participants) or a within-groups design (inside the same participants) have less variance?

A within-groups design has less variance, because individual differences are held constant across conditions.
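A short simulation of why within-groups designs have less error variance (the baseline spread, a treatment effect of +5, and the noise level are made-up assumptions):

```python
import random
import statistics

random.seed(0)

# Hypothetical model: each participant has a stable baseline; the treatment adds +5.
N = 200
baselines = [random.gauss(100, 15) for _ in range(N)]   # large individual differences

# Within-groups: every participant serves in both conditions, so the big
# person-to-person differences cancel out of the difference scores.
within_diffs = [(b + 5 + random.gauss(0, 3)) - (b + random.gauss(0, 3)) for b in baselines]

# Between-groups: different participants in each condition, so individual
# differences stay mixed into the scores.
control = [random.gauss(100, 15) + random.gauss(0, 3) for _ in range(N)]
treated = [random.gauss(100, 15) + 5 + random.gauss(0, 3) for _ in range(N)]

print(f"spread of within-groups difference scores: {statistics.stdev(within_diffs):.1f}")
print(f"spread of scores in one between-group:     {statistics.stdev(control):.1f}")
```

The difference scores carry only measurement noise, while the between-groups scores still carry the full person-to-person spread, which is what makes the within design more sensitive to the same +5 effect.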

37

Placebo effects

This effect is present when people receive a treatment and improve, but only because they believe they are receiving a valid or effective treatment.

38

One way to evaluate or assess whether there is a placebo effect is to

add a placebo comparison group (a placebo control group).

You can also use a double-blind design to remove bias.

39

The most important thing is to design a study where you can evaluate the placebo effect in relation to the

independent and dependent variables