All of Psyc 012


211 Terms

1

Quasi-experiment

A study that is similar to an experiment but lacks random assignment to conditions. Researchers examine the effect of an independent variable, but participants are assigned to groups based on factors that are not under the researcher’s control (e.g., gender, classrooms, naturally occurring groups).

2

Quasi-independent Variable

A variable that functions like an independent variable in a quasi-experiment but is not manipulated by the researcher. It is usually a pre-existing characteristic or grouping (e.g., age group, school attended, before vs. after an event).

3

Nonequivalent control group posttest-only design

A quasi-experimental design in which participants are assigned to groups (treatment and control) without random assignment, and only a posttest is given to measure outcomes. There is no pretest to assess baseline equivalence.

4

Nonequivalent control group pretest/posttest design

A quasi-experimental design that includes a pretest and posttest for both the treatment and control groups, but participants are not randomly assigned to the groups. This helps assess change over time and control for initial group differences.

5

Interrupted time-series design

A quasi-experimental design where a single group is measured on a dependent variable multiple times before and after an intervention or event. The goal is to detect whether the “interruption” caused a significant change in the trend.

6

Nonequivalent Control Group Interrupted Time-Series Design

 A more advanced design combining elements of a nonequivalent control group and an interrupted time-series. Two or more groups are compared over time, with one group experiencing an intervention or event, allowing researchers to evaluate effects more confidently.

7

Wait-list design

A quasi-experimental design where all participants plan to receive the treatment, but some receive it later than others. The group waiting acts as a control during the delay, helping to assess the treatment's effectiveness.

8

Small-N Design

A research design that focuses on a small number of participants (often just a few) to gather detailed data over time. Often used in clinical or applied settings to assess individual responses to interventions.

9

Stable-baseline Design

A type of small-N design where researchers observe a participant’s behavior for a long baseline period before introducing the treatment. If behavior remains stable during baseline and changes after treatment, this suggests an effect.

10

Multiple-baseline Design

A small-N design where researchers stagger the introduction of the treatment across different times, situations, or individuals. This helps control for external factors and strengthens causal inference.

11

Reversal Design

 Also known as ABA or ABAB design, this involves introducing and then removing the treatment to see if the behavior returns to baseline. It helps confirm that the treatment, not other factors, caused the change.

12

Single-N design

 A type of small-N design that focuses on just one participant. It involves repeated, systematic measurement and often includes baseline and treatment phases to assess changes in behavior.

13

Replicable

A study is replicable if its results can be obtained again when the study is repeated. Replication is essential for confirming the reliability of scientific findings.

14

Direct Replication

 A type of replication in which researchers repeat an original study as closely as possible to see whether the same results are obtained with a new sample.

15

Conceptual Replication

A replication that tests the same hypothesis as the original study, but uses different methods, operational definitions, or procedures to see if the effect generalizes across settings.

16

Replication-plus-extension

A replication study that repeats the original experiment but adds new variables or conditions to test additional questions or expand on the original findings.

17

Scientific literature

 The collection of all published studies in a particular area of research. It includes original studies, review articles, and theoretical papers.

18

Meta-analysis

A statistical technique that combines the results of many studies on the same topic to estimate the overall effect size and identify patterns among study results.
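The core calculation behind a meta-analysis can be sketched as an inverse-variance weighted average of effect sizes, as in this minimal Python example (the function name and study numbers are hypothetical, and real meta-analyses add much more, e.g. random-effects models):

```python
def fixed_effect_meta(effects, variances):
    """Inverse-variance weighted average of study effect sizes.
    More precise studies (smaller variance) get more weight."""
    weights = [1.0 / v for v in variances]
    return sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# Three hypothetical studies of the same effect:
summary = fixed_effect_meta(effects=[0.20, 0.50, 0.35],
                            variances=[0.04, 0.01, 0.02])
```

Note that the summary lands closest to the most precise study, which is the point of the weighting.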

19

File Drawer problem

A problem in scientific literature where studies with null or non-significant results are less likely to be published, meaning the published literature may overestimate the true effect size.

20

HARKing

 Stands for "Hypothesizing After the Results are Known." This refers to creating or revising a hypothesis after seeing the results, which threatens the integrity of the research process.

21

P-hacking

The practice of manipulating data or analysis (e.g., stopping data collection early, removing outliers, testing many variables) to produce a statistically significant result (p < .05).

22

Open science

A movement toward transparency in research, encouraging practices like sharing data, materials, and preregistering studies so others can verify or build on the work.

23

open data

 When researchers share the raw data from their study publicly so that others can analyze it or reproduce the findings.

24

open materials

 When researchers make their study materials (e.g., surveys, stimuli, instructions) publicly available to allow for replication or adaptation.

25

Preregistration

The practice of registering the study's hypothesis, design, and analysis plan before collecting data, which helps prevent HARKing and p-hacking.

26

Ecological validity

A type of external validity that refers to how well the study setting or tasks resemble real-world situations.

27

Theory-testing mode

 A research approach focused on testing theories in controlled settings, often prioritizing internal validity over real-world applicability.

28

generalization mode

A research approach focused on applying findings to the real world and generalizing results to different people, places, or times; emphasizes external validity.

29

Cultural psychology

A subfield of psychology that studies how cultural contexts shape people’s thoughts, feelings, and behaviors. It often emphasizes generalization mode to understand diverse populations.

  • WEIRD: an acronym (Western, Educated, Industrialized, Rich, and Democratic) describing the narrow samples that dominate psychology research; cultural psychology seeks to study populations beyond them.

30

Field setting

A real-world environment where a study is conducted, as opposed to a laboratory. Field settings often have higher external validity.

31

Experimental realism

The degree to which a study is psychologically engaging and participants experience it as real and involving, regardless of whether it looks like the real world.

32

Experiment

A study where the researcher manipulates one variable and measures its effect on another.

  • ex. Giving one group caffeine and another no caffeine to see its effect on memory performance.

33

Manipulated variable

The variable the researcher changes to test its effects.

  • Ex: The amount of caffeine given (none, low, high).

  • Also called the independent variable.

34

Measured variable

Records of participants’ thoughts, feelings, or behaviors that are measured, not directly controlled, by the researcher.

  • Ex. memory test scores after caffeine consumption

  • Also called the dependent variable.

35

condition variable

The levels or versions of the independent variable.

36

control variable

Factors held constant across conditions so they cannot affect the outcome.

  • Ex. time of day

37

comparison group

a group used to compare results against the experimental group 

  • Ex. a group that receives no caffeine when testing caffeine effects

38

control group

A type of comparison group that does not receive the treatment; a neutral condition.

  • Ex. participants receive a sugar pill instead of caffeine 

39

treatment group

The group that receives the treatment (the manipulated level of the independent variable).

  • Ex. participants who receive caffeine 

40

placebo group

A control group that receives a fake, inert treatment.

  • Ex. Participants who drink decaffeinated coffee, thinking it has caffeine.

41

Confound

A second variable that varies systematically along with the independent variable, providing an alternative explanation for the results; a threat to internal validity.

  • Ex. If caffeine and sugar are both given, it’s unclear which caused the effect.

42

Design confound

A specific kind of confound that occurs due to poor experimental design.

  • Ex. The caffeine group gets more attention from researchers than the control group.

43

Systematic variability

Variability that is related to the independent variable.

  • ex. Only the caffeine group gets a more engaging researcher

44

Unsystematic Variability

 Random differences across participants that are unrelated to the IV.

  • Ex. Some participants are naturally more alert than others.

45

Selection effects

 Occurs when participants in different groups are not randomly assigned.

  • Ex. Participants choose whether they want caffeine or not

46

random assignment

Participants are randomly placed in groups to avoid selection effects

  • Ex.  Names drawn from a hat to assign groups.
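The names-from-a-hat idea can be sketched in a few lines of Python (function and label names here are hypothetical):

```python
import random

def randomly_assign(participants, groups=("caffeine", "control")):
    """Shuffle the participant list, then deal it out round-robin,
    so group membership is determined by chance alone."""
    pool = list(participants)
    random.shuffle(pool)
    return {g: pool[i::len(groups)] for i, g in enumerate(groups)}

participants = ["P%02d" % i for i in range(10)]
assignment = randomly_assign(participants)  # two chance-formed groups of 5
```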

47

matched groups

 participants are matched on a key variable before random assignment

  • Ex. Matching by age or IQ before assigning to caffeine or control

48

Independent-group design

 each participant experiences only one condition.

  • ex. One group gets caffeine, another gets a placebo.

49

Within-group design

Each participant experiences all conditions.

  • ex. Everyone tries caffeine one week and no caffeine another week.

50

Posttest only design

Participants are tested only once, after the manipulation.

  • ex: Memory is tested only after caffeine is consumed.

  • Advantage: simple

  • Disadvantage: possible random assignment failure 

51

Pretest/posttest designs

 Participants are tested before and after the manipulation

  •  ex: Memory is tested before and after caffeine consumption.

  • Advantage: controls for failures of random assignment

  • Disadvantage: demand characteristics  

52

Repeated-measures design

A within-groups design where participants are exposed to each condition and measured after each.

  • ex. Each person takes a memory test after drinking caffeine and again after no caffeine.

  • Advantages: equivalence across conditions, increased statistical power (functionally doubles the sample size), decreased noise

  • Disadvantages: order effects, including carryover, sensitization, practice, and fatigue effects

53

Concurrent measures designs

 Participants are exposed to all levels of the IV at the same time, and a preference or choice is recorded.

  • ex. Babies are shown two faces at once, one male and one female, and researchers see which they look at longer

54

Order effect

the order in which conditions are presented affects results 

  • Ex. Participants do better on the second memory test because they practiced

55

Practice effect

 improvement due to repeated exposure 

Ex. Participants score higher the second time because they’re more familiar with the test.

56

Carryover effect

one condition affects performance in the next 

  • Ex.  Caffeine taken earlier still affects results during the second condition.

57

Counterbalancing

 Presenting conditions in different orders to cancel out order effects.

  • ex.  Half get caffeine first, half get placebo first

58

Full counterbalancing

all possible condition orders are used.

  • Ex. For two conditions (A and B), participants are split between AB and BA.
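Enumerating every possible order is exactly what `itertools.permutations` does, so full counterbalancing can be sketched like this in Python:

```python
from itertools import permutations

conditions = ["A", "B", "C"]
# Full counterbalancing uses every possible order: 3! = 6 of them.
all_orders = list(permutations(conditions))
```

With k conditions there are k! orders, which is why full counterbalancing quickly becomes impractical and partial schemes are used instead.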

59

partial counterbalancing

Only some of the possible condition orders are used

  •  ex. Randomly choosing a few sequences from all possibilities

60

latin square

A counterbalancing technique ensuring each condition appears in each ordinal position equally often.

  •  ex: In a study with three tasks, each task is shown first, second, and third equally across participants
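One simple way to build such a square is cyclic rotation, sketched below in Python (a basic Latin square; balanced variants that also control immediate carryover exist but are not shown):

```python
def latin_square(conditions):
    """Cyclic Latin square: row i starts at condition i, so every
    condition appears exactly once in every ordinal position."""
    n = len(conditions)
    return [[conditions[(i + j) % n] for j in range(n)] for i in range(n)]

orders = latin_square(["task1", "task2", "task3"])
# Each column (position) of `orders` contains every task exactly once.
```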

61

Demand characteristic

Participants guess the study’s purpose and change their behavior.

ex: Participants try harder on a memory test if they think that’s the goal

62

Manipulation check

A test to see if the manipulation worked.

  • ex: Asking participants how alert they felt after drinking caffeine

63

pilot study

A small, preliminary study to test the design

  • Ex. Running the caffeine study on a few people to identify any issues

64

One-group pretest/posttest design

A study where one group is measured before and after a treatment, but no comparison group is used.

  • Example: Measuring stress levels before and after a meditation program in the same group.

65

Maturation Threat

A threat to internal validity where participants change over time naturally.

  • Example: Kids improve in reading simply because they’re getting older, not due to an intervention.

  • prevention: comparison groups

66

History threat

An external event happens during the study that affects all participants.

  • Example: A new national health campaign starts while you're testing a health program.

  • prevention: use comparison group

67

regression threat

Extremely high or low scores tend to move closer to average on a retest due to chance.

  • Example: Students scoring very poorly on a pretest do better later simply by chance

  • prevention: use comparison group

68

regression to the mean

The phenomenon behind regression threat: extreme scores tend to become less extreme (move back toward the mean) when measured again.
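The phenomenon falls out of any model where an observed score is stable ability plus day-to-day luck. A small Python simulation (all numbers hypothetical) makes it concrete:

```python
import random

random.seed(0)
# Each observed score = stable true ability + random luck on that day.
ability = [random.gauss(100, 10) for _ in range(10000)]
test1 = [a + random.gauss(0, 10) for a in ability]
test2 = [a + random.gauss(0, 10) for a in ability]

# Select the 500 people with the worst test-1 scores...
worst = sorted(range(10000), key=lambda i: test1[i])[:500]
mean1 = sum(test1[i] for i in worst) / 500
mean2 = sum(test2[i] for i in worst) / 500
# ...their retest mean moves back toward 100 with no intervention at all,
# because part of their extreme test-1 scores was bad luck.
```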

69

Attrition Threat

occurs when participants drop out of the study between pre- and posttest.

Example: If only the least stressed participants complete the posttest, it skews results.

  • prevention: remove the dropped participants’ pretest scores from the analysis

70

testing threat

A threat that occurs when taking a test more than once affects scores.

  • Example: Practice effects improve scores, not the treatment.

71

Instrumentation threat

A change in the measuring instrument over time.

  • Example: Observers become stricter or more lenient in coding behaviors over the course of a study.

72

Selection-history threat

An outside event affects only one group in a multiple-group design

Example: One class gets a new teacher during an educational experiment

73

Selection-attrition threat

One group has higher dropout rates than another, potentially biasing results.

  • Example: More people leave the experimental group than the control group in a therapy study.

74

Observer bias

bias that occurs when observer expectations influence the interpretation of participant behaviors or the outcome of the study.

  • Example: A researcher unconsciously rates behaviors as more positive in the treatment group.

75

double blind study

Neither the participants nor the researchers know who’s in the treatment or control group

  • This design helps reduce observer bias and demand characteristics.

76

masked design

Only the observers are unaware of group assignments (also called single-blind).

  • Example: The person rating a behavior doesn’t know whether the participant got the treatment.

77

placebo effect

improvement caused by participants' belief in the treatment, not the treatment itself

  •  Example: A person feels less pain after taking a sugar pill they believe is a painkiller.

78

Double blind placebo control study

Both participants and experimenters are unaware of who gets the placebo and who gets the real treatment.

79

null effect

 When there’s no significant difference between groups or conditions.

  •  Example: A study finds no difference in anxiety between a meditation and control group.

80

Ceiling effect

 All the scores are high, leaving no room to detect differences.

  • Example: A math test is too easy, so everyone scores near 100%

81

floor effect

All the scores are low, making it hard to see improvement. 

  •  Example: A reading test is too hard and everyone scores poorly.

82

measurement error

Inaccuracy in measuring the dependent variable.

  •  Example: A bathroom scale gives different weights for the same person.

83

insensitive measure

a dependent variable (or measurement tool) that lacks the precision or range to detect meaningful differences or effects between experimental groups; using the wrong tool

84

noise

Random variability within the data that can obscure true effects.

  • Example: Differences in lighting, mood, or noise levels during testing

85

situation noise

 External distractions or uncontrolled variables in the environment.

  • Example: Construction noise outside affecting concentration during testing

86

power

The likelihood that a study will show a statistically significant result when an independent variable truly has an effect in the population; the probability of not making a Type II error.

  • Example: A study with high power is more likely to detect a real difference between groups.
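Power can be estimated by simulation: run many fake experiments in which the effect truly exists and count how often the test comes out significant. A rough Python sketch (a simplified z-test with the SD treated as known, not a full power analysis):

```python
import math
import random
import statistics

def estimate_power(effect=5.0, sd=10.0, n=30, sims=2000, seed=42):
    """Fraction of simulated two-group experiments in which a real
    mean difference of `effect` is detected at the .05 level."""
    rng = random.Random(seed)
    se = sd * math.sqrt(2 / n)  # standard error of the mean difference
    hits = 0
    for _ in range(sims):
        control = [rng.gauss(0.0, sd) for _ in range(n)]
        treatment = [rng.gauss(effect, sd) for _ in range(n)]
        diff = statistics.mean(treatment) - statistics.mean(control)
        if abs(diff) / se > 1.96:  # two-tailed p < .05
            hits += 1
    return hits / sims
```

Increasing `n` raises the estimated power, which is the usual practical lever for detecting real effects.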

87

interaction effect

A result from a factorial design, in which the difference in the levels of one independent variable changes, depending on the level of the other independent variable

  • example: you’re testing how exercise (high vs. low) and diet (healthy vs. unhealthy) affect weight loss.

    • The benefit of exercise may be greater when the diet is healthy, showing an interaction between exercise and diet.

88

factorial design

experiment with two or more independent variables 

  • Example: A 2×2 factorial design studying: Teaching method (lecture vs. hands-on) & Study time (short vs. long)

89

cell

a particular combination of the conditions of each independent variable

90

Participant variable

 A variable that is measured, not manipulated, but used as an IV in factorial designs.

  • Often demographic or trait-based (e.g., age, gender, personality, etc.)

  • Example:  Studying how test anxiety (high vs. low, measured) interacts with study method (manipulated) on performance.

91

main effect

The overall effect of one independent variable on the dependent variable

  • Example: If hands-on teaching improves scores regardless of study time, that’s a main effect of teaching method.

92

marginal means

The mean for ONE level of ONE independent variable

  • Example: If scores for hands-on = 85 and lecture = 75 (averaged over both study times), these are marginal means.

93

valence

refers to the emotional value associated with a stimulus — whether it is positive, negative, or neutral in emotional tone

94

bivariate correlation

An association that involves exactly two variables

95

correlation

The statistic used when both variables are continuous (quantitative).

96

T-test

The statistic used when one variable is continuous and the other is categorical.

97

Chi Square

The statistic used when both variables are categorical.

98

Statistical significance

In NHST, the conclusion is assigned when p < .05; that is, when it is unlikely the result came from the null-hypothesis population
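The logic of a p-value can be illustrated with a permutation test: if the null hypothesis were true, group labels would be interchangeable, so we shuffle them and ask how often chance alone produces a difference as big as the observed one. A Python sketch (one of several ways to compute a p-value, not the only one):

```python
import random
import statistics

def permutation_p(group_a, group_b, sims=5000, seed=1):
    """Approximate two-tailed p-value: the share of label shuffles that
    yield a mean difference at least as large as the observed one."""
    rng = random.Random(seed)
    observed = abs(statistics.mean(group_a) - statistics.mean(group_b))
    pooled = list(group_a) + list(group_b)
    n = len(group_a)
    extreme = 0
    for _ in range(sims):
        rng.shuffle(pooled)
        diff = abs(statistics.mean(pooled[:n]) - statistics.mean(pooled[n:]))
        if diff >= observed:
            extreme += 1
    return extreme / sims
```

Clearly separated groups give a tiny p (significant); identical groups give p = 1.0.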

99

Replication

The process of conducting a study again to test whether the result is consistent

100

Outlier

A score that stands out as either much higher or much lower than most of the other scores in a sample.
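One common rule of thumb is to flag scores far from the mean in standard-deviation (z-score) units; a minimal Python sketch (the cutoff of 2 and the sample scores are hypothetical, and other rules such as the IQR method exist):

```python
import statistics

def find_outliers(scores, z_cutoff=2.0):
    """Flag scores more than z_cutoff standard deviations from the mean."""
    mean = statistics.mean(scores)
    sd = statistics.stdev(scores)
    return [x for x in scores if abs(x - mean) / sd > z_cutoff]

find_outliers([10, 12, 11, 9, 10, 11, 10, 12, 9, 50])  # flags the 50
```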