Experimental Methods Exam 4



Psychology


72 Terms


Factorial Design

more than one factor/variable, and each factor/variable has more than one condition/level; generally, there should be a condition for each possible combination of independent variables

Factor

an independent variable; something that can influence the outcome (the dependent variable), ex. medication, therapy, training

Condition

a level; the actual manifestations of your IV in the study, ex. placebo, low dose, high dose

3×3×2 Design

number of IVs = 3; levels in first IV = 3; levels in second IV = 3; levels in third IV = 2; total number of conditions = 3 × 3 × 2 = 18
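The arithmetic behind a design label can be sketched in Python; the factor levels below just mirror the 3×3×2 example:

```python
# The number of conditions in a factorial design is the product of
# the number of levels of each factor (illustrative 3x3x2 design).
from math import prod

levels = [3, 3, 2]           # levels of IV1, IV2, IV3
n_ivs = len(levels)          # 3 independent variables
n_conditions = prod(levels)  # 3 * 3 * 2 = 18
print(n_ivs, n_conditions)
```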


True Experimental Factorial Design

each independent variable can be manipulated


Hybrid Factorial Design

at least one variable cannot be manipulated (quasi-independent), and is instead measured (ex. gender); no causation can be established for the variable that is not manipulated

Mixed Design

at least one independent variable is within-persons and at least one is between-persons


Cell Mean

the average on the dependent variable for participants with a specific combination of the levels of the independent variables; specific to the condition (which is a combination of variables)


Marginal Mean

the average of all participants on one level of the independent variable, ignoring the other independent variables; specific to the variable (an average of conditions for that variable)
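Cell and marginal means can be illustrated with a small pandas sketch; the factors (dose, therapy) and scores below are made up:

```python
import pandas as pd

# Hypothetical 2x2 data: dose (placebo/drug) x therapy (none/cbt)
df = pd.DataFrame({
    "dose":    ["placebo", "placebo", "drug", "drug"] * 2,
    "therapy": ["none", "cbt"] * 4,
    "score":   [4, 6, 7, 9, 4, 8, 7, 11],
})

# Cell means: one mean per combination of levels (specific to a condition)
cell_means = df.groupby(["dose", "therapy"])["score"].mean()

# Marginal means: average over one IV, ignoring the other
marginal_dose = df.groupby("dose")["score"].mean()
print(cell_means)
print(marginal_dose)
```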

Main Effects

the effect of an independent variable on the dependent variable; all, some, or none of the independent variables may have a main effect on the dependent variable; marginal means are used to test main effects


Main Effect Hypothesis

a prediction that focuses on the effect of one independent variable on the dependent variable at a time, ignoring all other independent variables


Interaction Effects

the effect of one independent variable on the dependent variable depends on another independent variable; you need to know about both independent variables to most accurately understand the dependent variable; can have a significant interaction, even in the absence of significant main effects; cell means relevant here


Interaction Effect Hypothesis

a prediction about how the levels of one independent variable will combine with another independent variable to impact the dependent variable in a way that extends beyond the sum of the two separate main effects


Vignettes

a description of a hypothetical situation, event, or scenario to which participants react


Analysis in Factorial Design

reliability, manipulation check, descriptive statistics, and inferential statistics


Descriptive Statistics (Factorial Design)

who was in our study? what was the average receptivity? does the mean receptivity look different by condition?

Two-Way ANOVA

a statistical test that allows us to simultaneously test how two separate nominal or categorical independent variables (or factors) influence the dependent variable, and how those independent variables interact to influence the dependent variable (inferential statistics); one test will tell us the significance of each main effect, as well as the interaction
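For intuition, here is a from-scratch sketch of the sums of squares behind a balanced two-way ANOVA, using made-up 2×2 data with three scores per cell; in practice you would use a statistics package rather than hand-rolling this:

```python
import numpy as np

# Hypothetical balanced 2x2 design, 3 scores per cell:
# axis 0 = IV A (2 levels), axis 1 = IV B (2 levels), axis 2 = subjects
y = np.array([[[3., 4., 5.], [6., 7., 8.]],
              [[4., 5., 6.], [10., 11., 12.]]])
a, b, n = y.shape
grand = y.mean()

cell = y.mean(axis=2)            # cell means (one per condition)
mean_a = y.mean(axis=(1, 2))     # marginal means of A
mean_b = y.mean(axis=(0, 2))     # marginal means of B

ss_a = n * b * ((mean_a - grand) ** 2).sum()   # main effect of A
ss_b = n * a * ((mean_b - grand) ** 2).sum()   # main effect of B
ss_cells = n * ((cell - grand) ** 2).sum()
ss_ab = ss_cells - ss_a - ss_b                 # interaction
ss_w = ((y - cell[:, :, None]) ** 2).sum()     # within-cell (error)

df_a, df_b = a - 1, b - 1
df_ab, df_w = df_a * df_b, a * b * (n - 1)

# Each F is the effect's mean square over the error mean square
f_a = (ss_a / df_a) / (ss_w / df_w)
f_b = (ss_b / df_b) / (ss_w / df_w)
f_ab = (ss_ab / df_ab) / (ss_w / df_w)
print(f_a, f_b, f_ab)
```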

F(2, 87) = #.##, p = .##, eta² = .##

F-test symbol(between-groups df, within-groups df) = F-score, p = significance level, eta² = effect size

Single-Subject Design

a special type of within-subjects design using one participant or one group to assess changes within that individual or group; if nothing is manipulated, it is a case study; if something is manipulated, it is a single-subject design


How can something be manipulated with just one subject?

special case of within-person design, measures are taken before and after treatment to test for change, may remove treatment (or reapply treatment) for additional tests of change

A-B Design

a single-subject design in which researchers take a baseline measurement (A), then introduce the intervention and measure the same variable again (B)


A-B-A Design

a single-subject design in which researchers establish a baseline (A), introduce the intervention and measure the same variable again (B), then remove the intervention and take another measurement (A)


A-B-A-B Design

a single-subject design in which researchers establish baseline (A), introduce the intervention (B), remove the intervention (A), and then reintroduce the intervention (B), measuring the DV each time

Mixed Design

an experimental design that combines within-subjects and between-subjects methods of data collection; multiple IVs with 2+ levels, with one or more variables manipulated between-subjects and one or more within-subjects; any one subject receives multiple conditions for one variable and only one condition for another


Mixed Design Analysis

manipulation check, reliability, descriptive, inferential


Between-Subjects

expose participants to one level of treatment, randomly assign participants to one condition, strength is internal and external validity, weakness is power; ex. two-group design, multigroup design, factorial design, and mixed design

Within-Subjects

expose participants to all levels of treatment, randomly assign participants to a sequence of treatment conditions; strength is power, weakness is internal and external validity; ex. pretest-posttest design, repeated-measures design, and mixed design

Benefits of Mixed Design

combines some of the strengths of within- and between-subjects designs: the power of within-subjects designs (fewer subjects needed for that variable) and the control of between-subjects designs (random assignment, no order effects); can test main effects and interactions

Drawbacks of Mixed Design

combines some of the weaknesses of within- and between-subjects designs: order effects from the within-subjects variable and the increased number of participants needed for the between-subjects variable; potential confounds if conditions are not equivalent


Waiting-List Control Group

no treatment given initially; a control group often used in clinical research; participants in this group do not receive the treatment or intervention until after the completion of the study

Treatment-as-Usual Control Group

a comparison group often used in clinical research in which an already-established treatment is administered for comparison to the experimental treatment


Experimenter-Expectancy Effect

occurs when a bias causes a researcher to unconsciously influence the participants of an experiment


Double-Blind

both the participants and the administrators of treatment are not aware of, or blind to, the types of treatment being provided in order to reduce the likelihood that expectancies or knowledge of condition will influence the results


Single-Blind

only one party is aware of the condition (usually the researcher is aware and the participant is not)


Mixed Design ANOVA

a statistical analysis that tests for differences between two or more categorical independent variables, where at least one is a between-subjects variable and another is a within-subjects variable; can tell us if there are differences in conditions, then use a post-hoc test to explore specific differences; also explores main effects and interaction effects (inferential analysis)


Program Evaluation

using the scientific method to assess whether an organized activity is achieving its intended objectives; essentially research in an applied setting


Areas of Program Evaluation

needs, process, and outcome

Needs

an assessment of which features of a program are most valuable and whom they benefit most; what would be valuable, and for whom? purpose is to identify program features to continue or discontinue and to determine potential new components to add

Process

an assessment of general program operation, including whom the program serves and how the program delivers services to that population; who does an existing program serve, and how? purpose is to determine ways to improve program implementation and delivery; also seeks to ensure a match between program goals and those served


Outcome

an assessment of whether a program effectively produces outcomes that are consistent with the stated objectives or goals; is the program meeting its goals?; purpose is to identify unmet goals and outcomes the program can improve in order to better serve clients, or to establish evidence of program effectiveness


Focus Groups

several participants form a group and discuss a specific topic with the researcher/facilitator/moderator; qualitative approach


Interviews

researcher/facilitator conducts a conversation with a participant; generally one participant at a time; qualitative approach


Case Study

usually a specific person who is studied in detail over time (can be a group or organization too); qualitative design

Content Analysis

reviewing written records for themes; qualitative approach

Visual Ethnography

reviewing visual media, like pictures and movies; qualitative approach


Quantitative Approach

survey (correlational); quasi-experiment to compare different programs/conditions; archival data


Archival Data

existing employee or client records that can be gathered, encoded, and analyzed

Structured Interviews

a set list of questions asked of every participant

Unstructured Interviews

no set list of questions, just an idea of the topics to be covered


Critical Incident Questions

focusing on a key event


Behavioral Questions

focus on discussing things the participant has actually done


Situational Questions

discuss what the participant would do in a situation


Single-Item Indicator

only one item or question being used to measure a variable


Good Interview Questions

clear and easy to understand, open-ended, eliminates assumptions

Plan for Program Design

identify stakeholders; understand the goals of the evaluation; describe the current program; create a plan; execute; communicate results

Stakeholders

individuals who will use the evaluation and who will benefit from it; identified by engaging organizational leadership as well as those with less influence


Understand Goals of Evaluation

we identify what the evaluation hopes to accomplish, the steps needed to do so, and how the organization hopes to use the evaluation’s results

Describe the Current Program

we collect information about the program’s mission and specific goals, the nature of the program and the services delivered, and whom the program serves

Create a Plan

we formulate and describe a plan that outlines our evaluation’s design: what measures to use (with a literature review to obtain established measures), whom to collect data from, and when


Execute

collect data based on our design, analyze the data using appropriate statistics

Communicate Results

form conclusions and disseminate results to stakeholders (written report, presentation, meeting)


Single Sample T-Test

a statistic to evaluate whether a sample mean statistically differs from a specific value; compares scale score to some set value (such as the mid-point)
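The computation is small enough to sketch with the standard library; the scale scores and midpoint below are hypothetical:

```python
from math import sqrt
from statistics import mean, stdev

# Hypothetical 1-7 scale scores, tested against the scale midpoint of 4
scores = [5, 6, 4, 5, 7, 6, 5, 4]
midpoint = 4

m, s, n = mean(scores), stdev(scores), len(scores)
t = (m - midpoint) / (s / sqrt(n))  # one-sample t statistic, df = n - 1
print(round(t, 2))
```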

Word Clouds

a visual representation of how frequently certain words are used in a qualitative assessment; larger words indicate higher frequency of use
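The frequency count underneath a word cloud can be sketched with the standard library; the survey responses here are invented:

```python
from collections import Counter
import re

# Hypothetical open-ended survey responses
responses = [
    "The program staff were helpful and friendly",
    "Helpful sessions, but scheduling was hard",
    "Staff were friendly and the sessions helpful",
]

# Lowercase and tokenize, then count occurrences of each word
words = re.findall(r"[a-z]+", " ".join(responses).lower())
freq = Counter(words)

# In a word cloud, higher counts would render as larger words
print(freq.most_common(3))
```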

Biases

our assumptions and experiences influence how we perceive things; as humans studying other humans, it can be hard to set these aside


Heuristics

mental shortcuts

To test causality

manipulate the IV before measuring the DV, randomly assign participants, and make conditions equivalent (removing confounding variables)

Pseudoscience

anecdotes; evidence = “proof”; confirmation bias; handpicking supporting evidence; ideas are fixed and hostile to change; no peer review; overstates findings; replication is unimportant

Science

empirical data; evidence = “support for,” not proof; falsifiable and open to refutation; examines all evidence and updates ideas as data are considered (open to change); peer review; cautious interpretation; replication is required

Ethics

avoid exposing participants to undue harm; consider who benefits from the findings; avoid exploiting vulnerable populations; participation is voluntary and can stop at any point, with consent necessary; debrief participants after the study ends, especially if deception was used; maintain data integrity

Reliability

is the assessment consistent? across items of the same scale (internal consistency); across raters (inter-rater); across forms (parallel forms); across time (test-retest)
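Internal consistency is commonly summarized with Cronbach’s alpha; a minimal sketch on made-up item scores:

```python
from statistics import variance

# Hypothetical scores: 5 respondents x 3 items of the same scale
items = [
    [4, 3, 5, 2, 4],   # item 1
    [5, 3, 4, 2, 5],   # item 2
    [4, 4, 5, 1, 4],   # item 3
]
k = len(items)
totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals)
alpha = k / (k - 1) * (1 - sum(variance(i) for i in items) / variance(totals))
print(round(alpha, 2))
```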


Validity

is it measuring what it is supposed to?


Skills to be a good scientist

creativity, objectivity, communication, empiricism, skepticism, open-mindedness