Research Methods Final Exam

1
New cards

Causal Claim

One variable causes the other (X affects/influences/causes Y); requires an experiment

2
New cards

Example of Causal Claim

Using longhand notes improves memory compared to typing

3
New cards

Experiment

The researcher manipulates at least one variable and measures at least one outcome

4
New cards

Independent Variable (Manipulated)

The variable the researcher manipulates, typically by assigning participants to its different levels (conditions)

5
New cards

Conditions

The levels of the independent variable

6
New cards

Dependent Variable (Measured)

Researcher records the outcome (behavior, attitudes, etc.)

7
New cards

Control Variable

any variable the researcher keeps constant for all participants. Nothing about it “varies”—it is held steady.

8
New cards

Experiments Meet the 3 Criteria to Support Causal Claims

  1. Experiments establish covariance

  2. Experiments establish temporal precedence

  3. Well-designed experiments establish internal validity

9
New cards

Covariance

the dependent variable changes when the independent variable changes—the variables covary together

10
New cards

Comparison Groups

help us eliminate alternative explanations

11
New cards

Control Group

baseline/no-treatment comparison

12
New cards

Treatment group(s)

active levels of the independent variable

13
New cards

Placebo Group

a special type of control group used to separate real treatment effects from expectation effects

14
New cards

No covariance exists if…

the groups do not differ (no evidence the independent variable is related to the dependent variable)

15
New cards

Temporal Precedence

the cause must come before the effect; the independent variable must occur before changes in the dependent variable

16
New cards

Why do experiments excel at Temporal Precedence?

Researchers manipulate the independent variable first, then measure the dependent variable after. So, the sequence is clear (cause → effect)

17
New cards

Internal Validity

Confidence that the independent variable caused the change in the dependent variable. Alternative explanations (confounds) must be ruled out because these can threaten internal validity.

18
New cards

Why are experiments strong with Internal Validity?

Well-designed experiments control for potential confounds

19
New cards

3 major threats to internal validity:

  1. Design confounds

  2. Selection effects

  3. Order effects

20
New cards

Design Confound (aka design flaw)

when another variable systematically varies with the independent variable, creating an alternative explanation for the results

21
New cards

Why do design confounds threaten Internal Validity?

you can’t tell whether the independent variable caused the change in the dependent variable or whether the confounding variable did

22
New cards

Why are design confounds systematic?

The extra variable forms patterned differences tied to the independent variable, which is why it threatens internal validity

23
New cards

Systematic Variability

Varies with the independent variable (patterned differences). Creates confounds → threatens internal validity

24
New cards

Unsystematic Variability

Random differences across participants in both groups. Doesn’t systematically track the independent variable → not a confound. Adds noise but does NOT threaten internal validity

25
New cards

Selection Effect

Occurs in an experiment when participants at one level of the independent variable are systematically different from participants at the other levels. Groups differ because of who ends up in them, not because of the independent variable

26
New cards

Why do Selection Effects threaten Internal Validity?

You can’t tell whether the dependent variable changed because of the independent variable or because the groups were different types of people to begin with

27
New cards

Generally, what is the main priority for experimental studies?

internal validity

28
New cards

The question “Can the causal relationship generalize to other people, places, and times?” refers to what type of validity?

external

29
New cards

Which of the following is a reason that researchers typically choose to prioritize internal over external validity?

Having a confound-free setting allows them to make causal claims

30
New cards

Random selection enhances __________ validity, and random assignment enhances __________ validity.

external; internal

31
New cards

A threat to internal validity occurs only if a potential design confound varies __________ with the independent variable.

systematically

32
New cards

Experiments use random assignment to avoid which of the following?

selection effects
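
The following is a minimal Python sketch (not from the deck; the participant IDs and group labels are made up) of how random assignment works: shuffling participants before splitting them into conditions spreads pre-existing differences across groups, which is what prevents selection effects.

```python
import random

# Hypothetical participant IDs; in a real study these would come from the sample.
participants = [f"P{i:02d}" for i in range(1, 21)]

random.shuffle(participants)  # randomize order so assignment ignores who people are

half = len(participants) // 2
groups = {
    "treatment": participants[:half],  # first half after shuffling
    "control": participants[half:],    # second half after shuffling
}

for condition, members in groups.items():
    print(condition, members)
```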

33
New cards

Which of the following research designs is used to address possible selection effects?

matched-groups designs

34
New cards

One reason researchers use within-group designs is

they require fewer participants

35
New cards

Which of the following is a threat to internal validity found in within-groups designs but not in independent-groups designs?

practice effects

36
New cards

__________ is used to control order effects in an experiment.

counterbalancing

37
New cards

When interrogating experiments, on which of the big validities should a person focus?

internal validity

38
New cards

Which of the following is never found in a one-group, pretest/posttest design?

a comparison group

39
New cards

Which of the following threats to internal validity can apply even when a control group is used?

demand characteristics

40
New cards

Which of the following is a method researchers use to identify or correct for attrition?

determine whether those who dropped out of the study had a different pattern of scores than those who stayed in the study
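
A minimal sketch of this attrition check (hypothetical IDs and scores): compare the pretest scores of participants who dropped out with those who stayed; a clear gap suggests systematic attrition.

```python
from statistics import mean

# Hypothetical pretest scores keyed by participant ID.
pretest = {"P1": 12, "P2": 30, "P3": 14, "P4": 29, "P5": 13, "P6": 15}
dropped_out = {"P2", "P4"}  # participants who left before the posttest

stayed = [score for pid, score in pretest.items() if pid not in dropped_out]
dropped = [score for pid, score in pretest.items() if pid in dropped_out]

# A large gap between these means suggests the dropouts were systematically
# different (e.g., extreme scorers), which threatens internal validity.
print("mean pretest (stayed): ", mean(stayed))
print("mean pretest (dropped):", mean(dropped))
```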

41
New cards

Which of the following can help prevent testing effects?

establishing reliability of the measure

42
New cards

Which of the following is a reason that a study might yield a null result?

too much within-group variance

43
New cards

Which of the following is true of ceiling and floor effects?

They can be caused by poorly designed dependent variables

44
New cards

A confound that keeps a researcher from finding a relationship between two variables is known as a(n) __________ confound.

reverse

45
New cards

Which of the following things can be done to reduce measurement error?

using more reliable measurements

46
New cards

A causal claim has what basic form?

X → Y

47
New cards

What is the primary purpose of a simple experiment?

To test causal claims

48
New cards

In an experiment, the independent variable (IV) is the variable that is

Manipulated by the researcher

49
New cards

Which statement best describes a control variable?

A variable held constant across all groups

50
New cards

Why is the dependent variable (DV) important to internal validity?

It shows whether the manipulation produced an effect

51
New cards

Which of the following satisfies the temporal precedence criterion?

The IV is manipulated before the DV is measured

52
New cards

what is the key difference between independent-groups and within-groups designs?

Independent=different participants per condition; within=same participants in all conditions

53
New cards

Which is required for an independent-groups experiment?

Random assignment

54
New cards

In an experiment, a comparison group is used to:

Provide a baseline to compare the treatment effect

55
New cards

A study measures the DV only after participants complete the IV condition. Which design is this?

Posttest-only

56
New cards

In a repeated-measures design:

Each person experiences all levels of the IV at different times

57
New cards

A design confound occurs when:

An outside variable varies systematically with the IV

58
New cards

Selection effects occur when:

Participants self-select into conditions

59
New cards

What is an example of a practice effect?

Participants remember a previous condition, influencing the next

60
New cards

What is the purpose of counterbalancing?

To control for order effects in within-groups designs
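
A minimal sketch of full counterbalancing (hypothetical condition labels and sample size): every possible order of the conditions is generated and participants are cycled through the orders, so order effects average out across the sample.

```python
from itertools import permutations

conditions = ["A", "B", "C"]             # hypothetical within-groups conditions
orders = list(permutations(conditions))  # all 6 possible presentation orders

participants = [f"P{i}" for i in range(1, 13)]  # hypothetical sample of 12
for i, pid in enumerate(participants):
    order = orders[i % len(orders)]  # rotate through the orders evenly
    print(pid, "->", " then ".join(order))
```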

61
New cards

Random assignment supports which type of validity?

Internal Validity

62
New cards

A maturation threat occurs when

Participants naturally change over time

63
New cards

Which solution best addresses a maturation threat?

Include a no-treatment comparison group (both groups mature at the same rate)

64
New cards

A history threat occurs when:

An external event affects most participants between pretest & posttest

65
New cards

What is an effective way to reduce a history threat?

Include a comparison group that experiences the same outside events

66
New cards

Regression to the mean is most likely when:

A group is selected because of its unusually high or low (extreme) scores at pretest

67
New cards

What helps rule out regression to the mean?

Equivalent pretest scores in treatment & comparison groups
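
A minimal simulation (made-up numbers, no real data) showing why extreme pretest scorers regress toward the mean even with no treatment: each score is a stable true level plus random luck, and a group picked for extreme time-1 scores loses its lucky component at time 2.

```python
import random

random.seed(1)

# Each hypothetical person has a stable true level plus random noise at each test.
true_levels = [random.gauss(50, 10) for _ in range(1000)]
time1 = [t + random.gauss(0, 10) for t in true_levels]
time2 = [t + random.gauss(0, 10) for t in true_levels]

# Select the 50 people with the most extreme (highest) time-1 scores.
extreme = sorted(range(1000), key=lambda i: time1[i], reverse=True)[:50]

mean_t1 = sum(time1[i] for i in extreme) / len(extreme)
mean_t2 = sum(time2[i] for i in extreme) / len(extreme)
print(f"extreme group, time 1: {mean_t1:.1f}")
print(f"same group, time 2:   {mean_t2:.1f}  (drifts back toward 50 with no treatment)")
```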

68
New cards

A systematic attrition threat occurs when:

Extreme scorers drop out more often

69
New cards

A solution for attrition threats includes:

Removing data from dropouts

70
New cards

A testing threat occurs when:

Taking the same test more than once changes participants' scores (e.g., through practice or fatigue)

71
New cards

Which solution reduces testing threats?

Use a posttest-only design or alternate forms of the test

72
New cards

An instrumentation threat occurs when:

The measuring instrument changes over time (e.g., observers drift or the pretest and posttest forms differ)

73
New cards

Which solution helps reduce instrumentation threats?

Calibrating instruments or retraining observers

74
New cards

Which solution helps reduce placebo effects?

Use a double-blind design with a placebo control group

75
New cards

A null effect may occur because

Weak manipulation or too much noise obscured group differences

76
New cards

A researcher concludes a null effect but later discovers the DV had low reliability. What explanation fits?

Measurement error obscured true differences

77
New cards

Which situation would most likely lead a researcher to conclude the null effect may be real?

Strong manipulation, precise measures, low variability, but still no difference

78
New cards

What factor is most likely to obscure real group differences and produce a null effect?

Individual differences adding variability

79
New cards

What does an interaction effect describe?

How two IVs work together to influence the DV

80
New cards

Why do researchers use factorial designs?

To test for interactions, limits (moderators), and theories

81
New cards

In a crossover interaction:

The direction of the effect reverses depending on the other IV

82
New cards

What phrase best describes a crossover interaction?

“It depends”

83
New cards

A spreading interaction occurs when:

The effect of one IV is stronger at one level of another IV and weaker at another level

84
New cards

In factorial designs, independent variables are called:

Factors

85
New cards

What is a main effect?

The simple, overall effect of one IV on the DV

86
New cards

How many effects are interpreted in a 2 × 2 factorial design?

Two main effects and one interaction
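
A minimal sketch (hypothetical cell means, made-up numbers) of how the three effects in a 2 × 2 are read: each main effect compares marginal means for one factor, and the interaction is the difference of differences.

```python
# Hypothetical cell means for a 2 x 2 design: factor A x factor B.
cells = {
    ("A1", "B1"): 10, ("A1", "B2"): 20,
    ("A2", "B1"): 15, ("A2", "B2"): 45,
}

# Main effect of A: marginal means of the A rows, averaging over B.
a1 = (cells[("A1", "B1")] + cells[("A1", "B2")]) / 2
a2 = (cells[("A2", "B1")] + cells[("A2", "B2")]) / 2

# Main effect of B: marginal means of the B columns, averaging over A.
b1 = (cells[("A1", "B1")] + cells[("A2", "B1")]) / 2
b2 = (cells[("A1", "B2")] + cells[("A2", "B2")]) / 2

# Interaction: does the effect of B change across levels of A?
b_effect_at_a1 = cells[("A1", "B2")] - cells[("A1", "B1")]
b_effect_at_a2 = cells[("A2", "B2")] - cells[("A2", "B1")]

print("main effect of A:", a2 - a1)
print("main effect of B:", b2 - b1)
print("interaction (difference of differences):", b_effect_at_a2 - b_effect_at_a1)
```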

87
New cards

In a 3 × 2 factorial design, how many total conditions are there?

6
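
A one-step check (hypothetical factor labels) that crossing every level of a 3-level factor with a 2-level factor yields 3 × 2 = 6 conditions:

```python
from itertools import product

factor_a = ["low", "medium", "high"]  # hypothetical 3-level factor
factor_b = ["paper", "laptop"]        # hypothetical 2-level factor

conditions = list(product(factor_a, factor_b))  # every combination of levels
print(len(conditions), "conditions:", conditions)
```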

88
New cards

What is the best way to visualize factorial design results, especially interactions?

Line graph

89
New cards

Surveys & Polls

—Method of asking people to self-report their attitudes, behaviors, or opinions

—Conducted face-to-face, over the phone, via written questionnaires, or online

—There are different ways of asking these questions

—Must ensure we’re collecting accurate data

90
New cards

Ensuring Construct Validity of Surveys/Polls

  1. Choosing types of questions

  2. Writing well-worded questions

  3. Encouraging Accurate Responses

91
New cards

Types of Survey Questions

  1. Open-Ended 

  2. Forced-Choice

  3. Likert Scale

  4. Semantic Differential

92
New cards

Open-Ended Questions

Respondents can answer freely

93
New cards

Example of Open-Ended Question

“Comment on your experience as a student at SUNY New Paltz”

94
New cards

Pro and Con of Open-Ended Questions

Pro: Provides rich, detailed information

Con: Coding and categorizing responses is time-consuming

95
New cards
Forced-Choice Questions

Participants choose the best option from two or more choices; commonly used in political polls

96
New cards

Examples for Forced-Choice Questions

  • “For which candidate will you vote: A or B?”

  • “What describes you best?”

    • I like being the center of attention

    • It makes me uncomfortable to be the center of attention

97
New cards

Pros and Cons of Forced-Choice Questions

Pros: Quick to analyze; clear, comparable responses

Cons: Limited insight; may not reflect true views

98
New cards
Likert Scale Questions

Respondents rate the level or intensity of an attitude, opinion, or experience on a scale. Measures how strongly respondents feel or agree.

99
New cards

Example of Likert Scale Questions

Scales typically range from one extreme to the other. Common anchors:

  • Strongly disagree → Strongly agree

  • Never → Always

100
New cards

Pros and Cons of Likert Scale Questions

Pros: Easy to interpret; captures degree of opinion

Cons: Can lead to neutral or patterned responses
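
To tie the Likert-scale cards together, here is a minimal sketch (hypothetical anchor labels and responses) of how verbal anchors are scored numerically so that the degree of opinion can be summarized.

```python
# Map the verbal anchors of a 5-point Likert item to numbers.
anchors = {
    "Strongly disagree": 1,
    "Disagree": 2,
    "Neither agree nor disagree": 3,
    "Agree": 4,
    "Strongly agree": 5,
}

# Hypothetical responses from a few participants to one item.
responses = ["Agree", "Strongly agree", "Neither agree nor disagree", "Agree"]
scores = [anchors[r] for r in responses]

print("item scores:", scores)
print("mean rating:", sum(scores) / len(scores))
```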