Causal Claim
One variable causes the other (X affects/influences/causes Y); requires an experiment
Example of Causal Claim
Using longhand notes improves memory compared to typing
Experiment
The researcher manipulates at least one variable and measures at least one outcome
Independent Variable (Manipulated)
The variable the researcher manipulates, assigning participants (often randomly) to its levels
Conditions
The levels of the independent variable
Dependent Variable (Measured)
Researcher records the outcome (behavior, attitudes, etc.)
Control Variable
Any variable the researcher keeps constant for all participants; nothing about it “varies”—it is held steady
Experiments Meet the 3 Criteria to Support Causal Claims
Experiments establish covariance
Experiments establish temporal precedence
Well-designed experiments establish internal validity
Covariance
the dependent variable changes when the independent variable changes—the variables covary together
Comparison Groups
help us eliminate alternative explanations
Control Group
baseline/no-treatment comparison
Treatment group(s)
active levels of the independent variable
Placebo Group
A special type of control group used to distinguish real treatment effects from expectation effects
No covariance exists if…
the groups do not differ (no evidence the independent variable is related to the dependent variable)
Temporal Precedence
the cause must come before the effect; the independent variable must occur before changes in the dependent variable
Why do experiments excel at Temporal Precedence?
Researchers manipulate the independent variable first, then measure the dependent variable after. So, the sequence is clear (cause → effect)
Internal Validity
Confidence that the independent variable caused the change in the dependent variable. Alternative explanations (confounds) must be ruled out because these can threaten internal validity.
Why are experiments strong with Internal Validity?
Well-designed experiments control for potential confounds
3 major threats to internal validity:
Design confounds
Selection effects
Order effects
Design Confound (aka design flaw)
when another variable systematically varies with the independent variable, creating an alternative explanation for the results
Why do design confounds threaten Internal Validity?
you can’t tell whether the independent variable caused the change in the dependent variable or whether the confounding variable did
Why are design confounds systematic?
The extra variable forms patterned differences tied to the independent variable, which is why it threatens internal validity
Systematic Variability
Varies with the independent variable (patterned differences). Creates confounds → threatens internal validity
Unsystematic Variability
Random differences across participants in both groups. Doesn’t systematically track the independent variable → not a confound. Adds noise but does NOT threaten internal validity
Selection Effect
Occurs in an experiment when the participants at one level of the independent variable are systematically different from participants at the other levels. The groups differ because of who ends up in them, not because of the independent variable
Why do Selection Effects threaten Internal Validity?
You can’t tell whether the dependent variable changed because of the independent variable or because the groups were different types of people to begin with
Generally, what is the main priority for experimental studies?
internal validity
The question “Can the causal relationship generalize to other people, places, and times?” refers to what type of validity?
external
Which of the following is a reason that researchers typically choose to prioritize internal over external validity?
Having a confound-free setting allows them to make causal claims
Random selection enhances __________ validity, and random assignment enhances __________ validity.
external; internal
A threat to internal validity occurs only if a potential design confound varies __________ with the independent variable.
systematically
Experiments use random assignment to avoid which of the following?
selection effects
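Random assignment can be sketched in a few lines. This is a hypothetical illustration (the participant labels P1–P20 are made up), not part of any particular study:

```python
import random

# Hypothetical participant pool; the labels are placeholders.
participants = [f"P{i}" for i in range(1, 21)]

# Random assignment: shuffle the pool, then split it in half.
# Because chance alone decides who goes where, pre-existing differences
# spread out evenly across groups, which prevents selection effects.
random.shuffle(participants)
half = len(participants) // 2
treatment_group = participants[:half]
control_group = participants[half:]
```

Every participant lands in exactly one group, and no personal characteristic can systematically favor one condition.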
Which of the following research designs is used to address possible selection effects?
matched-groups designs
One reason researchers use within-group designs is
they require fewer participants
Which of the following is a threat to internal validity found in within-groups designs but not in independent-groups designs?
practice effects
__________ is used to control order effects in an experiment.
counterbalancing
When interrogating experiments, on which of the big validities should a person focus?
internal validity
Which of the following is never found in a one-group, pretest/posttest design?
a comparison group
Which of the following threats to internal validity can apply even when a control group is used?
demand characteristics
Which of the following is a method researchers use to identify or correct for attrition?
determine whether those who dropped out of the study had a different pattern of scores than those who stayed in the study
Which of the following can help prevent testing effects?
using a posttest-only design or alternate forms of the test
Which of the following is a reason that a study might yield a null result?
too much within-group variance
Which of the following is true of ceiling and floor effects?
They can be caused by poorly designed dependent variables
A confound that keeps a researcher from finding a relationship between two variables is known as a(n) __________ confound.
reverse
Which of the following things can be done to reduce measurement error?
using more reliable measurements
A causal claim has what basic form?
X → Y
What is the primary purpose of a simple experiment?
To test causal claims
In an experiment, the independent variable (IV) is the variable that is
Manipulated by the researcher
Which statement best describes a control variable?
A variable held constant across all groups
Why is the dependent variable (DV) important to internal validity?
It shows whether the manipulation produced an effect
Which of the following satisfies the temporal precedence criterion?
The IV is manipulated before the DV is measured
what is the key difference between independent-groups and within-groups designs?
Independent=different participants per condition; within=same participants in all conditions
Which is required for an independent-groups experiment?
Random assignment
In an experiment, a comparison group is used to:
Provide a baseline to compare the treatment effect
A study measures the DV only after participants complete the IV condition. Which design is this?
Posttest-only
In a repeated-measures design:
Each person experiences all levels of the IV at different times
A design confound occurs when:
An outside variable varies systematically with the IV
Selection effects occur when:
Participants self-select into conditions
What is an example of a practice effect?
Participants remember a previous condition, influencing the next
What is the purpose of counterbalancing?
To control for order effects in within-groups designs
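Full counterbalancing can be illustrated with a short sketch (the condition labels A, B, C are hypothetical):

```python
from itertools import permutations

# Hypothetical levels of a within-groups independent variable.
conditions = ["A", "B", "C"]

# Full counterbalancing: use every possible order of the conditions,
# so order effects (practice, fatigue, carryover) average out
# across participants.
orders = list(permutations(conditions))  # 3! = 6 distinct orders

# Participant i would receive the order orders[i % len(orders)],
# cycling through the list so each order is used equally often.
```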
Random assignment supports which type of validity?
Internal Validity
A maturation threat occurs when
Participants naturally change over time
Which solution best addresses a maturation threat?
Include a comparison group (both groups mature equally, so maturation cancels out)
A history threat occurs when:
An external event affects most participants between pretest & posttest
What is an effective way to reduce a history threat?
Include a comparison group that experiences the same external events
Regression to the mean is most likely when:
Participants are selected because of their extreme scores at pretest
What helps rule out regression to the mean?
Equivalent pretest scores in treatment & comparison groups
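Regression to the mean can be demonstrated with a small simulation. Everything below is a made-up sketch (true scores centered on 50, noisy tests), not data from any real study:

```python
import random

random.seed(0)  # make the sketch reproducible

# Each simulated participant has a stable true score; each test
# adds independent measurement noise on top of it.
true_scores = [random.gauss(50, 10) for _ in range(1000)]
pretest = [t + random.gauss(0, 10) for t in true_scores]
posttest = [t + random.gauss(0, 10) for t in true_scores]

# Select the 100 most extreme (highest) pretest scorers.
cutoff = sorted(pretest)[-100]
extreme = [i for i in range(1000) if pretest[i] >= cutoff]

pre_mean = sum(pretest[i] for i in extreme) / len(extreme)
post_mean = sum(posttest[i] for i in extreme) / len(extreme)
# With no treatment at all, the extreme group's posttest mean falls
# back toward the overall mean, because the lucky pretest noise that
# got them selected does not repeat at posttest.
```

This is why a comparison group with equivalent pretest scores matters: both groups regress equally, so any remaining difference can be attributed to the treatment.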
A systematic attrition threat occurs when:
Extreme scorers drop out more often
A solution for attrition threats includes:
Removing the dropouts’ pretest scores from the analysis
A testing threat occurs when:
Repeated testing itself changes participants’ scores (e.g., through practice or fatigue)
Which solution reduces testing threats?
Using a posttest-only design or alternate forms of the test
An instrumentation threat occurs when:
The measuring instrument changes over time (e.g., observers drift in their coding standards)
Which solution helps reduce instrumentation threats?
Calibrating instruments or retraining observers
Which solution helps reduce placebo effects?
A double-blind design with a placebo control group
A null effect may occur because
Weak manipulation or too much noise obscured group differences
A researcher concludes a null effect but later discovers the DV had low reliability. What explanation fits?
Measurement error obscured true differences
Which situation would most likely lead a researcher to conclude the null effect may be real?
Strong manipulation, precise measures, low variability, but still no difference
What factor is most likely to obscure real group differences and produce a null effect?
Individual differences adding variability
What does an interaction effect describe?
How two IVs work together to influence the DV
Why do researchers use factorial designs?
To test for interactions, limits (moderators), and theories
In a crossover interaction:
The direction of the effect reverses depending on the other IV
What phrase best describes a crossover interaction?
“It depends”
A spreading interaction occurs when:
The effect of one IV is stronger at one level of another IV and weaker at another level
In factorial designs, independent variables are called:
Factors
What is a main effect?
The simple, overall effect of one IV on the DV
How many effects are interpreted in a 2 × 2 factorial design?
Two main effects and one interaction
In a 3 × 2 factorial design, how many total conditions are there?
6
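The condition count comes from crossing the factors’ levels; a tiny sketch (the factor labels are hypothetical) makes this concrete:

```python
from itertools import product

# Hypothetical 3 x 2 design: one IV with 3 levels, another with 2.
factor_a = ["low", "medium", "high"]   # e.g., a dose manipulation
factor_b = ["morning", "evening"]      # e.g., time of testing

# Crossing the factors yields every combination: 3 * 2 = 6 cells.
cells = list(product(factor_a, factor_b))
```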
What is the best way to visualize factorial design results, especially interactions?
Line graph
Surveys & Polls
—Method of asking people to self-report their attitudes, behaviors, or opinions
—Conducted face-to-face, over the phone, via written questionnaires, or online
—Different ways of asking these questions
—Must ensure we’re collecting accurate data
Ensuring Construct Validity of Surveys/Polls
Choosing types of questions
Writing well-worded questions
Encouraging Accurate Responses
Types of Survey Questions
Open-Ended
Forced-Choice
Likert Scale
Semantic Differential
Open-Ended Questions
Respondents can answer freely
Example of Open-Ended Question
“Comment on your experience as a student at SUNY New Paltz”
Pro and Con of Open-Ended Questions
Pro: Provides rich, detailed information
Con: Coding and categorizing responses is time-consuming

Forced-Choice Questions
Participants choose the best option from two or more choices, commonly used in political polls
Examples for Forced-Choice Questions
“For which candidate will you vote: A or B?”
“What describes you best?”
I like being the center of attention
It makes me uncomfortable to be the center of attention
Pros and Cons of Forced-Choice Questions
Pros: Quick to analyze; clear, comparable responses
Cons: Limited insight; may not reflect true views

Likert Scale Questions
Respondents rate the level or intensity of their attitude, opinion, or experience on a scale. Measures degree of opinion, not just direction.
Example of Likert Scale Questions
Scales typically range from one extreme to the other. Common anchors:
Strongly disagree → Strongly agree
Never → Always
Pros and Cons of Likert Scale Questions
Pros: Easy to interpret; captures degree of opinion
Cons: Can lead to neutral or patterned responses