Nonexperimental Design
focus on determining what happens; though each subject may complete the same measures over time, if nothing is manipulated, it is not experimental
Experimental Design
focus on determining why something happens; require a manipulation; ex. low dose, moderate dose, and a placebo dose of medication
Requirements for establishing causation
variables must be related in a systematic way; the variable thought to cause the other must come first in time; rule out other potential causes/extraneous variables; covariation; temporal precedence; internal validity
Covariation
when changes in one variable are associated with changes in another variable; part of determining causality; two variables must vary or change together in a systematic way
Temporal Precedence
when changes in the suspected cause (treatment) occur before the changes in the effect (outcome); established by dictating the study’s order and manipulating the IV
Extraneous Variable
a factor other than the intended treatment that might change the outcome variable
Internal Validity
the degree to which we can rule out other possible causal explanations for an observed relationship between the independent and dependent variables
External Validity
the degree to which findings actually apply back to the population; can be impaired by poor sampling, lack of realism in the lab, etc.
Mundane Realism
extent to which the conditions in the study mimic “real life” and the degree to which a study parallels everyday situations in the real world; laboratory settings often lack mundane realism, and field research helps address this
Between Subjects
Simple/Two Groups, Multigroup, Factorial
Simple/Two Group Design
an experimental design that compares two groups or conditions and is the most basic way to establish cause and effect (simple experiment); generally a control and an experimental group
Multigroup Design
more than two groups; ex. a placebo, nocebo, small dose, medium dose, high dose; can include a control and different treatment levels
Factorial Design
multiple IV’s are manipulated; ex. temperature and humidity on plant growth, temperature: low vs. high, humidity: low vs. high
Within Subjects Design
each person gets each treatment; participants are exposed to all levels of treatment and statistically compared against themselves
Mixed Design
combination of within and between subjects design; two or more variables, some measured within subjects and some between subjects
Experimental Control
the ability to keep everything between groups the same except for the one element we want to test in the study; if your conditions differ on factors other than your IV, it is much harder to establish causation
Experimental Hypothesis
a clear and specific prediction of how the independent variable will influence the dependent variable
Null Hypothesis
the hypothesis of no difference; usually the hypothesis the researcher is trying to statistically reject
Independence
the assumption that each participant represents a unique and individual data point
Experimental Group
the group or condition that gets the key treatment in an experiment
Control Group
any condition that serves as the comparison group in the experiment
Conditions
the way the independent variable is manipulated is through the creation of different conditions; each condition is some manipulation of the same independent variable; can also be described as levels of the independent variable
Random Assignment
any method of placing participants in groups that is nonsystematic and unbiased; each participant has an equal chance of being assigned to any of the given conditions in the experiment; “accounts for” potential influences on the study, not “eliminates”; may help ensure the composition of the people in the control group is similar to the composition of the people in the experimental group (ex. age, gender)
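As a concrete illustration, here is a minimal Python sketch of nonsystematic assignment; the participant IDs and condition names are hypothetical and not taken from any particular study.

```python
import random

def randomly_assign(participants, conditions, seed=None):
    """Shuffle participants, then deal them into conditions round-robin,
    so each person has an equal chance of landing in any condition."""
    rng = random.Random(seed)
    shuffled = participants[:]          # copy so the original list is untouched
    rng.shuffle(shuffled)
    assignment = {c: [] for c in conditions}
    for i, person in enumerate(shuffled):
        assignment[conditions[i % len(conditions)]].append(person)
    return assignment

# Hypothetical example: 6 participants, a control and an experimental group
print(randomly_assign(["P1", "P2", "P3", "P4", "P5", "P6"],
                      ["control", "experimental"], seed=42))
```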
Matched Pair Design
a design in which one creates a set of two participants who are highly similar on a key trait and then randomly assigns the individuals in each pair to different groups; the matching itself is NOT random, only the assignment within each pair is
Experimental Realism
the degree to which a study participant becomes engrossed in a manipulation
Researcher Notes
a place to keep track of anything out of the ordinary that happens during a study
Manipulation Check
a measure that helps determine whether the manipulation effectively changed or varied the independent variable across conditions
Descriptive Analysis
what does the sample look like? (means, SDs) and what the participants were like (average age, gender); averages on the attitudinal DV overall and in each condition, tallies/percent per choice on the behavioral DV
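A minimal sketch of these descriptives using pandas on made-up data; the column names (condition, rating, choice) are assumptions for illustration only.

```python
import pandas as pd

# Hypothetical data: condition, an attitudinal DV (rating), a behavioral choice
df = pd.DataFrame({
    "condition": ["control", "control", "treatment", "treatment"],
    "rating":    [3.0, 4.0, 5.5, 6.0],
    "choice":    ["no", "yes", "yes", "yes"],
})

# Sample-level descriptives (mean, SD)
print(df["rating"].agg(["mean", "std"]))

# Average attitudinal DV per condition
print(df.groupby("condition")["rating"].mean())

# Tallies / percentages per choice on the behavioral DV
print(df["choice"].value_counts(normalize=True) * 100)
```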
Inferential Analysis (to test our hypothesis)
what does this sample tell me about the population as a whole? good for drawing conclusions and testing hypotheses (ex. t-test, which compares the average score on a DV across exactly two groups)
Independent T-Test
a statistical test comparing groups’ means to see if the groups differ to a degree that could not have happened accidentally or by chance; the people in each group are different
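A minimal sketch of an independent-samples t-test using scipy's ttest_ind; the scores below are invented.

```python
from scipy import stats

# Hypothetical DV scores for two different groups of people
control      = [4.1, 3.8, 5.0, 4.4, 3.9]
experimental = [5.2, 5.9, 4.8, 6.1, 5.5]

# Are the group means further apart than chance alone would plausibly produce?
t, p = stats.ttest_ind(control, experimental)
print(f"t = {t:.2f}, p = {p:.3f}")
```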
Paired/dependent t-test
the people in each group are the same; a statistic used to determine if there is a significant difference between two related sets of scores; takes into account the non-independence of the data
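A matching sketch for the paired case using scipy's ttest_rel; again, the before/after scores are invented.

```python
from scipy import stats

# Hypothetical scores from the SAME people measured twice (e.g., pretest-posttest)
before = [4.1, 3.8, 5.0, 4.4, 3.9]
after  = [5.2, 4.5, 5.4, 5.1, 4.6]

# Paired t-test works on the within-person differences,
# so the non-independence of the two sets of scores is respected
t, p = stats.ttest_rel(before, after)
print(f"t = {t:.2f}, p = {p:.3f}")
```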
Categorical Variables
grouping variable, generally meant to categorize (ex. section A/B); analyze by looking at frequency distributions (how many/what percentage of respondents fell into a certain category?)
Continuous Variables
a numerical variable, where generally an infinite number of values are possible and decimal points have meaning (ex. distance, average score); mean, median, mode, and standard deviation
Effect Size
a statistical measure of the magnitude of the difference between groups; we can generate a Cohen’s d to measure effect size; can tell us how big the differences are between groups and help us determine practical significance; can also check for other differences to rule out other explanations
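Cohen's d for two independent groups can be computed by hand from the pooled standard deviation; this is a minimal sketch with invented scores (the 0.2 / 0.5 / 0.8 benchmarks are Cohen's common rule of thumb).

```python
from statistics import mean, stdev
from math import sqrt

def cohens_d(group1, group2):
    """Cohen's d for two independent groups, using the pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    s1, s2 = stdev(group1), stdev(group2)          # sample SDs (n - 1 denominator)
    pooled_sd = sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (mean(group1) - mean(group2)) / pooled_sd

# Hypothetical groups; roughly: |d| = 0.2 small, 0.5 medium, 0.8 large
print(round(cohens_d([5.2, 5.9, 4.8, 6.1, 5.5], [4.1, 3.8, 5.0, 4.4, 3.9]), 2))
```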
Empty Control Group
a group that does not receive any form of the treatment and just completes the dependent variable
Placebo Group
a group where participants believe they are getting the treatment, but in reality they are not
Strengths of Multigroup Design
can test more than two levels of IV at once, can save time and resources, and may uncover nonlinear relationships
Nonlinear (functional) relationships
any association between variables that the use of just two comparison groups cannot uncover. These relationships, often identified on a graph as a curved line, help provide us with a clearer picture of how variables relate to each other
Drawbacks of Multigroup Design
concerns about mundane realism within the experiment, threats to external validity, and confounds
Confounds
a variable that the researcher unintentionally varies along with the manipulation; accidentally manipulate something along with the intended IV
Methodological Pluralism
the use of multiple methods or strategies to answer a research question
Hypothesis Guessing
when a participant actively attempts to identify the purpose of the research
One-Way ANOVA
a statistical test that determines whether responses from the different conditions are essentially the same or whether the responses from at least one of the conditions differ from the others; for differences among 3+ groups; will tell us if group differences are supported by our data, but will not specify which groups are different from one another; reported with an F-value, p-value, and effect size
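A minimal one-way ANOVA sketch using scipy's f_oneway on invented scores for three conditions.

```python
from scipy import stats

# Hypothetical DV scores for three conditions (3+ groups -> one-way ANOVA)
placebo   = [3.1, 3.4, 2.9, 3.6]
low_dose  = [4.0, 4.3, 3.8, 4.5]
high_dose = [5.1, 5.6, 4.9, 5.3]

# F tells us whether at least one group differs from the others,
# but not WHICH groups differ (that takes planned contrasts or post-hoc tests)
f, p = stats.f_oneway(placebo, low_dose, high_dose)
print(f"F = {f:.2f}, p = {p:.3f}")
```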
Exploratory Analysis
statistical tests that examine the potential differences that were not predicted prior to conducting the study
Planned Contrast
statistical tests that examine comparisons between groups that were predicted ahead of time. These tests have the added benefit of allowing the comparison of combined conditions to other conditions in the study
Chi-Square Test of Independence
a statistical test in which both variables are categorical. This test generally examines if the distribution of participants across categories is different from what would happen if there were no difference between the groups
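A minimal sketch using scipy's chi2_contingency on an invented condition-by-choice contingency table.

```python
from scipy.stats import chi2_contingency

# Hypothetical contingency table: condition (rows) x behavioral choice (columns)
#            chose A   chose B
observed = [[30,       10],      # control
            [18,       22]]      # experimental

# Tests whether the distribution across categories differs from what
# we'd expect if condition and choice were unrelated
chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.3f}")
```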
Post-Hoc Test
statistical tests that examine all possible combinations of conditions in a way that statistically accounts for the fact that not all of them were predicted ahead of time; takes into account multiple tests are being run and corrects, statistically, to reduce your error; compare each group to all other groups
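One common post-hoc procedure is Tukey's HSD; the card does not name a specific test, so this statsmodels sketch (pairwise_tukeyhsd) is just one illustration, run on invented data.

```python
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical scores and their condition labels (same shape of data as a one-way ANOVA)
scores = np.array([3.1, 3.4, 2.9, 3.6, 4.0, 4.3, 3.8, 4.5, 5.1, 5.6, 4.9, 5.3])
groups = (["placebo"] * 4) + (["low"] * 4) + (["high"] * 4)

# Tukey's HSD compares every pair of conditions while statistically
# correcting for the fact that multiple tests are being run
print(pairwise_tukeyhsd(scores, groups, alpha=0.05))
```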
Pretest-Posttest
a within-subjects design where participants are measured before and after exposure to a treatment or intervention to determine if a change occurs
Baseline Measurement
the initial assessment of a participant at the onset of a study, prior to any intervention or treatment (pretest measure)
Repeated-Measures
a within-subjects design where participants are exposed to each level of the independent variable and are measured on the dependent variable after each level; unlike the pretest-posttest design, there is no baseline measurement
Behavioral Diary
a self-report data collection strategy where individuals record their behaviors, thoughts, and feelings as they occur over multiple time points
Longitudinal
over multiple time points, usually spread out over large amounts of time
Advantages of Within-Subjects Design
well-suited to assess change in individuals (ex. learning) and to test preference (ex. taste test); statistical power; can remove the issue of individual differences between groups and random assignment is no longer needed
Power
a study’s ability to find differences between groups when there is a real difference (when the null hypothesis is false); the probability that a study will yield significant results
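Power can also be used prospectively to plan sample size; this sketch uses statsmodels' TTestIndPower as one way to ask how many participants a simple two-group design would need (the effect size and alpha values are illustrative).

```python
from statsmodels.stats.power import TTestIndPower

# How many participants per group would a two-group design need
# to detect a medium effect (d = 0.5) with 80% power at alpha = .05?
n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(round(n_per_group))   # roughly 64 per group
```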
Disadvantages of Within-Subjects Design
external validity concerns (did the level of the IV cause the effect, or was it the multiple exposures to the IV?); internal validity concerns (attrition, testing effect, instrumentation, history, maturation)
Attrition
the differential dropping out of participants from a study; subjects may not complete all conditions; also known as mortality; can be solved by making continuation in the study appealing or nonthreatening
Testing Effect
a threat to the internal validity of a study where the participants’ scores may change on subsequent measurements simply because of their increased familiarity with the instrument; scores on a DV are a reflection of taking it multiple times and not the IV; can be solved with distractor items and increased time between the different conditions
Instrumentation
in terms of threats to internal validity, a change in how a variable is measured or administered during the course of a study; DV is measured in different ways, which may not be directly comparable; can be solved by maintaining consistency in the measurement instrument and how it is administered throughout the study
History
a threat to the internal validity of a study due to an external event potentially influencing participants’ behavior during the study (ex. a pandemic); difficult to prevent, record in the researcher’s notes any unexpected occurrences
Maturation
a threat to the internal validity of a study stemming from long-term physiological changes occurring naturally within the participants that may influence the dependent variable; participants change over time (or simply become bored); can be solved by using a comparison group not exposed to the treatment or intervention to determine if maturation is a possible effect
Order Effects
a threat to internal validity in a within-subjects design resulting from the influence that the sequence of experimental conditions can have on the dependent variable (practice, fatigue, carryover, sensitization)
Practice Effect
changes in a participant’s responses or behavior due to increased experience with the measurement instrument, not the variable under investigation; multiple exposures to the DV have an influence on the DV; can be addressed by providing participants with extensive training on the task before starting the actual study and doing a trial run so participants can learn and improve before measurement begins
Fatigue Effect
deterioration in the quality of measurements due to participants becoming tired, less attentive, or careless during the course of the study and ceasing to try; can be addressed by making the tasks more interesting, brief, and not taxing
Carryover Effect
exposure to earlier conditions influences responses to subsequent conditions; lengthen the time between treatments and use other strategies that clear the effect before exposing participants to the next condition
Sensitization Effect
continued exposure to experimental conditions in a within-subjects study increasing the likelihood of hypothesis guessing, potentially influencing participants’ responses in later experimental conditions; hypothesis guessing and response adjustment after exposure to multiple conditions; mislead participants as to the study’s purpose and use other strategies that prevent participants from knowing what you are varying in your study
Counterbalancing
identifying and using all potential treatment sequences in a within-subjects design to prevent order effects
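For a small number of conditions, full counterbalancing simply means using every possible sequence; a minimal sketch with three hypothetical conditions follows.

```python
from itertools import permutations

# All possible orderings of three hypothetical conditions (full counterbalancing)
for order in permutations(["A", "B", "C"]):
    print(order)
```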
Latin Square Design
a counterbalancing strategy where each experimental condition appears at every position in the sequence order equally often
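A minimal sketch of a cyclic Latin square generator with hypothetical condition labels; note that a fully “balanced” Latin square that also controls immediate carryover requires a different construction.

```python
def latin_square(conditions):
    """Cyclic Latin square: each condition appears at every ordinal
    position exactly once across the set of sequences."""
    k = len(conditions)
    return [[conditions[(row + col) % k] for col in range(k)] for row in range(k)]

# Hypothetical three-condition within-subjects study
for sequence in latin_square(["A", "B", "C"]):
    print(sequence)
# ['A', 'B', 'C'] / ['B', 'C', 'A'] / ['C', 'A', 'B']
```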
Repeated measures ANOVA
a statistic used to test a hypothesis from a within-subjects design with three or more conditions; accounts for measures being repeated across persons; more than two conditions
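A sketch of a repeated-measures ANOVA using statsmodels' AnovaRM on invented long-format data (the subject, condition, and score column names are assumptions).

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Hypothetical long-format data: every participant measured under all three conditions
df = pd.DataFrame({
    "subject":   [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4],
    "condition": ["A", "B", "C"] * 4,
    "score":     [3.0, 4.2, 5.1, 2.8, 4.0, 4.9, 3.3, 4.4, 5.4, 3.1, 4.1, 5.0],
})

# Repeated-measures ANOVA: accounts for the same people appearing in every condition
result = AnovaRM(df, depvar="score", subject="subject", within=["condition"]).fit()
print(result)
```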
F(2, 87) = #.##, p = .##, eta² = .##
F-statistic reporting format: F(between-groups df, within-groups/error df) = F value, p = significance level, eta² = calculated effect size