Social norm
A widely held belief about what most people in a group believe or value.
Experiment
A study in which at least one variable is manipulated and another is measured.
Manipulated variable
A variable in an experiment that a researcher controls, such as by assigning participants to its different levels (values).
Measured variable
A variable in a study whose levels (values) are observed and recorded.
Independent variable
In an experiment, a variable that is manipulated; in a multiple-regression analysis, a predictor variable used to explain variance in the criterion variable. See also dependent variable.
condition
One of the levels of the independent variable in an experiment.
dependent variable
In an experiment, the variable that is measured. In a multiple-regression analysis, the single outcome, or criterion variable the researchers are most interested in understanding or predicting. Also called outcome variable. See also independent variable.
Control variable
In an experiment, a variable that a researcher holds constant on purpose.
Comparison group
A group in an experiment whose levels on the independent variable differ from those of the treatment group in some intended and meaningful way.
Control group
A level of an independent variable that is intended to represent “no treatment” or a neutral condition.
Treatment group
The participants in an experiment who are exposed to the level of the independent variable that involves a medication, therapy, or intervention.
Placebo group
A control group in an experiment that is exposed to an inert treatment, such as a sugar pill.
confound
A general term for a potential alternative explanation for a research finding; a threat to internal validity.
Design confound
A threat to internal validity in an experiment in which a second variable happens to vary systematically along with the independent variable and therefore is an alternative explanation for the results.
Systematic variability
In an experiment, a description of when the levels of a variable coincide in some predictable way with experimental group membership, creating a potential confound.
Unsystematic variability
In an experiment, a description of when the levels of a variable fluctuate independently of experimental group membership, contributing to variability within groups.
Selection effect
A threat to internal validity that occurs in an independent-groups design when the kinds of participants at one level of the independent variable are systematically different from those at the other level.
Random assignment
The use of a random method (e.g., flipping a coin) to assign participants into different experimental groups
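A minimal sketch of random assignment, assuming a hypothetical list of participant IDs and a two-condition experiment (none of these names come from the text):

import random

participants = ["P01", "P02", "P03", "P04", "P05", "P06", "P07", "P08"]  # hypothetical IDs
random.shuffle(participants)              # random order, like drawing names from a hat
half = len(participants) // 2
treatment_group = participants[:half]     # first half goes to the treatment condition
control_group = participants[half:]       # second half goes to the control condition
print(treatment_group, control_group)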
Matched groups
An experimental design technique in which participants who are similar on some measured variable are grouped into sets; the members of each matched set are then randomly assigned to different experimental conditions.
Independent-groups design
An experimental design in which different groups of participants are exposed to different levels of the independent variable, such that each participant experiences only one level of the independent variable.
Within-groups design
An experimental design in which each participant is presented with all levels of the independent variable.
Posttest-only design
An experiment using an independent-groups design in which participants are tested on the dependent variable only once. Also called equivalent groups.
Pretest/posttest design
An experiment using an independent-groups design in which participants are tested on the key dependent variable twice: once before and once after exposure to the independent variable.
Repeated-measures design
An experiment using a within-groups design in which participants respond to a dependent variable more than once, after exposure to each level of the independent variable.
Concurrent-measures design
An experiment using a within-groups design in which participants are exposed to all the levels of an independent variable at roughly the same time, and a single attitudinal or behavioral preference is the dependent variable.
Order effect
In a within-groups design, a threat to internal validity in which exposure to one condition changes participant responses to a later condition. See also carryover effect, practice effect, testing threat.
Practice effect
A type of order effect in which participants’ performance improves over time because they become practiced at the dependent measure (not because of the manipulation or treatment). Also called fatigue effect. See also order effect, testing threat.
Fatigue effect
A type of order effect in which participants’ performance degrades over time because they become tired, not because of the manipulation or treatment. See also order effect, practice effect.
Carryover effect
A type of order effect in which some form of contamination carries over from one condition to the next.
Counterbalancing
In a repeated-measures experiment, presenting the levels of the independent variable to participants in different sequences to control for order effects. See also full counterbalancing, partial counterbalancing.
Full counterbalancing
A method of counterbalancing in which all possible condition orders are represented. See also counterbalancing, partial counterbalancing.
Partial counterbalancing
A method of counterbalancing in which some, but not all, of the possible condition orders are represented. See also counterbalancing, full counterbalancing
Latin square
A formal system of partial counterbalancing to ensure that every condition in a within-groups design appears in each position at least once.
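A minimal sketch contrasting full counterbalancing with a simple cyclic Latin square, assuming four hypothetical conditions A–D (not taken from the text):

from itertools import permutations

conditions = ["A", "B", "C", "D"]   # hypothetical condition labels

# Full counterbalancing: every possible order of the four conditions (4! = 24 sequences).
full_orders = list(permutations(conditions))

# Cyclic Latin square (one form of partial counterbalancing): each rotation puts
# every condition in every serial position exactly once (only 4 sequences).
latin_square = [conditions[i:] + conditions[:i] for i in range(len(conditions))]

print(len(full_orders))   # 24
for row in latin_square:
    print(row)            # ['A','B','C','D'], ['B','C','D','A'], ...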
Demand characteristic
A cue that leads participants to guess a study’s hypotheses or goals; a threat to internal validity. Also called experimental demand.
Manipulation check
In an experiment, an extra dependent variable researchers can include to determine how well a manipulation worked.
Pilot study
A study completed before (or sometimes after) the study of primary interest, usually to test the effectiveness or characteristics of the manipulations.
one-group, pretest/posttest design
An experiment in which a researcher recruits one group of participants; measures them on a pretest; exposes them to a treatment, intervention, or change; and then measures them on a posttest.
maturation threat
A threat to internal validity that occurs when an observed change in an experimental group could have emerged more or less spontaneously over time.
history threat
A threat to internal validity that occurs when it is unclear whether a change in the treatment group is caused by the treatment itself or by an external or historical factor that affects most members of the group.
regression threat
A threat to internal validity related to regression to the mean, a phenomenon in which any extreme finding is likely to be closer to its own typical, or mean, level the next time it is measured (with or without the experimental treatment or intervention). See also regression to the mean.
regression to the mean
A phenomenon in which an extreme finding is likely to be closer to its own typical, or mean, level the next time it is measured, because the same combination of chance factors that made the finding extreme are not present the second time. See also regression threat.
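A quick simulation of regression to the mean, with arbitrary illustration numbers: participants whose first (noisy) test score was extreme tend to score closer to their typical level on a second test, with no treatment at all.

import random
import statistics

random.seed(1)
n = 10_000
true_level = [random.gauss(100, 10) for _ in range(n)]   # stable "true" scores
test1 = [t + random.gauss(0, 10) for t in true_level]    # true score plus chance factors
test2 = [t + random.gauss(0, 10) for t in true_level]    # new, independent chance factors

cutoff = sorted(test1)[int(0.9 * n)]                     # roughly the top 10% on test 1
extreme = [i for i in range(n) if test1[i] >= cutoff]
print(statistics.mean(test1[i] for i in extreme))        # very high on the first test
print(statistics.mean(test2[i] for i in extreme))        # drifts back toward 100 on the retest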
attrition threat
In a pretest/posttest, repeated-measures, or quasi-experimental study, a threat to internal validity that occurs when a systematic type of participant drops out of the study before it ends.
testing threat
In a repeated-measures experiment or quasi-experiment, a kind of order effect in which scores change over time just because participants have taken the test more than once; includes practice effects.
instrumentation threat
A threat to internal validity that occurs when a measuring instrument changes over time.
selection-history threat
A threat to internal validity in which a historical or seasonal event systematically affects only the participants in the treatment group or only those in the comparison group, not both.
selection-attrition threat
A threat to internal validity in which participants are likely to drop out of either the treatment group or the comparison group, not both.
observer bias
A bias that occurs when observer expectations influence the interpretation of participant behaviors or the outcome of the study.
double-blind study
A study in which neither the participants nor the researchers who evaluate them know who is in the treatment group and who is in the comparison group.
masked design
A study design in which the observers are unaware of the experimental conditions to which participants have been assigned. Also called blind design.
placebo effect
A response or effect that occurs when people receiving an experimental treatment experience a change only because they believe they are receiving a valid treatment.
double-blind placebo control study
A study that uses a treatment group and a placebo group and in which neither the researchers nor the participants know who is in which group.
null effect
A finding that an independent variable did not make a difference in the dependent variable; there is no significant covariance between the two.
ceiling effect
An experimental design problem in which independent variable groups score almost the same on a dependent variable, such that all scores fall at the high end of their possible distribution.
floor effect
An experimental design problem in which independent variable groups score almost the same on a dependent variable, such that all scores fall at the low end of their possible distribution. See also ceiling effect.
noise
Unsystematic variability among the members of a group in an experiment, which might be caused by situation noise, individual differences, or measurement error. Also called error variance, unsystematic variance.
measurement error
The degree to which the recorded measure for a participant on some variable differs from the true value of the variable for that participant. Measurement errors may be random, such that scores that are too high and too low cancel each other out; or they may be systematic, such that most scores are biased either too high or too low.
situation noise
Unrelated events or distractions in the external environment that create unsystematic variability within groups in an experiment.
power
The likelihood that a study will show a statistically significant result when an independent variable truly has an effect on the population.
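A rough way to estimate power by simulation, assuming a true effect of 0.5 standard deviations and 30 participants per group (both numbers are arbitrary, and the sketch relies on NumPy and SciPy rather than anything from the text):

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_per_group, true_effect, runs = 30, 0.5, 5_000
significant = 0
for _ in range(runs):
    control = rng.normal(0.0, 1.0, n_per_group)              # population with no treatment effect
    treatment = rng.normal(true_effect, 1.0, n_per_group)    # population shifted by the true effect
    _, p = stats.ttest_ind(treatment, control)               # independent-groups t test
    if p < 0.05:
        significant += 1
print(significant / runs)   # proportion of simulated studies detecting the effect (roughly 0.5 here)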
interaction effect
A result from a factorial design, in which the difference in the levels of one independent variable changes, depending on the level of the other independent variable; a difference in differences.
factorial design
A study in which there are two or more independent variables, or factors.
cell
A condition in an experiment; in a simple experiment, a cell can represent the level of one independent variable; in a factorial design, a cell represents one of the possible combinations of two independent variables.
participant variable
A variable such as age, gender, or ethnicity whose levels are selected (i.e., measured), not manipulated.
main effect
In a factorial design, the overall effect of one independent variable on the dependent variable, averaging over the levels of the other independent variable.
marginal means
In a factorial design, the arithmetic means for each level of an independent variable, averaging over the levels of another independent variable.
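A small worked example for a hypothetical 2 × 2 factorial design (all cell means below are invented): the marginal means give the main effect of each factor, and the interaction is the "difference in differences."

# Hypothetical cell means: rows = IV A (a1, a2), columns = IV B (b1, b2)
cells = {("a1", "b1"): 10, ("a1", "b2"): 12,
         ("a2", "b1"): 11, ("a2", "b2"): 19}

marginal_a1 = (cells[("a1", "b1")] + cells[("a1", "b2")]) / 2   # 11.0, averaging over B
marginal_a2 = (cells[("a2", "b1")] + cells[("a2", "b2")]) / 2   # 15.0 -> main effect of A = 4.0
diff_at_b1 = cells[("a2", "b1")] - cells[("a1", "b1")]          # effect of A at b1 = 1
diff_at_b2 = cells[("a2", "b2")] - cells[("a1", "b2")]          # effect of A at b2 = 7
interaction = diff_at_b2 - diff_at_b1                           # 6: a difference in differences
print(marginal_a1, marginal_a2, interaction)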
quasi-experiment
A study similar to an experiment except that the researchers do not have full experimental control (e.g., they may not be able to randomly assign participants to the independent variable conditions).
quasi-independent variable
A variable that resembles an independent variable, but the researcher does not have true control over it (e.g., cannot randomly assign participants to its levels or cannot control its timing). See also independent variable.
nonequivalent control group posttest-only design
A quasi-experiment that has at least one treatment group and one comparison group, but participants have not been randomly assigned to the two groups.
nonequivalent control group pretest/posttest design
A quasi-experiment that has at least one treatment group and one comparison group, in which participants have not been randomly assigned to the two groups, and in which at least one pretest and one posttest are administered.
interrupted time-series design
A quasi-experiment in which participants are measured repeatedly on a dependent variable before, during, and after the “interruption” caused by some event.
nonequivalent control group interrupted time-series design
A quasi-experiment with two or more groups in which participants have not been randomly assigned to groups; participants are measured repeatedly on a dependent variable before, during, and after the “interruption” caused by some event, and the presence or timing of the interrupting event differs among the groups.
wait-list design
An experimental design for studying a therapeutic treatment, in which researchers randomly assign some participants to receive the therapy under investigation immediately, and others to receive it after a time delay.
small-N design
A study in which researchers gather information from just a few cases.
stable-baseline design
A small-N design in which a researcher observes behavior for an extended baseline period before beginning a treatment or other intervention, and continues observing behavior after the intervention.
multiple-baseline design
A small-N design in which researchers stagger their introduction of an intervention across a variety of contexts, times, or situations.
reversal design
A small-N design in which a researcher observes a problem behavior both before and during treatment, and then discontinues the treatment for a while to see if the problem behavior returns.
single-N design
A study in which researchers gather information from only one animal or one person
Our research question is about the whole population
The news channel conducted the poll on a sample, but they are ultimately interested in making a prediction about what all of the voters will do.
The quality of the sample matters
The pollsters used data from a sample to make the estimate. However, if they had used a biased sample (such as including only younger voters), the estimate would probably be incorrect. A random sample of community voters is necessary to make the best predictions (see Chapter 7).
The population value is unknown
After the poll is conducted, we don’t know what the true support for the straw ban is in the population. We only know what the sample’s level of support is.
Larger samples give more certain estimates
If the sample had only 30 people in it, we would feel especially uncertain about the estimate, even if these 30 people were drawn randomly. In contrast, if the sample had 1,000 randomly selected people in it, we would feel much more certain (our estimate would be more precise; see the sketch below).
To get a better estimate, we should replicate our results (that is, do more than one poll)
One well-conducted poll is good, but if other polls are conducted, we could combine the results of them all and get a more precise estimate.
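A minimal sketch of the "larger samples give more certain estimates" point, assuming a hypothetical poll result of 45% support (the figure is invented): the standard error of a sample proportion shrinks as the sample grows, so the margin of error for n = 1,000 is far tighter than for n = 30.

import math

p_hat = 0.45                      # hypothetical poll result: 45% support the straw ban
for n in (30, 1_000):
    se = math.sqrt(p_hat * (1 - p_hat) / n)   # standard error of a sample proportion
    margin = 1.96 * se                        # half-width of an approximate 95% CI
    print(n, round(margin, 3))    # n=30 -> about ±0.178; n=1000 -> about ±0.031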
inferential statistics
A set of techniques that uses the laws of chance and probability to help researchers make decisions about what their data mean and what inferences they can make from the data.
estimation
An approach to inferential statistics that uses data from a sample to calculate an effect size and a 95% confidence interval, with the goal to predict the magnitude of some value in the population.
null hypothesis significance testing (NHST)
An inferential statistical technique in which a result is compared to a hypothetical population in which there is no relationship or no difference.
point estimate
A single estimate of some population value (such as a percentage, a correlation, or a difference) based on data from a sample.
confidence interval (CI)
A given range indicated by a lower and upper value that is designed to capture the population value for some point estimate (e.g., percentage, difference, or correlation); a high proportion of CIs will capture the true population value.
standard error
The typical, or average, error researchers make when estimating a population value.
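A minimal sketch tying the last three terms together, using invented population values: each simulated sample gives a point estimate, the standard error describes that estimate's typical spread, and roughly 95% of the resulting confidence intervals capture the true population mean.

import random
import statistics

random.seed(2)
true_mean, true_sd, n, runs = 50, 10, 100, 2_000   # hypothetical population and study size
captured = 0
for _ in range(runs):
    sample = [random.gauss(true_mean, true_sd) for _ in range(n)]
    point_estimate = statistics.mean(sample)
    se = statistics.stdev(sample) / n ** 0.5       # estimated standard error of the mean
    lower, upper = point_estimate - 1.96 * se, point_estimate + 1.96 * se
    if lower <= true_mean <= upper:                # did this CI capture the population value?
        captured += 1
print(captured / runs)   # close to 0.95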