All-possible-orders design (Complete counterbalancing)
The conditions of an independent variable are arranged in every possible sequence, and an equal number of participants are assigned to each sequence
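A minimal sketch of complete counterbalancing (not from the text): three hypothetical conditions labeled A, B, and C are arranged in every possible order, and an equal number of participants is assigned to each order.

```python
# Illustrative sketch: complete counterbalancing with three hypothetical
# conditions. Every possible order is generated, and participants are dealt
# onto the orders in equal numbers.
from itertools import permutations

conditions = ["A", "B", "C"]                 # hypothetical condition labels
all_orders = list(permutations(conditions))  # 3! = 6 possible sequences

participants_per_order = 2                   # equal count for every order
schedule = {}
participant_id = 1
for order in all_orders:
    for _ in range(participants_per_order):
        schedule[participant_id] = order
        participant_id += 1

# 6 orders x 2 participants = 12 participants in total
for pid, order in schedule.items():
    print(pid, " -> ".join(order))
```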
Between-subjects design
Different participants are assigned to each of the conditions in the experiment
Block Randomization
We conduct a single round of all the conditions, then another round, then another, for as many rounds as needed to complete the experiment. Within each round, the order of conditions is randomly determined
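A minimal sketch of this procedure, assuming three hypothetical conditions and four rounds: each round (block) contains every condition exactly once, in a freshly shuffled order.

```python
# Minimal sketch of block randomization: each block (round) contains every
# condition exactly once, in a newly shuffled order. Condition labels and
# the number of rounds are assumptions for illustration only.
import random

conditions = ["A", "B", "C"]
n_rounds = 4

schedule = []
for _ in range(n_rounds):
    block = conditions[:]      # copy so the original list is untouched
    random.shuffle(block)      # new random order within this round
    schedule.extend(block)

print(schedule)                # e.g. ['B', 'A', 'C', 'C', 'A', 'B', ...]
```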
Block Randomization Design
Every participant is exposed to multiple blocks of trials, with each block for each participant containing a newly randomized order of all conditions
Carryover effects
Occur when participants' responses in one condition are uniquely influenced by the particular condition or conditions that preceded it
Confounding variable
A factor that covaries with the independent variable in such a way that we can no longer determine which one has caused the changes in the dependent variable
Control Condition (control Group)
Participants do not receive the treatment of interest or are exposed to a baseline level of an independent variable
Counterbalancing
A procedure in which the order of conditions is varied so that no condition has an overall advantage relative to the other conditions
Dependent Variable
Refers to the response that is measured to determine whether an independent variable has produced an effect
Experimental Condition (experimental group)
Involves exposing participants to a treatment or an “active” level of the independent variable
Experimental Control
Includes the ability to (1) manipulate one or more independent variables; (2) choose the types of dependent variables that will be measured so that the effect of the independent variable can be assessed; and (3) regulate other aspects of the research environment, including the manner in which participants are exposed to the various conditions in the experiment
Extraneous Variable
A factor that is not the focus of interest in a particular study, but that could influence the outcome of the study if left uncontrolled
Fatigue Effect
A performance decline that results from becoming tired, inattentive, or less motivated to perform well with repeated exposure to a task
Independent-group design (Random-group design)
Participants are randomly assigned to the various conditions of the experiment
Independent Variable
Refers to the variable manipulated by the researchers
Latin Square
An n (number of positions in a series) × n (number of orders) matrix in which each condition appears only once in each column and each row
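A small sketch that constructs a simple cyclic Latin square (a balanced Latin square, which also controls which condition precedes which, needs a slightly different construction). Each row shifts the hypothetical condition sequence by one, so every condition appears exactly once in each row and each column.

```python
# Sketch of a simple cyclic Latin square: row i is the condition sequence
# shifted by i positions, so each condition appears once per row and column.
def latin_square(conditions):
    n = len(conditions)
    return [[conditions[(row + col) % n] for col in range(n)] for row in range(n)]

for row in latin_square(["A", "B", "C", "D"]):
    print(row)
# ['A', 'B', 'C', 'D']
# ['B', 'C', 'D', 'A']
# ['C', 'D', 'A', 'B']
# ['D', 'A', 'B', 'C']
```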
Matched-groups design
Each set of participants that has been matched on one or more attributes is randomly assigned to the various conditions of the experiment
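A hypothetical sketch of how such matching might be carried out: participants are ranked on a matching variable (here an invented pretest score), grouped into matched sets whose size equals the number of conditions, and the members of each set are then randomly assigned to different conditions.

```python
# Hypothetical matched-groups assignment: rank by a matching variable,
# form matched sets, then randomly assign within each set. All names and
# scores are placeholders for illustration.
import random

conditions = ["treatment", "control"]
scores = {"p1": 12, "p2": 30, "p3": 14, "p4": 28, "p5": 19, "p6": 21}

ranked = sorted(scores, key=scores.get)          # order by matching variable
assignment = {}
for i in range(0, len(ranked), len(conditions)):
    matched_set = ranked[i:i + len(conditions)]  # most similar participants
    shuffled = random.sample(conditions, len(conditions))
    assignment.update(zip(matched_set, shuffled))

print(assignment)
```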
Matching Variable
A characteristic on which we match sets of individuals as closely as possible
Natural-group design
The researcher measures a subject variable, forms different groups based on people's level of that variable, and then measures how the different groups respond on other variables
Order Effects (Sequence effects)
Occur when participants' responses are affected by the order of the conditions to which they are exposed
Practice Effect
A performance improvement due to greater experience with a task
Progressive Effect
Reflects changes in participants' responses that result from their cumulative exposure to prior conditions
Random Assignment
A procedure in which each participant has an equal probability of being assigned to any one of the conditions in the experiment
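A minimal sketch of random assignment with equal group sizes, using placeholder participant labels: shuffle the participant list, then deal participants into the conditions in turn.

```python
# Sketch of random assignment: shuffle the participant list, then deal
# participants into conditions round-robin so group sizes stay equal.
import random

participants = [f"P{i}" for i in range(1, 13)]   # placeholder labels
conditions = ["control", "experimental"]

random.shuffle(participants)
groups = {c: participants[i::len(conditions)] for i, c in enumerate(conditions)}

print(groups)   # six participants per condition, assigned by chance
```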
Reverse-counterbalancing design (ABBA-counterbalancing design)
The conditions are presented in one sequence and then again in the reverse of that sequence (e.g., A-B-B-A)
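A one-line sketch of the ABBA idea with two hypothetical conditions: the sequence is followed by its mirror image, so each condition's average ordinal position is the same.

```python
# Sketch of reverse (ABBA) counterbalancing: present the conditions in one
# order, then in the reverse of that order. Labels are illustrative.
conditions = ["A", "B"]
abba_sequence = conditions + conditions[::-1]    # ['A', 'B', 'B', 'A']
print(abba_sequence)
```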
Sensitization
Occurs when exposure to multiple conditions increases participants' awareness of, or sensitivity to, the variable that is being experimentally manipulated
Single-factor design
Has only one independent variable
Subject variable
A personal characteristic on which individuals vary from one another
Within-Subject Design
Each participant engages in every condition of the experiment one or more times
Between-subject Factorial Design
A factorial design in which each subject engages in only one condition
Interaction (Interaction effect)
Occurs when the way in which an independent variable influences a dependent variable differs, depending on the level of another independent variable
Mixed-Factorial Design
A factorial design that includes at least one between-subjects variable and at least one within-subjects variable
Person × situation (person × environment) factorial design
An experimental design that incorporates at least one subject variable along with at least one manipulated situational variable
Simple Main effect
Represents the effect of one independent variable at a particular level of another independent variable
Three-way interaction
The interaction of two independent variables depends on the level of a third independent variable
Two way Interaction
Among two independent variables, the way in which one independent variable influences a dependent variable depends on the level of the second independent variable
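An illustration with made-up cell means, tying together the simple main effect and two-way interaction entries: the simple main effect of one variable is computed separately at each level of the other, and a difference between those simple effects is the signature of a two-way interaction. The factor names and means below are hypothetical.

```python
# Made-up 2 x 2 cell means: the simple main effect of "drug" (drug minus
# placebo) is computed at each level of "therapy". Differing simple effects
# reflect a two-way interaction.
cell_means = {
    ("therapy", "drug"): 8.0,    ("therapy", "placebo"): 5.0,
    ("no_therapy", "drug"): 6.0, ("no_therapy", "placebo"): 6.0,
}

for therapy_level in ("therapy", "no_therapy"):
    simple_effect = (cell_means[(therapy_level, "drug")]
                     - cell_means[(therapy_level, "placebo")])
    print(therapy_level, "simple main effect of drug:", simple_effect)
# 3.0 with therapy vs. 0.0 without -> the drug's effect depends on therapy
```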
Within-Subject Factorial Design
A factorial design in which each subject engages in every condition
Attrition (subject loss)
Occurs when participants fail to complete a study
Ceiling Effect
Occurs when scores on a dependent variable bunch up at the maximum score level
Complete Replication (full replication)
Includes all the conditions of the original study
Conceptual Replication
Examines the same questions investigated in the original study, but operationalizes the constructs differently
Construct validity
Concerns whether the constructs (the conceptual variables) that researchers claim to be studying are, in fact, the constructs that they are truly manipulating and measuring
Debriefing
A conversation with the participant that conveys additional information about the study
Demand Characteristics
Refers to cues that influence participants' beliefs about the hypothesis being tested and the behaviours expected of them
Differential attrition
Occurs when significantly different attrition rates or reasons for discontinuing exist, overall, across the various conditions
Direct replication (exact replication)
The researchers follow the procedures used in the original study as closely as possible
Double-blind procedure
A procedure in which neither the participants nor the experimenters are aware of who is receiving the actual treatment and who is receiving a placebo
Ecological Validity
Concerns the degree to which responses obtained in a research context generalize to behaviour in natural settings
Floor effect
Occurs when scores on a dependent variable bunch up at the minimum score level
Independent replication
A replication conducted by researchers who were not part of the original research group
Instrumentation
Refers to changes that occur in a measuring instrument during the course of data collection. (As long as random assignment, combined with block randomization, or proper counterbalancing procedures are used, instrumentation changes should affect the various conditions equally.)
Internal Replication
Occurs when researchers follow up their initial study with one or more replications and present this series of studies in a single research report
Internal Validity
Concerns the degree to which we can be confident that a study demonstrated that one variable had a causal effect on another variable
Manipulation check
Measures used to assess whether the procedures employed to manipulate an independent variable successfully captured the construct that was intended
Masking (blinding)
A procedure in which the parties involved in an experiment are kept unaware of: the hypothesis being tested, and/or the condition to which each participant has been assigned
Maturation
Refers to ways that people naturally change over time, independent of their participation in a study. Random assignment is one way to control for this threat.
Partial Replication
Includes only some of the conditions of the original study
Pilot Study
A trial run, usually conducted with a smaller number of participants, prior to initiating the actual experiment
Placebo Control Group
A group in which participants do not receive the core treatment, but are led to believe that they are (or may be) receiving it
Placebo effect
People’s expectations about how a treatment will affect them influence their responses (on the dependent variable) to that treatment
Randomized controlled trial (Randomized clinical Trial)
An experiment in which participants are randomly assigned to different conditions for the purpose of examining the effectiveness of an intervention
Regression to the mean
The statistical concept that when two variables are not perfectly correlated, more extreme scores on one variable will be associated, overall, with less extreme scores on the other variable. This should be equivalent across conditions as long as participants are randomly assigned.
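A small simulation that illustrates the idea (all numbers are invented): two test scores share a common "true score" but each has independent error, so they are imperfectly correlated. People selected for extreme scores on the first test score, on average, closer to the mean on the second.

```python
# Illustrative simulation of regression to the mean: select the top 10% on
# test 1 and compare their average scores on test 1 and test 2.
import random

random.seed(1)
true_scores = [random.gauss(100, 10) for _ in range(10_000)]
test1 = [t + random.gauss(0, 10) for t in true_scores]
test2 = [t + random.gauss(0, 10) for t in true_scores]

def mean(xs):
    return sum(xs) / len(xs)

cutoff = sorted(test1)[-1000]                       # top 10% on test 1
extreme = [i for i, s in enumerate(test1) if s >= cutoff]

print("test 1 mean of extreme group:", round(mean([test1[i] for i in extreme]), 1))
print("test 2 mean of same people:  ", round(mean([test2[i] for i in extreme]), 1))
# The second mean falls back toward the overall mean of about 100.
```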
Replication and extension (replication with extension)
Is a replication that adds a new design element to the original study
Selection
Refers to situations in which, at the start of a study, participants in the various conditions already differ on a characteristic that can partly or fully account for the eventual results. (Experiments involve multiple conditions, and when between-subjects designs are used, the key to preventing this confound is to create equivalent groups at the start, which is achieved by randomly assigning participants to conditions.)
Sensitivity
Refers to the ability to detect an effect that actually is present
Single-blind procedure
Either the participants or the experimenters, but not both, are masked to each participant's condition
Statistical conclusion validity
Concerns the proper statistical treatment of data and the soundness of the researcher’s statistical conclusion
Wait-list control group
A group of randomly selected participants who do not receive a treatment, but expect to and do receive it after the treatment of the experimental group ends
Yoked control group
A group in which each control group member is procedurally linked (i.e., yoked) to a particular experimental group member, whose behaviour determines how both of them are treated
Efficiency Assessment
Weighs the program’s benefits and effectiveness in relation to its cost, to determine whether it is an efficient method for addressing the problem
Program Diffusion
Implementing and maintaining effective programs in other settings or with other groups
Quasi-experiment
Has some features of an experiment but lacks key aspects of experimental control
One-group posttest-only design
A treatment occurs, and afterward the dependent variable is measured once
One-group pretest-posttest design
A dependent variable is measured once before and once after a treatment occurs
Simple interrupted time-series design
A dependent variable is repeatedly measured at periodic intervals before and after a treatment
Selection Interactions
The interaction of selection with another threat to internal validity
Posttest-only design with a nonequivalent control group
Participants in one condition are exposed to a treatment, a nonequivalent group is not exposed to the treatment, and scores from both groups are obtained after the treatment ends
Pretest-posttest design with a nonequivalent control group
Pre- and posttreatment scores are obtained for a treatment group and a nonequivalent control group
Interrupted time-series design with a nonequivalent control group
A series of pre- and posttreatment scores are obtained for a treatment group and a nonequivalent control group
Switching replication design
One group receives a treatment and a nonequivalent group initially does not receive the treatment but is then exposed (i.e., switched) to it at a later point in time
Program evaluation
Involves the use of research methods to assess the need for, and the design, implementation, and effectiveness of, a social intervention
Needs assessment
Determines whether there is a need for a social program and the general steps required to meet that need
Program theory and design assessment
Evaluates the rationale for why a program has been, or will be, designed in a particular way
Process evaluation
Determines whether a program is being implemented as intended
Outcome evaluation
Assesses a program's effectiveness
Contamination
Occurs when knowledge, services, or other experiences intended for one group are unintentionally received by another group
Factorial Design
Includes two or more independent variables and crosses (i.e., combines) every level of each independent variable with every level of the other independent variables
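A brief sketch of crossing factor levels, with hypothetical factor names: itertools.product combines every level of each independent variable with every level of the other, yielding the full set of cells in a 2 × 3 design.

```python
# Sketch of a factorial crossing: every level of each factor is combined
# with every level of the other. Factor names and levels are hypothetical.
from itertools import product

factors = {"feedback": ["yes", "no"], "difficulty": ["easy", "medium", "hard"]}
cells = list(product(*factors.values()))

print(len(cells), "conditions:", cells)   # 2 x 3 = 6 crossed conditions
```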
External Validity
Concerns the generalizability of the findings beyond the present study
Replication
Refers to the process of repeating a study in order to determine whether the original findings will be upheld
History
Refers to events that occur while a study is being conducted and that are not a part of the experimental manipulation or treatment. Block randomization helps ensure that such outside events affect the various conditions equally.
Testing
Concerns whether the act of measuring participants' responses affects how they respond on subsequent measures. (Many experiments do not include a pretest because, due to random assignment, the participants in the various conditions are assumed to be equivalent at the start of the experiment.)
Experimenter expectancy Effects
Unintentional ways in which researchers influence their participants to respond in a manner consistent with the researcher’s hypothesis