Likert Scales
A set of items to which subjects respond with levels of agreement or disagreement
Force-choice questions
People give their opinion by picking the best of two or more options
Semantic differential format
a survey question format using a response scale whose numbers are anchored with contrasting adjectives
open-ended questions
questions that allow respondents to answer however they want
Double-barreled question
a type of question in a survey or poll that is problematic because it asks two questions in one, thereby weakening its construct validity
Negatively worded questions
a question in a survey or poll that contains negatively phrased statements, making its wording complicated or confusing and potentially weakening its construct validity
Response sets
a shortcut respondents may use to answer the items in a self-report measure with multiple items, rather than responding to the content of each item
Acquiescence
answering "yes" or "strongly agree" to every item in a survey or interview (Yea-saying)
Fence sitting
Only answering in the middle of the scale
Socially desirable responding/faking good
giving answers on a survey (or other self-report measure) that make one look better than one really is
Observer bias
Observers' expectations influence their interpretation of the participants' behaviors or the outcome of the study
Observer Effect
A change in behavior of study participants in the direction of observer expectations. Also called expectancy effect.
Masked design
Observers are unaware of the purpose of the study and the conditions to which participants have been assigned
Reactivity
A change in behavior when study participants realize someone is watching
Unobtrusive observations
make yourself less noticeable
Participant observations
Researcher lives alongside the populations they are studying
External validity
the extent to which the results of a study can be generalized to other situations and to other people
Internal validity
extent to which we can draw cause-and-effect inferences from a study
Biased sample
Some members of the population of interest have a much higher probability than other members of being included in the sample
Unbiased sample
all members of the population have an equal and known chance of being included in the sample
self-selection
A sample is known to contain only people who volunteer to participate
Probability sampling / random sampling
every member of the population of interest has an equal and known chance of being selected for the sample
nonprobability sampling
nonrandom sampling and can result in a biased sample
simple random sampling
sample is chosen completely at random from the population of interest
Systematic sampling
The researcher uses a randomly chosen number N and counts off every Nth member of a population to achieve a sample
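The two probability-sampling methods above can be sketched in Python using the standard-library `random` module (the population of member IDs here is hypothetical):

```python
import random

population = list(range(1, 101))  # hypothetical population of 100 member IDs

# Simple random sampling: every member has an equal chance of selection
simple_sample = random.sample(population, k=10)

# Systematic sampling: random starting point, then every Nth member thereafter
n = 10                       # sampling interval
start = random.randrange(n)  # randomly chosen starting point
systematic_sample = population[start::n]

print(len(simple_sample), len(systematic_sample))  # 10 10
```

Both yield a sample of 10, but systematic sampling only randomizes the starting point, not every selection.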
Cluster sampling
Clusters of participants within the population of interest are selected at random
Multistage sampling
Probability sampling technique involving at least two stages
Stratified random sampling
Researcher purposefully selects particular demographic categories, or strata, and then randomly selects individuals within each of the categories
Strata
Meaningful categories (ethnic or religious groups)
Clusters
Arbitrary (any random set of high schools)
Oversampling
Researchers intentionally overrepresent one or more groups
Random sampling
Researchers create a sample using some random method (Enhance external validity)
Random assignment
Assigning participants randomly to different treatment options (Enhance internal validity)
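The distinction above can be made concrete: random *assignment* shuffles an already-recruited sample into conditions. A minimal sketch with hypothetical participant IDs:

```python
import random

participants = [f"P{i}" for i in range(1, 21)]  # hypothetical participant IDs

random.shuffle(participants)            # randomize the order in place
half = len(participants) // 2
treatment_group = participants[:half]   # first half -> treatment condition
control_group = participants[half:]     # second half -> control condition

print(len(treatment_group), len(control_group))  # 10 10
```

Because assignment is random, preexisting differences among participants are spread across conditions, which is what protects internal validity.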
Purposive sampling
A biased sampling technique in which only certain kinds of people are included in a sample
Snowball sampling
Participants are asked to recommend a few acquaintances for the study
Quota sampling
The researcher identifies subsets of the population of interest and then sets a target number for each category in the sample (selected nonrandomly)
Frequency Claims
estimate the exact rate or value in a population
causal claim
argues that one of the variables is responsible for changing the other
Association claim
argues that one level of a variable is likely to be associated with a particular level of another variable
Social norms
The common beliefs and standards for behavior in a group
Experiment
A study in which the researchers manipulated at least one variable and measured another
Manipulated variable
the variable that is deliberately changed
Independent variable
variable that is manipulated
Conditions
One of the levels of the independent variable in an experiment
Where is the independent variable on a graph?
x-axis
3 rules of causation
Covariance, temporal precedence, and internal validity
Covariance
Measures how two variables change together, indicating the direction of their relationship
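A minimal sketch of sample covariance, using hypothetical data, shows how the sign captures the direction of the relationship:

```python
def covariance(x, y):
    """Sample covariance: average product of deviations from the means."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    return sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y)) / (n - 1)

hours_studied = [1, 2, 3, 4, 5]      # hypothetical predictor values
exam_scores = [60, 65, 70, 80, 85]   # hypothetical outcome values

print(covariance(hours_studied, exam_scores))  # 16.25 (positive: rise together)
```

A positive result means the variables tend to rise together; a negative result would mean one rises as the other falls.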
Temporal precedence
Ensuring the presumed cause (independent variable) occurs before the effect (dependent variable)
Design confound
A threat to internal validity in an experiment in which a second variable happens to vary systematically along with the independent variable and therefore is an alternative explanation for the results.
Systematic variability
The predictable, non-random fluctuation in data caused by specific, identifiable factors
Unsystematic variability
In an experiment, when the levels of a variable fluctuate independently of experimental group membership, contributing to variability within groups
Selection effect
When the kinds of participants in one level of the independent variable are systematically different from those in the other
matched groups
an experimental design technique in which participants who are similar on some measured variable are grouped into sets; the members of each matched set are then randomly assigned to different experimental conditions
Independent-groups design (Between-subject/groups design)
Separate groups of participants are placed into different levels of the independent variable
Within-groups design
Each person is presented with all levels of the independent variable
Posttest-only design (equivalent groups, posttest-only design)
Participants are randomly assigned to independent variable groups and are tested on the dependent variable once
Pretest/posttest design
Participants are randomly assigned to at least two groups and are tested on the key dependent variable twice - once before and once after exposure
Repeated-measures design
A type of within-groups design in which participants are measured on a dependent variable more than once, after exposure to each level of the independent variable
Concurrent-measures design
Participants are exposed to all the levels of an independent variable at roughly the same time, and a single attitudinal or behavioral preference is the dependent variable
Order effects
Exposure to one condition changes how participants react to later conditions, threatening internal validity
Practice effects
A type of order effect - participants' performance improves over time because they become practiced at the dependent measure
Fatigue effects
A type of order effect - participants' performance degrades over time because they become tired
Carryover effects
A type of order effect, some form of contamination carries over from one condition to the next
Counterbalancing
Presenting the levels of the independent variable to participants in different sequences to control for order effects
Full counterbalancing
All possible condition orders are represented
Partial counterbalancing
Only some of the possible condition orders are represented
EX: present the conditions in a randomized order for every subject
Latin square
A formal system to ensure that every condition appears in each position at least once
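The counterbalancing schemes above can be sketched in Python with hypothetical condition labels: full counterbalancing enumerates every order, while a cyclic Latin square puts each condition in each position exactly once using far fewer sequences:

```python
from itertools import permutations

conditions = ["A", "B", "C"]  # hypothetical condition labels

# Full counterbalancing: all n! possible orders (6 orders for 3 conditions)
full = list(permutations(conditions))

# Cyclic Latin square: each condition appears in each position exactly once
latin = [conditions[i:] + conditions[:i] for i in range(len(conditions))]
for row in latin:
    print(row)  # ['A','B','C'], then ['B','C','A'], then ['C','A','B']
```

With many conditions, full counterbalancing quickly becomes impractical (n! sequences), which is why partial schemes like the Latin square are used.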
Disadvantages of Within-Groups Designs
1. Potential for order effects (solution: counterbalancing)
2. Might not be practical or possible
3. Experiencing all levels of the independent variable (IV) changes the way participants act (demand characteristics)
Demand characteristic
A cue that leads participants to guess a study's hypotheses or goals
Advantages of Within-groups designs
1. Participants in your groups are equivalent because they are the same participants and serve as their own controls.
2. These designs give researchers more power to notice differences between conditions.
3. Within-groups designs require fewer participants than other designs.
manipulation check
an extra dependent variable that researchers can insert into an experiment to help them quantify how well an experimental manipulation worked
Pilot Study
a study completed before (or sometimes after) the study of primary interest, usually to test the effectiveness or characteristics of the manipulations
Two ways to express effect size
1. Original units of the dependent variable
2. Standardized effect size
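A common standardized effect size is Cohen's d: the difference between group means divided by the pooled standard deviation. A minimal sketch with hypothetical posttest scores:

```python
import statistics

def cohens_d(group1, group2):
    """Standardized effect size: mean difference over pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    s1, s2 = statistics.stdev(group1), statistics.stdev(group2)
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (statistics.mean(group1) - statistics.mean(group2)) / pooled_sd

treatment = [8, 9, 7, 10, 9]  # hypothetical posttest scores, treatment group
control = [6, 7, 5, 8, 7]     # hypothetical posttest scores, control group

print(round(cohens_d(treatment, control), 2))  # 1.75
```

Because d is expressed in standard-deviation units rather than the original units of the dependent variable, effect sizes can be compared across studies that used different measures.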
3 threats to internal validity
design confounds, selection effects, order effects
One-group, pretest/posttest design
An experiment in which a researcher recruits one group of participants, measures them on a pretest, exposes them to treatment, intervention, or change, and then measures them on a posttest (NO Comparison group)
Maturation threat
A change in behavior that emerges more or less spontaneously over time
How to prevent maturation threat?
Include an appropriate comparison group
History threats
result from a "historical" or external event that affects most members of the treatment group at the same time as the treatment, making it unclear whether the change in the experimental group is caused by the treatment received or by the historical factor
How to prevent history threats?
Include an appropriate comparison group
Regression threat
a threat to internal validity related to regression toward the mean, by which any extreme finding is likely to be closer to its own typical, or mean, level the next time it is measured (with or without the experimental treatment or intervention)
Attrition threat
In a pretest/posttest, repeated-measures, or quasi-experimental study, a systematic type of participant drops out of the study before it ends.
Testing threats
A specific kind of order effect: a change in the participants as a result of taking a test more than once
How to prevent testing threats?
Researchers may abandon the pretest altogether and use a posttest-only design. If they do use a pretest, they might use an alternate form of the test for the two measurements. A comparison group may also help.
Instrumentation threat
A measuring instrument changes over time
How to prevent instrumentation threats?
- Abandon the pretest
- Collect data from each instrument
- Retrain coders throughout the study
Selection-history threat
An outside event or factor affects only those at one level of the independent variable
Selection-attrition threat
only one of the experimental groups experiences attrition
Double-blind placebo control study
a study that uses a treatment group and a placebo group and in which neither the research staff nor the participants know who is in which group
Null effect
the independent variable did not make a difference in the dependent variable
Ceiling effect
All the scores are squeezed together at the high end
Floor Effect
All the scores cluster at the low end
noise
Too much unsystematic variability within each group
Measurement Error
a human or instrument factor that can inflate or deflate a person's true score on the dependent variable
How to prevent measurement error?
- Use reliable, precise tools
- Measure more instances
Individual differences
personal attributes that vary from one person to another
How to prevent individual differences?
- Change the design to a within-groups design
- Add more participants
Situation noise
unrelated events or distractions in the external environment that create unsystematic variability within groups in an experiment
Power
The likelihood that a study will show a statistically significant result when an independent variable truly has an effect in the population
Interaction effect
whether the effect of the original independent variable depends on the level of another independent variable
factorial design
there are two or more independent variables