producer
creating research and compiling data
consumer
analyzing and interpreting someone else's research
What do you gain by being a critical consumer of information?
Knowing what kind of claim is being made
• Knowing whether to believe it, how to apply it
Empiricism
Basing conclusions on systematic and rigorous observations
• Using evidence from the senses (sight, hearing, touch) or from
instruments that can help the senses (thermometers, timers, scales)
as the basis for conclusions
– rather than intuition, personal experience, or authority
The Theory-Data Cycle
A theory leads to
questions, predictions,
data, and potentially
updating your theory
falsifiability
a theory must lead to
hypotheses that, when tested,
could fail to support the theory
does a study prove a theory?
No. A study either supports or does not support a theory.
basic research
the initial, foundational stage: research aimed at enhancing general understanding rather than solving a specific practical problem
translational research
the bridge between basic and applied research: using insights from basic research to develop and test potential real-world applications
applied research
applying research findings to real-world problems and evaluating the results in real-world settings
universalism
scientific claims are supported by the merit of the claim, not by the researcher's reputation
communality
scientific knowledge and findings are built and shared by the community of scientists
disinterestedness
scientists aim to discover the truth, regardless of public opinion or their own interests
organized skepticism
scientists question everything, including their own theories
confound
an alternative explanation for
an outcome
comparison group
enables us to compare what
would happen both with and without the thing we are
interested in.
Research is probabilistic
its findings are not
expected to explain all the cases all the time
Alternatives to empirical research
personal experience
• intuition
• trusting authorities
Ways in which intuition is biased
We are swayed by good stories
– We are persuaded by what easily comes to mind
– We focus on evidence we expect
– We are biased about being biased
availability heuristic
a mental shortcut in which
people come to a conclusion based on what
information most easily comes to mind
confirmation bias
the tendency to look
only at information that agrees with what
we want to believe
bias blind spot
the belief that we are
unlikely to fall prey to the other biases
previously described
peer-review
the process in which a few
experts carefully read a paper that has been
submitted to a journal and tell the editor its
virtues and flaws
Empirical journal articles
Describe results of a study or studies
– Where research is first introduced to the world
– Thoroughly peer-reviewed
Review journal articles
not original research
– Review/summarize multiple papers that asked
similar questions or tested the same theory
– Can be found in same journals that publish original
empirical papers
– Sometimes include meta-analyses, which combine the results of many studies statistically
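Since meta-analysis is only mentioned in passing here, the following is an illustrative sketch of the simplest case, a fixed-effect meta-analysis: each study's effect size is weighted by the inverse of its sampling variance, and the weighted values are averaged. All numbers are made up.

```python
# Minimal fixed-effect meta-analysis sketch (illustrative, made-up numbers).
# Each study contributes an effect size d_i with sampling variance v_i;
# the combined estimate is the inverse-variance-weighted average.
effect_sizes = [0.40, 0.25, 0.55]  # hypothetical Cohen's d values
variances = [0.02, 0.05, 0.03]     # hypothetical sampling variances

weights = [1 / v for v in variances]
combined = sum(w * d for w, d in zip(weights, effect_sizes)) / sum(weights)
print(f"combined effect size: {combined:.3f}")
```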
abstract
summary of the theoretical
question, the methods used, the
results, and the interpretation of
the study.
introduction
The theory or theories relevant to
the current study
• The relevant background research
• A description of the current
study’s approach and design
Methods
Information on the participants
• The variables used
• The tasks used
• The stimuli and materials used
• The analyses used
Results
A verbal description of the
statistical results.
• Tables displaying numerical
results
• Graphs/figures displaying
quantitative results
Discussion
Summarize the results
• Compare results to previous literature
• Link the results back to the original
question or theory
• State any caveats or potential
problems with the study
Three claims
frequency, association, causal
Four validities
construct, external, statistical, internal
variable
something that varies in a research
study; it must have at least two levels
measured variable
observed and recorded
manipulated variable
controlled by the researcher, typically by assigning participants to its different levels
construct or conceptual variable
the abstract idea or concept
operational variable
how the variable will actually be measured
Frequency Claims
describe a particular level or degree
of a single variable.
Frequency claims involve
only one measured variable.
Association Claims
argue that the level of one variable is likely
to be associated with a particular level of
another variable.
• Association claims are supported by studies
that have at least
two measured variables.
• Variables that are associated are
sometimes said to correlate
zero association
the values of one
variable do not predict the values of the
other variable
• In zero associations the variables
do not covary
Causal Claims
argue that one variable is responsible for changing another variable
causal claims are only valid if they are
supported by experiments: studies that
have a manipulated variable and a
measured variable
construct validity
are the variables measuring
what they are supposed to measure?
Construct validity is about the quality of your measured or
manipulated variable and is important for all empirical claims
external validity
do the results generalize to
other people, times, or situations?
External validity is often invoked when considering how well the study
sample generalizes to the population of interest, but can also be
about specific choices the researchers make in their study design
statistical validity
how well do the numbers
support the claim?
Statistical validity refers to the extent to which statistical conclusions
are precise, reasonable, and replicable
internal validity
when a causal claim is made,
have alternative explanations been ruled out?
Have the researchers eliminated confounds?
support causal claims
(only) experiments
Three necessary criteria for establishing causation between
Variable A and Variable B
1. covariance
– the study’s results reveal that A and B covary
– can be a positive or negative association
2. temporal precedence
– the study’s methods ensure that A comes first in time, followed by B
3. internal validity
– the study’s methods ensure that there are no plausible alternative explanations
for the change in B; A is the only thing that changed
reliability
The same measure will yield similar results...
—if the same person takes it on different days
—when it is coded by different researchers
—across slightly different questions/versions
validity
measures what it's supposed to measure
Three Common Types of Measures
1. Self-report measures
2. Observational measures
3. Physiological measures
categorical/nominal variables
levels are qualitatively distinct categories;
order does not matter
quantitative variables
levels correspond to meaningful numbers. There are
three kinds of quantitative variables:
– ordinal scale
– interval scale
– ratio scale
ordinal scale
ranked order in which the
distance or interval between the levels does
not matter
interval scale
the distance, or interval,
between levels does matter, but there is no true
zero
ratio scale
the distance, or interval, between
levels does matter, and there
is a true zero
test-retest reliability consistency
consistent across time
interrater reliability consistency
consistent across researchers
internal reliability consistency
consistent across versions of question
test-retest reliability
consistent score each
time the measure is used for the same
participant
interrater reliability
consistent score no matter
who measures it
association direction
positive (r>0) or negative (r<0)
strong association strength
(r close to +/-1)
weak association strength
(r close to 0)
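To make direction and strength concrete, here is a minimal sketch using NumPy on made-up scores; the sign of r gives the direction of the association, and its distance from 0 gives the strength.

```python
import numpy as np

# Made-up scores on two measured variables (e.g., hours studied and exam score).
x = np.array([1, 2, 3, 4, 5, 6])
y = np.array([52, 60, 57, 68, 74, 79])

r = np.corrcoef(x, y)[0, 1]  # Pearson correlation coefficient
print(f"r = {r:.2f}")        # sign = direction; |r| near 1 = strong, near 0 = weak
```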
internal reliability
consistent score on
different versions of a question
The participant provides a consistent
pattern of responses, regardless of how
the researcher phrased the question
average inter-item
correlation (AIC)
average of all the correlations between the different items; good evidence for internal reliability
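A minimal sketch of computing the average inter-item correlation, assuming made-up responses from five participants to three items of the same scale: correlate every pair of items, then average those pairwise correlations.

```python
import numpy as np

# Made-up responses: rows are participants, columns are items of one scale.
items = np.array([
    [4, 5, 4],
    [2, 2, 3],
    [5, 4, 5],
    [3, 3, 2],
    [1, 2, 1],
])

r_matrix = np.corrcoef(items, rowvar=False)            # item-by-item correlations
pairs = r_matrix[np.triu_indices_from(r_matrix, k=1)]  # each item pair counted once
print(f"average inter-item correlation: {pairs.mean():.2f}")
```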
measure is valid
If it measures what it’s supposed to measure
face validity
it looks like it’s measuring what
it’s supposed to measure
content validity
it contains all the parts that
your theory says it should contain
criterion validity
the measure is related to key outcomes: it can predict what it is supposed
to predict; known groups can be used as a comparison
known-groups paradigm
examine whether scores on your
measure meaningfully differ between
groups whose behavior is already
well understood.
convergent validity
the measure is correlated
with other measures of the same construct
discriminant/divergent validity
the measure is less correlated with other
measures of different constructs
Survey Question Formats
forced-choice format
• Likert scale
• semantic differential format
• open-ended
forced-choice format
people
provide an opinion by choosing the
best of two or more options
Likert scale
people are presented with a
statement and use a rating scale to reflect
their degree of agreement
semantic differential format
people are
asked to rate a target object using a numeric
scale anchored by adjectives or statements
open-ended questions
people can answer
the question any way they like
well-worded questions
The way a question is worded can influence
participants’ responses
question order
The order of questions can influence responses
how accurate are self-reports?
people will sometimes give inaccurate
responses because they:
• want to use shortcuts
• want to look good
• don’t have access to the relevant info
response sets
a type of shortcut where
participants adopt a consistent way of
answering every question
acquiescence
a type of response set in which
participants respond “yes” or “agree” to every item.
the fix: reverse-worded items
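The cards do not cover scoring, but a common companion step to including reverse-worded items is reverse-scoring them before responses are combined. A minimal sketch, assuming a 1-5 Likert scale and made-up answers:

```python
# Reverse-score reverse-worded items on a 1-5 Likert scale:
# a response of 5 becomes 1, 4 becomes 2, and so on.
SCALE_MIN, SCALE_MAX = 1, 5

responses = {"item1": 5, "item2_reversed": 5, "item3": 4}  # made-up answers
reverse_worded = {"item2_reversed"}

scored = {
    item: (SCALE_MIN + SCALE_MAX - score) if item in reverse_worded else score
    for item, score in responses.items()
}
print(scored)  # an acquiescent "agree with everything" pattern no longer scores uniformly high
```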
fence-sitting
a type of response set in which a
participant stays neutral on every question instead
of committing to a yes/agree or no/disagree.
the fix: remove the neutral option
socially desirable responding
when a
participant gives answers that make them look
better than they really are; this decreases
construct validity
observational research
when a researcher
“watches” people or animals and systematically
records how they behave or what they are
doing
potential threats to construct validity of observational research
observer bias
• observer effects
• reactivity
observer bias
when a researcher’s biases,
beliefs, or expectations influence how they
interpret participants’ behavior
observer effects
when a researcher’s biases,
beliefs, or expectations influence the actual
behavior of the participants
preventing observer bias/effects
codebooks, multiple observers, masked research designs
reactivity
when participants change their
behavior when they realize they are being
watched
correlational studies
bivariate correlational design
• multivariate correlational design
– longitudinal study
– multiple regression
experiments
independent groups design
– posttest only
– pretest/posttest
• within groups design
– repeated measures
– concurrent measures
• factorial design
correlational study
a study using only
measured variables
bivariate correlational study
examines the association between exactly two measured variables
longitudinal designs
Collect the same variables at two different time points
multiple regression
Collect additional variables to rule out alternative explanations
correlational studies: summary
bivariate correlational designs can establish covariance
• longitudinal designs can help establish temporal precedence
• multiple regression can help establish internal validity
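To illustrate the multiple-regression point above, here is a minimal sketch (assuming statsmodels and made-up data): the outcome is regressed on the predictor of interest plus a possible third variable, so the predictor's coefficient reflects its association after controlling for that variable.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Made-up data in which a third variable drives both the predictor and the outcome.
third_var = rng.normal(size=200)
predictor = third_var + rng.normal(scale=0.5, size=200)
outcome = 2 * third_var + rng.normal(scale=0.5, size=200)

X = sm.add_constant(np.column_stack([predictor, third_var]))
model = sm.OLS(outcome, X).fit()

# The coefficient on the predictor, controlling for the third variable,
# comes out near zero, helping rule the predictor out as the explanation.
print(model.params)  # [intercept, predictor, third_var]
```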
experiment
a study using at least one
manipulated variable
independent groups design
participants experience only one level of the IV