treatment group
composed of subjects who receive a treatment that the researcher believes is causally linked to the DV
control group
composed of subjects who do not receive the treatment that the researcher believes is causally linked to the DV
rival explanation
alternative cause for the DV
marked with letter Z
threatens X's ability to explain observed differences in Y
confounder
pretreatment variable that is related to both the treatment and outcome
fundamental problem of causal inference
challenge of establishing a causal relationship between variables when the counterfactual condition to be compared is not observed
life isn't a video game; you can't start over or quit without saving
how do we know if a variable affects an outcome if we don't try it?
the best we can do is make reasonable approximations of unobserved outcomes
three things that need to be true when demonstrating the cause-effect relationship between two variables
the variables are positively or negatively correlated
the cause precedes the effect
the stimulus precedes the outcome
there are no alternative explanations for the cause-effect relationship between the two variables; another variable isn't the true cause of variation in both variables
hardest one to demonstrate
research design
overall set of procedures for evaluating the effect of the IV on a DV
poli sci researchers use it to estimate the effect of an IV and overcome the fundamental problem of causal inference
our ability to rule out alternative explanations depends on the power of our research design, an overall set of procedures for evaluating the effect of an IV on a DV
list a few kinds of research designs
experiments, qualitative designs, controlled comparisons, and more
experimental designs
ensures that the treatment group and the control group are the same in every way except one - the IV
random assignment
every participant has an equal chance of ending up in the control or treatment group (probability = .5)
allows us to measure the effect of an IV on a DV free from other factors that affect both the IV and the DV
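Random assignment can be sketched in a few lines of Python; the participant names here are hypothetical:

```python
import random

# Hypothetical participant list; names are illustrative only
participants = ["Ana", "Ben", "Cai", "Dee", "Eli", "Fay"]

rng = random.Random(1)  # seeded for reproducibility
# Each participant independently has probability .5 of each group
assignment = {p: rng.choice(["treatment", "control"]) for p in participants}

treatment_group = [p for p, g in assignment.items() if g == "treatment"]
control_group = [p for p, g in assignment.items() if g == "control"]
```

Because only chance determines group membership, any pretreatment differences between the groups should wash out as the number of participants grows.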
selection bias
nonrandom processes determine the composition of the test and control groups
random assignment also reduces the risk of this, as participants are not purposefully selected for specific characteristics that may affect the outcome variable
posttreatment measurement
assessing outcomes or variables after participants have received the experimental treatment
includes pre and post treatment phases
pretreatment phase
investigator might measure the value of the dependent variable in both groups
posttreatment phase
DV is measured again for both groups
pretreatment measurement
assessing outcomes or variables before participants receive the experimental treatment
allows researcher to make a before-and-after comparison (w/in treatment group) in addition to a comparison between control and treatment groups
do all experimental designs have pre and post treatment measures?
no, some experimental designs don't use pretreatment measures
in some cases, subjects may react to or learn from having the DV measured
what else can pretest measurements be used for?
assess the effectiveness of randomization and rule out the possibility that differences in outcomes are due to differences that existed before treatment
rather than using a pre-treatment measurement of the DV, the researcher can evaluate the success of random assignment by measuring and comparing variables other than the DV
internal validity
within the conditions created artificially by the researcher, the effect of the IV on the DV is isolated from other plausible explanations
limitation: the effect is isolated under artificial conditions and may not apply to the outside world
external validity
results of a study can be generalized; that is, its findings can be applied to situations in the non-artificial, natural world
field experiments
control and treatment groups are studied in their normal surroundings, living their lives as they naturally do, probably unaware that an experiment is taking place
conducted in the real world
divide people into two groups on the basis of the independent variable, those who received a contact (the treatment group) and those who didn't (the control group), and compare turnout rates
each individual has an equal chance of ending up in any group
multiple-group design
experiment design where multiple groups are compared, often to test the effects of different treatments or conditions
what can happen with field experiments?
though they have solid methodological foundations, they can be hampered by problems of validity
ex: get-out-the-vote studies: people don't want to answer the phone, respond to a letter, etc.
compliance with treatment
degree to which participants adhere to the assigned treatment conditions in an experiment
internal validity problem bc it involves the design of the protocol itself
average treatment effect
average difference in outcomes between the treatment and control group
provides an overall measure of the treatment's effect on the outcome of interest, including noncompliers in the estimate
average treatment effect on the treated
average effect of the treatment among individuals who actually received the treatment
considers only the treated individuals in the treatment group and compares their outcomes with the control group
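A toy Python illustration of the difference between the two estimates, using made-up turnout data (1 = voted) with two noncompliers in the treatment group:

```python
# Hypothetical turnout data. "assigned" is the randomized group; "treated"
# records whether the subject actually received the treatment
# (noncompliers in the treatment group have treated=False).
subjects = [
    {"assigned": "treatment", "treated": True,  "voted": 1},
    {"assigned": "treatment", "treated": True,  "voted": 1},
    {"assigned": "treatment", "treated": False, "voted": 0},  # noncomplier
    {"assigned": "treatment", "treated": False, "voted": 1},  # noncomplier
    {"assigned": "control",   "treated": False, "voted": 0},
    {"assigned": "control",   "treated": False, "voted": 1},
    {"assigned": "control",   "treated": False, "voted": 0},
    {"assigned": "control",   "treated": False, "voted": 0},
]

def mean(xs):
    return sum(xs) / len(xs)

control = [s["voted"] for s in subjects if s["assigned"] == "control"]

# ATE: everyone assigned to treatment, compliers and noncompliers alike
assigned_treatment = [s["voted"] for s in subjects if s["assigned"] == "treatment"]
ate = mean(assigned_treatment) - mean(control)

# ATT: only those who actually received the treatment, vs. the control group
actually_treated = [s["voted"] for s in subjects if s["treated"]]
att = mean(actually_treated) - mean(control)
```

With these made-up numbers the ATE is 0.5 and the ATT is 0.75: including the noncompliers dilutes the measured effect.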
what happens when researcher canât assign varying values of the independent variable?
the researcher selects observations for analysis and uses sample data to test hypotheses about cause-effect relationships in the population
sampling frame
population the researcher wants to analyze and the source from which samples are drawn
random sample
sample that has been randomly drawn from the population
researcher ensures that every member of the population has an equal chance of being chosen for the sample
compositional difference
differences in the groups being compared, which can affect research outcomes and distort estimated effect of a treatment (or another IV)
be careful not to confuse random assignment and random sampling
response bias
when some cases in the sample are more likely than others to be measured
ways to minimize selection bias
simple random sample, stratified random sample, systematic sample, cluster samples
simple random sample
sample of observations generated when each member of a population has an equal probability of being selected for the sample
ex: list everyone's name from 0001 to 1600
use a random number generator
leaves the sample composition entirely to chance
might not be representative of everyone
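The numbered-list example above can be run as a short Python sketch (the 1600-student population is hypothetical):

```python
import random

# Hypothetical population of 1600 students, numbered 0001 through 1600
population = [f"{i:04d}" for i in range(1, 1601)]

rng = random.Random(42)
# random.sample draws without replacement; every member has an equal
# chance of selection, leaving the composition entirely to chance
srs = rng.sample(population, k=100)
```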
stratified random sample
random sample produced by dividing the population into distinct subgroups (strata) and then randomly sampling from each subgroup
ex: divide student population by class (freshmen, sophomores, juniors, seniors)
unlike a simple random sample, the sample's composition is not left entirely to chance
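A minimal sketch of the class-year example, with made-up student IDs; sampling within each stratum guarantees every class year appears in the sample:

```python
import random

# Hypothetical strata: the student body divided by class year
strata = {
    "freshmen":   [f"fr{i:03d}" for i in range(400)],
    "sophomores": [f"so{i:03d}" for i in range(400)],
    "juniors":    [f"jr{i:03d}" for i in range(400)],
    "seniors":    [f"sr{i:03d}" for i in range(400)],
}

rng = random.Random(0)
# Randomly sample *within* each stratum, so no class year can be missed
stratified = {year: rng.sample(members, k=25)
              for year, members in strata.items()}
```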
systematic sample
type of random sample. every kth observation is selected for the sample, beginning with a random starting point between 1 and k
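A sketch of the every-kth rule in Python, reusing the hypothetical 1600-member population; with k = 16 this yields a sample of 100:

```python
import random

def systematic_sample(population, k, seed=None):
    """Take every kth member, starting from a random point in the first k."""
    rng = random.Random(seed)
    start = rng.randrange(k)  # random starting point within the first k
    return population[start::k]

sample = systematic_sample(list(range(1, 1601)), k=16, seed=7)
```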
cluster samples
used when the population of interest is hard to define but occupies a definite geography
could be used in combination with other methods in a multistage sampling strategy
online samples
surveys conducted over the internet, often used for collecting data from a large and diverse population
gained popularity due to cost-effectiveness and the ability to reach many respondents quickly
but concerns about representativeness of online samples and generalizability of the findings
not everyone has access to internet
underrepresentation (older people, certain demographics, those involuntarily excluded, etc.)
researchers can apply weights to adjust for discrepancies between the sample and the population on key variables
sampling weights
adjustments applied to survey data for unequal probabilities of selection or nonresponse bias
used if researchers know how their sample observations compare to the population on key dimensions
what can happen when sampling weights are used?
subpopulations that are underrepresented in the sample compared to their prevalence in the population get weighted more heavily, and subpopulations that are overrepresented have their weights lowered
correct for the systematic errors caused by nonrepresentative samples without changing the effective sample size
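A toy weighting example in Python, with made-up numbers: suppose the population is 50% group A and 50% group B, but the sample came back 25% A / 75% B. Each observation's weight is its group's population share divided by its sample share:

```python
# Hypothetical survey responses as (group, support) pairs:
# group A supports at 80% (20 of 25), group B at 40% (30 of 75)
sample = [("A", 1)] * 20 + [("A", 0)] * 5 + [("B", 1)] * 30 + [("B", 0)] * 45
pop_share = {"A": 0.5, "B": 0.5}

n = len(sample)
sample_share = {g: sum(1 for grp, _ in sample if grp == g) / n
                for g in pop_share}
# Underrepresented A gets weight 2.0; overrepresented B gets 2/3
weights = {g: pop_share[g] / sample_share[g] for g in pop_share}

unweighted_support = sum(y for _, y in sample) / n
weighted_support = (sum(weights[g] * y for g, y in sample)
                    / sum(weights[g] for g, _ in sample))
```

The unweighted estimate is 0.5, while the weighted estimate recovers the true population rate of 0.6 (0.5 × 0.8 + 0.5 × 0.4); the weights sum to n, so the effective sample size is unchanged.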
likely voter model
model that weighs responses based on predictions about which respondents are likely to vote in an election
when pollsters obtain a random sample of registered voters, they try to identify which respondents are most likely to vote and analyze preferences of the likely voters
this means dropping or lowering the weight attributed to responses from respondents who are unlikely to vote and increasing the relative weight of respondents who are most likely to vote to accurately estimate candidates' vote shares in an election
convenience sample
researcher selects cases that can be studied most easily
studying cases close at hand
ex: academic professors using undergrad students for research (like kertzer!)
snowball sample
ask people they select for analysis to help identify others who could participate in the research
often used to conduct exploratory analysis on a hard-to-study population
ex: if you wanted to understand what motivates hacktivists, people who hack into computer networks to advance political goals, you would find it impossible to conduct a random sample of the population because they don't publish their names and contact information in a directory
might instead ask the hacktivists you're able to contact for help contacting other hacktivists for your research project
purposive sample
aka judgmental sample, researcher selects cases that offer the best test of the research hypothesis
some members of the population may be considered "bellwether" observations, so representative of the population that they are thought to be especially useful for measurement purposes
most similar systems design
helps researcher evaluate possible explanations for variation in the DV
might select cases that seem similar but have varying values of the dependent variable
most different systems design
uses cases that are different in many respects except for sharing similar values of the dependent variable to rule out their dissimilarities as potential explanations for variation in the dependent variable
three core principles when conducting research on humans
respect, beneficence, justice
respect for persons
treat w/ respect and dignity
informed, prior consent before experiment
no exploiting authority to coerce
extra cautious when working with vulnerable populations (minors, prisoners who have diminished autonomy)
beneficence
minimize risk of harm and maximize benefits of experiments to those who participate
political topics can traumatize subjects
maintain anonymity if they prefer it
justice
random assignment and sample selection should be fair
no exploitation of certain groups
public funds should be used for public interest, not private
institutional review board (IRB)
independent ethics committee that reviews and approves research involving human participants to ensure participants' rights and welfare are protected
if a project is not eligible for exemption, it may be subject to expedited review by a single IRB staff member
informed prior consent
participantsâ voluntary agreement to participate in human subject research after being fully informed about the studyâs purpose, procedures, risks, and benefits
what do some poli sci journals require authors to do with their research?
make replication materials available upon publication; many authors do so voluntarily even when not required
includes datasets, computer code, other files that allow others to exactly reproduce an articleâs results
what's something all researchers should never do?
fabricate or publish fraudulent research
post hoc theorizing
change the hypothesis and underlying theory after collecting data in order to predict results in line with the data
this is bad. do not do
P-hacking
purposely manipulating statistical analysis to achieve statistically significant results
double-blind peer review process
process before a poli sci paper gets published
reviewers don't know who the author is and the author doesn't know who the reviewers are
publishes articles based on merit
most journals receive more submissions than they can accept
benford's law (aka first-digit law)
the leading digits of numbers in naturally occurring datasets are not uniformly distributed but follow a specific pattern
probability of the first digit being a particular number is not uniform but, instead, follows a logarithmic distribution
the digit 1 is the most common leading digit in real datasets, occurring about 30.1% of the time, followed by 2 (17.6%), 3 (12.5%), and so on, with 9 being the least common (4.6%)
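The logarithmic distribution above is one line of Python; the probabilities match the percentages listed and sum to exactly 1:

```python
import math

# Benford's law: P(first digit = d) = log10(1 + 1/d)
benford = {d: math.log10(1 + 1 / d) for d in range(1, 10)}
# benford[1] ≈ 0.301, benford[2] ≈ 0.176, ..., benford[9] ≈ 0.046
```

Comparing a dataset's observed leading-digit frequencies against these probabilities is how the law is used to screen for fabricated data.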