* personal experience * Intuition * Authority - experts * Appeals to tradition, custom, faith * Magic, superstition & mysticism * Insufficient/incomplete data * No/biased inquiry
2
New cards
scientific method
* Systematic process * falsifiable theories * Replication * Reflective & self-critical approach * Cumulative & self-correcting process * Cyclical process
3
New cards
Positivism
* Auguste Comte * Positivism = search for the truth through systematic collection of observable facts * Sociology = scientific study of the social world
4
New cards
classical positivism
* naturalism: social science follows the natural sciences * Empiricism: knowledge of the world is limited to what can be observed and measured * Laws: the social world is subject to regular and systematic processes (laws are explanatory and predictive) * Induction: observation → theory * Cause-and-effect relationship: observable constant conjunction (D. Hume) * Science is objective and value-free (based on precise measurements)
* Karl Popper * Rejection of induction * Rejection of verifiability * Falsification
7
New cards
Deductive-Nomological model
* C. G. Hempel * An observed phenomenon is explained if it can be deduced from a universal law-like generalisation * A law expresses a necessary connection between properties (an accidental generalisation does not)
8
New cards
Hypothetico-deductive model
* tests the ability of a law to predict events * Law → hypothesis → explicit predictions * Prediction correct = hypothesis corroborated * Prediction incorrect = hypothesis falsified
9
New cards
Scientific realism
* similarities to positivism * social and natural worlds (sciences) are similar * Realism: objective reality exists * Key difference * reality can consist of unobservable elements as well * We can test these indirectly with indicators that reflect relationships * Assessment by observable consequences * “Best” theory is the one that explains phenomena the “best” * Mechanisms (Tilly) * environmental * Cognitive * Relational
10
New cards
interpretivism
* social world and natural world are fundamentally different * Social world * subjectively created * Understanding human behaviour by interpretation of meaning of social behaviour
“One can do a good job predicting what a study will find by knowing the preferences of the scholars who undertook it”
* critical theory: normative and empirical theory can’t be separated * Positivism: distinguishes normative theory from empirical theory * Robert Cox: all theory is normative * Max Weber: distinguish them, but values cannot be ignored
12
New cards
Normal science & scientific revolutions
* Thomas Kuhn * Science is a social construction * Scientific community subscribes to a common view, paradigm, or conceptual scheme * Defines objects, norms and methods of investigation * Truth is based on consensus in scientific community
13
New cards
scientific research programs
* Imre Lakatos * Falsification and the methodology of scientific research programs * Scientific research programs = incremental, cumulative, progressive articulation of research programs leads to the growth of scientific knowledge * Theories have a hard core with a protective belt of auxiliary hypotheses * Novel facts = distinguish progressive from degenerating research programs
14
New cards
What makes research question relevant?
* scientific relevance / importance * Social relevance / importance
15
New cards
What makes a research question useful?
* Is the research question specific enough to allow your research to produce a result? * RQ has to be researchable (possible to answer) * RQ has to be new (unanswered)
16
New cards
Finding good research question
* depends on individuality * RQ should reflect real-world events and problems * RQ can be inspired by gaps or controversies in the literature * Puzzle - unexpected contradictions
17
New cards
RQ: Typical steps
* general research questions / working hypothesis * Literature review → shapes RQ * Theory / theoretical framework * relevant concepts and factors * Expectations and hypotheses * Research design → data and sources
18
New cards
RQ types: Descriptive
* academic audience * Used when we have no idea how something (e.g. a crisis) works * Describing what is going on * Observation and describing things * First step before explanation
19
New cards
RQ types: Explanatory
* academic audience * Most typical RQ * Based on causal mechanism * Explanations should be universal
* applied research and consulting * EX.: climate change * What policies are necessary to accomplish a goal (if we want to accomplish it, what do we need to do?)
22
New cards
RQ types: Normative
* academic audience * Philosophical * Value judgements of norms and values
23
New cards
Literature search: Sources
* databases (online, Google Scholar,…) * Reviews / state-of-the-art articles * handbooks/encyclopaedias * Annual Review of Political Science * Snowball sampling - following references
24
New cards
Literature review
* Summarise - outline relevant existing research / knowledge / theories / methods / evidence * Evaluate - identify the contributions (strengths) and limitations (weaknesses) of existing research * Conceptualise - use it to define key concepts * Develop general info into specific research questions / hypotheses
25
New cards
Types of theories
* scope / level: grand theory vs. Middle-range theory * Process: inductive vs. Deductive * Nature of question: empirical vs. Normative
26
New cards
Grand Theory
* abstract * “Big theories” * EX.: Behaviouralism, Marxism, RCT * Claims to explain everything * Can’t really be tested * Process * coding → close (tentative) coding of collected data * Sorting → compare, sort, synthesise the codes * Memo writing → write memos outlining codes
* mediator → mediates between cause and effect * Moderator → plays a role in the relationship (suppressor, reinforcer, defector) * Confound → explains both variables
33
New cards
Research design: definition
\- “strategy for investigating the research question in a coherent and logical way, including what kind of data is needed, how the data is collected and what methods of analysis will be used”
* how can we answer our research question?
34
New cards
Research design: types
\- experimental
* experiments * Randomised intervention / treatment * Causal factor randomly assigned * Allows researchers to intervene in reality
35
New cards
Research design: types
\- cross-sectional & longitudinal
* cross-sectional * analysing sample or cross-section of population at a single point in time * Random selection * Longitudinal * explore changes over time * Panel - same units studied with same study over time * Cohort - same study conducted on different sample
36
New cards
Research design: types
\- comparative
* deliberate choice of cases * Case studies * In depth process tracing * Popular in IR & ICP
37
New cards
Research design: types
\- historical
* similar to case study design * Events or processes
38
New cards
Milgram study
* hypothesis - individuals will obey requests by an authority even if the request is considered unethical * Main result - 2/3 of participants continued to the maximum shock level * Full debriefing and follow-up
39
New cards
Stanford experiment
* Hypothesis - personality traits of prisoners & guards are key to understand abusive prison situations * 2 weeks prison simulation * 24 participants * Random “guard” & “prisoner” assignment * Results - quickly spins out of control; 1/3 of “guards” exhibit sadistic tendencies
40
New cards
Research ethics: basic principles
* do no harm * Informed consent * Protection of privacy / confidentiality * Transparency and documentation
41
New cards
Research ethics: behaviour of researcher
* avoiding bias that can produce misleading results * No incorrect reporting * No inappropriate use of information
42
New cards
Research Ethics: behaviour of sponsors
* no restrictions imposed by the sponsor onto the research and the results * No misuse of information
* direct quotations * Summarising / paraphrasing content
44
New cards
Threats to validity: internal validity, external validity, reliability
* internal validity → ability to draw causal inferences
* External validity → generalisability of research findings * Reliability → consistency and replicability of findings
45
New cards
Threats to internal & external validity
* history → context effects - a single outside event disrupts the whole study * Maturation → subject effect - subjects change over time * Mortality - subjects dropping out or dying * Testing / performance effects → sensitisation - people get tired of long test participation * Instrumentation / instability - quality of the tool used * Selection bias / generalisability * Statistical regression → “regression towards the mean”
46
New cards
Data: types of data
* primary vs secondary data * Quantitative vs qualitative data * Data by source * people → interviews, surveys * Observation → actions, field notes * Documents * Secondary sources → official statistics
47
New cards
Unit of analysis: fallacies
\- unit of analysis → entity that is studied
* ecological fallacy → imposing group level characteristics on individuals * Individualistic / exception fallacy → characterising groups by individual characteristics * can be done if using representative sample
48
New cards
measurement: Conceptual goodness
* Gerring * Familiarity → established usage * Resonance → cognitive click * Parsimony → as simple as possible * Coherence → internal consistency * Differentiation → clear difference from other concepts * Depth → ability to bundle many characteristics * Theoretical utility → whether the concept is useful to new research * Field utility → whether the concept allows new observations * Criteria require trade-offs
49
New cards
Conceptualisation
\- defining important factors researcher is interested in
* abstract definition
50
New cards
Operationalisation
\- way of measuring
* defining how something is measured
51
New cards
Measurement validity & reliability
* Validity = accuracy (measuring the concept we intend to measure) * Reliability = precision (consistency of repeated measurements) * Free of systematic error → unbiased * Low variance / random error → efficient (illustrated in the sketch below)
* test-retest reliability * Consistency across indicators → internal consistency * Consistency across researchers → inter-rater reliability
55
New cards
coefficient
\- quantitative measure for internal consistency
* range: 0.00 – 1.00 * Rule of thumb: * minimum → 0.70 * Desirable → 0.80 * Cronbach’s Alpha → correlation of all indicators * Split-half method → combine all indicators into two sets / measure & correlate the two measures (see the sketch below)
56
New cards
Measurement validity
\- how accurate is our measurement
* face validity * Content Validity → theory based * idea of conceptualisation and observation * Theoretical level matches observation / measurement * Criterion validity * concurrent and predictive validity
→ concurrent - different measures of the same concept expecting them to be related
→ predictive - using external criteria in the future (measured after) * Convergent validity → measures of the same concept should correlate * Discriminant validity → specific; measures only what they are supposed to measure
57
New cards
Comparative method
\- “rules, standards, procedures for identifying and explaining differences and similarities between cases, using concepts that are applicable in more than one case”
* core method of empirical science * Use: to develop theory, to test theory & to apply existing theory to new cases * Multiple-N helps to avoid: false uniqueness & false universalism
58
New cards
Single case study
* focus on single case * Detailed (“thick”) description → internal validity * Engage wider academic discussion → external validity * Data collection → interviews, surveys, official statistics, process tracing * Purpose → descriptive contextualisation, examine exceptions, applying theories, generate theory
59
New cards
Single case selection criteria
* critical / crucial - to test/confirm/disconfirm theory * Revelatory - reveals a relationship which cannot be studied by other means * Unusual/deviant - throws light on an extreme case
60
New cards
Single case study: advantages & disadvantages
* Disadvantage * low external validity * Lack of content → uncertainty about conclusions * advantage * rich/thick description * Good match of theory and evidence * High internal validity
61
New cards
Small-N comparison
* analysis of a limited number of cases * Advantages * detailed in-depth analysis * Better ability to contextualise * Disadvantages * High risk of selection bias * Causality tends to be deterministic (not probabilistic)
62
New cards
Most similar system design
\- identify causal factor that explains dissimilar outcomes in otherwise similar cases
* cases are similar in most characteristics * Cause (IV) and Effect (DV) are different
63
New cards
Most different system design
\- identify causal factors that explain similar outcomes in very different cases
* cases are different in most characteristics * Cause (IV) and effect (DV) are similar
64
New cards
Selection by outcome
* selection driven by outcome (effect) researcher wants to explain * Risk of selection bias
65
New cards
Large-N comparison
* advantages * large number of cases lowers risk of selection bias * Can account for many explanatory factors simultaneously * Disadvantages * limited ability to capture causal processes * “Thin” concepts and theories * Concept stretching * equivalence of meaning across cases * Using a concept which does not fully fit new cases * EX.: concept of democracy * Case selection = sampling * representative sample of population
66
New cards
Qualitative Comparative Analysis
\- formalised systematic comparison
* truth table → list of cases with relevant conditions & outcomes (a toy example is sketched below) * crisp set → absent = 0 or present = 1 * Fuzzy set → interval scale from 0.00 to 1.00 * Analysis → process of paired comparison (all possible combinations of factors/conditions) to generate or test summaries
\- vs. Quantitative analysis
* involves case selection, data collection and statistical data analysis * Advantages * large numbers of cases lowers risk of selection bias * Disadvantages * limited ability to capture causal process * “Thin” concepts and theories * Equivalence of meaning across cases → concept stretching
67
New cards
Historical research
* developed as a separate field, but has always been part of political science * Temporality → context & timing * Timing (historical institutionalism) * critical junctures → crucial events that changed history * Positive feedback → outcomes that reinforce the earlier decision * Path dependence → commitment to a certain development path
68
New cards
Historical research: Typical methods
* narrative case studies * Process tracing * Event Structure Analysis
69
New cards
Historical research: Event Structure Analysis
\- analytic procedure to “unpack” an event into intermediary causal steps or constituent parts
* why did it happen? * Step 1 → construct narrative account of what happened * Step 2 → break narrative into series of short statements * Step 3 → order statements into a diagram that reflects the causal sequence or relations
70
New cards
Historical research: Process tracing
\- identification of causal chain
* “domino theory” * Theory must predict all intervening steps * Theory development & theory testing * EX.: Ulriksen and Dadalauri
* parallel demonstration of theory → show applicability of general theory across variety of cases
* Contrast of contexts → set limits to the scope or claims of an overly general theory * Macro-causal analysis → make causal inferences about macro-historical processes and structures
72
New cards
Historical research: Data
* primary source * historical documents, writings, artefacts by politicians * Stored in archives and libraries * Secondary source * interpretation, commentary, analysis by scientists * Key task → establish authenticity, reliability & accuracy of information
73
New cards
Historical research: Example “Ulriksen and Dadalauri”
* using in-depth process tracing and a single case study for theory testing * Case: Tax policy in Georgia * Data: * documents → policy proposals, recommendations by experts * Interviews with key actors → transcripts * Key finding: * the transnationalisation model is supported in later stages of the reform process (not at the beginning) * Article provides a useful model / template
74
New cards
Case selection & sampling
* common purpose → selection of a subset of cases (sampling units) from a population * deliberately or random sampling
* Crucial steps
1. Defining full set of data sources (population / universe of cases) → sampling frame 2. Selecting specific subset / sample of data sources from population → sampling unit * Key difference (in the second step) * case selection → deliberate, strategic, purposive (not random) * Sampling → probabilistic / non-probabilistic
* use → theory testing (confirmatory) * Representativeness → yes
77
New cards
Case selection: Techniques
Diverse cases
\- select countries so that they represent different combinations of relevant values
* use → theory testing & theory generating (confirmatory & exploratory) * Representativeness → maybe
78
New cards
Case selection: Techniques
Extreme cases
* use → theory generating (exploratory) * Representativeness → no
79
New cards
Case selection: Techniques
Deviant cases
\- cases far from average but not extreme
* use → theory generating (exploratory) \[theory testing (confirmatory)\] * Representativeness → maybe
80
New cards
Case selection: Techniques
Influential cases & crucial cases
\- cases that have a disproportionate effect on the “red line”
\- removing these cases would change the outcome
* use → theory testing (confirmatory) * Representativeness → no
81
New cards
Case selection: Techniques
MSSD & MDSD
* use → theory testing & theory generating * Representativeness * MDSD → maybe * MSSD → usually based on differences
82
New cards
Survey types
Cross-sectional
* people do the survey at the same time * Most common approach * One sample at a time (snapshot)
83
New cards
Survey types
Longitudinal
\- longer period of time
* cohort study → pooled cross-sectional time series * same survey with new people * Panel survey → cross-lagged causal analysis * studying the same people over time
* Rolling cross-section → dynamic changes and trends * interviews are spread over time * Trend study → continuous time series * aggregate multiple samples from different surveys
84
New cards
Survey types
Non-scientific & unethical polls
* push polls → used to spread negative (campaign) information * Frugging → fund-raising under the disguise of research * Sugging → selling under the disguise of research
85
New cards
Surveys: methodological issues
* testing hypotheses (internal validity) * correlational design → look for the patterns but never a definite conclusion * Goal: more prediction than explanation * Sampling unit / unit of analysis * cross-level inferences * Ecological fallacy
86
New cards
Questionnaire design: issues
* reactivity * Close-ended vs open-ended questions * Response scales * Question order & wording
\- drawing a sample as a subset of the population, studying the sample, and using the information obtained to make inferences about the population
89
New cards
Sampling: the goal - drawing inferences
* population → we study the sample; the information from it is a statistic, which we then use to estimate the population parameter * Characteristics of the population cannot be observed directly * From the small sample we choose, we can use the information and translate it into statistics (see the sketch below)
90
New cards
Sampling: the problem - sampling bias
* selection bias → researcher * Response bias → participant * Self-selection bias → participant (if they participate or not)
91
New cards
Sampling: the solution - random choice
* probability sampling
* Requirements for random selection * equal probability of being chosen * Observer cannot predict which units are chosen other than with chance probability * Sample must include any possible combination of units from the sampling frame
92
New cards
probability sampling
Simple random sampling
* gold standard * With replacement * Without replacement * Systematic random sample
93
New cards
probability sampling
Stratified sampling
* simple random sampling with known subgroups (based on census) * Disproportionate sampling possible → re-weighting
94
New cards
probability sampling
(Multistage) cluster sampling
* population → equivalent & internally heterogeneous groups (clusters) * Sampling in stages * Selection probability of clusters proportionate to size
95
New cards
Non-probability sampling
Convenience sampling
* available respondents, e.g. participants and students * Volunteer sample → ask for volunteers, offer some incentive
96
New cards
Non-probability sampling
Purposive sampling
\- Selects the subjects based on specific characteristics
* snowball sampling * Quota sampling
97
New cards
Surveys: response rate
* contact rate → % of selected individuals contacted * Cooperation rate → % of individuals participating * Surveyed rate → % of respondents surveyed too often * Response rate → completed interviews/surveys * Recommended → pre-notification mailings
98
New cards
Surveys: weighting
* misrepresentation * Available a priori information * Post hoc corrections * Systematic errors cannot be fixed by weighting * Use of weights in statistical analyses
99
New cards
Sampling error
* random / non-systematic error * If the sample becomes smaller, the sampling error becomes bigger
100
New cards
Sampling size
* determines how precise the measurement will be * Depends on homogeneity and needed details * Larger samples decrease sampling error and increase statistical power