IPS Lecture 11- Research Design and the Comparative Method


23 Terms

1

What is research design?

(essentially, different approaches to methodology)

What designs suit our questions best?

Why can't we choose ontological+epistemological positions based on our research question?

--> Skin, not a sweater! (Marsh and Furlong: ontological and epistemological positions are a skin, not a sweater; they cannot be changed at will to suit the research question)

We should know what we're looking for before choosing our designs (found within ontology+epistemology)

--> think of the David Marsh table:

- quantitative data --> positivists

- qualitative data --> hermeneuticists

(these are preferences, what data they "privilege")

- Aim of research - to expand boundaries of existing knowledge

- Getting valid answers to research questions in a reliable and efficient way

- Applied (operational) epistemology - how do we know?

(L11, slide 2)

2

Three levels of generality

Level 1: most general

- Ontological, epistemological, broad theoretical outlook (for example, which theoretical vantage point to adopt when approaching the question of political inequality: feminist, Marxist, positivist, etc.)

Level 2: less abstract, more operational

- Research goal and question, relation with theory, conceptualization, operationalization, class of research methodology (qualitative, quantitative, both)

Level 3: most specific

- Cases, variables, evidence

(L11, slide 3)

3

Research questions (to expand our knowledge!) and research goals

The general aim of the research:

- Advance the 'state of the art' (state of existing knowledge, scientific literature)

The motivation of research question:

- Puzzles (= surprising, strange, rare/unique sets of results; e.g. a coup d'état in a solid democracy) and lacunae (= gaps in the research: a phenomenon that hasn't been widely studied or adequately addressed)

- Substantive problem: current concerns (e.g. spillover of the Ukraine conflict on the rest of the world)

(L11, slide 4)

4

Research question and research goals

What sort of research are we going to be doing?

(positivists: causal explanation; interpretivists/hermeneuticists: interpretation/understanding)

- (Scientific) description

- (Causal) explanation and prediction

- Interpretation

- Problem-solving

- Combinations possible!

(L11, slide 5)

5

Research goal: (scientific) description

Essential element: classification

- doesn't have to be perfect (important for awareness and structure)

- the depth of description depends on the number of cases

--> few cases: more systematic, comprehensive

--> many cases: more circumspect, minimalist

- Analytical dimensions - often theoretically informed (conceptualization)

- Generalizability of case description to a larger population (external validity)

- Implications for both empirical and normative theory

- Comprehensive representation of one / few case(s) (many variables) OR

- Classification of many cases (one or few variables)

(L11, slide 6)

6

Description goes with classification

- KEY ASPECT

- 'The value of scientific description is in selecting a set of important analytical dimensions on which to project the empirical world' (LMS 223)

- Classifications imposed on the social/political world

- Classification is the first step to theory

--> a way of describing things in order to theorize and subsequently use data to try to come up with causal explanations (e.g. voting turnout: classification can be done by age, educational or socio-economic background, location, etc.)

(L11, slide 7)

7

Foucault, The Order of Things, preface

- Foucault is saying that classification is imposed on the world rather than given by it (not found 'de natura')

- there are infinite modes of classification

--> some of the "most natural" ones, might not appear natural to others

--> some are "starkly impossible" (Foucault)

--> we need to be careful when classifying: our taxonomies should be based on clarity, utility and so on

(L11, slide 8)

8

Research goal: (causal) explanation

- explanation is the focus

- Not deterministic in the social sciences, but probabilistic (recall Hume: causality can't be directly observed --> one needs to think about plausible mechanisms, hypotheses)

- If the cause had been different, the outcome would have been different

- Causal mechanisms:

1) have scope for agency

2) it isn't realistic to gather enough information to predict with certainty; we should use relevant statistical data to predict PROBABILISTICALLY

- Strength of causal effect

- Retrospective (what was)

- Prospective (what would be)

(L11, slide 9)

9

Increase in explanatory research over time (%)

(look at graph on slide 10)

- there has been an upwards trend in explanatory research since the emergence of behaviouralism (1950s and 1960s)

- explanatory research has been perceived as more prestigious

(L11, slide 10)

10

Research goal: interpretation

- Interpretivists reject a 'scientific concept of causation' (Bevir and Rhodes 2016)

- Political actions are the product of subjective reasons, meanings and beliefs

- So we should focus on understanding rather than explanation (Weber)

- And on constitutive rather than causal arguments (Wendt)

- Is this distinguishable from 'thick description' (=interpretation of texts and data)?

(L11, slide 11)

11

Conceptualization and Operationalization

- Concepts are abstract and unobservable

(conceptualisation has to be judged by

1) usefulness and 2) coherence)

- They are contested

- Concepts can have an inductive element (how the terms are used in politics/by people) but this has limits

a purely inductive approach --> misguided and incoherent, because concepts are often misused and abused (e.g. North Korea calling itself a democratic republic)

- This is why concepts must be precise in terms of their meaning, characteristics and scope: define your terms!

(L11, slide 12)

12

Experimental vs. observational methods

After choosing your ontological+epistemological positions (and then sticking to them!) --> choice of design features and classification

- In experimental designs, researchers can manipulate the design

- They must be able to assign cases to a 'treatment' group and a 'control' group

- That is not often possible in political science, and may be unethical!

- When we cannot set up experiments, political scientists work by observational methods

(L11, slide 13)

13

Observational methods

- Researchers cannot assign 'treatment' and 'control' groups (in experiments, this assignment must be done randomly so that every unit has the same chance of ending up in any of the groups)

--> Random assignment is crucial for valid causal inference because it helps rule out all alternative explanations of any difference between the groups we might observe after we apply the experimental manipulation

- Researcher does control...

1. Which cases to study

2. How many cases to observe

3. Which features of the cases to observe

4. What (sort of) data to collect (qualitative or quantitative)

(L11, slide 14)
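The random-assignment idea contrasted above can be sketched in a few lines of Python. This is purely illustrative: the function name, unit labels, and the even two-group split are all invented for the example.

```python
import random

def random_assignment(units, seed=None):
    # Shuffle the units so every unit has the same chance of
    # ending up in either group, then split into two halves.
    rng = random.Random(seed)
    shuffled = list(units)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

# Hypothetical units; a fixed seed makes the split reproducible.
treatment, control = random_assignment(["A", "B", "C", "D", "E", "F"], seed=42)
print(treatment, control)
```

Observational methods, by contrast, take the 'treatment' groups as the world presents them, which is why confounders must be measured and ruled out instead.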

14

How many cases? Large - N observational

- Population = individuals, countries, years, written content, etc.

- Ideal: randomly selected sample of individuals, written content, etc.

- Practice: often convenience samples, purposive sampling

Researcher does not control 'treatment' variation:

- Try to measure, and thus rule out, all potentially confounding variables (which to select = based on theory)

- Try to measure, and thus observe, causal mechanisms

(L11, slides 15, 16)

15

Fewer cases: the Comparative method

- less mathematical

- Comparative politics ≠ the comparative method!

--> the comparative method tends to focus on small-N case studies

- It is the go-to method for much empirical political science when:

- Experimental methods are not possible (no control)

- Statistical methods are not possible (not enough data/cases)

(L11, slide 17)

16

Method of Agreement

= compares and contrasts cases with different attributes but a shared outcome, seeking the one attribute the cases have in common in order to attribute causality (= most different systems design)

- Research puzzle about a political phenomenon (outcome)

1. Literature review / theorising - identify potential explanations (conditions)

2. Identify only cases in which the outcome is present (or not)

3. Look for conditions from literature review, identify the one condition that is present (or not) in all the cases

- Refine and repeat steps 1-3

(L11, slide 18)
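Steps 2-3 above can be sketched as a toy Python function: take only cases that share the outcome, then intersect their condition sets to find the condition present in all of them. All case names and conditions below are invented for illustration.

```python
def method_of_agreement(cases):
    # cases: mapping from case name -> set of conditions present.
    # All cases are assumed to share the outcome of interest;
    # conditions present in every case are the candidate causes.
    return set.intersection(*(set(c) for c in cases.values()))

# Invented example: three countries sharing the outcome 'no terrorism'.
cases = {
    "Country A": {"high net immigration", "proportional representation", "strong welfare state"},
    "Country B": {"low net immigration", "majoritarian system", "strong welfare state"},
    "Country C": {"low net immigration", "proportional representation", "strong welfare state"},
}
print(method_of_agreement(cases))  # {'strong welfare state'}
```

As the limitations card notes, a condition that survives the intersection may still be necessary but trivial, or an artifact of biased case selection.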

17

Method of agreement: example

(look at the table on the slide)

- trying to understand/explain the absence of terrorism (the dependent variable)

- candidate independent variables: net immigration, etc.

--> then we gather data for Country A, Country B, Country C, etc.

(L11, slide 19)

18

Limitations to the method of agreement

- Selecting on the dependent variable (biased sampling)

- Is the identified condition really both necessary and sufficient?

- Or is it necessary, but trivial/uninformative?

- Small sample size: too many variables for only a few cases

(L11, slide 20)

19

Most different systems design (MDSD)

= CASES are DIFFERENT but the outcomes (dependent variables) are similar

--> might have a similar outcome, what are the common features?

(look at the two pictures on slide 21: a peacock and a penguin)

MSSD does the opposite of MDSD: it looks to explain a different outcome in cases that are otherwise similar (in the next slides' examples, the shared features are economic development, British colonial heritage, etc.) --> trying to explain their divergence

(L11, slides 22, 23)

20

Most similar systems design (MSSD)

=a research design in which we compare cases that are similar with respect to a number of factors but with distinct outcomes

- Shared characteristics may actually be a scope condition (generalizability?)

- Is the identified condition really both necessary and sufficient?

- Or is it necessary, but trivial/uninformative?

- Small sample size: too many variables for only a few cases

(L11, slides 24, 25)
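The MSSD logic can be sketched the same way: for two cases matched on background factors but with distinct outcomes, the factors on which they differ are the candidate explanations. Everything below (function name, factors, values) is invented for illustration.

```python
def method_of_difference(factors_a, factors_b):
    # Return the factors on which the two cases take different values;
    # with most factors held similar, these are the candidate causes
    # of the divergent outcomes.
    return {f for f in factors_a.keys() | factors_b.keys()
            if factors_a.get(f) != factors_b.get(f)}

# Two invented cases, similar on most factors but with distinct outcomes.
case_a = {"economic development": "high", "British colonial heritage": True,
          "electoral system": "proportional"}
case_b = {"economic development": "high", "British colonial heritage": True,
          "electoral system": "majoritarian"}
print(method_of_difference(case_a, case_b))  # {'electoral system'}
```

The shared factors dropped by the comparison may in fact be scope conditions, which is one reason generalizability beyond the matched cases is limited.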

21

General limitations to Mill's methods

- There is only a small number of cases

- The methods are sensitive to case selection / variable specification

- Different result / conclusion with different cases?

- Selecting cases to fit theory, instead of to test theory

- What about interactions between variables?

- The methods do not provide much help in identifying a plausible causal mechanism

(L11, slide 26)

22

Single case studies

- causality is most difficult to assess here

- sometimes we only have one case (e.g. Brexit)

- If MSSD and MDSD are not possible (not enough cases, inadequate prior knowledge of relevant variables) we go back to single case studies

- Comparison is still important...

- Compare to concepts

- Compare to a wider universe of cases and concepts

Case studies themselves are often descriptive, but

- Contribute to general knowledge

- Contribute to theory generating and theory testing (= explanatory)

- Important steps in a general approach to further understanding and explanation

(L11, slide 27)

23

Single-case studies

- One case = no variation in sample, no variation in treatment

- Generalisability (external validity) difficult to achieve

- Still useful for:

- Testing competing theories in one case

- Generating initial theoretical insight

- Testing theory under most difficult conditions (least likely, most likely)

(L11, slide 28)