Empirical research vs normative research
The study of what is vs what ought to be
Positivism (4)
Applies methods of science directly to the social world
Make law-like generalisations based on observations that establish cause and effect explanations
Karl Popper's criticism of classical positivism
Deduction is possible, with theory-building occurring through falsification
Criticisms of positivism (4)
Logical positivism (deduction possible)
Karl Popper's falsification
Scientific realism (unobserved aspects)
Post-positivism (researchers influenced by social world)
Interpretivism (3)
Social world not the same as the natural world
There is no objective reality, need to interpret meanings and sub-texts of what shapes behaviour to gain scientific knowledge
Qualitative data (5)
Non-numerical
Less standardised, richer data
Analysis of single/ smaller number of cases
Deductive or inductive
Positivist or interpretivist
Quantitative data (5)
Numerical data (statistical methods)
Standardised
Analysis large number of cases
Often deductive but can be inductive
Positivist
Types of research questions (5)
Descriptive (how is the world)
Explanatory (why is the world)
Prescriptive (what is the best means to a given end, what should we do)
Predictive (likely effect or outcome of something)
Normative (what should the world/ ends be)
Properties of a good research question (4)
Researchable (focused, feasible, no logical fallacies)
Not already definitively answered
Social relevance
Scientific relevance
Fallacies a research question must avoid
False premises
Not answerable using empirical research
Tautologies (saying same thing twice)
False Dichotomies
What are the initial steps of the research process? (6)
1. Research question
2. Theoretical answer
3. Observable implications
4. Research design
5. Data collection
6. Data analysis
Why are research questions important? (3)
Focus question
Guides/ determines research decisions
Forces consideration of social and scientific relevance
Literature Review (4)
Develop research question with existing literature
Establishes RQ not definitively answered
Sets stage for own study (weaknesses/ gaps in existing studies)
Analytical purpose
Literature Survey
A descriptive summary of related studies
Stages of a literature review (3)
1. Read literature
2. Summarise literature
3. Introduce own argument
What is theory? (2)
An attempt to make sense of the complex world
A theoretical answer/hunch to your research question
What are concepts? (2)
Provide a label/ general term to observations/ events which are somehow alike
Need to be clearly and validly defined
How does good theory interact with concepts? (3)
Clearly outlines expected relationship
Provides clear argument for expected relationship
Builds on existing literature
Levels of theoretical analysis (3)
Micro (individual)
Meso (groups/ organisations)
Macro (societal)
Types of theory production (2)
Induction (from data to theory)
Deduction (from theory to data)
Use of induction (3)
Theory development
Interpretative research
Limited generalisability = limited use
Use of deduction (3)
Falsification
Standard approach to positivist research, sometimes used in interpretivist research
What is a hypothesis (2)
A concise statement of an observable implication of a theory
Must be falsifiable
Types of hypothesis (2, 2)
Explanatory (probabilistic or deterministic, have IV, DV and relationship between two)
Descriptive
Goals of inductive vs deductive theory
Inductive: to develop new theory
Deductive: to falsify theory
What is a research design? (3)
A strategy for providing a test or investigation of a working hypothesis
Specifies evidence needed to investigate hypothesis and how evidence will be collected + analysed
What is Operationalisation?
Specific definition allowing for the empirical measurement of a concept
What is the unit of analysis? (2)
The entity being analysed
Determined by the research question, theory and what is possible/desirable
Examples of research design (4)
Experimental designs
Comparative case studies
Participant observations
Panel studies
What does research design need to do (2)
Operationalise key concepts
Specify unit of analysis
Measurement
The assignment of numbers or categories to objects/ events according to rules
Measurement error
Difference between the true value and the measured value
Validity
Is a test measuring what we want it to
Reliability
When something consistently produces similar results under similar conditions
Types of broad validity (4)
Face
Content
Construct
Criterion
Types of construct validity (2)
Convergent (is an indicator similar to other indicators it should be similar to?)
Discriminant (is an indicator different from other indicators it should be different from?)
Types of criterion validity (2)
Concurrent validity (is a measure similar to established measures of the same concept?)
Predictive validity (how well does a measure predict a future outcome or behaviour?)
Face validity (2)
Does a measurement intuitively seem like a good measure of a concept?
Ad-hoc assessment
Content validity (2)
Does the measure reflect the full range of a concept?
Theoretical assessment
Types of reliability (3)
Intercoder reliability
Test-retest reliability
Internal consistency reliability
Intercoder reliability
Degree of agreement between 2+ coders on the categorisation or interpretation of data
Test-retest reliability
If apply same test at different points in time, should get same results
Internal consistency reliability
Slightly different indicators of the same concept should get similar results
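The reliability types above can be quantified. As an illustrative sketch (not part of the deck, with invented coder labels), Cohen's kappa is one common statistic for intercoder reliability: it corrects the raw agreement rate between two coders for the agreement expected by chance.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Agreement between two coders, corrected for chance agreement."""
    n = len(coder_a)
    # Observed agreement: share of items both coders labelled the same
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    counts_a, counts_b = Counter(coder_a), Counter(coder_b)
    # Chance agreement: probability both coders pick the same category at random
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / n**2
    return (observed - expected) / (1 - expected)

# Invented example: two coders categorising six items
a = ["pos", "pos", "neg", "neg", "pos", "neg"]
b = ["pos", "neg", "neg", "neg", "pos", "neg"]
print(round(cohens_kappa(a, b), 2))  # → 0.67
```

Kappa of 1 means perfect agreement; 0 means agreement no better than chance.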
Unit of analysis vs measurement (2)
May be the same or different
Unit being analysed vs measured
Primary data
Researcher collects data
Analysis of primary data (3)
Full control over collection process
More time consuming
Can be expensive
Secondary data
Data collected by others
Analysis of secondary data (4)
No control over data collection
Less expensive
Faster
Have to be very clear about what is measured + how data collected
Methods of data collection
Quantitative data (numerical data, large-C)
Qualitative data (less standardised, small-C)
Experimental data (4)
Researcher intervenes in the data gathering process
Random assignment of experimental conditions
Explanatory questions
Large-C/ quantitative
Observational data
Researcher does not intervene in data gathering process
Explanatory and descriptive questions
Large-C or small-C
Measurement levels (2)(4)
Categorical (qualitative) (nominal, ordinal)
Quantitative (scale variables) (ratio, interval)
Categorical variables(2)
Nominal: unordered categories (marital status, car colour)
Ordinal: set of ordered categories allowing for ranking but cannot be measured mathematically (agree, somewhat disagree)
Quantitative variables
Interval: set of ordered categories with distance that can be expressed numerically (temp in C, SAT score)
Ratio: interval variable with a natural zero (weight, distance)
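As a quick sketch of the four measurement levels (all example values invented):

```python
from statistics import mean

# Illustrative examples of the four measurement levels (invented values)
nominal  = ["blue", "red", "blue"]         # unordered categories (car colour)
ordinal  = ["disagree", "agree", "agree"]  # ordered categories, no numeric distance
interval = [20.0, 22.5, 19.0]              # numeric distances, no natural zero (temp in C)
ratio    = [61.2, 80.5, 55.0]              # interval plus a natural zero (weight in kg)

# Arithmetic such as a mean is only meaningful for interval/ratio variables
print(round(mean(ratio), 1))  # → 65.6
```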
Types of data structures (4)
Cross-sectional
Time series
Repeated cross-sections
Panel
Cross sectional data
Data measuring multiple entities at a single point in time (eg, temperature in several cities at 10am on one day)
Time series
Data measuring one unit of analysis over time, at regular intervals (hourly temperature reading over a day)
Repeated cross sections
Data measuring several differing entities over time (eg, new respondents at different time points)
Panel data
Data measuring the same multiple entities over time (eg, tracking income and age of a group of individuals over 9 years)
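A minimal sketch of how the four data structures differ in shape (city names and values invented), using plain Python dicts keyed by entity, by time, or by both:

```python
# Cross-sectional: many entities, one time point
cross_section = {"Amsterdam": 12.1, "Berlin": 10.4, "Paris": 13.0}

# Time series: one entity, many time points
time_series = {2020: 11.2, 2021: 11.8, 2022: 12.1}

# Panel: the SAME entities measured at every time point
panel = {
    ("Amsterdam", 2020): 11.2, ("Amsterdam", 2021): 11.8,
    ("Berlin", 2020): 9.9, ("Berlin", 2021): 10.1,
}

# Repeated cross-sections: fresh entities (eg, new survey respondents)
# at each time point, so entity IDs differ across waves
repeated = {2020: {"r1": 5, "r2": 7}, 2021: {"r3": 6, "r4": 4}}

# The defining panel property: every entity appears at every time point
entities = {e for (e, t) in panel}
years = {t for (e, t) in panel}
assert all((e, t) in panel for e in entities for t in years)
```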
Causal inference (2)
Inferring something we do not know (Causal effects) from something we do know (data)
Only applicable to explanatory research
Requirements of causal effects for causal inferences (3)
Association between two variables
All confounders should be ruled out
Reverse causality should be ruled out
Confounder
Third variable which is related to both X and Y; an alternative explanation
Spurious association
When a relationship/ association of two variables is assumed to be causal, when it is actually the result of a confounder
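A confounder can be made concrete with a small simulation (all numbers invented): a third variable Z drives both X and Y, so X and Y correlate strongly even though neither causes the other.

```python
import random

def pearson(xs, ys):
    """Pearson correlation coefficient, computed by hand."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

random.seed(1)
z = [random.gauss(0, 1) for _ in range(2000)]   # confounder Z
x = [zi + random.gauss(0, 0.5) for zi in z]     # X is caused by Z only
y = [zi + random.gauss(0, 0.5) for zi in z]     # Y is caused by Z only

# X and Y show a strong spurious association (theoretically ~0.8)
# despite no causal link between them
print(round(pearson(x, y), 2))
```

Conditioning on Z (eg, by statistical control) would make this association vanish, which is why confounders must be ruled out before inferring causation.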
Experimental research (2)
Research design where researcher both controls and randomly assigns values of IV to pps
Researcher intervention in data gathering process
Randomised experiments (2)
Research method where pps are randomly assigned to a treatment and control group
A/B designs
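The core of a randomised experiment can be sketched in a few lines (participant IDs invented): shuffle the pool, then split it into treatment and control groups so that assignment is independent of any participant characteristic.

```python
import random

def randomly_assign(participants, seed=42):
    """Shuffle participants, then split into two equal-sized groups."""
    pool = list(participants)
    random.Random(seed).shuffle(pool)
    half = len(pool) // 2
    return pool[:half], pool[half:]  # treatment, control

treatment, control = randomly_assign(range(100))
assert len(treatment) == len(control) == 50
assert set(treatment).isdisjoint(control)
```

Because assignment is random, confounders are balanced across groups in expectation, which is what gives experiments their high internal validity.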
Internal validity
Degree to which can be confident a study identifies the causal effect of the IV on the DV
External validity
Degree to which a study's findings can be generalised
Types of external validity (3)
Ecological: behaviour observed in artificial experiment may not generalise to real world
Population: experiments often involve unrepresentative subject pools, where cannot generalise study sample to population of interest
Reactivity: people may change behaviour once know are being observed
Types of experiment
Lab
Field
Survey
Lab experiment (3)
Recruited to common location
High level of control over variables and complex measurements
Population validity, ecological validity, reactivity concerns
Field experiment (4)
Natural environments
Researchers maintain the ability to manipulate variables
Higher ecological and population validity, lower reactivity
Issues of attrition (dropping out)
Survey experiments (3)
Reduced control over treatment application and environment
Higher population validity
Cost-effective
Observational research design (3)
Research design where researcher does not have control over values of IV
No researcher intervention in data gathering process
Natural variation
Limitations of experimental research
Practical objections (some things cannot be manipulated)
Ethical issues (deception)
External validity concerns
Descriptive work not possible (only causal questions)
Categories of observational research (3)
Qualitative/ quantitative
Explanatory / descriptive
Deductive/ inductive
Ways to tackle confounders in observational research (4)
Statistical control
Most-similar/ most-different designs
Panel design
Causal inference designs (eg, difference-in-difference)
Natural experiments (3)
Values of IV arise naturally to a point of 'as-if' random assignment
No researcher intervention in data gathering process
Best way of establishing causal effect using observational research
Analysis of natural experiment (4)
High internal and external validity
Treatment assignment rarely fully random
Fewer confounders
Hard to find
When is small-C data limited? (2)
Hampers generalisation from sample to population
Complicates dealing with confounders
Forms of comparative research (3)
Case study
Small-C
Large-C
4 aims of case studies
Descriptive contextualisation
Applying theory to new contexts
Examine exceptions to the rule
Generate new theory
Selection of case studies
Critical
Revelatory
Unusual
Case study
High internal validity
Can have issues generalising
Small-C comparison
2+ cases, up to a dozen
In-depth analysis and general scope for contextualisation
Risk of selection bias
Case
A spatially and temporally delimited phenomenon of theoretical interest
Observation
The lowest-level unit of an analysis, at which a measured variable can only take on one value
Sample
Set of cases/observations analysed in a given piece of research
Population
Set of cases which in combination make up the universe of all cases
Sampling bias (4)
Cannot be fully avoided but should be mitigated
Avoid selecting cases on the DV in large-C studies
Avoid cherry-picking cases
Do not perform an inductive study and then reverse to a deductive study
Strengths of large-C research (2.5)
Increased potential for generalisability
Increased ability identify causal effects
**are only potential strengths, will depend on design
Large-C limitations (3)
More stylised (no intensive study)
Less useful for inductive research
Limited usefulness for interpretivist research ('thin' form research)
Typical forms of case selection in large-C research (2)
Total population sampling
Probability sampling
(examples of sampling strategies)
Total population sampling (3)
Large-C
Sample entire population
High external validity
Probability sampling (4)
Large-C
Can be expensive or impractical
Simple random sampling (random sample selection of pop)
Stratified random sampling (split into sub-groups (strata) and randomly selected from these)
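The two probability sampling strategies above can be sketched as follows (population and strata invented): simple random sampling gives every member an equal chance, while stratified random sampling draws proportionally within each sub-group (stratum), guaranteeing each stratum is represented.

```python
import random

rng = random.Random(0)
# Invented population of 1000 people, split 70/30 across two region strata
population = [("north", i) for i in range(700)] + [("south", i) for i in range(300)]

# Simple random sampling: every member has an equal chance of selection
simple = rng.sample(population, 100)

# Stratified random sampling: sample 10% within each stratum separately
strata = {}
for region, person in population:
    strata.setdefault(region, []).append((region, person))
stratified = []
for members in strata.values():
    stratified.extend(rng.sample(members, len(members) // 10))

# The stratified sample exactly mirrors the 70/30 population split
assert sum(1 for r, _ in stratified if r == "north") == 70
assert sum(1 for r, _ in stratified if r == "south") == 30
```

A simple random sample of 100 would only approximate the 70/30 split; stratification removes that sampling variability across strata.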
Non-probability sampling
Alternative to expensive/ impractical probabilistic sampling
Use of non-random criteria
Eg: convenience (volunteer/ snowball) sampling, quota sampling
Small-C study (4)
Intensive study of a single case/ number of cases
Can take form of a single case study or comparative case study
Typically qualitative methods
Descriptive and explanatory
Advantages of Small-C research (3)
Better measurement (in-depth, nuanced)
Thick description
Inductive (or deductive) research (may reveal new explanations not considered)
Principles of Small-C case selection (2)
Should be purposeful
Will likely differ depending on whether research is descriptive or explanatory
Case selection for descriptive small-C (2)
Typical cases (represent larger population well on important features)
Diverse cases (cases that capture the diversity of the population)
Case selection for explanatory small-C (6)
Extreme cases (studies of unusual phenomena)
Deviant cases (1+ cases which deviate from common causal pattern)
Most-similar cases (similar in background, differ in X or Y)
Most-different cases (differ in background, similar in X or Y)
Crucial cases (most-likely / least-likely case)
Pathway cases (causal mechanisms)