Flashcards from lecture notes
Evidence-Based Management (EBM)
A decision-making approach where managerial actions are guided by the best available evidence rather than intuition or tradition.
Common sense reasoning in management
Relies on heuristics, hunches, or anecdotal evidence, leading to poor decision-making due to cognitive biases.
Fast thinking
Instinctive and automatic thinking, relying on heuristics or rules of thumb.
Slow thinking
Deliberate and reasoned thinking, involving information collection and analysis.
Epistemology
The philosophy of knowledge, guiding how evidence is interpreted and what is considered valid knowledge in research.
Positivism in management research
Uses deductive reasoning and quantitative methods to test hypotheses, emphasising objectivity and statistical significance.
Interpretivism
Based on inductive reasoning and qualitative methods, focusing on understanding social phenomena from the perspectives of those involved.
Ethnography
Involves participant observation in a social setting to observe behaviours and interactions.
Research Interviews
Involves speaking to individuals to gather their experiences or views.
Theoretical sampling
Selecting participants based on emerging theory, not randomness.
Saturation
Reached when new data no longer adds new insights or themes.
Credibility
Accuracy of findings in qualitative research.
Transferability
Applicability to other contexts in qualitative research.
Dependability
Research process is logical and documented in qualitative research.
Confirmability
Findings are shaped by participants, not researcher bias in qualitative research.
Self-censorship
Researchers monitoring themselves to avoid causing harm or misrepresentation, especially in ethnographic studies.
Classic experimental design
Includes a treatment and control group, with measurements taken before and after the intervention.
Internal validity
The extent to which a study shows a causal relationship.
External validity
How generalisable the findings are to other settings or populations.
Field experiments
Take place in real-world settings, improving external validity.
Quasi-experiments
Lack random assignment but compare groups affected by different conditions.
Reliability
Consistency in measurement.
Stability
Consistency over time.
Internal reliability
Consistency across items.
Inter-observer consistency
Agreement among researchers.
Face validity
The measure appears, on the face of it, to reflect the concept it is meant to capture.
Convergent validity
Correlates with similar measures.
Discriminant validity
Does not correlate with different constructs.
Predictive validity
Predicts outcomes logically associated with the concept.
Probability sampling
Gives every individual a known chance of selection.
Non-probability sampling
Does not give every individual a known chance of selection.
What is the purpose of the Difference-in-Difference (DiD) method in research?
The DiD method estimates causal effects by comparing changes over time between a treatment group and a control group.
How is the treatment effect calculated in the DiD method?
The treatment effect is estimated by taking each group's change in outcomes from before to after the intervention, then subtracting the control group's change from the treatment group's change.
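For illustration, a minimal numerical sketch of this calculation; the group means below are made-up values, not figures from the lecture notes.

```python
# Difference-in-Difference estimate from hypothetical group means.

# Mean outcomes before and after the intervention (illustrative numbers)
treat_pre, treat_post = 50.0, 65.0      # treatment group
control_pre, control_post = 48.0, 55.0  # control group

# Change within each group over time
treat_change = treat_post - treat_pre        # 15.0
control_change = control_post - control_pre  # 7.0

# DiD estimate: treatment group's change minus control group's change
did_estimate = treat_change - control_change
print(f"Estimated treatment effect: {did_estimate}")  # 8.0
```

Subtracting the control group's change strips out the shared time trend, leaving the portion of the change attributable to the intervention.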
When is the Difference-in-Difference method typically used?
DiD is used in quasi-experiments, especially when random assignment is not feasible.
What advantage does the DiD method offer in research?
It helps control for external time trends that affect both groups equally, improving the accuracy of causal inference.