Moral Cognition: Trolley Problems, Double Effect, and Dual-Process Theory
Switch Dilemma (Would you flip the switch?)
- Core question: Would you pull a switch to redirect a trolley, saving five people but killing one?
- Intuitions track intent: pulling the switch feels like intentionally taking a life, even if it is for the greater good; if you do nothing, the outcome is framed as fate.
- Moral takeaway: there is a sense of moral responsibility for the act of killing when you actively choose it.
- A useful takeaway from the discussion: inaction is not necessarily morally neutral; choosing not to act can still be a decision with consequences.
- Practical reflection: people often freeze out of hesitation or fear about what to do; declining to choose is itself a choice, and delay can lead to worse outcomes.
- Numerical framing:
- If you flip the switch, you save 5 and kill 1.
- Net lives saved: $\text{net}_{\text{switch}} = 5 - 1 = 4$.
- Real-life analogue: inaction as a form of responsibility; the decision to act versus not act is ethically charged.
Footbridge Dilemma
- Setup: You’re on a footbridge above the tracks, next to a large, strong man. The trolley will kill five innocent workers unless you push the man off the bridge to stop it; he will die, but five lives will be saved.
- Question: Would you push the large man off the bridge?
- Responses show a stark difference from the switch case:
- More people say yes to flipping the switch than to pushing the man off the bridge.
- Key point: the calculation in terms of net lives is the same (save five, kill one → net 4), yet moral intuitions differ due to direct physical contact and perceived ownership of the harm.
- Explanations for the difference:
- Direct physical action feels more morally charged than switching a lever.
- Perceived personal responsibility is higher when you are actively causing harm.
- Some participants view the situation as a question of onus or blame for the fatal outcome.
- Participant discussion snippets:
- Some indicate that in the switch case they’d act; in the bridge case they wouldn’t.
- Debates about whether the onus is on the large man to jump, or on you to sacrifice yourself.
- Variants and considerations:
- The setup specifies a large person so that only he can stop the trolley; this rules out self-sacrifice as an alternative and keeps the dilemma a single forced choice.
- If the person on the bridge were smaller or unable to stop the trolley, the ethical dynamics shift.
- Conceptual takeaway: switch vs bridge prompts examination of harm, agency, and moral responsibility in direct contact scenarios.
History and Origins of the Problem
- Philippa Foot (1967): The trolley problem is rooted in the Doctrine of Double Effect in Catholic theology, which examines when unintended consequences may be morally permissible.
- Double Effect principle: an action aimed at a good end may be permissible even if it has a foreseen bad side effect, provided the bad effect is neither intended as an end nor used as a means.
- Abortion example (Foot): If removing a uterus is needed to save a woman's life, the pregnancy termination may be morally permissible under the double effect, since the primary intent is saving the mother's life.
- The trolley problem was used to illustrate nuances in moral reasoning and to challenge simplistic ethical rules.
- Judith Jarvis Thomson (1976): Introduced the footbridge version as a separate thought experiment to test the difference between indirect and direct harm in moral decisions.
- Terminology in lecture: switch dilemma (Foot’s original) versus footbridge dilemma (Thomson’s variation).
- Other thought experiments widely discussed in class:
- Imagined scenarios where five people need organs and one person is a perfect organ match; would it be permissible to kill the matched person to save the five?
- Variations with whether those in need are strangers, criminals, or have particular roles (e.g., teachers, leaders).
- Scenarios involving children vs. the elderly; workers on the tracks vs. passersby; other social roles.
- Purpose of variations: to probe how people’s judgments shift when the identities, relationships, and perceived worth of individuals change.
- Classroom note: these variations reveal how moral reasoning can be sensitive to context and salience, not just to numerical outcomes.
Neurocognition and the Ethics of Trolley Problems
- Joshua Greene (2001): Used fMRI to study how brains respond to switch vs. footbridge cases.
- Core finding: emotion-related brain areas activate more strongly when people consider the up-close, personal footbridge case (and tend to reject harming); executive-function areas activate when people deliberate over the impersonal switch case.
- Brain regions discussed:
- Emotion-related: amygdala; ventromedial prefrontal cortex (vmPFC).
- Executive/Reasoning: dorsolateral prefrontal cortex (DLPFC); anterior cingulate cortex (ACC).
- Interpretation:
- Deontological judgments (duty-based, rules) tend to be associated with automatic/emotional processing (System 1).
- Consequentialist judgments (outcome-based, utilitarian) tend to be associated with deliberate reasoning (System 2).
- Dual-process framework: two interacting systems
- System 1: fast, automatic, emotion-driven, intuitive responses.
- System 2: slow, deliberate, calculation-driven, planful reasoning.
- Practical takeaway: the trolley problem illustrates a robust interaction between emotion and reason in moral decision making.
- Caveats:
- The distinction is a descriptive account of how people think, not a prescription that one system is morally superior.
- The naturalistic fallacy warning: just because our brains default to automatic or manual modes does not imply one mode is universally better for ethical decision making.
- Conceptual metaphor used by Greene:
- The brain as a camera with automatic settings (System 1) and manual settings (System 2); sometimes automatic settings yield correct moral judgments, sometimes manual settings are needed for nuanced cases.
- Additional context:
- The debate includes concerns about whether human decision making should rely on intuition or reasoning in ethical contexts.
- Related literature and resources (as provided in the lecture): accessible articles by Greene aimed at general audiences.
Mathematical and Conceptual Summaries
- Net life calculation in trolley scenarios:
- $\text{net}_{\text{switch}} = 5 - 1 = 4$
- $\text{net}_{\text{footbridge}} = 5 - 1 = 4$
- Dual-process framing: System 1 vs System 2 is not a value judgment but a description of cognitive modes (automatic/emotional vs deliberate/reasoned).
- The ethical problem: the distinction between intention (deontological emphasis) and outcomes (consequentialist emphasis).
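The net-lives arithmetic above can be written as a one-line function. A minimal sketch (the helper name `net_lives_saved` and the scenario table are ours, not from the lecture):

```python
# Hypothetical helper: net change in lives if the agent intervenes.
def net_lives_saved(saved: int, killed: int) -> int:
    """Return lives saved minus lives lost when the agent acts."""
    return saved - killed

# Both trolley scenarios have identical numbers: 5 saved, 1 killed.
scenarios = {
    "switch": (5, 1),       # redirect the trolley
    "footbridge": (5, 1),   # push the large man
}

for name, (saved, killed) in scenarios.items():
    print(f"{name}: net = {net_lives_saved(saved, killed)}")
# Both cases yield a net of 4 -- yet moral intuitions about them diverge,
# which is exactly the point of the switch vs. footbridge contrast.
```

The identical outputs make the lecture's point concrete: the consequentialist ledger cannot distinguish the two cases, so whatever distinguishes them must lie in intention, agency, or the directness of the harm.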
- The Good Place (TV show):
- The trolley problem is used to illustrate moral decision making; the show consulted a philosophy professor at UCLA.
- The show has multiple episodes that humanize these ethical dilemmas and connect philosophical ideas to narrative plots.
- Note on availability: the show was leaving Netflix; students are encouraged to watch over a break for further connections.
- Self-driving cars and Moral Machine:
- Interactive website/projects ask users to make decisions in trolley-like scenarios for autonomous vehicles.
- Scenarios vary: pedestrians vs occupants, pregnant woman vs doctor, elderly person vs cat, etc.
- The tool reveals common intuitions about how to prioritize lives and the tension between moral rules and outcomes.
- Real-life case study: Navy SEALs in Afghanistan (Operation Red Wings, 2005; Marcus Luttrell's account):
- Dilemma: tie up or kill goat herders who could reveal the team vs. let them go and risk an attack.
- The team voted to spare the goat herders; later, the unit suffered casualties.
- Luttrell reports ongoing moral guilt; reflects on the complexity of real-world moral decision making where both options carry severe costs.
- Takeaway from case study:
- Both intuitive and deliberative systems are natural; there may be no perfect solution.
- Any decision may leave a residual moral voice from the alternative framework in the back of one's mind.
Implications for Ethics, Practice, and Teaching
- The trolley problem highlights the limits of rule-based ethics (deontology) versus outcome-based ethics (consequentialism).
- It shows the role of emotions in moral judgment, and how emotion and reasoning can support or conflict with each other.
- It cautions against the naturalistic fallacy: just because a cognitive system tends toward one type of judgment in familiar situations does not dictate the best ethical principle in novel or high-stakes contexts.
- It provides a framework for discussing emerging technologies (e.g., autonomous agents) and the ethical design choices they force.
- Practical class applications:
- Encourage students to reflect on when intuition should guide action and when deliberate analysis is necessary.
- Use neuroscience findings to discuss the strengths and limits of our moral cognition.
- Explore how framing and context alter moral judgments.
Closing Reflections and Next Steps
- The spotlight project: students will apply these ideas to a practical assignment (details to follow in class).
- Suggested further study resources include: foundational texts by Foot and Thomson, Greene's neuroethics work, and interdisciplinary readings combining philosophy, psychology, and neuroscience.
- Encourage students to watch The Good Place episodes and to explore the Moral Machine site for hands-on engagement with these concepts.