Morality, Utilitarianism & Moral Considerability – Detailed Lecture Notes
Course Context and Road-Map
- Lecture belongs to Units 6–7 ("Morality 1" & "Morality 2"), but ethics will re-appear in later units (political philosophy, applied topics, etc.).
- Philosophy asks the fundamental question “How should we live?”—hence the centrality of ethics.
- Today’s two-part agenda
- Part 1 (≈ first half): Detailed study of Utilitarianism (a major form of consequentialism) → plausibility and objections.
- Part 2 (≈ second half): Competing theories of moral considerability (Who/what counts?).
Ethics & Political Philosophy Connection
- Political philosophy investigates “the best way to organise society.”
- Hard to separate from ethics because judgments about desirable social orders are moral judgments writ large.
- Ethics operates on multiple “scales”
- Interpersonal / intimate level
- Generalised level of principles for anyone
- Society-wide level in political theory
Consequentialism – The Larger Family
- Core claim: Only the consequences of an action determine its moral status.
- Consequences = effects in the future (immediate or remote).
- Different versions diverge on (i) Which consequences matter? (ii) Who is included in the calculus?
Utilitarianism – The Flagship Version
- Core principle (the Greatest Happiness Principle): “Always act so as to produce the greatest amount of happiness.”
- Happiness ≈ positive experience / pleasure; suffering ≈ negative experience.
- Net Happiness: $H_{\text{net}} = H^{+} - H^{-}$ (total positive experiences minus total negative experiences)
- Universal scope: Count all beings affected who are capable of pleasure/pain (humans + most animals, possibly insects, etc.).
- Anti-egoistic: No special moral weight to agent’s own welfare per se (unless justified by impartial happiness calculus).
- Differentiation possible: More intense or longer lives, broader relational webs, etc., may legitimately receive higher weighting.
Hedonic Calculus & Expected Utility
- Moral deliberation resembles probabilistic cost–benefit analysis:
  $EU(\text{Action}) = \sum_{i=1}^{n} P_i \times V_i$
- $P_i$ = probability of consequence $i$
- $V_i$ = hedonic value (positive or negative) of consequence $i$
- Multiply by probability to avoid overweighting unlikely outcomes (e.g. execution for attempted rescue has $P \approx 0$).
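The probability-weighted sum above can be sketched in a few lines; all numbers here are hypothetical magnitudes chosen purely for illustration:

```python
# Expected utility as a probability-weighted sum of hedonic values.
def expected_utility(outcomes):
    """outcomes: list of (probability, hedonic_value) pairs."""
    return sum(p * v for p, v in outcomes)

# Attempted rescue: the far-fetched negative outcome
# ("execution for attempted rescue") has P ≈ 0, so
# probability-weighting keeps it from dominating the sum.
rescue = [
    (0.95, 100.0),      # rescue succeeds
    (0.05, -5.0),       # rescue fails, minor harm
    (1e-9, -10_000.0),  # absurdly unlikely catastrophe
]
print(expected_utility(rescue))  # ≈ 94.75
```

Note how the catastrophe term contributes almost nothing once multiplied by its near-zero probability, which is exactly the point of the weighting.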
Illustrative Cases
- Holding the door → immediate tiny positive; paradigmatic consequentialist intuition.
- Investment example → delayed payoff illustrates future-orientation.
- Drowning baby in shallow pond
- Pros: rescue avoids baby’s agony & parental grief; preserves lifetime of future happiness.
- Cons: wet shoes, tiny risk of misconstrual → negligible when weighted by probability.
- Result: Rescue is obviously obligatory under utilitarianism.
- Meta-point: Excessive real-time number-crunching can hinder prompt moral action.
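The pond case's bookkeeping can be made explicit with toy numbers (every magnitude below is a hypothetical stand-in, not a claim about actual hedonic values):

```python
# Toy shallow-pond calculus: each entry is (probability, hedonic value).
rescue = [
    (1.0, 10_000.0),   # lifetime of the baby's future happiness preserved
    (1.0, -1.0),       # wet shoes
    (0.001, -50.0),    # tiny risk the rescue is misconstrued
]
walk_on = [
    (1.0, -10_000.0),  # baby's agony and parental grief
    (1.0, 1.0),        # dry shoes
]

eu_rescue = sum(p * v for p, v in rescue)    # 10000 - 1 - 0.05 = 9998.95
eu_walk_on = sum(p * v for p, v in walk_on)  # -9999.0
print(eu_rescue > eu_walk_on)  # True: rescue is obligatory
```

Whatever plausible numbers one substitutes, the asymmetry is so large that the verdict never flips, which is why the case is treated as obvious under utilitarianism.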
Objections to Utilitarianism (I) – Principled
Slogan: “The ends don’t (always) justify the means.”
- Promise-Keeping Case
- Promise is a past fact; breaking it seems wrong even when net happiness favours doing so.
- The utilitarian must explain the promise's force entirely in terms of future-oriented effects.
- Organ-Harvesting Thought Experiment
- Sacrifice one healthy patient to save five lives → raw utilitarian math says "Do it".
- Trolley Problem
- Pulling lever to kill 1 instead of 5 reveals same tension.
Utilitarian Replies
- Bad-Consequences Response: In real life, breaking promises or harvesting organs destroys trust, healthcare, etc. → net negative.
- Bite-the-Bullet Response: If truly 1-for-5 with no systemic harms, moral duty is to do it; squeamishness ≠ morality.
- Rule-Consequentialism
- Formulate rules whose adoption maximises happiness in general.
- Individuals usually follow rules without on-the-spot maximizing.
- Problem: If a rule breach would clearly maximise utility this time, why not break it?
Objections to Utilitarianism (II) – Practical / Epistemic
- Super-Hitler Scenario
- Hitler’s genocide unintentionally prevents birth of future tyrant who would kill billions.
- Shows we cannot know total long-term consequences — hence cannot know if we maximised happiness.
- Utilitarian Counter-move
- Moral deliberation is always probabilistic; perfect foresight is impossible for any ethic.
- We rely on historically reliable tendencies: mass murder almost always brings colossal suffering.
Moral Agents vs Moral Patients
- Moral Agent: Capable of doing right/wrong via reflective choice.
- Moral Patient: Capable of being affected (helped or harmed) by moral agents.
- Important: Theories of moral considerability should at least include patients.
Theories of Moral Considerability
| Theory | Who/What Counts? | Rationale / Notes |
|---|---|---|
| Ratio-centrism | Beings that can act reflectively on reasons (humans, rational aliens, AI). | Good test for agency; questionable as test for patiency. Infants & animals excluded. |
| Anthropocentrism | Only humans (because they are human). | Arbitrary “speciesist” boundary; widely rejected in philosophy. |
| Sentientism | All and only sentient beings (capable of experience). | Underlies utilitarianism; fetus counts once sentient; animals count; non-sentient entities count only indirectly. |
| Biocentrism | Anything alive. | Weeds, bacteria, redwoods have intrinsic value; may still prioritise sentient interests. |
| Ecocentrism | Whole ecosystems (biotic + abiotic). | System-level value may override individuals; raises “individual vs. system” puzzle. |
| Egocentrism | Only me. | Extreme, usually viewed as irrational/arbitrary. Associated with caricatured readings of Ayn Rand. |
| Ontocentrism | All entities that exist. | Maximalist extreme; value simply in being. Rare; overlaps with panpsychic or holistic worldviews. |
Application Highlights
- Abortion debate under Sentientism/Utilitarianism
- Moral status of fetus increases sharply at onset of sentience (neurological threshold).
- Post-threshold abortions require weighing maternal, social, fetal hedonic stakes.
- Environmentalism
- Anthropocentric mainstream rhetoric (“save planet for us”) differs from stricter ecocentrism or biocentrism.
- Animal Ethics
- Bentham’s dictum: “The question is not Can they reason? nor Can they talk? but Can they suffer?”
- Historical influence on Humane Society & modern animal-rights movement.
Key Formulas
- Expected Utility (probabilistic hedonic calculus): $EU = \sum_i P_i V_i$
- Net Happiness: $H_{\text{net}} = H^{+} - H^{-}$
- Probabilistic weighting is essential because outcomes are uncertain.
Practical Take-Aways
- Be aware of scale: personal vs. societal.
- Distinguish agent test (rational reflection) from patient test (capacity to suffer/enjoy).
- Utilitarianism provides clear action-guides but collides with deeply held intuitions about rights & means.
- Predictive limits force humility; moral theories must grapple with epistemic uncertainty.