Definition of Consequentialism: A family of ethical theories that determine the morality of actions based on their outcomes rather than intentions.
Key Principle: An action is morally required if it produces the best overall results, that is, if it is "optimific."
Five Steps to Determine if an Action is Optimific (a rough numerical sketch follows this list):
Identify what is intrinsically good (e.g., happiness, autonomy, knowledge, virtue).
Identify what is intrinsically bad (e.g., physical pain, mental anguish, betrayal).
Determine all possible options available at the moment.
Assess the value of results for each option (what good and bad they yield).
Choose the action with the greatest net positive effect (the optimific choice).
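To make the procedure concrete, here is a minimal sketch that treats the five steps as a simple maximization. It assumes, purely for illustration, that the goods and bads each option yields can be given numeric scores; the option names, value labels, and numbers are hypothetical and not drawn from the text.

```python
# A rough numerical sketch of the five-step procedure. The options and the
# intrinsic-value scores below are invented purely for illustration; a real
# utilitarian calculation would need a defensible way to measure well-being.

def net_value(results):
    """Steps 1, 2, and 4: sum the intrinsically good (positive) and
    intrinsically bad (negative) results an option would yield."""
    return sum(results.values())

def optimific_choice(options):
    """Step 5: return the option with the greatest net balance of good over bad."""
    return max(options, key=lambda name: net_value(options[name]))

# Step 3: list the options available, each with its hypothetical results.
options = {
    "tell the truth":    {"happiness": 3, "knowledge": 2, "mental anguish": -4},
    "stay silent":       {"happiness": 1, "betrayal": -1},
    "spread the gossip": {"happiness": 2, "betrayal": -5, "mental anguish": -3},
}

print(optimific_choice(options))  # -> "tell the truth" with these made-up numbers
```

The sketch only illustrates the structure of the reasoning; how (or whether) well-being can actually be measured is itself one of the discussion questions below.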
Variations of Consequentialism: Many forms exist based on differing intrinsic values.
Examples of such values include environmental health, world peace, and the advancement of knowledge.
Act Utilitarianism: A prominent consequentialist theory asserting that well-being is the sole intrinsic value. An action is right if it maximizes overall well-being.
Principle of Utility: The moral standard requiring that we maximize well-being for everyone affected; good results, not merely good intentions, are what make actions right.
Impartiality: Every individual's well-being is equally important, regardless of status or identity.
Justification of Moral Beliefs: Utilitarianism aligns with our strong moral intuitions against acts like slavery and supports actions like helping the needy.
Resolving Moral Conflicts: Utilitarianism provides concrete guidance for moral dilemmas based on maximizing well-being.
Example: Deciding whether to disclose harmful gossip based on overall impact.
Moral Flexibility: There are no absolute moral rules; any conventional moral prohibition may be broken when doing so is optimific.
Inclusion of Animals: Nonhuman animals are considered part of the moral community due to their capacity to suffer. Their well-being should also be respected.
Demanding Nature: Critics argue that utilitarianism is too demanding, since it seems unrealistic to require constant altruism from individuals.
Distinction Between Decision Procedure and Standard of Rightness: Utilitarianism is a standard of rightness (actions are right if they are optimific) but not a decision procedure suitable for guiding all actions consistently.
Supererogation: Utilitarianism cannot accommodate actions considered 'above and beyond the call of duty,' since the optimific action is always required rather than optional.
Impartiality Concerns: The theory's strict impartiality conflicts with the widespread conviction that we may, and sometimes should, give priority to loved ones, for example when deciding whom to rescue in an emergency.
Potential Conflicts with Justice: Cases where maximizing well-being could justify serious injustices (e.g., slavery or vicarious punishment).
Reply 1: Justice is intrinsically valuable. Even if utilitarianism is revised to count justice alongside well-being, the theory must still say which takes priority when the two conflict.
Reply 2: Injustice is never optimific; unfair actions are said to lead to worse outcomes in the long run. However, this seems overly optimistic, since unjust actions can sometimes produce the best overall results.
Reply 3: Justice must sometimes yield to the greater good, conceding that overall well-being can, in certain contexts, outweigh individual rights.
Reply 4: Rule consequentialism proposes that actions are right if they conform to optimific social rules, allowing for general guidelines that promote societal well-being while considering justice and fairness.
Overall Appeal of Consequentialism: Its emphasis on impartiality, flexibility, and the future aligns with many ethical intuitions. However, it raises significant questions regarding justice and self-sacrifice.
Ongoing Issues: While the theory offers a powerful framework for moral reasoning, its critics stress the unresolved tension between maximizing well-being and upholding justice across all moral situations.
Key Terms:
Absolute Rule
Act Utilitarianism
Consequentialism
Decision Procedure
Moral Community
Optimific
Optimific Social Rule
Principle of Utility
Rule Consequentialism
Standard of Rightness
Supererogation
Vicarious Punishment
Discussion Questions:
Why do most utilitarians think that people are not always to blame for wrong actions?
Can any absolute moral rules exist beyond the principle of utility? What could justify their status?
Is there a way to measure happiness brought about by actions? How significant is this to utilitarianism?
Discuss the criticism that utilitarianism demands saintly motivations. How plausible is this criticism?
What limits on substantial sacrifice might a utilitarian recognize, and are these acceptable?
How does utilitarianism require impartiality, and is this a positive or negative aspect?
Which reply to the Argument from Injustice is most convincing? Is it ultimately successful?