A runaway train is heading toward 5 innocent people stuck on a train track. You can flip a switch that will send the train down a sidetrack, but there is 1 innocent person stuck there.
In the fat man version, there's no side track. Instead, there's a fat man on a footbridge whom you could push onto the tracks to save the 5. Utilitarians would likely sacrifice the 1 to save the 5 in both cases, which probably flies in the face of moral intuition (especially in the fat man case). Both actions would violate the 2nd formulation of Immanuel Kant's categorical imperative (you'd be using the 1 as a mere means to the end of saving the 5).
Put Foot's Trolley case (1967) & Thomson's 'Fat Man' (1976) variation into your own words. What would the utilitarian suggest? What about Kant?
Would you sacrifice one person to save five? - TED-Ed: https://www.youtube.com/watch?v=yg16u_bzjPE
Brain Games- Fat Man Trolley Problem - Nat Geo: https://www.youtube.com/watch?v=ngsxSh0tnNQ
BBC Radio: Trolley Problem: https://www.youtube.com/watch?v=bOpf6KcWYyw
What were some good web links to the trolley problem?
Either there's a relevant difference between the two cases or we are being logically inconsistent, because most people are willing to flip the switch but not to push the fat man.
What is puzzling about the original trolley problem compared to the fat man version?
Maybe there is no difference, and the only reason we hesitate to push the fat man is that we're squeamish. That said, perhaps the first case is permissible because it's a case of indirectly letting someone die, while the fat man case is a case of direct killing. Think of the end of Batman Begins: because he is good, Batman can't directly kill the major villain (Ra's al Ghul, played by Liam Neeson), but it's okay for Batman to let him die. He says, "I won't kill you, but I don't have to save you." Entanglement argument: perhaps sacrificing the one person in the original case is justified because he is entangled with the others, and this isn't the case in the other versions. Inherent danger argument: there is inherent danger to being on trolley/train tracks; there isn't the same inherent danger in being on a footbridge. But what if we know that the person was forced onto the side track?
What might be a relevant difference between the switch case and the fat man case?
NBC's The Good Place, Season 2, Episode 5 (cold open and 7:34).
What TV sitcom portrayed the trolley problem?
Designed to be analogous to the trolley case: you can save 5 innocents who need vital organs, but only by harvesting the organs of 1 unwilling innocent who came in for a check-up. The utilitarian would likely say to harvest the organs of the one because it would maximize utility. In Kant's terms, you can't 'universalize' the hospital case (at least not as easily as the trolley case).
Put Thomson's Organ Transplant case (1976) into your own words. What does the utilitarian say is the moral thing to do?
Because when you zoom out, net utility would actually go down: trust in hospitals would be put in jeopardy, and trust in hospitals is worth far more than even the lives of the 5.
How might a utilitarian argue against harvesting the organs of the one?
The self-driving car version.
The trolley problem is theoretical, but what real-world version of it did we discuss?
Because the car will need to be programmed to deal with trolley-style decisions, e.g., whether to kill several children in a crosswalk or the driver.
As the articles we read indicated, self-driving cars will likely have to be programmed to kill. Why?
The utilitarian and common-sense answer to the dilemma of killing the child pedestrians or the adult passenger/owner of the self-driving car is to kill the passenger. However, people are less likely to buy a car that is programmed to kill them (even if they agree with the programming from a bystander's perspective). The catch-22: if you program self-driving cars to sacrifice the driver in these cases, people won't buy them. If people don't buy them, the roads will continue to be less safe. (That's the whole point: moving toward 100% self-driving cars means safer roads, because these machines will get into fewer accidents than we have now with human error.) But if you don't program the self-driving car to sacrifice the driver in these trolley cases, then the cars won't be as safe as they should be. So it's a catch-22: to sell the cars, you have to program them in a way that makes the roads less safe; but if you want safe roads, you need people to buy self-driving cars.
What is the 'catch-22' problem with programming self-driving cars to kill?
An action is morally right iff it maximizes expected utility with respect to all affected sentient beings taken as a group.
How did we define the principle of utility?
Happiness, pleasure, or well-being. We want to maximize this and minimize pain, suffering, and dissatisfaction, say Mill & Bentham.
What is utility?
A being that experiences pain and pleasure.
What is a sentient being?
When we apply utilitarianism to the original trolley case, we get intuitive results, i.e., both the utilitarian and most people would say it's morally acceptable to pull the lever to kill one to save five. However, when we apply the theory to the fat man case, we get counterintuitive results, i.e., the utilitarian says push the fat man, but most people find this morally unacceptable.
Which of the trolley cases is problematic for utilitarianism and why?
Solely the consequences. (Think of Yoda's "Do or do not, there is no try.")
When it comes to moral praise and blame, does utilitarianism place more weight on the consequences or intention of an action?
Consequentialism (which falls under the umbrella of teleology -- ethical views that stress 'ends' or 'goals').
Is utilitarianism a form of consequentialism or deontology?
Epicurus & David Hume
Which two figures from the history of philosophy are credited with laying the groundwork for utilitarianism?
Bentham. Mill's father was a disciple of Bentham, and Bentham was Mill's godfather. Both were English philosophers.
Between Bentham and Mill, who came first?
Mill
Who said, "It is better to be a human being dissatisfied than a pig satisfied; better to be Socrates dissatisfied than a fool satisfied"?
Ignorance is bliss -- the view of Cypher in The Matrix.
What saying is in disagreement with Mill's maxim?
Pleasures that are bodily, instinctual, ephemeral, and aimed at instant gratification,
e.g., food, warmth, sex, sleep, etc.
What are 'low' pleasures?
Mental, intellectual, sustaining, long-term,
e.g., accomplishing goals, unconditional love, playing chess, brain teasers, stimulating conversation, enjoying 'high-brow' art and music, opera, jazz, poetry, museums, classical music, and, no surprise, doing philosophy.
What are 'high' pleasures?
Mill. Adding incommensurate high and low pleasures is like adding apples and oranges.
Who put pleasures on two 'incommensurate' scales?
Mill. Bentham thought this was snobbery, and that if one is happy being a philistine, so be it!
Who believed that we ought to pursue higher pleasures over lower ones, i.e., chess over checkers, classical music over pop, PBS over reality TV, expensive wine over a $10 bottle?
It's the first 45 seconds of this DaBaby SNL clip: https://www.youtube.com/watch?v=pbGLwe53-LI
First minute or so of this Mozart one: https://www.youtube.com/watch?v=0sGqkMU-mGQ
Downton Abbey clip: https://www.youtube.com/watch?v=t3wF-Dly3mw
Keeping up with the Kardashians clip: https://www.youtube.com/watch?v=rMt9g7g3cQI
Batman v Superman: Dawn of Justice trailer: https://www.youtube.com/watch?v=NhWg7AQLI_8
Roma trailer: https://www.youtube.com/watch?v=6BS27ngZtxg
DaBaby "Suge"
Keeping up with the Kardashians
Batman v Superman: Dawn of Justice
Mozart Symphony 40
Downton Abbey
Roma
To compare and contrast Bentham vs. Mill on 'low' and 'high' pleasures. If you believe that Hamlet is objectively preferable because it is 'higher' art and more mentally stimulating, then you support Mill's view that we should prefer high pleasures (and that it is better to be Socrates dissatisfied than a fool satisfied). If you believe that there's no problem with preferring the Kardashians over Shakespeare, then you agree with Bentham that there's no right or wrong when it comes to pleasures and preferences. In short, Mill argued that a life with few to no high pleasures is a waste of a life, and Bentham disagreed.
Why did we talk about The Simpsons, Keeping Up with the Kardashians, and a film adaptation of Shakespeare's Hamlet?
If you think of utilitarianism and Kantian ethics as health care plans, utilitarianism has broader coverage; that is, it covers all sentient beings, including dogs, cats, pigs, chickens, cows, dolphins, etc. Kant's 'kingdom of ends' only includes rational agents, i.e., beings who can reason as well as a healthy adult human.
Why do animal rights people tend to prefer utilitarian ethics to Kant's?
Have an upper front tooth pulled out. ($4,500)*
Have a pinkie toe cut off. ($57k)
Eat a worm. ($100k)
Choke a stray cat to death with your bare hands. ($10k)
Live the rest of your life on a farm in Kansas. ($300k)
*The dollar amounts are how much people, on average, said they would have to be paid.
Edward Thorndike conducted a survey that asked people how much they would have to be paid to do what sorts of things?
They demonstrated a difficulty in using a utilitarian framework for government policies and projects, namely: to use the framework, you have to know what increases utility for people, but this varies from person to person, i.e., it seems pretty subjective (e.g., I'd rather eat a worm than strangle a cat, but vice versa for you).
What was the lesson of Thorndike's findings?
Pond case: there's a child drowning in a pond. If we do nothing, the child will die. We can save the child but we will ruin an expensive suit & pair of shoes & be late to a job interview.
Oxfam case: there are children dying needlessly in third-world countries. If we do nothing, they will die. If we send $10 to the Oxfam charity, we can save 3 children's lives by getting them $3 hydration tablets.
Put Singer's Pond & Oxfam cases into your own words.
Just as it's immoral to do nothing in the pond case, it's immoral to do nothing in the Oxfam case -- especially since the latter requires much less effort and money.
What's the lesson of Singer's Pond & Oxfam cases?
Singer argues that proximity to those in need is not as morally relevant as we often make it out to be; we still have an obligation to help those who are far away.
According to Singer, is proximity to those in need morally relevant?
The utilitarian argues that you aren't being harmed. Someone like Kant might say that even if you are unaware, your right to privacy was violated.
If you never find out about being spied upon (like Rachel's peeping Tom case), how are you being harmed?
The Titanic has just capsized and you're a survivor floating on some driftwood waiting for rescue. To your left you notice that your child is about to drown. You're about to swim over, but then you realize that to your right there are 2 children (whom you don't know) who are also drowning. You can't both save your child and the two strangers. Who do you save?
What are the details of the Titanic case?
This is problematic for utilitarians because they would demand you save the 2 stranger children over your own (as that would result in more net utility). This doesn't align with our moral common sense. In general, this case shows that utilitarianism doesn't allow bias for loved ones (because their happiness isn't any more valuable than a stranger's).
What's problematic for utilitarians about the Titanic case?
Probably not, because you could increase net utility much more by spending on charities rather than luxury items.
Would the utilitarian find celebrities' purchases of hyper-luxury cars morally acceptable?
It's not inherently wrong, and as long as there's no harm to any sentient beings, it would be acceptable.
What is the utilitarian stance on 'desecrating' dead human bodies (like the teenage prank in the Better Call Saul pilot)?
https://ca.pbslearningmedia.org/resource/uklg19-ela-ethics-video/ursula-le-guin/
Watch this PBS clip: Exploring Ethics in Literature: "The Ones Who Walk Away from Omelas" | Ursula K. Le Guin
Kant would find the suffering of the innocent child morally abhorrent; it would be using them as a mere means to the happiness of everyone else. Mill would probably be forced to choose Omelas over our world because of the overall greater utility.
How would Mill and Kant react to the Omelas case?
Utility doesn't seem to be everything, i.e., it's not the ONLY thing that has value. We also value rights and are against exploitation.
What conclusion about the value of utility could be reached from the Omelas case?
In the Omelas case, the blood is clearly on our hands; in our world, the blame for exploitation isn't as clear-cut. For instance, the kidnapping, abuse, and false imprisonment of Jaycee Dugard is similar to the treatment of the Omelas child, but society's happiness is not causally tied to Dugard's suffering, and we are not to blame for it.
What's the difference between the exploitation in Omelas and in our world?
Don't we get cheaper Nikes and designer clothes in part because of the poor working conditions of impoverished children and adults in third-world countries? Aren't coffee farmers harmed if our coffee isn't fair trade? What about the Chinese iPhone factory next to the workers' dormitory, where nets had to be installed because so many workers were jumping from high up to commit suicide?
To a certain extent, is our happiness built on the backs of innocents? Hint: think about clothes, shoes, coffee, and iPhones.
Many utilitarians would say that we should. In fact, the number of exploited children would go way down: in the real world there are thousands (millions?) of exploited children, while in Omelas there is only one. One potentially relevant difference is that if we switch to Omelas, the blood is undoubtedly on our hands; in our world, the link is more tenuous.
If we could push a button and turn our world into Omelas, should we do it?
https://youtu.be/dOIrPdBUekc
Watch "Hedonism & Nozick's Experience Machine Shelly Kagan | Philosophy Lecture" on YouTube
Nozick's claim is that some things are more important than maximizing utility. For instance, we would choose not to enter the experience machine despite the fact that entering would maximize utility. It's more important that our experiences are real, that we actually are the people we want to be and actually do things, not just have fake, fabricated experiences of them.
What's Nozick's challenge to utilitarianism? What lesson does he think we learn from the 'experience machine'?
Rule utilitarianism is the view that we ought to follow the set of rules that, if generally followed, would maximize utility. To keep the varieties of utilitarianism clear, we can call the sort of utilitarianism discussed earlier act utilitarianism (AU).
What is rule utilitarianism (RU)? What's traditional utilitarianism called, then?
To understand RU, imagine that I handed out exam books to the whole class and said, "Everyone write the best set of moral rules you can think of." One student says "Don't lie," "Don't harm for fun," & "Don't neglect children." Another student says "Don't lie," "Don't cheat," and "Don't steal." Which rulebook do we go with? We go with the one that, if generally followed, would maximize utility. That's RU.
How did we explain RU in terms of handing out exam books to the class?
Act utilitarianism faced serious problems (Titanic, Rachel's peeping Tom, Trolley, etc.) like a phone faces a cracked screen. When we have a cracked screen, we can (1) replace it, (2) repair it, or (3) deal with it. RU is an attempt to repair the iPhone.
How did we apply the cracked iPhone example to act and rule utilitarianism?
What are we to do when we believe justice would be better served if a rule were allowed to be broken? For Smart, this represents a dilemma with two horns: either we (a) stick to the rule, which for Smart is tantamount to 'rule worship' (just as Kant's absolutism was too strict), or (b) we allow for the exception, which seems to entail that rule utilitarianism collapses into act utilitarianism.
What's J.J. Smart's problem with rule utilitarianism?
"There can be no justice so long as rules are absolute." This show that it's a dead end to make rules exceptionless.
How did the Picard quote go about justice? (Hint: it has to do with the society that wanted to execute Ensign Crusher for not obeying an alien rule of "Keep off the grass.") And what did we use this quote to show?
Rule utilitarianism (RU) necessitates exceptions (because otherwise it's too strict). Thus, we need a basis for when exceptions are allowed. That basis seems to be the principle of utility, that is, doing whatever maximizes utility. The underlying basis of RU, then, is the principle of utility, and hence RU collapses into act utilitarianism (AU).
Put another way:
When do we follow a rule and when do we allow the exception? In the inquiring murderer case, one is allowed to lie because doing so would save a life. The rule here (and in general) seems to be: if breaking the rule would maximize utility, then break it; if breaking the rule doesn't maximize utility, then follow it. But the rules are set up to maximize utility! So whether you follow or break the rule, you're doing whatever maximizes utility, and thus we're back to square one with act utilitarianism. The whole point of rule utilitarianism was to avoid the problems of AU, but since RU collapses into AU, we're stuck with those same problem cases (like Titanic, Omelas, Organ Transplant, etc.).
Or even another way:
Smart's problem: do we allow for exceptions? If not, we're stuck with Kant-style rule worship, as in the Star Trek "Justice" episode or Constant's inquiring murderer. So that's a dead end.
So we allow for the exception, but when? In general, when are we allowed to make an exception? Smart notices that in all the cases where we allow exceptions (e.g., the lie, steal, and cheat cases), the exception leads to more utility, so RU seems to state: "follow the rules to maximize utility, or allow exceptions if doing so would maximize utility." When you zoom out, how is that any different from "just do what maximizes utility"? AU and RU seem to give the same answers, and we're back to square one facing the same problems as AU.
Here is the thrust of the debate:
Act utilitarianism (AU): In the Titanic case you should let your child die. In the peeping Tom case it's okay to spy if you don't get caught.
Reply: But that doesn't seem okay!
Rule utilitarianism (RU): Amend AU to a set of rules including: take care of kids (with priority to your own) and don't violate a person's right to privacy!
Reply: But there are exceptions to every rule (otherwise we'd have to tell the axe murderer where our neighbor is, because we can't lie). What is the basis for the exception? It seems to be when following the rule leads to really bad consequences, i.e., to a major loss in net utility. That is, we can break rules when doing so maximizes utility.
Reply: But then we don't have to prioritize our kid in the Titanic case, because breaking that rule would maximize utility (i.e., we could save more kids by letting our own die). And we can break the privacy rule, because in the peeping Tom case spying leads to more utility.
Conclusion: RU reverts back to or collapses back into AU and is stuck with the same problem cases. We are back to square one!
The main issue is whether or not the rules in RU should be absolute. If they are, then we fall victim to the rule worship that Smart worries about. If we allow exceptions, then we need a criterion for them that isn't just "rules can be broken when doing so leads to more utility"; otherwise RU collapses back into AU.
If you still don't quite understand:
Rule utilitarianism (RU) was an attempt to fix the problems that act utilitarianism (AU) faced. For instance, AU seems to allow 1) spying on someone getting undressed if they never find out, 2) letting one's own child drown to save multiple other unrelated children, and 3) the exploitation of the Omelas child. None of this bodes well for the theory, i.e., it indicates that AU is defective. RU was an attempt to remedy this. Instead of one rule, the principle of utility, RU provides a set of rules that, if generally followed, would maximize utility for all sentient beings. What exactly the rules would be is an open question. But if the rules included things like 1) don't spy on people (even if they won't know), 2) take care of children, especially your own, and 3) don't allow children to be exploited for the happiness of others, then RU wouldn't fall prey to the aforementioned problems of AU. Sounds good, right? Well, that flew for some time, but then some philosophers, including J.J. Smart, wondered whether exceptions were ever allowed in RU. If they weren't, then we'd be guilty of what Smart called 'rule worship,' where we'd be forced to follow rules even when it seems ridiculous, like in the inquiring ax-murderer case. If rules are breakable, then we need a criterion for when they can be broken, and the criterion can't be willy-nilly, nor can it simply be "whenever more utility is generated," because then, e.g., the peeping Tom and the people of Omelas would be allowed to spy on the undressing person and exploit the Omelas child (respectively), since doing so yields more utility than following the rule. But if that's allowed, then the application of RU is no different from AU, and we're back to square one: RU faces the same problems that it was trying to circumvent in the first place.
Why does allowing for exceptions collapse rule utilitarianism into act utilitarianism?