Hedonism
The view that happiness is the only thing intrinsically good
Intrinsic good
Things that are desirable for their own sake (good in and of themselves). Ex. happiness
Instrumental good
Things often called good, but which in reality only make it easier to obtain some intrinsic good (a means to some truly good end). Ex. money
The Satisfied Pig (Hedonism misconception)
A misconception about hedonism: that it implies it is better to be a satisfied pig than an unsatisfied human, because "pleasure" is the only thing that matters. Mill points out that there are different kinds of pleasure, some much more valuable than others (intellectual, emotional, and moral pleasures)
The Experience Machine
A virtual reality machine that you can plug into and have completely happy experiences. When you are in the machine, your memory of getting into it is erased, so you do not know you are in it. You cannot be unplugged, and you cannot tell it apart from reality. Scientists say that you will 100% live a happier life in the machine. Would you plug in?
Why is the experience machine an objection to Hedonism?
Most people would say they do not want to be plugged into the machine. If happiness were the only thing of value, plugging in would be a no-brainer. There seems to be something missing. People want to:
1. Do things
2. be a certain way
3. be in contact with reality
This implies that happiness is not the only intrinsically valuable thing.
The Dragon Tyrant
A dragon demands thousands of lives as tribute every day, and increases this number as the population grows. Attempts to kill the dragon have all failed, and the people have come to accept their fate, even coming to believe the dragon is a good thing. Eventually scientists claim they can create a weapon to defeat the dragon, and resources are put toward building it. However, every day it is not finished, 150,000 more people are killed. The question then arises of why they did not start earlier.
Dragon Tyrant Application
In this fable, the dragon is a symbol for death, and the moral is that we have a moral imperative to defeat death. The analogy is to death by aging, which, like the dragon, seems like a bad thing. Scientists say that if we put more resources into research to cure aging, we could stop the aging process altogether.
Bostrom's Argument to cure aging
1. The people in the fable had compelling moral reasons to defeat the dragon.
2. The people's situation with respect to the dragon is morally analogous to our situation with respect to death.
3. Therefore, we have compelling moral reasons to defeat death
Critiques of Bostrom's Argument (in favor of shorter life spans)
Lifespan extension could:
-Ignore other people's harms (e.g., ignore starving children)
-Lead to overpopulation
-Cause us to lose meaning/purpose (death is what makes life meaningful)
The Deprivation Account on why death is bad, and why immortality could be good
Death is bad—namely because, when one dies, one is DEPRIVED of whatever goods they would have experienced, had they lived longer (by virtue of no longer existing). If we accept this, it seems that we are committed to the conclusion that immortality COULD be good, so long as more life has more good things to offer
Conditions for immortality to be good
The Identity Condition- The person living at any point in the future must be me
The Attractiveness Condition- The life that I live must be one that is desirable or appealing to me
The Makropulos Case
A woman has been drinking an elixir of life for 300 years, but eventually stops because life has become meaningless and boring. Williams says that any sort of eternal life would inevitably be like this: for any model of keeping life diverse and pleasurable, the Attractiveness Condition will eventually fail.
Immortality Dilemma
If I am the one who lives forever, then boredom will inevitably set in. If boredom does NOT set in, then it is not I who lives forever—but someone else (i.e. removing higher consciousness, complete engrossment in some project, eternal reincarnation, or forgetfulness). In short, one can meet EITHER The Identity Condition OR The Attractiveness Condition, but not both.
Objection to Williams' two conditions for immortality
Fischer says there are two types of pleasures.
1. Self-exhausting pleasures- do them once, and you do not form a desire to do them again
2. Repeatable pleasures- do them once, and you still form a desire to do them again
Fischer says that if there is a suitable number of repeatable pleasures, then eternal life could be appealing
Why does genetic alteration seem logical?
1) Alterations in the environment have sculpted the human genome for many years
2) Breeding itself alters the human genome
-Genetic alteration is just a third way of altering the human genome, but many see it as more threatening than (1) and (2) because it is intentional and has the potential to create radical changes in a short amount of time.
Gene Therapy
Genetic alteration aimed at curing a disease or removing a genetic disability that someone has.
Genetic Enhancement
Genetic alteration aimed at altering or improving the features of someone with a genome that is already adequate or "normal".
Unfairness Objection to Genetic Enhancement
A world where parents could pay to genetically enhance their children would be unfair because it would give these children an unfair advantage over others
Other objections to Genetic Enhancement
-Unexpected side effects
-It would be hard to fix any genetic mistakes (e.g., creating someone who is extremely violent)
-Alienation (the child might not feel as if they are related to their parents)
-Species stagnation/limitation (could limit diversity)
-"Playing God"
-Would there be any restriction on what parents could alter their kids to be?
Should we allow patients and their physicians to have access to our genetic information?
For Yes: It can help save lives
1. discovering you are genetically at risk for some disease
2. discovering which gene is responsible for your illness
3. discovering you are a carrier of a genetic disease
For No: This information may be hard to handle, especially if you carry a disease that will eventually bring a long and painful death
Should third parties (insurance companies or employers) have your genetic information?
For No:
1. Discrimination (could refuse service to people with certain genetically acquired diseases)
2. Irrelevance (you cannot tell if or when some things will happen when it comes to some pre-existing dispositions)
For Yes:
1. Economic (If patients have this info and employers/insurance companies do not, insurers could eventually go bankrupt, since patients would know exactly what needs to be covered and could buy coverage accordingly)
2. Precedence (companies already have a lot of medical information; how is genetic info different?)
Exceptionalism
The view that genetic information is somehow "special", or DIFFERENT than the other sorts of information that are ALREADY shared with insurance companies
Moral Right
A moral claim that one can exercise against another
Cohen's Claim
Since a right is a claim that one can exercise against another, the only individuals who HAVE rights are those who CAN exercise claims against one another. Animals are not capable of exercising claims against others, or of comprehending the rules of moral duty. Therefore, animals have no rights
Objections to Cohen's argument
-Animals are smart (they do reason and exhibit intelligence so perhaps they do have rights)
-Animal Torture (If Cohen is right then it should be permissible to torture puppies)
-Animals feel pain ("The question is not, Can they reason? nor, Can they talk? but, Can they suffer?" Denying this results in speciesism, which is comparable to racism and sexism)
-Killing infants and the disabled (If Cohen is right, then infants and the disabled do not have rights, and it would therefore be permissible to kill them)
Speciesism
The view that HUMAN pain is the only pain that matters
Cohen's New Claim
An individual has rights if it is a member of a SPECIES that is IN GENERAL capable of exercising moral claims against others, comprehending the rules of duty, etc
Scenarios in which Cohen's claim fails
Smart Cow- an intelligent cow that acts like a human and is capable of making moral claims, but does not have rights because its species in general is not
Post-Apocalypse- a nuke turns 70% of the population into "animals" while 30% remain capable of exercising moral claims. According to Cohen, even these capable humans would then not have rights.
Secret Experiment- 10 million humans are secretly born with the mental capacity of a cow, so the majority of the population would lack the capacity for moral claims, making all other humans lack rights according to Cohen
Fred and the Puppies
Fred can no longer taste chocolate because his godiva gland has been damaged and can no longer produce the hormone "cocoamone" (what makes us enjoy chocolate). A recent study shows that the brains of puppies produce cocoamone when they are tortured for 6 months and then brutally killed. Fred sets up a lab in his basement and does this to get his taste for chocolate back.
Norcross' Argument from Fred case
1. What Fred does to the puppies is seriously wrong.
2. Purchasing factory farmed meat is morally analogous to what Fred does to the puppies.
3. Therefore, purchasing factory farmed meat is also seriously wrong.
Objections to Norcross' Argument
-Knowledge (Fred knows the puppies are being harmed, most people do not know about factory farms)
-Direct Harm (Fred directly harms the puppies, people who eat meat do not directly harm animals)
-Intentions (Fred INTENDS to harm the puppies as a MEANS to obtaining cocoamone, but the suffering that animals endure in factory farms is unintended. Rather it is a mere side-effect of modern farming methods)
-Causal Impotence (If Fred stops what he is doing, the puppies' suffering will end. If any one person stops eating meat, they will not thereby end the animals' suffering)
Blind Hens
Blind chickens don't seem to mind the cramped spaces of factory farms as much as normally sighted chickens. Breeding blind chickens would be both more cost-efficient and less painful to the chickens.
Utilitarianism
The view that happiness is the only thing that is intrinsically valuable (hedonism), and that we are morally obligated to bring about the best possible consequences (consequentialism) by maximizing the overall balance of pleasure over pain, or happiness over suffering.
Deontology
The view that we have certain moral obligations to perform/not perform that are independent of how much happiness or suffering this would bring about
Organ Harvest
A doctor has three dying patients who need a kidney, a heart, and a liver. A healthy patient comes in; the doctor kills him and gives his healthy organs to the three dying patients.
Utilitarian view: This is morally permissible, since it brings about the most happiness (three lives saved at the cost of one)
Deontology view: This man had a right to life, and this right should not be violated even if doing so would save three other people
Arguments in favor of animal disenhancement
Utilitarian:
1. If we can reduce animal suffering, then we ought to do so.
2. We can reduce animal suffering; namely, via disenhancement.
3. Therefore, we ought to disenhance animals.
Rights-Based Accounts (Deontologist):
1. If we can reduce the amount of animal suffering, then we ought to do so, provided that no animal's rights are violated.
2. We can reduce the amount of animal suffering; namely, via disenhancement.
3. Animal disenhancement would not be a rights-violation.
4. Therefore, we ought to disenhance animals.
* Even if deontologists reject premise 3, in reality they are faced with two options:
-Continue violating animals' rights AND making them suffer unnecessarily
-Continue violating animals' rights but END unnecessary animal suffering
Since the second option is better, disenhancement is at least less wrong than harming normal animals
Objections to disenhancement
-The Yuck Factor
-Telos (each animal has a specific design or purpose and should not be tampered with)
-Virtue (overlooking the real problem, i.e., like handing out aspirin to sweatshop workers instead of fixing the sweatshop)
Unhealthy Child
A woman has the option to conceive a child now that will live 40 years, or wait a month and conceive a child that will live a normal, healthy life. If the woman conceives now, would she have done something wrong?
The non-identity problem with "Unhealthy Child"
If the woman had waited a month, this particular child would never have existed (a different child would have been conceived). The two alternatives for the unhealthy child are:
a) Not have a life at all
b) Live 40 years
Surely a life cut short is better than no life at all, so it seems that she has not harmed the child by conceiving it now
Argument in favor of breeding blind chickens
1. Animal disenhancement does not make any animal worse off than it otherwise would have been (assuming that disenhanced animals have lives worth living).
2. An act harms an individual if and only if it causes them to be worse off than they otherwise would have been.
3. Therefore, disenhancement does not harm animals.
4. Disenhancement would benefit presently existing people (economically), and harm no one.
5. Any action that harms no one, and benefits some, is not wrong.
6. Therefore, disenhancement is not wrong.
Individual Utility Principle
If there are two possible outcomes, where different individuals are created in each scenario, morally, you should bring about the outcome where the individual created has a higher quality of life
Human Organ Farms vs Happy Farms
If a human were to live a happy 18 years on a human organ farm, only to be slaughtered at 18, the non-identity reasoning (a good 18 years vs. not living at all) would not justify the farm; it would still be morally wrong to run it. The same can be said about chickens: it may be permissible to disenhance them, but running happy farms and factory farms is not.
Reasons in favor of cultured meat
Animal meat is bad because:
-It results in animal death and suffering
-It is bad for the environment
-It is unhealthy
Cultured meat would in turn have none of these drawbacks
Objections to cultured meat
-Real Meat (the worry that cultured meat is not "real" meat, just fake meat)
-Danger (Could be worried about unexpected side effects)
-Unnatural
-Yuck
-Animal integrity (violates an animal's dignity)
-Morally misguided (we should just give up meat, period)
-Animal Death
-Non-identity
-Respect
-Cannibalism
Argument against Cultured Meat
1. If producing and consuming cultured animal meat is permissible, then producing and consuming cultured human meat is permissible.
2. But, producing and consuming cultured human meat is not permissible.
3. Therefore, producing and consuming cultured animal meat is not permissible.
Argument in favor of cultured meat
1. If producing and consuming cultured animal meat is permissible, then
producing and consuming cultured human meat is permissible.
2. Producing and consuming cultured animal meat is permissible.
3. Therefore, producing and consuming cultured human meat is permissible.
Who would cloning benefit?
-Couples unable to reproduce but still want a child biologically related to them
-Couples who want a biologically related child without the risk of passing on a disease
-Parents who want to clone a child who has died
-Pursuing research on human cloning
-Cloning yourself as a source of potentially life-saving organs
3 Fundamental Rights for all humans
(1) A right to reproduce.
(2) A right to some choice about what kind of child we will have.
(3) A right to direct the lives of our offspring as we see fit (within certain limits).
-Cloning seems to just extend these three
Objections to cloning
-Could be physically harmful to the clone
-Could be psychologically harmful to the clone
-Will turn procreation into manufacture (we could feel superior to our clones)
-Perversion of the parent-child relationship (weird if parent and child are genetically the same person)
-Playing God
Virtual Pedophilia vs Virtual Murder Argument
1. Committing virtual pedophilia (e.g., in a video game) is morally wrong.
2. However, there are no morally relevant differences between committing a virtual act of pedophilia and committing a virtual act of murder.
3. Therefore, committing virtual murder in a video game is also morally wrong.
The Gamer's Dilemma
If we accept premise 2, then we have only two options:
1. Both are permissible
2. Both are wrong
Differences Between Virtual Pedophilia and Virtual Murder
-Socially acceptable
-Likelihood of actual harm (if someone commits virtual murder, they aren't likely to commit real murder. If someone commits virtual pedophilia, they are likely to commit actual pedophilia)
-Moral Character and enjoying the act for its own sake (when gamers kill in games, it is not because they enjoy the act itself; if gamers commit virtual pedophilia, they do enjoy the act itself)
-Singling out a particular group (Virtual pedophilia singles out a specific group of people)
-Special Status of Children (there is a special moral significance to harming a child)
-Child pornography
Who could be responsible for the bombing if the drone is autonomous?
-The robot itself
-The person who programmed the robot
-The officer who ordered the use of the robot
Argument on the use of autonomous drones (who is responsible? would a war of autonomous drones be just?)
1. A war is just only if we are able to justly hold someone morally responsible for the deaths of the enemies that we cause.
2. If an AWS causes the death of an enemy, then there are only three plausible candidates for who, or what, can be held morally responsible for the death:
(a) The programmer
(b) The officer who ordered its use
(c) The AWS itself
3. But, neither (a), (b), nor (c) can be justly held morally responsible for the death.
4. Therefore, no one is morally responsible for an enemy death caused by an AWS; i.e., no war in which AWSs are employed is a just war.
Why aren't any of the three possible culprits to blame?
The Programmer- cannot be held responsible, since the machine is fully autonomous and acting on its own
The Commanding Officer- these are smart weapons that almost act as soldiers themselves, so the officer should not be held responsible
The Machine- if the machine is to blame, then it would need to be punished somehow, but a machine cannot meaningfully be punished
Argument relating AWS to child soldiers
1. In the case of child soldiers, neither the child, nor the person who orders the child, nor the child's parents are responsible for any deaths that the child causes.
2. Regarding autonomy, an AWS is morally analogous to the child.
3. Therefore, in the case of AWS soldiers, neither the machine, the commanding officer, nor the programmers are responsible for any deaths that the AWS causes
Trolley Situation
There is a runaway trolley headed for 5 people on the track. You are standing in the distance next to a lever. If you pull the lever, you will move the trolley to a different track, with only 1 person on it. You can either:
a) watch the trolley kill the 5 people
b) divert the trolley and kill 1 person
Fat man Situation
There is a runaway trolley headed for 5 people on the track. You are on a bridge watching from above, but there is a very fat man next to you. If you push the fat man over the bridge, the trolley will hit and kill him, but save the other 5 people. You can either:
a) watch the trolley kill 5 people
b) push the fat man over so he dies, but the others live
Intending vs Foreseeing Harm
It is much worse, morally, to intentionally harm someone as a means to an end than it is to merely foresee that harm will come to someone as an unintended side-effect of one's ends.
Strategic Bomber
A pilot bombs a factory that contains the enemy's store of weapons, and thus shortens the war, saving millions of lives. Unfortunately, 10,000 civilians live next to the factory and will die in the bombing.
Terror Bomber
A pilot deliberately bombs 10,000 innocent civilians in order to demoralize the enemy, thus shortening the war and saving millions of lives
Divert the Bomb
Terrorists have launched a nuclear missile at New York City. The only way to prevent the destruction of New York is to divert the missile. Unfortunately, if the missile is diverted, it will hit Ithaca (a much smaller city) instead. The president orders the bomb to be diverted.
Explode the Bomb
Terrorists have launched a nuclear missile at New York City. The only way to prevent the destruction of New York is to launch our own nuclear missile to destroy Ithaca. This blast will destroy the terrorist missile as it passes over Ithaca's airspace. The president orders an attack on Ithaca.
Diverting a threat vs Introducing a Threat
It is acceptable to divert a threat (Divert the Bomb), but not to introduce a new one (Explode the Bomb): in the first case you are merely redirecting a pre-existing threat, while in the second you are creating a whole new threat to Ithaca that was not originally there
Pre-meditated homicide
Driverless cars will eventually be making this (trolley-case) decision based on programming decided perhaps YEARS in advance
What are some moral principles for driverless cars?
-"Do" no harm
-Minimize Harm
-Legally Adjusted Minimization
-Passenger Safety
-Adjustable Ethics Settings
"Do" no harm
The car should never go out of its way to do harm (e.g., it should never steer into the one pedestrian, as in the trolley case)
Problems
-Isn't it sometimes permissible to DO harm to a small amount of people in order to avoid ALLOWING harm to some greater number?
Minimize Harm
The car ought to do as little harm as possible
Problems
-Sometimes this will require your death
-Requires you to steer into a motorcyclist with a helmet on instead of one without
-A driver and his friend could play chicken with a driverless car, forcing it to hit one pedestrian in order to minimize harm
Legally-Adjusted Minimization
The car should do as little harm as possible, adjusting for facts about whether the surrounding individuals are acting legally
Problems
-Would have difficulty making those judgements
-could act as judge, jury, and executioner
Passenger Safety
Protect its own passengers at all costs
Problems
-Killing a group of children over yourself
Adjustable Ethics Setting
Customers would have the choice between different moral settings
Problem
-Everyone might just pick passenger safety
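A minimal sketch of how these settings might look in code (illustrative only; every name, type, and weight below is an invented assumption, not any real car's API):

from dataclasses import dataclass

@dataclass
class Outcome:
    passengers_harmed: int
    others_harmed: int
    others_acting_illegally: bool = False

def choose(outcomes, setting):
    # Pick the outcome a car with the given ethics setting would select.
    if setting == "minimize_harm":
        # Minimize Harm: least total harm, even at the passengers' expense.
        return min(outcomes, key=lambda o: o.passengers_harmed + o.others_harmed)
    if setting == "passenger_safety":
        # Passenger Safety: protect passengers first, then minimize other harm.
        return min(outcomes, key=lambda o: (o.passengers_harmed, o.others_harmed))
    if setting == "legally_adjusted":
        # Legally-Adjusted Minimization: discount harm to those acting
        # illegally (the 0.5 weight is an arbitrary invented value).
        return min(outcomes, key=lambda o: o.passengers_harmed +
                   (0.5 if o.others_acting_illegally else 1.0) * o.others_harmed)
    raise ValueError(f"unknown setting: {setting}")

# Trolley-style choice: swerve (harm the 1 passenger) vs. stay (harm 5 pedestrians).
options = [Outcome(passengers_harmed=1, others_harmed=0),
           Outcome(passengers_harmed=0, others_harmed=5)]
print(choose(options, "minimize_harm"))     # sacrifices the passenger
print(choose(options, "passenger_safety"))  # sacrifices the five pedestrians

The same scenario gets opposite verdicts depending on the chosen setting, which is exactly the problem with adjustable ethics settings noted above.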
Morally Obligatory
An action one is morally required to perform (e.g., saving a drowning child right next to you)
Supererogatory
An action that would be nice to do, but that one is under no moral obligation to perform (e.g., donating to charity)
Geoengineering
The intentional manipulation of the environment on a global scale in order to counteract climate change.
How can we geoengineer?
-More vegetation (seed the oceans, reforestation)
-Carbon sequestration (capture carbon from power plants and store it underground)
-Solar Deflection (Mirrors in space, painting roofs white, aerosols in the atmosphere)
"Arming the future" argument for geoengineering
1. There is good reason to think that the worst emitters will fail to mitigate their emissions sufficiently to counteract climate change.
2. If the world fails to reduce emissions, then in the near future we will be faced with two options:
(a) Geoengineering, or (b) Global catastrophe.
3. Though both of these options are bad, a global catastrophe would be a greater evil, and a more probable threat, than the potentially bad consequences of geoengineering.
4. When faced with 2 evils, we ought to choose the "lesser of the 2 evils".
5. But, if we do not research methods of geoengineering NOW, we will not be able to do so (rather, we will be forced to suffer a global catastrophe).
6. Therefore, we ought to be presently pursuing research in order to "arm the future" with geoengineering, so that they have the option if they need it.
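One way to read premises 3 and 4 is as an expected-harm comparison; a toy sketch (every probability and harm value here is invented for illustration):

# "Lesser of two evils" read as: pick the option with lower expected harm.
p_catastrophe, harm_catastrophe = 0.5, 1000    # invented: probable, very bad
p_side_effects, harm_side_effects = 0.2, 200   # invented: less probable, less bad

expected = {
    "global catastrophe": p_catastrophe * harm_catastrophe,  # 500.0
    "geoengineering": p_side_effects * harm_side_effects,    # 40.0
}
print(min(expected, key=expected.get))  # geoengineering: the "lesser evil"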
Toilet Plans
Every time I flush, the waste goes directly into your house. In the near future, your house will be filled with sewage. I come to your house and give you a lemon-scented Febreze to mask the smell.
Problem: The Febreze merely obscures the main problem, just as geoengineering obscures the main problem of global warming
Objections to geoengineering
-Obscuring the moral issue
-Self-fulfilling prophecy (failing to motivate people to reduce emissions)
-Masking symptoms without addressing cause
-Only a partial solution
-Ignores other options
-Ignores moral relationship with nature
-Potential Catastrophic side effects
-Forcing "lesser of two evils" on future generations
-Lesser Evil isn't always morally permissible
Sophie's Choice
Sophie was given the option to watch both of her children die, or to save one of them. She chose to save her son. Forcing such a choice on someone is itself a very great evil, and this is essentially what we would be doing to future generations with geoengineering
Three Scenarios for the human race (Badness of Extinction)
1. Peace
2. Nuclear war that kills 99% of the population
3. Nuclear war that kills 100% of the population
Parfit claims that the loss of lives from (2) to (3) is much worse than the loss from (1) to (2), because total extinction would entail the loss of all future generations and all their possible years of future happiness
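A toy version of Parfit's arithmetic (every number below is an invented assumption):

# Scenario (2) loses 99% of the current population; scenario (3) additionally
# loses the remaining 1% plus every future generation.
population = 8_000_000_000
future_generations = 10_000          # assumed number of potential generations
per_generation = population          # assumed size of each future generation

loss_1_to_2 = 0.99 * population
loss_2_to_3 = 0.01 * population + future_generations * per_generation
print(loss_2_to_3 / loss_1_to_2)     # ~10,000x worse under these assumptions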
The Urn
There is an urn containing a variety of colored balls. White balls represent discoveries that improve our lives, grey balls are "mixed blessings" (good and bad), dark grey balls represent discoveries that make life worse for us, and one black ball represents a discovery that will end our species. Throughout history, Bostrom says, we have continuously reached in and grabbed ball after ball from the urn. If there is a black ball in the urn and we keep pulling balls out, then eventually we will draw it.
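A minimal simulation of the point (the per-draw black-ball probability is an invented assumption):

import random

def draws_until_black(p_black=0.001, max_draws=10_000_000):
    # Draw from the urn until the black ball appears.
    for draw in range(1, max_draws + 1):
        if random.random() < p_black:
            return draw
    return None  # not drawn yet; increasingly unlikely as draws continue

random.seed(0)
print([draws_until_black() for _ in range(5)])
# Every trial eventually hits the black ball; only the waiting time varies.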
Potential Man-made threats
Artificial intelligence
Nano-technology
Genetic engineering
Geoengineering
Totalitarianism-enabling technologies (e.g., think 1984)
Unknowns
The singularity
The point at which artificial intelligence outperforms humans and takes off toward infinite intelligence