Sumner's criticism of the "typical pro-choice argument"
>Okay to kill before birth
>Bad to kill after birth; like killing a normal human adult
pro-choice example
Killing the fetus right before birth = okay, nothing wrong (an implausible consequence of the view)
Sumner's belief
says that moral status comes gradually and in degrees
pro-life example:
1- Drunk driver
2- Drunk CDC Tech
3- Drunk fertility tech
On the pro-life view, the drunk fertility tech is the worst of the three, since destroying 300,000 fertilized eggs counts as killing 300,000 humans; that seems implausible
Sumner's criticism of the "typical pro-life argument"
>Okay to kill before conception
>Bad to kill after conception; like killing human adult
Sumner's criticism of life #1
Worry #1: life is too broad
It's not bad to kill rocks, and it's not bad to kill plants/bacteria; but if the morally relevant line is drawn at life (between rocks and plants), then killing plants/bacteria counts as bad, which seems wrong
Sumner's criticism of life #2
Worry #2: Life doesn't come gradually or in degrees (something is either alive or not), but moral status should
Sumner's criticism of rationality (the ability to reason)
>Rationality is too narrow:
Killing normal human adults (who are rational) is wrong, but many things fall below the rationality line (animals, severely mentally handicapped adults, babies), and killing them still seems wrong
Sumner's account of when and to what extent it is bad to kill something
The degree to which it's inherently bad to kill something is proportional to the degree to which it's sentient
Higher sentience = higher moral status = worse to kill
sentience
the ability to perceive or feel things, distinguished from intellect or thought
Sumner: 0-5 months of pregnancy
fetus has no moral status= abortion okay
Sumner: between 5-7 months of pregnancy
develop moral status= questions about abortion
Sumner example:
Save an 8-month fetus or a 9-month-old baby?
The 9-month-old baby, because it is more sentient
Sumner example:
Save NHA (normal human adult) or cow?
The human, because more sentient
>>>One could reject this, since the cow is sentient too; reply: humans feel a wider range of pain
Sumner example:
Save Cow vs chicken
The cow, because it is more cognitively developed and feels more things
Sumner example:
Save NHA vs. baby
The NHA, because more sentient = higher moral status
Sumner example:
Save NHA vs. SMHA (severely mentally handicapped adult)?
The NHA, because more sentient and more rational
>>>The SMHA is not rational
Sumner example:
Save Chimp vs SMHA
Depends on how handicapped the adult is
Sumner example:
Save cow vs. 9-month-old baby?
The cow, because more sentient
Marquis' account of when and to what extent it is bad to kill something
The degree to which it's inherently bad to kill something is proportional to the degree to which it's deprived of a valuable future like ours
Advantage #1 of Marquis' account
Explains why killing is so bad.
>>>Deprives person being killed so much more than anything else you can do.
Marquis: explanation of advantage #1
Killing deprives the person killed of far more (their entire valuable future) than anything else you could do to them.
Advantage #2 of Marquis' account
Explains why euthanasia isn't as bad as "normal" killing.
Euthanasia is mercy killing
Marquis: explanation of advantage #2
Euthanasia is mercy killing: the person killed has little or no valuable future left to be deprived of.
Advantage #3 of Marquis' account
Explains why killing non-humans can be bad
Advantage #4 of Marquis' account
Explains why killing babies is at least as bad as killing NHA's.
Contraception/chastity objection to Marquis
If depriving something of a valuable future is what makes killing bad, then contraception (and even chastity) deprives a would-be person of such a future, so contraception ≈ murder?! That seems absurd.
Marquis: Interpretation #1
We don't know which sperm & egg combo is deprived, so it's not bad.
Example:
>>Poisoning Milk
Cyanide is injected into milk; you don't know who will end up drinking it
Reply: It doesn't matter that you don't know who is deprived; the poisoning is still wrong.
Marquis: Interpretation #2: It's indeterminate which sperm & egg combo is deprived, so it's not bad.
Ex: Schrödinger's Captives
>>Gas is released into either one room or another; 50% chance that someone will die, and it's indeterminate who
Reply: It doesn't matter that who is deprived is indeterminate; releasing the gas is still wrong.
Marquis: Worry #1
Why is a fetus "one" thing?
Ex: Siamese-Twin Surgery
Possibility for one twin to die or both
If the doctor lets them die instead of doing the surgery, did the doctor do something bad?
Marquis: Worry #2
Depriving two things of a shared valuable future can be bad.
Ex: Frankenstein
Marquis: Worry #3
Depriving one thing of a valuable future needn't be bad.
Marquis: Should you save?
NHA vs Cow?
Save human bc more valuable future
Marquis: Should you save?
Baby vs Cow?
Save baby bc more valuable future
Marquis: Should you save?
SMHA vs Cow?
Toss-up
Marquis: Should you save?
Grandma with Alzheimer's vs UMass?
UMass bc more valuable future
Marquis: Should you save?
Baby vs UMass?
Baby bc has more time to have a valuable future
Marquis: Should you save?
1-day zygote vs UMass?
1-day zygote because more time
Marquis: Should you save?
3 month fetus vs mother?
Fetus because more valuable future
Tooley's account of rights #1
Right to X entails (all and only) that others shouldn't deprive you of X.
Ex 1. Ugly Lamp
Ex 2. Masochism
Tooley's account of rights #2
Right to X entails (all and only) that others shouldn't deprive you of X if you want X.
Ex 1. Depression
A depressed person doesn't want to live, so on this account killing them wouldn't violate their rights
Ex 2. Coma
A comatose person can't currently want their stuff, so taking it wouldn't violate their rights
Ex 3. Hypnosis
Hypnotize people into wanting to give you their stuff, and taking it wouldn't violate their rights
Tooley's account of rights #3
(≈ Tooley): Right to X entails (all and only) that others shouldn't deprive you of X if you want X under normal circumstances.
Ex 1. Rock and Right to be Untouched (Trivial right)
Trivial right: a right that has no implications regarding how we should behave.
Tooley: Can it have a non-trivial right to be pain-free?
Tree (No)
Dog (Yes)
Normal Human Adult (Yes)
Baby (Yes)
1-day zygote (No)
Tooley: Can it have a non-trivial right to an attorney?
Tree (No)
Dog (No)
Normal Human Adult (Yes)
Baby (No)
1-day zygote (No)
Tooley: Can it have a non-trivial right to life?
Tree (No)
Dog (No)
Normal Human Adult (Yes)
Baby (No)
1-day zygote (No)
Tooley's argument against potentiality views
(Cat Serum Argument)
Potentiality views (e.g., Marquis): killing a fetus = killing an I-cat ≈ killing an NHA
"I": inject with smart-cat serum
"N": neutralize the serum
"I-cat": a cat injected with smart-cat serum less than 9 months ago
P1. Kill NHA > kill cat ('>' = morally worse than)
P2. Kill cat = I-ing a cat, N-ing it, and killing it ('=' = morally on a par with)
P3. I-ing a cat, N-ing it, and killing it = killing an I-cat
C. Kill NHA > kill I-cat
=> Potentiality views are false (they say killing an I-cat ≈ killing an NHA)
Marquis: reject P2?
Valid? Yes
Sound? ?
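Why "Valid? Yes": a minimal Lean 4 sketch of the validity claim, under the hypothetical assumption that moral badness is modeled as a numeric score, '=' as equal scores, and '>' as "worse than" (the variable names below are illustrative, not from the course):

-- Hypothetical model: each act gets a Nat badness score.
-- Given P1-P3 as hypotheses, the conclusion follows by substituting
-- morally-equivalent acts (equal scores) into the '>' claim.
example (killNHA killCat injectNeutralizeKill killICat : Nat)
    (P1 : killNHA > killCat)
    (P2 : killCat = injectNeutralizeKill)
    (P3 : injectNeutralizeKill = killICat) :
    killNHA > killICat := by
  rw [← P3, ← P2]  -- rewrite "kill I-cat" back to "kill cat" via P3 then P2
  exact P1         -- the goal is now exactly P1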
Argument from Humane Intuitions
P1. If dog fighting is wrong, then typical LD50 testing is wrong.
P2. Dog fighting is wrong.
C. Typical LD50 testing is wrong.
Argument from Humane Intuitions: Option 1
Reject P1: There's an important moral difference between dog fighting and typical LD50 testing.
>Dog fighting serves entertainment/pleasure
>LD50 testing serves safety
Replies to Option 1 of the Humane Intuitions argument
>Does LD50 testing really improve safety?
>>>The doses involved are an atypical use of the product
>>>It's hard to get good predictive results
>What about the safety/well-being of the animals?
>LD50 testing for product safety often ultimately serves pleasure too, undercutting the contrast
Argument from Humane Intuitions: Option 2
Reject P2: Animal suffering doesn't matter (much)
Argument from Humane Intuitions: Option 3
Accept C: Typical LD50 testing is wrong.
Argument from Marginal Cases
Ex. SMHA (severely mentally handicapped adults) LD50 Testing Factory
P1. If SMHA LD50 testing is wrong, then animal LD50 testing is wrong.
P2. SMHA LD50 testing is wrong.
C. Animal LD50 testing is wrong.
Option 1 for the Argument from Marginal Cases
Reject P1: There's an important moral difference between SMHA LD50 testing and animal LD50 testing.
Option 2 for the Argument from Marginal Cases
Reject P2: Non-rational suffering doesn't matter (much).
Option 3 for the Argument from Marginal Cases
Accept C: Animal LD50 testing is wrong.
Singer's Position
>Sexism: To value the interests (e.g., happiness) of one sex more than those of another.
>Racism: To value the interests of one race more than those of another.
>Speciesism: To value the interests of one species more than those of another.
>Singer: Accept option 3 w.r.t. (with respect to) both arguments
Singer's test
Would you be willing to perform this test on an SMHA, or on yourself?
>>Ex: Jonas Salk invented a polio vaccine and tested it on himself and his family
>>The inventor of the yellow fever vaccine first tested it on himself
>>LD50 testing fails this test.
Singer: LD50 testing is wrong.
How would a Rawlsian contractualist assess animal rights?
We should act according to rules that would be agreed to by agents who are (1) rational, (2) self-interested, and (3) behind the veil of ignorance.
Animals are not rational, so they are not parties to the contract.
But Rawlsian agents will be rational, and since they're self-interested, why would they give other (non-rational) beings rights?
Carruthers' argument for why it's bad for humans to torture animals
Torturing animals (e.g., dog fighting) is wrong because it makes us more likely to hurt humans.
>LD50 testing doesn't brutalize us in this way, so P1 of the Argument from Humane Intuitions ("If dog fighting is wrong, then typical LD50 testing is wrong") is false.
Worries for Carruthers' argument
1. Does dog fighting really make us more likely to hurt humans?
2. Are dog fighting and animal LD50 testing really distinct in this way?
Carruthers' "Social Stability Argument"
(Targets P1 of the Argument from Marginal Cases.)
P1: If our immediate relatives were "marginal" (= not rational) and had no rights, the state could seize them whenever it was in society's interest to do so.
P2: If the state could do this, society would become unstable.
P3: Rawlsian agents would never agree to rules which make society unstable.
C: Rawlsian agents would give "marginal" humans rights.
Worries for the SSA
1. Would this really make society unstable?
>>>If not, P2 is false
2. If we had rigid property rights, our "marginal" relatives couldn't be seized
>>>If so, P1 is false
3. At best it follows that only the immediate relatives of rational agents get rights
>>>This makes the argument invalid (the conclusion overreaches)
4. Can a similar argument be run for animals having rights?
>>>If so, P1 of the Argument from Marginal Cases would be true after all
Cohen's account of when it's inherently bad to kill something
It's inherently bad to kill something iff it belongs to a kind whose typical member is rational
Cohen: Okay to kill?
SMHA (kind=human)
Not okay to kill, because the typical human is rational
Cohen: Okay to kill?
Fluffy (kind=dog)
okay to kill bc typical dog isn't rational
Worry #1 for Cohen's account
What are the right kinds? (What justifies this choice?)
>Chris? (kind = mammal)
Okay to kill, because most mammals aren't rational
>Baby Chris? (kind = baby human)
Okay to kill, because the typical baby is not rational
Cohen: explanation for Worry #1
Cohen assumes the relevant kinds = species
--But other kinds seem at least as morally relevant: rational / sentient-but-not-rational / living-but-not-sentient
Example: Fluffy (kind = sentient-but-not-rational)
>Okay to kill, because the typical member of that kind isn't rational
Example: SMHA (kind = sentient-but-not-rational)
>Okay to kill, because the typical member of that kind isn't rational (which Cohen wants to deny)
Worry #2 for Cohen's account
What's typical of your kind doesn't seem morally relevant
Example: Heaven vs. Hell
Sort people by what's typical of their group rather than by what they individually did:
Hitler, Stalin, Pol Pot, Gandhi (HELL)
Rosa Parks, Mother Teresa, St. Catherine, Báthory the Blood Countess (HEAVEN)
Gandhi ending up in hell and Báthory in heaven seems absurd
Worry #3 for Cohen's account
Implausible Consequences
Example: Fido the Genius Dog (kind = dog)
Okay to kill, because the typical dog is non-rational (even though Fido is rational)
Example: Pluto People Pasture... You? (kind = human)
Okay to kill, because in that scenario the typical human would not be rational (even though you are)