3F - Utilitarianism: Application of the Theories (Act and Rule)


1

What are the two instances that the spec requires you to apply act and rule utilitarianism to?

‣ Animal experimentation
‣ Nuclear weapons as a deterrent

2

What are the problems with applying either form of utilitarianism to animal experimentation?

‣ Due to the complexity of the issues + the complexity of the theory itself, there are different ways to apply it
- Bass: "however good the utilitarian case against animal research in general, it will be possible in principle to find cases in which it seems justified"

‣ The "greatest happiness for the greatest number" is open to interpretation:
- It promises happiness across many human lives if we consider lives already saved and the potential human disaster of failing to control epidemics
- However, if there are many uncertainties / discrepancies (as the facts suggest), then the greatest happiness for humans is not guaranteed relative to the suffering caused. What about failed experiments?

3

Apply act utilitarianism (Bentham) to the issue of animal experimentation.

‣ Judges each moral act uniquely ∴ would not take into account any previous moral judgements

‣ The principle of utility would be applied to decide whether the act creates the biggest gap between pleasure and pain (dependent on whether the pain of animals should be considered)
- Pleasure = humans may benefit; pain = suffering to animals
- Which outweighs the other? (Bentham - quantity; Mill - quality)

‣ Bentham = considered a pioneer of animal rights
- He did not argue that humans and non-humans had equal moral significance, but argued that the latter's interests should be taken into account
- Rather than regarding them as inferior ∵ of their inability to reason, he said "The question is not, can they reason? [...] But can they suffer?"
- His "insuperable line" = that the ability to suffer, rather than the ability to reason, provides the framework for how we treat animals

‣ Hedonic calculus: the only satisfactory element to consider = the principle of 'extent'
- Look to the long-term: if the suffering of animals in the present leads to less suffering in the future, then there is the biggest gap between pleasure and pain

4

Apply rule utilitarianism (Mill) to the issue of animal experimentation.

‣ Animal pleasures + pains do not equate to their human counterparts in terms of value
- Animals do not appreciate the higher pleasures ∴ cannot operate as utilitarian beings

‣ Julia Driver: "Mill holds that while animals do have moral standing in virtue of their sentience [...] their moral standing is not the same as that of persons who have [...the] capacity to experience higher pleasures"

‣ Mill's harm principle = aimed at society/humans so that society benefits; it would insist on the stringent application of rules to minimise suffering rather than opposing experimentation outright

‣ Strong rule utilitarianism would most probably advocate it; e.g. William Harvey's work on the heart (he operated on a live pig) = integral to the understanding of blood circulation and heart bypass surgery, which has brought great benefit to humanity ∴ rule utilitarianism could adopt a rule that states 'Animal experimentation is good/right'

‣ Weak rule utilitarianism = more flexible; allows for exceptional cases, e.g. the deaths of millions in failed experiments ∴ ignore the rule and switch to act utilitarianism
- Mill: intervention should be based on "the intrinsic merits of the case" rather than "incidental consequences [...] to the interests of humans" ∴ a weak rule utilitarian would not consider the variants but would work with the underlying principles as advocated by the distinction between higher / lower pleasures

5

Give a quote from Peter Singer, a contemporary utilitarian, on animal experimentation.

ā€£ "Given the suffering that this routinely inflicts on millions of animals, and that probably very few of the experiments will be of significant benefit to humans or to other animals, it is better to put our resources into other methods of doing research that do not involve harming animals."

6

What is important to remember when answering a question about the use of nuclear weapons as a deterrent?

‣ The question is asking about their use as a DETERRENT, not about actually using nuclear weapons

‣ As a result, both act and rule utilitarianism will support their use as a deterrent, as they are not harming anyone but are preventing large amounts of pain

7

Give an introduction to the use of nuclear weapons as a deterrent.

‣ Many argue that the purpose of nuclear weapons is to serve as a deterrent

‣ Others see them as a waste of resources, as the money could be better spent elsewhere

‣ CND (Campaign for Nuclear Disarmament): civilian casualties; radioactive fallout; recent research shows that a 'small' exchange of 50 nuclear weapons could cause "the largest climate change in recorded human history" (long-term pain) + could kill more people than the whole of WW2 (short-term pain)

‣ UK gov.'s reasons for nuclear weapons as a deterrent: the costs of an attack on the UK would outweigh any benefits; invulnerability + security of capability = key components of the credibility of our deterrent

‣ Michael Fallon:
- "Deterrence means convincing any potential aggressor that the benefits of an attack are far outweighed by its consequences"
- "To abandon our deterrent now would be an act of supreme irresponsibility"
- "for all the conventional conflicts since [WW2...] there hasn't been major conflict between nuclear armed states. The devastating possibilities of nuclear war have helped maintain strategic stability."

8

Apply act utilitarianism (Bentham) to the issue of nuclear weapons as a deterrent.

‣ Would not take into account previous moral judgements ∵ it judges each moral act separately

‣ Principle of utility
- As nuclear weapons as a deterrent prevent attack, we are currently experiencing the sovereign good of happiness ∵ no nuclear weapons are being used
- We are maximising happiness + maximising the gap between pleasure and pain

‣ Truman applied act utilitarianism when he dropped the atomic bombs on Japan (killing c. 240,000); estimates were of between 500,000 and 1 million deaths had there been a ground invasion

9

Explain how the first element of the hedonic calculus, intensity, supports the use of nuclear weapons as a deterrent.

‣ The strong pleasure of not being attacked (a toy scoring sketch treating all seven elements together follows card 15)

10

Explain how the second element of the hedonic calculus, duration, supports the use of nuclear weapons as a deterrent.

‣ Long-lasting lack of nuclear attack

11

Explain how the third element of the hedonic calculus, certainty, supports the use of nuclear weapons as a deterrent.

‣ Far more certain that we will not be attacked than without them as a deterrent

12

Explain how the fourth element of the hedonic calculus, propinquity, supports the use of nuclear weapons as a deterrent.

‣ Can witness family + friends enjoying the lack of attack in the present

13

Explain how the fifth element of the hedonic calculus, fecundity, supports the use of nuclear weapons as a deterrent.

‣ Continual use as a deterrent = chance of the pleasure never-ending

14

Explain how the sixth element of the hedonic calculus, purity, supports the use of nuclear weapons as a deterrent.

‣ Deterrent = less pain than being attacked

15

Explain how the seventh element of the hedonic calculus, extent, supports the use of nuclear weapons as a deterrent.

‣ The whole world is experiencing nuclear peace
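
Taken together, cards 9-15 treat the hedonic calculus as a rough scoring procedure: rate each of the seven elements for pleasure vs pain, then sum. A minimal sketch of that idea in Python follows; every number in it is invented purely for illustration (Bentham prescribed no numeric scale), and it only shows how the seven-element weighing could be made explicit.

# Toy model of Bentham's hedonic calculus as a scoring exercise.
# Scores run from -10 (intense pain) to +10 (intense pleasure);
# all values below are hypothetical, chosen only to illustrate the method.

CRITERIA = ["intensity", "duration", "certainty", "propinquity",
            "fecundity", "purity", "extent"]

def net_utility(scores):
    """Sum the pleasure-minus-pain scores across the seven elements."""
    return sum(scores[c] for c in CRITERIA)

keep_deterrent = {"intensity": 6, "duration": 8, "certainty": 5,
                  "propinquity": 7, "fecundity": 8, "purity": 4, "extent": 9}
disarm = {"intensity": -3, "duration": -5, "certainty": -4,
          "propinquity": -2, "fecundity": -6, "purity": -5, "extent": -8}

# The act with the greater net utility is the one the calculus favours.
best = max(("keep deterrent", net_utility(keep_deterrent)),
           ("disarm", net_utility(disarm)), key=lambda t: t[1])
print(best)  # -> ('keep deterrent', 47)

On these (invented) scores the deterrent wins on every element, which mirrors the deck's point that all seven parts of the calculus line up behind deterrence once the weapons are never actually used.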

16

Apply rule utilitarianism (Mill) to the issue of nuclear weapons as a deterrent.

‣ Use as a deterrent preserves the higher and lower pleasures ∵ you are not dead from an attack

‣ The harm principle has two interpretations:
1) We are not harming others ∵ it is only a deterrent ∴ it protects society
2) We are harming others ∵ of the potential to use them

‣ Could have two rules:
1) The use of nuclear weapons as a deterrent = good ∵ of ongoing peace
2) The use of nuclear weapons as a deterrent = bad ∵ previous uses caused widespread death and destruction

‣ The potential pain is outweighed by the pleasure of ongoing peace

‣ For weak rule utilitarianism this issue will always be the exceptional case ∵ every use of nuclear weapons is exceptional

17

Give a conclusion to the use of nuclear weapons as a deterrent.

‣ The only feasible reason, from a utilitarian perspective, for not having nuclear weapons as a deterrent = the money could be spent elsewhere to bring large amounts of happiness; the money spent on nuclear weapons = causing pain ∵ others are suffering

‣ However, no other way of spending the money could guarantee worldwide protection in the way that nuclear weapons as a deterrent do

‣ Only true solution = universal nuclear disarmament; but even then, other weapons would still be prevalent
