Philosophy Exam 2



52 Terms

1
New cards

According to Chapter 7, “Where Does Morality Come From?” of Ethics: You’re Doing It Wrong!, Voltaire was wrong because:

There is more than one geometry, and each applies to different things, but that does not mean that either geometry is mistaken; it just means there is another way to solve the application. Moral truths, like mathematical truths, would be true even if no creatures or anything else existed.

2
New cards

What are two replies (arguments) to Singer’s arguments for ethical vegetarianism?

replaceability argument & property argument

3
New cards

What is the objection to unqualified speciesism concerning humans and persons?

The objection rests on the distinction between humans and persons: there can be persons that are not human, and there can be humans that are not persons.

4
New cards

What is the view of unqualified speciesism?

That membership in a species is itself morally relevant, and in particular that being human is morally relevant.

5
New cards

What was Singer suspicious of, and why?

Medical studies done on non-human animals.

He thinks they are unreliable because they depend on extrapolation from animals to humans.

6
New cards

effective altruism

  • a life goal should be to create as much good as possible

  • arrange our lives to maximize the overall good

7
New cards

In Chapter 8, “Why Be Moral?” of Ethics: You’re Doing It Wrong!, what is the point of the example of the $250,000?

The point of the $250,000 example is that we explain other people’s actions by their upbringing or personality instead of by the situation that caused the action.

We see our own behavior as a reaction to the dilemma we are in, but we read another person’s actions as showing the type of person they are.

8
New cards

According to “A Short Introduction to the Ethics of Artificial Intelligence” by S. Matthew Liao, what are the positive and negative sides of using facial recognition?

On the positive side, using facial recognition to track anyone who enters a public space can help police find criminals more quickly and identify missing or kidnapped children.

On the negative side, the government could use this technology to monitor its citizens or discriminate against minorities.

9
New cards

The point of the example of Gradual Substitution was to show that:

even if you gradually replace every defective part, in the end you still might not be your old self, even if you feel like you are.

10
New cards

According to Chapter 1, “All Animals Are Equal….” of Peter Singer’s Animal Liberation Now, among whom is there no longer a serious dispute, and what is it about?

There is no longer any serious dispute among scientists that at least some non-human animals feel pain and experience other conscious states, both positive and negative.

11
New cards

According to Chapter 2, “Tools for Research…” of Peter Singer’s Animal Liberation Now, what is the ethical dilemma that researchers face?

Trying to balance the benefits of animal experimentation against the right of animals to be free from suffering.

12
New cards

According to Chapter 3, “Down On the Factory Farm….” of Peter Singer’s Animal Liberation Now, boars kept for breeding are kept permanently in ______.

pens

13
New cards

Where does morality come from?

Nowhere; morality cannot be created. It exists independently of us, like logical truths.

14
New cards

What are the psychological and philosophical interpretations of the question “Why be moral?”

Psychological question: What motives might someone have for doing the right thing rather than the wrong thing?

Philosophical question: What justifies doing the right thing?

15
New cards

Ring of Gyges:

An earthquake opened a hole in the ground, where Gyges found a bronze horse with doors and, inside it, a ring. When he turned the ring a certain way he became invisible, and he used that power to kill the king, seduce the queen, and become king himself.

16
New cards

What was the point of the Gyges example?

Why do the right thing if you can get away with doing the wrong thing to your advantage, with no consequences?

17
New cards

Regarding Mill’s answer to the Ring of Gyges argument, what did he say are the two motivations for doing the right thing?

external rewards and punishments

internal rewards and punishments

18
New cards

What are external rewards and punishments, and what are the two problems for them?

ERP: imposed on us by others when they notice our behavior. Example: putting someone in prison for committing a crime.

Problem 1: External punishments can be avoided, as with criminals who convince themselves they can get away with a crime.

Problem 2: The coercion problem. For example, convincing people that God exists and will punish them in the afterlife makes them act out of self-interest, which means the motivation is coercive.

19
New cards

Internal rewards and punishments:

We impose these on ourselves. Example: donating to charity out of self-interest, to avoid feeling guilty, or choosing not to steal candy from a baby because doing so would make you feel guilty.

20
New cards

What is the distinction between Artificial General Intelligence and Artificial Special Intelligence?

AGI: can learn a wide variety of things like humans can

ASI: can do just one thing, sometimes better than humans

21
New cards

Future potential applications of AI:

Predictive policing (predicting when and where future crimes will occur)

Assessing recidivism rates: the rates at which criminals are likely to relapse into crime

Future predictions, such as an infant’s likelihood of committing crimes

22
New cards

Potential Evils of AI

Privacy Issues

Intrusiveness

Fraud

Biased AI

23
New cards

According to the estimate mentioned in class, replacing all regular cars with self-driving cars reduces fatalities by __.

90%

24
New cards

What are some advantages of self-driving cars with regard to reduced costs?

Cheaper to insure because they are safer

Practical to share cars

25
New cards

Isaac Asimov’s Three Laws of Robotics

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm. (First Law)

  • A robot must obey orders given to it by human beings except where such orders would conflict with the First Law. (Second Law)

  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. (Third Law)

26
New cards

What are some problems with Isaac Asimov’s Three Laws of Robotics?

Problem with the First Law: It is impossible to carry out; a robot cannot prevent all harm to humans or save everyone.

Problem with the Second Law: A robot can get conflicting orders from the same or different humans.

Problem with the Third Law: It assumes robots are always less valuable than humans.

27
New cards

Explain the trolley problem:

You are the driver of a trolley whose brakes have unexpectedly given out. Ahead are five workers who will be killed if you continue, but you notice a spur leading to a track with only one worker.

28
New cards

Four conditions that must hold to apply the DDE (doctrine of double effect):

  1. The act itself must be morally good or indifferent.

  2. The person may not positively intend the bad effect but may foresee and permit it; if the good could be obtained without the bad effect, it should be.

  3. The good effect must flow from the action at least as immediately as the bad effect.

  4. The good effect must be sufficiently desirable to compensate for allowing the bad effect.

*DDE can solve the trolley example because all four conditions are met.

29
New cards

Explain the closeness problem:

A person’s intentions can almost always be described in a way that eliminates any intention to do wrong, so you can get almost any result you want from DDE, making it arbitrary.

30
New cards

Explain the bridge example:

A runaway trolley is headed toward five workers and will kill them if nothing is done. You and a tourist are standing on a bridge over the tracks; you could push the tourist off the bridge to stop the trolley.

31
New cards

Explain the wildflower trolley example:

The trolley will kill a wildflower unless you divert it, in which case it will kill a person.

32
New cards

(KLD) The killing and letting die principle:

It is always worse to kill a person than to let a person die.

Applied to the trolley problem: the driver kills either way (one or five), so the driver must kill the one in order to save the five.

33
New cards

Two objections to the KLD argument:

The bystander at the switch: the bystander can kill one or let five die, and KLD implies letting the five die, which seems wrong.

The trampoline case: is withdrawing a trampoline from beneath a falling person killing them or letting them die?

34
New cards

What are ways that ChatGPT undermines learning?

  • It is easier to type a question into ChatGPT than to find the answer in class notes or the texts (and ChatGPT could get the answer wrong). Because you spend less time looking at the notes and texts, you don’t learn them as well; learning is inhibited.

  • ChatGPT can write papers for people, which means students don’t learn how to write papers; if they don’t learn how to write papers, they don’t learn how to express themselves clearly in writing.

  • People use ChatGPT to write computer code, so they don’t learn how to write it or check it themselves.

  • People use ChatGPT for anything they can, such as homework, tests, etc.

35
New cards

Cheaters tend to violate rules after school; what rules do they tend to violate?

cheat on taxes

steal

engage in substance abuse

unethical work behavior and end up being crappy people

36
New cards

Information to know about natural and artificial:

  • Bias favors the natural (people pay more for “natural”)

  • Some prefer natural drugs to artificial ones even if they are less safe

  • “Natural” suggests a lack of additives and processing in food and drugs

  • Whether something (artificial or natural) is better or worse depends on how you define those terms.

37
New cards

Primary values of an education:

enables you to lead a richer life

38
New cards

Ethics and Animals: According to most estimates how many rats and mice are used in labs in the US per year?

10 million to 11 million

39
New cards

Ethics and Animals: the testing of pesticides requires the use of approximately:

10,000 animals

40
New cards

How many hogs and pigs are on US farms?

74.6 million

41
New cards

Only _____% of chickens are cage-free

38

42
New cards

What causes illnesses in milk cows?

dehorning/disbudding

tail docking

feed that causes illness

43
New cards

Singer’s NET: hedonistic utilitarianism

no distinction between higher and lower pleasures

44
New cards

Principle of Equality:

everyone’s well-being counts equally

does this include animals?

45
New cards

What are two reasons given for excluding animals from utilitarian calculations?

differences in intelligence

differences in moral capacities (animals are not moral agents)

46
New cards

Three arguments made for ethical vegetarianism:

  • The argument from unnecessary animal pain: many animals suffer on factory farms

  • Fish can feel pain

  • Oysters cannot feel pain

47
New cards

Replaceability Argument

You can’t know for sure that animals on factory farms live positive lives, so you can’t claim that having such farms increases the overall good more than not having them.

48
New cards

Property Argument

The claim that we own animals, so ownership gives us the right to torture them.

The response to this argument: just because we own animals, that doesn’t give us the right to torture them.

49
New cards

conscientious omnivores

shouldn’t make animals suffer unnecessarily - harm is now unjustifiable

50
New cards

Singer’s view on animal experimentation

Experiments serving no direct and urgent purpose should be stopped, and animals should be replaced when possible with non-animal alternatives.

If an experiment on a mouse would save 1,000 lives, should you do it? Singer says yes.

If we are willing to experiment on an animal, we should be willing to do the same to a mentally challenged orphan in cases where the animal is of more value.

51
New cards

Unqualified Speciesism

membership in a species itself is morally relevant (being human)

52
New cards

What are other objections to unqualified speciesism?

Species membership is not always relevant when ascribing moral rights.

Genetics is not a morally distinguishing feature.