CR Lecture 12 - Ethics for AI and Robotics



18 Terms

1

Ethical problems for AI and robots

  • Are robots stealing our jobs?

  • Should we worry about super-intelligence and the singularity?

  • How should we treat robots?

  • Should robots become our friends and assistants?

  • Should robots/AI systems be allowed to kill?

  • Should we use so much energy for LLMs?

2

Differentiation between AI Ethics and Machine Ethics

AI Ethics: Ethics for robotics, Responsible AI

Machine Ethics:

  • Robots that decide with their own rules

  • Rules can be predefined, or robots can reason logically for themselves about what is right or wrong

3

AI Ethics definition

AI Ethics is a set of values, principles, and techniques that employ widely accepted standards of right and wrong to guide moral conduct in the development and use of AI technologies.

Goal: to understand the ethical implications of designing safe, acceptable, and ethical robots.

4

Machine Ethics definition

An ethical machine is guided by its own intrinsic ethical rule, or set of rules, in deciding how to act in a given situation.

5

Philosophical Theories for AI ethics

  1. Deontological ethics

  2. Utilitarianism (consequentialism)

  3. Virtue ethics

6

Deontological ethics

  • Kant: it is the responsibility of each individual to discover the true moral law for themselves

  • Any true, moral law is universally applicable

    • e.g. Asimov’s Laws of Robotics

  • Applications to robot ethics

    • What are the right rules?

    • How are rules applied to decisions?

    • What design rules achieve our desired social goals?

An action is right if it is in accordance with a moral rule or principle.

In practice, few approaches to robot ethics are purely deontological.
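The deontological idea — an action is right iff it accords with every moral rule — can be sketched in code. This is an illustration, not from the lecture; the Asimov-style rule names and the action attributes are hypothetical:

```python
# Sketch of a deontological decision procedure: an action is permissible
# only if it violates none of a fixed, universal set of rules.
# Rules and action attributes below are illustrative (Asimov-style).

RULES = [
    ("do not harm a human",         lambda a: not a.get("harms_human", False)),
    ("obey human orders",           lambda a: not a.get("disobeys_order", False)),
    ("protect your own existence",  lambda a: not a.get("self_destructive", False)),
]

def permissible(action: dict) -> bool:
    """An action is right iff it is in accordance with every moral rule."""
    return all(check(action) for _, check in RULES)

print(permissible({"harms_human": False}))  # True
print(permissible({"harms_human": True}))   # False
```

Note that this sketch immediately exposes the open questions above: someone still has to choose the right rules and decide how they map onto concrete decisions.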

7

Utilitarianism

Also called Consequentialism
What is the greatest possible good (moral law) for the greatest number of people?

Game theory-adjacent

Utility: Proxy for individual goodness

  • Utilitarian Calculus compares sum of individual utility (positive or negative) over all people in society

Utility: Representation of the individual agent’s preferences

Rationality: selecting actions that maximise expected utility
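The utilitarian calculus above can be sketched as code: sum individual utilities over everyone affected, weight by outcome probabilities, and pick the action with the highest expected social utility. The actions and numbers here are made up for illustration:

```python
# Sketch of utilitarian (consequentialist) choice: select the action that
# maximises the probability-weighted sum of individual utilities over
# all people affected. All values are illustrative.

def expected_social_utility(outcomes):
    """outcomes: list of (probability, per-person utilities) pairs."""
    return sum(p * sum(utils) for p, utils in outcomes)

actions = {
    # action -> possible outcomes: (probability, [utility for each person])
    "swerve":   [(0.9, [1, 1, -5]), (0.1, [0, 0, 0])],   # expected: -2.7
    "straight": [(1.0, [-3, -3, 2])],                     # expected: -4.0
}

best = max(actions, key=lambda a: expected_social_utility(actions[a]))
print(best)  # swerve
```

The sketch also shows why utilitarianism is game theory-adjacent: it reduces moral choice to maximising an expected-utility objective.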

8

Virtue Ethics

  • Local norms

    • Organised around developing habits and dispositions that help a person achieve goals (reflecting on rules)

  • Phronesis (moral prudence, practical wisdom)

    • Ability to evaluate a given situation and respond fittingly

    • Developed through both education and experience

  • Dominant approach

An action is right if it is what a virtuous agent would do in the circumstances.

Example: the film Robot & Frank

9

AI Ethics Potential Harms

  • Bias and discrimination

  • Denial of individual autonomy and rights

  • Non-transparent, unexplainable, or unjustifiable outcomes

  • Invasions of privacy

  • Isolation and disintegration of social connection

  • Unreliable, unsafe, or poor-quality outcomes

  • Job losses/changes

10

Gender bias on search engine

Searching for “CEO” shows a smaller proportion of women in image results than the actual proportion of women among CEOs

11

Race bias: COMPAS (Florida criminal justice risk-assessment system)

Predicted higher reoffending risk for Black defendants

12

Trolley Problem Bias

Responses to the trolley problem differ across cultures.
So what is the right solution to the trolley problem? Should it depend on the target culture?

13

Ethical impact agents

Any machine that can be evaluated for its ethical consequences

14

Implicit ethical agents

Machines that are designed to avoid unethical outcomes

15

Explicit ethical agents

Machines that can reason about ethics

16

Full ethical agents

Machines that can make explicit moral judgements and justify them

17

Solutions: Responsible AI

RRI: Responsible Research and Innovation

  • Involve society in science and innovation very early in the process, to align outcomes with the values of society

    • Public engagement, open access, gender equality, science education, ethics, governance

18

Solutions: Explainable AI (XAI)

Aims:

  • Produce explainable models

  • Enable human users to understand, trust and effectively manage AI systems