Ethical problems for AI and robots
Are robots stealing our jobs?
Should we worry about super-intelligence and the singularity?
How should we treat robots?
Should robots become our friends and assistants?
Should robots/AI systems be allowed to kill?
Should we use so much energy for LLMs?
Differentiation between AI Ethics and Machine Ethics
AI Ethics: Ethics for robotics, Responsible AI
Machine Ethics:
Robots that decide with their own rules
Rules can be explicitly defined, or robots can reason for themselves about what is right or wrong
AI Ethics definition
AI Ethics is a set of values, principles and techniques that employ widely accepted standards of right or wrong to guide moral conduct in the development and use of AI technologies
To understand ethical implications in the design of safe, acceptable and ethical robots
Machine Ethics definition
An ethical machine is guided by its own, intrinsic ethical rule, or set of rules, in deciding how to act in a given situation.
Philosophical Theories for AI ethics
Deontological ethics
Utilitarianism (consequentialism)
Virtue ethics
Deontological ethics
Kant: Responsibility of individual to discover the true moral law for themselves
Any true, moral law is universally applicable
e.g. Asimov’s Laws of Robotics
Applications to robot ethics
What are the right rules?
How are rules applied to decisions?
What design rules to achieve our desired social goals?
An action is right if it is in accordance with a moral rule or principle.
In practice, there is typically no purely deontological approach
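The deontological principle above (an action is right iff it accords with the moral rules) can be sketched as a simple rule filter. This is a minimal illustration, not from the lecture; the rules and actions are hypothetical stand-ins in the spirit of Asimov's Laws.

```python
# Sketch of a deontological (rule-based) action filter: an action is
# permitted only if it violates none of the moral rules.
# Rules and actions below are illustrative assumptions.

def permitted(action, rules):
    """An action is right iff it is in accordance with every rule."""
    return all(rule(action) for rule in rules)

# Hypothetical rules in the spirit of Asimov's Laws:
no_harm_to_humans = lambda a: not a.get("harms_human", False)
obey_orders = lambda a: a.get("ordered", True)

rules = [no_harm_to_humans, obey_orders]

actions = [
    {"name": "assist", "harms_human": False, "ordered": True},
    {"name": "push",   "harms_human": True,  "ordered": True},
]

allowed = [a["name"] for a in actions if permitted(a, rules)]
print(allowed)  # ['assist']
```

This also makes the open questions concrete: which rules go in the list, in what priority order, and how actions get described so the rules can apply to them.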
Utilitarianism
Also called Consequentialism
What is the greatest possible good (moral law) for the greatest number of people?
Game theory-adjacent
Utility: Proxy for individual goodness
Utilitarian Calculus compares sum of individual utility (positive or negative) over all people in society
Utility: Representation of the individual agent’s preferences
Rationality: selecting actions maximising expected utility
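The utilitarian calculus described above can be sketched in a few lines: sum individual utilities over all members of society for each candidate action, then rationally select the action with the greatest total. The utility numbers here are illustrative assumptions.

```python
# Sketch of the utilitarian calculus: pick the action whose summed
# utility (positive or negative) over all people is greatest.
# All numbers are made-up illustrations.

def social_utility(utilities):
    """Sum of individual utilities across everyone in society."""
    return sum(utilities)

# Hypothetical utility each action yields for three members of society:
actions = {
    "act_a": [+2, +2, -1],   # total +3
    "act_b": [+5, -1, -1],   # total +3
    "act_c": [+1, +1, +2],   # total +4
}

# Rationality: select the action maximising (expected) total utility.
best = max(actions, key=lambda a: social_utility(actions[a]))
print(best)  # act_c
```

Note that act_a and act_b tie on total utility despite distributing it very differently across individuals, which is exactly the kind of aggregation the calculus glosses over.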
Virtue Ethics
Local norms
Organised around developing habits and dispositions that help a person achieve goals (reflecting on rules)
Phronesis (moral prudence, practical wisdom)
Ability to evaluate a given situation and respond fittingly
Developed through both education and experience
Dominant approach
An action is right if it is what a virtuous agent would do in the circumstances.
Frank and the robot
AI Ethics Potential Harms
Bias and discrimination
Denial of individual autonomy and rights
Non-transparent, unexplainable, or unjustifiable outcomes
Invasions of privacy
Isolation and disintegration of social connection
Unreliable, unsafe, or poor-quality outcomes
Job losses/changes
Gender bias in search engines
Image searches for “CEO” return a smaller proportion of women than the actual share of women CEOs
Race bias: COMPAS, a recidivism-prediction system used in Florida
Predicted higher reoffending risk for Black defendants
Trolley Problem Bias
Cultural differences for trolley problem
So what is the right solution to the trolley problem? Should it depend on the target culture?
Ethical impact agents
Any machine that can be evaluated for its ethical consequences
Implicit ethical agents
Machines that are designed to avoid unethical outcomes
Explicit ethical agents
Machines that can reason about ethics
Full ethical agents
Machines that can make explicit moral judgements and justify them
Solutions: Responsible AI
RRI: Responsible Research and Innovation
Involve society in science and innovation very early in the process, to align outcomes with the values of society
Public engagement, open access, gender equality, science education, ethics, governance
Solutions: Explainable AI (XAI)
Aims:
Produce explainable models
Enable human users to understand, trust and effectively manage AI systems