A set of Question and Answer flashcards covering Railton's deep-feature view, developmental ethics, the Switch vs Footbridge trolley problems, and implications for AI ethics.
What does Railton's 'deep feature view' claim about ethical judgment and intelligence?
Ethical judgment is a deep feature of general intelligence, not just rule application; it develops from infancy and relies on two capacities: attunement to others' interests and the ability to evaluate novel social situations.
What are the two capacities central to the deep-feature view?
Attunement to and prediction of others' interests in the social group; ability to evaluate novel social situations without explicit instruction.
What are 'Priors' in Railton's developmentalism in ethics?
Two epistemic features present early in learning, curiosity and trust, which form the foundation of ethical development.
What two early accomplishments follow curiosity and trust?
Intuitive Physics (how cause and effect work) and Intuitive Psychology (theory of mind).
What is the Deep-Feature View in infant learning and ethics?
Infant learning builds an epistemic map of the world that simultaneously develops an intuitive ethics, requiring attunement to others' interests and the ability to evaluate new social situations.
What is Railton's view about ethical judgment in adults?
Sound ethical judgment is largely intuitive rather than rule-based; illustrated by trolley problems like Switch and Footbridge.
How does Railton explain the Switch vs Footbridge asymmetry in moral judgments?
The asymmetry reflects our ideals about the kinds of people we want to be and the trust we place in others, not just pure rule-following.
What is Railton's hypothesis about moral intuitions and the asymmetry?
Moral intuitions express who we want to be and how much we value trust, rather than being fully captured by utilitarian or deontological rules.
What paradigm shift does Railton propose for AI ethics?
Treat AI as infants learning from data, rather than as systems to be hard-coded with fully formed rules.
How are AI systems described in the alternative paradigm?
We are teachers rather than technicians; the aim is to create reliable social partners (not just tools); LLMs are like 'little babies' learning from data.
What abilities should we cultivate in AI priors?
Curiosity and trust/trustworthiness, enabling the AI to model and collaborate within social contexts.
What are the potential dangers of giving AI social priors?
The risk of creating artificial psychopaths or Machiavellians within AI systems.
Which two anti-social types are discussed as lacking this capacity?
Psychopaths and Machiavellians.
What role does trust play in ethical intuitions?
Trust underpins moral judgments; people may trust someone who reports a Switch action but not someone who reports a Footbridge action.
In the Switch scenario, what may one feel and how might the family react?
One might feel guilt but expect the family to understand.
In the Footbridge scenario, what may one feel and how might the family react?
One might feel shame and not expect the family to understand (unless the person is a psychopath).