Instrumental theory
The view that technology is just a neutral tool we use.
Postphenomenology theory
The view that technology changes us and our world by shaping perception, action, and meaning.
Technology as a moral patient
The idea that technology can be the object of moral concern — something that can be treated rightly or wrongly, or be harmed or benefited.
Technology as a moral agent
The idea that technology can participate in moral action by influencing or producing morally significant outcomes, even if it has no intentions.
Inclusive definition of AI
Defines AI broadly — it includes any system that performs tasks requiring intelligence, such as pattern recognition, decision-making, or learning. Even simple algorithms or automation can count as AI.
Exclusive definition of AI
Defines AI narrowly — it only includes systems that show human-like or advanced cognitive abilities, such as reasoning, understanding language, or self-learning. Simple automation or rule-based programs would not count.
Hedonism Principle
The idea that pleasure or happiness is the only thing that is intrinsically good, and pain or suffering is intrinsically bad. Moral actions aim to maximize pleasure and minimize pain.
Consequentialism Principle
The view that the rightness or wrongness of an action depends only on its consequences — good actions produce good outcomes, bad actions produce bad outcomes.
Impartiality Principle
The idea that everyone's happiness or well-being counts equally — no one's interests are more important than anyone else's.
Nozick's Experience Machine
Imagine a machine that can give you any pleasurable experience you want. Once plugged in, you won't know it's not real — you'll feel total happiness and satisfaction forever.
The Experience Machine as a counterargument to hedonism
Hedonism holds that pleasure or happiness is the highest good and the ultimate aim of life, while pain or suffering is bad. Nozick's argument: most people would choose not to plug in, because we value more than just pleasure. We care about:
Actually doing things, not just feeling like we did them.
Being a certain kind of person, not just having pleasurable experiences.
Living in contact with reality, not in an illusion.
Consequentialism
A consequentialist believes that the morality of an action depends entirely on its outcomes or consequences.
Organ Harvest Thought Experiment
Suppose a doctor has five patients who each need an organ transplant to survive. A healthy person walks in for a check-up. If the doctor kills this one healthy person and harvests their organs, all five patients could be saved. The case is a standard objection to consequentialism: the outcome-maximizing act seems clearly wrong.
Burning Building Thought Experiment
Suppose a building is on fire, and you can save either your own child or five strangers inside. The case challenges strict impartiality: most people think saving your own child is permissible.
Impartiality
Impartiality is the principle that everyone's well-being or interests should be considered equally when making moral decisions.
Obligatory actions
Actions that you are morally required to do. Failing to perform them is considered wrong or blameworthy.
Example: Telling the truth, not harming others, paying taxes.
Supererogatory actions
Actions that are morally good but not required. Performing them is praiseworthy, but failing to do them is not wrong.
Example: Donating a large sum to charity, risking your life to save a stranger.
Utilitarianism
The right action is the one that maximizes overall happiness or well-being and minimizes suffering.
Universal Law
Kant's test: ask what would happen if everyone performed this action, and whether a world in which everyone did so could still function. Example maxim: "It is okay to lie whenever it benefits me."
Test: Could this maxim become a universal law?
Problem: If everyone lied when it benefited them, trust would collapse, and the very concept of truth-telling would be impossible.
Conclusion: This maxim cannot be universalized, so lying in this way is morally wrong according to Kant.
Moral Dilemma
A moral dilemma = a situation where you must choose between conflicting moral obligations, with no fully satisfactory option.
Absolute Deontology
Holds that moral rules or duties are always binding, no matter the consequences.
Some actions are categorically wrong, and it is never permissible to violate them.
Example: Kantian ethics — lying is always wrong, even to save a life.
Moderate Deontology
Accepts that moral rules are generally binding, but in extreme situations, breaking them may be morally permissible to prevent catastrophic outcomes.
Balances duties with consequences when necessary.
Example: Lying might be acceptable if it saves many lives.
Trolley Thought Experiment
A runaway trolley is heading down a track toward five people who are tied down; if nothing is done, it will kill them.
Switch version: you can flip a switch to divert the trolley so that it kills only one person.
Footbridge version: you can push someone off a bridge to stop the trolley and save the five.
Utilitarians: pull the switch (see the calculation sketch below).
Deontologists: do nothing.
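A minimal sketch of the utilitarian calculation in the switch case, assuming a toy welfare measure of lives lost (the numbers and action names are illustrative assumptions, not part of the thought experiment's canonical statement):

```python
# Toy utilitarian decision rule: score each available action by the
# net welfare of its outcome and pick the maximum. Welfare is crudely
# measured as negative deaths; all values are illustrative.

def utilitarian_choice(actions):
    """Return the action whose outcome maximizes net welfare."""
    return max(actions, key=lambda a: actions[a])

# Switch version: doing nothing lets five die; flipping the switch
# kills one person instead.
switch_case = {
    "do nothing": -5,
    "pull the switch": -1,
}

print(utilitarian_choice(switch_case))  # -> pull the switch
```

Note that the same arithmetic would also endorse pushing in the footbridge version, which is exactly where utilitarian and deontological verdicts come apart.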
The doctrine of double effect
The Doctrine of Double Effect is a principle in moral philosophy that says:
It can be morally permissible to perform an action that has both a good effect and a bad effect, if:
The action itself is morally good or neutral.
The bad effect is not intended (only foreseen).
The good effect is not achieved by means of the bad effect.
There is a proportionately serious reason for allowing the bad effect.
DDE allows the lever case because the harm is a side effect, not a means.
DDE forbids the footbridge (fat man) case because the harm is used as a means to achieve the good outcome (see the checklist sketch below).
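A hedged sketch of the four DDE conditions as a checklist, applied to the two trolley cases; the boolean encodings of the cases are simplified assumptions for illustration:

```python
# The four DDE conditions as boolean checks: an action is permissible
# under DDE only if all four hold. The case encodings below are
# simplified illustrative assumptions.

def dde_permissible(act_good_or_neutral, harm_intended,
                    harm_is_means, proportionate_reason):
    return (act_good_or_neutral
            and not harm_intended         # bad effect only foreseen
            and not harm_is_means         # good not achieved via the bad
            and proportionate_reason)     # serious enough reason

# Lever case: the one death is a foreseen side effect of diverting
# the trolley, not the means of saving the five.
print(dde_permissible(True, False, False, True))  # True -> allowed

# Footbridge case: the man's death is precisely the means by which
# the trolley is stopped, so the third condition fails.
print(dde_permissible(True, False, True, True))   # False -> forbidden
```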
Non-neutrality of technology thesis
Technology is never truly neutral. It shapes how we think, act, and organize society, rather than merely serving human goals.
Smartphones and social media:
They are not just tools for communication.
They change how we interact, form relationships, and even perceive attention, self-worth, and identity.
Society becomes structured around constant connectivity and digital validation — showing that technology shapes behavior and culture.
The printing press and the clock as two examples of the unpredictable impact of technology
Intended purpose: printing press
Make books easier and cheaper to produce.
Unpredictable impacts:
Spread of literacy and education on a massive scale.
Facilitated the Reformation and religious upheaval.
Enabled the scientific revolution by allowing ideas to circulate widely.
Reshaped political power, as knowledge was no longer controlled by elites.
Intended purpose: clock
Help monks keep regular hours of prayer in medieval monasteries.
Unpredictable impacts: clock
Changed people's sense of time — time became quantified, standardized, and externalized.
Influenced the rise of industrial capitalism, as punctuality and schedules became socially and economically important.
Technological dualism
Technological dualism is the idea that technology itself is neutral, and its moral or social effects depend entirely on how humans use it.
Technological ecology
Technological ecology is the study of how technologies interact with each other and with society, forming complex systems that shape human life, culture, and the environment.
Technologies do not exist in isolation; they affect and are affected by other technologies, social practices, and cultural norms.
Changes in one technology can have ripple effects across many areas of life.
Tool-using cultures, technocracies, and technopolies
Tool-using cultures use technology as a controlled tool, technocracies rely on technology and its experts to guide society, and technopolies are dominated by technology, which shapes culture, values, and human behavior.
Technological determinism
Technological determinism is the idea that technology is the primary driver of social, cultural, and historical change, shaping human behavior, institutions, and values.
Technology is seen as autonomous, influencing society independently of human choices or intentions.
Society and culture adapt to technology, rather than technology being shaped by social needs.
Value Sensitive Design
Value Sensitive Design is an approach to technology design that integrates human values into the design process from the start.
It considers ethical, social, and cultural values alongside functionality.
Goal: Create technologies that promote well-being, fairness, privacy, and other important values.
Challenges:
Identifying Relevant Values:
Different stakeholders may have conflicting values.
Determining which values are most important is complex.
Translating Values into Design Requirements:
Abstract values like fairness or autonomy are hard to operationalize in concrete technical features.
Balancing Values and Trade-offs:
Some values may conflict with each other or with functionality (e.g., privacy vs. personalization).
Designers must make difficult ethical trade-offs.
The AI responsibility gap
The AI responsibility gap refers to situations in which autonomous AI systems cause harm or make morally significant decisions, but it is unclear who is morally or legally responsible for the outcomes.
Traditional notions of responsibility assume human control and intention.
With AI systems making decisions independently, accountability becomes ambiguous.
The prima facie appeal
The prima facie appeal refers to the initial, surface-level attractiveness of using algorithms to make decisions, before considering potential ethical or practical problems.
Why it seems appealing:
Efficiency: Algorithms can process large amounts of data quickly.
Consistency: Decisions are uniform and not subject to human biases like fatigue or mood.
Predictability: Algorithms follow clear rules and can be audited (in principle).
Objectivity: Decisions appear impartial because they are based on data rather than personal judgment.
The basic problem of algorithmic bias and Algorithmic Matthew Effects
Algorithmic bias occurs when an AI or algorithm systematically produces unfair or discriminatory outcomes.
The Algorithmic Matthew Effect is the phenomenon where "the rich get richer and the poor get poorer" in algorithmic systems.
Algorithms can amplify existing advantages or disadvantages, giving more opportunities to those who are already ahead (see the toy simulation below).
Named after the biblical principle: "For to everyone who has, more will be given, and he will have abundance; but from him who has not, even what he has will be taken away."
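A toy simulation of the feedback loop behind the Matthew Effect, assuming a hypothetical popularity-proportional recommender (all numbers are illustrative, not data from any real system):

```python
# Toy rich-get-richer loop: exposure is allocated in proportion to
# existing popularity, so popularity feeds on itself. Each
# recommendation gives the chosen item another unit of popularity.
import random

popularity = [10, 11]  # two items, nearly equal at the start

for _ in range(10_000):
    total = sum(popularity)
    # Recommend an item with probability proportional to its current
    # popularity ("to everyone who has, more will be given").
    chosen = 0 if random.uniform(0, total) < popularity[0] else 1
    popularity[chosen] += 1

print(popularity)
# The absolute gap between the items grows over time, and the final
# shares are highly sensitive to early luck, even though the items
# started out almost identical.
```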
The comparison between human bias and algorithmic bias in decision-making
Human bias is personal and situational, whereas algorithmic bias is systematic, scalable, and can unintentionally magnify social inequalities.
Machines of Loving Grace: techno-optimism (Amodei)
Powerful AI: Systems that outperform humans in many domains.
Marginal returns to intelligence: Each increase in AI intelligence gives smaller performance gains.
Five limiting factors:
Speed of the physical world
Need for data
Intrinsic problem complexity
Human constraints
Physical laws
Optimism in biology/health: AI can accelerate discoveries (e.g., AlphaFold predicting protein structures), potentially compressing decades or centuries of progress into a shorter time.
Techno-optimism: an analysis (Danaher)
Techno-optimism (Danaher): Optimism about technology can be based on current benefits (presentist, e.g., Pinker: less disease, poverty, violence) or future possibilities (futurist, e.g., transhumanist enhancements). Justification comes from either the preponderance of evidence or counterfactual reasoning (the world would be worse without technology). Pessimism is not the same as fatalism, cynicism, or nihilism. Techno-optimism faces five main objections: overconfidence, unintended consequences, unequal distribution, neglect of values, and uncertainty about the future.
Bostrom's vulnerable world hypothesis
Bostrom's Vulnerable World Hypothesis: Some technologies threaten civilization (black balls), some are risky but manageable (grey balls), and some are safe (white balls). Because the world is semi-anarchic (no global enforcement), black-ball technologies are extremely dangerous. Bostrom's remedy: restrict access, monitor globally, and strengthen international cooperation to prevent catastrophic misuse.
The Machine Stops: techno-pessimism (E. M. Forster)
The Machine Stops depicts a society where technological worship, isolation under virtual connectivity, and complete dependence on machines undermine human values, relationships, and agency, culminating in disaster when the Machine collapses.
Existential risks
Existential risks are threats that could destroy or permanently limit humanity's potential, and can be civilizational vs. suffering, technological vs. natural, or direct vs. indirect.
Three different types of AI risk: accidental, structural, and misuse risk
Accidental Risk
Definition: Risks that arise when an AI behaves in unintended or unpredictable ways, even if designed with good intentions.
Example: A superintelligent AI optimizes for a goal (like manufacturing paperclips) in a way that harms humans unintentionally.
Structural Risk
Definition: Risks that emerge from how AI is integrated into social, economic, or political systems, leading to systemic vulnerabilities.
Example: AI controlling financial markets or critical infrastructure could cause widespread instability if it malfunctions or interacts poorly with other systems.
Misuse Risk
Definition: Risks caused by humans intentionally using AI for harmful purposes.
Example: Governments or actors using AI for autonomous weapons, mass surveillance, or cyberattacks.
The control problem argument for AI Xrisk
The control problem highlights that superintelligent AI could pursue goals harmful to humanity (orthogonality), often converging on dangerous sub-goals (instrumental convergence), and aligning AI with human values is extremely difficult (value alignment problems). Even "dumb" AI may be risky, so careful control is essential.
Surveillance capitalism
Surveillance capitalism = profiting from predicting and shaping human behavior using personal data, a novel shift from traditional product-based capitalism.
Is surveillance capitalism inevitable?
Surveillance capitalism is a choice, not a technological inevitability; technology could be deployed in ways that respect privacy, consent, and social well-being if human values and governance guide it.
Surveillance capitalism vs. industrial capitalism
Both aim for profit, but industrial capitalism exploits labor and physical goods, while surveillance capitalism exploits data and behavior, using algorithms to predict and shape human actions.
For and against ad blocking
For: protects privacy, security, and experience; Against: undermines revenue for content creators and the sustainability of free content.
The distinction between techno-feudalism and surveillance capitalism
Surveillance capitalism profits from data-driven behavior prediction, whereas techno-feudalism profits from platform control and user dependence, creating a digital hierarchy resembling feudal relationships.
Digital fiefdoms and serfs
Digital fiefdoms = platforms controlling the digital "land", digital serfs = users and smaller actors dependent on these platforms for access and survival.
Negative impacts of techno-feudalism
Techno-feudalism undermines democracy by enabling manipulative microtargeting, powerful corporate lobbying, and regulatory capture, concentrating political influence in the hands of dominant digital platforms rather than the public.
Proposed remedies
Remedies include regulatory frameworks (GDPR, DSA, DMA, AI Act), open data policies to reduce monopoly power, and civil society activism to promote accountability, transparency, and fairness in the digital ecosystem.