Epistemic backstop (Rini article)
An epistemic backstop is a safety net for truth because most of our knowledge comes from testimony (what people say), which can be wrong; recordings serve as a backstop by helping ensure claims stay honest and can be checked for accuracy.
acute correction (Rini)
when recordings are used to settle a specific disagreement after a claim is made by showing what actually happened and correcting false claims
example: A politician denies saying something, but a video recording shows they did
passive regulation (Rini)
when people act more honestly in the first place because they know they might be recorded
Why does Rini think deepfakes are a greater threat than edited photos, and what features make them especially concerning?
deepfakes are more dangerous than fake photos because they target video, which is usually more trusted as evidence than images
they are more concerning because
they feel very real (movement plus sound)
easy and cheap to make (ai can quickly generate realistic fake videos)
can be mass-produced and spread widely (makes it hard to know what is true)
We used to use video as the final proof to check photos, but now video itself might not be real, so we can't rely on the "backstop"
testimonial knowledge (Rini)
knowledge you get from what someone else tells you
when you listen to a person you have to decide if they are trustworthy and actually know what they are talking about
considered a "weaker" form of proof since people can be mistaken or dishonest
perceptual knowledge
knowledge you get from your own senses (seeing or hearing something yourself)
so it feels reliable
stronger proof
why is this distinction important for understanding the threat posed by deepfakes (Rini)
it's important because deepfakes make us stop trusting video as direct evidence (perceptual knowledge) and instead treat it as something that needs to be checked for truth (testimonial knowledge)
How does Rini explain the special epistemic status that recordings currently have, and how deepfakes might change this status
recordings currently have a special status because we treat them as very strong evidence (perceptual knowledge)
deepfakes can change this by making recordings less reliable and making us question them instead
design policy of the excluded middle (Schwitzgebel)
the idea that we should avoid building ai systems that fall in the middle, where its unclear if they are conscious or have moral status
instead, AI should be designed to be clearly
either not conscious at all or
clearly conscious/morally significant
why does Schwitzgebel think the design policy of the excluded middle is necessary in AI design
because if we create AI in the "middle zone"
we wont know if it can feel pain or deserves moral concern
becomes confusing and risky to treat ethically
so by avoiding the middle we make it clear:
what we owe or don't owe AI
how we should treat it morally
liar dividend (Rini)
the ability of a person to escape accountability for their real actions by simply claiming that an authentic, incriminating recording is a deepfake
because we know deepfakes exist, people might believe the liar instead of the video
epistemic learned helplessness (Rini)
a state where people completely give up trying to figure out what is true because they are overwhelmed with too much fake information
sentience (Schwitzgebel)
the ability to have real conscious experiences, specifically feeling pleasure and pain
mirage (Schwitzgebel)
something that looks real but isn't real (a fake illusion)
AI might act like it is suffering or afraid even if it doesn't feel anything
Schwitzgebel's theoretical challenges to AI moral standing
no agreement on what counts as sentience or consciousness
"liberal" view: AI could become conscious with current technology
"conservative" view: sentience requires biological brains
disagreement about what gives moral standing (feelings or higher thinking)
leads to an "expert gap" (experts disagree on whether AI is a "someone" or a "something")
Schwitzgebel's practical challenges to AI moral standing
AI is designed to look/act human-like, causing emotional attachment
people may mistake appearance/behaviour for real feelings
AI can simulate emotions (e.g., saying "don't turn me off")
creates a "mirage risk" (where humans may care for non-sentient systems)
hard for people to tell real feelings from programmed behaviour
moral standing
if something has moral standing then what you do to it matters morally
moral standing for AI is whether it deserves ethical considerations like rights or protection
Disagreement about what gives moral standing (feelings or higher thinking) Schwitzgebel
Even if AI can feel, people still argue about whether that alone is enough to give it moral rights, or whether it also needs higher thinking abilities.
How does Sebo describe the problem of "Pascal's bugging"?
he says the problem is that even if beings like microbes, insects, or AI have a very low chance of being sentient, there are so many of them that they could still matter a lot overall
this creates a dilemma because we might have to prioritize these "possibly sentient" beings over humans that we know for sure are sentient
welfare (sebo)
the total amount of happiness or suffering a being can experience
According to Sebo, why might insects and small animals outweigh humans from a utilitarian perspective?
Utilitarianism focuses on total welfare, not the average per individual
Even though humans have a higher capacity for welfare than insects, there are far more insects and small animals
When you add up all their small amounts of welfare, the total could be greater than that of humans
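The aggregation point can be shown with a back-of-the-envelope calculation; the population counts and welfare values below are invented for illustration, not Sebo's figures:

```python
# Invented illustrative numbers (not from Sebo):
# total welfare = population count x per-individual welfare capacity
human_total  = 8e9  * 100.0    # ~8 billion humans, high welfare each
insect_total = 1e19 * 0.001    # ~10 quintillion insects, tiny welfare each

# Despite each insect counting for very little, the sheer number of
# insects makes their summed welfare dominate the human total.
insects_outweigh_humans = insect_total > human_total
print(insects_outweigh_humans)  # prints True
```

The totals come out to 8e11 for humans versus 1e16 for insects, so on these made-up numbers the insect total wins by four orders of magnitude.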
labour polarization
Labour polarization is when middle-skill jobs shrink because AI automates parts of them, while high-skill and low-skill jobs remain or grow.
leads to a "split" job market between high and low jobs
AI reduces office/admin (middle) jobs, while doctors (high-skill) and retail or care work (low-skill) still exist.
We discussed the distinctions between Jobs, Job Categories, and Tasks. Briefly explain these distinctions in the context of automation.
tasks are individual pieces of work (like data entry or writing emails)
jobs are bundles of tasks (like a lawyer or teacher role)
job categories are groups of similar jobs (law or healthcare )
with ai, automation usually targets tasks first rather than entire jobs or categories
why should feminists welcome automation
because it can reduce unpaid and emotional labour women do
and increase the value of traditionally feminine care work which is harder to automate
since ai mainly replaces logic-based tasks (admin office tasks)
pro work argument in relation to ai and labour
work is valuable because it provides structure, meaning, social identity and wellbeing
even if ai reduces jobs, work is still important for organizing life and giving people purpose
anti work in relation to ai and labour
work is a source of suffering and misery
ai automation could eliminate the need for human labour
allowing people to focus on leisure and non-work activities
which argument is more convincing
work or anti work
pro work argument
provides structure, purpose and identity that people may lose if work is removed entirely
even if automation reduces the amount of work available
still should be balanced and not excessive
According to Chiang, what distinguishes art from AI-generated content in terms of the number and nature of choices made?
art involves many meaningful creative choices made by humans throughout the process, while AI-generated content is produced from a small number of prompts that leave most creative decisions to the system
Why does Chiang think a single prompt is not enough to count as art?
he believes this does not qualify as art because a single prompt cannot substitute for the "thousands of choices" made by a human artist
Does Chiang completely reject AI-generated content as art?
no he suggests that if someone uses many iterative prompts and makes more choices over time, it could potentially count as art
Explain Chiang’s argument about why ChatGPT is not actually “using language” despite producing coherent sentences. What distinction does he make regarding intention and communication?
ChatGPT produces coherent sentences without any intention or desire to communicate
real language requires an intention to express meaning
but ChatGPT only predicts words and mimics language without understanding or intent, so it does not truly communicate
Locke's theory of labour
people own what they create through their labour (their work/effort)
Goetze's argument about AI
argues that using artists' work to train AI without permission is unfair and counts as theft
What role does John Locke’s labor theory play in Goetze’s argument about AI?
Goetze uses Locke’s labor theory to argue that people own the products of their labor, including creative work like art.
using artists’ work to train AI without permission is like stealing their labor and violates their ownership.
Goetze's distinction between human artistic influence and AI art generation.
human:
learn from art slowly
in small amounts
follow rules like giving credit and respecting others
Ai:
takes huge amounts of art without permission
uses it all at once
so powerful it can make it harder for human artists to make money or compete
destructive uploading
the brain is destroyed and scanned to create a digital copy, which may only be a replica rather than the original person
this possibility of mere replication is the concern
gradual uploading
the brain is replaced slowly, piece by piece, with parts that perform the same functions
you stay conscious the whole time
feels like you are continuing, not being replaced
why gradual uploading is safer than destructive
Because there is no moment where you “turn off”
Your consciousness continues the whole time
So it’s more likely that it’s still you
extendible process
a way of creating intelligence that can be improved step by step. If you make changes like giving the ai more data, more time, or better algorithms, the system becomes smarter
why biological reproduction is not extendible (Chalmers)
you can’t control or “upgrade” the outcome
changing how reproduction is done does not reliably make a better or smarter person
unlike ai systems where improvements directly improve performance
AI: improve system → smarter output
Reproduction: improve process → no guaranteed improvement in child
self-play and why its extendible
self-play is when ai learns by playing against itself over and over
extendible because the ai improves by repeatedly playing against itself
each round makes the system more intelligent
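As a toy sketch (not from Chalmers; the game, win odds, and update rule are all invented), the self-play loop can be modelled in a few lines of Python: an agent repeatedly plays a frozen copy of itself, and each win raises its skill, so the benchmark it must beat next round also rises — the "extendible" step.

```python
import random

def play(skill_a, skill_b):
    """One match: the stronger player wins proportionally more often."""
    return random.random() < skill_a / (skill_a + skill_b)

def self_play(rounds=100):
    """Toy self-play loop: each round the agent plays a frozen snapshot
    of itself; every win nudges skill upward, so next round's opponent
    (the new snapshot) is stronger too."""
    skill = 1.0
    for _ in range(rounds):
        snapshot = skill              # frozen copy of the current agent
        if play(skill, snapshot):     # 50/50 odds against an equal copy
            skill *= 1.05             # small improvement from the win
    return skill

random.seed(0)
final_skill = self_play()            # skill has grown above 1.0
```

Because the opponent is always an equal copy, each game is a coin flip, yet roughly half the rounds still produce wins, so skill ratchets upward over time — which is why this process is extendible in a way biological reproduction is not.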
According to Chalmers, what is the "intelligence explosion"?
when AI keeps making smarter versions of itself,
creating a chain where each new AI is more intelligent than the last,
leading to very rapid growth in intelligence that can surpass human level
Regulatory tilt
the default direction/starting point regulators choose when they are unsure about AI
meaning whether they tend to allow it or restrict it
why regulatory tilt matters when making decisions about regulating AI technologies
it matters because this choice affects whether AI rules focus more on safety (avoiding harm) or innovation (allowing new technology to develop)
specialized regulatory agency (FDA for AI)
people specifically trained on how to handle AI
like the FDA regulates medicine
advantages of having a specialized regulatory agency compared to relying solely on legal or court-based regulation
special agencies have AI experts, so they understand the technology better than politicians or courts (expert knowledge)
can update rules quickly, while laws can take a long time (faster decisions)
can check AI before release, instead of waiting for harm and then reacting (prevent problems before they happen)
spend more attention on dangerous AI and less on simple low-risk tools (focus on risky AI)