AI EXAM


1

Definition of AI

Refers to the capability of computational systems to perform tasks typically associated with human intelligence, such as learning, reasoning, and problem solving.

2

Different streams of AI

Good old-fashioned AI (GOFAI) --> There is a domain and a function mapping input to output; a person defines and has full control over the function.

Machine learning --> The function is derived from the available data (model training); see the sketch below for the contrast with GOFAI.

Deep learning --> A subset of machine learning; a person does not need to specify the features of the data, because the computer finds the function (and the relevant features) itself.

Generative AI --> Based on statistics; guesses the next output based on previous predictions and outcomes (ChatGPT, audio, images, etc.).
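
A minimal Python sketch of that contrast (the speeding-fine scenario and the crude midpoint "training" step are invented for illustration): in GOFAI a person writes the function by hand; in machine learning the function is derived from labelled data.

# GOFAI: a person writes the function, based on domain knowledge.
def is_speeding_gofai(speed_kmh, limit_kmh=50.0):
    # The rule is explicit and fully under human control.
    return speed_kmh > limit_kmh

# Machine learning: the function is derived from data instead of written by hand.
def train_threshold(samples):
    # samples: labelled (speed, was_fined) examples.
    fined = [s for s, label in samples if label]
    not_fined = [s for s, label in samples if not label]
    # Place the threshold midway between the two classes (a crude "model").
    return (min(fined) + max(not_fined)) / 2

data = [(40.0, False), (48.0, False), (55.0, True), (70.0, True)]
threshold = train_threshold(data)          # the "training" step
print(is_speeding_gofai(60.0))             # True: rule written by a human
print(60.0 > threshold)                    # True: rule learned from data (threshold = 51.5)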

3

Human Intelligence

- AI does not understand the world as we humans do, because for AI everything is based on statistics and data.

- Its major weakness is the risk of hallucination: models are trained to answer questions even when the answers may be misleading or simply outright wrong.

- Bias problem: if the data is biased, then the AI system will be too.

- AI is not good at logical reasoning.

4

Ethical considerations

- Carbon footprint

- Monopoly power of technology companies

- Lack of transparency about human workers, e.g. workers in Kenya being hired below minimum wage.

5

History of AI and Law: key developments

AI and Law originated in the 1970s with its first key developments. The field grew in the 1980s: the first international conference on AI and Law was organised in 1981, and the first books were published in this decade.

Around the 1990s there were some neural network experiments (in contrast to GOFAI), and from the mid-2000s there was a shift in AI from rule-based to data-driven approaches. From the 2010s onward there was major progress in deep learning algorithms, driven by:

- huge availability of data

- increases in computing power

6

Ways AI can be used in the future of law

  • Document Automation & Contract Analysis

    • AI Tool: NLP-powered platforms (e.g., Kira, eBrevia).

    • Use Case: Automate contract review by identifying key clauses, risks, and anomalies. Reduces time spent on due diligence by up to 90% (Stranieri & Zeleznikow, 2004).

  • Predictive Analytics for Litigation Strategy

    • AI Tool: Predictive models (e.g., Premonition, Lex Machina).

    • Use Case: Analyze historical case data to predict judge rulings or settlement odds. Helps prioritize cases and allocate resources efficiently (Medvedeva et al., 2020).

  • Virtual Legal Assistants

    • AI Tool: LLM-based chatbots (e.g., Harvey AI).

    • Use Case: Provide 24/7 client support for routine queries (e.g., NDAs, compliance FAQs) and draft preliminary memos. Frees lawyers for high-value tasks (Lauritsen, 2006).


7

Two branches of AI and law

IT Law: focuses on the legal implications of technology, e.g. electronic signatures, data protection, cybersecurity, etc.

IT for Lawyers: the use of technology to support legal work.

8

Types of legal AI systems

Rule-based systems: apply legal rules to the facts (see the sketch below).

Case-Based Reasoning (CBR): AI compares new cases to precedents to suggest an outcome.

Neural networks: useful for pattern recognition between cases, but they lack explanations.
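
A minimal sketch of the rule-based idea, with invented, heavily simplified conditions (not any real system's rules): legal rules are encoded as explicit code and applied to the facts of a case.

def assess_processing(facts):
    # Rule 1 (illustrative): consent makes processing lawful.
    if facts.get("consent_given"):
        return "lawful: the data subject consented"
    # Rule 2 (illustrative): a legal obligation makes processing lawful.
    if facts.get("legal_obligation"):
        return "lawful: required by law"
    # Default: no rule fired, so no legal basis was found.
    return "unlawful: no legal basis found"

case_facts = {"consent_given": False, "legal_obligation": True}
print(assess_processing(case_facts))   # lawful: required by law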

9

Limitations of AI in legal practice

- Lawyers are reluctant to adopt AI tools.

- Legal reasoning is hard to formalize because of vague, open-ended concepts (especially in case law); as previously mentioned, AI struggles with logical reasoning.

- Many neural networks lack transparency and explainability.

- Legal outcomes produced without clear reasoning undermine the rule of law.

10

Core challenges of legal reasoning and AI

1. Legal reasoning is complex

- open-textured concepts, multiple valid answers, and dynamic interpretation

- mechanically matching facts to rules and cases often fails in a legal context

- legal reasoning is better described as iterative theory construction

(a back-and-forth process between legal rules, facts, values, and evolving interpretations)

2. AI must simulate legal deliberation

- it must construct and test hypotheses

- it must balance legal rules, precedents and societal values

11

The key topics of AI and law (RKLL)

1. Reasoning: defeasible argumentation to handle legal uncertainty

2. Knowledge: the representation of legal knowledge is underdeveloped

3. Learning: machine learning on legal data may have bias issues

4. Language: AI struggles to truly understand nuanced legal language

12

Ethical considerations and safeguards for AI and law

- Avoid harmful outcomes from biased rulings and privacy breaches

- Ensure legal transparency and ethical design with human oversight

- Encourage hybrid AI systems combining rule-based logic and data-driven learning

- Promote collaboration between AI experts, legal theorists and ethicists

13

The right to personal data protection

- Under Article 8 of the ECHR, a person has the right to respect for private and family life, which includes protection with respect to the processing of personal data.

- The CoE Convention 108 is the only internationally binding instrument on data protection.

- Under EU law, data protection is acknowledged in the EU Charter of Fundamental Rights (Article 8) and in Article 16 TFEU. This is primary EU law; all member states need to act within its competences.

- An EU directive establishes that data protection rules and principles govern personal data processing for the purposes of preventing, investigating, detecting and prosecuting criminal offences or executing criminal penalties.

14

Personal Data

There are two main differences between EU law and CoE law.

"Personal data" is defined as information relating to an identified or identifiable natural person:

- an individual can be identified or identifiable by this information, or;

- an individual, while not identified, can be singled out by the information, which makes it possible to find out who the data subject is by conducting further research.

"Data subject":

- Under EU law: natural persons are the only beneficiaries of data protection rules.

- Under CoE law: protection covers individuals with regard to the processing of their personal data, and legal persons with a legitimate interest receive the same protection (EU data protection does not protect legal persons).

15

Anonymization

GDPR and Convention 108: "data must be kept in a form which permits identification of data subjects for no longer than is necessary for the purposes for which the personal data are processed."

- The data needs to be anonymised or erased once it is no longer necessary (see the sketch below).

- Once data is successfully anonymised, it is no longer personal data, and data protection legislation no longer applies.

- However, if previously anonymised data is linked back to a data subject (for example so they can exercise their right of access), it becomes personal data again.
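
A minimal sketch of the storage-limitation step (the field names, the one-year retention period, and the null-out are illustrative assumptions; real anonymisation must also address indirect identifiers, not just names):

from datetime import date, timedelta

RETENTION = timedelta(days=365)   # hypothetical retention period

records = [
    {"name": "B. de Vries", "purpose_fulfilled": date(2023, 1, 10)},
    {"name": "C. Bakker", "purpose_fulfilled": date.today()},
]

def enforce_retention(record):
    # Once the purpose has lapsed beyond the retention period,
    # strip the identifying attribute (crude anonymisation for illustration).
    if date.today() - record["purpose_fulfilled"] > RETENTION:
        record["name"] = None
    return record

print([enforce_retention(r) for r in records])   # first record no longer names anyone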

16

Pseudonymisation

Personal information contains attributes such as names, birthdates, sex, etc. that could lead to identification. "Pseudonymisation" is the processing of personal data in such a way that it can no longer be attributed to a specific data subject without the use of additional information, with that additional information kept separately, ensuring that a natural person cannot identify the specific subject (see the sketch below).

- This has been recognised by the GDPR but not by the Council of Europe.
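
A minimal sketch of pseudonymisation (the record fields and token scheme are illustrative assumptions): direct identifiers are replaced with random pseudonyms, and the re-identification table, the "additional information", is kept separately.

import secrets

lookup_table = {}   # the "additional information", kept separately under stricter controls

def pseudonymise(record):
    pseudonym = secrets.token_hex(8)            # random token, no intrinsic link to the name
    lookup_table[pseudonym] = record["name"]    # re-identification key, stored apart
    return {"pseudonym": pseudonym, "diagnosis": record["diagnosis"]}

patients = [{"name": "Alice Jansen", "diagnosis": "asthma"}]
print([pseudonymise(p) for p in patients])   # no names; re-identification needs the separate table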

17

Authentication

The process by which a person is able to prove that they possess a certain identity and are authorised to do certain things, e.g. withdraw money.

- photo, fingerprints, passport, etc.
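
A minimal sketch of the software version of this idea (the password and the plain SHA-256 scheme are illustrative only; real systems use salted, slow password hashes): a person authenticates by proving possession of a secret, of which only a hash is stored.

import hashlib

stored_hash = hashlib.sha256(b"correct horse battery staple").hexdigest()

def authenticate(attempt):
    # The person proves possession of the secret; the secret itself is never stored.
    return hashlib.sha256(attempt.encode()).hexdigest() == stored_hash

print(authenticate("wrong password"))                  # False
print(authenticate("correct horse battery staple"))    # True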

18

Special categories of personal data

Under EU law and CoE law there are special categories of data which pose risks to data subjects when processed and therefore need enhanced protection.

Processing of these categories is prohibited, and there is a limited number of conditions under which such processing is lawful.

Within the frameworks of the modernised Convention 108 (Article 6) and the GDPR (Article 9), the sensitive data are:

- personal data revealing racial or ethnic origin

- political opinions, religious beliefs, and philosophical beliefs

- trade union membership

- genetic data and biometric data processed for the purpose of identifying a person

- data concerning health, sexual life or sexual orientation

19

Special categories of personal data: criminal convictions

Criminal convictions, offences, and related proceedings can reveal serious and highly sensitive information about an individual's past or alleged behaviour, and such data is therefore treated with extra caution and protection.

- Modernised Convention 108: criminal data is listed under the special categories of personal data in Article 6.

- GDPR: criminal data falls under Article 10; processing is strictly limited to official authority and must have appropriate safeguards.

20

Key principles of data protection

Under Article 5 of the GDPR, the principles governing the processing of personal data are:

- Lawfulness, fairness and transparency: processing must be fair, and transparent communication to data subjects is required.

- Purpose limitation: data must be collected for specific, explicit and legitimate purposes. No further processing is allowed unless the purpose is compatible, the data subject consents to it, or it is required by law.

- Data minimisation: only data that is adequate, relevant, and necessary should be processed.

- Data accuracy: data should be kept up to date, and inaccurate data should be corrected or erased, including through regular check-ups.

- Storage limitation: data should not be kept longer than necessary; once the purpose is fulfilled, the data should be erased or anonymised.

- Integrity and confidentiality: data must be protected using appropriate technical and organisational measures, for example tools like encryption or access controls.

- Accountability: the controller must be able to demonstrate compliance through safeguards.

21

Processing of personal data in AI systems

AI systems such as machine learning models process data in ways that can involve personal data: training datasets contain variables, and when these variables are detailed enough they can lead to re-identification of individuals.

Protecting training data:

- anonymisation and pseudonymisation

- generalisation

Risk of leakage:

- A trained model may leak data: an attacker can use partial knowledge about someone to infer more data via the model, or run membership inference attacks, which determine whether a specific individual's data was in the training set (see the sketch below).

- Models can leak by design, intentionally or unintentionally embedding personal data.

Legal implications:

- Sharing a model which contains personal data = "processing" under Article 4(2) of the GDPR.
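
A toy sketch of a membership inference attack (the records and confidence values are invented; a real attack queries a genuine trained model): an overfitted model is more confident on memorised training records, which lets an attacker guess whether a specific person's data was in the training set.

training_set = {("age=34", "smoker"), ("age=51", "non-smoker")}

def model_confidence(record):
    # Caricature of an overfitted model: memorised training records get
    # near-certain confidence; unseen records get much less.
    return 0.99 if record in training_set else 0.60

def infer_membership(record, threshold=0.9):
    # The attacker only queries the model and thresholds its confidence.
    return model_confidence(record) > threshold

print(infer_membership(("age=34", "smoker")))    # True: likely in the training set
print(infer_membership(("age=29", "smoker")))    # False: likely not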

22

The AI Act

The EU's ambition to regulate AI, with the goals of upholding EU values, harmonising the AI market, and securing global leadership through regulation.

- The AI Act is meant to curb risks to fundamental rights, democracy, and the rule of law.

23

History of the AI Act

The EU was significantly behind China and the US when it came to AI tools. Policymakers were therefore concerned about the public interest and what needed to be done to protect democracy. Because of this, the EU created new laws and requirements (such as the GDPR) to guard against AI-generated risks. The Commission also created a High-Level Expert Group on AI (AI HLEG) to develop a set of harmonised ethical guidelines based on European values:

1) human oversight, 2) technical safety, 3) privacy and data governance, 4) transparency, 5) diversity and non-discrimination, 6) societal and environmental wellbeing, 7) accountability.

24

White Paper on AI

A blueprint for legislation regulating AI based on European values. The enforcement envisaged by the White Paper is vague, but it suggests that high-risk systems should be subjected to a prior conformity assessment.

- Risk-based approach: only high-risk systems are covered

- Emphasis on market-based regulation

- Little clarity on enforcement

25

Scope of the AI Act

- Harmonise member states' national legislation, eliminate obstacles to trade on the internal AI market, and protect citizens and society.

- Legal basis: Article 114 TFEU

There are some exceptions to the AI Act:

- military and defence use

- personal / non-professional use

- national security

- open source systems

- AI is defined in line with the OECD definition: inference-based systems

26

Risk-based approach

- Unacceptable risk (banned completely, Article 5)

- High risk (strict requirements, Articles 8-27)

- GPAI models (transparency + systemic-risk obligations)

- Limited risk (transparency only)

- Minimal risk (no obligations)

27

Prohibited practices (Article 5)

This bans AI that:

- manipulates behaviour

- exploits vulnerable individuals

- conducts social scoring

- enables emotion recognition (in workplaces and education)

- performs untargeted scraping of facial images

- infers sensitive attributes

- allows real-time facial recognition in public

28

High-risk systems

1. AI embedded in regulated products (e.g. cars)

2. Stand-alone high-risk systems (Annex III):

- biometrics

- education

- employment

- public/private services

- law enforcement

- justice and democracy

- border control

Key requirements:

1. Risk management

2. Data governance

3. Transparency and human oversight

4. Accuracy, robustness, cybersecurity

29

GPAI models

Covers large language models (LLMs) and other multi-task models.

They must:

- maintain technical documentation

- follow EU copyright law

- publish training data summaries

GPAI models with systemic risk must additionally carry out:

- adversarial testing

- risk mitigation

- cybersecurity and incident reporting

Transparency obligations (Article 50):

- chatbots must identify themselves

- emotion recognition systems must inform users and obtain consent

- AI-generated media must be disclosed

30

Non-high-risk systems

- no obligations

- voluntary codes of conduct are encouraged

- still subject to existing laws

31

Limitations of the AI Act

1. Over-reliance on product-safety logic

- such frameworks are ill suited to capture non-quantifiable risks

- the AI Act treats high-risk AI systems similarly to physical products, which means the focus is on health and safety risks, even though the actual risks are more social and psychological.

2. Delegation to technical standards

- implementation depends heavily on harmonised technical standards and certification bodies, which are often non-transparent and privately governed.

- this amounts to a privatisation of public values and human rights protection.

3. Fuzziness of rights integration

- the law assumes that providers can translate legal principles into software requirements, but since legal principles are often vague or subjective, this can lead to problems.

- it may result in arbitrary or inconsistent enforcement.

32

Aim and legal status of the Guidelines on the definition of AI systems

The guidelines are used to determine whether a system constitutes an AI system within the meaning of the AI Act, thereby facilitating the effective application and enforcement of the AI Act.

The guidelines are non-binding (para 9): any authoritative interpretation of the AI Act may ultimately only be given by the CJEU.

33

The Brussels Effect

A phenomenon whereby EU regulation becomes a global standard due to the EU's market size, regulatory capacity, stringency, etc.

This will also occur with the AI Act for high-risk and general-purpose AI systems, since the EU market is wealthy and the Act has territorial extensions.

However, there is a side effect: the global diffusion of the AI Act's standards could weaken the protection of fundamental rights. Other jurisdictions might copy the EU product-safety approach without adding further safeguards, and developing countries may lack the technical or institutional capacity to adopt the AI Act wholesale.

34

Emotional AI

Technologies that use AI to sense, infer and respond to human emotional life.

Hard biometrics include facial images, fingerprints, etc. used to identify an individual.

Soft biometrics are, for example, moods and facial expressions without identification: non-identifying but still personal and sensitive.

These systems challenge fundamental human rights like human dignity, privacy, and data protection.

35

Non-identifying emotional AI

Retail: mood-detection systems offering emotion-based recommendations

Assistive tech: smart glasses helping children with autism to identify emotions

Automotive: emotion detection in vehicles to enhance safety and personalise driving

—> Despite not identifying individuals, such systems influence behaviour, posing regulatory and ethical concerns.

36

Fundamental rights

1. Human dignity (Article 1 of the EU Charter)

- Humans must never be treated as mere tools; even without identification, emotional AI can manipulate decisions covertly, exploit emotional vulnerabilities and violate dignity by objectifying emotions.

2. Privacy (Articles 7 and 8 of the EU Charter)

- Emotional data, even when non-identifying, can pertain to one's private life.

- The GDPR does not apply to soft biometric data (only to 'identifiable' hard biometrics).

37

Regulatory analysis (gaps in the EU framework)

GDPR —> defines biometric data as identifying data (excludes soft biometrics)

AI Act —> ties its biometric definitions to personal data (excludes soft biometrics)

—> Article 3(39): emotion AI in the workplace and education is prohibited unless it is used for medical or safety reasons, and the definition only covers emotion inference within the system's specific purpose.

Generic AI Act provisions —> Article 5 prohibits manipulative AI techniques but sets a high threshold, because it only applies when the technique appreciably impairs decision making and causes significant harm.

38

Commercial profiling of children

Children have international protection under the provisions of the UNCRC. There are challenges regarding AI, child protection and children's rights because of the 'automated processing of personal data linked to a child's profile, concerning decisions about the child and the prediction of personal preferences and behaviours'.

This infringes rights such as:

Article 16 —> right to privacy and data protection

Article 6 —> right to development

Article 13 —> right to freedom of expression

Article 14 —> freedom of thought

Article 32 —> protection from economic exploitation

The GDPR has not been able to restrict the commercial profiling of children.

39

Case 1: the prohibitions of Article 5, the extent to which they anticipate emotional AI, and meeting the threshold

Under Article 5, the prohibitions cover AI that:

  • deploys techniques beyond a person's consciousness, or purposefully manipulative techniques, materially distorting the behaviour of a person and appreciably impairing their ability to make an informed decision;

  • exploits vulnerabilities of a natural person or a specific group of persons due to their age, disability or specific social or economic situation, materially distorting their behaviour;

  • causes significant harm.

  • See also Article 6(2) in conjunction with Annex III.

40

European Commission Guidelines on the definition of AI systems

There are 7 core elements of the definition:

1. Machine-based system

2. Varying levels of autonomy —> degree of independence; can act without humans

3. Adaptiveness —> self-learning abilities

4. Explicit or implicit objectives —> the goals the system is designed to achieve

5. Inference process —> infers how to generate outputs

6. Outputs —> predictions, content, recommendations or decisions

7. Influence on physical or virtual environments

41

The SyRI case in the Netherlands

The SyRI system detected social welfare fraud risk via profiling, using data from various governmental agencies. The data was run through a risk model to generate fraud risk profiles, which were typically targeted at low-income or immigrant communities.

The main issue is that the system lacked transparency and proper safeguards and violated Article 8 of the ECHR.

—> The Hague court ruling: SyRI violated Article 8 due to the risk of reinforcing discrimination and stigmatisation.

—> Article 8 of the ECHR: protects private life; interference must be lawful.

—> Article 22 of the GDPR: prohibits decisions based solely on automated processing unless lawful safeguards exist.

Problems: lack of transparency, algorithmic bias that reinforces structural inequalities, and a lack of safeguards.

There need to be safeguards, transparency and oversight over these AI systems.

42

The HoNOS case

Concerns legal action against the Dutch Healthcare Authority regarding the use of a mental health scoring system in administrative data processing. The case is critical of how data and AI technologies are implemented without sufficient legal safeguards, often without patient consent.

  • HoNOS is used to assess the mental health of patients. These scores are administratively linked to other data sources.

  • This raises privacy concerns about the legal basis and proportionality of this data usage, especially given the highly sensitive nature of mental health data.

  • It raises concerns about violations of Article 8 of the ECHR, the right to privacy:

    • lack of transparency and accountability

    • datafication of mental health

    • informational power and the erosion of individual autonomy

43

Generative AI and deepfakes

Deepfakes (AI Act, Article 3(60)): AI-generated synthetic media that appear authentic but are fabricated. The Article 3(60) definition is somewhat problematic, because it says deepfakes resemble real-world examples such as people and objects, but that is not necessarily true, since the content is fabricated.

The EU AI Act was approved in 2024 and aims to regulate AI risks, but it may conflict with human rights such as privacy and freedom of expression (Articles 8 and 10 of the ECHR).

Risks and harms of deepfakes:

  • Political manipulation: fake audio

  • Sextortion: AI-generated explicit content

  • Child exploitation: AI-generated child sexual abuse material

—> Victims suffer trauma, reputational damage, and financial losses.

Under Article 3(60) of the AI Act there are key provisions:

  • Transparency obligations: providers must label AI-generated content

  • Risk classification: AI deepfakes are treated as high-risk manipulation

    • There is no explicit "high risk" classification for sextortion or child abuse.

    • Exemptions for law enforcement and creative deepfakes are prone to abuse.

44

What are the risks of deepfake pornography and political deepfakes?

Risks of deepfake pornography:

  • Sexual integrity and privacy (Article 8 of the ECHR): non-consensual deepfake porn violates the right to private life and human dignity, and victims often suffer trauma, anxiety and reputational harm.

  • The harm falls disproportionately on women and girls, which reinforces the objectification of women: fake porn dehumanises women by reducing them to manipulated sexual content, exacerbating toxic masculinity and gender stereotypes.

  • Legal practice: under the EU AI Act, deepfakes are not completely banned; the Act instead imposes transparency obligations.

Political deepfakes:

  • Undermine democracy: fabricated videos of politicians can spread false narratives and weaken public trust. Deepfakes can also spread extreme ideologies such as far-right propaganda.

  • This leads to manipulation of public opinion and influences people's choices. Additionally, political deepfakes can cross borders, complicating enforcement across different jurisdictions.

  • Regulatory problem: political deepfakes are not classified as high risk (unlike biometrics) under the AI Act (Annex III), despite their societal and personal impact.

Individual harm can scale into societal collapse.