L17 - FOCUS: GENERATIVE AI AND BIOMETRICS

General Purpose AI

Generative AI

The AI Act defines a ‘general-purpose AI model’ as a model that displays significant generality and is capable of competently performing a wide range of distinct tasks, and that can be integrated into a variety of downstream systems or applications. The definition excludes AI models used for research, development, and prototyping activities before they are placed on the market (Art. 3(1)(63)).

The AI Act builds the category of general-purpose AI models on the notion of foundation models and imposes horizontal compliance obligations, particularly transparency about copyright-protected content used in their training (rec. 105).

General Obligations for GenAI

Providers of generative AI must adhere to the following general obligations:

  • Provide technical documentation for the model.
  • Offer documentation for downstream providers.
  • Implement a policy to comply with copyright law.
  • Furnish a summary of the training content utilized.
  • Appoint a representative within the EU.

General Purpose AI Systems (GPAI) and Systemic Risk

GPAI models are classified based on the risk they pose. A GPAI model is considered to pose a 'systemic risk' if it:

  • Has high impact capabilities as assessed via technical tools, methodologies, indicators, and benchmarks.
  • Is designated by the European Commission, following a qualified referral by the Scientific Panel, to have equivalent capacity or impact.

The classification of a GPAI model as posing systemic risk initially depends on its capabilities, based either on a quantitative threshold for the cumulative amount of computation used in its training, measured in floating-point operations (FLOPs), or on an individual designation decision of the Commission taking into account the criteria listed in Annex XIII (e.g. number of parameters, quality or size of the training dataset).

The identified threshold is 10^25 FLOPs of cumulative training compute.
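As a rough illustration (not part of the Act itself), the widely used approximation that training a dense transformer costs about 6 × parameters × training tokens FLOPs can be used to gauge whether a model approaches the 10^25 threshold. The model sizes below are hypothetical assumptions, not real figures:

```python
# Back-of-the-envelope check against the AI Act's 10^25 FLOP threshold,
# using the common ~6 * N * D approximation for the training compute of
# a dense transformer (N = parameters, D = training tokens).

THRESHOLD_FLOPS = 1e25  # Art. 51(2): presumption of systemic risk

def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate cumulative training compute in FLOPs."""
    return 6.0 * n_params * n_tokens

def presumed_systemic_risk(n_params: float, n_tokens: float) -> bool:
    """True if the estimated compute meets or exceeds the threshold."""
    return training_flops(n_params, n_tokens) >= THRESHOLD_FLOPS

# Hypothetical examples:
small = training_flops(7e9, 2e12)     # 7B params, 2T tokens -> ~8.4e22
large = training_flops(200e9, 10e12)  # 200B params, 10T tokens -> ~1.2e25

print(f"{small:.1e} FLOPs -> systemic risk: {presumed_systemic_risk(7e9, 2e12)}")
print(f"{large:.1e} FLOPs -> systemic risk: {presumed_systemic_risk(200e9, 10e12)}")
```

Under this heuristic, only models combining very large parameter counts with very large training corpora cross the threshold; the Commission can still designate smaller models individually.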

Specific Obligations for Systemic Risk (Art. 51-55)

GPAIs identified as systemic risks are subject to specific obligations:

  • Model evaluation, including adversarial testing
  • Assessment and mitigation of systemic risks
  • Serious incident tracking and reporting
  • Adequate cybersecurity protection

Comparing Obligations: FRIA, RMS, and SRA

Fundamental Rights Impact Assessment (FRIA)
  • Objective: Assessing and mitigating the impacts of high-risk AI systems on fundamental rights in specific implementation contexts.
  • Legal Basis: Article 27 of the AI Act, focusing on fundamental rights according to the Charter of Fundamental Rights of the European Union (CFREU).
  • Actors: Deployers of high-risk AI systems, including public bodies and certain private entities (e.g. providers of public services).
  • Timing: Conducted before commissioning and updated when necessary during the AI system's life cycle.
  • Methodology: Mainly qualitative; a methodology shared by the AI Office is still lacking (Art. 27(5)).
  • Focus: Directly addresses fundamental rights with extensive coverage of rights enshrined in the CFREU.
  • Result: Structured report outlining risks to fundamental rights, to be submitted to market surveillance authorities.
Risk Management System (RMS)
  • Objective: Managing wider health, safety, and fundamental rights risks during the design and development phases of AI systems.
  • Legal Basis: Article 9 of the AI Act, which establishes a risk management system for all high-risk AI systems.
  • Actors: Providers of high-risk AI systems, during the development and design phases.
  • Timing: Carried out during the development and design phases of the AI System.
  • Methodology: Based on harmonised standards, focused on identifying risks throughout the system's life cycle.
  • Focus: Includes fundamental rights within a broader perspective on the overall performance of the system.
  • Result: Continuous risk management plan to ensure compliance with harmonised standards.
Systemic Risk Assessment (SRA)
  • Objective: Assessing and mitigating systemic risks of general-purpose AI (GPAI) models at EU level, including cybersecurity and societal risks.
  • Legal Basis: Article 55 of the AI Act, aimed at identifying and managing systemic risks of GPAI models with potential EU-wide impact.
  • Actors: Providers of general-purpose AI models identified as a source of systemic risks.
  • Timing: Performed periodically to monitor and address systemic risks.
  • Methodology: Broad assessment of systemic risks at EU level, rather than of individual impacts on fundamental rights.
  • Focus: Addresses fundamental rights indirectly, focusing on systemic impacts.
  • Result: Recommendations to mitigate systemic risks, e.g. in the area of cybersecurity or related to social impact.

General Purpose AI Systems (GPAI) Code of Practice

The Code of Practice is a work in progress; the third draft was released on 11 March 2025.

The objectives of the Code are as follows:

I. Commitments by Providers of General-Purpose AI Models

  • Transparency Section

    • C I.1. Documentation

  • Copyright Section

    • C I.2. Copyright policy

II. Commitments by Providers of General-Purpose AI Models with Systemic Risk

  • Safety and Security Section

    • C II.1. Safety and Security Framework
    • C II.2. Systemic risk assessment and mitigation
    • C II.3. Systemic risk identification
    • C II.4. Systemic risk analysis
    • C II.5. Systemic risk acceptance determination
    • C II.6. Safety mitigations
    • C II.7. Security mitigations
    • C II.8. Safety and Security Model Reports
    • C II.9. Adequacy assessments
    • C II.10. Systemic risk responsibility allocation
    • C II.11. Independent external assessors
    • C II.12. Serious incident reporting
    • C II.13. Non-retaliation protections

Biometric Systems

Regulation of Biometric Identification Systems

Context

The regulation of biometric identification systems is shaped by the General Data Protection Regulation (GDPR), the Law Enforcement Directive (LED), and the Artificial Intelligence (AI) Act.

Key Aspects of the Legal Framework
  • General Data Protection Regulation (GDPR):
    • Biometric data is categorized as “special category” data due to its sensitive nature.
    • Processing is limited to cases of explicit consent, substantial public interest, or specific legal provisions.
  • Law Enforcement Directive (LED):
    • Provides law enforcement agencies with guidelines for using biometrics in crime prevention, ensuring minimal invasion of privacy and respecting proportionality.
  • Artificial Intelligence (AI) Act:
    • Introduces classifications for high-risk applications of biometrics, especially real-time facial recognition in public spaces.
    • Restricts use, mandates transparency, and demands robust safeguards.
Definition of Biometric Identification

According to Recital 15 of the AI Act, ‘biometric identification’ is defined as the automated recognition of physical, physiological, and behavioral human features (e.g., face, eye movement, body shape, voice) to establish an individual’s identity by comparing biometric data to stored data in a reference database, regardless of consent.

Biometric Data and GDPR
  • Personal data includes any information that directly or indirectly identifies a person.
  • Biometric data, such as fingerprints or facial images, is a type of personal data (Art. 4(14) GDPR).
  • To be classified as biometric data, the data must result from specific technical processing of physical, physiological, or behavioral characteristics; this classification turns on the nature of the data rather than its usage.
  • When biometric data is processed for the purpose of uniquely identifying a person, it is treated as special category data under Article 9 of the GDPR.

Sanctions Against Face Recognition: Trento Case

The Data Protection Authority (Garante) identified violations in data processing for scientific research projects (Marvel and Protector) in Trento Municipality.

The municipality was fined €50,000 and ordered to delete unlawfully processed data.

Project Details
  • Marvel: Analyzes surveillance footage and public street audio to identify security risks using AI.
  • Protector: Analyzes surveillance footage (no audio) and social media hate speech to identify threats to places of worship.
Violations and Recommendations
  • Lack of legal justification for data processing.
  • Inadequate anonymization techniques.
  • Lack of transparency in data processing descriptions.
  • Absence of impact assessment before processing.
  • The authority emphasized the need for dialogue and compliance with privacy laws in future AI projects.

Increasing Use of Automated Facial Recognition Technology (FRT)

Governments are increasingly using FRT for various purposes, including border control and policing.

Usage Statistics
  • 70% of police forces globally have access to FRT.
  • FRT is implemented in 60% of countries' airports.
  • Countries like Australia, France, the UK, Germany, the Netherlands, and the US utilize FRT in border security at arrival gates.
Application in Policing
  • FRT is employed in national policing efforts for suspect detection and locating missing persons.
  • Governments utilize FRT for identity verification, identification, categorization, and counting purposes.
Concerns
  • Privacy and data protection issues
  • Potential biases and discrimination
  • Lack of transparency, explainability, public oversight, and accountability
  • Threats to fundamental rights such as freedom of expression, peaceful association, and assembly.

How Biometric Systems Work

Biometric systems, typically built on deep learning, operate through four stages:

  • Detection: Identifying a face from an image or video.
  • Alignment: Normalizing the face for consistency.
  • Feature Extraction: Extracting unique features (e.g., eye spacing, nose shape).
  • Template Matching: Comparing features against stored templates for identification or verification.
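The final stage can be sketched in code. This is a minimal illustration, not a real system: detection, alignment, and feature extraction are abstracted away, and we assume each face has already been converted into a fixed-length embedding vector (in practice, by a deep neural network). The names, vectors, and threshold below are illustrative assumptions:

```python
# Minimal sketch of the template-matching stage of a biometric pipeline:
# 1:N identification by cosine similarity between embedding vectors.
import math

def cosine_similarity(a, b):
    """Similarity in [-1, 1] between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def identify(probe, templates, threshold=0.8):
    """Return the best-matching enrolled identity, or None if no stored
    template is similar enough to the probe embedding."""
    best_id, best_score = None, threshold
    for identity, template in templates.items():
        score = cosine_similarity(probe, template)
        if score >= best_score:
            best_id, best_score = identity, score
    return best_id

# Toy enrolled database of embeddings (hypothetical):
db = {"alice": [0.9, 0.1, 0.2], "bob": [0.1, 0.95, 0.1]}

print(identify([0.88, 0.12, 0.18], db))  # probe close to alice's template
print(identify([0.5, 0.5, 0.5], db))     # ambiguous probe, below threshold
```

Verification (1:1) is the same comparison against a single claimed identity's template; the threshold choice directly controls the trade-off between false matches and false non-matches.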

AI Act: Biometrics

Prohibited Systems
  • 'Real-time' remote biometric identification systems in publicly accessible spaces (with narrow exceptions for law enforcement in relation to the crimes listed in Annex II).
  • Emotion recognition systems in the workplace and in education institutions.
  • Biometric categorization systems inferring race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation; this prohibition does not cover the labelling or filtering of lawfully acquired biometric datasets based on biometric data in the area of law enforcement.
High-Risk Systems
  • Post remote biometric identification systems
  • Emotion recognition systems
  • Biometric categorization systems
Definitions
  • Real-time Remote Biometric Identification System:
    • Capturing the biometric data, comparing it, and identifying the individual all occur without significant delay; limited short delays are included in order to prevent circumvention.
  • Post Remote Biometric Identification System:
    • Involves a delay between capturing biometric data and completing the identification process.

AI Act: Real-Time Systems (Art. 5, Annex II)

Conditions for Deployment
  • Only for confirming the identity of specific individuals targeted for law enforcement purposes.
  • Respecting the nature of the situation and consequences for rights and freedoms.
  • Requires prior authorization from a judicial authority or an independent administrative authority.
  • Authorization is based on a reasoned request and detailed national law rules.
Context
  • Publicly accessible places, law enforcement purposes
  • Examples: targeted search for victims of trafficking in human beings, locating a missing person, identifying a suspect of a terrorist attack, or prosecuting the crimes listed in Annex II.
Crimes in Annex II
  • Terrorism
  • Trafficking in human beings
  • Sexual exploitation of children and child pornography
  • Illicit trafficking in narcotic drugs or psychotropic substances
  • Illicit trafficking in weapons, munitions, or explosives
  • Murder, grievous bodily injury
  • Illicit trade in human organs or tissue
  • Illicit trafficking in nuclear or radioactive materials
  • Kidnapping, illegal restraint or hostage-taking
  • Crimes within the jurisdiction of the International Criminal Court
  • Unlawful seizure of aircraft or ships
  • Rape
  • Environmental crime
  • Organized or armed robbery
  • Sabotage
  • Participation in a criminal organization involved in one or more of the offences listed above.

AI Act: High-Risk Biometric Systems (Art. 6, Annex III)

  • Post remote biometric identification systems
  • Biometric categorization systems
  • Emotion recognition systems
  • Exception: biometric verification whose sole purpose is to confirm a person's identity falls outside the high-risk category.
Authorisation Requirement
  • High-risk AI systems for post-remote biometric identification need authorisation in criminal investigations
  • Authorization must be obtained from a judicial authority or an administrative authority whose decision is binding and subject to judicial review; in urgent cases authorization may be requested after use, without undue delay and at most within 48 hours.
Limitations on Use:
  • Each use of the system must be strictly necessary for investigating a specific criminal offense.

AI Act: Migration and Asylum (Art. 6 & Annex III)

  • Use of AI systems in migration, asylum, and border control management is allowed under relevant Union or national law.
  • Permitted AI Systems:
    • Polygraphs and Similar Tools: Used by competent public authorities for specific purposes.
    • Risk Assessment AI Systems: Used by or on behalf of competent public authorities to evaluate risks.
    • Assistance in Application Examination: AI systems assist competent public authorities in examining applications for asylum, visa, or residence permits.
    • Detection and Recognition Systems: AI systems used for detecting, recognizing, or identifying individuals in the context of migration, asylum, or border control management, with the exception of the verification of travel documents.

Harmonized Legal Standards

  • Need for Unified EU Guidelines: Emphasized by ECtHR’s rulings to avoid fragmentation in biometric surveillance laws.
  • Inspiration for Future EU Policies: Reflects ECtHR principles.
  • Aligning with the AI Act: Promotes transparency, accountability, and proportionality in biometric surveillance.

A FRIA for Biometrics

Exploring how technologies like Clearview AI could be regulated to protect individual rights while allowing effective law enforcement.

Interest Groups
  • Law Enforcement: Needs effective tools to solve crimes and enhance public safety.
  • Privacy Advocates: Prioritizes protecting individual rights to privacy and freedom from mass surveillance.
  • Government Regulators: Concerned with setting fair and enforceable laws to balance security and privacy.
  • Technologists: Interested in ethical AI use and technological limitations of facial recognition.
Potential Solutions
  • Strict Access Controls and Usage Limits: Restrict the use of Clearview AI’s technology to high-stakes cases with judicial oversight.
  • Data Minimization Policies: Limit the amount of personal data retained, with regular audits to ensure compliance.
  • Independent Audits and Bias Testing: Require regular third-party audits to evaluate accuracy and check for biases.
  • Transparency in Use: Law enforcement agencies must disclose usage statistics.
  • Consent Requirements for Data Sources: Prohibit scraping data without explicit consent.
  • Creation of an Oversight Committee: Establish an independent committee to monitor and investigate complaints.