AI and Autonomous Robots


17 Terms

1. what is AI

"AI systems are defined as software that is developed with certain techniques and approaches as listed in Annex I (machine learning, logic- and knowledge-based approaches) and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments these systems interact with." (AI Act, 2023)

  • The AI Act covers AI systems that are placed on the market (developers), put into service (deployers), or used in the EU (users), as well as global vendors and users.

2. generative AI

  • Generative AI (GenAI) refers to artificial intelligence systems designed to generate new content, such as text, images, audio, video, and code, based on training data and learned patterns.

    • Generative AI creates original outputs that resemble human-generated content

European Union Definition (AI Act Context)

  • In the EU AI Act, Generative AI falls under the category of Foundation Models, which are AI systems trained on vast datasets and capable of performing a wide range of tasks. The Act requires providers of such models to comply with transparency, risk mitigation, and accountability obligations.

Key Features of Generative AI:

  1. Content Creation – Generates text (e.g., ChatGPT), images (e.g., DALL·E), and music (e.g., Jukebox).

  2. Adaptability – Can be fine-tuned for specific domains like law, medicine, or art.

  3. Self-Learning – Uses deep learning and large language models (LLMs) to improve output quality.

  4. Potential Risks – Includes misinformation, bias, and deepfakes.

📌 EU AI Act Reference:
Generative AI obligations are outlined in Article 28b, which covers transparency requirements for Foundation Models and high-risk AI systems.
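The "training data and learned patterns → new content" idea above can be sketched with the simplest possible generative model: a character-level Markov chain. This is purely illustrative (far simpler than the LLMs the Act targets), and all names here are invented for the sketch:

```python
import random

def train_markov(text, order=2):
    """Learn patterns: map each state (last `order` chars) to observed next chars."""
    table = {}
    for i in range(len(text) - order):
        state, nxt = text[i:i + order], text[i + order]
        table.setdefault(state, []).append(nxt)
    return table

def generate(table, seed, length=40, rng=None):
    """Generate new content: repeatedly sample a next char for the current state."""
    rng = rng or random.Random(0)
    out = seed
    for _ in range(length):
        choices = table.get(out[-len(seed):])
        if not choices:
            break  # unseen state: nothing learned, so stop
        out += rng.choice(choices)
    return out

table = train_markov("the cat sat on the mat. the cat ran. ")
print(generate(table, "th"))
```

The same train-then-sample shape underlies real generative systems; they differ mainly in the scale of the training data and the sophistication of the learned model.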

3. category of foundation models

The European Union's Artificial Intelligence Act (AI Act) addresses Generative AI within its regulatory framework. While the Act does not explicitly define "Generative AI," it encompasses such systems under the broader category of Foundation Models.

According to the European Parliament's proposal, Foundation Models are AI systems trained on extensive and diverse datasets, designed for versatility across various tasks. Generative AI systems are a subset of these models, specifically intended to autonomously generate content like text, images, audio, or video.

The AI Act imposes specific obligations on providers of Foundation Models, including Generative AI systems. These obligations encompass rigorous data governance, technical documentation, and transparency measures to ensure the responsible development and deployment of AI technologies.

4. Machine Learning

  • Machine Learning (ML) is a subset of artificial intelligence (AI) that enables computer systems to learn from data and improve their performance without being explicitly programmed. It involves algorithms that analyze patterns, make predictions, and adapt based on experience.

  • The EU AI Act classifies Machine Learning as one of the core techniques used in AI systems. According to the European Commission’s proposal for the AI Act, AI systems are defined as those developed using:

    • "Machine learning approaches, including supervised, unsupervised, and reinforcement learning, using various methods, including deep learning."
      (European Commission, AI Act, 2021, Annex I)

5. characteristics of ML

  1. Data-Driven – Uses structured or unstructured data to learn patterns.

  2. Algorithm-Based – Uses models like neural networks, decision trees, and support vector machines.

  3. Self-Improving – Continuously refines predictions and decisions.

📌 Types of ML

  1. Supervised Learning – Learns from labeled data.

  2. Unsupervised Learning – Finds hidden patterns in data.

  3. Reinforcement Learning – Learns through trial and error.
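The three types above can be sketched in miniature with pure Python. Everything here is a toy invented for illustration (a 1-nearest-neighbour rule for supervised learning, a mean split for unsupervised, a two-armed bandit for reinforcement):

```python
import random

# Supervised: learn from labeled pairs — here, a 1-nearest-neighbour rule.
def predict_1nn(labelled, x):
    """Return the label of the training point closest to x."""
    return min(labelled, key=lambda p: abs(p[0] - x))[1]

# Unsupervised: find structure in unlabelled data — here, split around the mean.
def two_clusters(xs):
    """Partition points into 'low'/'high' groups around the mean."""
    mid = sum(xs) / len(xs)
    return {x: ("low" if x < mid else "high") for x in xs}

# Reinforcement: learn by trial and error — a tiny two-armed bandit.
def bandit(pulls=1000, rng=None):
    """Estimate each arm's value from sampled rewards; return the better arm."""
    rng = rng or random.Random(0)
    rewards = {0: 0.2, 1: 0.8}            # hidden win probabilities
    value, count = {0: 0.0, 1: 0.0}, {0: 0, 1: 0}
    for _ in range(pulls):
        arm = rng.choice([0, 1])          # explore both arms uniformly
        r = 1.0 if rng.random() < rewards[arm] else 0.0
        count[arm] += 1
        value[arm] += (r - value[arm]) / count[arm]   # incremental mean
    return max(value, key=value.get)      # exploit the learned best arm

print(predict_1nn([(1.0, "spam"), (9.0, "ham")], 2.0))   # → spam
```

In each case the program's behaviour comes from data rather than from explicitly programmed rules, which is the defining feature in the Annex I sense quoted above.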

6. robotics

Robotics refers to the design, construction, operation, and use of autonomous or semi-autonomous machines (robots) that can perform tasks traditionally carried out by humans. These machines are typically equipped with sensors, actuators, and artificial intelligence (AI) to interact with their environment and make decisions.

Key Aspects of Robotics:

  1. Autonomy – Can function without direct human control.

  2. Interactivity – Uses sensors and AI to engage with its environment.

  3. Physical Form – Unlike software-based AI, robots have a tangible presence.

  4. Industrial & Service Applications – Used in manufacturing, healthcare, autonomous vehicles, etc.
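The autonomy and interactivity aspects above boil down to a sense-decide-act loop. A minimal one-dimensional sketch (all names and rules are invented for illustration; real robots read physical sensors and drive actuators):

```python
def control_step(position, target, obstacle=None):
    """One sense-decide-act cycle: read the 'sensors', pick an action."""
    direction = 1 if target > position else -1
    if obstacle is not None and abs(obstacle - position) <= 1:
        return "stop"                      # safety rule overrides the goal
    if position == target:
        return "idle"
    return "forward" if direction == 1 else "backward"

def run(position, target, obstacle=None, max_steps=20):
    """Repeat the cycle until the robot idles or stops — no human in the loop."""
    for _ in range(max_steps):
        action = control_step(position, target, obstacle)
        if action in ("idle", "stop"):
            return position, action
        position += 1 if action == "forward" else -1
    return position, "timeout"

print(run(0, 5))              # reaches the goal: (5, 'idle')
print(run(0, 5, obstacle=3))  # halts before the obstacle: (2, 'stop')
```

The point of the sketch is that once `run` starts, every decision is made by the program from sensed state, which is what "can function without direct human control" means in practice.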

7. robotics - EU

The EU does not have a single legal definition of robotics, but various reports and policy documents provide guidance. According to the European Parliament’s Committee on Legal Affairs (2017):

  • "A robot is a physical machine equipped with sensors and actuators, capable of gathering and processing data, and performing actions based on programming or autonomous decision-making."

    • (European Parliament, Report on Civil Law Rules on Robotics, 2017)

In EU legislation, robotics is often discussed in connection with:

  • AI

  • liability

  • risk-based regulation

    • particularly in the AI Act and the Product Liability Directive.

8. automated decision-making (ADM)

Automated Decision-Making (ADM) refers to the use of algorithmic or artificial intelligence (AI)-driven systems to make decisions without direct human intervention. These decisions can impact individuals or businesses in various sectors, including finance, healthcare, law enforcement, and autonomous vehicle operations.

European Union Definition (Legal Perspective)

Under the EU General Data Protection Regulation (GDPR), Article 22, automated decision-making is defined as:

"A decision based solely on automated processing, including profiling, which produces legal effects concerning [an individual] or similarly significantly affects [them]."
(Regulation (EU) 2016/679, GDPR, Article 22)
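A toy example of what a "decision based solely on automated processing" looks like in code, and where Article 22 bites. The scoring rule and thresholds below are entirely invented for illustration:

```python
def automated_credit_decision(income, debts, requires_human_review=True):
    """Hypothetical scoring rule; the numbers are invented for illustration."""
    score = income - 2 * debts
    decision = "approve" if score >= 10_000 else "refuse"
    # GDPR Art. 22 gives data subjects the right not to be subject to a
    # decision based *solely* on automated processing that produces legal
    # effects, so an adverse outcome here would typically need a route to
    # human oversight and contestability rather than being final.
    if decision == "refuse" and requires_human_review:
        return {"decision": decision, "status": "pending human review"}
    return {"decision": decision, "status": "final"}

print(automated_credit_decision(50_000, 5_000))   # approve, final
print(automated_credit_decision(20_000, 8_000))   # refuse, pending review
```

The `requires_human_review` flag is the design hook: without it, the refusal would be a purely automated decision with significant effects on the individual, exactly the situation Article 22 regulates.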

9. define autonomous robot

An autonomous robot is a machine or system capable of performing tasks without continuous human intervention, using sensors, artificial intelligence (AI), and decision-making algorithms to navigate and adapt to its environment. These robots can operate independently, learning from data inputs, making real-time decisions, and executing complex actions with minimal external control.

Legal and Regulatory Definition

  • In the context of law and governance, an autonomous robot is often defined based on its level of decision-making and liability implications. For instance, the EU AI Act categorizes autonomous robots with AI-driven decision-making as "high-risk AI systems" when deployed in critical sectors such as healthcare, autonomous driving, and industrial automation.

10. examples of autonomous robots

  • Autonomous Vehicles (AVs) – Self-driving cars that navigate roads using AI and sensors.

  • Industrial Robots – Robots in manufacturing that operate without direct human oversight.

  • Service Robots – AI-driven machines used in healthcare, hospitality, and logistics.

  • Military Drones – Unmanned aerial vehicles (UAVs) capable of executing missions autonomously.

11. smart devices and chatbots

Smart Devices

  • Smart devices are internet-connected electronic devices that use sensors, data processing, and automation to perform tasks autonomously or interactively.

    • Example: Smart home assistants (e.g., Amazon Echo), smartwatches, and IoT-enabled appliances.

  • Legal Context: Smart devices raise legal concerns regarding data privacy, cybersecurity, and liability, especially when integrated into autonomous vehicle ecosystems.

Chatbots

  • Chatbots are AI-driven software applications that simulate human conversations using natural language processing (NLP).

  • Example: Virtual assistants like Siri, AI-powered customer service bots.

  • Legal Context: Issues related to transparency, misinformation, and consumer protection arise when chatbots provide automated responses, especially in regulated industries.
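The transparency concern above is easiest to see in a rule-based chatbot sketch. Real assistants use statistical NLP rather than hand-written patterns; the rules and replies here are invented for illustration:

```python
import re

# A few hand-written patterns standing in for NLP intent detection.
RULES = [
    (re.compile(r"\b(hi|hello|hey)\b", re.I), "Hello! How can I help you?"),
    (re.compile(r"\bopening hours\b", re.I), "We are open 9am-5pm, Mon-Fri."),
    (re.compile(r"\b(bye|goodbye)\b", re.I), "Goodbye!"),
]

def reply(message):
    """Return the canned answer for the first matching pattern."""
    for pattern, answer in RULES:
        if pattern.search(message):
            return answer
    # Transparency: make clear the user is talking to a bot, not a person.
    return "I'm an automated assistant and didn't understand that."

print(reply("What are your opening hours?"))
```

The fallback line is the regulatory point: disclosing that the interlocutor is a machine is precisely the kind of transparency obligation the AI Act attaches to chatbots.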

12. facial recognition and biometric surveillance

Facial Recognition

  • Facial recognition is a biometric technology that identifies or verifies individuals by analyzing facial features.

  • Example: Unlocking a smartphone using face ID, security surveillance systems.

  • Legal Context: Raises concerns over privacy rights, consent, data protection (GDPR compliance), and bias in AI-driven recognition systems.

Biometric Surveillance

  • Biometric surveillance involves the automated collection and analysis of biological traits, such as facial features, fingerprints, and gait, for identification or tracking.

  • Example: Airport security systems, public CCTV cameras using AI-based biometric tracking.

  • Legal Context: The EU AI Act and GDPR impose strict rules on biometric data processing to prevent mass surveillance and ensure fundamental rights protection.

13. European Parliament Committee on Legal Affairs

says a smart robot is characterised by:

  • autonomy

    • achieved through sensors or by exchanging data with its environment, and by processing and analysing that data

  • self-learning

    • capability to learn from experience and through interaction

  • physical support

    • possession of at least a minor physical form, this distinguishes them from virtual entities like software

  • behaviour adaptation

    • ability to adapt actions and responses to the environment

  • non-biological nature

    • lack of life in the biological sense

This definition emphasises the robot's degree of autonomy, interactivity, and physical presence.

14. example of AI in the employment relationship

  • Document analysis: award-winning Luminance

  • Contract intelligence: ThoughtRiver scans contracts and evaluates risk

  • Document delivery

  • Legal Adviser Support

  • Clinical Negligence Analysis

  • Case Outcome Prediction

  • Public legal education: LawBot, DivorceBot, Neota Logic platform/McCann FitzGerald

  • ChatGPT

  • Fast, efficient, quality service; emergence of AI innovation awards; Legal Process Managers/Technicians

15. National Standards Authority of Ireland's Artificial Intelligence Hub

plays a key role in developing AI standards and regulatory frameworks in Ireland.

It supports businesses and researchers in aligning with national and EU AI regulations, including the EU AI Act.

CeADAR is Ireland’s national AI research and innovation center, focused on driving AI adoption through industry partnerships, research, and applied AI solutions. CeADAR provides expertise in AI, machine learning, and big data analytics, helping businesses leverage AI technologies effectively.

A Youth Assembly on AI focused on AI Sandbox Issues in Fintech and EdTech would provide young leaders, policymakers, and innovators a platform to discuss regulatory challenges, opportunities, and ethical considerations in these rapidly evolving sectors.

Key Themes for Discussion

  1. AI Regulatory Sandboxes

    • AI sandboxes are controlled environments where companies can test AI-driven products under regulatory supervision before full deployment. The Youth Assembly can explore:

    • The role of AI sandboxes in Fintech and EdTech innovation.

    • How regulators can balance innovation with consumer protection.

    • Case studies of AI sandboxes in Ireland, the EU, and globally.

  2. AI in Fintech: Opportunities & Risks

    • AI in banking (fraud detection, algorithmic trading, risk assessment).

    • Ethical concerns (bias in AI-driven credit scoring, data privacy).

    • Cross-border fintech regulations and compliance with AI laws.

  3. AI in EdTech: Future of Learning

    • AI-driven personalized learning and adaptive curriculum development.

    • Data privacy in EdTech platforms (student profiling, consent mechanisms).

    • Equity & accessibility in AI-powered education tools.

16. key elements of Ireland's National AI Strategy

  • Governance for Trustworthy AI

    • Ireland emphasizes the implementation of the EU AI Act, introducing a risk-based regulatory approach. The National Standards Authority of Ireland is tasked with developing AI standards to support compliance and innovation.

  • AI and the Future of Skills and Work

    • The strategy addresses AI's impact on the labor market, advocating for proactive policies to manage employment and skill transformations.

  • AI Ecosystem Development:

    • Ireland aims to establish itself as a leader in applied AI by supporting startups and SMEs, accelerating funding for AI initiatives, expanding testbed programs, and aligning national strategies for growth.

  • AI Literacy and Education:

    • The strategy highlights the importance of building AI literacy among educators and students, advocating for equitable access to AI tools in education and the creation of AI literacy programs.

  • AI Sovereignty and Infrastructure:

    • Investments in sovereign data initiatives and energy infrastructure are prioritized to ensure Ireland's competitiveness in the global AI economy, emphasizing control over data and securing resources for AI innovation.

  • Biometrics and Public Services:

    • The strategy provides recommendations for the responsible use of AI-powered facial recognition technology in public services, ensuring that privacy and ethics are safeguarded.

  • AI in the Creative Sector:

    • Addressing AI's transformative impact on creative industries, measures are suggested to protect creators' rights and combat potential misuse of AI in art, media, and entertainment.

  • Public Sector AI Adoption:

    • The strategy includes training government workers in AI, with 500 senior civil servants already trained, and plans to expand training across all civil and public service grades to improve public services.

  • Regulatory Sandboxes and Innovation:

    • Establishing an AI regulatory sandbox is proposed to foster innovation, allowing for experimentation within a controlled environment to balance innovation with proportionate regulation and trust-building measures.

  • Economic Impact Studies:

    • The strategy includes commissioning studies on the potential impacts of AI, including generative AI, on key sectors of the Irish economy to inform policy and strategic decisions.

17. AI Civil Liability Directive 2023/0303

The AI Liability Directive deals with claims for harm caused by AI systems or by the use of AI. It adapts non-contractual civil liability rules to artificial intelligence and introduces a new liability regime that ensures legal certainty, fosters consumer trust in AI, and assists consumers' liability claims for damage caused by AI-enabled products and services.