Chapter 12: Technology, Ethics, and Responsibility

12-1 Defining Technology

Types of AI

  • Artificial Narrow Intelligence (ANI)

    • Definition: AI systems designed and trained for a specific task or a narrow set of tasks.
    • Strengths: Excels at its intended function.
    • Limitations: Lacks general intelligence or the ability to generalize beyond its trained tasks.
    • Examples:

      • Virtual personal assistants (e.g., Siri, Alexa)
      • Recommendation algorithms
      • Image recognition software
      • Large Language Models (LLMs)

  • Artificial General Intelligence (AGI)

    • Definition: AI with human-like intelligence, capable of understanding and applying knowledge across various domains.
    • Current Status: Theoretical; active research is ongoing.

  • Artificial Super Intelligence (ASI)

    • Definition: Hypothetical AI that significantly surpasses human intelligence in all domains and activities.
    • Current Status: Speculative; no concrete examples exist.

LLMs as Artificial Narrow Intelligence

  • LLMs (e.g., ChatGPT) create an illusion of intelligence:
    • They do not possess true understanding or intelligence; they are advanced systems that generate text based on statistical patterns learned from training data.
    • Like other narrow AI systems such as Midjourney and DALL-E, which generate images from text prompts, they are designed for specific tasks.
    • These systems keep improving at their specific tasks but remain unreliable outside their narrow focus.
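The "statistical patterns" idea above can be illustrated with a deliberately tiny sketch: a toy bigram model that counts which word follows which, then emits the most frequent successor at each step. Real LLMs use neural networks over vast corpora, not word counts, so this is only an analogy for how plausible-sounding text can be produced with no understanding at all. All names here (`build_bigram_model`, `generate`, the sample corpus) are illustrative, not from the chapter.

```python
from collections import defaultdict, Counter

def build_bigram_model(text):
    """Count, for each word, which words follow it and how often."""
    words = text.split()
    model = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        model[current][nxt] += 1
    return model

def generate(model, start, length=5):
    """Greedily emit the most frequent next word at each step."""
    out = [start]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:  # no observed successor: stop generating
            break
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

corpus = "the cat sat on the mat the cat ate the fish"
model = build_bigram_model(corpus)
print(generate(model, "the", length=3))  # → the cat sat on
```

The model "writes" fluent-looking fragments purely from frequency counts, which is the chapter's point: pattern reproduction is not comprehension.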

12-2 Ethics Issues in Technology

Privacy

  • Data Protection
  • Cookies: Small data files saved by websites that can enable monitoring of users' activity.
  • Right to Be Forgotten: EU rule allowing individuals to request removal of unwanted links from search results.
  • General Data Protection Regulation (GDPR): EU law on data privacy; compliance can be costly for businesses, especially U.S. firms operating without comprehensive domestic data protection laws.

Surveillance

  • Surveillance Tools: Cameras, beacons, facial recognition tech, etc.
  • Concerns:
    • Inaccuracy of recognition technology, leading to misidentifications.
    • Racial bias in technologies such as facial recognition.
    • Critics call these technologies invasive, questioning whether their safety benefits outweigh individuals' privacy rights.

Employee Privacy

  • Limited legal protections for employee monitoring.
  • Monitoring used to manage productivity and protect resources, but the ethical implications are complex.

12-3 Managing Ethics Issues in Technology

  • Businesses hold the responsibility to use technology ethically.
  • Difficulty in identifying issues with emerging technologies.
  • Chief Privacy Officer (CPO): A designated executive who oversees privacy protection policies and crisis management; relevant experience with privacy laws is essential.
  • Technology Assessment: A method to evaluate the impacts of new technology on operations and stakeholders.
  • Government roles in maintaining technology infrastructure and regulation.

Debate Issue: Moravec’s Paradox

  • Definition: Tasks that are challenging for humans (like chess) are simple for computers, while tasks easy for humans (like vision) are difficult for AI.
  • Critics' View: Question whether certain tasks are uniquely human, and note how that perception shifts as AI capabilities evolve.
  • Proponents' View: Argue that AI functions merely as a "fast idiot," producing outputs quickly without mastering inherently human tasks.
  • Stance Options:
  1. AI development should aim to master human-like tasks to confront Moravec’s Paradox.
  2. AI development should prioritize tasks suited to machines, as replicating human features may lead to unforeseen risks.