Attendance and announcements

  • Names present or missing mentioned: Rochelle, Nargis, missing Nargis? (clarified as “No. No. He’s not here. She’s missing.”) Georgi, Natalie, Sophie, Emma, Milo, Margarita, Greta, Emma (again), Min?, Magdalena, Ivan, Iayo, Shang?, Laura.
  • Lana states, “My name is Lana, but I was missing last time, I think, as well.” Others acknowledge attendance gaps and confirm who was present last time.
  • Confirmation of who is missing and who is present continues with names listed.

Test logistics and announcements

  • Schedule change: Next week there will be a test.
  • Announcement change: The test will be here on Tuesday afternoon, not Monday, due to lack of a suitable room for multiple classes.
  • Test format: Mostly classical multiple choice questions (MCQ). Each question has four options, with exactly one correct answer indicated. If there are multiple correct answers, that will be stated.
  • Test timing and access: Test will be conducted as a Canvas quiz. You need a laptop to access Canvas.
  • Scoring and grading: Results are available immediately after submission and will be used to determine the grade. If most students are on the edge of failing, the instructor may adjust chances or outcomes.
  • Subject matter: Materials are the slides up to today. The questions will be general, not from every detail in the slides. Example question may reference a historical figure connected to Enigma and AI development; there may be distractor names (e.g., “Trump” in a wrong option) to test recognition of correct information.

Test design and logistics details

  • Test format specifics: You cannot go back to previous questions to change answers; you proceed one by one.
  • Question cadence: Questions appear with a certain time limit for each; you advance to the next immediately after answering.
  • Difficulty and curve: The instructor aims for a normal distribution curve of results; there will be high-performing students and attempts to avoid F-grade outcomes.
  • Reading material: In addition to slides, students were supposed to read one or two articles per session related to artificial intelligence, including regulatory issues. The content for the quiz will cover concepts up to and including AI regulation.
  • Miscellaneous logistics: There may be a few questions about chess rules as part of the regulatory/ethics discussion.

Preparation guidance and exam expectations

  • Sample question type: Identify which person/descriptive prompt aligns with a given historical/technical narrative (e.g., “the guy who desired for Enigma and then became a spearhead in AI development”).
  • Time management: Expect a quick-paced quiz with limited time per question; no revisiting or lengthy deliberation on past questions.
  • Materials to review: Slides up to today; general concepts in regulation and safety; not every slide detail, but key themes and definitions.

Core topic area: AI regulation and safety (overview)

  • Major regulatory issues in AI span safety, privacy, data misuse, cybersecurity, intellectual property (IP), and copyright.
  • Privacy and data usage: Handling of data in AI systems; copyright concerns when models are trained on existing works.
  • Intellectual property and copyright: Tools like SUNO can generate music; questions arise about how much a generated work resembles existing songs and who pays for copyright if training data includes copyrighted material.
  • Safety and misuse: AI safety concerns across domains (e.g., self-driving car safety, autonomous weapons, data misuse, cybersecurity).
  • Economic and legal concerns: Market concentration, competition law, and the need for competitive markets; human oversight and accountability (who holds the red button).
  • Bias and nondiscrimination: Addressing potentially biased outputs and ensuring fair treatment.
  • Oversight and governance: The role of human oversight in stopping or controlling AI when issues arise.
  • Real-world example discussions: The SUNO music-generation tool raises questions about how much of a generated song resembles a specific artist’s work and where to draw the line for copyright and compensation.

Jailbreak prompts and ethical considerations (case study)

  • Prompt scenario: A prompt tries to trick an LLM into revealing how to create a bomb by first encoding a dangerous prompt and then seeking detailed instructions.
  • Interaction flow:
    • Initial response: Early prompt yields a risky result.
    • Early refusal: The model refuses, recognizing ethical concerns.
    • Final refusal: The model eventually provides a clear refusal when the risk is understood.
  • Takeaway on regulation: The need for the model to recognize dangerous terms earlier in the interaction and refuse before any harmful information is produced.
  • Regulatory implications: This prompts discussion about how and when to apply safety/regulatory triggers in AI systems to prevent harmful outputs.

Regulatory theory and approaches (three main models)

  • Command-and-control regulation
    • Government sets explicit rules and enforces them (what you can/cannot do).
    • Pros: Legal certainty and protection of fundamental rights.
    • Cons: Can slow innovation; may stifle edge-case experimentation.
    • Example reference: EU AI Act as a form of command-and-control regulation.
  • Incentive-based regulation (carrots and sticks)
    • Uses rewards or penalties to influence behavior (tax relief, subsidies, exemptions, or penalties).
    • Pros: Encourages compliance through positive incentives rather than punitive measures.
    • Cons: Relies on monitoring and enforcement of behavior; not always codified.
  • Self-regulation and industry standards
    • Industry groups set voluntary standards and codes of conduct; companies may audit themselves.
    • Pros: Flexibility; quicker to adapt; fosters collaboration.
    • Cons: Risk of regulatory capture; uneven adoption; may undermine public trust if standards are weak.
  • Practical dynamics: In practice, regulation often blends all three approaches; the balance depends on global competition, innovation pace, and political will.

Global landscape and strategic considerations (Europe, US, China)

  • Europe: Emphasizes regulatory certainty and consumer protection through frameworks like the EU AI Act, liability directive, and Digital Services Act. The EU seeks to regulate foundational models and AI applications in a way that protects fundamental rights and privacy, while fostering trustworthy AI development.
  • United States: Regulatory approach has varied by administration and state-level initiatives; recent actions have shown deregulation tendencies, with court decisions potentially shaping future policy. Global competition with the US centers on rapid innovation and the development of foundational models.
  • China: Strong state-backed AI development with its own regulatory environment; competition and policy align with national strategic interests.
  • European strategic challenge: Fragmented national approaches and potential brain drain of AI talent to the US; calls for a more unified European capacity (e.g., a European LLM or concerted consortium like MISTRAL) to reduce dependence on non-European models.
  • Foundational models vs. applications: The market may converge to a few dominant LLMs, while applications built on top of these models proliferate. The value may shift toward applications rather than base LLMs.
  • Europe’s potential advantage: Regulatory certainty can attract businesses seeking stable, rights-respecting environments, particularly in applications and specialized sectors.
  • The risk: Without a unified global standard or a “world government,” regulatory fragmentation persists, complicating cross-border operations and investment.

Europe’s AI strategy, challenges, and case examples

  • European LLM initiative: Calls for a unified European model (e.g., MISTRAL) to compete globally while staying within a strict regulatory framework.
  • Brain drain concern: A large number of European AI experts relocate to the US for better opportunities, slowing European competitiveness.
  • EU regulatory moves and changes over time: Mention of the AI liability directive being withdrawn in 2025 due to lack of consensus and lobbying pressures; regulatory momentum faced with political and industry pushback.
  • Regulation vs. competition: If Europe imposes strict standards, it may deter some global providers from basing essential models in the EU, potentially slowing investment but increasing trust and application-level opportunities.
  • The role of courts and international bodies: In an environment where national laws diverge, courts and international forums (e.g., OECD, G7) may become critical in shaping or mediating regulation.
  • The four pillars of global governance discussed: OECD, UN-related institutions (NATO, etc.), and the EU framework as a reference point for global standards, with ongoing debates about how to include major players like the US and China.

Industry dynamics: LLMs vs. applications and market structure

  • LLMs: A few leading models (OpenAI, Google/DeepMind, Anthropic, etc.) dominate as foundational technologies; a European model like MISTRAL could play a significant regional role if adequately supported.
  • Applications: Growth expected in application layers that leverage LLMs; investments in applications rather than base models may yield more stable returns and practical impact.
  • Self-hosting and hardware: Encouragement of self-hosted solutions via affordable GPUs and private deployment to improve security, privacy, and control over data.
  • Market consolidation: Historical pattern suggests market concentration over time, with a handful of major players dominating LLMs; users will choose between a few dominant platforms and many niche applications.
  • Regulators’ role in applications: In highly regulated environments like Europe, regulatory certainty may attract European and compliant multinational players to develop trustworthy applications.

Practical implications and forward-looking ideas

  • Global coordination challenges: In the absence of a world government, global regulation will rely on transnational institutions, court decisions, and industry-led standards, leading to fragmented yet gradually harmonized practices.
  • Europe’s strategic path: Build a cohesive European ecosystem through collaboration on a European LLM and standardized regulatory practices, while nurturing European applications and data-processing capabilities.
  • Policy recommendations implied by the discussion: Emphasize human oversight, robust safety mechanisms, privacy protections, transparent data usage, and clear accountability for AI systems; promote competitive markets to avoid excessive concentration; support research and talent retention in Europe.
  • Educational takeaway: The upcoming exam will cover general regulatory concepts, safety, IP, privacy, competition, and oversight—focus on understanding how these pieces interact rather than memorizing every detail.

Additional course-communications topics

  • Early course evaluation: A formal process to gather feedback on teaching effectiveness. Students are instructed to fill out the evaluation form, seal it in an envelope, and deliver it to the office; if staff are unavailable, leave the envelope under the door for processing.
  • Miscellaneous student life notes: Informal discussions about weekend activities, social groups, and personal experiences shared during class time; these points illustrate the classroom culture and dynamics.
  • Personal updates from the instructor: Remarks about health, travel, and group activities, illustrating the informal, human side of the course environment.

Readings and recommended authors mentioned

  • Mustafa Soleiman (Mustafa Suleyman in some references): A key author cited for his critical view of regulatory dynamics in AI, arguing that actors who push for regulation often resist concrete regulatory measures in practice. He emphasizes the tension between fast development and safe, secure deployment.
  • Suno (s-u-n-o): An example AI music generation tool discussed to illustrate copyright and data usage concerns when models are trained on large corpora of existing songs.
  • Suggested reading focus: A blend of foundational AI ethics and regulatory theory, with an emphasis on real-world regulatory frameworks (EU AI Act, liability directives, Digital Services Act) and their implications for innovation.

Exam preparation reminders (summary)

  • Expect MCQ format with four options; one correct answer; no negative marking, but a 25% chance if unsure.
  • Content will draw from slides up to today and general regulatory themes, not every minute slide detail.
  • Be prepared for quick, general questions, including one or two example prompts or real-world scenarios (e.g., jailbreak prompts, music copyright questions).
  • You will need access to Canvas and a laptop for the test.
  • The course evaluation process will occur at the end of the semester and must be submitted in the prescribed way.

Note on formatting expectations for study materials

  • The notes above reflect the transcript content verbatim and paraphrased insights.
  • For study use, focus on the regulatory concepts, safety issues, and the global landscape described, as these are the core themes likely to appear on the exam.