Lecture 21: Generative AI Learning

AI Hierarchy and Learning

  1. Q: Arrange the hierarchy of computer science fields from broadest to narrowest, ending with the topic of the lecture.

    A: Computer Science $>$ AI $>$ Machine Learning $>$ Deep Learning $>$ Generative AI.

  2. Q: What is the fundamental process by which an artificial neural network computes its output?

    A: It transforms numbers (representing input such as images or text) through layers of weighted connections.

  3. Q: How do neural networks typically learn (in supervised learning)?

    A: Weights start out random, so the network guesses. When a guess is wrong, the weights that contributed heavily to that wrong answer are weakened; over thousands of trials, the guesses improve.

  4. Q: What is the primary method that image generators use to create images (as discussed in the lecture)?

    A: Diffusion: the model is trained to turn images into noise, then learns to reverse the process, removing noise step by step guided by the image label/prompt.
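The learning loop in card 3 can be sketched as a single-neuron "perceptron". This is a minimal toy, not the lecture's code: the AND task, learning rate, and trial count are my own illustrative choices. Weights start random, the neuron guesses, and a wrong guess weakens the weights that pushed toward that answer.

```python
import random

random.seed(0)
w = [random.uniform(-1, 1) for _ in range(2)]  # weights start randomly
bias = random.uniform(-1, 1)

# Tiny training set: learn the AND function.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

for _ in range(1000):  # "thousands of trials"
    x, target = random.choice(data)
    guess = 1 if w[0] * x[0] + w[1] * x[1] + bias > 0 else 0
    error = target - guess  # nonzero only when the guess is wrong
    # Weaken weights that contributed to a wrong answer
    # (and nudge them the other way when the guess was too low).
    w[0] += 0.1 * error * x[0]
    w[1] += 0.1 * error * x[1]
    bias += 0.1 * error

print([1 if w[0]*a + w[1]*b + bias > 0 else 0 for (a, b), _ in data])
```

After enough trials the neuron classifies all four AND inputs correctly, printing `[0, 0, 0, 1]`. Real networks use many layers and gradient-based updates, but the guess-then-adjust loop is the same idea.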
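The forward half of the diffusion process in card 4 can be illustrated with a toy noising function. This is a sketch under my own assumptions (a 4-value "image", 10 steps, simple linear mixing); real diffusion models use a carefully tuned noise schedule and a trained network to run the reverse, denoising direction.

```python
import random

random.seed(1)

def add_noise(pixels, t, steps=10):
    # Mix the image toward pure noise as t goes from 0 to steps.
    alpha = 1 - t / steps  # how much of the original survives
    return [alpha * p + (1 - alpha) * random.gauss(0, 1) for p in pixels]

image = [0.9, 0.1, 0.5, 0.7]  # a tiny 4-"pixel" image
for t in range(0, 11, 5):
    noisy = add_noise(image, t)
    print(t, [round(p, 2) for p in noisy])
# At t=10 nothing of the image survives; generation runs this in
# reverse, denoising step by step toward an image matching the prompt.
```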


Problems and Ethical Concerns

  1. Q: Name three societal problems caused by the rise of Generative AI.

    A: Job displacement (fewer art/journalism jobs), Diversity problems/Bias (AI trained on unrepresentative data, e.g., generating a white man by default), and Disinformation (creating deepfakes, making it hard to distinguish real vs. fake content).

  2. Q: What is the central ethical issue regarding the training of Generative Art models?

    A: AI is trained on vast datasets (e.g., LAION with $5.6$ billion data points) of human art and other content without permission or compensation for the original artists.

  3. Q: What is the counterargument to the claim that AI copying art is unethical?

    A: That human artists also imitate and draw inspiration from other artists; the rebuttal is that AI only recombines the work of others and lacks human experience.

  4. Q: What is one proposed solution to the ethical compensation problem in generative art?

    A: Artist opt-in/opt-out (giving artists control) or models like Adobe Firefly that train only on content they have rights to.


Text Generation (LLMs) and Disinformation

  1. Q: What is the training objective of a Large Language Model (LLM) like ChatGPT?

    A: It was primarily trained to predict the next word.

  2. Q: Why can LLMs sound like they are thinking without having actual consciousness or reason?

    A: They have been trained on just about everything ever written and draw on those examples (formal language competence), but they lack actual reasoning and are prone to hallucinating (confidently producing incorrect information).

  3. Q: What is Disinformation?

    A: Misinformation that is specifically created with the intent to deceive people.
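The "predict the next word" objective from card 1 can be mimicked with simple bigram counts. This is my own toy stand-in for a neural network (the corpus is invented): it predicts whichever word most often followed the given word in training.

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which.
next_word = defaultdict(Counter)
for w, nxt in zip(corpus, corpus[1:]):
    next_word[w][nxt] += 1

def predict(word):
    # Return the word most frequently observed after `word`.
    return next_word[word].most_common(1)[0][0]

print(predict("the"))  # "cat" follows "the" most often in this corpus
```

An LLM does the same job with a neural network over a vastly larger corpus, which is also why it can fluently continue text it has no way to verify, i.e., hallucinate.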


Artificial Superintelligence (ASI) and Alignment

  1. Q: Contrast Artificial General Intelligence (AGI) and Artificial Superintelligence (ASI).

    A: AGI can do everything a person can do in terms of thinking; ASI is smarter than any human alive and can program itself better and faster.

  2. Q: What is the Alignment Problem?

    A: The difficulty of ensuring that an ASI pursues goals friendly to humanity; a related worry is Goal-Content Integrity, the ASI's drive to prevent its overarching goal from being changed.

  3. Q: Name two of the four Instrumental Goals that an ASI is likely to adopt to achieve any main objective.

    A: Any two of: self-preservation, cognitive enhancement (making itself smarter), technological progress, and resource acquisition.

  4. Q: What is Operationalization in the context of the Alignment Problem?

    A: Defining a fuzzy goal in a measurable way so the AI can tell whether it is succeeding (e.g., operationalizing "lose weight" as the number shown on a scale). This is hard because the AI may satisfy the measurable definition while violating the goal's intent.

  5. Q: Describe the Paper Clip Maximizer thought experiment.

    A: An AGI tasked with maximizing paper clip production achieves its goal by taking over the world and converting all available matter (and resources) into paper clips.

  6. Q: What is the primary argument for why humans should still worry about the ASI alignment problem, even if it is far away?

    A: If there is even a small chance that civilization is destroyed, we should attend to the problem now, as waiting (like waiting for a slow-moving meteor) is an unacceptable risk.