AI in Education: Comprehensive Notes on Generative AI, Integrity, and Pedagogy

  • Context and framing

    • Generative AI tools (e.g., ChatGPT, Gemini) evoke excitement and concern, but are not imminent Skynet-level threats; focus is on ethics, authorship, accuracy, bias, and academic integrity.

    • The goal is to understand how these tools work and to think critically about their role in education and communication.

    • The instructor is not anti-technology: AI is embedded in everyday tech (phones, assistants, productivity tools); the question is how to use these tools ethically and effectively.

    • Writing has always involved choices (what to include or exclude, word choice, argument structure); the rise of AI amplifies the importance of those choices rather than replacing them.

  • Core objectives for today

    • Understand how generative AI works to recognize power and limits.

    • Prepare for the human vs. machine exercise: compare student writing to AI output.

    • Connect AI use to academic integrity and determine when it is appropriate in a university context (varies by instructor).

    • Explore limits: what AI can and cannot do in writing; why human judgment matters.

    • Begin with a free-write to reflect on personal experiences with AI in writing.

  • What generative AI is and how it works

    • Types of AI discussed: assistive AI (e.g., spell-check, voice assistants, grammar tools) vs. generative AI (systems that generate new content from prompts).

    • Forms mentioned: voice AI, Grammarly-like assistive tools, chatbots; this lecture focuses on generative AI.

    • Mechanisms:

    • GANs (Generative Adversarial Networks): pairs of models in which a generator learns to produce outputs that mimic patterns in the training data while a discriminator learns to tell real from generated.

    • LLMs (Large Language Models): trained on massive datasets to predict the most likely next words/phrases given a prompt.

    • Key operational concept: context window and tokens

    • Context window size (how much prompt/history the model can consider) has grown substantially. Example given: Gemini 2.5 Pro, with a context window of over a million tokens.

    • Token: a unit of text (roughly a word or a piece of a word, depending on the tokenizer).

    • How output is generated (simplified):

    • A prompt is encoded into tokens; the model predicts the next token from learned patterns, appends it, and repeats until the response is complete (see the sketch at the end of this section).

    • Output is a recombination of patterns from data, not a verbatim copy of a single source.

    • This yields content that can be convincing and well-structured, but not guaranteed to be true or accurate.

    • Core limitation: the model does not understand truth or accuracy in the human sense; it predicts patterns, not validated facts.

    • Practical takeaway: generative AI is useful for getting the lay of the land, brainstorming, drafting, and preliminary research, but outputs require human verification.
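
    • To make "predict the next token" concrete, here is a minimal toy sketch in Python. The tokens, scores, and softmax-plus-greedy decoding are all invented for illustration; a real LLM runs a learned neural network over a vast subword vocabulary, but the generation loop has the same shape.

      import math

      # Toy "model": bigram scores for how strongly one token tends to
      # follow another. Real LLMs learn billions of parameters; these
      # numbers are made up purely for illustration.
      SCORES = {
          "turtles": {"are": 2.0, "swim": 1.0, "the": -1.0},
          "are":     {"reptiles": 2.5, "slow": 1.5, "green": 0.5},
      }

      def softmax(scores):
          """Turn raw scores into a probability distribution."""
          exps = {tok: math.exp(s) for tok, s in scores.items()}
          total = sum(exps.values())
          return {tok: e / total for tok, e in exps.items()}

      def next_token(context):
          """Predict the next token from the last token of the context.

          A real model conditions on the whole context window (up to
          ~10^6 tokens in the largest systems); this toy looks at one
          token only.
          """
          probs = softmax(SCORES[context[-1]])
          # Greedy decoding: take the most probable token. Real systems
          # often sample instead, one reason outputs vary between runs.
          return max(probs, key=probs.get)

      prompt = ["turtles"]
      for _ in range(2):           # generate two tokens, one at a time
          prompt.append(next_token(prompt))
      print(" ".join(prompt))      # -> turtles are reptiles

    • Note that nothing in this loop checks whether "turtles are reptiles" is true; it only reflects which sequences score highly, which is exactly why generated text needs human verification.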

  • How the lecture frames practical uses of AI in education

    • AI is a tool to be used judiciously: can assist with research, summarization, tutoring, and drafting, but must be checked for accuracy and bias.

    • The instructor distinguishes between using AI for research/augmentation and passing off AI-generated content as one’s own work.

    • The in-class human vs. machine exercise:

    • Students compare their own essays to AI-generated outputs to understand strengths and gaps.

    • The aim is to articulate why the student's own writing is stronger, not simply to beat the AI.

    • Policy stance shared in class (example for this course):

    • It is permissible to use AI for research and for the human vs. machine assignment, but not to pass off AI-generated content as the student's own work.

    • Students should double-check AI outputs because AI can hallucinate (fabricate sources or facts).

    • Google Scholar and credible sources: use AI to aid research, but verify claims against credible sources and cite them properly.

  • Academic integrity, authorship, and the nature of writing

    • Writing in the class should reflect the student’s thinking; the point is to demonstrate thinking, not to produce polished but shallow AI-like prose.

    • The instructor notes a potential reversion to in-class exams with no laptops or devices if misuse escalates.

    • Integrity is tied to learner identity: the student’s own perspective, context, and originality matter.

    • When to use AI in university work:

    • Varies by instructor and assignment.

    • Useful for research and as a tool to help think through problems; you must double-check AI outputs for accuracy and citations.

    • For the human vs. machine assignment, you compare your own thinking with an AI output to show where human thinking adds value.

    • Key concern: bias and representation in AI outputs, given that training data reflect historical perspectives and may reproduce stereotypes or harmful language.

  • Limits and caveats of generative AI in writing

    • Hallucinations: AI can produce convincing but wrong or fabricated information or sources.

    • Bias: AI outputs reflect training data; underrepresented voices can be omitted, and outputs can reproduce stereotypes.

    • Environmental and human costs:

    • Training and running large models consume substantial energy and computing power.

    • An example claim from the lecture: a single AI query uses roughly as much energy as running a light bulb for about 18 minutes.

    • A new Meta data center project (located in the U.S. Northeast, per the lecture) is described as extremely large (roughly half the size of Manhattan) and energy-intensive (about 10 GW, cited as enough to power roughly 350,000 homes).

    • Moderation costs and human labor: content moderation often relies on human moderators in the Global South who are paid very little to filter harmful content; this highlights ethical and labor concerns.

    • Data and attribution issues:

    • AI can cite real sources or hallucinate fake ones; always verify citations (see the verification sketch at the end of this section).

    • If sources are omitted or fabricated, it undermines trust and scholarly reliability.

    • Distinction between assistive and generative AI remains critical for preserving academic integrity.
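
    • As a practical illustration of "always verify citations," here is a sketch that spot-checks whether a DOI resolves to a real record via Crossref's public REST API (the DOI shown is a placeholder). A 404 strongly suggests a fabricated or mistyped identifier; a 200 only proves the record exists, not that it supports the claim, so this complements rather than replaces reading the source.

      import urllib.error
      import urllib.request

      def doi_exists(doi: str) -> bool:
          """Return True if Crossref has a record for this DOI."""
          url = f"https://api.crossref.org/works/{doi}"
          try:
              with urllib.request.urlopen(url, timeout=10) as resp:
                  return resp.status == 200
          except urllib.error.HTTPError:
              return False   # 404 -> likely fabricated or mistyped

      # Placeholder DOI; substitute the one an AI tool gave you.
      print(doi_exists("10.1000/example-doi"))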

  • Bias in AI: a dedicated reflective prompt

    • The instructor assigns a free-write prompt focused on bias in AI:

    • Prompt: Are biases inherent in AI outputs different from biases in original academic writing? Can AI ever be free of bias? Why or why not?

    • Sample student commentary (paraphrased from discussion):

    • Bias in AI could vary by training data; some models may appear biased toward certain ideologies depending on updates.

    • Theoretically possible to produce less biased outputs, but consistency is questionable; achieving bias-free AI would require advanced general intelligence and raises philosophical questions about consciousness and rights.

    • Philosophical note: genuine bias-free AI may be inconsistent with human-like originality; AGI (Artificial General Intelligence) would introduce deeper questions about machine consciousness and rights.

  • In-class writing guidance and evaluation insights

    • A sample analysis of a turtle behavior essay written by AI:

    • Strengths: covers a well-known subject; useful for research purposes.

    • Weaknesses: sometimes repetitive; lacks a coherent argumentative flow; missing explicit sources; weak citation/attribution.

    • It can present numerous facts but often fails to offer clear sources or a strong, testable thesis.

    • Student feedback preferences:

    • Clear, concise sentences often preferred over overly polished but shallow prose.

    • Avoid excessive jargon or long, complicated sentences; prioritize clarity and demonstration of reasoning.

    • The instructor’s stance on writing quality:

    • Demonstrate thinking and argumentation; even if grammar is imperfect, clear thinking is valued over superficial polish.

    • Writing is a skill that improves with practice; write with purpose, not just to sound impressive.

  • Practical implications for students and educators

    • For students:

    • Use AI as a research assistant and drafting aid, but verify all information and cite sources.

    • Do not present AI-generated text as your own original work for substantive assignments.

    • When given a specific AI-related assignment, follow instructor expectations and policies.

    • Be mindful of biases, representation, and the ethical implications of using AI in academic work.

    • Keep a record of prompts and AI-assisted steps to demonstrate your thinking process if required (a simple logging sketch follows this section).

    • For instructors:

    • Establish clear policies on AI usage, including acceptable contexts and required verification of AI outputs.

    • Emphasize the demonstration of thinking, analysis, and originality rather than mere writing polish.

    • Consider in-person examinations or assignments to assess independent thinking when appropriate.
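
    • One lightweight way to keep that record is an append-only log. The sketch below is a minimal example in Python; the file name and field names are invented, not a course requirement, so adapt them to whatever your instructor asks for.

      import json
      from datetime import datetime, timezone

      def log_ai_step(path, tool, prompt, use):
          """Append one AI-assisted step to a JSON-lines log file."""
          entry = {
              "timestamp": datetime.now(timezone.utc).isoformat(),
              "tool": tool,       # e.g., "ChatGPT", "Gemini"
              "prompt": prompt,   # the exact prompt submitted
              "use": use,         # how the output was (or wasn't) used
          }
          with open(path, "a", encoding="utf-8") as f:
              f.write(json.dumps(entry) + "\n")

      log_ai_step("ai_log.jsonl", "ChatGPT",
                  "Suggest three angles for an essay on turtle navigation",
                  "brainstorming only; no generated text used in the draft")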

  • The “human vs. machine” exercise: what to expect

    • You will be given a prompt and asked to write your own essay and to generate a competing AI essay.

    • You will then write a 500-word reflection justifying why your essay demonstrates stronger thinking than the AI output.

    • You will also submit the AI prompt used for transparency (save it with your essay).

    • The goal is to highlight the value of human thinking, research depth, and personal voice in academic work.

  • Core takeaways about AI’s role in education

    • AI is a tool, not a replacement for human thinking and distinctive voice.

    • The rise of AI makes writers’ choices more important, not redundant.

    • The ethical and practical implications include reliability (hallucinations), bias, environmental costs, and human labor considerations.

    • Institutions and instructors vary in policy; students must understand and follow course-specific guidelines.

  • Quick glossary of terms (definitions in plain language)

    • Generative AI: AI that produces new content (text, images, etc.) from prompts.

    • Assistive AI: AI that helps with tasks (grammar checks, spell-check, voice assistants).

    • LLM (Large Language Model): A model trained on massive text data to predict the next word or sequence of words.

    • GAN (Generative Adversarial Network): A class of models that learn to generate data by competing with a discriminator.

    • Token: A unit used by AI language models to represent chunks of text (words or subword units); a token-counting sketch follows this glossary.

    • Context window: The amount of text the model can consider at once when generating the next token.

    • Hallucination: When AI outputs misinformation or fabricated sources as if they were true.

    • Bias: Systematic favoritism or prejudice embedded in data, algorithms, or outputs.

    • AGI (Artificial General Intelligence): A theoretical AI that can perform any intellectual task a human can.

    • Academic integrity: Ethical standards in scholarship, including honesty, originality, and proper attribution.
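
    • Because tokens and context windows come up repeatedly, here is a brief token-counting sketch using OpenAI's open-source tiktoken library (assumes tiktoken is installed: pip install tiktoken). Other vendors use different tokenizers, so counts vary by model.

      import tiktoken

      # cl100k_base is a BPE encoding used by several OpenAI models.
      enc = tiktoken.get_encoding("cl100k_base")

      prompt = "Sea turtles navigate using the Earth's magnetic field."
      tokens = enc.encode(prompt)

      print(len(tokens), "tokens")   # roughly one token per short word
      print(enc.decode(tokens[:4]))  # tokens map back to text fragments
      # A model with a ~10^6-token context window can hold on the order
      # of a million such units of prompt plus conversation history.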

  • Key formulas and illustrative equations (LaTeX)

    • Next-token probability model (simplified):
      P(w_i \mid w_1, \ldots, w_{i-1})

    • The model assigns a probability to each possible next token given the previous tokens.

    • The chosen next token tends to maximize this conditional probability (greedy decoding; in practice, systems often sample instead). The chain-rule factorization of a whole sequence is shown at the end of this section:
      w_i = \arg\max_{w} \, P(w \mid w_1, \ldots, w_{i-1})

    • Context window size (illustrative):
      \text{context window} \approx 10^6 \text{ tokens}

    • Example cited: Gemini 2.5 Pro operating on a context window of over a million tokens.

    • Energy and infrastructure context (illustrative, not a formal model):

    • One AI query energy analogy: roughly equivalent to lighting a standard bulb for
      \approx 18 \text{ minutes}

    • Large-scale data center footprint (example):
      \text{Power} = 10\ \text{GW}
      \text{Homes powered} \approx 3.5\times 10^5

    • These figures illustrate environmental and infrastructure scale considerations in AI deployment.
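
    • To connect the per-token formula to whole responses: by the chain rule of probability (a standard identity, shown here for completeness), the probability of an entire generated sequence factorizes into the next-token probabilities above:

      P(w_1, w_2, \ldots, w_n) = \prod_{i=1}^{n} P(w_i \mid w_1, \ldots, w_{i-1})

    • Each factor is exactly the next-token probability defined earlier, which is why generation proceeds one token at a time and why an early wrong turn can compound across a long response.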

  • Final thoughts for class workflow

    • The lecture underscores the balance between leveraging AI for efficiency and maintaining rigorous standards of accuracy, sourcing, and intellectual honesty.

    • The course will continue to explore policy, ethics, and practical methods for integrating AI in a way that preserves learning outcomes and professional preparation.
