AI in Schools — Key Points from Transcript (April 2025)

Context and Paradox

  • AI in education is creating a paradox: educators worry about students using A.I. to cheat, while many educators personally use A.I. tools to save time on routine tasks and even to assist with more meaningful work like grading and tutoring.

  • The broader claim from the piece: A.I. "is already being used by the majority of teachers and students" in some contexts, reflecting rapid adoption alongside ethical concerns.

  • Tension for school leaders: marketing claims that A.I. can “transform,” “personalize,” and “accelerate” learning clash with concerns about data use, bias, and the potential disruption of human teacher-student relationships.

  • Key figure: Jennifer Carolan (Reach Capital) notes widespread current use but cautions about aggressive marketing and the need for educators to evaluate products carefully.

  • Core ethical questions emerge: if students are barred from using A.I. for assignments but teachers grade with A.I., is that fair? How should schools balance efficiency with preserving human relationships and the developmental value of authentic student work?

  • Practical concern: A.I. might shift adults’ attention away from children if not implemented thoughtfully, underscoring the need for A.I. tools that ease bureaucratic burdens without eroding classroom engagement.

  • Examples of where A.I. intersects with teaching practice include tutoring bots, data analysis for targeted supports, and automated feedback on writing.

  • Real-world framing: educators worry about the pace of tech deployment amid marketing claims and the actual impact on learning and equity.

Cheating vs. Homework Help

  • Middle school context: students can photograph a math problem and feed it to free A.I. apps (e.g., PhotoMath, Google Lens) to obtain the correct answer plus step-by-step solutions, enabling copying of these steps.

  • Alex Baron (E.L. Haynes Public Charter School, Washington, D.C.) characterizes this as a form of cheating, highlighting ethical concerns around dependency on A.I. for problem-solving.

  • Counterpoint: Baron also sees legitimate uses of A.I. in his own work, such as analyzing academic and behavioral data to create targeted student groups for support.

  • Google-owned tools in play: PhotoMath and Google Lens are cited as examples of widely available A.I. assistance.

  • Robert Wong (Google, Director of Product Management for Learning and Education) suggests that A.I. tools are especially valuable for students whose parents cannot help with math homework, and points to a small study indicating that cheating has less to do with access to A.I. than with engagement in class.

  • Broader use: in Llano, Texas, Maurie Beasley advocates using A.I. to personalize assignments, offering varied problem contexts (e.g., velocity problems framed around a speeding baseball vs. a dancing figure) to accommodate different learner needs.

  • “Gray areas” exist where A.I. can be used for supportive, transparent learning activities (e.g., scaffolding, exposure to multiple representations) while avoiding use that erodes core learning outcomes.

Transparency, A.I.-Literacy, and Ethical Use

  • Providence, Rhode Island: middle school history teacher Jon Gold uses generative A.I. to support lesson planning after training ChatGPT on his own curriculum materials.

    • A.I. can edit long readings into shorter segments (e.g., three-paragraph summaries) and generate dummy essays to illustrate good versus weak evidence.

    • Transparency is emphasized: he explains to students how the A.I. was used, modeling ethical use.

    • He uses A.I. to demonstrate process, not to replace student work; he wants students to seek diverse sources and synthesize information themselves.

    • He also discusses knotty ethical issues around copyright (A.I. relies on copyrighted material) and energy consumption.

    • Quoted stance: “I am more pro-A.I.-literacy than I am pro-A.I.-use.”

  • Core takeaway: fostering A.I.-literacy—understanding how tools work, when to rely on them, and how to evaluate information—while restricting uncritical use in core tasks like drafting essays or independent research.

A.I. in Writing and Automated Scoring

  • Writing support for teachers: A.I. can provide instant feedback on student writing, enabling teachers to assign more writing even with limited grading time.

  • Commercial tools mentioned: MagicSchool and Brisk Teaching offer AI-driven, real-time feedback on student writing.

  • High-stakes automated scoring:

    • In 2020, Texas signed a five-year contract with Cambium Assessment to deliver automated scoring of student writing as part of state testing, with the contract valued at about $391 million over the five-year period.

    • The system is not open-web generative A.I.; it is an older form of A.I. trained on human-graded writing samples.

  • Dallas ISD case:

    • About 4,600 student writing samples were submitted for regrading; roughly 2,000 received higher scores after reevaluation.

    • Texas Education Agency spokesperson Jake Kobersky noted the adjustments were minor in the context of the overall set of writing samples (Dallas’s ~71,000 samples).

    • Dallas superintendent Stephanie Elizalde acknowledged concerns about automated scoring but stated the district uses AI to grade practice AP essays, summarize documents, and analyze large data sets; she emphasizes teaching students to verify information from chatbots and frames AI literacy as essential for the future.

    • Key quote: “It’s irresponsible to not teach it. We have to. We are preparing kids for their future.”

  • Broader implications: while automated scoring can be controversial due to bias and error, human grading also has bias; generative AI may offer consistency for simple writing tasks, prompting debates about appropriate uses in assessment.

  • Industry momentum: AI-assisted writing support and automated scoring are part of a broader push toward AI-enabled assessment and feedback across education.

Big Business of AI in Schools

  • Investment landscape: over the past two years, companies operating at the intersection of AI and education have raised approximately $1.5 billion in funding.

  • Leading players promoting AI for student research, tutoring, and teacher lesson planning include Google, Microsoft, and Khan Academy.

  • This escalation positions A.I. as a central element of educational technology markets and policy discussions.

Vision, Tools, and Practical Trade-offs

  • Google’s vision: “a tutor for every learner and a T.A. for every teacher.” The Gemini chatbot exemplifies tools designed to probe and prompt students to demonstrate and practice understanding.

  • School leaders’ calibration task: determine which AI tools will be essential vs. which should be passed up or used selectively.

  • Alex Baron’s caution: one AI product that watches video footage of teachers and provides feedback could threaten the integrity of teacher observation and evaluation; he would rather have AI assist with operational tasks like master scheduling and substitutes than outsource teaching evaluation.

  • Moderation and literacy: Jennifer Carolan warns about overly aggressive marketing of AI products and emphasizes the need for educators to be fluent in evaluating AI technologies themselves.

  • Real-world cautions: Los Angeles experienced a failed attempt to deploy an AI chatbot for students and families via an inexperienced startup; the company’s CEO faced fraud charges, highlighting risks of rushed implementations without sufficient due diligence.

  • Teacher experiences in classrooms:

    • Mike Sullivan (middle school math, Brockton, MA) estimates about half of his students use problem-solvers like Google Lens; some use AI for homework help, and some attempt to use it during in-class quizzes.

    • He sees the convenience of AI but questions overreliance; he would welcome AI tools that could help digitize and transfer student work from paper to a digital grade book, illustrating a desire for AI to handle administrative tasks rather than replace essential teacher time with evaluation.

Ethical, Philosophical, and Practical Implications

  • Core themes:

    • Fairness and equity: if access to AI tools varies, how do schools ensure equitable opportunities and avoid widening gaps?

    • Integrity of learning: balancing efficiency and the authentic development of writing, critical thinking, and research skills.

    • Human relationships: safeguarding the teacher-student relationship as central to learning, while leveraging AI to support instruction.

    • Transparency: educators like Jon Gold emphasize being explicit about how AI is used and modeling ethical behavior for students.

    • Intellectual property and copyright: concerns about how chatbots rely on copyrighted material and the implications for student work.

    • Energy and resource use: awareness of the environmental footprint of AI tools.

    • Preparation for the future: framing AI literacy as an essential life skill, not simply teaching students to use AI tools.

  • Practical considerations for schools:

    • Distinguish between using AI to generate content vs. to critique, edit, or analyze existing material.

    • Prioritize tools that save time on administrative tasks and provide reliable feedback, while maintaining rigorous learning processes.

    • Support teachers with professional development to evaluate AI claims, understand limitations, and integrate tools responsibly.

    • Monitor student engagement and learning outcomes to ensure AI use enhances, rather than replaces, human instruction.

Key Terms and Concepts

  • Generative AI: AI systems capable of generating text, images, or other content in response to prompts.

  • AI literacy: The ability to understand how AI works, its limitations, ethical implications, and how to use it responsibly.

  • Automated scoring: Computer-based scoring systems used to evaluate student writing or exams, often trained on prior human-scored data.

  • Tutoring AI / T.A. for teachers: AI tools designed to assist students and support teachers with tasks such as feedback, practice, and scheduling.

  • Personalization: Tailoring assignments or feedback to individual learners using AI-driven insights.

  • Energy consumption: Environmental impact associated with running AI systems, acknowledged as a consideration in ethical use.

  • Copyright considerations: Issues related to the data and sources used to train AI and how that material appears in student work or outputs.

Connections to Foundational Principles and Real-World Relevance

  • Human-centered teaching: The discussions emphasize preserving teacher-student relationships, mentorship, and the human element of learning even as tools automate routine tasks.

  • Equity and access: A recurring theme is ensuring that AI benefits are distributed fairly and do not exacerbate existing gaps.

  • Critical thinking and source verification: The Dallas and Providence examples stress verifying information from AI outputs and teaching students to synthesize multiple sources.

  • Evolution of assessment: Automated scoring highlights ongoing debates about reliability, bias, and the role of human judgment in high-stakes evaluation.

  • Real-world implications: The article situates AI within actual school systems, budgets, partnerships, and policy contexts, illustrating both opportunities and risks of rapid adoption.

Summary Takeaways

  • There is a tension between limiting student use of AI and expanding teacher use to improve efficiency and learning outcomes.

  • AI can support personalized learning, data analysis, and administrative tasks, but raises concerns about cheating, inequity, and weakening teacher-student relationships.

  • Transparency and AI literacy are critical for responsible use; educators advocate modeling ethical use and requiring critical evaluation of sources.

  • Automated scoring and AI-enhanced feedback are increasingly embedded in education, with significant financial investments and policy implications.

  • Real-world cases (Providence, Dallas, Los Angeles, Llano) illustrate both the potential benefits and the risks of AI in schools, including fraud concerns and failures due to rushed implementations.

  • The future of AI in education hinges on thoughtful integration that preserves core educational values while leveraging AI to reduce burdens and enhance learning.