Notes on ‘Who Wouldn't Cheat?’ (New York, 2025)
Case Overview: CHUNGIN "ROY" LEE and the Rise of AI-Assisted Cheating
- Publication context: Article titled "Who Wouldn't Cheat?" published in New York on 5/5/2025 (MasterFILE).
- Central figure: CHUNGIN "ROY" LEE, a computer-science major who used generative AI to cheat on nearly every assignment at Columbia University after arriving on campus in the fall.
- Early admission episode: Born in South Korea and raised outside Atlanta; his parents run a college-prep consulting business. He was admitted to Harvard early in his senior year but the offer was rescinded after a suspension for sneaking out during an overnight field trip. A year later, he applied to 26 schools and did not get into any of them, spending the next year at a community college before transferring to Columbia. His personal essay to Columbia reportedly used ChatGPT.
- Core claim about cheating: By his rough math, AI wrote 80 ext{ extbackslash%} of every essay, with the human touch accounting for 20 ext{ extbackslash%} of his final product.
- Columbia experience and attitude: Lee reportedly didn’t worry about GPA or the core curriculum; he believed many assignments were AI-hackable and not personally meaningful. He framed his time at Columbia as a place to meet a cofounder and his future wife, which did materialize as he formed partnerships and a startup network with Neel Shanmugam.
- Founding of startup ideas and pivot to AI-enabled cheating tools: After forming a cofounder relationship with Shanmugam, Lee helped develop several startup concepts (a dating app for Columbia students, a liquor-distributor sales tool, a note-taking app) that did not succeed. He then pivoted to a program that hid AI from browsers during remote job interviews; this led to the creation of Interview Coder.
- Interview Coder launch (early 2025):
- Website banner stated “f*ck leetcode.”
- Lee posted a YouTube video showing how to cheat a remote internship interview (specifically with Amazon). He ultimately got the internship but declined it.
- A month later, Columbia’s academic-integrity office placed him on disciplinary probation for allegedly "advertising a link to a cheating tool" and "providing students with the knowledge to access this tool and use it how they see fit" per the committee’s report.
- Policy context: Lee argued that Columbia’s AI policy (similar to many universities’) bans AI unless explicitly permitted by the professor; he claimed most students were using AI to cheat, despite policy.
- Personal stance: Lee believes that society will eventually view AI-assisted homework as non-cheating, suggesting a broader shift in norms around AI use for academic tasks.
- Later developments (for context): In late March, Lee was suspended by Columbia after posting details about his disciplinary hearing on X. He later launched Cluely, a wearable-AI feedback product, signaling a pivot from cheating tools to AI-enabled assistance in real-time.
Broader Landscape: AI in Higher Education (Prevalence, Attitudes, and Adoption)
- Early surge in AI usage: A January 2023 survey of 1000 college students found that nearly 90 ext{ extbackslash%} had used ChatGPT to help with homework.
- Growth trajectory: In ChatGPT’s first year, monthly visits grew MoM until the summer of 2023; traffic dipped in summer 2024, reflecting seasonal usage patterns.
- Widespread adoption across sectors: By 2025, students across large state schools, Ivies, liberal-arts colleges, universities abroad, professional schools, and community colleges used AI to assist with note-taking, study guides, practice tests, summarization, brainstorming, outlining, and drafting essays. STEM students used AI to automate research and data analyses and to speed up coding tasks.
- Anecdotes illustrating use:
- A Utah student captioned a video describing copy-pasting a Genocide and Mass Atrocity chapter into ChatGPT to cheat.
- A Wilfrid Laurier University freshman (name changed) used ChatGPT for all classes (Indigenous studies, law, English, Green Industries) and reported dramatically improved grades, with concerns about dependence and time spent on screens.
- Countermeasures and classroom responses:
- Many professors retried AI-resistant assess strategies (oral exams, handwritten or paper exams, or formats designed to deter AI use).
- Some educators paused or stopped assigning certain types of tasks to curb AI misuse.
- Teacher and student attitudes:
- Some teachers report the language in AI-generated essays to be robotic, overly polished, or too balanced, signaling AI authorship.
- Students describe AI as a cognitive crutch that reduces the perceived effort and thought required to complete assignments.
- AI policy landscape at universities:
- Policies vary: AI allowed if cited as a source, allowed for conceptual help, or prohibited unless explicitly permitted by the professor.
- Students often treat policy language as a guideline rather than a hard rule.
- Trojan horse prompts and prompt-embedded traps:
- Professors have experimented with hidden prompts or phrases (e.g., small, unusual text inserted into prompts) to induce AI to reveal non sequiturs or inconsistencies in student work.
- Examples include phrases like "broccoli" or commands like asking about Aristotle’s perspective when not studied.
- Effectiveness is inconsistent; some teachers have observed success, others not.
- AI detectors and their reliability:
- Turnitin and other detectors analyze text for AI-like patterns and assign a likelihood score of AI-generated content.
- Detector performance varies across models and texts; some claim very low false-positive rates, others show higher false positives for neurodivergent or ESL students.
- Turnitin’s stance: detectors are trained to minimize false accusations (err on the side of caution to avoid false positives for students), sometimes yielding false negatives.
- Demonstrated detector results (examples from the article):
- A free AI detector (ZeroGPT) labeled Wendy’s essay as 11.74 ext{ extbackslash%} AI-generated, while a Genesis text yielded 93.33 ext{ extbackslash%} AI-generated to illustrate variability.
- Evasion techniques used by students:
- Rewriting AI-produced text in their own voice, introducing typos, or using multiple AI tools in sequence to obfuscate origin.
- Asking AI to rewrite in a specific style (e.g., a college freshman) to simulate authentic voice.
- Laundering text through other AIs or training AIs on past essays to tailor outputs to a student’s voice.
- Practical implications of detection limits:
- Difficulty in identifying AI authorship when a student heavily edits AI output.
- The problem compounds as AI quality improves, making detection harder and education policy more complex.
Real-World Consequences for Educators and Institutions
- Educator perspectives and burnout:
- Professors report a sense of existential crisis; humanities may become de-emphasized as AI lowers the perceived value of writing and critical thinking.
- Some educators consider retirement due to the scale of AI-enabled cheating and the difficulty of upholding traditional learning outcomes.
- Impacts on learning and cognitive development:
- Early research links AI usage with deteriorations in memory, problem-solving, and critical-thinking skills, especially among younger users.
- Microsoft and Carnegie Mellon University found that confidence in generative AI correlates with reduced critical-thinking effort.
- The broader concern: a potential shift away from high-effort inquiry toward AI-assisted verification and integration tasks.
- The Flynn effect and broader cognitive concerns:
- Flynn effect: historical rise in IQ scores slowed or reversed around 2006; some scholars worry generative AI may accelerate cognitive declines by reducing engagement in hard thinking.
- The role of mental health and belonging:
- Studies show cheating correlates with mental health, well-being, sleep, anxiety, depression, and sense of belonging; AI potentially exacerbates or reflects these issues.
Notable Narratives: Students and Educators Shared Experiences
- Wendy (fictionalized name): a freshman finance major who uses AI for all classes and demonstrates how students structure essays via outlines and bullet points before filling in content.
- Step-by-step approach: (1) establish persona; (2) input class context and prompt; (3) obtain an outline or topic-sentences to complete.
- Result: a five-page paper timed at 10:17 a.m., graded well, raising questions about the value of planning vs. rote generation.
- Irony: topic centered on Paulo Freire’s critical pedagogy; reflects tension between AI-assisted writing and humanities themes.
- Sam Williams (teaching assistant, University of Iowa): observed dramatic shifts in student writing quality and factual accuracy in an AI-enabled class; by November, ~50% of students used AI for papers; department stance discouraged failures due to AI use and emphasized grading the student’s true effort rather than the AI’s output.
- Outcome: Williams felt disillusioned and contemplated leaving academia; he was instructed to grade for a "true attempt at a paper" rather than penalizing AI-generated content.
- Denise Pope (Stanford): highlights that stopping rampant AI abuse would require holistic reforms beyond policing individual cases; emphasizes mental health and belonging factors.
- Lakshya Jain (UC Berkeley): argues that if you hand in AI-generated work, you’re effectively a human assistant to an AI—making you replaceable in the job market.
- Daniel and Mark (students at UF and University of Chicago, respectively): reflect on personal experiences of using AI for coding help and drafting; they discuss the boundary between assistance and original work, and the tension between cheating and learning.
Ethical, Philosophical, and Practical Implications
- Core ethical tension: balancing innovation with academic integrity; Is it cheating to leverage AI to complete tasks or to learn only through AI-augmented processes?
- Philosophical reflection on learning and humanity:
- Orin Starn (Duke): language asserting that
- Paulo Freire’s ideas about education emphasize critical thinking and human agency; AI usage tests whether schooling remains a space for genuine cognitive development.
- Practical implications for education systems:
- If AI becomes central to completing work, the value of traditional assignments as measures of learning declines.
- Institutions may need to redesign curricula and assessment strategies to emphasize process, mentorship, and holistic development rather than output quality alone.
- The social and economic dimension:
- Deloitte survey indicates that just over half of college graduates felt their education was worth the cost, compared with 76% of trade-school graduates, highlighting perceived value gaps and differing expectations between higher education and vocational pathways.
- OpenAI’s stance and market dynamics:
- Sam Altman frames ChatGPT as a calculator for words and argues that the definition of cheating should evolve with technology.
- OpenAI offered free access to ChatGPT Plus during finals for students, signaling institutional marketing and market expansion strategies; they also offer the ChatGPT Edu product for academic institutions.
Timeline of Key Data, Trends, and Policy Details
- Early adoption and growth metrics:
- 1000 students surveyed in January 2023; 90 ext{ extbackslash%} used ChatGPT for homework.
- ChatGPT’s user base increased through 2023 and 2024 with seasonal dips in summers.
- Policy and enforcement landscape (highlights):
- Many universities adopted a mix of allowances and prohibitions on AI usage, often requiring citation or limiting usage to conceptual help.
- Some institutions experimented with their own AI tools to assist students; others introduced new AI-focused majors or certificate programs.
- Comparative governance and detector accuracy:
- AI detectors exhibit variable efficacy; some claim low false-positive rates, while others reveal high false positives among certain student groups.
- Turnitin’s policy emphasizes avoiding false accusations, aiming for thresholds that minimize mislabeling genuine student work.
Case Studies and Real-World Outcomes
- Lee’s suspension and aftermath:
- After he disclosed details of his disciplinary hearing on X, Columbia suspended him; he did not plan to return to school or work for a big tech company.
- He pivoted to entrepreneurship, launching Cluely, a real-time AI feedback tool for screens and glasses, with the aim of transforming AI into a personal assistant for live interactions.
- Cluely: a startup with a publicized ad campaign:
- A $140,000 scripted advertisement depicted Lee using Cluely on his glasses to navigate a date, signaling a shift from academic misuse to consumer-oriented AI-enabled interaction.
- The venture raised 5.3 ext{ million} in funding and established a small team in San Francisco.
- The goal is to enable AI feedback and answers in real time, potentially extending to wearable devices and, eventually, brain interfaces.
Key Takeaways for Exam Preparation
- The AI cheating narrative is multifaceted, spanning individual behavior, institutional policy, and broader societal implications.
- AI can both erode traditional learning processes and catalyze new forms of skill development (e.g., prompt engineering, systems thinking, rapid ideation).
- Detection challenges persist: detectors are imperfect, and sophisticated users leverage multi-AI pipelines to reduce detectability.
- Policy responses are often pragmatic and evolving, with emphasis on transparency (citation) and balancing innovation with integrity.
- The long-term impact on education depends on systemic changes to curricula, assessment design, and the cultivation of critical thinking and resilience in students.
- Proportions and probabilities:
- Student AI usage in some contexts: frac{1}{2} ext{ or } 50 ext{ extbackslash%} undergraduates without AI access previously; current usage far exceeds that baseline.
- Specific numeric claims from the article:
- Two years: 2 years since ChatGPT’s emergence and rapid integration into coursework.
- AI-generated content share in Lee’s statements: 80 ext{ extbackslash%} AI-generated essays; 20 ext{ extbackslash%} human-authored refinements.
- Sample numbers cited for the Chegg pricing: oldsymbol{ ext{ extbackslash$15.95}} per month.
- Exam cheating and detector statistics: AI-generated detection results such as 11.74 ext{ extbackslash%} AI-generated (ZeroGPT) versus 93.33 ext{ extbackslash%} in another sample (Genesis).
- Policy thresholds cited by some professors for flagging: 25 ext{ extbackslash%} as an approximate benchmark.
- Admissions and school counts: 26 schools; 1000 surveyed students; 90 ext{ extbackslash%} usage rate.
- Monetary figures:
- Interview-related ad spend: ext{ extbackslash$140{,}000}$ for a scripted campaign.
- Investment in Cluely: ext{ extbackslash$5.3 million} in funding.
- High-level numeric estimates:
- AI is claimed to have influenced roughly 50 ext{ extbackslash%}$$ of undergraduates’ learning experiences in some contexts (as per related discussions about access and usage).
Final Reflection: What This Means for Studying and Exams
- The interview/article emphasizes the tension between leveraging AI as a tool and maintaining the integrity and depth of learning.
- It suggests that educators, policy-makers, and students will need to rethink assessment, pedagogy, and the definition of cheating in a world where AI is ubiquitous.
- The trajectory points toward a broader shift from traditional problem-solving and content generation toward evaluating how students curate, verify, and integrate AI-assisted outputs in meaningful ways.