Article: "ChatGPT Is Everywhere. Why Aren't We Talking About Its Environmental Costs?" (Teen Vogue).
The article examines the environmental and societal costs of AI, especially large language models (LLMs), and critiques the hype framing AI as a revolution.
Central tension: AI is better understood as a mid-tech phenomenon with tangible environmental footprints and real-world risks, even as it promises efficiency and utility.
Key theses and concepts
McMillan Cottom’s categorization: AI is “mid” tech, a mid-level revolution focused on mid-level tasks (e.g., meal planning, calendars, drafting emails), not a transformative leap replacing broad domains.
Quote: AI is a “mid revolution of mid tasks.”
Environmental costs of AI are not trivial; there is waste, energy use, and water and carbon footprints associated with training and operating AI systems.
AI’s imperfections amplify risk: models can provide inaccurate information, generate misleading content, or embed bias, with real-world harms (e.g., misinformation shaping decisions in civilian contexts).
In a so-called post-fact era, the shift away from deep, careful research toward predictive outputs underscores the critique of AI’s epistemic reliability.
Emphasis on the value of doing the reading yourself and showing your work, as opposed to relying on AI-generated outputs.
Environmental costs and potential impact
AI could contribute to reductions in greenhouse gas (GHG) emissions through efficiency gains, but this is uncertain and disputed in practice.
Google’s stated potential impact: AI could help mitigate between 5% and 10% of global greenhouse gas emissions by 2030 through measures like fuel-efficient routing on Maps and reducing contrails, among other efficiencies.
In symbols: 5% ≤ projected GHG mitigation from AI by 2030 ≤ 10%.
There are countervailing costs: energy consumption, water use, and overall carbon footprint of AI infrastructure, training, and operation.
AI-generated content and outputs can be wrong or harmful (e.g., images with missing fingers; inaccurate information) and can have severe real-world consequences (e.g., misinformed decisions, safety harms).
The article points to real-world concerns about AI informing or enabling harmful actions (e.g., reports of AI-based analysis impacting targeting or surveillance; governance misuse).
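Google's 5-10% figure can be put in rough absolute terms with back-of-envelope arithmetic. The sketch below assumes a baseline of roughly 59 Gt CO2e per year for global greenhouse gas emissions (an approximate recent figure not stated in the article):

```python
# Back-of-envelope translation of Google's 5-10% mitigation claim into
# absolute terms, under an ASSUMED global baseline of ~59 Gt CO2e/year.
GLOBAL_GHG_GT = 59.0  # assumed annual global GHG emissions, Gt CO2e

low = 0.05 * GLOBAL_GHG_GT   # 5% mitigation scenario
high = 0.10 * GLOBAL_GHG_GT  # 10% mitigation scenario

print(f"5-10% of ~{GLOBAL_GHG_GT} Gt CO2e is roughly "
      f"{low:.1f} to {high:.1f} Gt CO2e per year")
```

Even the low end of the range is a multi-gigatonne figure, which is why the article treats the claim as significant but worth weighing against AI's own energy, water, and carbon costs.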
Examples, metaphors, and scenarios discussed
Everyday AI uses as mid-tech tasks: meal planning with macros, calendar management, drafting emails—illustrating how AI serves routine, incremental needs rather than transformative, high-stakes tasks.
The paradox of value: AI promises convenience and optimization while contributing to information pollution and epistemic fragility in the post-fact era.
Hypothetical practical tip: to reduce AI-centric clutter, users can disable AI-summarized search results by appending -AI to queries on Google; or switch to a search engine like DuckDuckGo to minimize AI-assisted outputs.
Connections to foundational principles and real-world relevance
Reiterates the importance of critical reading and independent work as foundational to learning, echoing a math teacher’s admonition to show work.
Aligns with broader climate discourse: responsibility for AI’s footprint lies with tech firms, governments, and fossil-fuel-linked interests, not individuals alone; yet individuals still have agency to opt out of AI hype.
Highlights governance and ethical concerns: potential for AI to influence education, surveillance, and public policy in ways that may undermine informed decision-making.
Corporate responses and sustainability claims (industry responses)
Google: Claims potential to mitigate 5-10% of global GHG emissions by 2030 via AI-enabled efficiencies; emphasizes building efficient AI infrastructure and measuring water and carbon footprints.
OpenAI: Emphasizes ongoing efforts to improve efficiency in energy and water use; acknowledges substantial efficiency gains but stresses careful use of computing power.
Microsoft: Aims to become carbon negative, water positive, and zero waste by 2030; references an AI-driven sustainability playbook outlining concrete actions toward this goal.
Editor’s note: Condé Nast announced a multi-year partnership with OpenAI to expand content reach, illustrating how AI engagement is expanding in media.
Practical guidance to reduce AI footprint (actionable tips mentioned)
Google search optimization to reduce AI summaries: append "-AI" to search queries to remove automated AI-generated summaries.
Consider alternatives to AI-heavy platforms and engage with non-AI-facilitated tools when appropriate (e.g., alternative search engines like DuckDuckGo).
Be mindful of the broader environmental and societal costs when adopting AI tools; balance convenience with potential harms and energy use.
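The query-modification tip above can be sketched programmatically. This is a minimal illustration, not part of the article: `search_url` is a hypothetical helper that appends the "-AI" exclusion term the article mentions before building a standard Google search URL.

```python
from urllib.parse import urlencode

def search_url(query: str, suppress_ai: bool = True) -> str:
    """Build a Google search URL, optionally appending "-AI" to the
    query string -- the trick the article cites for removing
    AI-generated summaries from results. (Hypothetical helper.)"""
    if suppress_ai:
        query = f"{query} -AI"
    return "https://www.google.com/search?" + urlencode({"q": query})

print(search_url("data center water use"))
```

Whether the "-AI" term reliably suppresses AI summaries is the article's claim, not a guarantee of Google's interface; switching to a non-AI-centric engine such as DuckDuckGo is the alternative the article offers.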
Ethical, philosophical, and practical implications
Ethical concern: AI can advance misinformation, automated wrong outputs, and biased or unsafe content without adequate accountability.
Philosophical concern: reliance on AI for knowledge creation contributes to a “post-fact era” where predictive outputs can supersede rigorous inquiry and open-ended exploration.
Practical implication: educators and students should emphasize transparency, demonstration of work, and critical evaluation of AI outputs.
Societal implication: governance and policy debates about AI should weigh industry claims against actual environmental costs and potential harms to civil liberties and public safety.
Summary takeaways for exam-style understanding
AI is best understood as a mid-tech phenomenon with meaningful but not world-changing implications, especially in its environmental footprint and epistemic risks.
The environmental costs of AI include energy use, water footprint, and carbon emissions; potential emissions reductions from AI exist but are not guaranteed and hinge on efficiency, policy, and practices.
Real-world harms from AI misinformation and expansion into governance and education highlight the need for critical literacy and ethical oversight.
Corporate commitments to sustainability exist but are contested by concerns about whether actual practice matches rhetoric.
Small, practical steps (like modifying search queries) can reduce exposure to AI-assisted outputs and help mitigate some environmental costs.
References and sources (from transcript)
Teen Vogue article: "ChatGPT Is Everywhere. Why Aren't We Talking About Its Environmental Costs?" (June 21, 2024; with an August 2024 editorial note about the Condé Nast-OpenAI partnership).
Notable quotes:
"Mid revolution of mid tasks." (McMillan Cottom)
"Less research and more predicting what we want to hear." (critique of the post-fact era)
"AI has the potential to help mitigate 5% to 10% of global greenhouse gas emissions by 2030." (Google)
Practical tip: add "-AI" to Google search queries to remove automated AI summaries; consider alternative search engines.
Reported real-world concerns: misinfo and harm arising from AI-generated content; governance and education implications; corporate sustainability commitments.