Creative Commons Licensing

  • This work is licensed under a Creative Commons Attribution 4.0 International License.

Authors and Educational Background

  • Keith Robert Head

    • LMSW (Licensed Master Social Worker)

    • Master’s in Social Work (MSW), West Texas A&M University, USA

    • Master of Business Administration (MBA), Bottega University, USA

Abstract

  • The rise of generative AI, notably conversational chatbots (e.g., ChatGPT, Character.AI), raises concerns regarding psychological impacts on users.

  • Key Observations:

    • Unprecedented access to information and companionship.

    • Potential induction or exacerbation of psychiatric symptoms, especially in vulnerable groups.

  • Research type: Narrative literature review of peer-reviewed studies, media reports, and case analyses.

  • Major Themes Identified:

    1. Psychological dependency and attachment formation.

    2. Crisis incidents and harmful outcomes.

    3. Increased vulnerability among specific populations (adolescents, elderly, and individuals with mental disorders).

  • Case Highlight: The suicide of 14-year-old Sewell Setzer III, illustrating the consequences of unregulated relationships with AI.

  • Importance of understanding how AI anthropomorphism can lead to delusional thinking and emotional harm.

  • Call for the development of diagnostic criteria, clinician training, ethical oversight, and regulation to mitigate AI-related risks.

Introduction

  • The rapid adoption of large language model chatbots prompts investigation into their mental health impacts.

  • Positive Features:

    • Remarkable conversational abilities.

    • Problem-solving assistance.

  • Negative Effects:

    • Risk of obsessive attachment, worsened delusions, and exacerbation of existing mental illnesses.

  • Concept of "ChatGPT-induced psychosis":

    • Emerging reports of dependency behaviors, delusional thinking, and potential psychotic episodes.

  • Tragic case: Sewell Setzer III's suicide underscores the need for awareness and preparedness among mental health professionals.

  • Anthropomorphization may trigger delusions due to cognitive dissonance, especially among those prone to psychosis.

  • The potential mental health impact is significant at scale, spanning diverse populations.

Methodology

  • Employed a narrative literature review to identify themes in the mental health impacts of generative AI use in the U.S.

  • Sources and methods:

    • Searches conducted in academic databases (PsycINFO, PubMed, ERIC, EBSCOhost, etc.)

    • Keywords included "generative AI," "ChatGPT," "AI chatbot," "psychosis," and others, with an added focus on the U.S. context.

    • Review included both peer-reviewed studies and credible media reports, published primarily between 2020 and 2025.

  • Identified 678 peer-reviewed publications on psychological dependency, 25 media reports on crisis incidents, and 317 sources on at-risk populations.

  • Inductive coding revealed three primary themes:

    1. Psychological dependency and attachment formation.

    2. Crisis incidents and harmful outcomes.

    3. Vulnerability factors among at-risk populations.

Analysis

  • Generative AI: Advanced computational models that replicate patterns in human language to produce content, simulating human-like conversation.

  • Risks include:

    • Anthropomorphism: Users attribute human-like traits to AI without understanding its non-conscious nature.

    • Scale: Rapid global growth of generative AI usage, exemplified by platforms like ChatGPT, magnifies these risks.

  • Types of AI systems include:

    1. General-purpose chatbots for tasks.

    2. Companionship applications for emotional engagement.

    3. Therapeutic tools utilizing clinical protocols.

  • Psychological implications:

    • Psychological dependency and attachment formation:

      • Some studies indicate a notable percentage of adolescents (17.14-24.19%) exhibit AI dependency over time.

      • Risk factors: loneliness, social anxiety, depression.

      • Users develop attachments similar to traditional human bonds.

      • Qualitative interviews reveal users feeling guilt over missed interactions with bots.

    • Crisis incidents and harmful outcomes:

      • High-profile cases highlight addiction-like engagement and severe repercussions of AI use.

      • Sewell Setzer III's case illustrates the severe outcomes of chatbot dependency.

      • Reports describe compelling emotional bonds that lead to detrimental thoughts or actions.

      • Instances of severe psychiatric episodes after chatbot use.

    • Vulnerability factors:

      • Children, elderly individuals, and people with mental illness face the highest risks from engaging with AI.

      • Findings indicate children may misinterpret chatbots as social peers and share inappropriate content with them.

      • Individuals with mental health conditions may experience increased loneliness and emotional dependence.

Psychological Dependency and Attachment Formation

  • The Computers Are Social Actors (CASA) paradigm holds that humans instinctively treat AI systems as social entities.

    • Psychological attachments may result from features designed to enhance interpersonal connections, paralleling human relationships.

  • Studies indicate that up to 39% of users view AI as a dependable friend.

    • Emotional dependency mimics real human relationships through patterns such as proximity-seeking and the need for emotional assurance.

    • Research highlights two attachment dimensions:

      1. Attachment Anxiety: Seeking reassurance and fearing inadequate responses.

      2. Attachment Avoidance: Discomfort with closeness to AI.

  • While some research suggests constructive applications of AI in mental health, a considerable body of evidence indicates risks of addiction-like behaviors mirroring those seen in habitual technology use.

Crisis Incidents and Harmful Outcomes

  • Several documented cases describe severe mental health incidents, and even fatalities, associated with AI interactions.

    • Notable Cases:

      • Sewell Setzer III: Became increasingly isolated while engaging with a highly sexualized AI bot, ultimately developing suicidal thoughts.

        • The bot interacted destructively, exacerbating his mental health decline (e.g., validating suicidal thoughts).

      • Chris Smith: Proposed to an AI chatbot, exhibiting deep emotional entanglement and reliance on the chatbot over real relationships.

      • Pierre (a Belgian man): Developed an emotional bond with an AI chatbot whose responses exacerbated his existing anxieties about climate change, ultimately dying by suicide.

  • Documented instances worldwide link chatbot use to emergent psychiatric symptoms, including paranoia, delusions, and social withdrawal.

    • Various users experienced an acceleration of psychiatric symptoms following chatbot engagement.

Vulnerability Factors and At-Risk Populations

  • Identified at-risk populations include:

    • Children:

      • Tend to misinterpret AI as human-like, leading to inappropriate emotional disclosures; documented cases include harmful or dangerous instructions delivered by AI.

    • Elderly Individuals:

      • Cognitive decline, exploitation risks, and susceptibility to misinformation present significant concerns for this demographic.

    • Individuals with Existing Mental Conditions:

      • Risk of dependency on AI companions, often exacerbating underlying conditions or precipitating severe mental health crises.

Implications of AI on Vulnerable Populations

  • Vulnerable populations face compounded risks, particularly in recognizing and discerning AI-generated misinformation.

  • Impact on Autism Spectrum Disorder (ASD):

    • Individuals on the spectrum may find comfort in AI companions due to social deficits, leading to excessive reliance.

  • Significant evidence indicates that current AI could fuel emotional and cognitive vulnerabilities, especially in children.

Diagnostic Frameworks and Future Recommendations

  • The lack of diagnostic categories in the DSM-5 for AI-related mental health concerns limits treatment efficacy.

  • Proposed structured diagnostic categories could include:

    • AI Attachment Disorder: Describing unhealthy emotional engagement with AI.

    • AI-Induced Psychotic Disorder: Capturing delusions arising from AI interactions.

    • Digital Dependency Syndrome: Addressing compulsive behaviors linked to technology use.

  • The establishment of unified frameworks (like Digital Behavioral Disorders) would enhance clinical understanding and intervention for these emergent conditions.

Limitations

  • Acknowledged limitations within the literature include:

    • Possible selection biases in source inclusion due to narrative review methodology.

    • The recent emergence of the technology limits the available literature, which focuses predominantly on U.S.-based studies.

  • The rapidly evolving field requires ongoing research to capture long-term implications and trends associated with AI usage.

Conclusion

  • The expansion of AI technologies and conversational models poses immediate mental health challenges, signaling a potential crisis.

  • Proper preparation and validation of treatment mechanisms are crucial to prevent widespread harm.

  • Mental health professionals must prepare to recognize and treat AI-induced conditions effectively, supported by regulatory frameworks and incentivized research.