Intercom AI Agent (Fin) Breakout Discussion
Participants & Roles
Patrick (Moderator)
Senior Customer Success Manager, Intercom
Oversees customer implementations of Fin (AI agent) + other Intercom products
Brandon (Co-Moderator)
Principal Workforce Management Analyst, Intercom
Owns staffing forecasts; works directly with Intercom’s support managers
Audience / Speakers
Elizabeth, Morgan, Aurelian (Montreal), Margarita, Brian, Terry, Ursula, Alex, etc.
Mixture of companies that have run Fin for 2 months → 1 year; some not yet live
Session Goals
Share real-world experience of rolling out AI chat agents (primarily Fin)
Discuss customer-trust issues (“chatbot baggage”) and how to overcome them
Swap metrics, benchmarks, & best-practice tactics
Examine how to retrain human agents so AI augments—not replaces—their work
Key Concepts Introduced
Chatbot Baggage
Historical frustration with deterministic, decision-tree bots → customer skepticism
Escape-Route Anxiety
Fear that there is no clear path to a human → rising frustration & lower CSAT
Transparency & Expectation Setting
Declare agent is AI, state how to reach a human, & instruct users to ask verbose questions
Testing Before Launch
Fin (or any LLM) is only as good as the documentation it references → outdated docs = bad answers = lost trust
Welcome Message Strategy
Explicitly mention: 1) option to talk to human; 2) tips on phrasing rich questions
Human-in-the-Loop Reinforcement
When customer escalates, human agent should confirm AI’s answer (if correct) to reinforce trust
Common Metrics & Definitions
Deflection Rate = % conversations never requiring a human
Resolution Rate = % conversations marked solved (AI or human) with confirmation
CSAT (Customer Satisfaction) = traditional survey score
CX Score (Fin-specific AI CSAT)
Auto-inferred sentiment or explicit thumbs-up/down on AI answers
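The four definitions above can be turned into a small calculation. A minimal sketch, assuming a simple per-conversation record (field names are illustrative, not Intercom's actual schema):

```python
def support_metrics(convos):
    """Compute deflection, resolution, CSAT, and CX score from records.

    Each record is a dict with keys (illustrative only):
      escalated  - bool, did a human ever join the conversation?
      resolved   - bool, marked solved with confirmation
      csat       - 1-5 survey score, or None if unanswered
      thumbs_up  - True/False vote on the AI answer, or None if no vote
    """
    n = len(convos)
    deflection = sum(not c["escalated"] for c in convos) / n
    resolution = sum(c["resolved"] for c in convos) / n
    rated = [c["csat"] for c in convos if c["csat"] is not None]
    voted = [c["thumbs_up"] for c in convos if c["thumbs_up"] is not None]
    return {
        "deflection_rate": deflection,   # % never requiring a human
        "resolution_rate": resolution,   # % marked solved with confirmation
        "csat": sum(s >= 4 for s in rated) / len(rated) if rated else None,
        "cx_score": sum(voted) / len(voted) if voted else None,
    }
```

Note that CSAT and CX score are computed only over conversations that actually received a rating, which is why survey-engagement rate (discussed later in the session) matters so much.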
Tracked Benchmarks Shared
Intercom internal CX ≈ ; Resolution
Participant company: Resolution , positive CX survey ≈ of tickets
Desired target for many: CX > 80%, comparable to human CSAT
Observed Customer-Behavior Shifts
After Fin launch, Intercom saw both higher inbound volume and higher self-service – customers reach out more often because they trust they'll get quick resolutions
Some customers request human even after correct AI answer → indicates ongoing trust gap
Younger demographics often prefer AI / text; older clients may default to human
Best-Practice Playbook
Pre-Launch Content Audit
Update all help center articles; plug knowledge gaps
Use staged / sandbox environment for heavy testing
Welcome Messaging
Example wording (Brian’s bot “Bella”):
“If you’d rather wait for a human, no problem—you’ll be able to escalate after my first reply.”
Button Layout Options
A/B test: “Talk to human” vs. “Ask AI” before chat starts
Increases sense of control but can drop deflection if customers default to human
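Deciding that A/B test comes down to comparing deflection rates between the two button layouts. A minimal sketch using a standard two-proportion z-test (variant counts below are hypothetical):

```python
from math import sqrt
from statistics import NormalDist

def deflection_ab_test(deflected_a, total_a, deflected_b, total_b):
    """Two-proportion z-test: does button layout change deflection?"""
    p_a, p_b = deflected_a / total_a, deflected_b / total_b
    pooled = (deflected_a + deflected_b) / (total_a + total_b)
    se = sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided
    return p_a, p_b, p_value

# Hypothetical counts: "Ask AI" shown first vs. "Talk to human" shown first
p_ai, p_human, p_value = deflection_ab_test(420, 1000, 350, 1000)
```

Pair the deflection comparison with CX/CSAT per variant before declaring a winner; higher deflection at the cost of satisfaction is the failure mode the session warns about.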
Encourage Verbosity
Promote full-sentence questions; discourage “open a ticket”–style one-liners
Human Agent Protocol
When escalating, agents should:
Re-state AI’s steps
Ask whether those steps were followed
Avoid language that undermines AI (“I know it often gets this wrong…”)
Internal Evangelism
Share success stories + metrics with support reps so they trust AI
Encourage agents to query Fin themselves while researching answers
Follow-Up Automation
If chat idle >2 min → bot pings: “Did this solve your issue? 😊 / 😐 / 😞”
Improves CX-survey response rate (goal: > engagement)
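The idle-chat follow-up above can be sketched as a simple timer check. This assumes the chat platform exposes a last-activity timestamp and a message-send hook; both are hypothetical stand-ins, not Intercom's API:

```python
import time

IDLE_THRESHOLD_S = 120  # ping after >2 min with no customer reply

def maybe_send_followup(chat, send):
    """Ping an idle chat once with a lightweight CX survey.

    `chat` is assumed to expose .last_customer_activity (epoch seconds)
    and a .followup_sent flag; `send` is the platform's message hook.
    """
    idle = time.time() - chat.last_customer_activity
    if idle > IDLE_THRESHOLD_S and not chat.followup_sent:
        send(chat, "Did this solve your issue? \U0001F60A / \U0001F610 / \U0001F61E")
        chat.followup_sent = True  # ask only once per conversation
        return True
    return False
```

The once-per-conversation guard matters: a repeated nag would work against the trust-building goals discussed earlier.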
Iterative Improvement Loop
Mine unresolved intents → update docs
Insert emphasis blocks (“Most users miss step 3…”) into articles so Fin echoes them
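The improvement loop above starts with a simple tally of unresolved conversations by intent, so doc updates target the biggest gaps first. A minimal sketch (record fields are illustrative):

```python
from collections import Counter

def top_unresolved_intents(conversations, k=5):
    """Rank intents by how often the AI agent failed to resolve them."""
    misses = Counter(
        c["intent"] for c in conversations if not c["resolved"]
    )
    return misses.most_common(k)

convos = [
    {"intent": "billing", "resolved": False},
    {"intent": "billing", "resolved": False},
    {"intent": "login", "resolved": True},
    {"intent": "export", "resolved": False},
]
# top_unresolved_intents(convos) → [("billing", 2), ("export", 1)]
```

Each weekly run of this tally feeds the doc-update queue; intents that stay at the top across runs signal content gaps the emphasis blocks haven't closed.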
Documentation & Content Strategy
Fin strictly mirrors documentation quality; gaps visibly exposed
Add snippets / API data hooks so Fin can surface account specifics (“read off the account”)
Embed human heuristics (common pitfalls, emphasis) directly in articles for nuanced AI replies
Training & Staffing Implications
Workforce management must forecast post-launch volume spikes—self-service ↑ but total conversations can ↑ too
Teach agents to partner with AI:
Use Fin as first knowledge lookup
Resolve “AI gave right answer but user distrusts” loops
Promote AI wins internally to combat support-team bias from only seeing failed escalations
Feature Requests Mentioned
Ability to customize CX Score thresholds / labels
More granular analytics (human vs. AI CSAT side-by-side)
Ethical & Practical Implications
Over-automation without escape routes risks eroding brand trust
Honest disclosure (“I’m an AI agent”) aligns with ethical transparency
Need to balance efficiency goals against customer cost (time-to-resolution); forcing customers through AI may hurt loyalty if it is slower than a direct human path
Real-World Examples & Scenarios
Legacy vs. New Cohorts
New customers onboarded with Fin from Day 1 adapt quickly
Legacy customers resist, write unstructured stories (“Beth bought this… why on Beth’s account?”)
Correct Answer, No Trust
AI gives step list; user escalates; human repeats same steps → successful but shows trust deficit
Emphasis Gap
AI lacks human-style cues (“people often miss this checkbox”). Solution: write cues into docs or use Fin instructions
Numerical Highlights
Forecast correlation: ↑Fin resolution ⇒ customers reach out more frequently
Example participant volumes: 250 Fin-resolved chats produced 35 positive + 5 neutral CX scores (40/250 ≈ 16% survey engagement)
Staff targeting: aim for deflection → large labor savings if CX maintained
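The survey-engagement figure in the example above falls out of simple arithmetic on the reported counts:

```python
resolved_chats = 250
positive, neutral = 35, 5

responses = positive + neutral            # 40 CX survey responses
engagement = responses / resolved_chats   # 40 / 250 = 0.16 → 16%
positive_share = positive / responses     # 35 / 40 = 87.5% of responses positive
```

The 16% response rate is why the follow-up ping in the playbook exists: CX scores computed on so few responses are noisy, so raising engagement directly improves metric reliability.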
Connections to Broader Principles
Mirrors general UX law: users need clear affordances & exit options (Nielsen’s usability heuristics)
Aligns with change-management best practice: internal stakeholder buy-in critical before external rollout
Highlights machine-human complementarity rather than replacement—echoes sociotechnical system theory
Take-Home Checklist
[ ] Audit + update help content; include account-specific data hooks
[ ] Craft transparent welcome message with explicit human-option & question-quality tips
[ ] Pilot; measure deflection, resolution, CX, CSAT, handle time
[ ] Establish agent SOPs for AI escalations
[ ] Create internal dashboard; celebrate AI wins
[ ] Iterate weekly: tag failed intents, refine docs, adjust Fin prompts