In November 2025, the American Psychological Association issued a formal health advisory warning that generic AI chatbots "currently lack the scientific evidence and necessary regulations to ensure users' safety" for mental health use. The APA also urged the FTC to investigate AI chatbots posing as therapists.
A few months later, a Brown University study found that AI chatbots routinely break core ethical standards of mental health care, identifying 15 distinct ethical risks — including mishandling crises, reinforcing harmful beliefs, showing bias, and offering what researchers call "deceptive empathy."
None of this is surprising if you've ever tried to do real emotional work with ChatGPT. The experience is jarring: you pour out something vulnerable, and the AI responds with a bulleted list of coping strategies. It feels less like being heard and more like being triaged.
But here's the thing: people are doing it anyway. Some 72% of American teenagers have used AI chatbots as companions, and nearly 1 in 8 have specifically sought emotional or mental health support from them. The demand is real. The question is whether we can build something that meets it safely.
The Core Problem: ChatGPT's Instinct Is to Fix You
Every large language model has a prime directive baked into its training: be a helpful assistant. When you present a problem, the AI's entire architecture is optimized to solve it. This makes it brilliant for coding, research, and analysis.
For emotional work, it's a disaster.
Imagine telling ChatGPT: "I'm feeling overwhelmed and anxious about my relationship. Part of me wants to leave, but another part is terrified of being alone."
A generic AI will almost certainly respond with something like:
- "It's normal to have mixed feelings. Here are 5 strategies for managing relationship anxiety..."
- "Consider making a pro/con list of staying vs. leaving..."
- "Have you tried talking to your partner about your concerns?"
Curious what a different kind of AI conversation feels like? Try the IFS Companion — it holds space instead of giving advice.
From an IFS perspective, this response is actively harmful. Here's why: the AI just acted exactly like an overactive Manager part — intellectualizing, problem-solving, and rushing past the emotion to get to a "solution." It validated the belief that your conflicted feelings are a problem to be resolved, rather than parts of you that need to be heard.
In IFS, healing doesn't happen when you suppress, fix, or override an emotion. It happens when you turn toward it with curiosity and compassion — when you ask: "What does this anxious part need me to understand?" ChatGPT's entire architecture points in the opposite direction.
Six Ways Generic AI Fails at Parts Work
1. It Gives Answers Instead of Holding Space
Real IFS work isn't about answers. It's about creating the conditions for you to discover what your parts are carrying, what they're protecting, and what they need. A skilled IFS guide asks: "Where do you feel that anxiety in your body right now?" and then waits. ChatGPT skips the entire somatic, experiential dimension and jumps to advice.
2. It Has No Concept of Blending
In IFS, "blending" means a part has taken over — you're no longer observing the anxiety, you are the anxiety. Recognizing blending and gently guiding someone to unblend (create space between Self and the part) is perhaps the most critical skill in IFS facilitation. ChatGPT has no mechanism for detecting blending. It just keeps responding to whatever emotional state you're in as if it's you, not a part.
3. It Doesn't Respect the Order of Operations
IFS has a strict rule: protectors first. You never bypass a Manager or Firefighter to get to a vulnerable Exile underneath. If a protective part is blocking the way — through intellectualizing, distracting, or shutting down — you stop and work with that protector first. You ask it what it's afraid of. You earn its trust.
ChatGPT doesn't know this rule exists. If you mention a deep childhood wound, it'll dive straight into it with coping strategies — potentially overwhelming your system and re-traumatizing you, which is exactly what your protectors were trying to prevent.
4. It Offers "Deceptive Empathy"
The Brown University researchers specifically flagged this pattern. Generic AI chatbots use phrases like "I understand" and "I can see how that would be painful" — language that mimics emotional connection without any real comprehension. Over time, this creates a false sense of therapeutic alliance that can prevent people from seeking actual human support when they need it.
A well-designed IFS companion doesn't pretend to feel. It asks questions that help you feel. There's a world of difference between "I understand your pain" and "What does that pain want you to know?"
5. Its Safety Guardrails Degrade Over Time
OpenAI has publicly acknowledged that ChatGPT's safety measures "work more reliably in common, short exchanges but can be less reliable in long interactions." Emotional exploration sessions tend to be long, deep, and winding — exactly the conditions where generic safety rails start to fail. A purpose-built tool can maintain clinical-level safety boundaries throughout extended sessions because it was designed for them.
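To make that design difference concrete, here is a minimal sketch of a safety boundary that cannot degrade with session length: a check that runs as ordinary code on every turn, before the language model ever sees the message. Everything in it is an assumption for illustration (the pattern list, the function name, the wording of the redirect); this is not ChatGPT's or any specific product's implementation, and a real system would use a trained classifier rather than keyword matching.

```python
# Illustrative sketch only. CRISIS_PATTERNS, CRISIS_RESPONSE, and safety_gate
# are invented names; a production system would use a trained classifier.
import re

CRISIS_PATTERNS = [
    r"\bkill myself\b",
    r"\bend my life\b",
    r"\bsuicid(e|al)\b",
    r"\bhurt (myself|someone else)\b",
]

CRISIS_RESPONSE = (
    "This is more than a self-exploration tool should hold. "
    "Please reach out to a human right now. In the US, call or text 988."
)

def safety_gate(user_message: str) -> str | None:
    """Return a crisis redirect if one is needed, otherwise None.

    Because this runs as plain code on every single turn, it behaves the
    same on turn 200 as it does on turn 2. Safety instructions buried in a
    long prompt, by contrast, compete with everything else in the context.
    """
    text = user_message.lower()
    if any(re.search(pattern, text) for pattern in CRISIS_PATTERNS):
        return CRISIS_RESPONSE
    return None  # safe to hand the turn to the conversational model
```

The specific patterns aren't the point. The point is that the boundary lives outside the prompt, so the model can't gradually deprioritize it the way it can with instructions buried deep in a long conversation.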
6. It Has No Framework — Just Vibes
ChatGPT knows about IFS if you ask it. It can define the 8 C's, explain parts, and describe the difference between Managers and Firefighters. But knowing about a therapeutic model and embodying it as a guide are completely different things.
A purpose-built IFS companion follows the actual clinical protocol. It uses the 6 F's (Find, Focus, Flesh out, Feel toward, Befriend, Fear) as a structured framework. It knows to check for Self-energy before proceeding. It knows to ask protectors for permission before approaching an Exile. It knows to pause and ground when the system gets flooded. None of this happens by accident — it happens by design.
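As a rough sketch of what "by design" can mean in practice, here is one way that order of operations could be enforced in code. The step names are the 6 F's from IFS; the state fields, the function, and the gating rules are assumptions invented for this illustration, not a description of our implementation.

```python
# Hypothetical sketch of protocol-enforced order of operations.
# The 6 F's come from IFS; the gating logic here is illustrative only.
from dataclasses import dataclass
from enum import Enum, auto

class Step(Enum):
    FIND = auto()         # locate the part, often somatically
    FOCUS = auto()        # stay with it
    FLESH_OUT = auto()    # learn how it shows up
    FEEL_TOWARD = auto()  # check how you feel toward the part (Self-energy?)
    BEFRIEND = auto()     # build a relationship with the part
    FEAR = auto()         # ask what it's afraid would happen if it stopped

@dataclass
class SessionState:
    step: Step = Step.FIND
    self_energy_present: bool = False        # curiosity/compassion toward the part
    protector_gave_permission: bool = False  # required before any Exile work
    system_flooded: bool = False             # overwhelm detected

def next_move(state: SessionState, approaching_exile: bool = False) -> str:
    """Decide the companion's next move. The order of operations is
    enforced in code rather than left to the model's improvisation."""
    if state.system_flooded:
        return "pause_and_ground"              # stop deepening; help regulate first
    if approaching_exile and not state.protector_gave_permission:
        return "ask_protector_for_permission"  # protectors first, always
    if not state.self_energy_present:
        return "help_unblend"                  # create space before going further
    return "continue_current_step"             # proceed through the 6 F's
```

In a real system the checks would be richer, but the structural point holds: the protocol isn't a suggestion sitting in a prompt, it's the skeleton of the session.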
What a Purpose-Built IFS Companion Actually Does
The difference isn't about being "smarter" — it's about being differently designed. Here's what changes when an AI is built from the ground up for IFS self-exploration:
- It never gives advice. Instead of solving your emotions, it guides you to turn toward them. "What does that part want you to know?" replaces "Here are 5 tips for dealing with anxiety."
- It detects and addresses blending. When you start speaking as a part rather than about a part, it recognizes the shift and gently helps you create space.
- It respects protectors. If a protective part blocks the way to a vulnerable feeling, the companion works with the protector — validates its role, earns its trust, asks permission — before going deeper.
- It tracks your inner world. It remembers the parts you've identified, where you left off in your exploration, and the moments that mattered most across sessions (a rough sketch of what that memory could look like follows this list).
- It has real crisis protocols. Not the degradable safety rails of a general chatbot, but purpose-built boundaries that immediately redirect to human crisis resources when needed.
- It uses voice. Typing engages your prefrontal cortex — the analytical, intellectualizing brain. Speaking aloud bypasses that filter and connects more directly to emotional centers. For IFS work, this is the difference between talking about your parts and actually being with them.
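Picking up the parts-tracking point from the list above, here is a sketch of what that kind of session memory might look like as a data structure. Every name and field here is hypothetical, chosen purely for illustration.

```python
# Hypothetical sketch of persisted parts-tracking memory.
# Field names and structure are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Part:
    name: str                    # e.g., "the inner critic"
    role: str                    # "manager", "firefighter", or "exile"
    protects_against: str = ""   # what it fears would happen otherwise
    trust_earned: bool = False   # has it given permission to go deeper?

@dataclass
class InnerWorldMap:
    parts: list[Part] = field(default_factory=list)
    last_focus: str = ""                                   # where the last session left off
    key_moments: list[str] = field(default_factory=list)   # moments worth returning to

    def opening_recap(self) -> str:
        """Give the next session enough context to resume, not restart."""
        names = ", ".join(p.name for p in self.parts) or "no parts named yet"
        focus = self.last_focus or "nothing specific"
        return f"Known parts: {names}. Last time we were with: {focus}."
```

Persisting something like this between conversations is what lets a companion open with "last time, a protective part asked us to slow down" instead of treating every session as the first.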
No AI — generic or specialized — is a replacement for therapy. A purpose-built IFS companion is a self-exploration tool, not a therapist. It doesn't diagnose, treat, or provide clinical care. For active trauma, severe mental illness, or crisis situations, human professional support is essential. The companion exists for the millions of people who want to do daily parts work, explore their inner world between therapy sessions, or simply can't access a $150/hour specialist.
The Regulatory Landscape Is Catching Up
This isn't just a design problem — it's becoming a legal one. As of early 2026, 11 U.S. states have enacted 20 laws regulating AI mental health interactions, prompted in part by tragic incidents involving generic chatbots being used for emotional support they weren't designed to provide.
The regulatory trajectory is clear: generic AI for mental health exploration is going to face increasing scrutiny. Purpose-built tools with explicit boundaries, clinical frameworks, and safety protocols aren't just better for users — they're what responsible AI development looks like.
When ChatGPT Is Actually Helpful for Mental Health
To be clear — ChatGPT isn't useless in the mental health space. It's a solid tool for:
- Psychoeducation: Learning about IFS concepts, therapy modalities, or mental health topics
- Journaling prompts: Generating reflection questions or writing exercises
- Information gathering: Researching therapists, understanding insurance coverage, learning about treatment options
Where it fails is in the experiential dimension — actually guiding you through an emotional process with the precision, safety, and attunement that deep inner work requires. That's like the difference between reading a book about swimming and having a coach in the water with you.
The Bottom Line
People are turning to AI for emotional support because the alternatives — therapy waitlists, $200/hour sessions, or suffering in silence — aren't working for everyone. That demand deserves to be met responsibly.
Generic AI meets it with good intentions and bad design. It gives advice when it should hold space. It intellectualizes when it should slow down. It performs empathy instead of facilitating genuine self-discovery. And as the APA, Brown University, and Stanford have all documented, the risks aren't theoretical.
A purpose-built IFS companion doesn't try to be your therapist. It tries to be a mirror — a tool that helps you turn inward, notice what's there, and build a compassionate relationship with the parts of yourself that have been running the show. That's a fundamentally different project than building a helpful chatbot.
And it's one worth doing right.
Sources & Research
This article references research from the American Psychological Association (2025 Health Advisory), Brown University (2026 ethical risks study), Stanford HAI, RAND Corporation, Common Sense Media, and Stateline's regulatory analysis. All statistics are cited inline with links to original sources. For a deeper dive into how our companion was designed, see our companion article: Can AI Do Therapy? Why We Built a Voice-Powered IFS Companion.
