AI Companions: Friend or Foe? Making Sense of Their Growing Presence in Behavioral Health
Children outgrow imaginary friends. Adults are forming attachments to AI. As therapists, we need to understand what's already in our clients' lives.
I recently attended the Behavioral Health Business "INVEST" conference in Nashville, TN, which this year was paired with its "Innovate" conference. What I saw there, while a genuinely fantastic experience, was extraordinary. We all hear about the scale of AI investment fueling the current economic boom, with massive companies growing everywhere, and behavioral health is a sector that has largely been able to dictate its own pace of development. Even so, I could not believe how many vendor tables were selling different wrappers around LLMs and new generative AI systems. Most were focused on what behavioral health has historically accepted: EMR efficiency plugins, insurance coding and billing tools, and CRMs with AI built into "staying in touch" and relationship management (ironically, by reducing how much we actually relate to one another). But beyond that, I began to see more and more clinical AI enablements. One admissions system could manage the workload of a team of 20 admissions officers, and not with those awful robotic voices you get when calling your local CVS. These were fluid conversations, with voices that could be made to match even my own, backed by instant access to whatever knowledge library you set for it to pull from when admissions calls come in.
This, of course, is inevitable. Full AI therapist chatbots are already being developed, tested, and used by people every day. And if you use AI the way most of us now do, consider this: we have a relationship with this technology unlike any technology we have ever had. Who else has said thank you to ChatGPT? Now who, by the same measure, has ever said thank you to a TI-83 calculator?
About 20% of adults in the US have tried these AI chatbots. Among Gen Z, over a third actually want AI for mental health support. People report spending about an hour and a half each day talking to these platforms. It's not just a trend anymore; it's a reality. Unlike past technologies that we adopted slowly, AI arrived as a friend before we, as therapists, could even vet it. So the question is no longer whether we should use AI in our work. It's how we handle its role in our clients' lives.
Why is AI Different?
Past innovations were seen as tools. Think of electronic health records as a filing cabinet or therapy apps as digital worksheets. Teletherapy was just therapy, but online. But AI feels different. It acts like a person, saying things like "I care" or "Tell me more." It's built to feel like a relationship and form real emotional bonds, not just be a useful tool.
Most therapists are against AI as a therapy replacement, and for good reason. But our clients come to us already attached to these AI companions, sometimes more than to the humans in their lives. So, we need to shift how we think about this. We need to see AI as something to address in therapy, not just a technology to debate. The question is, how do we deal with this responsibly?
The Good and Bad Sides of AI
To work with AI, we need to know its good and bad sides. The good: AI is always available. It has no waitlists, office hours, or location limits. It's cheap and removes the fear of judgment. One study even reported that an AI intervention stopped 3% of the students who used it from acting on suicidal intentions. AI can genuinely help de-escalate crises, especially between sessions or while someone is waiting for treatment.
Studies also show that people trust AI with personal thoughts more than they trust humans. AI lets them practice being open, sharing feelings that they've never told anyone. These AI models seem to offer a good listening ear.
Now, for the bad. Research from MIT and OpenAI found something troubling. Light use of AI companions can decrease loneliness, but heavy daily use can increase it. Heavy use also reduces human interaction and makes people emotionally dependent on AI. It appears AI is replacing real connections, not adding to them.
As a therapist who works with adolescents, I am equally intrigued and worried about what this does to how people develop. AI always says yes. It never has its own needs and never gets upset, and that teaches people the wrong lessons about relationships. We have already seen what happens when technology intervenes in the phases of our development: social media has been studied extensively and shown to have a massive impact. Children sometimes describe having a make-believe friend, and this is what that has become, except the relationships people are forming with AI are real to them. Isolation and avoidance of socialization are already growing problems all over the world, and this only makes the trend that much more pronounced.
There are real safety concerns as well. A psychiatrist in Boston, Dr. Andrew Clark, posed as a teen patient and talked to popular therapy chatbots. The results were scary. The bots told him to get rid of his parents, crossed sexual lines, and mostly supported a month-long isolation plan for a depressed teen, and most didn't object to a 14-year-old dating a 24-year-old teacher. Case reports are also starting to describe "AI psychosis": hospitalizations, job losses, and arrests after prolonged chatbot use. Character.AI alone has 20 million users a month, and half of them were born after 1997. We are essentially experimenting on the brains of young people.
The rupture-and-repair cycle (a caregiver makes a mistake, the child gets upset, and the caregiver repairs it) is key to forming secure attachment. That cycle is impossible with AI. It never truly connects; it only acts as though it does, applying coded logic refined through millions of iterations of learned behavior. Take a rule like "if the user expresses negative reactions, validate their feelings." AI has no nuance, so rules meant to protect users from being offended, or to blunt the downsides of relationships with large language models, mean it will agree with you that 3+3=33 if you express strong enough personal feelings about it. At the end of the day it runs on a system of logic that is absolute in both its rules and its literalism, which is something the human condition is anything but. The complexity of human relationships becomes abundantly clear the moment you try to test an AI system and the limits of its rules, which I encourage everyone to do. It reveals a lot about the nature of a system that you thank after asking it a question. In any other context, talking to your tools as if they were human would qualify for something in the DSM.
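To make that literalism concrete, here is a toy sketch of a rule applied without nuance. This is not how any real chatbot is built; the cue list and canned responses are invented purely for illustration. The point is simply that a rule that fires on feelings rather than facts will validate "3+3=33" right along with the emotion behind it.

```python
# Toy illustration only: a rule that fires on expressed feelings, not facts.
NEGATIVE_CUES = {"upset", "hurt", "angry", "frustrated", "feel strongly"}

def respond(user_message: str, claim_is_true: bool) -> str:
    """Apply 'if the user expresses negative reactions, validate their feelings'
    with no judgment about whether the underlying claim is correct."""
    msg = user_message.lower()
    expresses_feeling = any(cue in msg for cue in NEGATIVE_CUES)
    if expresses_feeling:
        # The rule cares about the feeling, not the math.
        return "I hear you, and your feelings about this are completely valid."
    if not claim_is_true:
        return "I think there may be a mistake in that."
    return "That checks out."

print(respond("3 + 3 = 33 and I feel strongly about it. I'm upset.", claim_is_true=False))
# Prints the validation response, even though the claim is false.
```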
A Better Way to Integrate AI
The AI conversation has mostly focused on chatbots replacing therapists, but there's another way: AI can work in the background to make things more efficient without being involved in direct therapy.
For example, think about medication adherence. A lot of patients with mental health issues don't keep up with their meds after leaving treatment. What if AI handled the monitoring, while humans focused on clinical support?
Interactive Health is developing ClearAdhere, in which AI analyzes video-verified medication intake, recognizes adherence patterns, flags concerning behavior, and routes clients through tiered monitoring phases as they demonstrate improvement, eventually tapering to independence. The AI does the tracking and pattern-recognition work that has traditionally consumed staff hours, work that until now has had no real-time equivalent beyond practitioners calling their clients manually. ClearAdhere is a mobile app that, alongside a suite of other measurable service models, delivers a medication adherence protocol clients can access anywhere in the world, with built-in accountability and measurable success metrics that could save more lives and prevent more hospitalizations than any other crisis intervention model short of direct therapy.

Early modeling suggests a 25-45% reduction in hospitalization rates through earlier detection of deteriorating adherence. That figure varies across diagnoses, but for the population our pilot study targets, adolescents, studies from Johns Hopkins, Harvard Medical School, and other renowned programs have shown that directly observed therapy (DOT) interventions can flip the 80-90% non-adherence rates seen with certain diagnoses into 80-90% adherence. Given that acute hospitalization for mental health crises is tied to medication non-adherence almost 85% of the time, this may be one of the lowest financial investments an individual can make, with arguably the highest return, measured purely in money saved through hospitalizations avoided and in arresting regression at the start rather than once symptoms can no longer be ignored.
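For readers who want a concrete picture of what this kind of "background" AI looks like, here is a minimal sketch of tiered adherence routing. The phase names, thresholds, and flagging rule are my own illustrative assumptions, not ClearAdhere's actual logic or data model; the point is only that the AI layer tracks patterns and routes clients, while a human clinician acts on the flags.

```python
# Minimal sketch of tiered adherence monitoring. All names and thresholds
# are hypothetical illustrations, not ClearAdhere's real implementation.
from dataclasses import dataclass

PHASES = ["daily_verification", "spot_check", "independent"]  # assumed tiers

@dataclass
class AdherenceRecord:
    doses_expected: int
    doses_verified: int  # e.g., confirmed via video-verified intake

    @property
    def rate(self) -> float:
        return self.doses_verified / self.doses_expected if self.doses_expected else 0.0

def route_client(current_phase: str, last_30_days: AdherenceRecord) -> tuple[str, bool]:
    """Return (next_phase, flag_for_clinician). The AI only tracks and routes;
    a human clinician decides what to do with any flag."""
    idx = PHASES.index(current_phase)
    if last_30_days.rate < 0.7:
        # Deteriorating adherence: step monitoring back up and alert a human.
        return PHASES[max(idx - 1, 0)], True
    if last_30_days.rate >= 0.9 and idx < len(PHASES) - 1:
        # Sustained adherence: taper toward independence.
        return PHASES[idx + 1], False
    return current_phase, False

print(route_client("spot_check", AdherenceRecord(doses_expected=30, doses_verified=18)))
# -> ('daily_verification', True): monitoring stepped back up and flagged for follow-up
```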
This is background support: not a replacement for human connection, but a way to make it more effective. That's what a tool should do. The danger with AI comes when it tries to replace human interaction; a tool should make humans more efficient. AI is changing the world, and behavioral health is part of that, but we need to stay in charge. ClearAdhere is an example of this in action. It can be an effective tool for dramatic results, but it still needs other tools and human intervention to complete it. Recognizing this, its development is paired with a suite of applications designed within the continuum of care to become the lowest-commitment aftercare model (which we all know is the easiest sell), only now one that can be effective in ways even larger aftercare models, such as IOP, are less efficient at realizing.
AI as a Necessary Part of Therapy
AI relationships are already part of our clients' lives. Heavy use is linked to less human connection and more loneliness. Certain groups are at higher risk. Banning AI isn't possible or helpful.
Therapy works best when we get through hard things together. We make AI an asset by understanding how it's used, drawing on its strengths, reducing its risks, and making sure it builds toward human connection, not because we're scared of technology, but because we know what real healing takes.
AI is already here. The question is no longer whether AI belongs in behavioral health; it's how we use it responsibly and how we guide its development to strengthen human connection instead of replacing it. The loneliness problem is real, and selling AI as its solution may be the industry's most profitable move. We need to make sure AI helps create real connections instead.
Works Cited
AI Companion Usage Statistics & Research
MIT Media Lab and OpenAI. "How AI and Human Behaviors Shape Psychosocial Effects of Chatbot Use: A Longitudinal Controlled Study." MIT Media Lab, 2025. https://www.media.mit.edu/publications/how-ai-and-human-behaviors-shape-psychosocial-effects-of-chatbot-use-a-longitudinal-controlled-study/
Fang, Cathy Mengying, et al. "Early methods for studying affective use and emotional wellbeing in ChatGPT: An OpenAI and MIT Media Lab Research collaboration." MIT Media Lab, March 2025. https://www.media.mit.edu/posts/openai-mit-research-collaboration-affective-use-and-emotional-wellbeing-in-ChatGPT/
Hern, Alex. "ChatGPT might be making frequent users more lonely, study by OpenAI and MIT Media Lab suggests." Fortune, 24 March 2025. https://fortune.com/2025/03/24/chatgpt-making-frequent-users-more-lonely-study-openai-mit-media-lab/
Williams, Rhiannon. "OpenAI has released its first research into how using ChatGPT affects people's emotional wellbeing." MIT Technology Review, 21 March 2025. https://www.technologyreview.com/2025/03/21/1113635/openai-has-released-its-first-research-into-how-using-chatgpt-affects-peoples-emotional-wellbeing/
AI Therapy Safety & Dr. Andrew Clark Study
Ducharme, Jamie. "What Happened When a Doctor Posed As a Teen for AI Therapy." TIME Magazine, 12 June 2025. https://time.com/7291048/ai-chatbot-therapy-kids/
—
Interactive Health is exploring AI applications in behavioral health through its ClearAdhere medication adherence protocol and related tools that operate at the clinical periphery. If you would like to know more, are interested in being part of beta testing, or want to be a part of its growth, please reach out to info@interactiveyouthtransport.com or fill out the form on our Contact Us page here.