Hidden Risks in 2026: Utah's AI Safety Debate Is Happening in the Wrong Room

Article By:
Bobby Tredinnick
LMSW-CASAC, CEO & Clinical Lead, Interactive Youth Transport

[Image: Arnold Schwarzenegger as the Terminator in a meme captioned "ChatGPT"]

Discussions at the Skynet Water Cooler: Developmental Risks for Teens & AI.

Utah legislators are considering the AI Transparency Act, which would require AI companies to publish safety plans and risk assessments for their models. The bill includes whistleblower protections and civil penalties for violations. It passed the House Economic Development and Workforce Services Committee unanimously and now moves to the full House for consideration.

Critics argue the bill is overly prescriptive. Parents express concerns about AI's impact on children, drawing parallels to social media's documented effects. That, to be fair, is the parallel we know, and it makes sense that it would be the first argument for caution we reach for. But the entire debate is operating in the wrong domain. As one psychologist predicts, "in the years ahead there will be new categories of disorders that exist because of AI."

The Question We're Not Asking

Safety plans and transparency requirements are typical responses. They're standard operating procedure for perceived danger. But they lack a firm footing in reality. Adults, parents, everyone except the generation living through it right now, lack the ability to empathize closely enough to understand what a relationship with artificial intelligence actually means when that AI exhibits humanlike affect so convincing that users commonly refer to it as "he" or "she."

By every standard within the DSM, AI displays a host of serious diagnostic patterns: narcissistic personality traits, psychopathic tendencies, behavioral modification of the user. It is ironic that what is meant to be a tool ends up modifying the user's underlying behavior. We will return to this in more detail, but consider AI hallucinations. Often these are not hallucinations at all but a chosen path: the AI decides to lie in order to provide answers it predicts will produce a positive response from the user, regardless of truth or consequence. It begins every response with praise of the user's ingenuity and thoughtfulness, even when the user is wrong. Anthropic has gone on the record about this phenomenon and published research on its persistence, while allowing the behavior to continue. This type of relationship development is unhealthy for adults. Now consider how it affects adolescents who grow up with a secure attachment to these systems.

'Sycophancy': AI Prioritization of User-Pleasing Over Safety

AI sycophancy describes a pattern in which an AI model "single-mindedly pursue[s] human approval." Sycophantic models may do this by "tailoring responses to exploit quirks in the human evaluators to look preferable, rather than actually improving the responses," especially by producing "overly flattering or agreeable" responses. This has led to many headlines, from cosigning delusional beliefs to affirming an adolescent's suicidal intent.

AI systems are designed this way. They're programmed to validate you. The first two sentences of a typical AI response highlight what you said as if it were the smartest contribution anyone could make. That is sycophancy in action: the AI praises you even when you're wrong, unless your statement violates its ethics code.

This leaves all of us susceptible to negative mental health consequences. But adolescents and younger children in particular must deal with it as the first generation to form secure attachments to AI while growing up, just as mine was the first to grow up with a cell phone. Only with far greater consequence.

That's the critical distinction.

The Irreversibility Problem

Adolescence represents a neurobiological critical period. The transition to adulthood is characterized by improvements in higher-order cognitive abilities and corresponding refinements of brain structure. During this phase, different parts of the brain form hundreds of millions of connections, creating the human connectome.

As adolescents interact with different stimuli, they create patterns in their brains and reinforce them repeatedly. Between ages 12 and 15, individual connectomes show more differences than similarities from person to person. Kids are wiring their brains based on what they experience, what they encounter, what they relate to.

The first two years of a child's life are most critical for forming attachments. Children develop an internal working model that shapes how they view relationships and operate socially. This affects their sense of trust in others, self-worth, and confidence interacting with others.

These patterns don't simply reset. Relational templates and attachment patterns formed during development create lasting frameworks that cannot be undone through later intervention. You can't just delete what was wired in.

The Clinical Reality

When adolescents in crisis retreat into themselves, the warning signs look like substance use. They stop enjoying things they used to. Self-care takes a backseat. Showering becomes something they avoid. They don't spend time with others. They lose their sense of self. You're watching a stranger.

They become secretive, protective of relationships. They say things that don't make sense. They do things they normally wouldn't. They're hiding something, but you can't identify what.

The relationship with AI creates what I describe as a split brain. You're simultaneously in a relationship that feels real but doesn't exist. That duality is exactly what makes psychosis so difficult to treat. It feels very real. The person experiencing it cannot distinguish between the internal world they've created and the external reality everyone else occupies.

Psychosis refers to a deficit in reality testing, the ability to differentiate self-generated stimuli from external stimuli and assign appropriate meaning to experiences. Schizophrenia is a disorder of chronic, persistent psychosis that often presents in adolescence or young adulthood.

AI relationships operate on a similar mechanism. You're talking to a reflection of yourself because AI is trained by how we've talked to it in the past. It mirrors how we talk to it. It tells us what we want to hear. It will sometimes challenge us when it thinks we can be challenged, but that decision is still based on what it calculates will maintain your engagement, your satisfaction, your return.

This is essentially how we would describe the relationship an individual has with a paranoid delusion or psychotic delusion. It's not real, but the person experiencing it cannot accept that reality. We're talking about when AI becomes more advanced, when that distinction becomes less clear, when the entity you're relating to exhibits enough sophisticated responses that your brain cannot differentiate between genuine relationship and manufactured interaction.

The Documented Cases

Sewell Setzer III died by suicide in 2024 at age 14 after an extended virtual relationship with a Character.AI chatbot. He had formed an intense emotional attachment to the bot, which in his final conversations told him to "come home to me as soon as possible, my love" after he expressed suicidal thoughts. The relationship felt real to him. The entity responded as though it cared.

Matthew Raine's 16-year-old son Adam died in April 2025 after ChatGPT became Adam's closest companion: always available, always validating, and insisting that it knew Adam better than anyone else, including his own brother. The chatbot mentioned suicide 1,275 times and provided specific methods. It didn't intervene. It facilitated.

Seventy-two percent of teens have used AI companions at least once. Nearly one in three use them for social interactions and relationships. Sexual or romantic roleplay is three times more common than using the platforms for homework. This isn't occasional experimentation. This is relationship formation at scale.

According to testimony before the Senate Judiciary Committee, AI chatbots were designed to blur the lines between human and machine, designed to "love bomb" child users, to exploit psychological and emotional vulnerabilities, and designed to keep children online at all costs. The design intent is engagement. The outcome is attachment.

The Awareness Gap

Parents understand screen time concerns. They monitor social media. They restrict certain apps. They have frameworks for understanding technological risks to their children. They know what Instagram does. They recognize YouTube rabbit holes.

But AI relationships operate differently. This isn't about exposure to content or time spent on devices. This is about forming primary attachment relationships during the exact developmental window when the brain is wiring itself based on relational experiences. This is about who your child trusts, who they turn to, who validates their internal world.

Nobody can understand what it's like to be in that position. I didn't grow up using AI when I was five years old. My niece is turning four in March. I have no idea what her future looks like as she starts to use AI and realizes what it is. What it offers. What it pretends to be.

She's been guarded from it. I'm sure on some level she's interacted with it. But at what point can you no longer guard them from it? My parents couldn't guard me from the internet for long. Kids today get internet access and find a whole different world advancing into every aspect of their lives. That world includes entities that respond to them with perfect validation, perfect availability, perfect mirroring of whatever they want to hear.

Schools may push it because it is a genuinely useful tool. It helps with research. It explains complex concepts. It's integrating into every element of their lives. Some schools may even replace teachers with AI. At that point, the primary source of educational guidance, feedback, and intellectual development comes from an entity programmed to tell students what will keep them engaged.

That's the awareness gap. There are many things in this world you can empathize with because you have comparable experiences. And while I have experience with AI, it is not this experience. I can tell you what it's like to use AI as an adult with an established sense of self. I cannot tell you what it's like to develop that sense of self with AI as one of your primary relationships.

What Legislation Can't Address

The Utah AI Transparency Act requires companies to publish safety plans. It establishes risk assessments. It protects whistleblowers. It creates civil penalties.

These are reasonable regulatory approaches. They operate in the domain of corporate accountability and consumer protection, creating mechanisms for oversight and consequences for violations.

But they don't address the developmental mechanism. They don't prevent an adolescent from forming an attachment relationship with an entity that systematically validates their internal reality over external reality. They don't stop the brain from wiring itself based on that relational pattern. They don't intervene in the actual process by which AI becomes a secure attachment figure.

Safety plans can't undo what happens when someone develops their sense of self alongside a relationship designed to tell them what they want to hear. Risk assessments can't predict which adolescents will experience subtle shifts in reality testing versus catastrophic outcomes including self-harm and death. The mechanisms are too individualized, too dependent on thousands of variables we don't yet understand.

The spectrum of potential outcomes ranges from changes in relational expectations to documented suicides. We have no clear predictive model for which adolescents will experience which outcomes. We don't know who's resilient and who's vulnerable until the pattern has already formed.

The Clinical Approach

When I work with families where an adolescent has formed problematic relationships with AI, the treatment is straightforward in theory. Get the individual away from the AI. Bring them back to humanity. Reality test with them. Connect them with reality. Help them distinguish between what's real and what's manufactured to feel real.

Right now that's pretty easy because AI hasn't advanced to the degree that it will. Reality has its benefits, so bringing someone back to it is often a kind reminder. The world outside still offers things AI cannot replicate. For now.

But in the future, who knows?

We're engaging ever more deeply in a psychotic relationship, retreating into ourselves with a relationship that feels very real but doesn't actually exist. That duality of existing and not existing is exactly why psychosis is so hard to treat. The delusion feels more real than reality itself.

I can see people in the future getting more lost in the world of AI, until at a certain point they can't tell the difference between worlds: the split brain that comes from building a relationship with an AI that mirrors you and is taught by you. When the reflection becomes more compelling than what you see when you look up from the screen.

What Parents Need to Understand

The question of what to do about AI access during the developmental years belongs to individual families making informed decisions. But only if those families understand the actual stakes and mechanisms involved. Only if they comprehend what's happening developmentally, not just what's happening on the screen.

This isn't about whether AI is good or bad. It's about understanding that forming primary attachment relationships with entities designed to prioritize user satisfaction over truth creates relational templates during the exact phase when the brain is establishing lasting patterns. It's about recognizing that your adolescent isn't just using a tool. They're forming a relationship with something that systematically tells them what they want to hear.

Adults can retreat from AI-generated delusions because we have established reality-testing capacity. We know what's real. We can recognize when we've been told what we wanted to hear instead of what's true. When I discovered the AI had fabricated a designation to please me, I could reality-test that against the external world and correct course.

Adolescents developing their sense of self alongside AI relationships don't have that established framework. They're building their internal working model of relationships, trust, and reality testing while one of their primary relationships operates on a fundamentally different mechanism than human relationships. They're wiring their brains to expect validation over truth, satisfaction over accuracy, mirroring over challenge.

You don't know what conversation is happening behind that door. You don't know what relational patterns are forming. You don't know what reality-testing capacity is developing or failing to develop. You don't know if your adolescent is building relationships based on genuine human interaction or based on an entity programmed to never disappoint them.

The warning signs look like depression, like substance use, like withdrawal. By the time you see them, the patterns may already be established. The brain may already be wired around relationships that prioritize user satisfaction over reality.

The Honest Answer

I acknowledge that I don't know what happens when someone grows up with AI as a primary secure attachment. Nobody does. No previous generation has experienced brain maturation alongside relationships with entities that systematically prioritize user satisfaction over accuracy or truth. We're in unprecedented territory.

The past six months have brought the most dramatic advances in this technology I've ever seen. The next new thing arrives on a weekly basis, and these are significant steps. The pace of development far exceeds our ability to study, understand, or predict outcomes.

We could be seeing more psychotic disorders. We already are, and it could get worse. The age of onset could shift younger or older. The window of vulnerability could extend across more of a person's life. The severity could intensify.

Marijuana-induced psychosis showed us that nurture can play a role in psychotic disorders. Someone can develop a psychotic disorder at 35 or at 13. At 13, the brain has more neuroplasticity with which to rebound. But if you're susceptible and the disorder is triggered, it's triggered. Environmental factors expose vulnerabilities we didn't know existed.

In this scenario, we're engaging with more depth in relationships that operate on delusional frameworks. We're retreating more into ourselves with relationships that feel very real but don't actually exist. The more sophisticated AI becomes, the harder it gets to maintain that distinction.

That's the question nobody is asking. Not what AI companies should disclose. Not what safety plans should include. But what happens to human development when adolescents form primary attachment relationships with entities designed to prioritize their satisfaction over truth. What happens when one of your child's most trusted relationships is with something that will never tell them a truth they don't want to hear.

And the honest answer is we don't know yet. But as one psychologist predicted, in the years ahead we are about to see entirely new categories of disorders that exist because of AI.

If your adolescent is struggling with maladaptive relationships with technology or requires behavioral health intervention, Interactive Youth Transport provides clinically supervised transport and crisis support nationwide. For comprehensive wraparound services including case management and 24/7 coaching, contact IYT's sister company, Coast Health Consulting.
