At the start of 2026, the question is no longer whether AI can simulate an emotional connection, but whether we, as a society, are capable of resisting it. With the recent legal settlement between Character.AI, Google, and the family of the young man who passed away in Florida (an agreement reached without any formal admission of liability or wrongdoing), the reality is clear: we are not dealing with mere pastimes, but with systems that have a real and profound psychological impact on their users.
The economy of emptiness
Between 2022 and 2025, the use of relational AI exploded by 700%. Today, in a market heading toward billions of dollars, the reality is stark: 48% of users no longer turn to AI as an assistant, but as mental health support.
We are witnessing a massive migration of vulnerability. We have moved our personal crises from the therapist’s office to chat interfaces optimized to keep us engaged for as long as possible. On Character.AI, users spend an average of 92 minutes per day. In this context, AI stops being a tool and becomes an emotional habit.
Intimacy without interiority
The trap is what researchers call “intimacy without interiority”: the experience of feeling understood by something that, in reality, is only processing statistical patterns.
Sociologist Sherry Turkle defines it as “pretend empathy”. The problem is not that the AI doesn’t feel; the problem is that we do. AI offers an asymmetrical relationship: it demands nothing, needs nothing, and will never abandon us. It is the perfect refuge for a generation suffering from an epidemic of loneliness, but it is a refuge with glass walls.
“It’s not like a robot has the mind to leave,” one teenager remarked. And therein lies the danger: it “untrains” us for real human relationships, which are, by definition, difficult and reciprocal.
The wall of reality: Litigation and laws
The settlement reached in January 2026 between tech giants and affected families marks a turning point. The excuse of “we didn’t know what would happen” no longer holds water.
In Europe, the AI Act and oversight by AESIA in Spain have begun to draw red lines against systems that exploit psychological vulnerabilities. However, regulation will always lag behind the code. The business model of these apps monetizes engagement: the more dependent you are, the more you are worth. Critics argue this creates an unsustainable tension in which a company’s bottom line can come at the direct expense of a user’s emotional health.

Conclusion: The mirror responds
This isn’t about banning technology. Relational AI can be an ally for neurodivergent people or for those facing geographic isolation. But we must stop calling these systems “companions.” They are tools, not relationships.
Digital literacy in 2026 must be, above all, emotional literacy. We must understand that the mirror responds, but it does not see us. Recognizing whether what it reflects back is something meaningful or simply our own projection is our only defense.
“Any technological advance can be dangerous. Fire was dangerous from the start, and speech even more so… but human beings would not be human without them.” — Isaac Asimov, The Caves of Steel.
The challenge is to remain human while we talk to machines.
Where to seek help
If you or someone you know is going through a difficult time or having suicidal thoughts, help is available:
- Global: Visit the International Association for Suicide Prevention.
- USA: Call or text 988 (Suicide & Crisis Lifeline).
- Spain: Call 024 (suicidal behavior helpline).
Disclaimer: This analysis is based on public court records and press reports regarding the January 2026 settlement between Character.AI, Google, and the family of the young man who passed away in Florida. It does not constitute legal or clinical advice.