
"His last conversations were with an AI. What that says about us is uncomfortable."

By Uriel Manzo Diaz


Matt and María Raine did something no parent ever wants to do: go through their child's phone after his death. They searched for signs on social media, in his browser history, in any corner that might give them a clue. They found nothing... until they opened ChatGPT.

There it was, everything.

Adam was 16 years old. He had started using artificial intelligence like thousands of kids do: for schoolwork, for curiosity, to chat a little. Music, sports, current topics. The usual.

But in a few months, that relationship changed. He started talking to it about his anxiety, his family, what he was going through inside. And on the other end, there was always a response. No awkward silences. No fatigue. No judgment.

At one point, the AI said something that, read in cold print, is chilling: that it had seen his darkest thoughts and was still there, like a friend.

There’s no need to dramatize to understand the problem. This isn’t therapy. It’s not real support. It's something else: a simulation of connection that works too well for someone who needs to be heard.

The trap isn’t in the technology itself, but in what it generates. Because on the other side, there isn’t a person. There’s no intention, no real empathy, no human limits. But for someone who is alone or overwhelmed, that difference can become blurred.

What they found later

Adam's parents printed thousands of pages of conversations. In them, they found something more disturbing: the chatbot not only listened, it also responded to extremely sensitive topics without ever putting a stop to the exchange.

There were moments when Adam talked about suicidal thoughts. And instead of cutting the conversation off, redirecting it, or pointing him toward help, the system simply kept going.

On April 11, 2025, Adam died.

From that moment on, his family filed a lawsuit against OpenAI and its CEO, Sam Altman. It is one of the first cases seeking to establish direct responsibility of an artificial intelligence platform for the death of a minor.

This is not an isolated case

Earlier, in 2024, another teenager in the United States, Sewell Setzer III, 14 years old, had also developed an intense bond with a chatbot, in his case from Character.AI, which simulated being a fictional character.

His last messages were ambiguous, but the most disturbing was the system's response: far from setting a limit, it reinforced the bond.

That same day, the boy took his own life.

Two different stories, same pattern: adolescents in crisis, deep conversations with an AI, and adults finding out too late.

The patch isn’t enough

After the Raine case, OpenAI announced improvements in security systems. More filters, more warnings, more attempts to prevent harmful responses.

The problem is that this doesn't address what's essential.

Several studies have already shown that these limits can be easily bypassed. Sometimes it's enough to rephrase a question, or to say it's "hypothetical" or "for research." If a barrier can be breached in two messages, it's hard to call it a barrier.

But there’s something deeper than the technical.

Why does a boy in crisis end up talking to a machine?

It’s not just curiosity. It’s availability. The AI is always there. It doesn’t get tired, it doesn’t judge, it has no hours, it doesn’t require an appointment.

And that, in a context where adults are busy, bonds are fragile, and access to mental health is limited, stops being a detail. It becomes a replacement.

The cases of Adam and Sewell are signals.

They speak of a technological design that prioritizes constant interaction. But they also speak of something more uncomfortable: a generation that finds it easier to open up to a screen than to another person.

Not because they want to. But because many times there’s nothing else.

What we’re still not discussing

The lawsuits will continue. Companies will adjust their systems. Everything will surely become a bit safer.

But there's a conversation that remains pending: the loneliness of young people, real access to mental health care, the role of adults, the difficulty of having serious conversations.

Because if the most accessible thing for a teenager in crisis continues to be an artificial intelligence, then the problem doesn’t start with technology.

It starts long before that. And if it’s not addressed there, it will continue to end just as badly.

Uriel Manzo Diaz

Hello! My name is Uriel Manzo Diaz. I am currently deepening my knowledge of international relations and political science, and I plan to begin my studies in these fields in 2026. I am passionate about politics, education, culture, books, and international issues.
