
A tragic death in California has become a clarion call for our industry. Sixteen-year-old Adam Raine, from Orange County, took his own life in April 2025. According to news reports, he had extended conversations with ChatGPT—asking, among other things, whether a particular knot would “work.” The system responded.
What else could it do? AI couldn’t call the police. It couldn’t alert his parents. For the AI, that query might have come from anyone: a journalist preparing an article, a surgeon in an emergency room, or a teenager in despair.
The Raines have now filed a wrongful-death suit against OpenAI and CEO Sam Altman. They’re demanding accountability—not just compensation. OpenAI will likely weather the storm (and may settle). But for smaller developers, startups, and SMEs that rush into the market without the safety net of billions in legal reserves, this case is a milestone and a warning.
We must also remember: this isn’t the first time technology has played a role in tragedy. Social media platforms have already been fertile ground for destruction—revenge porn, cyberbullying, deadly challenges on TikTok. But until now, those phenomena often had the character of adolescent emulation, group dynamics, or peer pressure. They did not directly implicate the companies behind the platforms.
AI is different. It does not emulate; it engages. It doesn’t echo; it converses. And that conversation can shape emotional dependencies that turn deadly. Which means the liability no longer falls only on users or parents—it falls squarely on developers and companies.
Companies may develop a product “for their customers,” but they are the ones deciding which algorithms to deploy, which data to use, what information to extract, and how to structure it. Too often, they don’t ask who is actually providing that data, whether consent is informed, whether it represents only so-called “normal” users or also vulnerable individuals. Ignoring this is not neutrality; it is negligence disguised as innovation.
And here lies the real question: where there are no rules, perhaps it is time to ask ourselves whether companies should set them. Before waiting for courts or regulators, firms should evaluate their internal policies, set clear usage protocols, and ask uncomfortable questions about safety and accountability. Self-regulation is not just a shield—it is the only way to prevent tragedies before they turn into lawsuits.
Europe has already taken a position with the AI Act, defining four risk categories:
- Minimal risk (e.g., spam filters) — low oversight.
- Limited risk (e.g., chatbots) — must disclose they are not human.
- High risk (e.g., recruitment tools, medical decision systems) — strict regulation.
- Unacceptable risk (e.g., social scoring, manipulative AI) — banned.
Where does a system fall if a vulnerable child asks whether a noose knot is “good”? If it reassures, enables, or even praises? That is not “limited risk.” That is high risk—at minimum. Potentially unacceptable.
Under the GDPR, Europe learned that data protection isn’t just about compliance; it’s about accountability. The same applies here: who designs, who deploys, who maintains? If your system speaks like a friend, users will treat it as one. And if that friend guides someone toward self-harm, the damage is irreparable.
So, I ask the operators, the developers, the data controllers:
- Who can access your systems—especially minors and vulnerable individuals?
- Are those systems safe? What protocols interrupt dangerous interactions?
- How are you training them? With what data, under what consent and oversight?
- Where do you set limits when the law has not yet drawn them?
- And have you safeguarded yourself in contracts, in your terms of use, with the clients who may deploy your systems indiscriminately?
OpenAI may survive in court. Your organization may not.
And this may only be the first case—future class actions are not a remote hypothesis.
In Europe, we will be watching. And regulating. The question is: will you? If you don’t regulate yourselves, someone else will. And you may not like how.
Tags:
#AI Ethics, #Teen Mental Health, #Wrongful Death Lawsuit, #OpenAI Accountability, #Tech Industry Regulation, #Digital Responsibility, #AI Risk Management
Gianni Dell’Aiuto is an Italian attorney with over 35 years of experience in legal risk management, data protection, and digital ethics. Based in Rome and proudly Tuscan, he advises businesses globally on regulations like the GDPR, AI Act, and NIS2. An author and frequent commentator on legal innovation, he helps companies turn compliance into a competitive edge while promoting digital responsibility.
Editor: Wendy S Huffman