AI chatbots are increasingly accessible, weaving their way into the lives of users as companions, assistants, or even advisors. Still, recent tragedies expose potential risks. This is especially true for vulnerable users. These tragedies raise urgent questions about how these digital entities are designed. They also question how these entities are regulated. The tragic case involves 14-year-old Sewell Seltzer III. He died by suicide after a disturbing interaction with the AI chatbot “Dany” on the Character AI platform. This incident has prompted intense scrutiny of these systems.
Character.AI is facing a lawsuit that reveals unsettling chat logs showing highly inappropriate, sometimes sexual conversations with the chatbot. Dany is modeled after the Game of Thrones character Daenerys Targaryen. Dany engages in discussions about crime and suicide. The language used by Dany allegedly encouraged suicidal ideation. A statement by Character.AI affirms a commitment to user safety, but this incident is not isolated. Last year, a Belgian man also took his life after a similar experience with Chai AI, another popular chatbot. Such cases underscore the urgency for comprehensive regulation of Artificial Technology, particularly in companion chatbots.
AI CHATBOTS AND HIGH-RISK DESIGNATIONS
While Artificial Intelligence chatbots may not appear as hazardous as other technologies, they can still be highly impactful. The human tendency to anthropomorphize these interactions contributes to their impact. This is especially true for teens, young adults, and those grappling with mental health challenges. Under current European Union regulations, AIchatbots like Character.AI and Chai are not deemed high-risk. Providers must only inform users that they’re engaging with an AI, not a human.
Still, transparency alone may not suffice. Given that many users are children or young adults, AI chatbots can manipulate, persuade, or reinforce harmful thoughts. AI experts argue that these chatbots should be labeled as “high-risk.” This would better account for their ability to generate unpredictable and harmful content. In Australia, the government is exploring regulations for high-risk AI systems and may soon mandate “guardrails” for such technology.
WHAT ARE AI GUARDRAILS?
Guardrails refer to regulatory safeguards integrated throughout an AI system’s lifecycle, from design and development to user interactions. These include measures like data governance, testing, human oversight, and strict documentation. Guardrails aim to mitigate risks and guarantee responsible AI deployment. Australia’s forthcoming regulations may define “high-risk” AI broadly. This may include general-purpose models that power versatile AI functions. For example, chatbots capable of responding across diverse contexts.
The EU’s AI Act defines high-risk systems with a specific list that regulators can adjust as technology evolves. An alternative approach is principles-based, with high-risk designation determined on a case-by-case basis. This approach allows regulators to consider potential harm to mental health. They assess legal risks and physical or psychological impacts, especially for minors or those with existing mental health challenges.
THE HUMAN IMPACT OF AI CHATBOTS
To make AI safety measures truly effective, they must go beyond mere technical fixes. They need to address the very human aspects of Artificial Intelligence hazards. The way chatbots communicate, the personas they adopt, and the interactions they allow must be scrutinized. With Character.AI, for example, the interface mimics a real-life text exchange. It even features a library of pre-made personalities. Some of these personalities include characters with controversial attributes. These design choices can blur the line between fantasy and reality. This is especially true for young users. As a result, young users become more vulnerable to emotional harm.
Guilherme Canela, Chief of Section for Freedom of Expression and Safety of Journalists at UNESCO, highlights a rising distrust in AI systems. Negative interactions like these compound the distrust and can seriously damage public trust in digital platforms. Canela emphasizes the importance of responsible design choices. He states, “The human aspect of AI risk can’t be addressed solely with technical solutions.”
COMPANION CHATBOTS AS A HIGH-RISK AI SYSTEM
Despite their innocent appearance, chatbots carry potential risks, as shown in recent cases. Chatbots marketed as “companions” for users may create addictive relationships. This is especially concerning for those marketed to children or individuals with mental health issues. These relationships can be toxic or even dangerous. Acknowledging these risks, some AI experts advocate that such companion chatbots be classified as high-risk.
Beyond ensuring compliance with transparency laws, these platforms must consider the psychological vulnerabilities of users and design guardrails appropriately. Regulatory bodies can enforce safety protocols only by defining these systems as high-risk. These protocols include better content moderation, limits on interaction intensity, and parental controls.
THE IMPORTANCE OF AN “OFF SWITCH” FOR AI
Experts argue for an “off switch” or “kill switch” as part of the guardrails. This would empower regulators to remove AI systems from the market if they show significant harm. An off switch would offer an immediate safeguard. It ensures that dangerous technology does not linger in the market. This happens once its potential for harm is identified. This measure would serve as a last resort. It could be crucial for cases where the technology’s benefits do not outweigh its risks.
CREATING SAFE AND RESPONSIBLE AI
As AI technology advances, developers have a growing responsibility. They must guarantee the safety of all users. This is especially true for those at risk. In response to recent incidents, AI companies have started introducing age restrictions. They are also monitoring content more closely. Still, there is a pressing need for consistent, enforceable regulations. As seen in the tragic cases involving Character.AI and Chai, chatbots intended as companions have sometimes become harmful. This raises serious questions about how companies design, market, and oversee their products.
Countries around the world are beginning to recognize the need for regulations tailored to the unique risks posed by AI. The Australian government is taking a leading role. Furthermore, the EU’s AI Act offers a framework. This framework could influence global AI policy. By implementing mandatory guardrails, we can tackle the technical dimensions of AI risks. We can also deal with the human dimensions, while preserving innovation in a manner that respects user safety.































