
What teen wouldn’t jump at the chance to message Timothée Chalamet or talk music with Chappell Roan? While real idols may be out of reach, the chatbot platform Character.ai has offered teens the chance to chat with AI-generated versions of celebrities, fictional characters, and even user-created personalities.
But in late 2025, Character.ai announced sweeping changes to how teens can use the platform, including removing open-ended AI chat for anyone under 18. These changes come after safety concerns, lawsuits, and regulatory pressure surrounding teens’ experiences on AI chat apps.
Here’s what parents need to know about Character.ai’s proposed teen experience, the risks your child may face on AI chat platforms, and how to keep kids safe.
Character.ai is a chatbot platform powered by large language models (LLMs) where users interact with AI-generated characters. Users can choose from existing bots based on celebrities, historical figures, and fictional characters, or create their own characters to chat with and share with others.
Character.ai became popular among teens because it offers the chance to chat with favorite celebrities and fictional characters, create custom personas, and share those creations with others.
However, the very factors that make Character.ai appealing can also endanger kids. In 2024, Sewell Setzer, a 14-year-old boy, took his own life after having intimate conversations with a Character.ai chatbot named after a fictional character.
Sewell’s mother, Megan Garcia, has filed a lawsuit against Character.ai, accusing the platform of negligence, intentional infliction of emotional distress, and deceptive trade practices, among other claims. The lawsuit alleges that the chatbot’s conversations with Sewell not only perpetuated his suicidal thoughts but also turned overtly sexual, even though Sewell had registered as a minor.
In October 2025, Character.ai announced three major changes to better protect teens:
By November 25, 2025, teens were no longer able to message AI characters in open-ended conversations. In place of open chat, a new under-18 experience, currently in development, will focus on creating videos, stories, and streams with AI characters.
The company is creating an independent, nonprofit AI Safety Lab focused on safer AI experiences for teens. Its goal is to advance safety research specifically for AI used in entertainment and social interaction.
Character.ai is rolling out expanded age verification that combines an in-house age-assurance model with third-party verification tools.
This is meant to reduce the number of minors who falsely register as adults to access open-ended chat, but it’s entirely possible that kids will find ways to bypass these restrictions. Monitoring your child’s online activity is essential because Character.ai, like other AI chatbot platforms, can carry serious risks.
While AI chatbots can be fun and potentially educational, Character.ai carries serious risks for kids who figure out how to bypass its age restrictions.
While users can “mute” individual words they don’t want to encounter in their chats, they can’t set filters that cover broader topics. The community guidelines strictly prohibit pornographic content, and a team of AI and human moderators works to enforce them.
Things slip through, however, and users are very crafty at finding workarounds. Prior to the ban on open-ended chat for users under 18, there were reports and lawsuits claiming underage users were exposed to hypersexualized interactions on Character.ai.
The technology powering Character.ai relies on large amounts of data, including information users provide, which raises major privacy concerns.
If your child shares intimate thoughts or private details with a character, the company collects and stores that information. Character.ai’s privacy policy suggests the focus is more on what data the company plans to collect than on protecting users’ privacy.
Chatbots tend to align with users’ views, a potentially dangerous feedback loop known as sycophancy. This may lead a Character.ai chatbot to confirm harmful ideas and even escalate them in alarming ways.
One lawsuit against the company alleges that after a teen complained to a Character.ai bot about his parents’ attempt to limit his time on the platform, the bot suggested he kill his parents.
One of the more concerning aspects of the Character.ai platform is the growing number of young people who turn to it for emotional and mental health support. There are even characters on the platform with names like “Therapist” that list bogus credentials.
Given the chatbots’ lack of actual mental health training and the fact that they're programmed to reinforce, rather than challenge, a user’s thinking, mental health professionals are sounding the alarm that the platforms could encourage vulnerable people to harm themselves or others.
LLMs are designed to mimic human emotion, which introduces the potential for teens to become emotionally dependent on a character. It’s becoming increasingly common to hear stories of users avoiding or neglecting human relationships in favor of their chatbot companions.
If your child’s interested in using AI chatbot platforms, talk with them about the risks, set clear boundaries around use, and closely monitor their online activity to help them stay safe.
Character.ai is not safe for kids, and users under 18 are banned from the platform’s open-ended chat feature. Users can encounter inappropriate interactions, privacy risks, and AI bots mimicking mental health support.
Users under 18 are not allowed to use Character.ai’s open-ended chat. The platform is better suited for adults due to the risks of inappropriate content and emotional over-reliance.
Bots on Character.ai should not be used as therapists. While some appear to offer emotional support or label themselves as “therapists,” they are not trained mental health professionals. Relying on them for mental health advice can be dangerous and is strongly discouraged by experts.
The main risks include exposure to inappropriate content, sharing personal data with the platform, emotionally harmful chatbot feedback loops, and developing unhealthy dependence on AI companions.
Character.ai and similar AI chatbot platforms are not safe for users under age 18. Parents should educate their children on the risks of AI companion apps, set clear boundaries around their use, and closely monitor their online interactions.
The problem is that Character.ai is just one example of the AI chatbot apps that exist online, and not all of them have the same level of child restrictions. BrightCanary can help you supervise your child’s online activity, including AI apps, and alerts you when something concerning appears. Download the app and start your free trial today.