How Parents Can Monitor AI Companion Apps

By Rebecca Paredes
September 18, 2025

Artificial intelligence (AI) apps are everywhere, and teens are using them more than ever. From ChatGPT to Character.ai to Snapchat’s My AI, these chatbots are marketed as harmless, but they can expose kids to serious risks like dependency, inappropriate content, and even harmful advice.

Parents are asking: what parental controls exist for AI apps? And more importantly: what’s the best way to monitor them?

This guide breaks down:

  • What parental controls exist for popular AI apps
  • Why those tools aren’t enough
  • The best parental monitor for AI that actually works across apps on Apple devices

Quick answer: What’s the best parental monitoring app for AI?

The best parental monitor for AI apps is BrightCanary. Unlike app-based parental controls, BrightCanary works across every app your child uses — including AI chatbots.

  • Tracks every AI app on your child’s iPhone
  • Provides real-time alerts when kids type about risky topics (like drugs, predators, or self-harm)
  • Offers insightful summaries of their chatbot conversations, saving you from scrolling through endless chats

While built-in parental controls for AI apps are minimal, BrightCanary gives parents a simple way to monitor all AI interactions in one place.

Why are AI companion apps in the news?

AI safety has become a major issue for parents and lawmakers. 

The Federal Trade Commission is investigating the effects of AI chatbots on children. The agency aims to limit potential harms and inform users and parents of the risks, and it has demanded child safety information from OpenAI, Meta, Alphabet, Snap, xAI, and other tech companies.

The news comes after a slew of stories about teenagers dying by suicide or facing extreme mental health challenges after extended conversations with AI chatbots. One of them was Sewell Setzer, a 14-year-old boy who killed himself following months of romantic conversations with AI characters on the platform Character.ai.

In August 2025, Matthew and Maria Raine filed the first wrongful death lawsuit against OpenAI, alleging that ChatGPT “coached” their son Adam into suicide over several months.

“The truth is, AI companies and their investors have understood for years that capturing our children’s emotional dependence means market dominance,” said Megan Garcia, Setzer’s mom, in recent testimony before the Senate Judiciary Committee.

Odds are, your teen is chatting with AI. According to Common Sense Media, around 70% of teens are using AI companions, but only 37% of parents know that their kids are using the apps.

Parental controls for AI apps: What’s available now

Here’s an overview of the existing parental controls for popular AI apps your child might use:

ChatGPT

  • Self-harm alerts: According to an OpenAI blog post, forthcoming parental controls will notify parents “when the system detects their teen is in a moment of acute distress.”
  • Parental controls: Parents will be able to link their account with their teen’s, although it’s not yet clear what parents will be able to see.
  • Usage limits: ChatGPT will encourage users to take a break during longer periods of use, and parents will be able to set blackout periods.
  • Content filters: ChatGPT will avoid flirtatious conversations with teen users and won’t engage in discussions of suicide or self-harm, although this update has not yet been released.
  • Age verification: ChatGPT is developing a new system to estimate a user’s age. The minimum age for ChatGPT is 13, and it’s relatively easy to bypass age restrictions.

Character.ai

  • Self-harm alerts: None.
  • Parental controls: Parental Insights shows parents a weekly activity report, including how much time their teen spends on Character.ai and the top characters they interact with (but not the content of their messages).
  • Usage limits: Users receive a notification after spending an hour on the platform.
  • Content filters: Chats go through filters according to the platform’s content guidelines, but it’s possible to bypass NSFW filters. 
  • Age verification: Users have to indicate their birthdate when they create an account, but it’s simple to select an older age. The minimum age for Character.ai is 13.

My AI (Snapchat)

  • Self-harm alerts: My AI shares safety resources when it detects a user is messaging about self-harm, eating disorders, or mental health challenges.
  • Parental controls: When parents set up Snapchat Family Center, they can prevent My AI from responding to their teen.
  • Usage limits: None.
  • Content filters: Parents can restrict sensitive content in Family Center, and the company says the chatbot is age-aware and takes that into account during conversations.
  • Age verification: The minimum age to use Snapchat is 13, but it’s easy to set an older birthdate during account setup.

Meta AI (Instagram, Facebook, WhatsApp)

  • Self-harm alerts: Meta blocks its chatbots from talking with teens about self-harm and suicide. 
  • Parental controls: None.
  • Usage limits: None, although Instagram Teen Accounts feature a one-hour daily time limit reminder.
  • Content filters: Meta chatbots provide safety resources instead of engaging in conversations about disordered eating and self-harm, but the company has come under fire for allowing its chatbots to have inappropriate conversations with children.
  • Age verification: Instagram uses AI to determine if a child lied about their age during account setup.

Why built-in AI parental controls aren’t enough

Every platform handles parental controls differently, and some offer none at all. Additionally, teens can fake their ages, use private browsers, or download new AI apps parents haven’t heard of — all of which can negate the parental controls on AI platforms.

It’s an unfortunate truth that AI companies didn’t start rolling out meaningful parental controls until tragedies made headlines. Parents need to stay informed and involved, because kids can fall into the digital danger zone faster than parents might expect.

The best parental monitor for AI: BrightCanary

BrightCanary monitors every app your child uses, including AI companions. The child safety app just released a new feature that shows you every AI app your child is using, along with summaries of their activity.

Kids use AI for a range of activities, like entertainment and homework help. But if they’re using AI inappropriately, you’ll get an alert in real time. It all starts with a keyboard you install on your child’s iPhone. It’s easy to set up, and it analyzes what your child types across every app: messaging, social media, forums, and AI.

Unlike app-by-app controls, BrightCanary is a unified parental monitor for AI that adapts to whatever platform your child is using.

How parents can protect kids who use AI apps

Even with BrightCanary, it’s important to pair monitoring with parenting strategies:

  1. Talk openly about AI. Let your child know why you’re concerned and what healthy AI use looks like.
  2. Set boundaries. Agree on when and how often AI apps can be used.
  3. Encourage human connections. Remind kids that AI is a tool, not a substitute for real friendships or support.
  4. Stay informed. AI apps evolve quickly — keep learning about new risks and features.

The bottom line

AI companions like ChatGPT, Character.ai, and Snapchat’s My AI are widely used by teens, but built-in parental controls are minimal and inconsistent.

The best parental monitor for AI apps is BrightCanary, which works across every app, provides real-time alerts, and helps you understand your child’s conversations with AI through summaries and emotional insights.

Protect your child today. Download BrightCanary and get started for free.
