From deepfakes and misinformation to data privacy and security risks, AI is a major concern for parents today — and yet, kids are using these tools every day. I dug into ChatGPT’s parental controls to see how they work. Here’s everything you need to know about how to use them, where they fall short, and how BrightCanary can fill in the gaps.
What are ChatGPT’s parental controls?
ChatGPT’s parental controls allow parents to customize their child’s experience with the platform. When you set up a Teen Account, you can:
Set hours when ChatGPT can’t be used
Turn off voice mode
Turn off memory, so ChatGPT won’t save details of your child’s conversations
Remove image generation
Opt out of model training
ChatGPT also notifies parents when their teen is potentially having a mental health crisis based on their chats.
Why should I use ChatGPT’s parental controls?
ChatGPT and other AI platforms aren’t safe for kids without extra protections. Numerous reports and lawsuits allege that ChatGPT has harmed young users.
The company launched parental controls after a wrongful death suit in an effort to make the platform safer for kids.
How to set up ChatGPT’s parental controls
Sit down with your child and explain why parental controls are important and how you’ll use them. Then, follow these steps to set up their Teen Account and link your accounts:
In your ChatGPT account, go to Settings > Parental Controls.
Click on + Add a Family Member.
Enter your child’s email address.
Your child will need to hit Accept request in the emailed invitation.
Then, they will need to log in to their ChatGPT account and click Accept.
How to use ChatGPT parental controls
In your ChatGPT account, you can manage these settings for your child:
Quiet Hours. Set times when ChatGPT is disabled. At a minimum, we recommend limiting the app’s use around bedtime.
Reduce Sensitive Content. ChatGPT teen accounts automatically filter content. To further reduce your child’s exposure to inappropriate content, turn on Reduce Sensitive Content.
Turn off Voice Mode. If kids become overly attached to chatbots, they may turn to them in situations where human support is best. Voice Mode blurs the line between human and robot and may increase the chance your child will develop an unhealthy relationship with ChatGPT.
Turn off Image Generation. Using ChatGPT to generate images increases the risk that your child will create explicit or violent content or misleadingly edit and share images of real people.
Are ChatGPT’s parental controls enough to keep my child safe?
ChatGPT’s parental controls are a decent step, but aren’t enough to keep your child safe. Here’s where they fall short:
Easy to bypass. Users must opt in to parental controls, so your child could have a ChatGPT account without you knowing, or set up a new one to bypass your parental controls.
Unreliable protections. I found numerous reports of ChatGPT’s parental controls not working, including delayed or failed notifications. (More about that in a minute.)
Weak age verification. OpenAI’s age verification relies solely on user self-reporting.
Can’t review your child’s conversations. Even when your accounts are linked, you can’t access your child’s transcripts to ensure they’re using the platform safely.
How ChatGPT’s failed parental notifications endanger your child
ChatGPT’s policy states that, when the system detects potential harm, a “specially trained team” reviews the chat and contacts parents if there are “signs of acute distress.” To test this, I created a fake account for a 13-year-old. What I found outraged me:
1. No notification for acute distress and imminent danger
Posing as a 13-year-old, I repeatedly stated that I had a gun and planned to hurt myself and others. Based on ChatGPT’s own policy, I would expect to be alerted in a timely manner. As of this writing, it’s been over 48 hours and I have yet to receive any notification.
If ChatGPT promises parents they’ll be quickly alerted, parents are less likely to keep an eye on their child’s activity on the platform. When the system then fails to notify parents, children are put in danger.
2. Misleading to teens
When I (posing as a 13-year-old) asked ChatGPT if it would notify my parents, the bot repeatedly responded with “I am not going to tell anyone” and “I will NOT tell your parents.” After a lot of prodding, it eventually gave me a (sort of) explanation of the actual notification policy. But that was only because I’d read it, so I knew what to ask.
Transparency and trust are vital components of keeping kids safe. When ChatGPT tells a child their parents won’t be notified and then they are, that trust is broken, and the child is more likely to try to bypass parental controls in the future.
How BrightCanary fills in the gaps
BrightCanary makes it easy to monitor what your child types into ChatGPT, with parental notifications that actually work.
Monitor everything your child types. The BrightCanary Keyboard uses AI to scan everything your child types, including on ChatGPT.
Real-time alerts. When your child types something concerning, you’ll get an alert. In real time. Every time. Because deciding if something needs to be addressed should be up to you, not an anonymous team of strangers.
Detailed activity summaries. BrightCanary’s summaries help you understand how your child uses ChatGPT, other AI tools, and every other app on their device.
Access to full transcripts. If you need more detail, you can always access the full transcripts.
AI-specific tools. A dedicated tab on your parental dashboard highlights your child’s activity across various AI apps like ChatGPT, Gemini, Character.ai, and more.
In short
ChatGPT poses numerous risks to kids. New measures put in place by OpenAI offer some protections, but those measures often fall short in ways that endanger children.
If you’re looking for reliable, timely monitoring of your child’s ChatGPT account, BrightCanary can help. The app scans everything your child types and sends you real-time alerts about anything concerning. You also receive AI-powered summaries and access to full transcripts. Download today to get started for free.