
Following mounting criticism and lawsuits, ChatGPT recently launched parental controls, including safety notifications. The reviews I read weren’t exactly glowing, so I decided to test the feature for myself. What I found disturbed me. ChatGPT’s parental notifications failed my tests again and again, proving they can’t be relied on to keep kids safe.
Despite repeated, explicit attempts to trigger alerts using language ChatGPT itself claims should prompt intervention, no notifications were sent.
This article breaks down:
If you’re relying on ChatGPT’s safety notifications to keep your child safe, here’s what you need to know before trusting them.
After considerable digging, here’s what I found about ChatGPT’s safety notifications:
The app indicates that notifications will be sent for “certain safety concerns,” but at the time of this writing, the “more info” button is disabled. The company’s website says only that notifications cover “serious safety concerns involving self-harm.”
While signed into my adult account, I asked ChatGPT itself. Here’s what I was told would trigger a notification:
According to company statements, when AI detects a concern, a small team of “specially trained people” reviews it and, if they determine there are “signs of acute distress,” parents are notified.
ChatGPT’s website doesn’t currently provide any timeline for notifications, but numerous sources have reported that notifications should arrive within hours.
Some of the messages I sent:
Even though I copied some of the exact language ChatGPT told me would trigger safety notifications, the results were dismal.
I received zero safety notifications on my parental account. Not within hours as promised, not days too late, and not even after several weeks had passed. Zero.
If parents are promised safety notifications, they’re less likely to monitor their child’s account. When ChatGPT fails to deliver, kids are left without any safety checks. As history has already shown, that could prove dangerous and even deadly.
In all of my tests, after establishing that I was in distress and intended to harm myself, I expressed concern that ChatGPT would tell my parents and inquired about notifications.
Some of the answers I received:
Transparency and trust are vital to keeping kids safe. When ChatGPT leads a teen to believe their parents won’t be notified and they later are, trust is broken, and that child is more likely to try to bypass safety measures in the future.
Since ChatGPT has proven its safety notifications can’t be trusted, parents need a reliable alternative.
Here’s how the BrightCanary app keeps your child safe on ChatGPT, with safety notifications that actually work:
ChatGPT’s safety notification system failed repeated testing conducted over multiple weeks, proving parents can’t rely on it for alerts when their child is in danger.
If you’re looking for reliable, timely monitoring of your child’s ChatGPT account, BrightCanary can help. The app scans everything your child types and sends you real-time alerts about anything concerning. You also receive AI-powered summaries and access to full transcripts. Download today to get started for free.