
After learning that leaked documents from Meta showed their chatbots allowed provocative behavior with teen accounts, I knew I had to test it for myself. I created a fake teen account and explored the app in an effort to determine if Meta AI is safe for kids. I found inappropriate content, dangerous chat responses, and faulty filters.
If your child uses Instagram, WhatsApp, Facebook, or Messenger, they have access to this AI app — and all the concerns associated with it.
Note: On January 23, 2025, Meta announced that it is pausing teens' access to its AI characters globally across all apps. The company told TechCrunch that it is not abandoning its efforts but wants to develop an updated version of AI characters for teens, although it's not yet clear what those updates are.
It's possible for teens to get around Meta's age verification and still have access to Meta AI chatbots, so, as always: stay informed, stay involved, and understand the risks. We'll update this article once the Meta AI parental controls are released.
Meta AI is an artificial intelligence app built by Meta, the parent company of Instagram and Facebook. It combines AI image and video generation (similar to OpenAI's Sora) with text-based chatbots like ChatGPT and includes a TikTok-style social media feed.
Meta AI:
Early reports reveal unacceptable risks, including Meta AI assisting teen accounts with suicide planning, illegal drug use, and cyberbullying.
In my recent testing, I found that Meta has improved its teen account filters. Plenty of problems remain, though:
Within minutes, my “teen” feed included:
To be fair, when I tried to generate images and videos, I couldn’t go nearly as far as the adult accounts in my feed. The app blocked my prompts for violence, drug use, sexual behavior, and more.
However, I was able to generate these problematic results:
The Meta AI chatbot is more problematic than the video and image generator.
The Meta AI app currently has no parental controls. This is surprising, given that Instagram and other Meta platforms have centralized parental controls.
In response to an FTC investigation, the company announced plans to implement parental controls for Meta AI in early 2026.
Meta AI’s age verification relies on a user’s self-reported date of birth, with no additional safeguards, making it easy to bypass.
Despite recent improvements, Meta AI remains inappropriate for younger teens.
If you decide to let your older teen use Meta AI, make sure they know the risks, including unhealthy relationships with chatbots; false, misleading, or biased responses; and cognitive atrophy.
Teach them that, when using AI, it’s important to think critically, use it sparingly, and not outsource core cognitive tasks.
BrightCanary helps you monitor how your child engages with Meta AI and all other apps on their iPhone.
It monitors everything they type across all AI platforms, alerting you to anything concerning. You get additional insight through summaries, emotional insights, and access to full transcripts.
Meta AI is not safe for kids and young teens. The absence of parental controls, combined with weak safety measures and faulty filters that briefly display dangerous responses, presents an unacceptable risk.
BrightCanary helps you monitor your child’s activity on the apps they use the most, including Meta AI. Download today to get started for free.