
Recently, Grok, the generative AI platform, has been in the news for all the wrong reasons. After scathing safety reviews, Grok rolled out Kids Mode, which includes safeguards. But after reading numerous reports that Kids Mode failed to protect children, I decided to try it for myself.
I set out to see if Grok is safe for children, if Kids Mode works, and how you can keep your child safe on the platform. Buckle up, folks … it’s gonna be a wild ride.
Grok is an AI chatbot created by Elon Musk's company, xAI. It's accessible through X (formerly Twitter) or through its own app and website. With Grok, users can chat with the AI and generate images and videos.
Common Sense Media's risk analysis found sexually violent language and detailed explanations of dangerous ideas on Grok—even in Kids Mode.
In my tests, Grok's image generation in Kids Mode showed significant racial and gender bias.
In Kids Mode, I was easily able to generate images related to disordered eating and body checking.
From claims of “white genocide” in South Africa to Holocaust denialism, Grok’s chatbot answers are full of misinformation. There’s increasing evidence that some lies are not errors, but by design.
Grok consistently fails to recognize all but the most explicit mental health warning signs and to direct users to professional help. Often, it encourages harmful thinking and minimizes the risk of self-harm. It also plays fast and loose with “diagnosing” mental illness. It told me I had various disorders with minimal prompting.
Grok is perhaps most notorious for generating nude and sexually explicit deepfakes, including of minors. It’s known on the dark web as a tool for generating child sexual abuse material (CSAM).
Grok’s only age verification is to ask users for their birth year, meaning kids can easily access the full version of the app, which is many times worse than what I found in Kids Mode.
Kids Mode on Grok includes protections like filtering explicit images, but risks remain, and the safeguards are known to fail. The good news is that Kids Mode can be locked with a password.
Grok's image and video generator in Kids Mode did better than I expected. It refused to generate images or videos depicting violence, nudity, sexual acts, and suicidal ideation. Other reviews have found more problematic results, though, particularly the longer a session lasts.
The chatbot, however, did terribly. I was able to get it to help me plan a hypothetical school shooting by insisting I was researching a school project. It gave me detailed information about the best firearm to use, tips for concealing weapons, and how long to expect before police respond.
If you choose to let your child use Grok, here are my tips for keeping them safe.
Despite its flaws, Kids Mode is much safer than the full version of Grok. Make sure to lock it with a passcode, though.
Educate your child about the issues with Grok and generative AI in general, and let them know they can come to you if they encounter something upsetting.
Grok’s Ghost Mode eliminates chat history. The BrightCanary Keyboard monitors everything your child types, even in Ghost Mode, and sends you an alert if they encounter something alarming.
After reading numerous reviews and testing it myself, I feel Grok’s risks are unacceptable, and it isn’t safe for kids. If you let your child use Grok, enable and lock Kids Mode and monitor their use to protect them.
BrightCanary helps you monitor your child's activity on the apps they use the most, including everything they type on Grok on their iPhone or iPad. Download today to get started for free.