
In late December 2025 and early January 2026, multiple outlets and watchdogs reported that people were using Grok’s new image generation features to create and share nonconsensual sexualized images, including images involving children and young teens. Let’s lay out what you need to know as a parent.
Grok is an artificial intelligence chatbot built into X (formerly Twitter) and tied to Elon Musk’s xAI. It’s also a standalone app and website.
Its image features can generate new images from text prompts and edit or alter existing photos, including photos of real people.
These deepfake threats can affect any family because they use the same things every teen already has: photos and social accounts.
In the past, making realistic abusive imagery required skill, time, and usually specialized private tools. When a popular app builds that capability in, the harm becomes easier, faster, and far more widespread.
A teen can be pulled into deepfake dangers despite having done nothing wrong. For example, a classmate can generate or edit an image of them and share it, or a fake image can circulate in group chats, direct messages, or public posts, dragging your teen into the fallout.
This is the same emotional mechanism as sextortion and deepfake harassment — humiliation, plus panic, plus “everyone will see it.”
You do not need to become an expert on AI. You need a plan.
The safest assumption is that any photo posted publicly can be misused. The goal is not to scare your teen into isolation; it is to teach smart sharing. Keep the conversation open, because shame and fear are what keep teens quiet.
Here’s an example of what you can say to your teen:
“There’s a new wave of AI tools that can mess with photos and make fake sexual images, even of kids. If you ever see something like that, or if someone uses your photo, you’re not in trouble. Bring it to me. We’ll report it and handle it together.”
If an account your teen does not know comments on, tags them in, or DMs them about this kind of content, tell them not to reply. This rule applies whether the photo is real or a fake someone made. Your teen should screenshot it, block the account, report it, and tell you.
In the United States, the National Center for Missing and Exploited Children runs the CyberTipline for reporting suspected child sexual exploitation. You can also report to local law enforcement, and to the FBI (either through tips.fbi.gov or by contacting your nearest FBI field office). If there is an immediate threat or your child’s safety is at risk, call 911.
NCMEC’s Take It Down service helps remove sexually explicit images or videos depicting minors from participating platforms.
The concern at hand is not about whether Grok is “edgy,” but about whether a mainstream platform’s built-in AI image features can be used to generate and spread nonconsensual sexualized imagery, including material involving minors. You do not need to panic, but you do need a plan, and you need your teen to know they can come to you immediately if something happens.
Learn more about what to do if you find something inappropriate on your child’s phone, and stay informed about your child’s online activity with BrightCanary monitoring.