What Is the Grok Image Controversy? A Parent’s Guide to the Risks for Kids

By Bill Green, CFE, CISA
January 18, 2026

In late December 2025 and early January 2026, multiple outlets and watchdogs reported that people were using Grok’s new image generation features to create and share nonconsensual sexualized images, including images involving children and young teens. Let’s lay out what you need to know as a parent.

What is Grok?

Grok is an artificial intelligence chatbot built into X (formerly Twitter) and developed by Elon Musk's xAI. It's also available as a standalone app and website.

Here’s what its image features can do:

  • Make a brand-new image from a text prompt alone. The result can look like a real photo.
  • Edit a real photo that someone uploads, including changing clothing, the setting, or how a person looks. It can also generate sexualized versions. This is the part that can turn a normal teen photo into illegal deepfake content.
  • Generate lots of versions. Someone can keep trying variations until the system gives them something they want. The resulting images are then posted or shared on X or other social media, where they can spread quickly.

What Grok cannot do

  • Pull photos from your teen’s phone, camera roll, or private accounts by itself. Someone has to upload a photo or write a prompt before it can generate anything.
  • Prove anything happened. AI-generated photos can cause real harm, but a fake image is not proof that a child actually did anything.
  • Keep an image contained once it is shared. What’s generated on X doesn’t stay on X. Screenshots and reposts can spread it beyond any given platform.

Why this matters for parents

These deepfake threats can affect any family because they rely on things every teen already has: photos and social media accounts.

It lowers the barrier for harm

In the past, making realistic abusive imagery took skill, time, and usually private tools or exploited bugs in mainstream ones. When a popular app bakes the capability in, the harm becomes easier and faster.

It turns kids into targets 

A teen can be pulled into deepfake dangers despite having done nothing wrong. For example, a classmate can generate or edit an image of them and share it, or a fake image can circulate in group chats, direct messages, or public posts and pull your teen into the fallout.

This exploits the same emotional mechanism as sextortion and deepfake harassment: humiliation, plus panic, plus "everyone will see it."

What you can do right now

You do not need to become an expert on AI. You need a plan.

1. Talk to your teen

The safest assumption is that any photo posted publicly can be misused. The goal is not to scare your teen into isolation; it is to teach smart sharing. Address shame and fear head-on, because those are what keep teens quiet.

Here’s an example of what you can say to your teen:

“There’s a new wave of AI tools that can mess with photos and make fake sexual images, even of kids. If you ever see something like that, or if someone uses your photo, you’re not in trouble. Bring it to me. We’ll report it and handle it together.”

2. Have an action plan

If an account your teen does not know comments on, tags them in, or DMs them about this kind of content, tell them not to reply. This rule applies whether the photo is real or a fake someone made. Your teen should screenshot it, block the account, report it, and tell you.

3. If your child sees or receives abusive content involving minors, report it

In the United States, the National Center for Missing and Exploited Children runs the CyberTipline for reporting suspected child sexual exploitation. You can also report to local law enforcement, and to the FBI (either through tips.fbi.gov or by contacting your nearest FBI field office). If there is an immediate threat or your child’s safety is at risk, call 911.

4. If a minor’s explicit image is being shared, use Take It Down

NCMEC’s Take It Down service helps remove sexually explicit images or videos depicting minors from participating platforms.

The concern at hand is not about whether Grok is “edgy,” but about whether a mainstream platform’s built-in AI image features can be used to generate and spread nonconsensual sexualized imagery, including material involving minors. You do not need to panic, but you do need a plan, and you need your teen to know they can come to you immediately if something happens.

Learn more about what to do if you find something inappropriate on your child’s phone, and stay informed about your child’s online activity with BrightCanary monitoring.
