Discord Age Verification: What the 2026 “Teen-by-Default” Rollout Means for Parents

By Bill Green, CFE, CISA
February 12, 2026

Discord is moving to a “teen-by-default” model worldwide. Starting with a phased global rollout in early March 2026, Discord will apply stricter default safety settings to new and existing accounts and will require age assurance to access 18+ channels, unblur flagged content, or to turn off certain safety protections. 

This update is part of Discord’s global effort to comply with online safety laws and age-appropriate design standards. For parents, the practical reality is straightforward: teens do not need to “unlock” adult settings, so Discord’s default safety settings are worth keeping. 

When does Discord trigger age verification?

Discord says age assurance prompts can appear when an unverified user tries to:

  • Unblur sensitive media or change sensitive media filter settings
  • Access an 18+ channel or server
  • Change message request screening and similar safety settings
  • Use certain server features like speaking in some “stage” or voice contexts

If they do not verify as an adult, the usual outcome is simple: the protections stay on and adult-only features stay locked.

Related: What Parents Should Know About Discord Servers for Teens

How does Discord age assurance work?

Discord describes three methods:

  • Age inference model: A machine-learning model assigns an age group only when Discord’s confidence is high. Discord states it does not use message content in this model. 
  • Face scan / facial age estimation: A short “video selfie” is used for facial age estimation. Discord claims it is processed on-device and never leaves the device. 
  • ID scan: A user submits a government ID and typically a selfie for matching. Discord says it works through vendor partners and that documents are deleted quickly, often immediately after age confirmation. 

What happens if a teen fails or refuses verification?

Discord says the experience is designed so most users can keep using Discord without verifying; the practical consequence of refusing is that the “teen-by-default” protections stay in place and adult-only features remain inaccessible. 

If age assurance fails, Discord indicates the user can retry (often with ID scan as a fallback), and if a user is verified as below the minimum required age in their country, the account will be banned with an appeal path. 

What data does Discord collect for age verification?

At account creation, Discord requires a birthday and states that in some cases it may require additional information to verify age. 

  • Age group result stored in the account: Discord says verification status/age group is private to the user, and the user can check it in account settings. 
  • For face scan: Discord says the video selfie is processed on device and does not leave the device; Discord claims it only receives an age group. 
  • For ID scan: Vendor partners receive ID and selfie images to confirm age; Discord says documents are deleted quickly and Discord only receives an age result. If you submit an ID for an age verification appeal, Discord says it will be deleted within 60 days after the ticket is closed.
  • For age inference: Discord says it uses account signals and behavior patterns and does not use message content. 

What remains ambiguous for parents:

  • Where verification data is processed: Discord’s privacy policy discusses international transfers and that it processes and stores information in the US and other countries, depending on vendors/users, but it does not provide a simple “age assurance data residency” chart. 
  • What specific signals power age inference beyond “behavior patterns and other signals” and how bias is measured: Discord describes the approach but does not publish external audit results or error rates. 
  • How “quick deletion” is verified: Discord asserts these practices, but without public third-party audits, parents must largely rely on trust rather than independent verification. 

Practical actions for parents before March 2026

What to teach your kids before March:

  • “You don’t need adult mode.” Very few teens need to access 18+ servers or unblur sensitive content. Adult verification is an account-wide setting; there is no per-channel or per-server whitelist and no way to bypass the age check.
  • Never respond to “verify your age” requests that arrive by email, text, direct messages, or chat. Discord says it prompts for age assurance only inside the app. 
  • Treat ID images like financial credentials. If an ID image is ever stolen, it cannot be changed the way a password can. In October 2025, a Discord security incident exposed ID documents submitted through service tickets — a reminder that government ID images carry long-term risk if compromised. The longer a platform retains your data by policy, the more care you should take before submitting ID information.

Actions you can take:

  1. Make a hard rule: no ID uploads and no face scans without a parent present. For most teens, the correct answer is “do not unlock adult mode.” Currently, Discord offers no technical way to block a child from attempting it.
  2. Ensure account security basics are activated: a strong, unique password plus multi-factor authentication (MFA) where possible. Have the MFA code or passkey go to a parent device or email rather than the child's account. Use this as an opportunity to make sure the information on your child’s accounts and linked accounts is up to date.
  3. Help your child set message and friend-request boundaries to screen for strangers.
  4. Explain what blurred or blocked content means and why they cannot see it.

And, finally, monitor their activity across Discord and all the other apps they use. BrightCanary is the most robust way to supervise your child’s activity on iOS, from keyboard monitoring to text message monitoring and emotional insights. Get started today with a free trial on the App Store.


Imagine this: you are scrolling on your phone after dinner when a parent shares a photo in the class group chat. The image shows your child behind the cafeteria, holding a vape, and your name is getting tagged in the conversation. Your child is right there in the doorway, insisting the photo is fake. You can feel the urge to respond immediately, ground your child, or take some kind of action.

But with today’s AI tools, a picture can look perfectly ordinary and still be generated, edited, or ripped out of context. Before you treat it as evidence, the most important step is verification. Where did the image come from? Who posted it first? And could it have been altered using AI? 

One of the tools parents should understand right now is Nano Banana, Google’s highly realistic image-generation and editing system built into Gemini. Its capabilities raise serious concerns for families — especially when deepfake images are used to embarrass, sexualize, or falsely accuse kids of behavior they didn’t engage in.

What is Nano Banana?

Nano Banana is Google’s AI model for text-to-image generation and editing. Officially, it’s called Gemini 2.5 Flash Image. The model can generate extremely convincing images from a prompt and edit real photos in ways that blend smoothly, like swapping backgrounds or details, inserting or removing objects, and retouching.

Learn which AI apps your child is using with BrightCanary monitoring for iOS.

Why Nano Banana deepfakes are a real risk for kids

Nano Banana images look almost identical to real images. A photo can be pulled from a profile, a team site, a yearbook page, or a friend’s camera roll, and then edited to embarrass, sexualize, or frame a child as doing something they never did. 

Even when an image is later proven to be an AI-generated deepfake, the social damage can stick — especially in middle school, where reputations move faster than corrections. 

How parents can respond when an image might be AI-generated

Parental controls for Nano Banana are the same parental controls for Gemini apps. Currently, there is no separate switch that only disables image generation. 

In Family Link, Gemini access on a supervised account is essentially on or off. For kids under 13, or your country’s age threshold, a parent has to enable access before the child can use Gemini apps, and you can turn it off at any time. 

Google says minors signed into their Google account have added content filters, and Gemini attempts to block categories like sexually explicit material, violence, harassment, and some harmful roleplay. But Google also explicitly says these filters are not perfect.

The most practical form of protection is to replace “I saw it so it’s true” with “Don’t trust. Verify” and teach a vital life skill. Make it safe for your child to come to you without losing their phone or being grounded on the spot, because fear of punishment drives secrecy. 

If there are threats, coercion, or sexualized content, preserve evidence and escalate the issue through the platform, the school, or local/federal law enforcement.

Does Gemini have built-in ways to identify AI images?

Yes, if it is an image generated by Nano Banana. 

Google’s main identifier is SynthID, an invisible watermark designed to remain detectable after common changes like resizing and compression. Currently, SynthID is only used by Google, so Gemini wouldn’t be able to detect images generated elsewhere, such as by ChatGPT. Google does plan to implement AI image detection in search results, but there’s no target date for that yet.

So, how do you identify AI images? The SynthID page describes two ways to check. Both involve uploading the image to either Gemini or the SynthID Detector portal (currently in beta) and asking if there is a watermark. The AI will tell you whether the image was generated or edited by Gemini.

What if you only have a screenshot? SynthID can still work if the image is not heavily degraded or modified. Cropping the image into smaller pieces has been reported to aid detection.

If SynthID is detected, that is compelling evidence the content was made or edited with Google’s tools. If SynthID is not detected, that is still not conclusive proof the image is real. The image could be from a different AI tool, or it could have been altered after creation or reposted enough that detection is harder. Feeding the image to another AI (e.g., taking an image from Nano Banana and inputting it into Grok or Midjourney) will likely strip the watermark through regeneration.

For example, here is a SynthID check on an image generated in Gemini:

And here is the SynthID check after the image went through ChatGPT:

It’s an AI-generated photo, but SynthID only identified the image generated with Gemini. 

Remember: Visible watermarks may appear on some images, but you should never treat the absence of a visible mark as proof an image is real. Verifying can help prevent arguments, and it demonstrates that you trust your child. 

Other ways to verify AI images before treating them as proof

With Nano Banana creations, the most common visual giveaways are not usually cartoon mistakes like weird eyes and hands. They are small failures in fine details or background elements.

  • Look for overly smooth blending or fuzzing where hair meets the background and where objects touch other surfaces or textures. 
  • Check reflections in sunglasses, windows, and shiny surfaces for missing objects or impossible angles. 
  • Zoom in on tiny text, labels, jersey names, and fine print. Letters and numbers can look almost right while warping or spacing oddly.
  • Watch for textures that feel too perfect, like skin that is uniformly airbrushed, fabric that loses its shape, or backgrounds that repeat patterns. 

Finally, ask whether the scene makes sense as a real moment. Do the setting, timing, clothing, and context line up with what you know? Ask who posted it first, whether anyone has the original post or file, and whether there are independent photos or videos from the same moment. Real events usually produce more than one angle, while a fake often lives as a single screenshot with a lot of emotion wrapped around it.

Add up what makes sense, and then decide how long they are grounded for the rest of their life. 

A quick example to close with: if I saw a picture of my youngest child wearing matching socks? No way. That is 100% fake. Those are the kinds of details to look for.

Learn about the risks of AI and brain rot, and save these tips to help your child use AI responsibly. Monitor every app your child uses, including AI apps like Gemini and Grok, with BrightCanary.


In late December 2025 and early January 2026, multiple outlets and watchdogs reported that people were using Grok’s new image generation features to create and share nonconsensual sexualized images, including images involving children and young teens. Let’s lay out what you need to know as a parent.

What is Grok?

Grok is an artificial intelligence chatbot built into X (formerly Twitter) and tied to Elon Musk’s xAI. It’s also a standalone app and website.

Here’s what its image features can do:

  • Make a brand-new image from scratch from a text prompt. This can look like a real photo.
  • Edit a real photo that someone uploads, including changing clothing, the setting, or how a person looks. It can also generate sexualized versions. This is the part that can turn a normal teen photo into illegal deepfake content.
  • Generate lots of versions. Someone can keep trying variations until the system gives them something they want. The resulting images are then posted or shared on X or other social media, where they can spread quickly.

What Grok cannot do

  • Pull photos from your teen’s phone, camera roll, or private accounts by itself. Someone has to provide a photo and a prompt to generate an image.
  • Prove anything happened. The harm that AI-generated photos can cause is real, but it is not automatically “proof” that a child did something.
  • Keep an image contained once it is shared. What’s generated on X doesn’t stay on X. Screenshots and reposts can spread it beyond any given platform.

Why this matters for parents

These deepfake threats can affect any family because they use the same things every teen already has: photos and social accounts.

It lowers the barrier for harm

In the past, making realistic abusive imagery took skill, time, bug exploits, and usually private tools. When a popular app bakes it in, the harm becomes easier and faster.

It turns kids into targets 

A teen can be pulled into deepfake dangers, despite having done nothing. For example, a classmate can generate or edit an image of them and share it, or a fake image can circulate in group chats, direct messages, or public posts, and pull your teen into the fallout.

This is the same emotional mechanism as sextortion and deepfake harassment — humiliation, plus panic, plus “everyone will see it.”

What you can do right now

You do not need to become an expert on AI. You need a plan.

1. Talk to your teen

The safest assumption is that any photo posted publicly can be misused. The goal is not to scare them into isolation; it is to teach smart sharing. Get to the heart of what matters, because shame and fear are what keep teens quiet.

Here’s an example of what you can say to your teen:

“There’s a new wave of AI tools that can mess with photos and make fake sexual images, even of kids. If you ever see something like that, or if someone uses your photo, you’re not in trouble. Bring it to me. We’ll report it and handle it together.”

2. Have an action plan

If an account your teen does not know comments, tags them, or DMs them about this content, tell them not to reply. This rule applies whether the photo is real or a fake image someone made. Your teen should screenshot it, block the account, report it, and tell you.

3. If your child sees or receives abusive content involving minors, report it

In the United States, the National Center for Missing and Exploited Children runs the CyberTipline for reporting suspected child sexual exploitation. You can also report to local law enforcement, and to the FBI (either through tips.fbi.gov or by contacting your nearest FBI field office). If there is an immediate threat or your child’s safety is at risk, call 911.

4. If a minor’s explicit image is being shared, use Take It Down

NCMEC’s Take It Down service helps remove sexually explicit images or videos depicting minors from participating platforms.

The concern at hand is not about whether Grok is “edgy,” but about whether a mainstream platform’s built-in AI image features can be used to generate and spread nonconsensual sexualized imagery, including material involving minors. You do not need to panic, but you do need a plan, and you need your teen to know they can come to you immediately if something happens.

Learn more about what to do if you find something inappropriate on your child’s phone, and stay informed about your child’s online activity with BrightCanary monitoring.
