Discord is moving to a “teen-by-default” model worldwide. Starting with a phased global rollout in early March 2026, Discord will apply stricter default safety settings to new and existing accounts and will require age assurance to access 18+ channels, unblur flagged content, or turn off certain safety protections.
This update is part of Discord’s global effort to comply with online safety laws and age-appropriate design standards. For parents, the practical reality is straightforward: teens do not need to “unlock” adult settings, so Discord’s default safety settings are worth keeping.
When does Discord trigger age verification?
Discord says age assurance prompts can appear when an unverified user tries to:
Unblur sensitive media or change sensitive media filter settings
Access an 18+ channel or server
Change message request screening and similar safety settings
Use certain server features like speaking in some “stage” or voice contexts
If they do not verify as an adult, the usual outcome is simple: the protections stay on and adult-only features stay locked.
How does Discord verify age?
Discord describes three age assurance methods:
Age inference model: A machine-learning model assigns an age group only when Discord’s confidence is high. Discord states it does not use message content in this model.
Face scan / facial age estimation: A short “video selfie” is used for facial age estimation. Discord claims it is processed on-device and never leaves the device.
ID scan: A user submits a government ID and typically a selfie for matching. Discord says it works through vendor partners and that documents are deleted quickly, often immediately after age confirmation.
What happens if a teen fails or refuses verification?
Discord says the experience is designed so most users can keep using Discord without verifying; the practical consequence of refusing is that the “teen-by-default” protections stay in place and adult-only features remain inaccessible.
If age assurance fails, Discord indicates the user can retry (often with ID scan as a fallback), and if a user is verified as below the minimum required age in their country, the account will be banned with an appeal path.
What data does Discord collect for age verification?
At account creation, Discord requires a birthday and states that in some cases it may require additional information to verify age.
Age group result stored in the account: Discord says verification status/age group is private to the user, and the user can check it in account settings.
For face scan: Discord says the video selfie is processed on device and does not leave the device; Discord claims it only receives an age group.
For ID scan: Vendor partners receive ID and selfie images to confirm age; Discord says documents are deleted quickly and Discord only receives an age result. For IDs submitted in an age verification appeal, Discord says it deletes the document within 60 days after the ticket is closed.
For age inference: Discord says it uses account signals and behavior patterns and does not use message content.
What remains ambiguous for parents:
Where verification data is processed: Discord’s privacy policy discusses international transfers and that it processes and stores information in the US and other countries, depending on vendors/users, but it does not provide a simple “age assurance data residency” chart.
What specific signals power age inference beyond “behavior patterns and other signals” and how bias is measured: Discord describes the approach but does not publish external audit results or error rates.
How “quick deletion” is verified: Discord asserts these practices, but without public third-party audits, parents must largely rely on trust rather than independent verification.
Practical actions for parents before March 2026
What to teach your kids before March:
“You don’t need adult mode.” Very few teens need to access 18+ servers or unblur sensitive content. Adult verification is an account-wide setting: there is no per-channel or per-server whitelist, and no way to bypass the age check for a single space.
Never respond to “verify your age” requests that arrive by email, text, direct messages, or chat. Discord says it prompts for age assurance only inside the app.
Treat ID images like financial credentials. If an ID image is ever stolen, it cannot be changed the way a password can. In October 2025, a Discord security incident exposed ID documents submitted through service tickets — a reminder that government ID images carry long-term risk if compromised. The risk lasts as long as the data is retained, so weigh a service’s stated retention policy before submitting any ID.
Actions you can take:
Make a hard rule: no ID uploads and no face scans without a parent present. For most teens, the correct answer is “do not unlock adult mode.” Discord currently offers no technical control that lets a parent block a child from attempting verification.
Ensure account security basics are activated: a strong, unique password plus multi-factor authentication (MFA) where possible. Have MFA codes or passkeys go to a parent’s device or email rather than the child’s account. Use this as an opportunity to make sure the information on your child’s account and any linked accounts is up to date.
Help your child set message and friend-request boundaries to screen for strangers.
Explain what blurred or blocked content means and why they cannot see it.
And, finally, monitor their activity across Discord and all the other apps they use. BrightCanary is the most robust way to supervise your child’s activity on iOS, from keyboard monitoring to text message monitoring and emotional insights. Get started today with a free trial on the App Store.