
Discord is one of the most popular social networks for teens, and when used safely, it can be an incredibly positive space. But there are also potential risks, like exposure to explicit material, contact with strangers, and cyberbullying.
This guide covers everything parents need to know about Discord servers for teens, including how they work, which are most popular, red flags to watch for, and how Discord Family Center features can help parents support their teens safely.
Discord is a messaging platform and social network where users come together around shared interests. Teens use it to chat with friends, join fandoms, coordinate gaming sessions, share art, and more.
Users can exchange:
The message board interface is reminiscent of Reddit, but it also includes real-time communication features.
Discord servers are communities built around a specific topic. They can be public (open to anyone) or private (invite-only).
Servers are organized into text and voice channels, which determine how users communicate within that server.
There is a huge variety of servers, from harmless hobby communities to spaces filled with NSFW content.
If your teen is on Discord, chances are they’re hanging out in one of these types of servers:
Discord can be a very positive experience for teens. But, like any platform allowing open chat, there’s a risk that your child may be exposed to inappropriate material, experience cyberbullying, or come in contact with predators.
Here’s how to evaluate if a Discord server is safe for your teen.
These are generally signs a Discord server is safe for teens, although you should still periodically check in:
These signs could mean a server is unsafe for your teen:
Pro tip: When you install BrightCanary on your child's iPhone or iPad, you can monitor what your kids type on Discord. Try it for free today.
Discord recently rolled out major updates to Family Center, giving guardians more visibility while still respecting teen privacy. Here’s what’s new:
Parents can now see:
This helps you spot unusual patterns — like chatting with unknown older users or sudden increases in server activity.
Parents can now control:
These settings are clearly visible to teens to ensure transparency.
If your teen reports a user, they can choose to let you know. The notification does not include message content, but it’s a helpful conversation starter.
It’s worth noting that Discord Family Center does not show parents the content of their child’s messages. To monitor their chats, you’ll need to spot-check or use a monitoring app like BrightCanary on their iPhone or iPad.
If your teen uses Discord, it’s important to take steps to help them do so safely.
Discord’s minimum age requirement is 13, but safety depends on how your teen uses it. Public servers may expose kids to explicit content, bullying, or predators. Safer use often comes from private servers with real-life friends, combined with active parental monitoring through an app like BrightCanary.
Yes. Parents and teens can block NSFW (Not Suitable for Work) servers and channels. However, since Discord doesn’t verify ages, kids can bypass age restrictions by entering a false birthdate.
First, help your child block and report the user. Encourage them to screenshot messages for documentation. Then, talk openly about what happened, reassure them it isn’t their fault, and monitor their activity more closely.
Look for servers labeled as “teen,” “family friendly,” or “safe space.” Check that they have clear community guidelines, active moderators, and rules against harassment and explicit content. Private servers with friends your child knows in real life are generally safest.
Parents can see activity, not messages, using Discord’s Family Center. To view what your child types, you’ll need a monitoring tool like BrightCanary.
BrightCanary scans everything your child types on Discord and across other apps, alerting you to harmful content.
Public servers are open to anyone and often have thousands of members, making it harder to control what your child sees. Private servers are invite-only, typically with smaller, safer communities. Most experts recommend steering teens toward private servers with known friends.
Discord is an excellent way for your teen to interact with friends, learn about their passions, and connect with others around shared interests. But the open chat feature also creates risks for teens. Parents should screen Discord servers their teen wants to join, educate their teen on how to use Discord safely, and monitor their use on the platform.
Want to monitor your child's Discord activity on their iPhone or iPad? Download BrightCanary today and start monitoring what they send across every app they use, including Discord, Roblox, Reddit, and more.

Your teen or tween is begging to use Discord — but is Discord safe for kids? If the idea of vetting one more app feels daunting, we’ve got you. Discord parental controls and safety settings can help limit explicit content, who can contact your child, and more.
This Discord parents’ guide breaks down the platform, the newest Family Center updates, and how to use safety settings to help your child stay safe online.
Discord is a messaging platform and social network where users communicate through text, voice, or video in DMs (direct messages), group chats, and large topic-based communities called servers.
Originally created as a way for gamers to chat while playing online together, it’s expanded to include many different (and often extremely niche) interests.
It has the real-time communication vibe of FaceTime with the message-board functionality of Reddit.
Pro tip: When you install BrightCanary on your child's iPhone or iPad, you can also monitor what your kids type on Discord, helping you spot risks early. Try it for free today.
“Servers” are what Discord calls communities formed around specific topics. Each server is organized into text and voice channels. Anyone can create a server and set it to either public or private.
Like any platform allowing open chat, there’s some risk that your child may be exposed to inappropriate material.
However, Discord parental controls and safety features make the platform a reasonably safe experience for teens. (The minimum age for Discord users is 13.)
Here’s a rundown on the available safety features, including new Family Center updates released in late 2025:
Discord requires that users be at least 13 to use its platform. There are additional built-in restrictions for users under 18.
Because age isn’t verified, be aware that it’s possible for your kid to skirt around this restriction. It’s a good idea to sit down with your child when they first set up their account and explain why those age guardrails matter.
The person who sets up and runs a server can set certain limitations, such as automatic filtering of explicit images and videos. Server owners are free to establish ground rules for users, such as prohibiting swearing or hate speech. Owners can choose to moderate the server themselves or have this done by a bot.
As a parent, it’s a good idea to review the guidelines of the servers your child joins and understand the content users post.
If your child wants to use Discord primarily to chat with friends, consider asking them to set up a private server that can be joined by invitation only.

Discord’s Family Center allows parents to see more, guide more, and collaborate with their teen, while still protecting message privacy. Here’s a rundown of Family Center:
When parents set up Family Center, they can see:
Parents can’t see the content of their child’s messages in Discord Family Center, but you can monitor what they type with BrightCanary on iOS.
If your teen reports a user or piece of content, they can choose to notify you. The alert does not include report details, but it’s a helpful prompt to start a conversation.
Parents can now manage select account settings for teens, including:
Discord encourages parents and teens to set these together to align safety with your teen’s autonomy.
Discord’s platform can feel overwhelming to the uninitiated, so they’ve created a guide that walks through setting up your Family Center. Here’s the TL;DR:
To set up Family Center, you’ll need to create a Discord account on your mobile device. From there, go to User Settings → Family Center and tap Enable Family Center.
Your teen will need to give you the generated QR code that is located in their Family Center tab under the Connect with Guardian option.
Once linked, tap on My Family to see:
Under the Privacy and Safety section of your child’s account settings, there are a handful of features to maximize the safety of their Discord experience. We recommend setting the following:
With this feature, users can elect to have all direct messages (DMs), or only messages from non-friends, scanned and filtered for explicit material.
Users can also choose to have their DMs scanned and filtered for spam.
This setting can be toggled on to allow messages from other users in a server.
If direct messages are enabled, a user can allow messages from any users on that server — or only messages from their friends. We recommend the latter.
One caveat: Messages from non-friends aren’t blocked entirely; they’re sent to a separate “message requests” folder, so your kid can still access them if they choose.
Servers can be set as age-restricted by the owner, preventing users under 18 from accessing them.
Discord allows servers to be set as Not Suitable for Work (NSFW). Users under 18 are automatically prevented from joining NSFW servers.
In the Privacy and Safety settings, users can prevent the app from collecting and using their data for customization or analytics.
Discord is an excellent way for your teen to interact with friends, learn about topics of interest, and connect with others around shared interests. Because it’s an open chat platform, parents and kids need to be aware of the risks. By using safety features, talking to your kid about how to use the platform responsibly, and monitoring their use, it’s possible for Discord to be a safe, positive environment for your teen.
Want to monitor your child's Discord activity on their iPhone or iPad? Download BrightCanary today and start monitoring what they send across every app they use, including Discord, Roblox, Reddit, and more.

What teen wouldn’t jump at the chance to message Timothée Chalamet or talk music with Chappell Roan? While real idols may be out of reach, the chatbot platform Character.ai has offered teens the chance to chat with AI-generated versions of celebrities, fictional characters, and even user-created personalities.
But in late 2025, Character.ai announced sweeping changes to how teens can use the platform, including removing open-ended AI chat for anyone under 18. These changes come after safety concerns, lawsuits, and regulatory pressure surrounding teens’ experiences on AI chat apps.
Here’s what parents need to know about Character.ai’s proposed teen experience, the risks if your child uses AI platforms, and how to keep kids safe.
Character.ai is a chatbot platform powered by large language models (LLMs) where users can interact with AI-generated characters or create their own.
Users can choose from existing bots based on celebrities, historical figures, and fictional characters, or create their own characters to chat with or share with others.
Character.ai became popular among teens because it offers:
However, the very factors that make Character.ai appealing can also endanger kids. In 2024, Sewell Setzer, a 14-year-old boy, took his own life after having intimate conversations with a Character.ai chatbot named after a fictional character.
Sewell’s mother, Megan Garcia, has filed a lawsuit against Character.ai, accusing the platform of negligence, intentional infliction of emotional distress, and deceptive trade practices, among other claims. The lawsuit alleges that the chatbot’s conversations with Sewell not only perpetuated his suicidal thoughts, but they also turned overtly sexual — even though Sewell registered as a minor.
In October 2025, Character.ai announced three major changes to better protect teens:
By November 25, 2025, teens were no longer able to message AI characters in open-ended conversations.
A new under-18 experience, currently in development, will focus on creating videos, stories, and streams with AI characters, but not chatting freely.
The company is creating an independent, nonprofit AI Safety Lab focused on researching safer AI experiences for teens. Their goal is to advance safety research specifically for AI used for entertainment and social interaction.
Character.ai is rolling out expanded age verification combining an in-house age-assurance model and third-party age verification tools.
This is meant to reduce the number of minors who falsely register as adults to access the open-ended chat features, but it’s entirely possible that kids will find ways to bypass these restrictions. Monitoring your child’s online activity is essential because Character.ai and other AI chatbot platforms can carry serious risks.
While AI chatbots can be fun and potentially educational, the platform comes with serious risks for kids who figure out how to bypass its age restrictions.
While users can “mute” individual words that they don’t want to encounter in their chats, they can’t set filters that cover broader topics. The community guidelines strictly prohibit pornographic content, and a team of AI and human moderators works to enforce them.
Things slip through, however, and users are crafty at finding workarounds. Prior to the ban on users under 18, reports and lawsuits claimed that underage users were exposed to hypersexualized interactions on Character.ai.
The technology powering Character.ai relies on large amounts of data in order to operate, including information users provide, which raises major privacy concerns.
If your child shares intimate thoughts or private details with a character, that information becomes data the company can store and use. Character.ai’s privacy policy suggests the focus is more on what data the company plans to collect than on protecting users’ privacy.
It’s a known phenomenon that chatbots tend to align with users’ views — a potentially dangerous feedback loop known as sycophancy. This may lead to a CAI chatbot confirming harmful ideas and even upping the ante in alarming ways.
One lawsuit against the company alleges that after a teen complained to a Character.ai bot about his parents’ attempt to limit his time on the platform, the bot suggested he kill his parents.
One of the more concerning aspects of the Character.ai platform is the growing number of young people who turn to it for emotional and mental health support. There are even characters on the platform with titles like “Therapist” that list bogus credentials.
Given the chatbots’ lack of actual mental health training and the fact that they're programmed to reinforce, rather than challenge, a user’s thinking, mental health professionals are sounding the alarm that the platforms could encourage vulnerable people to harm themselves or others.
LLMs are programmed to mimic human emotions, which introduces the potential that teens could become emotionally dependent on a character. It’s becoming increasingly common to hear stories of users avoiding or neglecting human relationships in favor of their chatbot companion.
If your child’s interested in using AI chatbot platforms, here are some tips to help them stay safe:
Character.ai is not safe for kids, and users under 18 are banned from using the platform’s open chat feature. Users can encounter inappropriate interactions, privacy risks, and AI bots mimicking mental health support.
Users under the age of 18 are not allowed to use Character.ai. The platform is better suited for adults due to the risks of inappropriate content and emotional over-reliance.
No. While some bots appear to offer emotional support or label themselves as “therapists,” they are not trained mental health professionals. Relying on them for mental health advice can be dangerous and is strongly discouraged by experts.
The main risks include exposure to inappropriate content, sharing personal data with the platform, emotionally harmful chatbot feedback loops, and developing unhealthy dependence on AI companions.
Character.ai and similar AI chatbot platforms are not safe for users under age 18. Parents should educate their children on the risk of AI companion apps, set clear boundaries around their use, and closely monitor their online interactions.
The problem is that Character.ai is just one example of the many AI chatbot apps online, and not all of them have the same level of child restrictions. BrightCanary can help you supervise your child’s online activity, including AI apps, and update you when something concerning appears. Download the app and start your free trial today.

If you’ve tried using Bark on an Apple device and you’re fed up with how poorly it functions on iOS, you’re not alone. Many parents report unreliable alerts and limited coverage, leaving huge gaps in protection.
The good news is that there are better alternatives to Bark that give you the robust protection you’re looking for without the headache.
Check out our roundup of five alternatives to Bark, including key features, limitations, and pricing, to find the option that’s right for you.
Best for: Real-time monitoring across all apps on iOS
Why it’s a top Bark alternative: BrightCanary is the most robust parental monitoring app available for iOS devices. Because it’s powered by the BrightCanary Keyboard, your child is protected on every app, website, and messaging platform they use, including Snapchat, TikTok, Instagram, Discord, Roblox, and AI chat apps.
You get:
Considerations:
Best for: Families who have both Android and iOS devices.
Why it’s a good alternative to Bark: Qustodio offers very similar features and functionality for both iOS and Android devices. You can use it to set limits on apps and block your child from accessing specific apps and websites, and Qustodio sends an alert when your child encounters something alarming.
For Android devices, it also features a panic button so your child can send an emergency alert with their location to a list of trusted contacts.
Considerations:
Best for: Parents who want customizable restrictions and filters.
Why it’s a good alternative to Bark: With Microsoft Family Safety, you can set screen time limits at the device level as well as for specific apps, and view activity summaries to see how your child spends their time online. It also allows for custom filters to limit the websites and apps that your child can access.
Considerations:
Best for: Parents who are primarily concerned with their child’s internet searches.
Why it’s a good alternative to Bark: Net Nanny allows you to see what apps your child uses and provides real-time alerts when they search for terms related to porn, suicide, weapons, and drugs. You can also set internet filters, block pornography, track your child’s location, and restrict access to specific websites.
Considerations:
Best for: Allowing kids to take an active role in regulating their own screen time.
Why it’s a good alternative to Bark: If you’re looking for an easy interface that works across multiple platforms, Mobicip is worth exploring. It can be used to set screen time schedules, monitor social media conversations, and remotely lock a device’s screen. It also offers AI-powered content filtering and an activity summary.
Of particular note is the collaboration feature, which invites kids to take an active role in regulating their own screen time.
Considerations:
If Bark isn’t giving you the coverage, reliability, or peace of mind you need, especially on iOS, you’re far from alone. The good news is that today’s parental monitoring tools offer stronger protection, smarter AI insights, and better compatibility across the apps your kids use every day.
BrightCanary is the best Bark alternative for parents who want real-time monitoring across every app on iPhone and iPad, along with actionable insights and instant alerts. But depending on your family’s devices and priorities, Qustodio, Microsoft Family Safety, Net Nanny, and Mobicip each offer distinct advantages worth considering.
Want comprehensive protection on iOS with real-time visibility across all apps? Try BrightCanary today and start your free trial.

Statistically, kids today are far safer out in the world than their parents were growing up. At the same time, the internet has introduced new risks that parents didn't have to contend with when they were young. Many parents overestimate the dangers their kids face outside of the home and underestimate the risks of online spaces.
The result is kids who are occupied by screens, without enough protections, instead of being encouraged to roam freely outside where they’re safer.
In this article, we’ll go over the facts about online safety versus real-world dangers, the negative impact on kids when parents get the equation wrong, and what to do about it.
The spaces that kids occupy outside the home are safer today than in decades past, while online dangers continue to increase. Let’s look at the facts:
Despite the risks, most parents underestimate the threats their children face online. Here are three ways getting it wrong could put your child in danger:
There are also major downsides to overprotecting children offline. Here are three to consider:
Here are some tips on striking a better balance between online protection and offline freedom:
The world outside your house can feel like a scary place to send your children, but it’s actually online spaces that pose a far bigger threat. Giving your children more freedom offline helps them build important skills, while keeping a closer eye on them online helps them stay safe.
Help keep your child safe online with BrightCanary. Our advanced technology monitors everything your child types online and alerts you when there’s a problem. Download the app and get started today.

This morning, my husband suggested we delete the Instagram accounts he created for our kids when they were born. We started them to make sharing photos with family and friends easier, and the accounts have always been private. But, like many parents in 2025, we’re reexamining social media decisions that previously made sense.
The era of artificial intelligence (AI) — particularly the corresponding explosion of deepfakes — has changed the calculation when it comes to “sharenting.”
Sharenting is a mash-up of “sharing” and “parenting” and is used to describe when parents post news, images, or videos of their children on social media, especially when it’s done excessively.
Deepfakes are images or recordings that have been digitally manipulated using AI to make it appear as if someone did or said something they didn’t. Especially concerning for kids are nudify apps, which can transform a clothed photo into a nude at the press of a button. Some apps even create pornographic animations.
Here are four potential dangers of sharing about your child online in the era of AI:
Considering the risks, you might decide to never post anything online about your child. If you’re like me and aren’t willing to go quite that far, here are five things to consider when deciding whether to post or not.
Ask yourself what benefit you or your child will gain from the post and if the potential risk is worth it to you. Everyone’s calculations are different, and we all have our own risk tolerance, so this is a decision only you can make.
Group iMessage threads or WhatsApp chats are great for sharing updates about your child with family and friends without posting on social media.
If you choose to post about your child online, here are some steps you can take to reduce the risk:
Especially as they get older, it’s valuable to ask your child before you post about them. This empowers them to have control over their digital footprint, and it helps them learn the valuable skill of pausing before posting.
Talk to your children about what you post and let them in on your thought process. Not only will it give you pause to consider your choice, but it will also teach them critical thinking skills they can use when they have their own accounts.
The era of AI has made sharing about our kids online much riskier. Deepfakes, nudifying apps, and increased avenues for predators are just a few of the dangers that artificial intelligence has introduced. Parents need to consider the potential threats before posting about their child online and take steps to minimize the risk, such as using private accounts and Close Friends lists.
While it’s important to be mindful of what you post, it’s even more critical to keep an eye on your child’s activity online. BrightCanary can help you do just that. AI has introduced new risks to kids online, but at BrightCanary, we’ve harnessed the technology for good, using it to monitor your child’s activity online and alert you to any red flags. Download the app today to get started.

For kids who need to be able to call and text family but aren’t ready for a smartphone yet, Apple Watches are a great option. They’re also very safe — if they’re set up properly.
This guide will walk you through how to keep your child safe on their Apple Watch, including which settings they need and how to monitor their text messages on the device.
The first thing you should do is set up Family Sharing. This free feature allows you to access important parental controls.
These are the settings we recommend for your child’s Apple Watch to help them stay safe while using the device:
Monitoring your child’s digital communication is an important component of keeping them safe. The best way to monitor your child’s text messages on an Apple Watch is through BrightCanary. Here’s how to get started:
Our Protection Plan is powered by the BrightCanary Keyboard, which can’t be installed on Apple Watches. In order to monitor texts on your child’s Apple Watch, you’ll need Text Message Plus. This allows you to monitor the texts they send and receive on their watch.
If your child’s Apple Watch is paired with an iPhone, then you can use BrightCanary to monitor all incoming and outgoing texts — whether from an iPhone or an Android. However, if your child only uses an Apple Watch that isn’t paired to an iPhone, BrightCanary can’t monitor texts sent between Android devices and their watch.
Apple Watches are a great alternative to giving your child a smartphone. By using Family Sharing to set up parental controls and BrightCanary’s Text Message Plus plan to monitor their incoming and outgoing messages, you can help ensure your child is safe while using their Apple Watch.
Ready to get started monitoring your child’s text messages on their Apple Watch? BrightCanary offers the most comprehensive child safety app for Apple devices. Start your free trial today.

Text message scams, sometimes referred to as smishing, have increased by 50% in the past year. Unfortunately, scammers often target tweens and teens. It’s important to talk to your child about scam texts so they can recognize red flags and know how to respond. This guide breaks down how to teach your child to spot scam texts and what they should do when they receive one.
Scam texts are fake messages designed to trick the recipient into doing something that benefits the sender, such as by convincing the target to send them money.
Some scammers try to get personal information, which they will then use to steal their victim’s identity or break into their bank accounts. They may also sell the personal information to other scammers.
In most cases, the best way to respond to a scam text is to not respond at all. Kids often think they have to respond to a text. Teach them that when in doubt, don’t reply.
Here are some other ways you can talk about responding, especially if they aren’t sure if a message is a scam:
Prevention is the best protection.
Text scams are on the rise, and even kids can be targeted. To protect your child, teach them how to respond to a scam text, such as not responding, never sending personal information, and reporting the incident.
BrightCanary offers the most comprehensive monitoring of your child’s text messages. Our advanced AI scans their texts and alerts you to anything suspicious so you can spot scams right away and intervene. Download BrightCanary today to get started.

Welcome to Parent Pixels, a parenting newsletter filled with practical advice, news, and resources to support you and your kids in the digital age. This week:
👾 You can now monitor your child’s Discord purchases: The popular messaging platform and social network Discord recently rolled out updates to its parental control hub, Family Center. Parents can now get insights into their child’s purchases, who they’re chatting with, and total time spent on the platform. This information is limited to the prior seven days.
To set it up, you’ll need to download the Discord app on your device and link your account to your child’s under User Settings. Parents can also control who can message their teen, whether sensitive content should be filtered, and how Discord uses their child’s data, among other features.
And if you want to take Discord monitoring a step further, BrightCanary is the best way on iOS to keep track of what your child types in their chats — so you can stay on top of their interests and any red flags. On our blog, we covered what you should know about Discord servers and how kids use them.
🇩🇰 Denmark aims to ban access to social media for kids: Denmark is the land of smørrebrød, the birthplace of LEGO, and, maybe soon, kids who can’t use TikTok until 15. Many social media platforms require users to be 13 to create an account, but it’s relatively easy for kids to fib their age. Denmark’s proposed ban would force tech giants to enforce age verification, such as through a national electronic ID system, or forfeit up to 6% of their global income. The move follows Australia’s landmark ban on social media for children, which set the minimum age at 16 and likely led to a surge in Australian teens asking ChatGPT about how to set up a VPN.
“We’ve given the tech giants so many chances to stand up and to do something about what is happening on their platforms. They haven’t done it,” said Caroline Stage, Denmark’s minister for digital affairs. “So now we will take over the steering wheel and make sure that our children’s futures are safe.”
While bans aren’t foolproof, they send a clear message that children shouldn’t have unrestricted access to digital spaces — not just social media. If you aren’t sure how to talk to your child about social media’s risks, here are the facts about how it impacts mental health (and what parents and teens can do about it).
📵 Want to fix your child’s screen time? Check your own habits: One of the strongest predictors of a child’s screen time isn’t proximity to an iPad or knowing Mom’s passcode — it’s the parent’s screen time. If you spend all your free time scrolling, odds are high that your child will practice the same behaviors. Fortunately, there’s a fix for that, without having to go back to a flip phone. Here’s how parents can find a balance with their screen time, according to NPR:
Ask yourself if the response can wait. If you’re with your family, do you really have to check your email? If it’s urgent, explain what you’re doing and why you’re doing it — otherwise, you’re signaling to your child that they’re less important than what’s on your screen.
Prioritize quality over quantity with your personal screen time. Maybe FaceTiming family is a source of joy, but binging your go-to true crime podcast before bed is impacting your sleep quality.
Keep devices outside your bedroom, and practice this as a family. Set up a communal charging station or phone lock box in the kitchen or living room, and encourage better-quality bedtime habits (and no true crime binges).
Create device-free zones and times in the house, like no phones at the kitchen table or setting devices to do-not-disturb on the drive to and from school.
Above all else, be kind to yourself — you’re not perfect, and even if you mess up, it’s easy to get back on track.
Parent Pixels is a biweekly newsletter filled with practical advice, news, and resources to support you and your kids in the digital age. Want this newsletter delivered to your inbox a day early? Subscribe here.
Could you imagine growing up without social media? Denmark might soon make that a reality for kids under 15. Whether or not you agree with bans, the idea raises a good question for families: What would life look like if social media waited? Here’s how to spark that discussion:
💬 More teens are speaking up after experiencing challenges online. Seven in 10 teens (71%) aged 13–17 said they sought help or spoke with someone after being exposed to an online risk, like unwanted contact or online bullying. That number is up from 68% in 2024 and a low of 59% in 2023. The findings come from Snap’s five-year study into digital well-being among Generation Z across the globe. Notably, when faced with sexual risks, violent extremist content, and self-harm, fewer teens approached their parents, leaving adults to discover these struggles on their own or from someone else.
💅 Beauty filters may look harmless, but they may be contributing to a negative sense of self. Popular beauty filters on social media tend to highlight Eurocentric beauty standards, like small noses and blue eyes. A study found that these beauty filters, along with other race-related online experiences, can negatively affect Black adolescents’ sleep and ability to concentrate on schoolwork the following day.
🍂 The holidays are around the corner, and for many families, that means more screen time because, well, school’s out and there are only so many times you can sing “Deck the Halls” in a car together on the way to find the perfect Christmas tree. (Hit reply if you know what movie we’re talking about.) We’re saving this list of 12 screen-free ways to stay connected over the holidays.

When it’s time to give your kid their own phone, do you give them the hand-me-down iPhone, or opt for a smartphone alternative?
A growing number of families are so concerned about the negative impacts of smartphones that they’re pledging to wait until the end of 8th grade to give their child their first smartphone. Some of the negative impacts of excessive smartphone use include poor sleep quality, declining academic performance, and worse mental health.
But delaying devices doesn’t mean leaving your kids in the dark. There are a number of ways your child can still stay connected. Here are six alternatives to smartphones for kids.
Dumb phones are stripped-down devices that include basic features like calling and texting but forgo the more advanced elements that you’ll find on a smartphone.
They typically include some of the conveniences of a smartphone, like a touchscreen, navigation, and music-playing capabilities, but they leave out the more problematic features like access to social media.
For an even more stripped-down experience, some families are turning to flip phones reminiscent of the early 2000s.
With a flip phone, your child won’t get the modern conveniences a dumb phone might offer, but they also won’t have internet access, which some dumb phones include.
Flip phones are also having a bit of a moment with teens, so it might not be so hard to sell your child on the idea.
Smartwatches allow kids to text and call preapproved contacts, access navigation tools, and make purchases without access to the internet or social media.
Most also include device tracking, so you can use it to keep tabs on your child’s whereabouts.
Gabb and Garmin both make products specifically designed for kids, or you can use parental controls to customize an Apple Watch for your child.
When my 10-year-old is going to be home alone, we always make sure the family iPad is fully charged. We can then use it to communicate with him over iMessage or FaceTime. He also has contact information for family and friends saved so he can connect with them on the device.
The system is a bit clunky, but it’s a reasonable stopgap, especially if your family already has a tablet.
If you want your child to have more functionality than a tablet or smartwatch when they’re home, but you don’t want to give them their own phone, consider a family cell phone.
The idea here is that it functions much like a landline of yore. That means you’ll want to strip it of all the things you don’t want your child to access and make it clear this is not a personal device.
It’s a good idea to have it stay in a designated spot so it doesn’t sneak into rooms or wander off and get lost.
Much like flip phones, landlines are making a major comeback. The humble landline is an excellent way to help your child stay connected with friends without giving them a smartphone.
As a bonus, they get to practice the lost art of phone skills, which may give them a leg up as an adult.
There’s no reason alternatives to smartphones have to be digital. Here are some great, tech-free options:
There are many benefits to delaying the age at which a child gets their first smartphone, but holding off can leave kids and families with unmet communication needs. Dumb phones, smartwatches, and landlines are all great alternatives to smartphones, as are old-fashioned methods like knocking on a friend’s door and arranging in-person hangouts.
If your child uses an Apple Watch or iPad as an alternative to a smartphone, BrightCanary can help you keep them safe with advanced monitoring tools. Our AI scans their activity and sends you an alert if they encounter any red flags. Download the app today to get started.